Bayesian B-spline mapping for dynamic quantitative traits.
Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong
2012-04-01
Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expression in the RR framework, B-splines have proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms interval mapping based on maximum likelihood; (2) for a simulated dataset with a complicated growth curve generated by B-splines, Legendre polynomial-based Bayesian mapping cannot identify the designed QTLs accurately, even when higher-order Legendre polynomials are considered; and (3) for a simulated dataset generated using Legendre polynomials, Bayesian B-spline mapping finds the same QTLs as those identified by the Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-splines in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.
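A minimal sketch of the basis construction underlying such models, assuming a clamped cubic knot vector and a time grid invented for illustration: the Cox-de Boor recursion evaluates each B-spline basis function, and stacking them gives the design matrix on which time-varying QTL effects are regressed.

```python
import numpy as np

def bspline_basis(x, knots, degree, i):
    """Evaluate the i-th B-spline basis function of the given degree at
    points x via the Cox-de Boor recursion."""
    if degree == 0:
        return ((knots[i] <= x) & (x < knots[i + 1])).astype(float)
    left = right = 0.0
    den = knots[i + degree] - knots[i]
    if den > 0:
        left = (x - knots[i]) / den * bspline_basis(x, knots, degree - 1, i)
    den = knots[i + degree + 1] - knots[i + 1]
    if den > 0:
        right = (knots[i + degree + 1] - x) / den * bspline_basis(x, knots, degree - 1, i + 1)
    return left + right

# Illustrative clamped cubic basis on [0, 1]; knot placement is an assumption.
degree = 3
knots = np.concatenate([np.zeros(degree), np.linspace(0, 1, 5), np.ones(degree)])
t = np.linspace(0, 0.999, 50)                     # measurement times
B = np.column_stack([bspline_basis(t, knots, degree, i)
                     for i in range(len(knots) - degree - 1)])
# B is the random-regression design matrix; a time-varying QTL effect
# is then modelled as B @ beta, with shrinkage priors placed on beta.
```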
An, Lihua; Fung, Karen Y; Krewski, Daniel
2010-09-01
Spontaneous adverse event reporting systems are widely used to identify adverse reactions to drugs following their introduction into the marketplace. In this article, a James-Stein type shrinkage estimation strategy was developed in a Bayesian logistic regression model to analyze pharmacovigilance data. This method is effective in detecting signals as it combines information and borrows strength across medically related adverse events. Computer simulation demonstrated that the shrinkage estimator is uniformly better than the maximum likelihood estimator in terms of mean squared error. This method was used to investigate the possible association of a series of diabetic drugs and the risk of cardiovascular events using data from the Canada Vigilance Online Database.
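A minimal numerical sketch of the James-Stein idea in this setting, with invented log-odds estimates for a group of medically related adverse events and an assumed common sampling variance:

```python
import numpy as np

# Hypothetical log-odds-ratio estimates for k related adverse events,
# each with (assumed known) sampling variance sigma2.
theta_hat = np.array([0.8, 1.1, 0.4, 0.9, 0.2, 0.7])
sigma2 = 0.15
k = len(theta_hat)

# James-Stein shrinkage toward the grand mean across related events.
grand_mean = theta_hat.mean()
s = np.sum((theta_hat - grand_mean) ** 2)
shrink = max(0.0, 1.0 - (k - 3) * sigma2 / s)   # positive-part estimator
theta_js = grand_mean + shrink * (theta_hat - grand_mean)
print(theta_js)   # noisy event-specific estimates pulled toward the group mean
```

Pulling each estimate toward the group mean is the "borrowing strength across medically related adverse events" that the abstract describes.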
Bayesian image reconstruction for improving detection performance of muon tomography.
Wang, Guobao; Schultz, Larry J; Qi, Jinyi
2009-05-01
Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
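The paper derives prior-specific shrinkage functions; as a hedged stand-in, the sketch below shows the general pattern with the simplest case, a plain Laplacian prior, where the MAP update of each voxel is a soft-thresholded version of the unregularized ML update (the threshold value is illustrative):

```python
import numpy as np

def soft_shrink(u, threshold):
    """MAP update under a Laplacian prior: shrink the unregularized
    ML update u toward zero by `threshold`, clipping at zero."""
    return np.sign(u) * np.maximum(np.abs(u) - threshold, 0.0)

# One iteration of the shrinkage pattern described in the abstract:
ml_update = np.array([0.02, -0.6, 1.4, 0.05])   # hypothetical voxel updates
map_update = soft_shrink(ml_update, threshold=0.1)
# Small, noise-dominated updates are zeroed; strong ones survive, shrunk.
```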
Inferring metabolic networks using the Bayesian adaptive graphical lasso with informative priors.
Peterson, Christine; Vannucci, Marina; Karakas, Cemal; Choi, William; Ma, Lihua; Maletić-Savatić, Mirjana
2013-10-01
Metabolic processes are essential for cellular function and survival. We are interested in inferring a metabolic network in activated microglia, a major neuroimmune cell in the brain responsible for the neuroinflammation associated with neurological diseases, based on a set of quantified metabolites. To achieve this, we apply the Bayesian adaptive graphical lasso with informative priors that incorporate known relationships between covariates. To encourage sparsity, the Bayesian graphical lasso places double exponential priors on the off-diagonal entries of the precision matrix. The Bayesian adaptive graphical lasso allows each double exponential prior to have a unique shrinkage parameter. These shrinkage parameters share a common gamma hyperprior. We extend this model to create an informative prior structure by formulating tailored hyperpriors on the shrinkage parameters. By choosing parameter values for each hyperprior that shift probability mass toward zero for nodes that are close together in a reference network, we encourage edges between covariates with known relationships. This approach can improve the reliability of network inference when the sample size is small relative to the number of parameters to be estimated. When applied to the data on activated microglia, the inferred network includes both known relationships and associations of potential interest for further investigation.
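The modelling device here is an edge-specific double-exponential prior whose shrinkage parameter carries a gamma hyperprior. The sketch below, with invented reference-network distances and hyperparameter values, shows one way to encode "close in the reference network implies less shrinkage" by shifting each gamma hyperprior's mass:

```python
import numpy as np

def edge_hyperprior(dist, shape=1.0, rate_near=10.0, rate_far=1.0):
    """Gamma hyperprior (shape, rate) for the shrinkage parameter lambda_ij
    of edge (i, j). Nodes close in the reference network get a hyperprior
    concentrated near zero (a higher rate means mass nearer zero, hence
    little shrinkage); distant nodes favour large lambda (heavy shrinkage).
    All values here are illustrative."""
    return shape, (rate_near if dist <= 1 else rate_far)

def log_double_exponential(omega, lam):
    """Log density of the double-exponential prior on an off-diagonal
    precision entry omega, with scale parameter lam."""
    return np.log(lam / 2.0) - lam * np.abs(omega)

# Prior on two edges: one supported by the reference network, one not.
for dist in (1, 4):
    shape, rate = edge_hyperprior(dist)
    lam = shape / rate                      # prior mean of lambda
    print(dist, log_double_exponential(0.3, lam))
```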
Burgette, Lane F; Reiter, Jerome P
2013-06-01
Multinomial outcomes with many levels can be challenging to model. Information typically accrues slowly with increasing sample size, yet the parameter space expands rapidly with additional covariates. Shrinking all regression parameters towards zero, as often done in models of continuous or binary response variables, is unsatisfactory, since setting parameters equal to zero in multinomial models does not necessarily imply "no effect." We propose an approach to modeling multinomial outcomes with many levels based on a Bayesian multinomial probit (MNP) model and a multiple shrinkage prior distribution for the regression parameters. The prior distribution encourages the MNP regression parameters to shrink toward a number of learned locations, thereby substantially reducing the dimension of the parameter space. Using simulated data, we compare the predictive performance of this model against two other recently-proposed methods for big multinomial models. The results suggest that the fully Bayesian, multiple shrinkage approach can outperform these other methods. We apply the multiple shrinkage MNP to simulating replacement values for areal identifiers, e.g., census tract indicators, in order to protect data confidentiality in public use datasets.
Genome-wide regression and prediction with the BGLR statistical package.
Pérez, Paulino; de los Campos, Gustavo
2014-10-01
Many modern genomic data analyses require implementing regressions where the number of parameters (p, e.g., the number of marker effects) exceeds sample size (n). Implementing these large-p-with-small-n regressions poses several statistical and computational challenges, some of which can be confronted using Bayesian methods. This approach allows integrating various parametric and nonparametric shrinkage and variable selection procedures in a unified and consistent manner. The BGLR R-package implements a large collection of Bayesian regression models, including parametric variable selection and shrinkage methods and semiparametric procedures (Bayesian reproducing kernel Hilbert spaces regressions, RKHS). The software was originally developed for genomic applications; however, the methods implemented are useful for many nongenomic applications as well. The response can be continuous (censored or not) or categorical (either binary or ordinal). The algorithm is based on a Gibbs sampler with scalar updates and the implementation takes advantage of efficient compiled C and Fortran routines. In this article we describe the methods implemented in BGLR, present examples of the use of the package, and discuss practical issues emerging in real-data analysis.
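The "Gibbs sampler with scalar updates" can be illustrated with a stripped-down Python analogue for one of the simplest models BGLR covers, a Gaussian (ridge-type) prior on all marker effects. This is a sketch of the algorithmic pattern, not BGLR's actual implementation; the data and inverse-gamma hyperparameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical large-p-small-n marker data.
n, p = 50, 200
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [0.9, -0.7, 0.5]
y = X @ beta_true + rng.standard_normal(n)

# Gibbs sampler for y = X beta + e, beta_j ~ N(0, vb), e ~ N(0, ve),
# with inverse-gamma priors (shape = rate = 1, assumed) on vb and ve.
beta = np.zeros(p)
vb, ve = 0.5, 1.0
resid = y - X @ beta
xtx = np.sum(X ** 2, axis=0)

for it in range(2000):
    for j in range(p):                          # scalar update per effect
        resid += X[:, j] * beta[j]              # remove j-th effect
        c = xtx[j] + ve / vb
        beta[j] = rng.normal(X[:, j] @ resid / c, np.sqrt(ve / c))
        resid -= X[:, j] * beta[j]              # put updated effect back
    # Conjugate inverse-gamma updates for the variance components.
    vb = 1.0 / rng.gamma(1.0 + p / 2, 1.0 / (1.0 + beta @ beta / 2))
    ve = 1.0 / rng.gamma(1.0 + n / 2, 1.0 / (1.0 + resid @ resid / 2))
# Posterior summaries would average stored draws after burn-in (omitted).
```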
Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A
2018-05-15
Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC_MSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully when using partial correlations.
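As a hedged sketch of the core operation: each subject's connectivity estimate is shrunk toward the group average with a weight given by the usual empirical-Bayes variance ratio. The method-of-moments variance decomposition below is a simple stand-in for the measurement error model in the paper, and the data are simulated.

```python
import numpy as np

def eb_shrink_fc(fc, err_var):
    """Shrink subject-level FC values (one connection, n subjects) toward
    the group mean; err_var is the sampling-error variance of each raw
    estimate, assumed known or estimated elsewhere."""
    group_mean = fc.mean()
    between_var = max(fc.var(ddof=1) - err_var, 0.0)   # method of moments
    weight = between_var / (between_var + err_var)     # 0 => all group mean
    return group_mean + weight * (fc - group_mean)

# Hypothetical z-transformed correlations for one connection across subjects.
rng = np.random.default_rng(0)
raw = 0.4 + 0.1 * rng.standard_normal(20) + 0.2 * rng.standard_normal(20)
shrunk = eb_shrink_fc(raw, err_var=0.04)
```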
Bayesian LASSO, scale space and decision making in association genetics.
Pasanen, Leena; Holmström, Lasse; Sillanpää, Mikko J
2015-01-01
LASSO is a penalized regression method that facilitates model fitting in situations where there are as many, or even more explanatory variables than observations, and only a few variables are relevant in explaining the data. We focus on the Bayesian version of LASSO and consider four problems that need special attention: (i) controlling false positives, (ii) multiple comparisons, (iii) collinearity among explanatory variables, and (iv) the choice of the tuning parameter that controls the amount of shrinkage and the sparsity of the estimates. The particular application considered is association genetics, where LASSO regression can be used to find links between chromosome locations and phenotypic traits in a biological organism. However, the proposed techniques are relevant also in other contexts where LASSO is used for variable selection. We separate the true associations from false positives using the posterior distribution of the effects (regression coefficients) provided by Bayesian LASSO. We propose to solve the multiple comparisons problem by using simultaneous inference based on the joint posterior distribution of the effects. Bayesian LASSO also tends to distribute an effect among collinear variables, making detection of an association difficult. We propose to solve this problem by considering not only individual effects but also their functionals (i.e. sums and differences). Finally, whereas in Bayesian LASSO the tuning parameter is often regarded as a random variable, we adopt a scale space view and consider a whole range of fixed tuning parameters, instead. The effect estimates and the associated inference are considered for all tuning parameters in the selected range and the results are visualized with color maps that provide useful insights into data and the association problem considered. The methods are illustrated using two sets of artificial data and one real data set, all representing typical settings in association genetics.
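One practical point in the abstract, examining functionals of collinear effects rather than individual coefficients, is easy to illustrate: given posterior samples of the effects, the posterior of a sum or difference is just the corresponding combination of samples. The samples below are simulated placeholders rather than output from a Bayesian LASSO fit.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder posterior samples for two collinear effects: the Bayesian
# LASSO splits one true effect between them, so each looks weak alone.
shared = 0.5 * rng.standard_normal(4000)
beta1 = 0.25 + shared + 0.05 * rng.standard_normal(4000)
beta2 = 0.25 - shared + 0.05 * rng.standard_normal(4000)

for name, draws in [("beta1", beta1), ("beta2", beta2),
                    ("beta1+beta2", beta1 + beta2)]:
    lo, hi = np.quantile(draws, [0.025, 0.975])
    print(f"{name}: 95% interval [{lo:.2f}, {hi:.2f}]")
# Individually each interval straddles zero; the sum is clearly positive,
# so the association is detectable through the functional.
```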
Andrinopoulou, Eleni-Rosalina; Rizopoulos, Dimitris
2016-11-20
The joint modeling of longitudinal and survival data has recently received much attention. Several extensions of the standard joint model that consists of one longitudinal and one survival outcome have been proposed, including the use of different association structures between the longitudinal and the survival outcomes. However, in general, relatively little attention has been given to the selection of the most appropriate functional form to link the two outcomes. In common practice, it is assumed that the underlying value of the longitudinal outcome is associated with the survival outcome. However, it could be that different characteristics of the patients' longitudinal profiles influence the hazard, for example, not only the current value but also the slope or the area under the curve of the longitudinal outcome. The choice of which functional form to use is an important decision that needs to be investigated because it could influence the results. In this paper, we use a Bayesian shrinkage approach in order to determine the most appropriate functional forms. We propose a joint model that includes different association structures for different biomarkers and assume informative priors for the regression coefficients that correspond to the terms of the longitudinal process. Specifically, we assume the Bayesian lasso, Bayesian ridge, Bayesian elastic net, and horseshoe priors. These methods are applied to a dataset consisting of patients with a chronic liver disease, where it is important to investigate which characteristics of the biomarkers have an influence on survival.
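Of the shrinkage priors named, the horseshoe is the least standard; a hedged sketch of how draws from it are generated (a global scale times local half-Cauchy scales) is below, with scale values chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def horseshoe_draws(n_coef, n_draws, tau0=0.1):
    """Prior draws from a horseshoe: beta_j ~ N(0, (tau * lambda_j)^2),
    with local scales lambda_j ~ C+(0, 1) and global scale tau ~ C+(0, tau0)."""
    tau = np.abs(tau0 * rng.standard_cauchy(n_draws))[:, None]
    lam = np.abs(rng.standard_cauchy((n_draws, n_coef)))
    return rng.standard_normal((n_draws, n_coef)) * tau * lam

draws = horseshoe_draws(n_coef=4, n_draws=10000)
# Spike at zero plus heavy tails: most mass near zero, occasional large values,
# which is why the prior can zero out some association terms but keep others.
print(np.quantile(np.abs(draws), [0.5, 0.99]))
```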
NASA Astrophysics Data System (ADS)
Gutiérrez, J. M.; Natxiondo, A.; Nieves, J.; Zabala, A.; Sertucha, J.
2017-04-01
The study of shrinkage incidence variations in nodular cast irons is an important aspect of manufacturing processes. These variations change the feeding requirements on castings, and the optimization of riser size is consequently affected when avoiding the formation of shrinkage defects. The effect of a number of processing variables on the shrinkage size has been studied using a layout specifically designed for this purpose. The β parameter has been defined as the relative volume reduction from the pouring temperature down to room temperature. It is observed that shrinkage size and β decrease as the effective carbon content increases and when inoculant is added in the pouring stream. A similar effect is found when the parameters selected from cooling curves show high graphite nucleation during solidification of the cast irons for a given inoculation level. Pearson statistical analysis has been used to analyze the correlations among all the variables involved, and a group of Bayesian networks has subsequently been built so as to obtain the most accurate model for predicting β as a function of the input processing variables. The developed models can be used in foundry plants to study shrinkage incidence variations in the manufacturing process and to optimize the related costs.
Robust, open-source removal of systematics in Kepler data
NASA Astrophysics Data System (ADS)
Aigrain, S.; Parviainen, H.; Roberts, S.; Reece, S.; Evans, T.
2017-10-01
We present ARC2 (Astrophysically Robust Correction 2), an open-source Python-based systematics-correction pipeline for correcting Kepler prime-mission long-cadence light curves. The ARC2 pipeline identifies and corrects any isolated discontinuities in the light curves and then removes trends common to many light curves. These trends are modelled using the publicly available co-trending basis vectors, within an (approximate) Bayesian framework with 'shrinkage' priors to minimize the risk of overfitting and the injection of any additional noise into the corrected light curves, while keeping any astrophysical signals intact. We show that the ARC2 pipeline's performance matches that of the standard Kepler PDC-MAP data products using standard noise metrics, and demonstrate its ability to preserve astrophysical signals using injection tests with simulated stellar rotation and planetary transit signals. Although it is not identical, the ARC2 pipeline can thus be used as an open-source alternative to PDC-MAP, whenever the ability to model the impact of the systematics removal process on other kinds of signal is important.
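A hedged sketch of the central step: regress a light curve on co-trending basis vectors under a Gaussian ("ridge") shrinkage prior, which is one simple approximate-Bayesian reading of the abstract, and subtract the fitted systematics. The basis vectors, prior scale, and noise level are simulated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)

n, k = 1000, 8
cbv = rng.standard_normal((n, k))            # stand-in co-trending basis vectors
flux = cbv @ rng.normal(0, 0.5, k) + 0.01 * rng.standard_normal(n)

def correct_systematics(y, B, prior_var=0.25, noise_var=1e-4):
    """MAP fit of y = B w + e with w ~ N(0, prior_var * I): a ridge solution.
    The shrinkage keeps the fit from absorbing astrophysical signal."""
    k = B.shape[1]
    w = np.linalg.solve(B.T @ B + (noise_var / prior_var) * np.eye(k), B.T @ y)
    return y - B @ w, w

corrected, weights = correct_systematics(flux, cbv)
```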
Bayesian analysis of heterogeneous treatment effects for patient-centered outcomes research.
Henderson, Nicholas C; Louis, Thomas A; Wang, Chenguang; Varadhan, Ravi
2016-01-01
Evaluation of heterogeneity of treatment effect (HTE) is an essential aspect of personalized medicine and patient-centered outcomes research. Our goal in this article is to promote the use of Bayesian methods for subgroup analysis and to lower the barriers to their implementation by describing the ways in which the companion software beanz can facilitate these types of analyses. To advance this goal, we describe several key Bayesian models for investigating HTE and outline the ways in which they are well-suited to address many of the commonly cited challenges in the study of HTE. Topics highlighted include shrinkage estimation, model choice, sensitivity analysis, and posterior predictive checking. A case study is presented in which we demonstrate the use of the methods discussed.
Monitoring the Deformation of High-Rise Buildings in the Shanghai Lujiazui Zone by Tomo-PSInSAR
NASA Astrophysics Data System (ADS)
Zhou, L. F.; Ma, P. F.; Xia, Y.; Xie, C. H.
2018-05-01
In this study, we utilize a Tomography-based Persistent Scatterers Interferometry (Tomo-PSInSAR) approach for monitoring the deformation behaviour of high-rise buildings, i.e. the SWFC and Jin Mao Tower, in the Shanghai Lujiazui Zone. For the purpose of this study, we use 31 Stripmap acquisitions from TerraSAR-X missions, spanning from December 2009 to February 2013. Considering that thermal expansion and creep-and-shrinkage are two long-term movements that occur in high-rise buildings with concrete structures, we use an extended 4-D SAR phase model in which three parameters (height, deformation velocity, and thermal amplitude) are estimated simultaneously. Moreover, we apply a two-tier network strategy to detect single and double PSs with no need for preliminary removal of the atmospheric phase screen (APS) in the study area, avoiding possible error caused by the uncertainty in spatiotemporal filtering. Thermal expansion is illustrated in the thermal amplitude map, and deformation due to creep and shrinkage is revealed in the linear deformation velocity map. The thermal amplitude map demonstrates that the two high-rise buildings dilate and contract periodically, and that the thermal amplitude is strongly related to building height owing to the upward accumulative effect of thermal expansion. The linear deformation velocity map reveals that the SWFC, completed only recently, is subject to creep-and-shrinkage deformation, which appears as height-dependent movement in the linear velocity map. Notably, creep and shrinkage induce downward movements that increase with height. In addition, the deformation rates caused by creep and shrinkage are largest at the beginning, gradually decrease, and finally approach a steady state as time goes to infinity. By contrast, the linear deformation velocity map shows that the Jin Mao Tower is almost stable; the reason is that it is an older building that is no longer influenced by creep and shrinkage, as the load has relaxed and dehydration has proceeded. This study underlines the potential of the Tomo-PSInSAR solution for monitoring the deformation performance of high-rise buildings, offering a quantitative indicator to local authorities and planners for assessing potential damage.
Comparability and Reliability Considerations of Adequate Yearly Progress
ERIC Educational Resources Information Center
Maier, Kimberly S.; Maiti, Tapabrata; Dass, Sarat C.; Lim, Chae Young
2012-01-01
The purpose of this study is to develop an estimate of Adequate Yearly Progress (AYP) that will allow for reliable and valid comparisons among student subgroups, schools, and districts. A shrinkage-type estimator of AYP using the Bayesian framework is described. Using simulated data, the performance of the Bayes estimator will be compared to…
Bayesian Group Bridge for Bi-level Variable Selection.
Mallick, Himel; Yi, Nengjun
2017-06-01
A Bayesian bi-level variable selection method (BAGB: Bayesian Analysis of Group Bridge) is developed for regularized regression and classification. This new development is motivated by grouped data, where generic variables can be divided into multiple groups, with variables in the same group being mechanistically related or statistically correlated. As an alternative to frequentist group variable selection methods, BAGB incorporates structural information among predictors through a group-wise shrinkage prior. Posterior computation proceeds via an efficient MCMC algorithm. In addition to the usual ease-of-interpretation of hierarchical linear models, the Bayesian formulation produces valid standard errors, a feature that is notably absent in the frequentist framework. Empirical evidence of the attractiveness of the method is illustrated by extensive Monte Carlo simulations and real data analysis. Finally, several extensions of this new approach are presented, providing a unified framework for bi-level variable selection in general models with flexible penalties.
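For orientation, a hedged sketch of the group bridge penalty that the prior mirrors: the sum over groups of each group's L1 norm raised to a power γ in (0, 1), which yields selection at both the group and within-group levels. The grouping and γ below are illustrative.

```python
import numpy as np

def group_bridge_penalty(beta, groups, gamma=0.5, lam=1.0):
    """Group bridge penalty: lam * sum_g (||beta_g||_1)^gamma."""
    return lam * sum(np.sum(np.abs(beta[g])) ** gamma for g in groups)

beta = np.array([0.0, 0.0, 0.8, 0.1, 0.0, 1.2])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
print(group_bridge_penalty(beta, groups))
# Whole groups with all-zero coefficients contribute nothing; within an
# active group, the L1 norm still pushes weak members toward zero.
```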
NASA Astrophysics Data System (ADS)
Wheeler, David C.; Waller, Lance A.
2009-03-01
In this paper, we compare and contrast a Bayesian spatially varying coefficient process (SVCP) model with a geographically weighted regression (GWR) model for the estimation of the potentially spatially varying regression effects of alcohol outlets and illegal drug activity on violent crime in Houston, Texas. In addition, we focus on the inherent coefficient shrinkage properties of the Bayesian SVCP model as a way to address increased coefficient variance that follows from collinearity in GWR models. We outline the advantages of the Bayesian model in terms of reducing inflated coefficient variance, enhanced model flexibility, and more formal measuring of model uncertainty for prediction. We find spatially varying effects for alcohol outlets and drug violations, but the amount of variation depends on the type of model used. For the Bayesian model, this variation is controllable through the amount of prior influence placed on the variance of the coefficients. For example, the spatial pattern of coefficients is similar for the GWR and Bayesian models when a relatively large prior variance is used in the Bayesian model.
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...
2017-09-07
In this paper, we demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model, except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in an RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The phenomenal cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models (“curve-fits”). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear and CEVMs. Finally, we find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which the model was not calibrated.
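The Bayesian calibration step, an MCMC loop over a cheap surrogate instead of the 3D RANS solver, can be sketched as below. The quadratic "surrogate", the synthetic vorticity data, and the simple prior bounds standing in for the SVM classifier are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in surrogate for the simulator output at measurement locations,
# as a function of the three calibrated parameters theta.
def surrogate(theta, x):
    return theta[0] + theta[1] * x + theta[2] * x ** 2

x_obs = np.linspace(0, 1, 30)
theta_true = np.array([0.2, 1.0, -0.5])
y_obs = surrogate(theta_true, x_obs) + 0.05 * rng.standard_normal(30)

def log_post(theta):
    if np.any(np.abs(theta) > 5):        # crude stand-in for the SVMC prior
        return -np.inf
    resid = y_obs - surrogate(theta, x_obs)
    return -0.5 * np.sum(resid ** 2) / 0.05 ** 2

# Random-walk Metropolis over the three parameters.
theta, lp = np.zeros(3), log_post(np.zeros(3))
samples = []
for it in range(20000):
    prop = theta + 0.05 * rng.standard_normal(3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
post = np.array(samples[5000:])          # discard burn-in
print(post.mean(axis=0))                 # posterior means near theta_true
```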
Liao, J. G.; Mcmurry, Timothy; Berg, Arthur
2014-01-01
Empirical Bayes methods have been extensively used for microarray data analysis by modeling the large number of unknown parameters as random effects. Empirical Bayes allows borrowing information across genes and can automatically adjust for multiple testing and selection bias. However, the standard empirical Bayes model can perform poorly if the assumed working prior deviates from the true prior. This paper proposes a new rank-conditioned inference in which the shrinkage and confidence intervals are based on the distribution of the error conditioned on rank of the data. Our approach is in contrast to a Bayesian posterior, which conditions on the data themselves. The new method is almost as efficient as standard Bayesian methods when the working prior is close to the true prior, and it is much more robust when the working prior is not close. In addition, it allows a more accurate (but also more complex) non-parametric estimate of the prior to be easily incorporated, resulting in improved inference. The new method’s prior robustness is demonstrated via simulation experiments. Application to a breast cancer gene expression microarray dataset is presented. Our R package rank.Shrinkage provides a ready-to-use implementation of the proposed methodology.
Building Map Skills: The Other Energy Crisis.
ERIC Educational Resources Information Center
Branson, Margaret S.
1983-01-01
People in the developing world worry about the shrinkage of forests and the scarcity of firewood. An international team of geographers assessed desertification and prepared a map. Students can analyze a simplified map and answer questions that will help them understand desertification. (AM)
Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose
2017-01-01
Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed effect model fits is currently recommended for covariate identification, whereas individual empirical Bayesian estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error for LRT and EBE approaches, to confirm the similarity of power between the LRT and EBE approaches from a previous report and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate impacting on clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect, but also controlled the false positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause for decrease in power or inflated false positive rate although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to the current recommendations, EBEs may be a better choice for statistical tests in PPK covariate analysis compared to LRT. We proposed a three-step covariate modeling approach for population PK analysis to utilize the advantages of EBEs while overcoming their shortcomings, which allows not only markedly reducing the run time for population PK analysis, but also providing more accurate covariate tests.
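A hedged sketch of the EBE-based test being advocated: regress the individuals' empirical Bayes estimates of (log) clearance on the covariate and read off the slope's p-value. The "EBEs" here are simulated and shrunk artificially rather than extracted from a fitted NLMEM.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

n = 100
weight = rng.normal(70, 12, n)                       # covariate
log_cl = 1.0 + 0.01 * (weight - 70) + 0.2 * rng.standard_normal(n)
# Mimic EBE shrinkage toward the population mean (factor is assumed).
ebe_log_cl = 0.8 * (log_cl - log_cl.mean()) + log_cl.mean()

# EBE-based covariate test: a simple linear-regression slope test.
res = stats.linregress(weight, ebe_log_cl)
print(f"slope={res.slope:.4f}, p={res.pvalue:.3g}")
# Shrinkage deflates the slope estimate but, as the abstract notes,
# need not inflate the false positive rate of the test itself.
```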
Shrinkage Estimators for a Composite Measure of Quality Conceptualized as a Formative Construct
Shwartz, Michael; Peköz, Erol A; Christiansen, Cindy L; Burgess, James F; Berlowitz, Dan
2013-01-01
Objective: To demonstrate the value of shrinkage estimators when calculating a composite quality measure as the weighted average of a set of individual quality indicators. Data Sources: Rates of 28 quality indicators (QIs) calculated from the minimum dataset from residents of 112 Veterans Health Administration nursing homes in fiscal years 2005–2008. Study Design: We compared composite scores calculated from the 28 QIs using both observed rates and shrunken rates derived from a Bayesian multivariate normal-binomial model. Principal Findings: Shrunken-rate composite scores, because they take into account unreliability of estimates from small samples and the correlation among QIs, have more intuitive appeal than observed-rate composite scores. Facilities can be profiled based on more policy-relevant measures than point estimates of composite scores, and interval estimates can be calculated without assuming the QIs are independent. Usually, shrunken-rate composite scores in 1 year are better able to predict the observed total number of QI events or the observed-rate composite scores in the following year than the initial year observed-rate composite scores. Conclusion: Shrinkage estimators can be useful when a composite measure is conceptualized as a formative construct.
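A hedged miniature of the idea, using an independent beta-binomial model per indicator rather than the paper's multivariate normal-binomial model: each facility rate is shrunk toward the overall rate in proportion to its sample size, then combined into a weighted composite. The counts, weights, and prior strength are invented.

```python
import numpy as np

# Hypothetical QI event counts x out of n residents for one facility,
# across 3 indicators, plus overall rates across all facilities.
x = np.array([2, 15, 4])
n = np.array([20, 300, 25])
overall_rate = np.array([0.08, 0.05, 0.12])
prior_strength = 50.0            # pseudo-residents; assumed, not estimated
weights = np.array([0.5, 0.3, 0.2])

# Beta-binomial posterior mean: small-n indicators shrink hardest.
alpha = prior_strength * overall_rate
shrunken = (alpha + x) / (prior_strength + n)

composite_observed = weights @ (x / n)
composite_shrunken = weights @ shrunken
print(composite_observed, composite_shrunken)
```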
A study of polymerization shrinkage kinetics using digital image correlation.
Lau, Andrew; Li, Jianying; Heo, Young Cheul; Fok, Alex
2015-04-01
To investigate the polymerization shrinkage kinetics of dental resin composites by measuring in real time the full-field shrinkage strain using a novel technique based on digital image correlation (DIC). Polymerization shrinkage in resin composite specimens (Filtek LS and Z100) was measured as a function of time and position. The main experimental setup included a CCD camera and an external shutter inversely synchronized to that of the camera. The specimens (2 mm × 4 mm × 5 mm) were irradiated for 40 s at 1200 mW/cm², while alternating image acquisition and obstruction of the curing light occurred at 15 fps. The acquired images were processed using proprietary software to obtain the full-field strain maps as a function of time. Z100 showed a higher final shrinkage value and rate of development than LS. The final volumetric shrinkage for Z100 and LS were 1.99% and 1.19%, respectively. The shrinkage behavior followed an established shrinkage strain kinetics model. The corresponding characteristic time and reaction order exponent for LS and Z100 were calculated to be approximately 23 s and 0.84, and 14 s and 0.7, respectively, at a distance of 1.0 mm from the irradiated surface, the position where maximum shrinkage strain occurred. Thermal expansion from the exothermic reaction could have affected the accuracy of these parameters. The new DIC method using an inversely synchronized shutter provided real-time, full-field results that could aid in assessing the shrinkage strain kinetics of dental resin composites as a function of specimen depth. It could also help determine the optimal curing modes for dental resin composites.
Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Peng, E-mail: peng@ices.utexas.edu; Schwab, Christoph, E-mail: christoph.schwab@sam.math.ethz.ch
2016-07-01
We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density. The parsimonious surrogates can then be employed for online data assimilation and for Bayesian estimation. They also open a perspective for optimal experimental design.
Le, Quang A; Doctor, Jason N
2011-05-01
As quality-adjusted life years have become the standard metric in health economic evaluations, mapping health-profile or disease-specific measures onto preference-based measures to obtain quality-adjusted life years has become a solution when health utilities are not directly available. However, current mapping methods are limited in their predictive validity, reliability, and/or other methodological aspects. We employ probability theory together with a graphical model, called a Bayesian network, to convert health-profile measures into preference-based measures and to compare the results to those estimated with current mapping methods. A sample of 19,678 adults who completed both the 12-item Short Form Health Survey (SF-12v2) and EuroQoL 5D (EQ-5D) questionnaires from the 2003 Medical Expenditure Panel Survey was split into training and validation sets. Bayesian networks were constructed to explore the probabilistic relationships between each EQ-5D domain and 12 items of the SF-12v2. The EQ-5D utility scores were estimated on the basis of the predicted probability of each response level of the 5 EQ-5D domains obtained from the Bayesian inference process using the following methods: Monte Carlo simulation, expected utility, and most-likely probability. Results were then compared with current mapping methods including multinomial logistic regression, ordinary least squares, and censored least absolute deviations. The Bayesian networks consistently outperformed other mapping models in the overall sample (mean absolute error = 0.077, mean square error = 0.013, and overall R = 0.802), in different age groups, number of chronic conditions, and ranges of the EQ-5D index. Bayesian networks provide a new robust and natural approach to map health status responses into health utility measures for health economic evaluations.
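The expected-utility estimation step is simple to sketch: given the Bayesian network's predicted probabilities over the response levels of each EQ-5D domain, take the probability-weighted average of the utilities attached to each level. The probabilities and the additive utility decrements below are invented placeholders, not the actual EQ-5D tariff.

```python
import numpy as np

# Predicted probabilities over 3 response levels for each of the 5 EQ-5D
# domains (rows), as output by a Bayesian network (placeholder values).
probs = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.8, 0.15, 0.05],
    [0.5, 0.4, 0.1],
    [0.9, 0.08, 0.02],
])
# Hypothetical utility decrements per domain and level (level 1 = none).
decrements = np.array([
    [0.0, 0.05, 0.15],
    [0.0, 0.04, 0.12],
    [0.0, 0.03, 0.10],
    [0.0, 0.06, 0.20],
    [0.0, 0.07, 0.25],
])
expected_utility = 1.0 - np.sum(probs * decrements)
print(round(expected_utility, 3))
```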
Johnson, Eric D; Tubau, Elisabet
2017-06-01
Presenting natural frequencies facilitates Bayesian inferences relative to using percentages. Nevertheless, many people, including highly educated and skilled reasoners, still fail to provide Bayesian responses to these computationally simple problems. We show that the complexity of relational reasoning (e.g., the structural mapping between the presented and requested relations) can help explain the remaining difficulties. With a non-Bayesian inference that required identical arithmetic but afforded a more direct structural mapping, performance was universally high. Furthermore, reducing the relational demands of the task through questions that directed reasoners to use the presented statistics, as compared with questions that prompted the representation of a second, similar sample, also significantly improved reasoning. Distinct error patterns were also observed between these presented- and similar-sample scenarios, which suggested differences in relational-reasoning strategies. On the other hand, while higher numeracy was associated with better Bayesian reasoning, higher-numerate reasoners were not immune to the relational complexity of the task. Together, these findings validate the relational-reasoning view of Bayesian problem solving and highlight the importance of considering not only the presented task structure, but also the complexity of the structural alignment between the presented and requested relations.
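For readers unfamiliar with the contrast, the snippet below works one classic-style problem both ways, as natural frequencies and via Bayes' rule on percentages; the numbers form a generic textbook-style example, not an item from the study.

```python
# Generic problem: base rate 1%, hit rate 80%, false-positive rate 9.6%.
# What fraction of people who test positive actually have the condition?

# Natural-frequency framing: imagine 1000 people.
sick = 10                        # 1% of 1000
true_pos = 8                     # 80% of the sick test positive
false_pos = round(0.096 * 990)   # 9.6% of the 990 healthy -> 95
nf_answer = true_pos / (true_pos + false_pos)

# The same computation via Bayes' rule on probabilities.
p = (0.01 * 0.80) / (0.01 * 0.80 + 0.99 * 0.096)

print(round(nf_answer, 3), round(p, 3))   # both ~0.078
```

The arithmetic is identical either way; what the natural-frequency version changes is the structural mapping between the presented and requested quantities, which is exactly the relational factor the study manipulates.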
A Bayesian approach to tracking patients having changing pharmacokinetic parameters
NASA Technical Reports Server (NTRS)
Bayard, David S.; Jelliffe, Roger W.
2004-01-01
This paper considers the updating of Bayesian posterior densities for pharmacokinetic models associated with patients having changing parameter values. For estimation purposes it is proposed to use the Interacting Multiple Model (IMM) estimation algorithm, which is currently a popular algorithm in the aerospace community for tracking maneuvering targets. The IMM algorithm is described, and compared to the multiple model (MM) and Maximum A-Posteriori (MAP) Bayesian estimation methods, which are presently used for posterior updating when pharmacokinetic parameters do not change. Both the MM and MAP Bayesian estimation methods are used in their sequential forms, to facilitate tracking of changing parameters. Results indicate that the IMM algorithm is well suited for tracking time-varying pharmacokinetic parameters in acutely ill and unstable patients, incurring only about half of the integrated error compared to the sequential MM and MAP methods on the same example.
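At the heart of IMM is a mixing step that propagates model probabilities through a Markov transition matrix before each Bayesian update. The toy sketch below runs that recursion for two candidate parameter regimes with made-up likelihoods, leaving out the per-model filters of the full algorithm.

```python
import numpy as np

# Two hypotheses about a patient's kinetics (stable vs. changed), a Markov
# switching matrix, and per-step observation likelihoods (all invented).
trans = np.array([[0.95, 0.05],
                  [0.05, 0.95]])
mu = np.array([0.5, 0.5])                 # initial model probabilities

likelihoods = [
    np.array([0.8, 0.2]),   # early observations favour "stable"
    np.array([0.7, 0.3]),
    np.array([0.3, 0.7]),   # ... then the parameters drift
    np.array([0.2, 0.8]),
]

for lik in likelihoods:
    mu = trans.T @ mu                     # IMM-style mixing / prediction
    mu = mu * lik                         # Bayesian measurement update
    mu /= mu.sum()
    print(np.round(mu, 3))
# The probability mass tracks the regime change instead of locking in,
# which is why IMM suits acutely ill, unstable patients.
```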
A Bayesian multi-stage cost-effectiveness design for animal studies in stroke research
Cai, Chunyan; Ning, Jing; Huang, Xuelin
2017-01-01
Much progress has been made in the area of adaptive designs for clinical trials. However, little has been done regarding adaptive designs to identify optimal treatment strategies in animal studies. Motivated by an animal study of a novel strategy for treating strokes, we propose a Bayesian multi-stage cost-effectiveness design to simultaneously identify the optimal dose and determine the therapeutic treatment window for administering the experimental agent. We consider a non-monotonic pattern for the dose-schedule-efficacy relationship and develop an adaptive shrinkage algorithm to assign more cohorts to admissible strategies. We conduct simulation studies to evaluate the performance of the proposed design by comparing it with two standard designs. These simulation studies show that the proposed design yields a significantly higher probability of selecting the optimal strategy, while it is generally more efficient and practical in terms of resource usage.
Montazeri, Zahra; Yanofsky, Corey M; Bickel, David R
2010-01-01
Research on analyzing microarray data has focused on the problem of identifying differentially expressed genes to the neglect of the problem of how to integrate evidence that a gene is differentially expressed with information on the extent of its differential expression. Consequently, researchers currently prioritize genes for further study either on the basis of volcano plots or, more commonly, according to simple estimates of the fold change after filtering the genes with an arbitrary statistical significance threshold. While the subjective and informal nature of the former practice precludes quantification of its reliability, the latter practice is equivalent to using a hard-threshold estimator of the expression ratio that is not known to perform well in terms of mean-squared error, the sum of estimator variance and squared estimator bias. On the basis of two distinct simulation studies and data from different microarray studies, we systematically compared the performance of several estimators representing both current practice and shrinkage. We find that the threshold-based estimators usually perform worse than the maximum-likelihood estimator (MLE) and they often perform far worse as quantified by estimated mean-squared risk. By contrast, the shrinkage estimators tend to perform as well as or better than the MLE and never much worse than the MLE, as expected from what is known about shrinkage. However, a Bayesian measure of performance based on the prior information that few genes are differentially expressed indicates that hard-threshold estimators perform about as well as the local false discovery rate (FDR), the best of the shrinkage estimators studied. Based on the ability of the latter to leverage information across genes, we conclude that the use of the local-FDR estimator of the fold change instead of informal or threshold-based combinations of statistical tests and non-shrinkage estimators can be expected to substantially improve the reliability of gene prioritization at very little risk of doing so less reliably. Since the proposed replacement of post-selection estimates with shrunken estimates applies as well to other types of high-dimensional data, it could also improve the analysis of SNP data from genome-wide association studies.
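A hedged sketch of the contrast the authors draw: a hard-threshold estimator zeroes the estimated log fold change unless a test passes, while normal-normal (posterior-mean) shrinkage pulls every estimate toward zero smoothly. The prior and error variances below are illustrative.

```python
import numpy as np

# Observed log2 fold changes with an assumed known sampling sd.
obs = np.array([0.1, 0.9, 2.4, -1.6, 0.4])
se = 0.5

# Hard-threshold estimator: keep the estimate only if |z| exceeds 1.96.
hard = np.where(np.abs(obs / se) > 1.96, obs, 0.0)

# Normal-normal posterior-mean shrinkage with prior N(0, tau^2).
tau2 = 1.0
shrunk = obs * tau2 / (tau2 + se ** 2)

print(hard)    # all-or-nothing estimates
print(shrunk)  # graded shrinkage toward zero
```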
Inferring the most probable maps of underground utilities using Bayesian mapping model
NASA Astrophysics Data System (ADS)
Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony
2018-03-01
Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences of the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, even though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps and their visualization is challenging and requires the implementation of robust machine learning/data fusion approaches. An approach for the automated creation of revised maps was developed as a Bayesian mapping model in this paper by integrating the knowledge extracted from raw sensor data with the available statutory records. Statutory records were combined with the hypotheses from the sensors to give an initial estimate of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.
Dumont, Cyrielle; Lestini, Giulia; Le Nagard, Hervé; Mentré, France; Comets, Emmanuelle; Nguyen, Thu Thuy; for the PFIM Group
2018-03-01
Nonlinear mixed-effect models (NLMEMs) are increasingly used for the analysis of longitudinal studies during drug development. When designing these studies, the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. The function PFIM is the first tool for design evaluation and optimization that has been developed in R. In this article, we present an extended version, PFIM 4.0, which includes several new features. Compared with version 3.0, PFIM 4.0 includes a more complete pharmacokinetic/pharmacodynamic library of models and accommodates models including additional random effects for inter-occasion variability as well as discrete covariates. A new input method has been added to specify user-defined models through an R function. Optimization can be performed assuming some fixed parameters or some fixed sampling times. New outputs have been added regarding the FIM such as eigenvalues, conditional numbers, and the option of saving the matrix obtained after evaluation or optimization. Previously obtained results, which are summarized in a FIM, can be taken into account in evaluation or optimization of one-group protocols. This feature enables the use of PFIM for adaptive designs. The Bayesian individual FIM has been implemented, taking into account a priori distribution of random effects. Designs for maximum a posteriori Bayesian estimation of individual parameters can now be evaluated or optimized and the predicted shrinkage is also reported. It is also possible to visualize the graphs of the model and the sensitivity functions without performing evaluation or optimization. The usefulness of these approaches and the simplicity of use of PFIM 4.0 are illustrated by two examples: (i) an example of designing a population pharmacokinetic study accounting for previous results, which highlights the advantage of adaptive designs; (ii) an example of Bayesian individual design optimization for a pharmacodynamic study, showing that the Bayesian individual FIM can be a useful tool in therapeutic drug monitoring, allowing efficient prediction of estimation precision and shrinkage for individual parameters. PFIM 4.0 is a useful tool for design evaluation and optimization of longitudinal studies in pharmacometrics and is freely available at http://www.pfim.biostat.fr.
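As a hedged illustration of what "evaluating a design via the FIM" means in the simplest possible case: for a model linear in its parameters with Gaussian error, the expected information is X'X/σ², and the predicted standard errors are the square roots of the inverse's diagonal. The sampling times and model below are invented and far simpler than PFIM's nonlinear mixed-effects setting.

```python
import numpy as np

# Toy design evaluation: linear model y = a + b*t with error sd sigma,
# for a candidate set of sampling times t (hypothetical design).
t = np.array([0.5, 1.0, 2.0, 6.0, 12.0])
sigma = 0.2
X = np.column_stack([np.ones_like(t), t])

fim = X.T @ X / sigma ** 2        # expected Fisher information matrix
cov = np.linalg.inv(fim)          # predicted parameter covariance
print(np.sqrt(np.diag(cov)))      # predicted standard errors
print(np.linalg.cond(fim))        # condition number, among the outputs PFIM reports
```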
Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics
Chen, Wenan; Larrabee, Beth R.; Ovsyannikova, Inna G.; Kennedy, Richard B.; Haralambieva, Iana H.; Poland, Gregory A.; Schaid, Daniel J.
2015-01-01
Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf.
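One common way to obtain a Bayes factor from marginal statistics alone, in the spirit of CAVIARBF (the software's exact prior specification may differ), compares two zero-mean multivariate normal models for the z-scores: covariance R under the null versus R + RWR under the alternative, where R is the SNP correlation matrix and W holds prior effect variances for the assumed causal configuration.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_bf(z, R, prior_var):
    """Approximate log Bayes factor for one causal configuration, computed
    from marginal z-scores and the LD matrix R only; prior_var is a vector
    of prior effect variances (zero for SNPs assumed non-causal)."""
    W = np.diag(prior_var)
    null = multivariate_normal.logpdf(z, mean=np.zeros(len(z)), cov=R)
    alt = multivariate_normal.logpdf(z, mean=np.zeros(len(z)), cov=R + R @ W @ R)
    return alt - null

# Toy example: 3 SNPs in LD, SNP 0 assumed causal with prior variance 0.1.
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])
z = np.array([4.2, 2.8, 1.1])
print(log_bf(z, R, prior_var=np.array([0.1, 0.0, 0.0])))
```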
BM-Map: Bayesian Mapping of Multireads for Next-Generation Sequencing Data
Ji, Yuan; Xu, Yanxun; Zhang, Qiong; Tsui, Kam-Wah; Yuan, Yuan; Norris, Clift; Liang, Shoudan; Liang, Han
2011-01-01
Next-generation sequencing (NGS) technology generates millions of short reads, which provide valuable information for various aspects of cellular activities and biological functions. A key step in NGS applications (e.g., RNA-Seq) is to map short reads to correct genomic locations within the source genome. While most reads are mapped to a unique location, a significant proportion of reads align to multiple genomic locations with equal or similar numbers of mismatches; these are called multireads. The ambiguity in mapping the multireads may lead to bias in downstream analyses. Currently, most practitioners discard the multireads in their analysis, resulting in a loss of valuable information, especially for the genes with similar sequences. To refine the read mapping, we develop a Bayesian model that computes the posterior probability of mapping a multiread to each competing location. The probabilities are used for downstream analyses, such as the quantification of gene expression. We show through simulation studies and RNA-Seq analysis of real life data that the Bayesian method yields better mapping than the current leading methods. We provide a C++ program for download that is being packaged into user-friendly software.
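A hedged miniature of the computation: the posterior probability that a multiread belongs to each competing locus combines a mismatch-based likelihood with a prior proportional to each locus's estimated expression. The error rate and expression levels are invented, and the actual BM-Map model includes additional terms.

```python
import numpy as np

def multiread_posterior(mismatches, read_len, expression, error_rate=0.01):
    """Posterior P(location k | read) for a multiread aligning to several
    loci, given mismatch counts per locus and locus expression as the prior."""
    loglik = (mismatches * np.log(error_rate)
              + (read_len - mismatches) * np.log(1 - error_rate))
    logpost = loglik + np.log(expression / expression.sum())
    w = np.exp(logpost - logpost.max())        # stabilized normalization
    return w / w.sum()

# Read of length 50 aligning to 3 loci with 0, 1, and 1 mismatches.
post = multiread_posterior(np.array([0, 1, 1]), 50,
                           expression=np.array([100.0, 400.0, 50.0]))
print(np.round(post, 3))   # most mass on the mismatch-free locus
```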
MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control
NASA Astrophysics Data System (ADS)
Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming
2017-09-01
The increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches under dynamic production conditions. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, the large volume and variety of quality-related data generated during the manufacturing process can be handled. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce process. Relying on the ability of the Bayesian network to deal with dynamic and uncertain problems and on the parallel computing power of MapReduce, Bayesian networks of the factors affecting quality are built from prior probability distributions and modified with posterior probability distributions. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the increase in computing nodes. It is also shown that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision-making for precision problem solving. The integration of big data analytics and the BN method offers a whole new perspective on manufacturing quality control.
Sun, Jirun; Eidelman, Naomi; Lin-Gibson, Sheng
2009-03-01
The objectives of this study were to (1) demonstrate X-ray micro-computed tomography (microCT) as a viable method for determining the polymerization shrinkage and microleakage on the same sample accurately and non-destructively, and (2) investigate the effect of sample geometry (e.g., C-factor and volume) on polymerization shrinkage and microleakage. Composites placed in a series of model cavities of controlled C-factors and volumes were imaged using microCT to determine their precise location and volume before and after photopolymerization. Shrinkage was calculated by comparing the volume of composites before and after polymerization and leakage was predicted based on gap formation between composites and cavity walls as a function of position. Dye penetration experiments were used to validate microCT results. The degree of conversion (DC) of composites measured using FTIR microspectroscopy in reflectance mode was nearly identical for composites filled in all model cavity geometries. The shrinkage of composites calculated based on microCT results was statistically identical regardless of sample geometry. Microleakage, on the other hand, was highly dependent on the C-factor as well as the composite volume, with higher C-factors and larger volumes leading to a greater probability of microleakage. Spatial distribution of microleakage determined by microCT agreed well with results determined by dye penetration. microCT has proven to be a powerful technique in quantifying polymerization shrinkage and corresponding microleakage for clinically relevant cavity geometries.
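The volumetric shrinkage computation itself is a one-liner once the segmented volumes are available; a hedged sketch with invented voxel counts:

```python
# Hypothetical voxel counts of the composite before and after curing,
# from segmented microCT images (the voxel size cancels in the ratio).
v_before = 1_250_000
v_after = 1_226_000
shrinkage_pct = 100 * (v_before - v_after) / v_before
print(f"{shrinkage_pct:.2f}% volumetric shrinkage")   # ~1.92%; cf. 1.99% for Z100
```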
Sequential Inverse Problems: Bayesian Principles and the Logistic Map Example
NASA Astrophysics Data System (ADS)
Duan, Lian; Farmer, Chris L.; Moroz, Irene M.
2010-09-01
Bayesian statistics provides a general framework for solving inverse problems, but is not without interpretation and implementation problems. This paper discusses difficulties arising from the fact that forward models are always in error to some extent. Using a simple example based on the one-dimensional logistic map, we argue that, when implementation problems are minimal, the Bayesian framework is quite adequate. In this paper the Bayesian Filter is shown to be able to recover excellent state estimates in the perfect model scenario (PMS) and to distinguish the PMS from the imperfect model scenario (IMS). Through a quantitative comparison of the way in which the observations are assimilated in both the PMS and the IMS scenarios, we suggest that one can, sometimes, measure the degree of imperfection.
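A minimal bootstrap-particle-filter sketch of the perfect-model scenario on the logistic map (all parameter values are illustrative, not the paper's configuration):

import numpy as np

rng = np.random.default_rng(0)
r, T, N = 3.6, 50, 2000            # map parameter, time steps, particles
sig_proc, sig_obs = 1e-4, 0.02     # model-error and observation-noise scales

x_true = np.empty(T); x_true[0] = 0.3
for t in range(1, T):               # "truth" run of x -> r x (1 - x)
    x_true[t] = r * x_true[t-1] * (1 - x_true[t-1])
y = x_true + rng.normal(0, sig_obs, T)   # noisy observations

x = rng.uniform(0, 1, N)            # initial particle cloud
est = np.empty(T)
for t in range(T):
    if t > 0:                       # propagate through the forward model
        x = np.clip(r * x * (1 - x) + rng.normal(0, sig_proc, N), 0, 1)
    w = np.exp(-0.5 * ((y[t] - x) / sig_obs) ** 2)   # observation likelihood
    w /= w.sum()
    est[t] = np.sum(w * x)          # posterior-mean state estimate
    x = x[rng.choice(N, N, p=w)]    # multinomial resampling
print(np.abs(est - x_true).max())   # small when the assumed model is perfect

Replacing the propagation step with a slightly wrong r mimics the imperfect model scenario, and the resulting degradation of the weights is one way to see the model error the authors discuss.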
Bayesian geostatistics in health cartography: the perspective of malaria.
Patil, Anand P; Gething, Peter W; Piel, Frédéric B; Hay, Simon I
2011-06-01
Maps of parasite prevalences and other aspects of infectious diseases that vary in space are widely used in parasitology. However, spatial parasitological datasets rarely, if ever, have sufficient coverage to allow exact determination of such maps. Bayesian geostatistics (BG) is a method for finding a large sample of maps that can explain a dataset, in which maps that do a better job of explaining the data are more likely to be represented. This sample represents the knowledge that the analyst has gained from the data about the unknown true map. BG provides a conceptually simple way to convert these samples to predictions of features of the unknown map, for example regional averages. These predictions account for each map in the sample, yielding an appropriate level of predictive precision.
The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.
Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun
2017-01-01
Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, the limited number of samples, and the small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates nice features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors, and expression data of 4919 genes; and the ovarian cancer data set from TCGA with 362 tumors, and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.
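The shrinkage behaviour of the mixture double-exponential prior is easy to demonstrate. The sketch below (hyperparameter values are illustrative, not BhGLM defaults) computes the posterior probability that a coefficient of a given size came from the slab rather than the spike:

import numpy as np

def de_pdf(beta, s):
    """Double-exponential (Laplace) density with scale s."""
    return np.exp(-np.abs(beta) / s) / (2 * s)

def slab_prob(beta, theta=0.1, s0=0.03, s1=1.0):
    """P(slab | beta) under the prior (1-theta)*DE(0,s0) + theta*DE(0,s1)."""
    slab = theta * de_pdf(beta, s1)
    spike = (1 - theta) * de_pdf(beta, s0)
    return slab / (slab + spike)

for b in (0.01, 0.1, 0.5):
    print(b, round(slab_prob(b), 3))   # ~0.005, ~0.08, ~1.0
# Small coefficients fall to the spike (strong shrinkage); large ones to the
# slab (weak shrinkage), exactly the behaviour described above.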
Automated high resolution mapping of coffee in Rwanda using an expert Bayesian network
NASA Astrophysics Data System (ADS)
Mukashema, A.; Veldkamp, A.; Vrieling, A.
2014-12-01
African highland agro-ecosystems are dominated by small-scale agricultural fields that often contain a mix of annual and perennial crops. This makes such systems difficult to map by remote sensing. We developed an expert Bayesian network model to extract the small-scale coffee fields of Rwanda from very high resolution data. The model was subsequently applied to aerial orthophotos covering more than 99% of Rwanda and to one QuickBird image for the remaining part. The method consists of a stepwise adjustment of pixel probabilities, which incorporates expert knowledge on the size of coffee trees and fields, and on their location. The initial naive Bayesian network, which is a spectral-based classification, yielded a coffee map with an overall accuracy of around 50%. This confirms that standard spectral variables alone cannot accurately identify coffee fields from high resolution images. The combination of spectral and ancillary data (a DEM and a forest map) allowed mapping of coffee fields and associated uncertainties with an overall accuracy of 87%. Aggregated to district units, the mapped coffee areas demonstrated a high correlation with the coffee areas reported in the detailed national coffee census of 2009 (R2 = 0.92). Unlike the census data, our map provides high spatial resolution of coffee area patterns in Rwanda. The proposed method has potential for mapping other perennial small-scale cropping systems in the East African Highlands and elsewhere.
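The stepwise adjustment of pixel probabilities can be read as repeated Bayesian updating with one likelihood ratio per evidence layer; the layer names and numbers below are hypothetical:

def update_prob(prior, likelihood_ratios):
    """Posterior pixel probability after a sequence of evidence layers:
    posterior odds = prior odds * product of layer likelihood ratios."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# e.g. spectral match, suitable elevation from the DEM, outside the forest mask
print(update_prob(0.05, [6.0, 2.5, 1.8]))  # ~0.59, up from the 0.05 prior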
Bayesian Localization and Mapping Using GNSS SNR Measurements
2014-05-01
Jason T. Isaacs, Andrew T. Irish, François Quitin, Upamanyu Madhow, and João P. Hespanha. In urban areas, GNSS localization quality is often degraded due to signal blockage and multi-path reflections. When several GNSS signals are blocked by buildings, the remaining unblocked GNSS satellites are typically in a poor geometry for localization (nearly collinear along the
NASA Astrophysics Data System (ADS)
Agapiou, Sergios; Burger, Martin; Dashti, Masoumeh; Helin, Tapio
2018-04-01
We consider the inverse problem of recovering an unknown functional parameter u in a separable Banach space from a noisy observation vector y of its image through a known, possibly non-linear, map G. We adopt a Bayesian approach to the problem and consider Besov space priors (see Lassas et al (2009 Inverse Problems Imaging 3 87-122)), which are well known for their edge-preserving and sparsity-promoting properties and have recently attracted wide attention, especially in the medical imaging community. Our key result is to show that in this non-parametric setup the maximum a posteriori (MAP) estimates are characterized by the minimizers of a generalized Onsager-Machlup functional of the posterior. This is done independently for the so-called weak and strong MAP estimates, which, as we show, coincide in our context. In addition, we prove a form of weak consistency for the MAP estimators in the infinitely informative data limit. Our results are remarkable for two reasons: first, the prior distribution is non-Gaussian and does not meet the smoothness conditions required in previous research on non-parametric MAP estimates. Second, the result analytically justifies existing uses of the MAP estimate in finite but high-dimensional discretizations of Bayesian inverse problems with the considered Besov priors.
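Schematically (in notation assumed here rather than quoted from the paper), the characterization says that for Gaussian noise with covariance Gamma and a Besov B^s_{11} prior with scale kappa, the MAP estimates minimize a Tikhonov-type functional:

% A hedged sketch of the generalized Onsager-Machlup functional:
\begin{equation*}
  J(u) = \tfrac{1}{2}\,\bigl\| \Gamma^{-1/2} \bigl( y - \mathcal{G}(u) \bigr) \bigr\|^2
         + \kappa \, \| u \|_{B^s_{11}},
\end{equation*}
% i.e. a least-squares data misfit plus a sparsity-promoting Besov-norm
% penalty, which is what analytically justifies the edge-preserving
% variational reconstructions mentioned in the abstract.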
Predicting Quantitative Traits With Regression Models for Dense Molecular Markers and Pedigree
de los Campos, Gustavo; Naya, Hugo; Gianola, Daniel; Crossa, José; Legarra, Andrés; Manfredi, Eduardo; Weigel, Kent; Cotes, José Miguel
2009-01-01
The availability of genomewide dense markers brings opportunities and challenges to breeding programs. An important question concerns the ways in which dense markers and pedigrees, together with phenotypic records, should be used to arrive at predictions of genetic values for complex traits. If a large number of markers are included in a regression model, marker-specific shrinkage of regression coefficients may be needed. For this reason, the Bayesian least absolute shrinkage and selection operator (LASSO) (BL) appears to be an interesting approach for fitting marker effects in a regression model. This article adapts the BL to arrive at a regression model where markers, pedigrees, and covariates other than markers are considered jointly. Connections between BL and other marker-based regression models are discussed, and the sensitivity of BL with respect to the choice of prior distributions assigned to key parameters is evaluated using simulation. The proposed model was fitted to two data sets from wheat and mouse populations, and evaluated using cross-validation methods. Results indicate that inclusion of markers in the regression further improved the predictive ability of models. An R program that implements the proposed model is freely available. PMID:19293140
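For reference, the Park-Casella hierarchy on which the Bayesian LASSO rests can be written as follows (a common presentation; implementations differ in details such as the treatment of the residual variance):

% Marker-specific scale mixture of normals:
\begin{equation*}
  \beta_j \mid \tau_j^2, \sigma^2 \sim N(0, \tau_j^2 \sigma^2),
  \qquad
  \tau_j^2 \sim \mathrm{Exp}(\lambda^2 / 2),
\end{equation*}
% which, integrating out tau_j^2, gives the double-exponential marginal
% prior (lambda / (2 sigma)) exp(-lambda |beta_j| / sigma); its posterior
% mode is the LASSO solution, while the marker-specific tau_j^2 provide
% the marker-specific shrinkage discussed above.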
ANOTHER LOOK AT THE FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHM (FISTA)
Kim, Donghwan; Fessler, Jeffrey A.
2017-01-01
This paper provides a new way of developing the “Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)” [3] that is widely used for minimizing composite convex functions with a nonsmooth term such as the ℓ1 regularizer. In particular, this paper shows that FISTA corresponds to an optimized approach to accelerating the proximal gradient method with respect to a worst-case bound of the cost function. This paper then proposes a new algorithm that is derived by instead optimizing the step coefficients of the proximal gradient method with respect to a worst-case bound of the composite gradient mapping. The proof is based on the worst-case analysis called Performance Estimation Problem in [11]. PMID:29805242
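For concreteness, here is a textbook FISTA sketch for the l1-regularized least-squares problem the abstract alludes to, with the usual momentum schedule; it illustrates the standard algorithm, not the new variant proposed in the paper:

import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal map of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = soft(z - A.T @ (A @ z - b) / L, lam / L)   # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0   # momentum schedule
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)      # extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))             # underdetermined sparse recovery
x0 = np.zeros(100); x0[[3, 30, 70]] = [1.5, -2.0, 1.0]
x_hat = fista(A, A @ x0, lam=0.5)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # should recover {3, 30, 70}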
Bayesian Nonparametric Ordination for the Analysis of Microbial Communities.
Ren, Boyu; Bacallado, Sergio; Favaro, Stefano; Holmes, Susan; Trippa, Lorenzo
2017-01-01
Human microbiome studies use sequencing technologies to measure the abundance of bacterial species or Operational Taxonomic Units (OTUs) in samples of biological material. Typically the data are organized in contingency tables with OTU counts across heterogeneous biological samples. In the microbial ecology community, ordination methods are frequently used to investigate latent factors or clusters that capture and describe variations of OTU counts across biological samples. It remains important to evaluate how uncertainty in estimates of each biological sample's microbial distribution propagates to ordination analyses, including visualization of clusters and projections of biological samples on low dimensional spaces. We propose a Bayesian analysis for dependent distributions to endow frequently used ordinations with estimates of uncertainty. A Bayesian nonparametric prior for dependent normalized random measures is constructed, which is marginally equivalent to the normalized generalized Gamma process, a well-known prior for nonparametric analyses. In our prior, the dependence and similarity between microbial distributions is represented by latent factors that concentrate in a low dimensional space. We use a shrinkage prior to tune the dimensionality of the latent factors. The resulting posterior samples of model parameters can be used to evaluate uncertainty in analyses routinely applied in microbiome studies. Specifically, by combining them with multivariate data analysis techniques we can visualize credible regions in ecological ordination plots. The characteristics of the proposed model are illustrated through a simulation study and applications in two microbiome datasets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciuca, Razvan; Hernández, Oscar F., E-mail: razvan.ciuca@mail.mcgill.ca, E-mail: oscarh@physics.mcgill.ca
There exist various proposals to detect cosmic strings from Cosmic Microwave Background (CMB) or 21 cm temperature maps. Current proposals do not aim to find the location of strings on sky maps; all of these approaches can be thought of as a statistic on a sky map. We propose a Bayesian interpretation of cosmic string detection and, within that framework, derive a connection between estimates of cosmic string locations and the cosmic string tension Gμ. We use this Bayesian framework to develop a machine learning framework for detecting strings from sky maps and outline how to implement this framework with neural networks. The neural network we trained was able to detect and locate cosmic strings on noiseless CMB temperature maps down to a string tension of Gμ = 5×10^-9, and when analyzing a CMB temperature map that does not contain strings, the neural network gives a 0.95 probability that Gμ ≤ 2.3×10^-9.
Varadhan, Ravi; Wang, Sue-Jane
2016-01-01
Treatment effect heterogeneity is a well-recognized phenomenon in randomized controlled clinical trials. In this paper, we discuss subgroup analyses with prespecified subgroups of clinical or biological importance. We explore various alternatives to the naive (the traditional univariate) subgroup analyses to address the issues of multiplicity and confounding. Specifically, we consider a model-based Bayesian shrinkage (Bayes-DS) and a nonparametric, empirical Bayes shrinkage approach (Emp-Bayes) to temper the optimism of traditional univariate subgroup analyses; a standardization approach (standardization) that accounts for correlation between baseline covariates; and a model-based maximum likelihood estimation (MLE) approach. The Bayes-DS and Emp-Bayes methods model the variation in subgroup-specific treatment effect rather than testing the null hypothesis of no difference between subgroups. The standardization approach addresses the issue of confounding in subgroup analyses. The MLE approach is considered only for comparison in simulation studies as the “truth” since the data were generated from the same model. Using the characteristics of a hypothetical large outcome trial, we perform simulation studies and articulate the utilities and potential limitations of these estimators. Simulation results indicate that Bayes-DS and Emp-Bayes can protect against optimism present in the naïve approach. Due to its simplicity, the naïve approach should be the reference for reporting univariate subgroup-specific treatment effect estimates from exploratory subgroup analyses. Standardization, although it tends to have a larger variance, is suggested when it is important to address the confounding of univariate subgroup effects due to correlation between baseline covariates. The Bayes-DS approach is available as an R package (DSBayes). PMID:26485117
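A minimal sketch of the empirical-Bayes shrinkage idea on made-up subgroup estimates (the paper's estimators, and the DSBayes package, are more elaborate):

import numpy as np

def eb_shrink(est, se):
    """James-Stein-type shrinkage of subgroup effects toward their
    precision-weighted mean, assuming est_g ~ N(theta_g, se_g^2) and
    theta_g ~ N(mu, tau2) with tau2 estimated by method of moments."""
    w = 1.0 / se**2
    mu = np.sum(w * est) / np.sum(w)
    tau2 = max(0.0, np.var(est, ddof=1) - np.mean(se**2))
    b = tau2 / (tau2 + se**2)              # per-subgroup shrinkage factor
    return mu + b * (est - mu)

est = np.array([0.40, 0.05, 0.22, -0.10])  # naive subgroup treatment effects
se = np.array([0.15, 0.12, 0.10, 0.20])
print(eb_shrink(est, se))  # extreme subgroups are pulled toward the mean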
Efficient Posterior Probability Mapping Using Savage-Dickey Ratios
Penny, William D.; Ridgway, Gerard R.
2013-01-01
Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size thus lending a precise physiological meaning to activated regions, (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner. PMID:23533640
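The Savage-Dickey identity itself is easy to demonstrate in a conjugate normal setting; the sketch below is a generic illustration, not the SPM implementation:

from scipy.stats import norm

def savage_dickey_bf01(ybar, n, sigma=1.0, tau=1.0):
    """BF01 for H0: theta = 0 vs H1: theta ~ N(0, tau^2), given a sample
    mean ybar ~ N(theta, sigma^2 / n). Savage-Dickey: BF01 equals the
    posterior density at theta = 0 divided by the prior density there."""
    v = 1.0 / (1.0 / tau**2 + n / sigma**2)   # posterior variance
    m = v * (n / sigma**2) * ybar             # posterior mean
    return norm.pdf(0.0, m, v**0.5) / norm.pdf(0.0, 0.0, tau)

print(savage_dickey_bf01(ybar=0.05, n=50))   # > 1: evidence for the null
print(savage_dickey_bf01(ybar=0.80, n=50))   # << 1: evidence for an effect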
USDA-ARS?s Scientific Manuscript database
As a first step towards the genetic mapping of quantitative trait loci (QTL) affecting stress response variation in rainbow trout, we performed complex segregation analyses (CSA) fitting mixed inheritance models of plasma cortisol using Bayesian methods in large full-sib families of rainbow trout. ...
Snake River Plain Geothermal Play Fairway Analysis - Phase 1 Raster Files
John Shervais
2015-10-09
Snake River Plain Play Fairway Analysis - Phase 1 CRS Raster Files. This dataset contains raster files created in ArcGIS. These raster images depict Common Risk Segment (CRS) maps for HEAT, PERMEABILITY, and SEAL, as well as selected maps of Evidence Layers. These evidence layers consist of either Bayesian krige functions or kernel density functions, and include: (1) HEAT: heat flow (Bayesian krige map), heat flow standard error on the krige function (data confidence), volcanic vent distribution as a function of age and size, groundwater temperature (equivalue interval and natural breaks bins), and groundwater temperature standard error. (2) PERMEABILITY: fault and lineament maps, both as mapped and as kernel density functions, processed for both dilational tendency (TD) and slip tendency (ST), along with data confidence maps for each data type. Data types include mapped surface faults from USGS and Idaho Geological Survey databases, as well as unpublished mapping; lineations derived from maximum gradients in magnetic, deep gravity, and intermediate depth gravity anomalies. (3) SEAL: seal maps based on the presence and thickness of lacustrine sediments and the base of the SRP aquifer. Raster size is 2 km. All files generated in ArcGIS.
Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods
NASA Astrophysics Data System (ADS)
Fang, Ruogu; Raj, Ashish; Chen, Tsuhan; Sanelli, Pina C.
2012-03-01
In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, which has a higher radiation dose due to its cine scanning technique. A simple and cost-effective means of performing the examinations is to lower the milliampere-seconds (mAs) parameter as far as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and greatly degrade CT perfusion maps if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method that uses a piecewise parametric model of the residual function is adopted, and the model parameters are then estimated from a Bayesian formulation of prior smoothness constraints on perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low-dose CT data. The merit of this scheme lies in combining an analytical piecewise residual function with a Bayesian framework that uses a simple prior spatial constraint for the CT perfusion application. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded an increase in signal-to-noise ratio (SNR) of 78% and a decrease in mean-squared error (MSE) of 40% at a low radiation dose of 43 mA.
A Gaussian random field model for similarity-based smoothing in Bayesian disease mapping.
Baptista, Helena; Mendes, Jorge M; MacNab, Ying C; Xavier, Miguel; Caldas-de-Almeida, José
2016-08-01
Conditionally specified Gaussian Markov random field (GMRF) models with an adjacency-based neighbourhood weight matrix, commonly known as neighbourhood-based GMRF models, have been the mainstream approach to spatial smoothing in Bayesian disease mapping. In the present paper, we propose a conditionally specified Gaussian random field (GRF) model with a similarity-based non-spatial weight matrix to facilitate non-spatial smoothing in Bayesian disease mapping. The model, named the similarity-based GRF, is motivated by modelling disease mapping data in situations where the underlying small-area relative risks and the associated determinant factors do not vary systematically in space, and where similarity is defined with respect to the associated disease determinant factors. The neighbourhood-based GMRF and the similarity-based GRF are compared and assessed via a simulation study and two case studies, using new data on alcohol abuse in Portugal collected by the World Mental Health Survey Initiative and the well-known lip cancer data in Scotland. In the presence of disease data with no evidence of positive spatial correlation, the simulation study showed a consistent gain in efficiency from the similarity-based GRF, compared with the adjacency-based GMRF with the determinant risk factors as covariates. This new approach broadens the scope of the existing conditional autocorrelation models. © The Author(s) 2016.
F-MAP: A Bayesian approach to infer the gene regulatory network using external hints
Shahdoust, Maryam; Mahjub, Hossein; Sadeghi, Mehdi
2017-01-01
Common topological features of the gene regulatory networks of related species suggest that the network of one species can be reconstructed using additional information from the gene expression profiles of related species. We present an algorithm, named F-MAP, to reconstruct the gene regulatory network by applying knowledge about gene interactions from related species. Our algorithm sets up a Bayesian framework to estimate the precision matrix of one species' microarray gene expression dataset, from which the Gaussian graphical model of the network is inferred. The conjugate Wishart prior is used, and the information from related species is applied to estimate the hyperparameters of the prior distribution using factor analysis. Applying the proposed algorithm to six related species of Drosophila shows that the precision of the reconstructed networks is improved considerably compared to the precision of networks constructed by other Bayesian approaches. PMID:28938012
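The conjugate core of such an approach is compact; the sketch below uses a generic Wishart prior and omits F-MAP's factor-analysis estimation of the hyperparameters from related species, which is the paper's actual contribution:

import numpy as np

def precision_posterior_mean(X, V0, nu0):
    """Zero-mean Gaussian graphical model with Lambda ~ Wishart(nu0, V0):
    the posterior is Wishart(nu0 + n, (V0^{-1} + X'X)^{-1}), and a
    Wishart(nu, V) distribution has mean nu * V."""
    n = X.shape[0]
    Vn = np.linalg.inv(np.linalg.inv(V0) + X.T @ X)
    return (nu0 + n) * Vn

rng = np.random.default_rng(2)              # toy data: 100 arrays, 4 genes
z = rng.normal(size=(100, 4))
X = z.copy(); X[:, 1] = 0.9 * X[:, 0] + 0.45 * z[:, 1]   # genes 0,1 co-regulated
p = X.shape[1]
Lam = precision_posterior_mean(X, V0=np.eye(p) / p, nu0=p + 2)
print(np.round(Lam, 2))   # a large (0,1) off-diagonal entry flags the edge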
Bayesian component separation: The Planck experience
NASA Astrophysics Data System (ADS)
Wehus, Ingunn Kathrine; Eriksen, Hans Kristian
2018-05-01
Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.
Wetlands shrinkage, fragmentation and their links to agriculture in the Muleng-Xingkai Plain, China.
Song, Kaishan; Wang, Zongming; Li, Lin; Tedesco, Lenore; Li, Fang; Jin, Cui; Du, Jia
2012-11-30
In the past five decades, the wetlands in the Muleng-Xingkai Plain, Northeast China, have experienced rapid shrinkage and fragmentation. In this study, wetland cover change and agricultural cultivation were investigated through a time series of thematic maps from 1954 and Landsat satellite images representing the last five decades (1976, 1986, 1995, 2000, and 2005). Wetlands shrinkage and fragmentation were studied based on landscape metrics and the land use change transition matrix. Furthermore, the driving forces were explored according to socioeconomic development and major natural environmental factors. The results indicate a significant decrease in the wetlands area in the past five decades, with an average annual decrease rate of 9004 ha/yr. Of the 625,268 ha of native wetlands in 1954, approximately 64% had been converted to other land use types by 2005, of which conversion to cropland accounts for the largest share (83%). The number of patches decreased from 1272 (1954) to 197 (1986) and subsequently increased to 326 (2005). The mean patch size changed from 480 ha (1954) to 1521 ha (1976), and then steadily decreased to 574 ha (2005). The largest patch index and the total core area index indicate wetlands shrinkage, with values decreasing from 31.73 to 3.45 and from 177,935 ha to 39,421 ha, respectively. Climatic changes occurred over the study period, providing a potentially favorable environment for agricultural development. At the same time, population, groundwater harvesting, and fertilizer application increased significantly, resulting in wetlands degradation. According to the results, the shrinkage and fragmentation of wetlands can be explained primarily by socioeconomic development, secondarily aided by changing climatic conditions. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Shijun; Jing, Zhongliang; Li, Jianxun
2005-01-01
A rotation-invariant feature of the target is obtained using the multi-direction feature extraction property of the steerable filter. Combining the morphological top-hat transform with a self-organizing feature map neural network, an adaptive topological region is selected. Shrinkage of the topological region is achieved using the erosion operation. The steerable-filter-based morphological self-organizing feature map neural network is applied to automatic target recognition of binary standard patterns and real-world infrared image sequences. Compared with the Hamming network and morphological shared-weight networks, the proposed method achieves a higher recognition rate, robust adaptability, quick training, and better generalization.
XID+: Next generation XID development
NASA Astrophysics Data System (ADS)
Hurley, Peter
2017-04-01
XID+ is a prior-based source extraction tool which carries out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. It uses a probabilistic Bayesian framework that provides a natural way to include prior information, and it uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates.
ERIC Educational Resources Information Center
Doskey, Steven Craig
2014-01-01
This research presents an innovative means of gauging Systems Engineering effectiveness through a Systems Engineering Relative Effectiveness Index (SE REI) model. The SE REI model uses a Bayesian Belief Network to map causal relationships in government acquisitions of Complex Information Systems (CIS), enabling practitioners to identify and…
Ohyama, Akio; Shirasawa, Kenta; Matsunaga, Hiroshi; Negoro, Satomi; Miyatake, Koji; Yamaguchi, Hirotaka; Nunome, Tsukasa; Iwata, Hiroyoshi; Fukuoka, Hiroyuki; Hayashi, Takeshi
2017-08-01
Using newly developed euchromatin-derived genomic SSR markers and a flexible Bayesian mapping method, 13 significant agricultural QTLs were identified in a segregating population derived from a four-way cross of tomato. So far, many QTL mapping studies in tomato have been performed for progeny obtained from crosses between two genetically distant parents, e.g., domesticated tomatoes and wild relatives. However, QTL information on quantitative traits related to yield (e.g., flower or fruit number, and total or average weight of fruits) in such intercross populations would be of limited use for breeding commercial tomato cultivars, because individuals in the populations have specific genetic backgrounds underlying the extremely different phenotypes of the parents, such as large fruit in domesticated tomatoes and small fruit in wild relatives, which may not reflect the genetic variation in tomato breeding populations. In this study, we constructed an F2 population derived from a cross between two commercial F1 cultivars of tomato to extract QTL information practical for tomato breeding. This cross corresponded to a four-way cross, because the four parental lines of the two F1 cultivars were considered to be the founders. We developed 2510 new expressed sequence tag (EST)-based (euchromatin-derived) genomic SSR markers and selected 262 markers from these new SSR markers and publicly available SSR markers to construct a linkage map. QTL analysis for ten agricultural traits of tomato was performed based on the phenotypes and marker genotypes of F2 plants using a flexible Bayesian method. As a result, 13 QTL regions were detected for six traits using the Bayesian method developed in this study.
Bayesian Estimation of the Spatially Varying Completeness Magnitude of Earthquake Catalogs
NASA Astrophysics Data System (ADS)
Mignan, A.; Werner, M.; Wiemer, S.; Chen, C.; Wu, Y.
2010-12-01
Assessing the completeness magnitude Mc of earthquake catalogs is an essential prerequisite for any seismicity analysis. We employ a simple model to compute Mc in space, based on the proximity to seismic stations in a network. We show that a relationship of the form Mc_pred(d) = a d^b + c, with d the distance to the 5th nearest seismic station, fits the observations well. We then propose a new Mc mapping approach, the Bayesian Magnitude of Completeness (BMC) method, based on a 2-step procedure: (1) a spatial resolution optimization to minimize spatial heterogeneities and uncertainties in Mc estimates and (2) a Bayesian approach that merges prior information about Mc based on the proximity to seismic stations with locally observed values weighted by their respective uncertainties. This new methodology eliminates most weaknesses associated with current Mc mapping procedures: the radius that defines which earthquakes to include in the local magnitude distribution is chosen according to an objective criterion, and there are no gaps in the spatial estimation of Mc. The method solely requires the coordinates of seismic stations. Here, we investigate the Taiwan Central Weather Bureau (CWB) earthquake catalog by computing an Mc map for the period 1994-2010.
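Fitting the proposed distance law is a one-liner with standard tools; the data points below are hypothetical:

import numpy as np
from scipy.optimize import curve_fit

def mc_pred(d, a, b, c):
    """Completeness magnitude vs distance d (km) to the 5th nearest
    station: Mc_pred(d) = a * d**b + c."""
    return a * d**b + c

d_km = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 150.0])   # illustrative
mc_obs = np.array([1.1, 1.3, 1.6, 2.0, 2.6, 3.4])
(a, b, c), _ = curve_fit(mc_pred, d_km, mc_obs, p0=(0.1, 1.0, 1.0))
print(a, b, c)   # the fitted law then serves as the BMC prior map-wide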
Andrea Havron; Chris Goldfinger; Sarah Henkel; Bruce G. Marcot; Chris Romsos; Lisa Gilbane
2017-01-01
Resource managers increasingly use habitat suitability map products to inform risk management and policy decisions. Modeling habitat suitability of data-poor species over large areas requires careful attention to assumptions and limitations. Resulting habitat suitability maps can harbor uncertainties from data collection and modeling processes; yet these limitations...
Maximum entropy perception-action space: a Bayesian model of eye movement selection
NASA Astrophysics Data System (ADS)
Colas, Francis; Bessière, Pierre; Girard, Benoît
2011-03-01
In this article, we investigate the issue of the selection of eye movements in a free-eye Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on the representation. We compare different decision models based on different features of the representation and we show that taking into account uncertainty helps predict the eye movements of subjects recorded in a psychophysics experiment. Finally, based on experimental data, we postulate that the complex logarithmic mapping has a functional relevance, as the density of objects in this space is more uniform than expected. This may indicate that the representation space and control strategies are such that the object density is of maximum entropy.
Empirical Bayesian Geographical Mapping of Occupational Accidents among Iranian Workers.
Vahabi, Nasim; Kazemnejad, Anoshirvan; Datta, Somnath
2017-05-01
Work-related accidents are believed to be a serious preventable cause of mortality and disability worldwide. This study aimed to provide Bayesian geographical maps of occupational injury rates among workers insured by the Iranian Social Security Organization. The participants included all insured workers in the Iranian Social Security Organization database in 2012. A Bayesian approach known as the Poisson-Gamma model was applied to estimate the relative risk of occupational accidents. Data analysis and mapping were performed using R 3.0.3, OpenBUGS 3.2.3 rev 1012, and ArcMap 9.3. The majority of all 21,484 investigated occupational injury victims were male (98.3%), including 16,443 (76.5%) single workers aged 20-29 years. The accidents were more frequent in basic metal, electric, and non-electric machining jobs. About 0.4% (96) of work-related accidents led to death, 2.2% (457) led to disability (partial and total), 4.6% (980) led to fixed compensation, and 92.8% (19,951) of the injured victims recovered completely. Geographical maps of the estimated relative risk of occupational accidents were also provided. The results showed that the highest estimates pertained to provinces mostly located along mountain chains, some of which are categorized as deprived provinces in Iran. The study revealed the need for further investigation of the role of economic and climatic factors in high-risk areas. The application of geographical mapping together with statistical approaches can provide more accurate tools for policy makers to make better decisions in order to prevent and reduce the risks and adverse outcomes of work-related accidents.
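The Poisson-Gamma model has a closed-form posterior, sketched here with illustrative counts rather than the Iranian data:

# Observed injuries O ~ Poisson(E * theta) with prior theta ~ Gamma(a, b)
# give the posterior theta | O ~ Gamma(a + O, b + E).
a, b = 2.0, 2.0          # prior relative risk centered at 1, moderate spread
O, E = 34, 21.5          # observed vs expected counts for one province
rr_smr = O / E                      # raw SMR
rr_bayes = (a + O) / (b + E)        # posterior mean, shrunk toward 1
print(round(rr_smr, 2), round(rr_bayes, 2))   # 1.58 vs 1.53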
Multi-dimensional SAR tomography for monitoring the deformation of newly built concrete buildings
NASA Astrophysics Data System (ADS)
Ma, Peifeng; Lin, Hui; Lan, Hengxing; Chen, Fulong
2015-08-01
Deformation often occurs in buildings at early ages, and constant inspection of deformation is of significant importance for discovering possible cracking and avoiding wall failure. This paper exploits the multi-dimensional SAR tomography technique to monitor the deformation performance of two newly built buildings (B1 and B2), with a special focus on the effects of concrete creep and shrinkage. To separate the nonlinear thermal expansion from the total deformations, the extended 4-D SAR technique is exploited. The thermal map estimated from 44 TerraSAR-X images demonstrates that the derived thermal amplitude is highly related to building height, due to the upward accumulative effect of thermal expansion. The linear deformation velocity map reveals that B1 was subject to settlement during the construction period; in addition, the creep and shrinkage of B1 led to wall shortening, a height-dependent movement in the downward direction, and the asymmetrical creep of B2 triggered wall deflection, a height-dependent movement in the deflection direction. It is also validated that the extended 4-D SAR can rectify the bias in the wall shortening and wall deflection estimated by 4-D SAR.
Logistic Stick-Breaking Process
Ren, Lu; Du, Lan; Carin, Lawrence; Dunson, David B.
2013-01-01
A logistic stick-breaking process (LSBP) is proposed for non-parametric clustering of general spatially- or temporally-dependent data, imposing the belief that proximate data are more likely to be clustered together. The sticks in the LSBP are realized via multiple logistic regression functions, with shrinkage priors employed to favor contiguous and spatially localized segments. The LSBP is also extended for the simultaneous processing of multiple data sets, yielding a hierarchical logistic stick-breaking process (H-LSBP). The model parameters (atoms) within the H-LSBP are shared across the multiple learning tasks. Efficient variational Bayesian inference is derived, and comparisons are made to related techniques in the literature. Experimental analysis is performed for audio waveforms and images, and it is demonstrated that for segmentation applications the LSBP yields generally homogeneous segments with sharp boundaries. PMID:25258593
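The deterministic part of the construction, turning logistic-regression scores into stick-breaking weights, can be sketched directly (the coefficients below are illustrative):

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def lsbp_weights(X, W):
    """Weights pi_k = sigma(X w_k) * prod_{j<k} (1 - sigma(X w_j)) for
    K sticks (rows of W), plus a final segment absorbing the remainder."""
    probs = sigmoid(X @ W.T)                 # n x K break probabilities
    remain = np.cumprod(1.0 - probs, axis=1)
    return np.hstack([probs[:, :1],
                      probs[:, 1:] * remain[:, :-1],
                      remain[:, -1:]])       # rows sum to 1

X = np.column_stack([np.linspace(-3, 3, 7), np.ones(7)])  # [location, bias]
W = np.array([[-2.0, -1.0],    # stick 1 breaks mostly on the left
              [ 2.0,  0.0]])   # stick 2 takes the right of what remains
print(lsbp_weights(X, W).round(2))   # spatially contiguous segment weights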
Ferragina, A.; de los Campos, G.; Vazquez, A. I.; Cecchinato, A.; Bittante, G.
2017-01-01
The aim of this study was to assess the performance of Bayesian models commonly used for genomic selection to predict “difficult-to-predict” dairy traits, such as milk fatty acid (FA) composition expressed as a percentage of total fatty acids, and technological properties, such as fresh cheese yield and protein recovery, using Fourier-transform infrared (FTIR) spectral data. Our main hypothesis was that Bayesian models that can estimate shrinkage and perform variable selection may improve our ability to predict FA traits and technological traits above and beyond what can be achieved using the current calibration models (e.g., partial least squares, PLS). To this end, we assessed a series of Bayesian methods and compared their prediction performance with that of PLS. The comparison between models was done using the same sets of data (i.e., same samples, same variability, same spectral treatment) for each trait. Data consisted of 1,264 individual milk samples collected from Brown Swiss cows for which gas chromatographic FA composition, milk coagulation properties, and cheese-yield traits were available. For each sample, 2 spectra in the infrared region from 5,011 to 925 cm−1 were available and averaged before data analysis. Three Bayesian models (Bayesian ridge regression (Bayes RR), Bayes A, and Bayes B) and 2 reference models (PLS and modified PLS (MPLS) procedures) were used to calibrate equations for each of the traits. The Bayesian models used were implemented in the R package BGLR (http://cran.r-project.org/web/packages/BGLR/index.html), whereas the PLS and MPLS were those implemented in the WinISI II software (Infrasoft International LLC, State College, PA). Prediction accuracy was estimated for each trait and model using 25 replicates of a training-testing validation procedure. Compared with PLS, which is currently the most widely used calibration method, MPLS and the 3 Bayesian methods showed significantly greater prediction accuracy. Accuracy increased in moving from calibration to external validation methods, and in moving from PLS and MPLS to Bayesian methods, particularly Bayes A and Bayes B. The maximum R2 values of validation were obtained with Bayes B and Bayes A. Among the FA, C10:0 (% of each FA on a total FA basis) had the highest R2 (0.75, achieved with Bayes A and Bayes B), and among the technological traits, fresh cheese yield had the highest R2 (0.82, achieved with Bayes B). These two methods proved to be useful instruments for shrinking coefficients, selecting the most informative wavelengths, and inferring the structure and function of the analyzed traits. We conclude that Bayesian models are powerful tools for deriving calibration equations, and, importantly, these equations can be easily developed using existing open-source software. As part of our study, we provide scripts based on the open-source R software BGLR, which can be used to train customized prediction equations for other traits or populations. PMID:26387015
Functional Multi-Locus QTL Mapping of Temporal Trends in Scots Pine Wood Traits
Li, Zitong; Hallingbäck, Henrik R.; Abrahamsson, Sara; Fries, Anders; Gull, Bengt Andersson; Sillanpää, Mikko J.; García-Gil, M. Rosario
2014-01-01
Quantitative trait loci (QTL) mapping of wood properties in conifer species has focused on single time point measurements or on trait means based on heterogeneous wood samples (e.g., increment cores), thus ignoring systematic within-tree trends. In this study, functional QTL mapping was performed for a set of important wood properties in increment cores from a 17-yr-old Scots pine (Pinus sylvestris L.) full-sib family with the aim of detecting wood trait QTL for general intercepts (means) and for linear slopes by increasing cambial age. Two multi-locus functional QTL analysis approaches were proposed and their performances were compared on trait datasets comprising 2 to 9 time points, 91 to 455 individual tree measurements and genotype datasets of amplified length polymorphisms (AFLP), and single nucleotide polymorphism (SNP) markers. The first method was a multilevel LASSO analysis whereby trend parameter estimation and QTL mapping were conducted consecutively; the second method was our Bayesian linear mixed model whereby trends and underlying genetic effects were estimated simultaneously. We also compared several different hypothesis testing methods under either the LASSO or the Bayesian framework to perform QTL inference. In total, five and four significant QTL were observed for the intercepts and slopes, respectively, across wood traits such as earlywood percentage, wood density, radial fiberwidth, and spiral grain angle. Four of these QTL were represented by candidate gene SNPs, thus providing promising targets for future research in QTL mapping and molecular function. Bayesian and LASSO methods both detected similar sets of QTL given datasets that comprised large numbers of individuals. PMID:25305041
Chad Babcock; Hans Andersen; Andrew O. Finley; Bruce D. Cook
2015-01-01
Models leveraging repeat LiDAR and field collection campaigns may be one possible mechanism to monitor carbon flux in remote forested regions. Here, we look to the spatio-temporally data-rich Kenai Peninsula in Alaska, USA to examine the potential for Bayesian spatio-temporal mapping of terrestrial forest carbon storage and uncertainty.
Khana, Diba; Rossen, Lauren M; Hedegaard, Holly; Warner, Margaret
2018-01-01
Hierarchical Bayes models have been used in disease mapping to examine small-scale geographic variation. State-level geographic variation for less common causes of mortality has been reported; however, county-level variation is rarely examined. Due to concerns about statistical reliability and confidentiality, county-level mortality rates based on fewer than 20 deaths are suppressed under the statistical reliability criteria of the Division of Vital Statistics, National Center for Health Statistics (NCHS), precluding an examination of spatio-temporal variation in less common causes of mortality, such as suicide rates (SRs), at the county level using direct estimates. Existing Bayesian spatio-temporal modeling strategies can be applied via Integrated Nested Laplace Approximation (INLA) in R to a large number of rare mortality outcomes to enable examination of spatio-temporal variation on smaller geographic scales such as counties. This method allows examination of spatio-temporal variation across the entire U.S., even where the data are sparse. We used mortality data from 2005-2015 to explore spatio-temporal variation in SRs, as one particular application of the Bayesian spatio-temporal modeling strategy in R-INLA, to predict year- and county-specific SRs. Specifically, hierarchical Bayesian spatio-temporal models were implemented with spatially structured and unstructured random effects, correlated time effects, time-varying confounders, and space-time interaction terms in the software R-INLA, borrowing strength across both counties and years to produce smoothed county-level SRs. Model-based estimates of SRs were mapped to explore geographic variation.
A Bayesian approach to traffic light detection and mapping
NASA Astrophysics Data System (ADS)
Hosseinyalamdary, Siavash; Yilmaz, Alper
2017-03-01
Automatic traffic light detection and mapping is an open research problem. Traffic lights vary in color, shape, geolocation, activation pattern, and installation, which complicates their automated detection. In addition, the image of a traffic light may be noisy, overexposed, underexposed, or occluded. In order to address this problem, we propose a Bayesian inference framework to detect and map traffic lights. In addition to the spatio-temporal consistency constraint, traffic light characteristics such as color, shape, and height are shown to further improve the accuracy of the proposed approach. The proposed approach has been evaluated on two benchmark datasets and has been shown to outperform earlier studies. The results show that the precision and recall rates for the KITTI benchmark are 95.78% and 92.95%, respectively, and the precision and recall rates for the LARA benchmark are 98.66% and 94.65%, respectively.
Advanced obstacle avoidance for a laser based wheelchair using optimised Bayesian neural networks.
Trieu, Hoang T; Nguyen, Hung T; Willey, Keith
2008-01-01
In this paper we present an advanced method of obstacle avoidance for a laser-based intelligent wheelchair using optimized Bayesian neural networks. Three neural networks are designed for three separate sub-tasks: passing through a doorway, corridor and wall following, and general obstacle avoidance. The accurate usable accessible space is determined by including the actual wheelchair dimensions in a real-time map used as input to each network. Data acquisition is performed separately to collect the patterns required for the specified sub-tasks. A Bayesian framework is used to determine the optimal neural network structure in each case. These networks are then trained under the supervision of the Bayesian rule. Experimental results showed that, compared to the VFH algorithm, our neural networks navigated a smoother path following a near-optimum trajectory.
NASA Astrophysics Data System (ADS)
Tien Bui, Dieu; Hoang, Nhat-Duc
2017-09-01
In this study, a probabilistic model, named BayGmmKda, is proposed for flood susceptibility assessment in a study area in central Vietnam. The new model is a Bayesian framework constructed from a combination of a Gaussian mixture model (GMM), radial-basis-function Fisher discriminant analysis (RBFDA), and a geographic information system (GIS) database. In the Bayesian framework, the GMM is used to model the data distribution of flood-influencing factors in the GIS database, whereas RBFDA is utilized to construct a latent variable that aims at enhancing model performance. As a result, the posterior probabilistic output of the BayGmmKda model is used as the flood susceptibility index. Experimental results showed that the proposed hybrid framework is superior to benchmark models, including the adaptive neuro-fuzzy inference system and the support vector machine. To facilitate model implementation, a software program for BayGmmKda has been developed in MATLAB. The BayGmmKda program can accurately establish a flood susceptibility map for the study region. Accordingly, local authorities can overlay this susceptibility map onto various land-use maps for the purpose of land-use planning or management.
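A stripped-down sketch of the generative core, class-conditional Gaussian mixtures combined through Bayes' rule; the RBFDA latent variable and the real GIS factor layers of BayGmmKda are omitted, and the two predictor features here are hypothetical:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
flood = rng.normal([0.2, 0.8], 0.15, size=(200, 2))   # e.g. [elevation, wetness]
dry = rng.normal([0.7, 0.3], 0.20, size=(300, 2))

g1 = GaussianMixture(n_components=2, random_state=0).fit(flood)
g0 = GaussianMixture(n_components=2, random_state=0).fit(dry)
prior1 = len(flood) / (len(flood) + len(dry))

def susceptibility(x):
    """Posterior P(flood | x), used as a susceptibility index."""
    l1 = np.exp(g1.score_samples(x)) * prior1
    l0 = np.exp(g0.score_samples(x)) * (1 - prior1)
    return l1 / (l1 + l0)

print(susceptibility(np.array([[0.25, 0.75], [0.70, 0.30]])).round(3))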
Disease Mapping for Stomach Cancer in Libya Based on Besag–York–Mollié (BYM) Model
Alhdiri, Maryam Ahmed Salem; Samat, Nor Azah; Mohamed, Zulkifley
2017-06-25
Globally, cancer is an ever-increasing health problem and a most common cause of death. In Libya, it is an important health concern, especially in the setting of an aging population and limited healthcare facilities. Therefore, the goal of this research is to map the country's cancer incidence rates using the Bayesian method and to identify high-risk regions (for the first time in a decade). In the field of disease mapping, very little has been done to address the issue of analyzing sparse cancer data in Libya. The standardized morbidity ratio (SMR), the traditional approach to measuring the relative risk of disease, is the ratio of the observed to the expected number of cases in a region; it carries the greatest uncertainty when the disease is rare or the geographical region is small. Therefore, to overcome some of the SMR's problems, we used statistical smoothing via Bayesian models to estimate the relative risk of stomach cancer incidence in Libya in 2007 based on the BYM model. This research begins with a short overview of the SMR and the Bayesian BYM model, which we applied to stomach cancer incidence in Libya. We compared all of the results using maps and tables. We found that the BYM model is potentially beneficial, because it gives better relative risk estimates than the SMR method. Moreover, it can overcome the problem of the classical method when there are no observed stomach cancer cases in a region.
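In standard notation (assumed here, not quoted from the paper), the BYM specification smooths the raw SMR as follows:

% Raw estimate and its Bayesian smoothing for area i:
\begin{align*}
  \mathrm{SMR}_i &= O_i / E_i, \\
  O_i \mid \theta_i &\sim \mathrm{Poisson}(E_i \, \theta_i), \\
  \log \theta_i &= \alpha + u_i + v_i,
\end{align*}
% with u_i a spatially structured (CAR) effect borrowing strength from
% neighbouring areas and v_i an unstructured Gaussian effect. Because
% theta_i has a proper prior, an area with O_i = 0 still receives a
% positive risk estimate, the advantage over the raw SMR noted above.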
Part of the ecological risk assessment process involves examining the potential for environmental stressors and ecological receptors to co-occur across a landscape. In this study, we introduce a Bayesian joint modeling framework for use in evaluating and mapping the co-occurrence...
Houngbedji, Clarisse A; Chammartin, Frédérique; Yapi, Richard B; Hürlimann, Eveline; N'Dri, Prisca B; Silué, Kigbafori D; Soro, Gotianwa; Koudou, Benjamin G; Assi, Serge-Brice; N'Goran, Eliézer K; Fantodji, Agathe; Utzinger, Jürg; Vounatsou, Penelope; Raso, Giovanna
2016-09-07
In Côte d'Ivoire, malaria remains a major public health issue, and thus a priority to be tackled. The aim of this study was to identify spatially explicit indicators of Plasmodium falciparum infection among school-aged children and to undertake a model-based spatial prediction of P. falciparum infection risk using environmental predictors. A cross-sectional survey was conducted, including parasitological examinations and interviews with more than 5,000 children from 93 schools across Côte d'Ivoire. A finger-prick blood sample was obtained from each child to determine Plasmodium species-specific infection and parasitaemia using Giemsa-stained thick and thin blood films. Household socioeconomic status was assessed through asset ownership and household characteristics. Children were interviewed for preventive measures against malaria. Environmental data were gathered from satellite images and digitized maps. A Bayesian geostatistical stochastic search variable selection procedure was employed to identify factors related to P. falciparum infection risk. Bayesian geostatistical logistic regression models were used to map the spatial distribution of P. falciparum infection and to predict the infection prevalence at non-sampled locations via Bayesian kriging. Complete data sets were available from 5,322 children aged 5-16 years across Côte d'Ivoire. P. falciparum was the predominant species (94.5 %). The Bayesian geostatistical variable selection procedure identified land cover and socioeconomic status as important predictors for infection risk with P. falciparum. Model-based prediction identified high P. falciparum infection risk in the north, central-east, south-east, west and south-west of Côte d'Ivoire. Low-risk areas were found in the south-eastern area close to Abidjan and the south-central and west-central part of the country. The P. falciparum infection risk and related uncertainty estimates for school-aged children in Côte d'Ivoire represent the most up-to-date malaria risk maps. These tools can be used for spatial targeting of malaria control interventions.
Applications of Bayesian spectrum representation in acoustics
NASA Astrophysics Data System (ADS)
Botts, Jonathan M.
This dissertation utilizes a Bayesian inference framework to enhance the solution of inverse problems where the forward model maps to acoustic spectra. A Bayesian solution to filter design inverts a acoustic spectra to pole-zero locations of a discrete-time filter model. Spatial sound field analysis with a spherical microphone array is a data analysis problem that requires inversion of spatio-temporal spectra to directions of arrival. As with many inverse problems, a probabilistic analysis results in richer solutions than can be achieved with ad-hoc methods. In the filter design problem, the Bayesian inversion results in globally optimal coefficient estimates as well as an estimate the most concise filter capable of representing the given spectrum, within a single framework. This approach is demonstrated on synthetic spectra, head-related transfer function spectra, and measured acoustic reflection spectra. The Bayesian model-based analysis of spatial room impulse responses is presented as an analogous problem with equally rich solution. The model selection mechanism provides an estimate of the number of arrivals, which is necessary to properly infer the directions of simultaneous arrivals. Although, spectrum inversion problems are fairly ubiquitous, the scope of this dissertation has been limited to these two and derivative problems. The Bayesian approach to filter design is demonstrated on an artificial spectrum to illustrate the model comparison mechanism and then on measured head-related transfer functions to show the potential range of application. Coupled with sampling methods, the Bayesian approach is shown to outperform least-squares filter design methods commonly used in commercial software, confirming the need for a global search of the parameter space. The resulting designs are shown to be comparable to those that result from global optimization methods, but the Bayesian approach has the added advantage of a filter length estimate within the same unified framework. The application to reflection data is useful for representing frequency-dependent impedance boundaries in finite difference acoustic simulations. Furthermore, since the filter transfer function is a parametric model, it can be modified to incorporate arbitrary frequency weighting and account for the band-limited nature of measured reflection spectra. Finally, the model is modified to compensate for dispersive error in the finite difference simulation, from the filter design process. Stemming from the filter boundary problem, the implementation of pressure sources in finite difference simulation is addressed in order to assure that schemes properly converge. A class of parameterized source functions is proposed and shown to offer straightforward control of residual error in the simulation. Guided by the notion that the solution to be approximated affects the approximation error, sources are designed which reduce residual dispersive error to the size of round-off errors. The early part of a room impulse response can be characterized by a series of isolated plane waves. Measured with an array of microphones, plane waves map to a directional response of the array or spatial intensity map. Probabilistic inversion of this response results in estimates of the number and directions of image source arrivals. The model-based inversion is shown to avoid ambiguities associated with peak-finding or inspection of the spatial intensity map. 
For this problem, determining the number of arrivals in a given frame is critical for properly inferring the state of the sound field. This analysis effectively compresses the spatial room impulse response, which is useful for analysis or encoding of the spatial sound field. Parametric, model-based formulations of these problems enhance the solution in all cases, and a Bayesian interpretation provides a principled approach to model comparison and parameter estimation.
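The filter-order selection mechanism described above can be illustrated with a small stand-in experiment. The R sketch below is illustrative only: it scores pole-zero (ARMA) filters of increasing order by BIC, a large-sample approximation to the negative log model evidence, and picks the most concise filter supported by a synthetic response; the dissertation's actual method samples the full posterior over pole-zero coefficients.

    # Sketch: pole-zero (ARMA) order selection via BIC, a crude stand-in for
    # the full Bayesian model comparison described in the abstract.
    set.seed(1)
    h <- as.numeric(arima.sim(list(ar = c(0.6, -0.3), ma = 0.4), n = 512))  # synthetic "measured" response
    orders <- expand.grid(p = 0:3, q = 0:3)              # candidate pole/zero counts
    bic <- apply(orders, 1, function(o) {
      fit <- arima(h, order = c(o["p"], 0, o["q"]))
      -2 * fit$loglik + (o["p"] + o["q"] + 2) * log(length(h))  # +2: intercept and noise variance
    })
    orders[which.min(bic), ]                             # most concise adequate filter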
Micro and Macro Segregation in Alloys Solidifying with Equiaxed Morphology
NASA Technical Reports Server (NTRS)
Stefanescu, Doru M.; Curreri, Peter A.; Leon-Torres, Jose; Sen, Subhayu
1996-01-01
To understand macrosegregation formation in Al-Cu alloys, experiments were run under terrestrial gravity (1g) and under low gravity during parabolic flights (10(exp -2) g). Alloys of two different compositions (2% and 5% Cu) were solidified at two different cooling rates. Systematic microscopic and SEM observations produced microstructural and segregation maps for all samples. These maps may be used as benchmark experiments for validation of microstructure evolution and segregation models. As expected, the macrosegregation maps are very complex. When segregation was measured along the central axis of the sample, the highest macrosegregation for samples solidified at 1g was obtained at the lowest cooling rate. This behavior is attributed to the longer time available for natural convection and shrinkage flow to affect solute redistribution. In samples solidified under low g, the highest macrosegregation was obtained at the highest cooling rate. In general, low-gravity solidification resulted in less segregation. To explain the experimental findings, an analytical (Flemings-Nereo) and a numerical model were used. For the numerical model, the continuum formulation was employed to describe the macroscopic transports of mass, energy, and momentum, associated with the microscopic transport phenomena, for a two-phase system. The proposed model considers that liquid flow is driven by thermal and solutal buoyancy and by solidification shrinkage. The Flemings-Nereo model explains macrosegregation well in the initial stages of low-gravity solidification. The numerical model can describe the complex macrosegregation pattern and the differences between low- and high-gravity solidification.
Li, Ben; Li, Yunxiao; Qin, Zhaohui S
2017-06-01
Modern high-throughput biotechnologies such as microarray and next generation sequencing produce a massive amount of information for each sample assayed. However, in a typical high-throughput experiment, only a limited amount of data is observed for each individual feature, hence the classical 'large p, small n' problem. The Bayesian hierarchical model, capable of borrowing strength across features within the same dataset, has been recognized as an effective tool in analyzing such data. However, the shrinkage effect, the most prominent feature of hierarchical models, can lead to undesirable over-correction for some features. In this work, we discuss possible causes of the over-correction problem and propose several alternative solutions. Our strategy is rooted in the fact that in the Big Data era, large amounts of historical data are available and should be taken advantage of. Our strategy presents a new framework to enhance the Bayesian hierarchical model. Through simulation and real data analysis, we demonstrate the superior performance of the proposed strategy. Our new strategy also enables borrowing information across different platforms, which could be extremely useful with the emergence of new technologies and the accumulation of data from different platforms in the Big Data era. Our method has been implemented in the R package "adaptiveHM", which is freely available from https://github.com/benliemory/adaptiveHM.
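As a concrete picture of the over-correction problem and the proposed remedy, consider the standard normal-normal shrinkage estimator. In the R sketch below (a minimal illustration, not the adaptiveHM algorithm itself), the prior mean and variance are estimated from assumed historical data rather than from the current small dataset alone, so that genuinely outlying features are shrunk less aggressively when history indicates large between-feature spread.

    # Sketch: feature-level shrinkage where the prior is informed by historical
    # data, so that well-supported outlying features are not over-corrected.
    set.seed(1)
    theta_hist <- rnorm(5000, mean = 0, sd = 2)          # historical effect estimates (assumed available)
    mu0  <- mean(theta_hist); tau2 <- var(theta_hist)    # empirical prior from history
    y    <- rnorm(100, mean = 1, sd = 1)                 # current, noisy per-feature estimates
    s2   <- 1                                            # known sampling variance
    w    <- tau2 / (tau2 + s2)                           # shrinkage weight: larger tau2 => less shrinkage
    theta_post <- w * y + (1 - w) * mu0                  # posterior means: partial pooling toward mu0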
Wang, Chao; Gao, Qiong; Wang, Xian; Yu, Mei
2015-01-01
Land use land cover (LULC) changes frequently in ecotones due to large climate and soil gradients and complex landscape composition and configuration. Accurate mapping of LULC changes in ecotones is of great importance for the assessment of ecosystem functions/services and for policy-decision support. Decadal or sub-decadal mapping of LULC provides scenarios for modeling biogeochemical processes and their feedbacks to climate, and for evaluating the effectiveness of land-use policies, e.g., forest conversion. However, it remains a great challenge to produce reliable LULC maps at moderate resolution and to evaluate their uncertainties over large areas with complex landscapes. In this study we developed a robust LULC classification system using multiple classifiers based on MODIS (Moderate Resolution Imaging Spectroradiometer) data and posterior data fusion. Not only does the system create LULC maps with high statistical accuracy, but it also provides the pixel-level uncertainties that are essential for subsequent analyses and applications. We applied the classification system to the Agro-pasture transition band in northern China (APTBNC) to detect decadal changes in LULC during 2003-2013 and evaluated the effectiveness of the implementation of major Key Forestry Programs (KFPs). In our study, the random forest (RF), support vector machine (SVM), and weighted k-nearest neighbors (WKNN) classifiers outperformed the artificial neural networks (ANN) and naive Bayes (NB) in terms of high classification accuracy and low sensitivity to training sample size. The Bayesian-average data fusion based on the results of RF, SVM, and WKNN achieved a Kappa statistic of 87.5%, higher than any individual classifier or the majority-vote integration. The pixel-level uncertainty map agreed with the traditional accuracy assessment but additionally conveys the spatial variation of uncertainty: it pinpoints that the southwestern area of the APTBNC has higher uncertainty than other parts of the region, and that open shrubland is likely to be misclassified as bare ground in some locations. Forests, closed shrublands, and grasslands in the APTBNC expanded by 23%, 50%, and 9%, respectively, during 2003-2013. The expansion of these land cover types is compensated by shrinkage in croplands (20%), bare ground (15%), and open shrublands (30%). The significant decline in agricultural land is primarily attributed to the KFPs implemented at the end of the last century and to nationwide urbanization in the recent decade. The increased coverage of grass and woody plants should largely reduce soil erosion, aid climate change mitigation, and enhance carbon sequestration in this region.
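A minimal sketch of the posterior data-fusion step for one pixel is given below in R; the class labels, probabilities, and accuracy-based weights are invented for illustration, and the paper's exact Bayesian-average formula may differ in detail.

    # Sketch: per-pixel Bayesian-average fusion of class posteriors from the
    # three retained classifiers, plus a simple pixel-level uncertainty.
    p_rf  <- c(forest = 0.6, grass = 0.3, bare = 0.1)   # random forest posterior (invented)
    p_svm <- c(forest = 0.5, grass = 0.4, bare = 0.1)   # SVM posterior (invented)
    p_knn <- c(forest = 0.7, grass = 0.2, bare = 0.1)   # WKNN posterior (invented)
    w <- c(0.88, 0.85, 0.86)                            # classifier weights, e.g. validation accuracy (assumed)
    p_fused <- (w[1] * p_rf + w[2] * p_svm + w[3] * p_knn) / sum(w)
    names(which.max(p_fused))                           # fused class label for this pixel
    1 - max(p_fused)                                    # uncertainty: probability mass off the chosen class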
Zheng, Qi; Grice, Elizabeth A
2016-10-01
Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or "best" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit "AlignerBoost", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost's algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost.
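The core idea, estimating a posterior over candidate loci for an ambiguously mapped read and reporting it on the Phred scale, can be sketched in a few lines of R. The per-hit log-likelihoods below are invented; AlignerBoost's actual Bayesian scoring of alignments is more elaborate (and SNP-aware).

    # Sketch: Bayesian mapping quality for one multi-mapping read. Each
    # candidate alignment gets a likelihood from its alignment score; the
    # posterior for the best hit is converted to a Phred-scaled quality.
    scores <- c(hit1 = -3.2, hit2 = -4.0, hit3 = -9.1)   # per-hit log-likelihoods (illustrative)
    post   <- exp(scores - max(scores))
    post   <- post / sum(post)                           # posterior over candidate loci (uniform prior)
    mapq   <- -10 * log10(1 - post[which.max(post)])     # Phred-scaled confidence in the best hit
    mapq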
System Analysis by Mapping a Fault-tree into a Bayesian-network
NASA Astrophysics Data System (ADS)
Sheng, B.; Deng, C.; Wang, Y. H.; Tang, L. H.
2018-05-01
In view of the limitations of fault tree analysis in reliability assessment, the Bayesian Network (BN) has been studied as an alternative technology. After a brief introduction to the method for mapping a Fault Tree (FT) into an equivalent BN, equations used to calculate the structure importance degree, the probability importance degree, and the critical importance degree are presented, and their correctness is proved mathematically. Using an aircraft landing gear FT as an example, an equivalent BN is developed and analysed. The results show that richer and more accurate information is obtained through the BN method than through the FT, which demonstrates that the BN is a superior technique for both reliability assessment and fault diagnosis.
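To make the FT-to-BN mapping concrete, the R sketch below (with invented basic-event probabilities) encodes a small fault tree, top = OR(A, AND(B, C)), as a deterministic conditional probability table and computes the top-event probability by enumeration, exactly as an equivalent BN would.

    # Sketch: mapping a two-gate fault tree into an equivalent BN and
    # computing P(top event) by enumerating the joint distribution.
    p <- c(A = 0.01, B = 0.05, C = 0.02)                 # basic-event priors (illustrative)
    states <- expand.grid(A = 0:1, B = 0:1, C = 0:1)     # all basic-event configurations
    top <- with(states, pmax(A, B * C))                  # deterministic CPT: OR over A and AND(B, C)
    prior <- apply(states, 1, function(s) prod(ifelse(s == 1, p, 1 - p)))
    sum(prior * top)                                     # P(top event)
    # Importance measures follow by conditioning, e.g. P(top | A = 1) vs P(top | A = 0).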
Kinetics of corneal thermal shrinkage
NASA Astrophysics Data System (ADS)
Borja, David; Manns, Fabrice; Lee, William E.; Parel, Jean-Marie
2004-07-01
Purpose: The purpose of this study was to determine the effects of temperature and heating duration on the kinetics of thermal shrinkage in corneal strips using a custom-made shrinkage device. Methods: Thermal shrinkage was induced and measured in corneal strips under a constant load while bathed in a 25% dextran irrigation solution. A study was performed on 57 human cadaver eyes donated by the Florida Lions Eye Bank to determine the effect of temperature on the amount and rate of thermal shrinkage. Further experiments were performed on 20 human cadaver eyes to determine the effects of heating duration on permanent shrinkage. Data analysis was performed to determine the effects of temperature, heating duration, and age on the amount and kinetics of shrinkage. Results: Shrinkage consisted of two phases: a shrinkage phase during heating and a regression phase after heating. Permanent shrinkage increased with temperature and duration. The shrinkage and regression time constants followed an Arrhenius-type temperature dependence. The shrinkage time constants were calculated to be 67, 84, 121, 560, and 1112 s at 80, 75, 70, 65, and 60°C, respectively. At 65°C the permanent shrinkage time constant was calculated to be 945 s. Conclusion: These results show that shrinkage treatments need to raise the temperature of the tissue above 75°C for several seconds in order to prevent regression of the shrinkage effect immediately after treatment and to induce the maximum amount of permanent, irreversible shrinkage.
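The reported time constants can be checked against the stated Arrhenius-type dependence with a one-line regression. The R sketch below fits log(tau) against 1/T, assuming a single thermal barrier, with the slope giving an effective activation energy; this is an illustration using the abstract's numbers, not the authors' own analysis code.

    # Sketch: Arrhenius fit to the reported shrinkage time constants,
    # tau(T) = A * exp(Ea / (R * T)), linearized as log(tau) ~ 1/T.
    tau <- c(1112, 560, 121, 84, 67)                     # seconds, from the abstract
    TK  <- c(60, 65, 70, 75, 80) + 273.15                # temperatures in kelvin
    fit <- lm(log(tau) ~ I(1 / TK))
    Ea  <- coef(fit)[2] * 8.314                          # slope = Ea/R, so Ea in J/mol
    Ea / 1000                                            # effective activation energy, kJ/mol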
Bayesian depth estimation from monocular natural images.
Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C
2017-05-01
Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.
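The Bayesian predictor amounts to posterior-weighted averaging over the depth dictionary. The R sketch below illustrates this with invented dictionary entries and unit-variance Gaussian likelihoods; the paper's MGM model uses full multivariate Gaussian mixtures learned from NSS features.

    # Sketch: Bayesian depth prediction from a dictionary of canonical depth
    # patterns. Each pattern k has a Gaussian likelihood over local image
    # features; the predicted depth patch is the posterior-weighted average.
    set.seed(1)
    K <- 3; d <- 4
    mu    <- matrix(rnorm(K * d), K, d)                  # per-pattern feature means (learned offline)
    depth <- matrix(runif(K * 16), K, 16)                # canonical 4x4 depth patterns, one per row
    f     <- rnorm(d)                                    # NSS features of an observed image patch
    loglik <- sapply(1:K, function(k) sum(dnorm(f, mu[k, ], 1, log = TRUE)))
    w <- exp(loglik - max(loglik)); w <- w / sum(w)      # posterior responsibilities (uniform prior)
    pred <- colSums(w * depth)                           # posterior-mean depth estimate for the patch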
Variational Bayesian Learning for Wavelet Independent Component Analysis
NASA Astrophysics Data System (ADS)
Roussos, E.; Roberts, S.; Daubechies, I.
2005-11-01
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a `blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.
Methods for Measuring the Influence of Concept Mapping on Student Information Literacy.
ERIC Educational Resources Information Center
Gordon, Carol A.
2002-01-01
Discusses research traditions in education and in information retrieval and explores the theory of expected information which uses formulas derived from the Fano measure and Bayesian statistics. Demonstrates its application in a study on the effects of concept mapping on the search behavior of tenth-grade biology students. (Author/LRW)
NASA Astrophysics Data System (ADS)
D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco P.; Pasquariello, Guido
2018-03-01
High-resolution, remotely sensed images of the Earth's surface have proven helpful in producing detailed flood maps, thanks to their synoptic overview of the flooded area and frequent revisits. However, flood scenarios can be complex situations, requiring the integration of different data in order to provide accurate and robust flood information. Several processing approaches have recently been proposed to efficiently combine and integrate heterogeneous information sources. In this paper, we introduce DAFNE, a Matlab®-based, open source toolbox conceived to produce flood maps from remotely sensed and other ancillary information through a data fusion approach. DAFNE is based on Bayesian networks and is composed of several independent modules, each one performing a different task. Multi-temporal and multi-sensor data can be easily handled, with the possibility of following the evolution of an event through multi-temporal output flood maps. Each DAFNE module can be easily modified or upgraded to meet different user needs. The DAFNE suite is presented together with an example of its application.
Ferragina, A; de los Campos, G; Vazquez, A I; Cecchinato, A; Bittante, G
2015-11-01
The aim of this study was to assess the performance of Bayesian models commonly used for genomic selection to predict "difficult-to-predict" dairy traits, such as milk fatty acid (FA) content expressed as a percentage of total fatty acids, and technological properties, such as fresh cheese yield and protein recovery, using Fourier-transform infrared (FTIR) spectral data. Our main hypothesis was that Bayesian models that can perform shrinkage and variable selection may improve our ability to predict FA traits and technological traits above and beyond what can be achieved using the current calibration models (e.g., partial least squares, PLS). To this end, we assessed a series of Bayesian methods and compared their prediction performance with that of PLS. The comparison between models was done using the same sets of data (i.e., same samples, same variability, same spectral treatment) for each trait. Data consisted of 1,264 individual milk samples collected from Brown Swiss cows for which gas chromatographic FA composition, milk coagulation properties, and cheese-yield traits were available. For each sample, 2 spectra in the infrared region from 5,011 to 925 cm⁻¹ were available and were averaged before data analysis. Three Bayesian models: Bayesian ridge regression (Bayes RR), Bayes A, and Bayes B, and 2 reference models: PLS and modified PLS (MPLS), were used to calibrate equations for each of the traits. The Bayesian models were implemented in the R package BGLR (http://cran.r-project.org/web/packages/BGLR/index.html), whereas PLS and MPLS were those implemented in the WinISI II software (Infrasoft International LLC, State College, PA). Prediction accuracy was estimated for each trait and model using 25 replicates of a training-testing validation procedure. Compared with PLS, which is currently the most widely used calibration method, MPLS and the 3 Bayesian methods showed significantly greater prediction accuracy. Accuracy increased in moving from calibration to external validation methods, and in moving from PLS and MPLS to Bayesian methods, particularly Bayes A and Bayes B. The maximum R² values of validation were obtained with Bayes B and Bayes A. Among the FA, C10:0 (% of total FA) had the highest R² (0.75, achieved with Bayes A and Bayes B), and among the technological traits, fresh cheese yield had the highest R² (0.82, achieved with Bayes B). These 2 methods proved to be useful instruments for shrinking coefficient estimates, selecting highly informative wavelengths, and inferring the structure and functions of the analyzed traits. We conclude that Bayesian models are powerful tools for deriving calibration equations, and, importantly, these equations can be easily developed using existing open-source software. As part of our study, we provide scripts based on the open-source R software BGLR, which can be used to train customized prediction equations for other traits or populations. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
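Since the calibration scripts are based on BGLR, training a Bayes B equation reduces to a call of roughly the following form. This is an R sketch with simulated stand-ins for the spectra and trait; the released scripts define the actual settings such as chain length and validation splits.

    # Sketch: a Bayes B calibration on FTIR spectra with BGLR
    # (X: n x p matrix of absorbances; y: trait vector). Requires the BGLR
    # package; BGLR writes its MCMC output files to the working directory.
    library(BGLR)
    set.seed(1)
    n <- 200; p <- 1060
    X <- matrix(rnorm(n * p), n, p)                      # stand-in for FTIR spectra
    y <- as.numeric(X[, 1:20] %*% rnorm(20) + rnorm(n))  # stand-in trait
    fm <- BGLR(y = y, ETA = list(list(X = X, model = "BayesB")),
               nIter = 6000, burnIn = 1000, verbose = FALSE)
    cor(fm$yHat, y)^2                                    # apparent R^2; use training-testing splits in practice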
Genetic basis of climatic adaptation in scots pine by bayesian quantitative trait locus analysis.
Hurme, P; Sillanpää, M J; Arjas, E; Repo, T; Savolainen, O
2000-01-01
We examined the genetic basis of large adaptive differences in timing of bud set and frost hardiness between natural populations of Scots pine. As a mapping population, we considered an "open-pollinated backcross" progeny by collecting seeds of a single F1 tree (cross between trees from southern and northern Finland) growing in southern Finland. Due to the special features of the design (no marker information available on grandparents or the father), we applied a Bayesian quantitative trait locus (QTL) mapping method developed previously for outcrossed offspring. We found four potential QTL for timing of bud set and seven for frost hardiness. Bayesian analyses detected more QTL than ANOVA for frost hardiness, but the opposite was true for bud set. These QTL included alleles with rather large effects, and additionally smaller QTL were supported. The largest QTL for bud set date accounted for about a fourth of the mean difference between populations. Thus, natural selection during adaptation has resulted in selection of at least some alleles of rather large effect. PMID:11063704
Hierarchical Bayesian method for mapping biogeochemical hot spots using induced polarization imaging
Wainwright, Haruko M.; Flores Orozco, Adrian; Bucker, Matthias; ...
2016-01-29
In floodplain environments, a naturally reduced zone (NRZ) is considered to be a common biogeochemical hot spot, having distinct microbial and geochemical characteristics. Although important for understanding their role in mediating floodplain biogeochemical processes, mapping the subsurface distribution of NRZs over the dimensions of a floodplain is challenging, as conventional wellbore data are typically spatially limited and the distribution of NRZs is heterogeneous. In this work, we present an innovative methodology for the probabilistic mapping of NRZs within a three-dimensional (3-D) subsurface domain using induced polarization imaging, which is a noninvasive geophysical technique. Measurements consist of surface geophysical surveys and drilling-recovered sediments at the U.S. Department of Energy field site near Rifle, CO (USA). Inversion of surface time domain-induced polarization (TDIP) data yielded 3-D images of the complex electrical resistivity, in terms of magnitude and phase, which are associated with mineral precipitation and other lithological properties. By extracting the TDIP data values colocated with wellbore lithological logs, we found that the NRZs have a different distribution of resistivity and polarization from the other aquifer sediments. To estimate the spatial distribution of NRZs, we developed a Bayesian hierarchical model to integrate the geophysical and wellbore data. In addition, the resistivity images were used to estimate hydrostratigraphic interfaces under the floodplain. Validation results showed that the integration of electrical imaging and wellbore data using a Bayesian hierarchical model was capable of mapping spatially heterogeneous interfaces and NRZ distributions, thereby providing a minimally invasive means to parameterize a hydrobiogeochemical model of the floodplain.
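As a simplified, non-hierarchical stand-in for the paper's Bayesian hierarchical model, the R sketch below (with simulated data) links colocated TDIP attributes to NRZ occurrence through a logistic model trained on wellbore logs and then maps P(NRZ) onto voxels of the inverted image. All variable names and values are invented for illustration.

    # Sketch: probabilistic NRZ mapping from colocated geophysics + wellbore logs.
    set.seed(1)
    n <- 120
    wells <- data.frame(res   = rnorm(n, 2, 0.5),        # log10 resistivity at logged depths
                        phase = rnorm(n, 10, 3))         # polarization phase (mrad)
    wells$nrz <- rbinom(n, 1, plogis(-2 - 1.5 * (wells$res - 2) + 0.3 * (wells$phase - 10)))
    fit <- glm(nrz ~ res + phase, family = binomial, data = wells)
    grid <- data.frame(res = c(1.6, 2.4), phase = c(14, 7))   # two voxels of the inverted 3-D image
    predict(fit, grid, type = "response")                # P(NRZ) per voxel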
Simple summation rule for optimal fixation selection in visual search.
Najemnik, Jiri; Geisler, Wilson S
2009-06-01
When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W. S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3), 1-14]; however, it seems unlikely that a biological nervous system would implement the computations for the Bayesian ideal fixation selection because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with a human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher and produces fixation statistics similar to humans.
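The ELM rule itself is a one-line computation once the posterior and detectability maps are in hand. The R sketch below implements it with FFT-based (circular) convolution on invented maps, so the result is the argmax up to a wrap-around shift; a real implementation would pad and recenter the detectability map.

    # Sketch of the ELM rule: next fixation = argmax of the posterior map
    # convolved with the squared retinotopic detectability map.
    set.seed(1)
    n <- 64
    post <- matrix(runif(n * n), n, n); post <- post / sum(post)    # posterior over target location
    xy <- expand.grid(x = 1:n, y = 1:n)
    d2 <- matrix(exp(-((xy$x - n/2)^2 + (xy$y - n/2)^2) / 200), n, n)^2  # squared detectability, peaked at "fovea"
    conv <- Re(fft(fft(post) * fft(d2), inverse = TRUE)) / (n * n)  # circular convolution via FFT
    which(conv == max(conv), arr.ind = TRUE)             # near-optimal fixation location (up to circular shift)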
Spatiotemporal Bayesian analysis of Lyme disease in New York state, 1990-2000.
Chen, Haiyan; Stratton, Howard H; Caraco, Thomas B; White, Dennis J
2006-07-01
Mapping ordinarily increases our understanding of nontrivial spatial and temporal heterogeneities in disease rates. However, the large number of parameters required by the corresponding statistical models often complicates detailed analysis. This study investigates the feasibility of a fully Bayesian hierarchical regression approach to the problem and identifies how it outperforms two more popular methods: crude rate estimates (CRE) and empirical Bayes standardization (EBS). In particular, we apply a fully Bayesian approach to the spatiotemporal analysis of Lyme disease incidence in New York state for the period 1990-2000. These results are compared with those obtained by CRE and EBS in Chen et al. (2005). We show that the fully Bayesian regression model not only gives more reliable estimates of disease rates than the other two approaches but also allows for tractable models that can accommodate more numerous sources of variation and unknown parameters.
Pruvot, M; Kutz, S; Barkema, H W; De Buck, J; Orsel, K
2014-11-01
Mycobacterium avium subsp. paratuberculosis (MAP) and Neospora caninum (NC) are two pathogens causing important production-limiting diseases in the cattle industry. Significant impacts of MAP and NC have been reported in dairy cattle herds, but little is known about their importance, risk factors, and transmission patterns in western Canadian cow-calf herds. In this cross-sectional study, the prevalence of MAP and NC infection in southwest Alberta cow-calf herds was estimated, risk factors for NC were identified, and the reproductive impacts of the two pathogens were assessed. Blood and fecal samples were collected from 840 cows on 28 cow-calf operations. Individual cow and herd management information was collected by self-administered questionnaires and one-on-one interviews. Bayesian estimates of the true prevalence of MAP and NC were computed, and bivariable and multivariable statistical analyses were done to assess the association between NC serological status and ranch management risk factors, as well as the clinical effects of the two pathogens. Bayesian estimates of true prevalence indicated that 20% (95% probability interval: 8-38%) of herds had at least one MAP-positive cow, with a within-herd prevalence in positive herds of 22% (8-45%). From the Bayesian posterior distributions of NC prevalence, the median herd-level prevalence was 66% (33-95%), with a 10% (4-21%) cow-level prevalence in positive herds. Multivariable analysis indicated that introducing purchased animals into the herd might increase the risk of NC. The negative association of NC with proper carcass disposal and with the presence of horses on the ranch (possibly in relation to herd monitoring and guarding activities) may suggest the importance of wild carnivores in the dynamics of this pathogen in the study area. We also observed an association between MAP and NC serological status and the number of abortions. Additional studies should be done to further examine specific risk factors for MAP and NC, assess the consequences for reproductive performance in cow-calf herds, and evaluate the overall impact of these pathogens on cow-calf operations. Copyright © 2014 Elsevier B.V. All rights reserved.
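The Bayesian true-prevalence computation can be sketched with a simple grid posterior, as in the R code below; the counts, sensitivity, and specificity are illustrative placeholders, not the values or priors used in the study (which also modelled herd-level clustering).

    # Sketch: Bayesian true prevalence from apparent test results, correcting
    # for an imperfect diagnostic test via a grid posterior.
    k <- 37; n <- 840                                    # test-positive cows / cows sampled (illustrative)
    se <- 0.60; sp <- 0.98                               # assumed test sensitivity and specificity
    ptrue <- seq(0, 1, by = 0.001)                       # grid over true prevalence, uniform prior
    papp  <- ptrue * se + (1 - ptrue) * (1 - sp)         # apparent prevalence implied by each ptrue
    post  <- dbinom(k, n, papp); post <- post / sum(post)
    ptrue[which.max(post)]                               # posterior mode
    c(ptrue[cumsum(post) >= 0.025][1], ptrue[cumsum(post) >= 0.975][1])  # 95% probability interval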
Polymerization shrinkage stress of composite resins and resin cements - What do we need to know?
Soares, Carlos José; Faria-E-Silva, André Luis; Rodrigues, Monise de Paula; Vilela, Andomar Bruno Fernandes; Pfeifer, Carmem Silvia; Tantbirojn, Daranee; Versluis, Antheunis
2017-08-28
Polymerization shrinkage stress of resin-based materials has been related to several unwanted clinical consequences, such as enamel crack propagation, cusp deflection, marginal and internal gaps, and decreased bond strength. Despite the absence of strong evidence relating polymerization shrinkage to secondary caries or fracture of posterior teeth, shrinkage stress has been associated with post-operative sensitivity and marginal stain. The latter is often erroneously used as a criterion for replacement of composite restorations. Therefore, an indirect correlation can emerge between shrinkage stress and the longevity of composite restorations or resin-bonded ceramic restorations. The relationship between shrinkage and stress is best studied in laboratory experiments with a combination of various methodologies. The objective of this review article is to discuss the concept and consequences of polymerization shrinkage and shrinkage stress of composite resins and resin cements. Literature relating to polymerization shrinkage and shrinkage stress generation, research methodologies, and contributing factors is selected and reviewed. Clinical techniques that could reduce shrinkage stress and new developments in low-shrink dental materials are also discussed.
SOMBI: Bayesian identification of parameter relations in unstructured cosmological data
NASA Astrophysics Data System (ADS)
Frank, Philipp; Jasche, Jens; Enßlin, Torsten A.
2016-11-01
This work describes the implementation and application of a correlation determination method based on self-organizing maps and Bayesian inference (SOMBI). SOMBI aims to automatically identify relations between different observed parameters in unstructured cosmological or astrophysical surveys by identifying data clusters in high-dimensional datasets via the self-organizing map neural network algorithm. Parameter relations are then revealed by means of Bayesian inference within the respective identified data clusters. Specifically, such relations are assumed to be parametrized as a polynomial of unknown order. The Bayesian approach results in a posterior probability distribution function for the respective polynomial coefficients. To decide which polynomial order suffices to describe the correlation structures in the data, we include a model selection method, the Bayesian information criterion, in the analysis. The performance of the SOMBI algorithm is tested with mock data. As an illustration, we also provide applications of our method to cosmological data. In particular, we present results of a correlation analysis between galaxy and active galactic nucleus (AGN) properties provided by the SDSS catalog and the cosmic large-scale structure (LSS). The results indicate that the combined galaxy and LSS dataset is indeed clustered into several sub-samples of data with different average properties (for example, different stellar masses or web-type classifications). The majority of data clusters appear to have a similar correlation structure between galaxy properties and the LSS. In particular, we reveal a positive and linear dependency between the stellar mass, the absolute magnitude, and the color of a galaxy and the corresponding cosmic density field. A remaining subset of the data shows inverted correlations, which might be an artifact of non-linear redshift distortions.
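The SOMBI pipeline, cluster first, then select a polynomial order per cluster, can be mimicked compactly in R with k-means standing in for the self-organizing map and BIC standing in for the full Bayesian evidence; both substitutions, and the data, are stand-ins for illustration only.

    # Sketch: cluster the data, then pick the polynomial order of the
    # parameter relation within each cluster by BIC.
    set.seed(1)
    x <- runif(500); y <- 2 + 3 * x + rnorm(500, sd = 0.2)
    mass <- c(runif(250, 5, 6), runif(250, 7, 8))        # two sub-populations with different "stellar mass"
    dat <- data.frame(x, y, mass)
    cl <- kmeans(scale(dat), centers = 2)$cluster        # crude stand-in for the SOM clustering
    for (g in sort(unique(cl))) {
      d <- dat[cl == g, ]
      bic <- sapply(1:4, function(k) BIC(lm(y ~ poly(x, k), data = d)))
      cat("cluster", g, ": best polynomial order =", which.min(bic), "\n")
    }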
Bukovinszky, Katalin; Molnár, Lilla; Bakó, József; Szalóki, Melinda; Hegedus, Csaba
2014-03-01
The polymerization shrinkage and shrinkage stress of dental composites are at the center of interest for researchers and manufacturers, and it is a great challenge to reduce these properties as much as possible. Many interrelated factors affect polymerization shrinkage; shrinkage stress, degree of conversion, and elasticity are of particular importance in this respect. Our aim was to study the polymerization shrinkage and related properties (modulus of elasticity, degree of conversion, shrinkage stress) of three flowable composites (Charisma Opal Flow, SDR, Filtek Ultimate) and an unfilled composite resin. The modulus of elasticity was measured using three-point flexure tests on a universal testing machine. The polymerization shrinkage stress was determined using the bonded-disc technique. The degree-of-conversion measurements were performed by FT-IR spectroscopy, and the volumetric shrinkage was investigated using Archimedes' principle, measured on an analytical balance with special additional equipment. The unfilled resin generally showed the highest shrinkage (8.26%), shrinkage stress (0.8 MPa), and degree of conversion (38%), and presented the lowest modulus of elasticity (3047.02 MPa). The extreme values of the unfilled resin correspond to the literature: the lack of fillers increases the shrinkage and the shrinkage stress but gives higher flexibility and a higher degree of conversion. Further investigations need to be done to understand and reveal the differences between the composites.
Mapping local and global variability in plant trait distributions
Butler, Ethan E.; Datta, Abhirup; Flores-Moreno, Habacuc; ...
2017-12-01
Accurate trait-environment relationships and global maps of plant trait distributions represent a needed stepping stone in global biogeography and are critical constraints on key parameters for land models. Here, we use a global data set of plant traits to map trait distributions closely coupled to photosynthesis and foliar respiration: specific leaf area (SLA) and dry mass-based concentrations of leaf nitrogen (Nm) and phosphorus (Pm). We propose two models to extrapolate geographically sparse point data to continuous spatial surfaces. The first is a categorical model using species mean trait values, categorized into plant functional types (PFTs) and extrapolated to PFT occurrence ranges identified by remote sensing. The second is a Bayesian spatial model that incorporates information about PFT, location, and environmental covariates to estimate trait distributions. Both models are further stratified by varying the number of PFTs. The performance of the models was evaluated based on their explanatory and predictive ability. The Bayesian spatial model leveraging the largest number of PFTs produced the best maps. The interpolation of full trait distributions enables a wider diversity of vegetation to be represented across the land surface. These maps may be used as input to Earth System Models and to evaluate other estimates of functional diversity.
Chen, Zhijian; Craiu, Radu V; Bull, Shelley B
2014-11-01
In focused studies designed to follow up associations detected in a genome-wide association study (GWAS), investigators can proceed to fine-map a genomic region by targeted sequencing or dense genotyping of all variants in the region, aiming to identify a functional sequence variant. For the analysis of a quantitative trait, we consider a Bayesian approach to fine-mapping study design that incorporates stratification according to a promising GWAS tag SNP in the same region. Improved cost-efficiency can be achieved when the fine-mapping phase incorporates a two-stage design, with identification of a smaller set of more promising variants in a subsample taken in stage 1, followed by their evaluation in an independent stage 2 subsample. To avoid the potential negative impact of genetic model misspecification on inference, we incorporate genetic model selection based on posterior probabilities for each competing model. Our simulation study shows that, compared to simple random sampling that ignores genetic information from the GWAS, tag-SNP-based stratified sample allocation methods reduce the number of variants continuing to stage 2 and are more likely to promote the functional sequence variant into confirmation studies. © 2014 WILEY PERIODICALS, INC.
Park, Y W; Han, K; Ahn, S S; Choi, Y S; Chang, J H; Kim, S H; Kang, S-G; Kim, E H; Lee, S-K
2018-04-01
Prediction of the isocitrate dehydrogenase 1 (IDH1)-mutation and 1p/19q-codeletion status of World Health Organization grade II gliomas preoperatively may assist in predicting prognosis and planning treatment strategies. Our aim was to characterize histogram and texture analyses of apparent diffusion coefficient (ADC) and fractional anisotropy maps to determine IDH1-mutation and 1p/19q-codeletion status in World Health Organization grade II gliomas. Ninety-three patients with World Health Organization grade II gliomas with known IDH1-mutation and 1p/19q-codeletion status (18 IDH1 wild-type, 45 IDH1-mutant and no 1p/19q codeletion, 30 IDH1-mutant and 1p/19q-codeleted tumors) underwent DTI. ROIs were drawn on every section of the T2-weighted images and transferred to the ADC and fractional anisotropy maps to derive volume-based data for the entire tumor. Histogram and texture analyses were correlated with IDH1-mutation and 1p/19q-codeletion status. The predictive powers of imaging features for IDH1 wild-type tumors and for 1p/19q-codeletion status in IDH1-mutant subgroups were evaluated using the least absolute shrinkage and selection operator (LASSO). Various histogram and texture parameters differed significantly according to IDH1-mutation and 1p/19q-codeletion status. The skewness and energy of ADC, the 10th and 25th percentiles, and the correlation of fractional anisotropy were independent predictors of IDH1 wild-type status in the LASSO. The area under the receiver operating characteristic curve for the prediction model was 0.853. The skewness and cluster shade of ADC and the energy and correlation of fractional anisotropy were independent predictors of a 1p/19q codeletion in IDH1-mutant tumors in the LASSO. The area under the receiver operating characteristic curve was 0.807. Whole-tumor histogram and texture features of the ADC and fractional anisotropy maps are useful for predicting IDH1-mutation and 1p/19q-codeletion status in World Health Organization grade II gliomas. © 2018 by American Journal of Neuroradiology.
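The least absolute shrinkage and selection operator step maps naturally onto glmnet. The R sketch below uses simulated features and labels in place of the actual ADC/FA histogram-texture data; feature names and dimensions are invented.

    # Sketch: cross-validated LASSO logistic regression for feature selection
    # in an imaging-based status prediction. Requires the glmnet package.
    library(glmnet)
    set.seed(1)
    n <- 93; X <- matrix(rnorm(n * 30), n, 30)           # 30 histogram/texture features (stand-ins)
    colnames(X) <- paste0("feat", 1:30)
    y <- rbinom(n, 1, plogis(X[, 1] - X[, 2]))           # stand-in binary status labels
    cvfit <- cv.glmnet(X, y, family = "binomial", type.measure = "auc", nfolds = 5)
    coef(cvfit, s = "lambda.min")                        # nonzero rows = selected independent predictors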
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asamoto, Shingo, E-mail: asamoto@mail.saitama-u.ac.j; Ohtsuka, Ayumu; Kuwahara, Yuta
In this paper, the effects of actual environmental actions on the shrinkage, creep, and shrinkage cracking of concrete are studied comprehensively. Prismatic specimens of plain concrete were exposed to three sets of artificial outdoor conditions, with or without solar radiation and rain, to examine the shrinkage. To study shrinkage cracking behavior, prismatic concrete specimens with reinforcing steel were also subjected to the above conditions at the same time. The shrinkage behavior is described with a focus on the effects of solar radiation and rain, based on the moisture loss. The environmental actions significant in inducing shrinkage cracks are investigated from the viewpoints of the amount of shrinkage and the tensile strength. Finally, the specific compressive creep behavior under solar radiation and rainfall is discussed. It is found that rain can greatly inhibit the progress of concrete shrinkage and creep, while solar radiation is likely to promote shrinkage cracking and creep.
Crevice corrosion - NaCl concentration map for grade-2 titanium at elevated temperature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsujikawa, Shigeo; Kojima, Yoichi
1993-12-31
The repassivation potential, ER, for metal/metal crevices of commercially pure titanium (C.P. Ti) was determined in NaCl solutions at temperatures up to 250°C. The ER has its least noble value near 100°C and becomes more noble as the temperature increases. As shown in previous research, the shrinkage of the repassivation region should continue with increasing temperature. However, in conducting this same experiment at temperatures higher than 100°C, an examination of the NaCl concentration-temperature-crevice corrosion map verifies that the repassivation region began to expand again when the temperature exceeded 140°C. This expansion continued as the temperature continued to increase.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martínez-García, Eric E.; González-Lópezlira, Rosa A.; Bruzual A, Gustavo
2017-01-20
Stellar masses of galaxies are frequently obtained by fitting stellar population synthesis models to galaxy photometry or spectra. The state-of-the-art method resolves spatial structures within a galaxy to assess the total stellar mass content. In comparison to unresolved studies, resolved methods yield, on average, higher fractions of stellar mass for galaxies. In this work we improve the current method in order to mitigate a bias related to the resolved spatial distribution derived for the mass. The bias consists of an apparent filamentary mass distribution and a spatial coincidence between mass structures and dust lanes near spiral arms. The improved method is based on iterative Bayesian marginalization, through a new algorithm we have named Bayesian Successive Priors (BSP). We have applied BSP to M51 and to a pilot sample of 90 spiral galaxies from the Ohio State University Bright Spiral Galaxy Survey. By quantitatively comparing both methods, we find that the average fraction of stellar mass missed by unresolved studies is only half of what was previously thought. In contrast with the previous method, the output BSP mass maps bear a better resemblance to near-infrared images.
NASA Astrophysics Data System (ADS)
Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi
This paper proposes a classification method that uses Bayesian analysis to classify time series data from an international emissions trading market driven by agent-based simulation, and compares it with a discrete Fourier transform analysis. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods reveal the following: (1) the classification methods express the time series data as distances in the mapped space, which are easier to understand and to draw inferences from than the raw time series; (2) the methods can analyze uncertain time series data, including both stationary and non-stationary processes, using these distances obtained via agent-based simulation; and (3) the Bayesian analytical method can distinguish a 1% difference in the agents' emission reduction targets.
Gravitational Acceleration Effects on Macrosegregation: Experiment and Computational Modeling
NASA Technical Reports Server (NTRS)
Leon-Torres, J.; Curreri, P. A.; Stefanescu, D. M.; Sen, S.
1999-01-01
Experiments were performed under terrestrial gravity (1g) and during parabolic flights (10(exp -2) g) to study the solidification and macrosegregation patterns of Al-Cu alloys. Alloys having 2% and 5% Cu were solidified against a chill at two different cooling rates. Microscopic and electron microprobe characterization was used to produce microstructural and macrosegregation maps. In all cases positive segregation occurred next to the chill because of shrinkage flow, as expected. This positive segregation was higher in the low-g samples, apparently because of the higher heat transfer coefficient. A 2-D computational model was used to explain the experimental results. The continuum formulation was employed to describe the macroscopic transports of mass, energy, and momentum, associated with the solidification phenomena, for a two-phase system. The model considers that liquid flow is driven by thermal and solutal buoyancy, and by solidification shrinkage. The solidification event was divided into two stages. In the first, the liquid containing freely moving equiaxed grains was described through the relative viscosity concept. In the second stage, when a fixed dendritic network was formed after dendritic coherency, the mushy zone was treated as a porous medium. The macrosegregation maps and the cooling curves obtained during the experiments were used for validation of the solidification and segregation model. The model can explain the solidification and macrosegregation patterns and the differences between low- and high-gravity results.
Sparse Bayesian Information Filters for Localization and Mapping
2008-02-01
…a set of smaller, more manageable maps [76, 51, 139, 77, 12]. These appropriately-named submap algorithms greatly reduce the effects of map size on… An intuitive way of dealing with this limitation is to divide the world into numerous sub-environments, each comprised of a more manageable number of… p(x_t, M | z^t, u^t) = p(M | x_t, z^t) · p(x_t | z^t, u^t)  (2.16) This assumes knowledge of the mean, which is necessary for observations that are…
A new method to measure the polymerization shrinkage kinetics of light cured composites.
Lee, I B; Cho, B H; Son, H H; Um, C M
2005-04-01
This study was undertaken to develop a new measurement method to determine the initial dynamic volumetric shrinkage of composite resins during polymerization, and to investigate the effect of curing light intensity on the polymerization shrinkage kinetics. The instrument was basically an electromagnetic balance constructed with a force transducer using a position sensitive photo detector (PSPD) and a negative feedback servo amplifier. The volumetric change of composites during polymerization was detected continuously as a buoyancy change in distilled water by means of Archimedes' principle. Using this new instrument, the dynamic patterns of the polymerization shrinkage of seven commercial composite resins were measured. The polymerization shrinkage of the composites was 1.92-4.05 volume %. The shrinkage of a packable composite was the lowest, and that of a flowable composite was the highest. The maximum rate of polymerization shrinkage increased with increasing light intensity, but the time to peak shrinkage rate decreased with increasing light intensity. A strong positive relationship was observed between the square root of the light intensity and the maximum shrinkage rate. The shrinkage rate per unit time, dVol%/dt, showed that the instrument can be a valuable research tool for investigating polymerization reaction kinetics. This new shrinkage-measuring instrument has the advantages of being insensitive to temperature changes and of measuring the dynamic volumetric shrinkage in real time without complicated processing. Therefore, it can be used to characterize the shrinkage kinetics of a wide range of commercial and experimental visible-light-cured materials in relation to their composition and chemistry.
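The conversion from measured buoyancy to volumetric shrinkage is a direct application of Archimedes' principle: the volume lost equals the mass of no-longer-displaced water divided by the water density. The R sketch below uses invented balance readings, not data from the study.

    # Sketch: buoyancy change -> volumetric shrinkage. As the specimen shrinks
    # it displaces less water, so its apparent weight on the balance rises.
    rho_w <- 0.9971                                      # density of water at 25 C, g/cm^3
    V0    <- 0.050                                       # initial specimen volume, cm^3 (assumed)
    dB_mg <- c(0, 0.31, 0.74, 1.12, 1.35)                # apparent weight gain over time, mg (invented)
    dV    <- (dB_mg / 1000) / rho_w                      # lost volume, cm^3 (mass of displaced water / density)
    shrink_pct <- 100 * dV / V0                          # volumetric shrinkage, %
    shrink_pct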
Do low-shrink composites reduce polymerization shrinkage effects?
Tantbirojn, D; Pfeifer, C S; Braga, R R; Versluis, A
2011-05-01
Progress in polymer science has led to a continuous reduction of polymerization shrinkage, exemplified by a new generation of "low-shrink composites". The common inference that shrinkage stress effects will be reduced in teeth restored with such lower-shrinkage restoratives was tested in extracted human premolars. Mesio-occluso-distal slot-shaped cavities were cut and restored with a conventional (SupremePlus) or low-shrink (RefleXions, Premise, Kalore, and LS) composite (N = 5). We digitized the coronal surfaces before and 10 min after restoration to determine cuspal deflection from the buccal and lingual volume change per area. We also determined the main properties involved (total shrinkage, post-gel shrinkage, degree of conversion, and elastic modulus), as well as microleakage, to verify adequate bonding. It was shown that, due to shrinkage stresses, buccal and lingual surfaces pulled inward after restoration (9-14 microns). Only Kalore and LS resulted in significantly lower tooth deformation (ANOVA/Student-Newman-Keuls post hoc, p = 0.05). The other two low-shrink composites, despite having the lowest and highest total shrinkage values, did not cause significant differences in cuspal deflection. Deflection seemed most related to the combination of post-gel shrinkage and elastic modulus. Therefore, even for significantly lower total shrinkage values, shrinkage stress is not necessarily reduced.
Bayesian population receptive field modelling.
Zeidman, Peter; Silson, Edward Harry; Schwarzkopf, Dietrich Samuel; Baker, Chris Ian; Penny, Will
2017-09-08
We introduce a probabilistic (Bayesian) framework and associated software toolbox for mapping population receptive fields (pRFs) based on fMRI data. This generic approach is intended to work with stimuli of any dimension and is demonstrated and validated in the context of 2D retinotopic mapping. The framework enables the experimenter to specify generative (encoding) models of fMRI timeseries, in which experimental stimuli enter a pRF model of neural activity, which in turn drives a nonlinear model of neurovascular coupling and the Blood Oxygenation Level Dependent (BOLD) response. The neuronal and haemodynamic parameters are estimated together on a voxel-by-voxel or region-of-interest basis using a Bayesian estimation algorithm (variational Laplace). This offers several novel contributions to receptive field modelling. The variance/covariance of the parameters is estimated, enabling receptive fields to be plotted while properly representing uncertainty about pRF size and location. Variability in the haemodynamic response across the brain is accounted for. Furthermore, the framework introduces formal hypothesis testing to pRF analysis, enabling competing models to be evaluated based on their log model evidence (approximated by the variational free energy), which represents the optimal tradeoff between accuracy and complexity. Using simulations and empirical data, we found that parameters typically used to represent pRF size and neuronal scaling are strongly correlated, which is taken into account by the Bayesian methods we describe when making inferences. We used the framework to compare the evidence for six variants of pRF model using 7 T functional MRI data, and we found a circular Difference of Gaussians (DoG) model to be the best explanation for our data overall. We hope this framework will prove useful for mapping stimulus spaces with any number of dimensions onto the anatomy of the brain. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
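At the heart of the framework is a generative (encoding) model that predicts a BOLD timeseries from a stimulus and pRF parameters. The R sketch below builds that forward prediction with an invented stimulus, a 2-D Gaussian pRF, and a crude gamma haemodynamic kernel; the toolbox itself then inverts such a model with variational Laplace, which is not shown here.

    # Sketch: forward (encoding) model of a pRF, stimulus -> neural drive -> BOLD.
    set.seed(1)
    n <- 32; Tn <- 120
    xy  <- expand.grid(x = 1:n, y = 1:n)
    prf <- exp(-((xy$x - 10)^2 + (xy$y - 20)^2) / (2 * 4^2))   # Gaussian pRF: centre (10,20), size 4
    stim <- matrix(rbinom(n * n * Tn, 1, 0.05), n * n, Tn)     # binarized stimulus movie (invented)
    z    <- as.numeric(prf %*% stim)                           # neural drive per scan
    hrf  <- dgamma(0:15, shape = 6)                            # crude haemodynamic kernel
    bold <- convolve(z, rev(hrf), type = "open")[1:Tn]         # predicted BOLD timeseries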
Influence of length-to-diameter ratio on shrinkage of basalt fiber concrete
NASA Astrophysics Data System (ADS)
Ruijie, MA; Yang, Jiansen; Liu, Yuan; Zheng, Xiaojun
2017-09-01
In order to study the shrinkage performance of basalt fiber concrete, using the shrinkage rate as the index, this work studied the influence of different length-to-diameter ratios (LDR) on the plastic shrinkage and drying shrinkage of basalt fiber concrete and analyzed the action mechanism. The results show that at a fiber content of 0.1%, LDRs of 800 and 1200 are more effective at reducing plastic shrinkage, whereas at a fiber content of 0.3%, an LDR of 600 is better. For drying shrinkage, fibers with an LDR of 800 are most effective. In the concrete structure, the added basalt fibers form a uniform and chaotic supporting system, optimize the pore and void structure of the concrete, further compact the material, and reduce water loss, thereby effectively decreasing the shrinkage of the concrete.
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad
2016-05-01
Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision inherently embedded in expert-provided information. In order to solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', which is the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows uncertainty and imprecision to be modelled distinctly, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle in employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes prohibitively large and computationally infeasible. In this paper, a novel approach for accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert elicitation methodology is developed and applied to the real-world test case in order to provide a road map for the use of fuzzy Bayesian inference in groundwater modeling applications.
Stude, Philipp; Enax-Krumova, Elena K; Lenz, Melanie; Lissek, Silke; Nicolas, Volkmar; Peters, Soeren; Westermann, Amy; Tegenthoff, Martin; Maier, Christoph
2014-01-01
Patients with complex regional pain syndrome type I (CRPS I) show a cortical reorganization with contralateral shrinkage of cortical maps in S1. The relevance of pain and disuse for the development and the maintenance of this shrinkage is unclear. The aim of the study was to assess whether short-term pain relief induces changes in the cortical representation of the affected hand in patients with CRPS type I. Case series analysis of prospectively collected data. We enrolled a case series of 5 consecutive patients with CRPS type I (disease duration 3 - 36 months) of the non-dominant upper limb and previously diagnosed sympathetically maintained pain (SMP), defined by a reduction in pain intensity of more than 30% after a prior diagnostic sympathetic block. We performed fMRI for analysis of the cortical representation of the affected hand immediately before as well as one hour after isolated sympathetic block of the stellate ganglion on the affected side. Wilcoxon test, paired t-test, P < 0.05. Pain decrease after isolated sympathetic block (pain intensity on the numerical rating scale (0 - 10) before block: 6.8 ± 1.9, afterwards: 3.8 ± 1.3) was accompanied by an increase in the blood oxygenation level dependent (BOLD) response of cortical representational maps only of the affected hand, which had been reduced before the block, despite the fact that clinical and neurophysiological assessment revealed no changes in the sensorimotor function. The interpretation of the present results is partly limited by the small number of included patients and the missing control group with placebo injection. The association between recovery of the cortical representation and pain relief supports the hypothesis that pain could be a relevant factor for changes of somatosensory cortical maps in CRPS, and that these are rapidly reversible.
Sparse Bayesian Learning for Identifying Imaging Biomarkers in AD Prediction
Shen, Li; Qi, Yuan; Kim, Sungeun; Nho, Kwangsik; Wan, Jing; Risacher, Shannon L.; Saykin, Andrew J.
2010-01-01
We apply sparse Bayesian learning methods, automatic relevance determination (ARD) and predictive ARD (PARD), to Alzheimer's disease (AD) classification to make accurate predictions while simultaneously identifying critical imaging markers relevant to AD. ARD is one of the most successful Bayesian feature selection methods. PARD is a powerful Bayesian feature selection method that provides sparse models which are easy to interpret. PARD selects the model with the best estimate of the predictive performance instead of choosing the one with the largest marginal model likelihood. A comparative study with support vector machines (SVM) shows that ARD/PARD in general outperform SVM in terms of prediction accuracy. Additional comparison with surface-based general linear model (GLM) analysis shows that the regions with the strongest signals are identified by both GLM and ARD/PARD. While the GLM P-map returns significant regions all over the cortex, ARD/PARD provide a small number of relevant and meaningful imaging markers with predictive power, including both cortical and subcortical measures. PMID:20879451
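As a hedged illustration of the ARD idea (not the authors' implementation, and with synthetic data in place of the AD imaging features), scikit-learn's ARDRegression shows how irrelevant features are shrunk toward zero while predictive ones are retained:

```python
# ARD-style sparse Bayesian feature selection on synthetic data: only a few
# predictors carry signal, and ARD shrinks the rest toward zero.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))          # 200 subjects, 50 imaging measures
w_true = np.zeros(50)
w_true[:3] = [2.0, -1.5, 1.0]               # only 3 markers truly relevant
y = X @ w_true + 0.5 * rng.standard_normal(200)

model = ARDRegression().fit(X, y)
relevant = np.flatnonzero(np.abs(model.coef_) > 0.1)
print("selected features:", relevant)       # irrelevant weights end up near 0
```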
A Bayesian nonparametric approach to dynamical noise reduction
NASA Astrophysics Data System (ADS)
Kaloudis, Konstantinos; Hatjispyros, Spyridon J.
2018-06-01
We propose a Bayesian nonparametric approach to the noise reduction of a given chaotic time series contaminated by dynamical noise, based on Markov chain Monte Carlo methods. The underlying unknown noise process (possibly) exhibits heavy-tailed behavior. We introduce the Dynamic Noise Reduction Replicator model, with which we reconstruct the unknown dynamic equations and, in parallel, replicate the dynamics under dynamical perturbations with a reduced noise level. The dynamic noise reduction procedure is demonstrated specifically in the case of polynomial maps. Simulations based on synthetic time series are presented.
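A minimal sketch of the data-generating setting the paper addresses, assuming a quadratic (polynomial) map perturbed by heavy-tailed Student-t dynamical noise; the DNRR reconstruction itself is not reproduced here, and all parameter values are illustrative.

```python
# Simulate a chaotic quadratic map x_{t+1} = 1 - theta * x_t^2 + e_t with
# heavy-tailed dynamical noise e_t ~ scale * Student-t(df). A crude clip
# guards against rare large shocks ejecting the orbit from the attractor.
import numpy as np

rng = np.random.default_rng(1)

def noisy_quadratic_map(x0, n, theta=1.71, df=3, scale=0.01):
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = 1.0 - theta * x[t - 1] ** 2 + scale * rng.standard_t(df)
        x[t] = np.clip(x[t], -1.2, 1.2)
    return x

series = noisy_quadratic_map(0.1, 500)  # synthetic series for noise reduction
```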
Source Detection with Bayesian Inference on ROSAT All-Sky Survey Data Sample
NASA Astrophysics Data System (ADS)
Guglielmetti, F.; Voges, W.; Fischer, R.; Boese, G.; Dose, V.
2004-07-01
We employ Bayesian inference for the joint estimation of sources and background on ROSAT All-Sky Survey (RASS) data. The probabilistic method allows for detection improvement of faint extended celestial sources compared to the Standard Analysis Software System (SASS). Background maps were estimated in a single step together with the detection of sources without pixel censoring. Consistent uncertainties of background and sources are provided. The source probability is evaluated for single pixels as well as for pixel domains to enhance source detection of weak and extended sources.
A Bayesian network model for predicting pregnancy after in vitro fertilization.
Corani, G; Magli, C; Giusti, A; Gianaroli, L; Gambardella, L M
2013-11-01
We present a Bayesian network model for predicting the outcome of in vitro fertilization (IVF). The problem is characterized by a particular missingness process; we propose a simple but effective averaging approach which improves parameter estimates compared to the traditional MAP estimation. We present results with generated data and the analysis of a real data set. Moreover, we assess by means of a simulation study the effectiveness of the model in supporting the selection of the embryos to be transferred. © 2013 Elsevier Ltd. All rights reserved.
Polymerization shrinkage kinetics and shrinkage-stress in dental resin-composites.
Al Sunbul, Hanan; Silikas, Nick; Watts, David C
2016-08-01
To investigate a set of resin-composites and the effect of their composition on polymerization shrinkage strain and strain kinetics, shrinkage stress and the apparent elastic modulus. Eighteen commercially available resin-composites were investigated. Three specimens (n=3) were made per material and light-cured with an LED unit (1200 mW/cm²) for 20 s. The bonded-disk method was used to measure the shrinkage strain and the Bioman shrinkage stress instrument was used to measure shrinkage stress. The shrinkage strain kinetics at 23°C were monitored for 60 min. Maximum strain and stress were evaluated at 60 min. The shrinkage strain rate was calculated using numerical differentiation. The shrinkage strain values ranged from 1.83 (0.09) % for Tetric Evoceram (TEC) to 4.68 (0.04) % for Beautifil flow plus (BFP). The shrinkage strain rate ranged from 0.11 (0.01) %·s⁻¹ for Gaenial posterior (GA-P) to 0.59 (0.07) %·s⁻¹ for BFP. Shrinkage stress values ranged from 3.94 (0.40) MPa for TEC to 10.45 (0.41) MPa for BFP. The apparent elastic modulus ranged from 153.56 (18.7) MPa for Ever X posterior (EVX) to 277.34 (25.5) MPa for Grandio SO heavy flow (GSO). The nature of the monomer system determines the amount of bulk contraction that occurs during polymerization and the resultant stress. Higher values of shrinkage strain and stress were demonstrated by the investigated flowable materials. The bulk-fill materials showed comparable results to the traditional resin-composites. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Development of concrete shrinkage performance specifications.
DOT National Transportation Integrated Search
2003-01-01
During its service life, concrete undergoes volume changes. One of the types of deformation is shrinkage. The four main types of shrinkage associated with concrete are plastic, autogenous, carbonation, and drying shrinkage. The volume changes in conc...
Evaluation of shrinkage and cracking in concrete of ring test by acoustic emission method
NASA Astrophysics Data System (ADS)
Watanabe, Takeshi; Hashimoto, Chikanori
2015-03-01
Drying shrinkage of concrete is one of the typical problems that reduce the durability and cause deformation of concrete structures. Limestone aggregate, expansive additive, and low-heat Portland cement are used to reduce drying shrinkage in Japan. Drying shrinkage is commonly evaluated by measuring the length change of mortar and concrete specimens. These methods detect the strain due to drying shrinkage of an unrestrained body, in which visible cracking does not occur. In this study, the ring test was employed to detect restrained strain and the cracking age of concrete, and the acoustic emission (AE) method was adopted to detect micro-cracking due to shrinkage. It was confirmed that limestone aggregate, expansive additive, and low-heat Portland cement are effective in decreasing drying shrinkage and visible cracking. Micro-cracking due to shrinkage of this concrete was detected and evaluated by the AE method.
Huang, Lei; Goldsmith, Jeff; Reiss, Philip T.; Reich, Daniel S.; Crainiceanu, Ciprian M.
2013-01-01
Diffusion tensor imaging (DTI) measures water diffusion within white matter, allowing for in vivo quantification of brain pathways. These pathways often subserve specific functions, and impairment of those functions is often associated with imaging abnormalities. As a method for predicting clinical disability from DTI images, we propose a hierarchical Bayesian “scalar-on-image” regression procedure. Our procedure introduces a latent binary map that estimates the locations of predictive voxels and penalizes the magnitude of effect sizes in these voxels, thereby resolving the ill-posed nature of the problem. By inducing a spatial prior structure, the procedure yields a sparse association map that also maintains spatial continuity of predictive regions. The method is demonstrated on a simulation study and on a study of association between fractional anisotropy and cognitive disability in a cross-sectional sample of 135 multiple sclerosis patients. PMID:23792220
Semi-blind Bayesian inference of CMB map and power spectrum
NASA Astrophysics Data System (ADS)
Vansyngel, Flavien; Wandelt, Benjamin D.; Cardoso, Jean-François; Benabed, Karim
2016-04-01
We present a new blind formulation of the cosmic microwave background (CMB) inference problem. The approach relies on a phenomenological model of the multifrequency microwave sky without the need for physical models of the individual components. For all-sky and high resolution data, it unifies parts of the analysis that had previously been treated separately such as component separation and power spectrum inference. We describe an efficient sampling scheme that fully explores the component separation uncertainties on the inferred CMB products such as maps and/or power spectra. External information about individual components can be incorporated as a prior giving a flexible way to progressively and continuously introduce physical component separation from a maximally blind approach. We connect our Bayesian formalism to existing approaches such as Commander, spectral mismatch independent component analysis (SMICA), and internal linear combination (ILC), and discuss possible future extensions.
Douali, Nassim; Csaba, Huszka; De Roo, Jos; Papageorgiou, Elpiniki I; Jaulent, Marie-Christine
2014-01-01
Several studies have described the prevalence and severity of diagnostic errors. Diagnostic errors can arise from cognitive, training, educational and other issues. Examples of cognitive issues include flawed reasoning, incomplete knowledge, faulty information gathering or interpretation, and inappropriate use of decision-making heuristics. We describe a new approach, case-based fuzzy cognitive maps, for medical diagnosis and evaluate it by comparison with Bayesian belief networks. We created a semantic web framework that supports the two reasoning methods. We used a database of 174 anonymous patients from several European hospitals: 80 of the patients were female and 94 male, with an average age of 45±16 years (mean±SD). Thirty of the 80 female patients were pregnant. For each patient, signs/symptoms/observables/age/sex were taken into account by the system. We used a statistical approach to compare the two methods. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Uwano, Ikuko; Sasaki, Makoto; Kudo, Kohsuke; Boutelier, Timothé; Kameda, Hiroyuki; Mori, Futoshi; Yamashita, Fumio
2017-01-10
The Bayesian estimation algorithm improves the precision of bolus tracking perfusion imaging. However, this algorithm cannot directly calculate Tmax, the time scale widely used to identify ischemic penumbra, because Tmax is a non-physiological, artificial index that reflects the tracer arrival delay (TD) and other parameters. We calculated Tmax from the TD and mean transit time (MTT) obtained by the Bayesian algorithm and determined its accuracy in comparison with Tmax obtained by singular value decomposition (SVD) algorithms. The TD and MTT maps were generated by the Bayesian algorithm applied to digital phantoms with time-concentration curves that reflected a range of values for various perfusion metrics using a global arterial input function. Tmax was calculated from the TD and MTT using constants obtained by a linear least-squares fit to Tmax obtained from the two SVD algorithms that showed the best benchmarks in a previous study. Correlations between the Tmax values obtained by the Bayesian and SVD methods were examined. The Bayesian algorithm yielded accurate TD and MTT values relative to the true values of the digital phantom. Tmax calculated from the TD and MTT values with the least-squares fit constants showed excellent correlation (Pearson's correlation coefficient = 0.99) and agreement (intraclass correlation coefficient = 0.99) with Tmax obtained from SVD algorithms. Quantitative analyses of Tmax values calculated from Bayesian-estimation algorithm-derived TD and MTT from a digital phantom correlated and agreed well with Tmax values determined using SVD algorithms.
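The calibration step can be sketched in a few lines: regress SVD-derived Tmax on the Bayesian TD and MTT estimates by linear least squares, then apply the fitted constants to new Bayesian maps. The arrays below are invented placeholders, not the paper's phantom values.

```python
# Fit Tmax_SVD ~ a*TD + b*MTT + c by least squares, then use the constants
# to compute a Bayesian-derived Tmax. All numbers here are illustrative.
import numpy as np

td  = np.array([0.5, 1.0, 2.0, 3.0, 4.0])       # s, Bayesian tracer arrival delay
mtt = np.array([4.0, 5.0, 6.0, 8.0, 10.0])      # s, Bayesian mean transit time
tmax_svd = np.array([2.4, 3.4, 4.9, 6.9, 8.9])  # s, Tmax from an SVD algorithm

A = np.column_stack([td, mtt, np.ones_like(td)])
(a, b, c), *_ = np.linalg.lstsq(A, tmax_svd, rcond=None)
tmax_bayes = a * td + b * mtt + c               # Bayesian-derived Tmax estimate
```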
Wang, Tingting; Chen, Yi-Ping Phoebe; Bowman, Phil J; Goddard, Michael E; Hayes, Ben J
2016-09-21
Bayesian mixture models in which the effects of SNPs are assumed to come from normal distributions with different variances are attractive for simultaneous genomic prediction and QTL mapping. These models are usually implemented with Markov chain Monte Carlo (MCMC) sampling, which requires long compute times with large genomic data sets. Here, we present an efficient approach (termed HyB_BR), which is a hybrid of an Expectation-Maximisation algorithm followed by a limited number of MCMC iterations, without the requirement for burn-in. To test prediction accuracy from HyB_BR, dairy cattle and human disease trait data were used. In the dairy cattle data, there were four quantitative traits (milk volume, protein kg, fat% in milk and fertility) measured in 16,214 cattle from two breeds genotyped for 632,002 SNPs. Validation of genomic predictions was in a subset of cattle either from the reference set or in animals from a third breed that was not in the reference set. In all cases, HyB_BR gave almost identical accuracies to Bayesian mixture models implemented with full MCMC, while computational time was reduced to as little as 1/17 of that required by full MCMC. The SNPs with high posterior probability of a non-zero effect were also very similar between full MCMC and HyB_BR, with several known genes affecting milk production in this category, as well as some novel genes. HyB_BR was also applied to seven human diseases with 4890 individuals genotyped for around 300K SNPs in a case/control design, from the Wellcome Trust Case Control Consortium (WTCCC). In this data set, the results demonstrated again that HyB_BR performed as well as Bayesian mixture models with full MCMC for genomic prediction and genetic architecture inference, while reducing the computational time from 45 h with full MCMC to 3 h with HyB_BR. The results for quantitative traits in cattle and disease in humans demonstrate that HyB_BR can perform as well as Bayesian mixture models implemented with full MCMC in terms of prediction accuracy, but up to 17 times faster than the full MCMC implementations. The HyB_BR algorithm makes simultaneous genomic prediction, QTL mapping and inference of genetic architecture feasible in large genomic data sets.
Automated Bayesian model development for frequency detection in biological time series.
Granqvist, Emma; Oldroyd, Giles E D; Morris, Richard J
2011-06-24
A first step in building a mathematical model of a biological system is often the analysis of the temporal behaviour of key quantities. Mathematical relationships between the time and frequency domain, such as Fourier Transforms and wavelets, are commonly used to extract information about the underlying signal from a given time series. This one-to-one mapping from time points to frequencies inherently assumes that both domains contain the complete knowledge of the system. However, for truncated, noisy time series with background trends this unique mapping breaks down and the question reduces to an inference problem of identifying the most probable frequencies. In this paper we build on the method of Bayesian Spectrum Analysis and demonstrate its advantages over conventional methods by applying it to a number of test cases, including two types of biological time series. Firstly, oscillations of calcium in plant root cells in response to microbial symbionts are non-stationary and noisy, posing challenges to data analysis. Secondly, circadian rhythms in gene expression measured over only two cycles highlight the problem of time series with limited length. The results show that the Bayesian frequency detection approach can provide useful results in specific areas where Fourier analysis can be uninformative or misleading. We demonstrate further benefits of the Bayesian approach for time series analysis, such as direct comparison of different hypotheses, inherent estimation of noise levels and parameter precision, and a flexible framework for modelling the data without pre-processing. Modelling in systems biology often builds on the study of time-dependent phenomena. Fourier Transforms are a convenient tool for analysing the frequency domain of time series. However, there are well-known limitations of this method, such as the introduction of spurious frequencies when handling short and noisy time series, and the requirement for uniformly sampled data. Biological time series often deviate significantly from the requirements of optimality for Fourier transformation. In this paper we present an alternative approach based on Bayesian inference. We show the value of placing spectral analysis in the framework of Bayesian inference and demonstrate how model comparison can automate this procedure.
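For a flavour of the underlying computation, here is a compact sketch of single-frequency Bayesian Spectrum Analysis in the Bretthorst form, where the sinusoid amplitudes and noise level are marginalised out and the posterior over frequency is driven by the Schuster periodogram. The data are synthetic, and the single-sinusoid model is a simplification of the automated model development the paper describes.

```python
# Posterior over angular frequency for a one-sinusoid model with amplitudes
# and noise marginalised out (Bretthorst): p(w|D) ~ [1 - 2C(w)/(N*m2)]^((2-N)/2)
import numpy as np

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 20, 80))                  # irregular sampling is fine
d = np.sin(2 * np.pi * 0.7 * t) + 0.4 * rng.standard_normal(t.size)
d = d - d.mean()

omegas = 2 * np.pi * np.linspace(0.05, 2.0, 2000)
N, m2 = d.size, np.mean(d ** 2)
logp = np.empty_like(omegas)
for i, w in enumerate(omegas):
    R, I = np.dot(d, np.cos(w * t)), np.dot(d, np.sin(w * t))
    C = (R ** 2 + I ** 2) / N                        # Lomb/Schuster periodogram
    logp[i] = (2 - N) / 2 * np.log(1 - 2 * C / (N * m2))
best = omegas[np.argmax(logp)] / (2 * np.pi)         # most probable frequency, ~0.7
```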
Bouhrara, Mustapha; Reiter, David A; Sexton, Kyle W; Bergeron, Christopher M; Zukley, Linda M; Spencer, Richard G
2017-11-01
We applied our recently introduced Bayesian analytic method to achieve clinically-feasible in-vivo mapping of the proteoglycan water fraction (PgWF) of human knee cartilage with improved spatial resolution and stability as compared to existing methods. Multicomponent driven equilibrium single-pulse observation of T1 and T2 (mcDESPOT) datasets were acquired from the knees of two healthy young subjects and one older subject with previous knee injury. Each dataset was processed using Bayesian Monte Carlo (BMC) analysis incorporating a two-component tissue model. We assessed the performance and reproducibility of BMC and of the conventional analysis of stochastic region contraction (SRC) in the estimation of PgWF. Stability of the BMC analysis of PgWF was tested by comparing independent high-resolution (HR) datasets from each of the two young subjects. Unlike SRC, the BMC-derived maps from the two HR datasets were essentially identical. Furthermore, SRC maps showed substantial random variation in estimated PgWF, and mean values that differed from those obtained using BMC. In addition, PgWF maps derived from conventional low-resolution (LR) datasets exhibited partial volume and magnetic susceptibility effects. These artifacts were absent in HR PgWF images. Finally, our analysis showed regional variation in PgWF estimates, and substantially higher values in the younger subjects as compared to the older subject. BMC-mcDESPOT permits HR in-vivo mapping of PgWF in human knee cartilage in a clinically-feasible acquisition time. HR mapping reduces the impact of partial volume and magnetic susceptibility artifacts compared to LR mapping. Finally, BMC-mcDESPOT demonstrated excellent reproducibility in the determination of PgWF. Published by Elsevier Inc.
Experiments in Error Propagation within Hierarchal Combat Models
2015-09-01
Using a "ground up" approach, this work first develops a mission-level model of one-on-one submarine combat in Map Aware Non-uniform Automata (MANA), an agent-based simulation that can model the different postures of submarines. It then feeds the results from MANA into stochastic ...
Yu, Hwa-Lung; Chiang, Chi-Ting; Lin, Shu-De; Chang, Tsun-Kuo
2010-02-01
The incidence rate of oral cancer in Changhua County was the highest among the 23 counties of Taiwan in 2001. However, in health data analysis, crude or adjusted incidence rates of a rare event (e.g., cancer) for small populations often exhibit high variances and are, thus, less reliable. We proposed a generalized Bayesian Maximum Entropy (GBME) analysis of spatiotemporal disease mapping under conditions of considerable data uncertainty. GBME was used to study the oral cancer population incidence in Changhua County (Taiwan). Methodologically, GBME is based on an epistematics principles framework and generates spatiotemporal estimates of oral cancer incidence rates. In a way, it accounts for the multi-sourced uncertainty of rates, including small population effects, and the composite space-time dependence of rare events in terms of an extended Poisson-based semivariogram. The results showed that GBME analysis alleviates the noise in oral cancer data arising from the small-population effect. Compared to the raw incidence data, the maps of GBME estimates can identify high-risk oral cancer regions in Changhua County, where the prevalence of betel quid chewing and cigarette smoking is relatively higher than in the rest of the areas. The GBME method is a valuable tool for spatiotemporal disease mapping under conditions of uncertainty. 2010 Elsevier Inc. All rights reserved.
Jat, Prahlad; Serre, Marc L
2016-12-01
Widespread chloride contamination of surface water is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles. Copyright © 2016. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Underwood, Kristen L.; Rizzo, Donna M.; Schroth, Andrew W.; Dewoolkar, Mandar M.
2017-12-01
Given the variable biogeochemical, physical, and hydrological processes driving fluvial sediment and nutrient export, the water science and management communities need data-driven methods to identify regions prone to production and transport under variable hydrometeorological conditions. We use Bayesian analysis to segment concentration-discharge linear regression models for total suspended solids (TSS) and particulate and dissolved phosphorus (PP, DP) using 22 years of monitoring data from 18 Lake Champlain watersheds. Bayesian inference was leveraged to estimate segmented regression model parameters and identify threshold position. The identified threshold positions demonstrated a considerable range below and above the median discharge, which has been used previously as the default breakpoint in segmented regression models to discern differences between pre- and post-threshold export regimes. We then applied a Self-Organizing Map (SOM), which partitioned the watersheds into clusters of TSS, PP, and DP export regimes using watershed characteristics, as well as Bayesian regression intercepts and slopes. A SOM defined two clusters of high-flux basins, one where PP flux was predominantly episodic and hydrologically driven; and another in which the sediment and nutrient sourcing and mobilization were more bimodal, resulting from both hydrologic processes at post-threshold discharges and reactive processes (e.g., nutrient cycling or lateral/vertical exchanges of fine sediment) at pre-threshold discharges. A separate DP SOM defined two high-flux clusters exhibiting a bimodal concentration-discharge response, but driven by differing land use. Our novel framework shows promise as a tool with broad management application that provides insights into landscape drivers of riverine solute and sediment export.
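A hedged sketch of the threshold-identification idea, assuming Gaussian errors, a continuous two-segment fit, and a flat prior over candidate breakpoints profiled on a grid; the data are synthetic, not the Lake Champlain records.

```python
# Bayesian breakpoint identification for a two-segment log C - log Q relation:
# profile the likelihood over candidate knots and normalise to a posterior.
import numpy as np

rng = np.random.default_rng(3)
logq = np.sort(rng.uniform(-1, 2, 150))
noise = 0.15 * rng.standard_normal(150)
logc = np.where(logq < 0.8, 0.2 * logq, 0.16 + 1.1 * (logq - 0.8)) + noise

def rss(tau):
    """Residual sum of squares of a continuous two-segment fit with knot tau."""
    X = np.column_stack([np.ones_like(logq), logq, np.maximum(logq - tau, 0)])
    beta, *_ = np.linalg.lstsq(X, logc, rcond=None)
    r = logc - X @ beta
    return r @ r

taus = np.linspace(-0.5, 1.5, 200)
n = logq.size
loglik = np.array([-n / 2 * np.log(rss(t)) for t in taus])  # sigma profiled out
post = np.exp(loglik - loglik.max()); post /= post.sum()
tau_hat = taus[np.argmax(post)]                             # ~0.8 expected
```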
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Le; Timbie, Peter T.; Bunn, Emory F.
In this paper, we present a new Bayesian semi-blind approach for foreground removal in observations of the 21 cm signal measured by interferometers. The technique, which we call H i Expectation–Maximization Independent Component Analysis (HIEMICA), is an extension of the Independent Component Analysis technique developed for two-dimensional (2D) cosmic microwave background maps to three-dimensional (3D) 21 cm cosmological signals measured by interferometers. This technique provides a fully Bayesian inference of power spectra and maps and separates the foregrounds from the signal based on the diversity of their power spectra. Relying only on the statistical independence of the components, this approach can jointly estimate the 3D power spectrum of the 21 cm signal, as well as the 2D angular power spectrum and the frequency dependence of each foreground component, without any prior assumptions about the foregrounds. This approach has been tested extensively by applying it to mock data from interferometric 21 cm intensity mapping observations under idealized assumptions of instrumental effects. We also discuss the impact when the noise properties are not known completely. As a first step toward solving the 21 cm power spectrum analysis problem, we compare the semi-blind HIEMICA technique to the commonly used Principal Component Analysis. Under the same idealized circumstances, the proposed technique provides significantly improved recovery of the power spectrum. This technique can be applied in a straightforward manner to all 21 cm interferometric observations, including epoch of reionization measurements, and can be extended to single-dish observations as well.
A Novel Discrete Optimal Transport Method for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.
2017-12-01
We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
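The abstract attributes much of the method's efficiency to matrix scaling for the linear program. A classic instance of matrix scaling in transport problems is Sinkhorn iteration for entropy-regularised optimal transport, sketched here on a toy discrete problem; this illustrates the general idea and is not the authors' AET solver.

```python
# Sinkhorn matrix scaling: alternately rescale rows and columns of
# K = exp(-cost/eps) until its marginals match the source and target masses.
import numpy as np

def sinkhorn(mu, nu, cost, eps=0.05, iters=500):
    """Approximate transport plan between discrete distributions mu and nu."""
    K = np.exp(-cost / eps)
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]

x = np.linspace(0, 1, 50)
mu = np.exp(-((x - 0.2) ** 2) / 0.01); mu /= mu.sum()   # "prior" particle mass
nu = np.exp(-((x - 0.7) ** 2) / 0.02); nu /= nu.sum()   # "posterior" particle mass
plan = sinkhorn(mu, nu, cost=(x[:, None] - x[None, :]) ** 2)
```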
Profile-Based LC-MS Data Alignment—A Bayesian Approach
Tsai, Tsung-Heng; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.
2014-01-01
A Bayesian alignment model (BAM) is proposed for alignment of liquid chromatography-mass spectrometry (LC-MS) data. BAM belongs to the category of profile-based approaches, which are composed of two major components: a prototype function and a set of mapping functions. Appropriate estimation of these functions is crucial for good alignment results. BAM uses Markov chain Monte Carlo (MCMC) methods to draw inference on the model parameters and improves on existing MCMC-based alignment methods through 1) the implementation of an efficient MCMC sampler and 2) an adaptive selection of knots. A block Metropolis-Hastings algorithm that mitigates the problem of the MCMC sampler getting stuck at local modes of the posterior distribution is used for the update of the mapping function coefficients. In addition, a stochastic search variable selection (SSVS) methodology is used to determine the number and positions of knots. We applied BAM to a simulated data set, an LC-MS proteomic data set, and two LC-MS metabolomic data sets, and compared its performance with the Bayesian hierarchical curve registration (BHCR) model, the dynamic time-warping (DTW) model, and the continuous profile model (CPM). The advantage of applying appropriate profile-based retention time correction prior to performing a feature-based approach is also demonstrated through the metabolomic data sets. PMID:23929872
Law, Jane
2016-01-01
Intrinsic conditional autoregressive modeling in a Bayesian hierarchical framework has been increasingly applied in small-area ecological studies. This study explores the specification of spatial structure in this Bayesian framework in two aspects: adjacency, i.e., the set of neighbor(s) for each area; and the (spatial) weight for each pair of neighbors. Our analysis was based on a small-area study of falling injuries among people aged 65 and older in Ontario, Canada, that aimed to estimate risks and identify risk factors of such falls. In the case study, we observed incorrect adjacency information caused by deficiencies in the digital map itself. Further, when equal weights were replaced by weights based on a variable of expected count, the range of estimated risks increased, the number of areas whose probability of the estimated risk exceeding one surpassed different probability thresholds increased, and model fit improved. More importantly, the significance of a risk factor diminished. Further research to thoroughly investigate different methods of variable weights; quantify the influence of specifications of spatial weights; and develop strategies for better defining the spatial structure of a map in small-area analysis in Bayesian hierarchical spatial modeling is recommended. PMID:29546147
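The two specification choices the study varies can be made concrete with a toy map: a 0/1 contiguity weight matrix versus weights based on a neighbour's expected count, both row-standardised. The adjacency sets and expected counts below are invented for illustration.

```python
# Two spatial-weight specifications for an ICAR-style prior on a toy 4-area map:
# equal (0/1 contiguity) weights versus weights proportional to the neighbour's
# expected count, each row-standardised to sum to one.
import numpy as np

adjacency = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}   # neighbour sets
expected = np.array([12.0, 40.0, 7.5, 20.0])               # expected fall counts

n = len(adjacency)
W_equal = np.zeros((n, n))
W_var = np.zeros((n, n))
for i, nbrs in adjacency.items():
    for j in nbrs:
        W_equal[i, j] = 1.0                  # classic 0/1 contiguity weight
        W_var[i, j] = expected[j]            # weight neighbour by its E count
W_equal /= W_equal.sum(axis=1, keepdims=True)
W_var /= W_var.sum(axis=1, keepdims=True)
```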
Nitta, Keiko; Nomoto, Rie; Tsubota, Yuji; Tsuchikawa, Masuji; Hayakawa, Tohru
2017-11-29
The purpose of this study was to evaluate the polymerization shrinkage and other physical properties of a newly developed cavity base material for the bulk filling technique, with the brand name BULK BASE (BBS). Polymerization shrinkage was measured according to ISO/FDIS 17304. BBS showed the significantly lowest polymerization shrinkage and significantly higher depth of cure than conventional flowable resin composites (p<0.05). Its Knoop hardness, flexural strength and elastic modulus were significantly lower than those of conventional flowable resin composites (p<0.05). BBS had the significantly greatest filler content (p<0.05). SEM images of the surface showed failure of fillers. The lowest polymerization shrinkage was due to the incorporation of a new type of low-shrinkage monomer, which has urethane moieties. There were no clear correlations between inorganic filler content and polymerization shrinkage, flexural strength or elastic modulus. In conclusion, the low polymerization shrinkage of BBS will be useful for cavity treatment in dental clinics.
Wu, Wei Mo; Wang, Jia Qiang; Cao, Qi; Wu, Jia Ping
2017-02-01
Accurate prediction of soil organic carbon (SOC) distribution is crucial for soil resource utilization and conservation, climate change adaptation, and ecosystem health. In this study, we selected a 1300 m×1700 m solonchak sampling area in the northern Tarim Basin, Xinjiang, China, and collected a total of 144 soil samples (5-10 cm). The objectives of this study were to build a Bayesian geostatistical model to predict SOC content, and to assess its performance by comparison with three other geostatistical approaches [ordinary kriging (OK), sequential Gaussian simulation (SGS), and inverse distance weighting (IDW)]. In the study area, soil organic carbon contents ranged from 1.59 to 9.30 g·kg⁻¹ with a mean of 4.36 g·kg⁻¹ and a standard deviation of 1.62 g·kg⁻¹. The sample semivariogram was best fitted by an exponential model with a nugget-to-sill ratio of 0.57. Using the Bayesian geostatistical approach, we generated the SOC content map and obtained the prediction variance and the upper and lower 95% limits of SOC content, which were then used to evaluate the prediction uncertainty. The Bayesian geostatistical approach performed better than OK, SGS and IDW, demonstrating the advantages of the Bayesian approach in SOC prediction.
Health, Height, Height Shrinkage, and SES at Older Ages: Evidence from China
Huang, Wei; Lei, Xiaoyan; Ridder, Geert; Strauss, John
2015-01-01
In this paper, we build on the literature that examines associations between height and health outcomes of the elderly. We investigate the associations of height shrinkage at older ages with socioeconomic status, finding that height shrinkage for both men and women is negatively associated with better schooling, current urban residence, and household per capita expenditures. We then investigate the relationships between pre-shrinkage height, height shrinkage, and a rich set of health outcomes of older respondents, finding that height shrinkage is positively associated with poor health outcomes across a variety of outcomes, being especially strong for cognition outcomes. PMID:26594311
PLASTIC SHRINKAGE CONTROLLING EFFECT BY POLYPROPYLENE SHORT FIBER WITH HYDROPHILY
NASA Astrophysics Data System (ADS)
Hosoda, Akira; Sadatsuki, Yoshitomo; Oshima, Akihiro; Ishii, Akina; Tsubaki, Tatsuya
The aim of this research is to clarify the mechanism by which a small amount of synthetic short fiber controls plastic shrinkage cracking, and to propose an optimum polypropylene short fiber for controlling plastic shrinkage cracking. In this research, the effect of the hydrophily of polypropylene fiber was investigated with respect to the amount of plastic shrinkage of mortar, the total area of plastic shrinkage cracking, and the bond properties between fiber and mortar. The plastic shrinkage test of mortar was conducted under high temperature, low relative humidity, and constant wind velocity. When the polypropylene fiber had hydrophily, the amount of plastic shrinkage of mortar was restrained, because cement paste in the mortar was captured by the hydrophilic fiber and bleeding of the mortar was thereby restrained. With hydrophily, plastic shrinkage of mortar was restrained and the bridging effect was improved due to better bond, which led to a remarkable reduction of plastic shrinkage cracking. Based on the experimental results, a way of developing an optimum polypropylene short fiber for actual construction was proposed. The fiber should have large hydrophily and a small diameter, and should be used in as small an amount as possible so as not to impair the workability of the concrete.
Cure shrinkage in casting resins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, J. Brock
2015-02-01
A method is described whereby the shrinkage of a casting resin can be determined. Values for the shrinkage of several resin systems in frequent use by Sandia have been measured. A discussion of possible methods for determining the stresses generated by cure shrinkage and thermal contraction is also included.
Ghasaban, S; Atai, M; Imani, M; Zandi, M; Shokrgozar, M-A
2011-11-01
The study investigates the photo-polymerization shrinkage behavior, dynamic mechanical properties, and biocompatibility of cyanoacrylate bioadhesives containing POSS nanostructures and TMPTMA as crosslinking agents. Adhesives containing 2-octyl cyanoacrylate (2-OCA) and different percentages of POSS nanostructures and TMPTMA as crosslinking agents were prepared. 1-Phenyl-1,2-propanedione (PPD) was incorporated into the adhesive as a photo-initiator at 1.5, 3, and 4 wt%. The shrinkage strain of the specimens was measured using the bonded-disk technique. Shrinkage strain, shrinkage strain rate, and the maximum and time at maximum shrinkage strain rate were measured and compared. Mechanical properties of the adhesives were also studied using dynamic mechanical thermal analysis (DMTA). Biocompatibility of the adhesives was examined by the MTT method. The results showed that shrinkage strain increased with increasing initiator concentration up to 3 wt% in POSS-containing and 1.5 wt% in TMPTMA-containing specimens and plateaued at higher concentrations. With increasing crosslinking agent, the shrinkage strain and shrinkage strain rate increased and the time at maximum shrinkage strain rate decreased. The study indicates that the incorporation of crosslinking agents into the cyanoacrylate adhesives resulted in improved mechanical properties. Preliminary MTT studies also revealed a better biocompatibility profile for the adhesives containing crosslinking agents compared to the neat specimens. Copyright © 2011 Wiley Periodicals, Inc.
An Experimental Study on Shrinkage Strains of Normal-and High-Strength Concrete-Filled Frp Tubes
NASA Astrophysics Data System (ADS)
Vincent, Thomas; Ozbakkaloglu, Togay
2017-09-01
It is now well established that concrete-filled fiber reinforced polymer (FRP) tubes (CFFTs) are an attractive construction technique for new columns; however, studies examining concrete shrinkage in CFFTs remain limited. Concrete shrinkage may pose a concern for CFFTs, as in these members the curing of concrete takes place inside the FRP tube. This paper reports the findings of an experimental study on concrete shrinkage strain measurements for CFFTs manufactured with normal- and high-strength concrete (NSC and HSC). A total of 6 aramid FRP (AFRP)-confined concrete specimens with circular cross-sections were manufactured, with 3 specimens each manufactured using NSC and HSC. The specimens were instrumented with surface and embedded strain gauges to monitor shrinkage development of exposed concrete and concrete sealed inside the CFFTs, respectively. All specimens were cylinders with a 152 mm diameter and 305 mm height, and their unconfined concrete strengths were 44.8 or 83.2 MPa. Analysis of the shrinkage measurements from concrete sealed inside the CFFTs revealed that embedment depth and concrete compressive strength had only minor influences on recorded shrinkage strains. However, analysis of shrinkage measurements from the exposed concrete surface revealed that higher amounts of shrinkage can occur in HSC. Finally, it was observed that shrinkage strains are significantly higher for concrete exposed at the surface than for concrete sealed inside the CFFTs.
Kadarmideen, Haja N; Janss, Luc L G
2005-11-01
Bayesian segregation analyses were used to investigate the mode of inheritance of osteochondral lesions (osteochondrosis, OC) in pigs. Data consisted of 1163 animals with OC and their pedigrees included 2891 animals. Mixed-inheritance threshold models (MITM) and several variants of MITM, in conjunction with Markov chain Monte Carlo methods, were developed for the analysis of these (categorical) data. Results showed major genes with significant and substantially higher variances (range 1.384-37.81), compared to the polygenic variance (σu²). Consequently, heritabilities for a mixed inheritance (range 0.65-0.90) were much higher than the heritabilities from the polygenes. Disease allele frequencies ranged from 0.38 to 0.88. Additional analyses estimating the transmission probabilities of the major gene showed clear evidence for Mendelian segregation of a major gene affecting osteochondrosis. The variant MITM with an informative prior on σu² showed significant improvement in marginal distributions and accuracy of parameters. MITM with a "reduced polygenic model" for parameterization of polygenic effects avoided the convergence problems and poor mixing encountered with an "individual polygenic model." In all cases, "shrinkage estimators" for fixed effects avoided unidentifiability of these parameters. The mixed-inheritance linear model (MILM) was also applied to all OC lesions and compared with the MITM. This is the first study to report evidence of major genes for osteochondral lesions in pigs; these results may also form a basis for underpinning the genetic inheritance of this disease in other animals as well as in humans.
Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.
2014-01-01
Parametric and nonparametric methods have been developed for purposes of predicting phenotypes. These methods are based on retrospective analyses of empirical data consisting of genotypic and phenotypic scores. Recent reports have indicated that parametric methods are unable to predict phenotypes of traits with known epistatic genetic architectures. Herein, we review parametric methods including least squares regression, ridge regression, Bayesian ridge regression, least absolute shrinkage and selection operator (LASSO), Bayesian LASSO, best linear unbiased prediction (BLUP), Bayes A, Bayes B, Bayes C, and Bayes Cπ. We also review nonparametric methods including Nadaraya-Watson estimator, reproducing kernel Hilbert space, support vector machine regression, and neural networks. We assess the relative merits of these 14 methods in terms of accuracy and mean squared error (MSE) using simulated genetic architectures consisting of completely additive or two-way epistatic interactions in an F2 population derived from crosses of inbred lines. Each simulated genetic architecture explained either 30% or 70% of the phenotypic variability. The greatest impact on estimates of accuracy and MSE was due to genetic architecture. Parametric methods were unable to predict phenotypic values when the underlying genetic architecture was based entirely on epistasis. Parametric methods were slightly better than nonparametric methods for additive genetic architectures. Distinctions among parametric methods for additive genetic architectures were incremental. Heritability, i.e., proportion of phenotypic variability, had the second greatest impact on estimates of accuracy and MSE. PMID:24727289
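As a small, hedged illustration of two of the reviewed parametric methods on a purely additive simulated architecture (heritability roughly 30%, with arbitrary hyperparameters rather than the paper's tuned ones):

```python
# Ridge vs LASSO genomic prediction on simulated additive SNP effects;
# accuracy is the correlation between predicted and observed phenotypes.
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.integers(0, 3, size=(400, 500)).astype(float)   # SNP dosages 0/1/2
beta = np.zeros(500)
beta[rng.choice(500, 20, replace=False)] = rng.normal(0, 1, 20)
g = X @ beta
y = g + rng.normal(0, np.sqrt(g.var() * 70 / 30), 400)  # ~30% heritability

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for model in (Ridge(alpha=10.0), Lasso(alpha=0.1)):
    acc = np.corrcoef(model.fit(Xtr, ytr).predict(Xte), yte)[0, 1]
    print(type(model).__name__, "accuracy:", round(acc, 3))
```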
Lunar Terrain and Albedo Reconstruction from Apollo Imagery
NASA Technical Reports Server (NTRS)
Nefian, Ara V.; Kim, Taemin; Broxton, Michael; Moratto, Zach
2010-01-01
Generating accurate three dimensional planetary models and albedo maps is becoming increasingly more important as NASA plans more robotics missions to the Moon in the coming years. This paper describes a novel approach for separation of topography and albedo maps from orbital Lunar images. Our method uses an optimal Bayesian correlator to refine the stereo disparity map and generate a set of accurate digital elevation models (DEM). The albedo maps are obtained using a multi-image formation model that relies on the derived DEMs and the Lunar- Lambert reflectance model. The method is demonstrated on a set of high resolution scanned images from the Apollo era missions.
Long term economic relationships from cointegration maps
NASA Astrophysics Data System (ADS)
Vicente, Renato; Pereira, Carlos de B.; Leite, Vitor B. P.; Caticha, Nestor
2007-07-01
We employ the Bayesian framework to define a cointegration measure aimed to represent long term relationships between time series. For visualization of these relationships we introduce a dissimilarity matrix and a map based on the sorting points into neighborhoods (SPIN) technique, which has been previously used to analyze large data sets from DNA arrays. We exemplify the technique in three data sets: US interest rates (USIR), monthly inflation rates and gross domestic product (GDP) growth rates.
Li, Xin-Xu; Ren, Zhou-Peng; Wang, Li-Xia; Zhang, Hui; Jiang, Shi-Wen; Chen, Jia-Xu; Wang, Jin-Feng; Zhou, Xiao-Nong
2016-01-01
Both pulmonary tuberculosis (PTB) and intestinal helminth infection (IHI) affect millions of individuals every year in China. However, national-scale estimation of prevalence predictors and prevalence maps for these diseases, as well as co-endemic relative risk (RR) maps of both diseases' prevalence, are not well developed. There are co-endemic, high-prevalence areas of both diseases, whose delimitation is essential for devising effective control strategies. Bayesian geostatistical logistic regression models including socio-economic, climatic, geographical and environmental predictors were fitted separately for active PTB and IHI based on data from the national surveys for PTB and major human parasitic diseases that were completed in 2010 and 2004, respectively. Prevalence maps and co-endemic RR maps were constructed for both diseases by means of a Bayesian kriging model and a Bayesian shared component model capable of appraising the fraction of variance of spatial RRs shared by both diseases, and those specific to each one, under the assumption that there are unobserved covariates common to both diseases. Our results indicate that gross domestic product (GDP) per capita had a negative association, while rural regions, the arid and polar zones and elevation had positive associations with active PTB prevalence; for IHI prevalence, GDP per capita and distance to water bodies had negative associations, while the equatorial and warm zones and the normalized difference vegetation index had positive associations. Moderate to high prevalence of active PTB and low prevalence of IHI were predicted in western regions, low to moderate prevalence of active PTB and low prevalence of IHI were predicted in north-central regions and the southeast coastal regions, and moderate to high prevalence of active PTB and high prevalence of IHI were predicted in the south-western regions. Thus, co-endemic areas of active PTB and IHI were located in the south-western regions of China, which might be determined by socio-economic factors, such as GDP per capita. PMID:27088504
Development of shrinkage resistant microfibre-reinforced cement-based composites
NASA Astrophysics Data System (ADS)
Hamedanimojarrad, P.; Adam, G.; Ray, A. S.; Thomas, P. S.; Vessalas, K.
2012-06-01
Different types of shrinkage may cause serious durability problems in restrained concrete members due to crack formation and propagation. Several classes of fibres are used by the concrete industry in order to reduce crack size and crack number. In previous studies, most of these fibre types were found to be effective in reducing the number and sizes of cracks, but not in reducing shrinkage strain. This study deals with the influence of a newly introduced type of polyethylene fibre on drying shrinkage reduction. The novel fibre is a polyethylene microfibre with a new geometry, which was shown to reduce the amount of total shrinkage in mortars. This special hydrophobic polyethylene microfibre also reduces the moisture loss of mortar samples. Experimental results on short- and long-term drying shrinkage as well as on several other properties are reported. The hydrophobic polyethylene microfibre showed promising improvement in shrinkage reduction even at very low concentrations (0.1% of cement weight).
A Monte Carlo Evaluation of Estimated Parameters of Five Shrinkage Estimate Formuli.
ERIC Educational Resources Information Center
Newman, Isadore; And Others
1979-01-01
A Monte Carlo simulation was employed to determine the accuracy with which the shrinkage in R squared can be estimated by five different shrinkage formulas. The study dealt with the use of shrinkage formulas for various sample sizes, different R squared values, and different degrees of multicollinearity. (Author/JKS)
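In the same spirit, a minimal Monte Carlo can be sketched with one classic estimator, Wherry's formula, standing in for the five formulas studied; the sample size, predictor count, and population R² below are arbitrary choices.

```python
# Monte Carlo check of R-squared shrinkage: the raw sample R² overestimates
# the population value, and Wherry's adjustment pulls it back toward rho².
import numpy as np

rng = np.random.default_rng(5)
n, p, reps, rho2 = 40, 8, 2000, 0.30        # sample size, predictors, true R²
r2_raw, r2_wherry = [], []
for _ in range(reps):
    X = rng.standard_normal((n, p))
    beta = np.full(p, np.sqrt(rho2 / p / (1 - rho2)))  # gives population R² = rho2
    y = X @ beta + rng.standard_normal(n)
    yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    r2 = 1 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    r2_raw.append(r2)
    r2_wherry.append(1 - (1 - r2) * (n - 1) / (n - p - 1))

print(np.mean(r2_raw), np.mean(r2_wherry))  # raw mean sits well above 0.30
```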
Van de Wal, Bart A E; Leroux, Olivier; Steppe, Kathy
2018-05-01
Grapevines are characterized by a period of irreversible stem shrinkage around the onset of ripening of the grape berries. Since this shrinkage is unrelated to meteorological conditions or drought, it is often suggested that it is caused by the increased sink strength of the grape berries during this period. However, no studies so far have experimentally investigated the mechanisms underlying this irreversible stem shrinkage. We therefore combined continuous measurements of stem diameter variations and histology of potted 2-year-old grapevines (Vitis vinifera L. 'Boskoop Glory'). Sink strength was altered by pruning all grape clusters (treatment P), while non-pruned grapevines served as control (treatment C). Unexpectedly, our results showed irreversible post-veraison stem shrinkage in both treatments, suggesting that the shrinkage is not linked to grape berry sink strength. Anatomical analysis indicated that the shrinkage is the result of the formation of successive concentric periderm layers, and the subsequent dehydration and compression of the older bark tissues, an anatomical feature that is characteristic of Vitis stems. Stem shrinkage is hence unrelated to grape berry development, in contrast to what has been previously suggested.
Post-resection mucosal margin shrinkage in oral cancer: quantification and significance.
Mistry, Rajesh C; Qureshi, Sajid S; Kumaran, C
2005-08-01
The importance of tumor-free margins in the outcome of cancer surgery is well known. Often the pathological margins are reported to be significantly smaller than the in situ margins. This discrepancy is due to margin shrinkage, the quantum of which has not been studied in patients with oral cancers. To quantify the shrinkage of mucosal margins following excision for carcinoma of the oral tongue and buccal mucosa. Mucosal margins were measured prior to resection and half an hour after excision in 27 patients with carcinoma of the tongue and buccal mucosa. The mean margin shrinkage was assessed and the variables affecting the quantum of shrinkage analyzed. The mean shrinkage from the in situ to the post-resection margin status was 22.7% (P < 0.0001). The mean shrinkage of the tongue margins was 23.5%, compared to 21.2% for buccal mucosa margins. The mean shrinkage in T1/T2 tumors (25.6%) was significantly more than in T3/T4 (9.2%, P < 0.011). There is significant shrinkage of mucosal margins after surgery. Hence this should be considered and appropriate margins should be taken at initial resection to prevent the agony of post-operative positive surgical margins. Copyright 2005 Wiley-Liss, Inc.
Katsen-Globa, Alisa; Puetz, Norbert; Gepp, Michael M; Neubauer, Julia C; Zimmermann, Heiko
2016-11-01
One of the most frequently reported artefacts during cell preparation for scanning electron microscopy (SEM) is the shrinkage of cellular objects, which mostly occurs at a certain time-dependent stage of cell drying. Various drying methods for SEM are commonly used, such as critical point drying, freeze-drying and hexamethyldisilazane (HMDS) drying. The latter has become popular because it is a fast, low-cost method. However, the correlation between drying duration and the actual shrinkage of objects had not previously been investigated. In this paper, cell shrinkage at each stage of preparation for SEM was studied. We introduce a shrinkage coefficient using correlative light microscopy (LM) and SEM of the same human mesenchymal stem cells (hMSCs). The influence of HMDS-drying duration on cell shrinkage is shown: the longer the drying duration, the greater the observed shrinkage. Furthermore, cell shrinkage was demonstrated to be inversely related to cultivation time: the longer the cultivation time, the larger the cell spreading area and the smaller the cell shrinkage. Our results are applicable to exact SEM quantification of cell size and determination of cell spreading area in the engineering of artificial cellular environments using biomaterials. SCANNING 38:625-633, 2016. © 2016 Wiley Periodicals, Inc.
Effect of low-shrinkage monomers on the physicochemical properties of experimental composite resin.
He, Jingwei; Garoushi, Sufyan; Vallittu, Pekka K; Lassila, Lippo
2018-01-01
This study was conducted to determine whether novel experimental low-shrinkage dimethacrylate co-monomers could provide low-polymerization-shrinkage composites without sacrificing the degree of conversion or the mechanical properties of the composites. Experimental composites were prepared by mixing 28.6 wt% of a bisphenol-A-glycidyl dimethacrylate (bis-GMA) based resin matrix, with various weight fractions of the co-monomers tricyclodecanedimethanol diacrylate (SR833s) and isobornyl acrylate (IBOA), with 71.4 wt% of particulate fillers. A composite based on bis-GMA/TEGDMA (triethylene glycol dimethacrylate) was used as a control. Fracture toughness and flexural strength were determined for each experimental material following international standards. The degree of monomer conversion (DC%) was determined by FTIR spectrometry. The volumetric shrinkage in percent was calculated from the buoyancy change in distilled water by means of Archimedes' principle. Polymerization shrinkage strain and stress of the specimens were measured over time using the strain-gage technique and a tensilometer, respectively. Statistical analysis revealed that the control group had the highest double-bond conversion (p < .05) among the resins tested. All of the experimental composite resins had comparable flexural strength, modulus, and fracture toughness (p > .05). Volumetric shrinkage and shrinkage stress decreased with increasing IBOA concentration. Replacing TEGDMA with SR833s and IBOA can decrease the volumetric shrinkage, shrinkage strain, and shrinkage stress of composite resins without affecting the mechanical properties. However, the degree of conversion was also decreased.
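The buoyancy calculation behind the Archimedes approach is simple enough to sketch. A minimal version, assuming the specimen mass is weighed in air and suspended in distilled water before and after curing (the masses and water density below are invented for illustration):

    RHO_WATER = 0.9982  # g/cm^3, distilled water near 20 C (assumed temperature)

    def density(m_air, m_water, rho_w=RHO_WATER):
        # Archimedes: volume equals the buoyant mass loss divided by water density
        return m_air / (m_air - m_water) * rho_w

    def volumetric_shrinkage_pct(rho_uncured, rho_cured):
        # volume loss implied by the density increase on curing
        return (1.0 - rho_uncured / rho_cured) * 100.0

    rho_u = density(0.5000, 0.2550)  # uncured specimen (invented masses, g)
    rho_c = density(0.5000, 0.2605)  # same specimen after light-curing
    print(round(volumetric_shrinkage_pct(rho_u, rho_c), 2))  # about 2.25 (%)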
Scholte, Ronaldo G C; Schur, Nadine; Bavia, Maria E; Carvalho, Edgar M; Chammartin, Frédérique; Utzinger, Jürg; Vounatsou, Penelope
2013-11-01
Soil-transmitted helminths (Ascaris lumbricoides, Trichuris trichiura and hookworm) negatively impact the health and wellbeing of hundreds of millions of people, particularly in tropical and subtropical countries, including Brazil. Reliable maps of the spatial distribution and estimates of the number of infected people are required for the control and eventual elimination of soil-transmitted helminthiasis. We used advanced Bayesian geostatistical modelling, coupled with geographical information systems and remote sensing, to visualize the distribution of the three soil-transmitted helminth species in Brazil. Remotely sensed climatic and environmental data, along with socioeconomic variables from readily available databases, were employed as predictors. Our models provided mean prevalence estimates for A. lumbricoides, T. trichiura and hookworm of 15.6%, 10.1% and 2.5%, respectively. By considering infection risk and population numbers at the unit of the municipality, we estimate that 29.7 million Brazilians are infected with A. lumbricoides, 19.2 million with T. trichiura and 4.7 million with hookworm. Our model-based maps identified important risk factors related to the transmission of soil-transmitted helminths and confirmed that environmental variables are closely associated with indices of poverty. Our smoothed risk maps, including uncertainty, highlight areas where soil-transmitted helminthiasis control interventions are most urgently required, namely in the North and along most of the coastal areas of Brazil. We believe that our predictive risk maps are useful for disease control managers for prioritizing control interventions and for providing a tool for more efficient surveillance-response mechanisms.
Mapping local and global variability in plant trait distributions.
Butler, Ethan E; Datta, Abhirup; Flores-Moreno, Habacuc; Chen, Ming; Wythers, Kirk R; Fazayeli, Farideh; Banerjee, Arindam; Atkin, Owen K; Kattge, Jens; Amiaud, Bernard; Blonder, Benjamin; Boenisch, Gerhard; Bond-Lamberty, Ben; Brown, Kerry A; Byun, Chaeho; Campetella, Giandiego; Cerabolini, Bruno E L; Cornelissen, Johannes H C; Craine, Joseph M; Craven, Dylan; de Vries, Franciska T; Díaz, Sandra; Domingues, Tomas F; Forey, Estelle; González-Melo, Andrés; Gross, Nicolas; Han, Wenxuan; Hattingh, Wesley N; Hickler, Thomas; Jansen, Steven; Kramer, Koen; Kraft, Nathan J B; Kurokawa, Hiroko; Laughlin, Daniel C; Meir, Patrick; Minden, Vanessa; Niinemets, Ülo; Onoda, Yusuke; Peñuelas, Josep; Read, Quentin; Sack, Lawren; Schamp, Brandon; Soudzilovskaia, Nadejda A; Spasojevic, Marko J; Sosinski, Enio; Thornton, Peter E; Valladares, Fernando; van Bodegom, Peter M; Williams, Mathew; Wirth, Christian; Reich, Peter B
2017-12-19
Our ability to understand and predict the response of ecosystems to a changing environment depends on quantifying vegetation functional diversity. However, representing this diversity at the global scale is challenging. Typically, in Earth system models, characterization of plant diversity has been limited to grouping related species into plant functional types (PFTs), with all trait variation in a PFT collapsed into a single mean value that is applied globally. Using the largest global plant trait database and state-of-the-art Bayesian modeling, we created fine-grained global maps of plant trait distributions that can be applied to Earth system models. Focusing on a set of plant traits closely coupled to photosynthesis and foliar respiration, namely specific leaf area (SLA) and dry mass-based concentrations of leaf nitrogen (Nm) and phosphorus (Pm), we characterize how traits vary within and among over 50,000 ~50 × 50-km cells across the entire vegetated land surface. We do this in several ways: without defining the PFT of each grid cell, and using either 4 or 14 PFTs; each model's predictions are evaluated against out-of-sample data. This endeavor advances prior trait mapping by generating global maps that preserve variability across scales, using modern Bayesian spatial statistical modeling in combination with a database over three times larger than that in previous analyses. Our maps reveal that the most diverse grid cells possess trait variability close to the range of global PFT means.
Zayeri, Farid; Salehi, Masoud; Pirhosseini, Hasan
2011-12-01
To present a geographical map of malaria and identify some of the important environmental factors of this disease in Sistan and Baluchistan province, Iran. We used registered malaria data to compute the standardized incidence ratios (SIRs) of malaria in different areas of Sistan and Baluchistan province over a nine-year period (2001 to 2009). Statistical analyses consisted of two parts: geographical mapping of malaria incidence rates, and modeling of the environmental factors. Empirical Bayesian estimates of the malaria SIRs were used for geographical mapping, and a Poisson random-effects model was used to assess the effect of environmental factors on the malaria SIRs. In total, 64,926 new cases of malaria were registered in Sistan and Baluchistan province from 2001 to 2009. Among them, 42,695 patients (65.8%) were male and 22,231 (34.2%) were female. Modeling of the environmental factors showed that malaria incidence rates had a positive relationship with humidity, elevation, and average minimum and maximum temperatures, while rainfall had a negative effect on the malaria SIRs in this province. The results of the present study reveal that malaria is still a serious health problem in Sistan and Baluchistan province, Iran. The geographical map and the related environmental factors of malaria can help health policy makers to intervene in high-risk areas more efficiently and to allocate resources appropriately. Copyright © 2011 Hainan Medical College. Published by Elsevier B.V. All rights reserved.
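As an illustration of the empirical Bayes smoothing step, a generic gamma-Poisson sketch is given below; the counts are invented and the moment estimator is a textbook choice, not necessarily the authors' exact procedure.

    import numpy as np

    # Observed counts O_i ~ Poisson(E_i * theta_i), theta_i ~ Gamma(a, b).
    # The posterior mean (O_i + a)/(E_i + b) shrinks each raw SIR toward
    # the overall mean, most strongly where the expected count is small.
    O = np.array([12, 3, 40, 7, 0, 25])              # observed cases per area
    E = np.array([10.0, 5.5, 30.2, 9.1, 2.4, 20.0])  # expected cases per area

    sir = O / E
    m, s2 = sir.mean(), sir.var()   # crude method-of-moments estimates
    b = m / s2                      # Gamma rate
    a = m * b                       # Gamma shape (prior mean a/b = m)

    sir_eb = (O + a) / (E + b)      # smoothed SIR per area
    print(np.round(sir, 2))
    print(np.round(sir_eb, 2))      # unstable extremes are pulled inward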
NASA Astrophysics Data System (ADS)
Hussein, Rafid M.; Chandrashekhara, K.
2017-11-01
A multi-scale modeling approach is presented to simulate and validate the thermo-oxidation shrinkage and cracking damage of a high-temperature polymer composite. The approach couples transient diffusion-reaction analyses with static structural analyses from the macro- to the micro-scale. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using object-oriented finite elements (OOF). The macro-scale shrinkage and weight loss are measured on unidirectional coupons and used to build the macro-shrinkage model. Cross-ply coupons are used to validate the macro-shrinkage model against shrinkage profiles acquired from scanning electron images of the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale image-based cracking is computed; the discrepancy is minimized when the local maximum shrinkage strain is assumed to be 13 times the maximum macro-shrinkage strain of 2.5 × 10^-5. The microcrack damage of the composite is modeled using a static elastic analysis with extended finite elements and cohesive surfaces, taking the spatial evolution of the modulus into account. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions in the respective oxidized region. The simulated microcrack length, meander, and opening closely match the crack in the area of interest in the scanning electron images.
Bayesian Lagrangian Data Assimilation and Drifter Deployment Strategies
NASA Astrophysics Data System (ADS)
Dutt, A.; Lermusiaux, P. F. J.
2017-12-01
Ocean currents transport a variety of natural (e.g. water masses, phytoplankton, zooplankton, sediments, etc.) and man-made materials and other objects (e.g. pollutants, floating debris, search and rescue, etc.). Lagrangian Coherent Structures (LCSs) or the most influential/persistent material lines in a flow, provide a robust approach to characterize such Lagrangian transports and organize classic trajectories. Using the flow-map stochastic advection and a dynamically-orthogonal decomposition, we develop uncertainty prediction schemes for both Eulerian and Lagrangian variables. We then extend our Bayesian Gaussian Mixture Model (GMM)-DO filter to a joint Eulerian-Lagrangian Bayesian data assimilation scheme. The resulting nonlinear filter allows the simultaneous non-Gaussian estimation of Eulerian variables (e.g. velocity, temperature, salinity, etc.) and Lagrangian variables (e.g. drifter/float positions, trajectories, LCSs, etc.). Its results are showcased using a double-gyre flow with a random frequency, a stochastic flow past a cylinder, and realistic ocean examples. We further show how our Bayesian mutual information and adaptive sampling equations provide a rigorous efficient methodology to plan optimal drifter deployment strategies and predict the optimal times, locations, and types of measurements to be collected.
Lloyd-Jones, Luke R; Robinson, Matthew R; Moser, Gerhard; Zeng, Jian; Beleza, Sandra; Barsh, Gregory S; Tang, Hua; Visscher, Peter M
2017-06-01
Genetic association studies in admixed populations are underrepresented in the genomics literature, with a key concern for researchers being the adequate control of spurious associations due to population structure. Linear mixed models (LMMs) are well suited for genome-wide association studies (GWAS) because they account for both population stratification and cryptic relatedness and achieve increased statistical power by jointly modeling all genotyped markers. Additionally, Bayesian LMMs allow for more flexible assumptions about the underlying distribution of genetic effects, and can concurrently estimate the proportion of phenotypic variance explained by genetic markers. Using three recently published Bayesian LMMs, Bayes R, BSLMM, and BOLT-LMM, we investigate an existing data set on eye (n = 625) and skin (n = 684) color from Cape Verde, an island nation off West Africa that is home to individuals with a broad range of phenotypic values for eye and skin color due to the mix of West African and European ancestry. We use simulations to demonstrate the utility of Bayesian LMMs for mapping loci and studying the genetic architecture of quantitative traits in admixed populations. The Bayesian LMMs provide evidence for two new pigmentation loci: one for eye color (AHRR) and one for skin color (DDB1). Copyright © 2017 by the Genetics Society of America.
Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A
2012-03-15
To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. The performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. The LASSO and BMA methods produced models with significantly better predictive power than the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model, as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling; the LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
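A minimal sketch of the LASSO-type approach with cross-validated performance, using scikit-learn on synthetic data in place of the study's dose-volume and clinical predictors:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegressionCV
    from sklearn.model_selection import cross_val_score, StratifiedKFold

    # L1-penalized (LASSO-type) logistic regression: the penalty both
    # shrinks coefficients and selects a sparse, interpretable predictor set.
    X, y = make_classification(n_samples=300, n_features=25, n_informative=5,
                               random_state=0)

    model = LogisticRegressionCV(penalty='l1', solver='liblinear',
                                 Cs=10, cv=5, scoring='roc_auc')
    aucs = cross_val_score(model, X, y, scoring='roc_auc',
                           cv=StratifiedKFold(5, shuffle=True, random_state=1))
    print(aucs.mean())                    # cross-validated discrimination

    model.fit(X, y)
    print(int(np.sum(model.coef_ != 0)), 'predictors retained')  # sparse model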
Satagopan, Jaya M; Sen, Ananda; Zhou, Qin; Lan, Qing; Rothman, Nathaniel; Langseth, Hilde; Engel, Lawrence S
2016-06-01
Matched case-control studies are popular designs used in epidemiology for assessing the effects of exposures on binary traits. Modern studies increasingly enjoy the ability to examine a large number of exposures in a comprehensive manner. However, several risk factors often tend to be related in a nontrivial way, undermining efforts to identify the risk factors using standard analytic methods due to inflated type-I errors and possible masking of effects. Epidemiologists often use data reduction techniques by grouping the prognostic factors using a thematic approach, with themes deriving from biological considerations. We propose shrinkage-type estimators based on Bayesian penalization methods to estimate the effects of the risk factors using these themes. The properties of the estimators are examined using extensive simulations. The methodology is illustrated using data from a matched case-control study of polychlorinated biphenyls in relation to the etiology of non-Hodgkin's lymphoma. © 2015, The International Biometric Society.
A Monte Carlo Evaluation of Estimated Parameters of Five Shrinkage Estimate Formuli.
ERIC Educational Resources Information Center
Newman, Isadore; And Others
A Monte Carlo study was conducted to estimate the efficiency of and the relationship between five equations and the use of cross validation as methods for estimating shrinkage in multiple correlations. Two of the methods were intended to estimate shrinkage to population values and the other methods were intended to estimate shrinkage from sample…
NASA Astrophysics Data System (ADS)
Tonini, Roberto; Sandri, Laura; Anne Thompson, Mary
2015-06-01
PyBetVH is a completely new, free, open-source and cross-platform software implementation of the Bayesian Event Tree for Volcanic Hazard (BET_VH), a tool for estimating the probability of any magmatic hazardous phenomenon occurring in a selected time frame, accounting for all the uncertainties. New capabilities of this implementation include the ability to calculate hazard curves that describe the distribution of the exceedance probability as a function of intensity (e.g., tephra load) on a grid of points covering the target area. The computed hazard curves are (i) absolute (accounting for the probability of eruption in a given time frame, and for all possible vent locations and eruptive sizes) and (ii) Bayesian (computed at different percentiles, in order to quantify the epistemic uncertainty). Such curves represent the full information contained in a probabilistic volcanic hazard assessment (PVHA) and are well suited to become a main input to quantitative risk analyses. PyBetVH allows interactive visualization of both the computed hazard curves and the corresponding Bayesian hazard/probability maps. PyBetVH is designed to minimize the effort of end users, making PVHA results accessible to people who may be less experienced in probabilistic methodologies, e.g., decision makers. The broad compatibility of the Python language has also allowed PyBetVH to be installed on the VHub cyber-infrastructure, where it can be run online or downloaded at no cost. PyBetVH can be used to assess any type of magmatic hazard from any volcano. Here we illustrate how to perform a PVHA with PyBetVH, using the example of tephra fallout from the Okataina Volcanic Centre (OVC), New Zealand, and highlight the range of outputs that the tool can generate.
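The hazard-curve construction can be sketched generically: each posterior draw produces an exceedance-probability curve at one grid point, and percentiles across draws quantify the epistemic uncertainty. The numbers below are synthetic and do not reproduce BET_VH's event-tree logic.

    import numpy as np

    rng = np.random.default_rng(0)
    n_draws, n_sims = 200, 1000
    # lognormal tephra load per simulation, with draw-to-draw parameter
    # uncertainty expressed through a random location parameter mu
    mu = rng.normal(0.0, 0.3, size=n_draws)
    loads = rng.lognormal(mean=mu[:, None], sigma=1.0, size=(n_draws, n_sims))

    thresholds = np.array([0.5, 1.0, 2.0, 5.0, 10.0])        # intensity levels
    exceed = (loads[:, :, None] >= thresholds).mean(axis=1)  # curve per draw

    for q in (10, 50, 90):            # Bayesian percentiles of the hazard curve
        print(q, np.percentile(exceed, q, axis=0).round(3))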
NASA Astrophysics Data System (ADS)
Lowman, L.; Barros, A. P.
2014-12-01
Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern the individual mechanisms. Landscapes are modified by surface and fluvial erosion and exhumation, each of which operates over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale dependent, often valid for a single point-wise location or averaged over large areal extents and over periods spanning both intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows erosion rates to be estimated using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e., individual storms) over the 14-year period 1998-2011. We demonstrate an application of the Bayesian hierarchical modeling framework for five basins and one intermontane basin located in the central Andes between 5°S and 20°S. Using remotely sensed data on current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high-resolution (3 arc-second) digital elevation model (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations, and are 1-2 orders of magnitude larger than most millennial- and million-year-timescale estimates from thermochronology and cosmogenic nuclides.
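The deterministic core of the framework is the stream power erosion law. A minimal sketch, with discharge proxied by precipitation rate times drainage area and purely illustrative constants (K, m, n are not the study's posterior estimates):

    import numpy as np

    # Stream power erosion law: E = K * Q^m * S^n
    K, m, n = 1.0e-6, 0.5, 1.0
    precip = np.array([1.2, 0.8, 2.5])       # m/yr (e.g., TRMM-derived)
    area = np.array([5.0e7, 2.0e8, 1.0e7])   # m^2, drainage area
    slope = np.array([0.08, 0.03, 0.15])     # channel slope from a DEM

    Q = precip * area                        # discharge proxy
    E = K * Q ** m * slope ** n              # erosion rate (arbitrary units)
    print(E)

In the hierarchical setting, K, m and n become uncertain parameters with priors, and observed slope-precipitation data per elevation class inform their posterior.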
Modular analysis of the probabilistic genetic interaction network.
Hou, Lin; Wang, Lin; Qian, Minping; Li, Dong; Tang, Chao; Zhu, Yunping; Deng, Minghua; Li, Fangting
2011-03-15
Epistatic Miniarray Profiles (EMAP) have enabled the mapping of large-scale genetic interaction networks; however, the quantitative information gained from EMAP cannot be fully exploited when the data are interpreted as a discrete network based on an arbitrary hard threshold. To address this limitation, we adopted a mixture-modeling procedure to construct a probabilistic genetic interaction network and then implemented a Bayesian approach to identify densely interacting modules in the probabilistic network. Mixture modeling has been demonstrated to be an effective soft-threshold technique for EMAP measures. The Bayesian approach was applied to an EMAP dataset studying the early secretory pathway in Saccharomyces cerevisiae. Twenty-seven modules were identified, and 14 of those were enriched in gold-standard functional gene sets. We also conducted a detailed comparison with state-of-the-art algorithms, hierarchical clustering and Markov clustering. The experimental results show that the Bayesian approach outperforms the others in efficiently recovering biologically significant modules.
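The soft-threshold idea can be sketched with a two-component Gaussian mixture: posterior membership probabilities replace a hard cutoff on interaction scores. Simulated scores stand in for EMAP data here.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    scores = np.concatenate([rng.normal(0.0, 1.0, 900),     # background pairs
                             rng.normal(-3.0, 0.8, 100)])   # true interactions

    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(scores.reshape(-1, 1))
    post = gmm.predict_proba(scores.reshape(-1, 1))

    k = int(np.argmin(gmm.means_.ravel()))   # component with more negative mean
    p_interact = post[:, k]                  # probabilistic edge weight in [0, 1]
    print(p_interact[:5].round(3))

Each gene pair then carries a probability rather than a binary edge, and downstream module finding can weight evidence accordingly.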
Use of space-time models to investigate the stability of patterns of disease.
Abellan, Juan Jose; Richardson, Sylvia; Best, Nicky
2008-08-01
The use of Bayesian hierarchical spatial models has become widespread in disease mapping and ecologic studies of health-environment associations. In this type of study, the data are typically aggregated over an extensive time period, thus neglecting the time dimension. The output of purely spatial disease mapping studies is therefore the average spatial pattern of risk over the period analyzed, but the results do not inform about, for example, whether a high average risk was sustained over time or changed over time. We investigated how including the time dimension in disease-mapping models strengthens the epidemiologic interpretation of the overall pattern of risk. We discuss a class of Bayesian hierarchical models that simultaneously characterize and estimate the stable spatial and temporal patterns as well as departures from these stable components. We show how useful rules for classifying areas as stable can be constructed based on the posterior distribution of the space-time interactions. We carry out a simulation study to investigate the sensitivity and specificity of the decision rules we propose, and we illustrate our approach in a case study of congenital anomalies in England. Our results confirm that extending hierarchical disease-mapping models to models that simultaneously consider space and time leads to a number of benefits in terms of interpretation and potential for detection of localized excesses.
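A generic sketch of such a classification rule (not the authors' exact construction): flag an area as unstable if any of its space-time interaction terms departs from zero with high posterior probability, computed directly from MCMC samples. The cutoffs are illustrative.

    import numpy as np

    def is_stable(xi_samples, delta=0.1, prob=0.8):
        """xi_samples: (n_mcmc, n_periods) interaction samples for one area."""
        p_depart = (np.abs(xi_samples) > delta).mean(axis=0)  # per period
        return not bool(np.any(p_depart > prob))

    rng = np.random.default_rng(2)
    quiet = rng.normal(0.0, 0.05, size=(4000, 10))   # interactions near zero
    spike = quiet.copy()
    spike[:, 3] += 0.5                               # localized excess, period 4
    print(is_stable(quiet), is_stable(spike))        # True False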
Shrinkage Degree in L2-Rescale Boosting for Regression.
Xu, Lin; Lin, Shaobo; Wang, Yao; Xu, Zongben
2017-08-01
L2-rescale boosting (L2-RBoosting) is a variant of L2-Boosting that can essentially improve the generalization performance of L2-Boosting. The key feature of L2-RBoosting lies in introducing a shrinkage degree to rescale the ensemble estimate in each iteration. Thus, the shrinkage degree determines the performance of L2-RBoosting. The aim of this paper is to develop a concrete analysis of how to determine the shrinkage degree in L2-RBoosting. We propose two feasible ways to select the shrinkage degree: the first is to parameterize it, and the second is to develop a data-driven approach. After rigorously analyzing the importance of the shrinkage degree in L2-RBoosting, we compare the pros and cons of the proposed methods. We find that although both approaches attain the same learning rates, the structure of the final estimator of the parameterized approach is better, which sometimes yields better generalization capability when the number of samples is finite. We therefore recommend parameterizing the shrinkage degree of L2-RBoosting. We also present an adaptive parameter-selection strategy for the shrinkage degree and verify its feasibility through both theoretical analysis and numerical experiments. The obtained results enhance the understanding of L2-RBoosting and give guidance on how to use it for regression tasks.
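A minimal sketch of a rescale-boosting iteration for regression, with componentwise least squares as the weak learner; the shrinkage-degree schedule 2/(k+2) is borrowed from the related rescaled greedy algorithm literature and is illustrative only, not the paper's parameterization.

    import numpy as np

    rng = np.random.default_rng(3)
    n, p = 200, 20
    X = rng.normal(size=(n, p))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 4] + rng.normal(scale=0.5, size=n)

    F = np.zeros(n)
    for k in range(1, 101):
        alpha = 2.0 / (k + 2.0)               # shrinkage degree schedule
        Fs = (1.0 - alpha) * F                # rescale the current ensemble
        r = y - Fs                            # residual of the rescaled fit
        coefs = X.T @ r / (X ** 2).sum(axis=0)            # per-coordinate fit
        sse = ((r[:, None] - X * coefs) ** 2).sum(axis=0)
        j = int(np.argmin(sse))               # best single-coordinate learner
        F = Fs + X[:, j] * coefs[j]           # add it to the rescaled ensemble
    print(round(float(np.mean((y - F) ** 2)), 3))  # MSE shrinks toward noise level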
Shrinkage and footage loss from drying 4/4-inch hard maple lumber.
Daniel E. Dunmire
1968-01-01
Equations are presented for estimating shrinkage and resulting footage losses due to drying hard maple lumber. The equations, based on board shrinkage data taken from a representative lumber sample, are chiefly intended for use with lots of hard maple lumber, such as carloads, truckloads, or kiln loads, but also can be used for estimating the average shrinkage of...
Strategies to overcome polymerization shrinkage--materials and techniques. A review.
Malhotra, Neeraj; Kundabala, M; Shashirashmi, Acharya
2010-03-01
Stress generation at tooth/resin-composite interfaces is one of the important reasons for failure of resin-based composite (RBC) restorations, owing to the inherent polymerization shrinkage of these materials. Unrelieved stresses can weaken the bond between the tooth structure and the restoration, eventually producing a gap at the restoration margins. This can lead to postoperative sensitivity, secondary caries, fracture of the restorations, marginal deterioration and discoloration. As polymerization shrinkage cannot be eliminated completely, various techniques and protocols have been suggested for the manipulation of, and restorative procedures with, RBCs to minimize the shrinkage and associated stresses. The introduction of newer monomer systems (siloranes) may also mitigate the problem of shrinkage stress. This review highlights the materials science advances and advocated techniques, currently available or in the trial/testing phase, for dealing with polymerization shrinkage in a clinical environment. Minimizing shrinkage stresses in RBC restorations may improve the success rate and survival of restorations. Thus, it is important for dental practitioners to be aware of the various techniques and materials available to reduce these shrinkage stresses and to keep up to date with the current knowledge on this issue.
Effect of the key mixture parameters on shrinkage of reactive powder concrete.
Ahmad, Shamsad; Zubair, Ahmed; Maslehuddin, Mohammed
2014-01-01
Reactive powder concrete (RPC) mixtures are reported to have excellent mechanical and durability characteristics. However, such concrete mixtures, having a high amount of cementitious materials, may exhibit high early shrinkage, causing cracking of the concrete. In the present work, an attempt has been made to study the simultaneous effects of three key mixture parameters on the shrinkage of RPC mixtures. Considering three levels of each of the three key mixture factors, a total of 27 RPC mixtures were prepared according to a 3³ factorial experimental design. Specimens from all 27 mixtures were monitored for shrinkage at different ages over a total period of 90 days. The test results were plotted to observe the variation of shrinkage with time and the effects of the key mixture factors. The experimental data for 90-day shrinkage were used in an analysis of variance to identify the significance of each factor and to obtain an empirical equation correlating the shrinkage of RPC with the three key mixture factors. The rate of shrinkage development was higher at early ages. The water-to-binder ratio was found to be the most prominent factor, followed by cement content, with silica fume content having the least effect.
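The factorial analysis can be sketched with statsmodels; the 27 mixture runs and the response below are simulated so that the water/binder ratio dominates, mirroring the study's finding, and the factor levels are coded -1/0/+1 rather than the study's actual values.

    import numpy as np
    import pandas as pd
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(4)
    levels = [-1, 0, 1]
    rows = [(wb, c, sf) for wb in levels for c in levels for sf in levels]
    df = pd.DataFrame(rows, columns=['wb', 'cement', 'sf'])   # 3^3 = 27 runs
    df['shrinkage'] = (300 + 80 * df.wb + 30 * df.cement + 8 * df.sf
                       + rng.normal(0, 10, len(df)))          # microstrain

    fit = ols('shrinkage ~ wb + cement + sf', data=df).fit()
    print(anova_lm(fit))   # wb shows by far the largest F statistic
    print(fit.params)      # an empirical linear shrinkage equation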
Spatiotemporal Patterns of Ground Monitored PM2.5 Concentrations in China in Recent Years
Li, Junming; Han, Xiulan; Li, Xiao; Yang, Jianping; Li, Xuejiao
2018-01-01
This paper explores the space-time evolution of city-level PM2.5 concentrations in mainland China based on ground monitoring between 13 May 2014 and 30 May 2017, employing descriptive statistics and a Bayesian spatiotemporal hierarchical model. Monthly average PM2.5 concentrations showed a very pronounced seasonal cycle: the period from October to April of the following year was a heavy pollution period, whereas April to October of the same year was a light pollution period. Daily and weekly average PM2.5 concentrations in 338 cities in mainland China presented no significant spatial differences during the severe pollution period but large spatial differences during light pollution periods. The severely polluted areas were mainly distributed in the Beijing-Tianjin-Hebei urban agglomeration in the North China Plain at the beginning of each autumn-winter season (September), spreading to the Northeast Plains after October and later to other cities in mainland China, eventually covering most cities. PM2.5 pollution in China exhibited a cyclic pattern of first spreading and then contracting in space over the two spring-summer seasons: it showed an obvious process of diffusion followed by contraction during the spring-summer season of 2015, but no obvious diffusion during the spring-summer season of 2016, maintaining a stable and more concentrated spatial structure after the contraction in June. The heavily polluted areas were continuously and steadily concentrated in East China, Central China and Xinjiang Province.
NASA Astrophysics Data System (ADS)
Ge, Honghao; Ren, Fengli; Li, Jun; Han, Xiujun; Xia, Mingxu; Li, Jianguo
2017-03-01
A four-phase dendritic model was developed to predict macrosegregation, shrinkage cavity, and porosity during solidification. The model takes into account several important factors, including the dendritic structure of equiaxed crystals, melt convection, crystal sedimentation, nucleation, growth, and the shrinkage of the solidified phases. Furthermore, a modified shrinkage criterion was established within this model to predict the shrinkage porosity (microporosity) of a 55-ton industrial Fe-3.3 wt pct C ingot. The predicted macrosegregation pattern and shrinkage cavity shape are in good agreement with experimental results. The shrinkage cavity has a significant effect on the formation of positive segregation in the hot-top region, which generally forms during the last stage of ingot casting. The dendritic equiaxed grains also play an important role in the formation of A-segregation. A three-dimensional laminar structure of A-segregation in an industrial ingot was predicted for the first time using a 3D simulation.
Variation of Shrinkage Strain within the Depth of Concrete Beams.
Jeong, Jong-Hyun; Park, Yeong-Seong; Lee, Yong-Hak
2015-11-16
The variation of shrinkage strain within beam depth was examined through four series of time-dependent laboratory experiments on unreinforced concrete beam specimens. Two types of beam specimens, horizontally cast and vertically cast, were tested; shrinkage variation was observed in the horizontally cast specimens. This indicated that the shrinkage variation within the beam depth was due to water bleeding and tamping during the placement of the fresh concrete. Shrinkage strains were measured within the beam depth by two types of strain gages, surface-attached and embedded, and the shrinkage strain distribution showed a consistent tendency for the two gage types. The test beams were cut into four sections after completion of the test, and each cutting plane was divided into four equal sub-areas in which the aggregate concentration was measured. The aggregate concentration increased towards the bottom of the beam. The shrinkage strain distribution was estimated by Hobbs' equation, which accounts for the change of aggregate volume concentration.
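The dependence of shrinkage on aggregate volume concentration can be illustrated with the classical Pickett-type relation eps_c = eps_p * (1 - g)^n, a close relative of Hobbs' equation (whose exact form is not reproduced here); the paste shrinkage, exponent and concentrations below are illustrative.

    def pickett(eps_paste, g, n=1.5):
        """Composite shrinkage from paste shrinkage and aggregate fraction g;
        n of roughly 1.2-1.7 is typical for this family of relations."""
        return eps_paste * (1.0 - g) ** n

    # aggregate concentration increasing toward the beam bottom, as measured
    for g in (0.55, 0.60, 0.65, 0.70):
        print(g, round(pickett(1000e-6, g), 6))   # shrinkage strain decreases

This reproduces the qualitative finding: where the aggregate concentration is higher (toward the bottom of a horizontally cast beam), the free shrinkage strain is lower.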
Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar
Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu
2015-01-01
Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory, in which angular super-resolution is achieved by solving the corresponding deconvolution problem under the maximum a posteriori (MAP) criterion. The algorithm treats the noise as composed of two mutually independent parts: a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, a Laplace distribution is used to represent the prior information about the targets, under the assumption that the radar image of interest can be represented by the dominant scatterers in the scene. Experimental results demonstrate that the proposed deconvolution algorithm achieves higher precision for angular super-resolution than conventional algorithms such as Tikhonov regularization, the Wiener filter and the Richardson-Lucy algorithm.
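The iterative-shrinkage structure is easy to sketch in one dimension: a gradient step on the data-fit term followed by a prior-induced shrinkage of each pixel. The soft threshold below corresponds to the simpler case of Gaussian noise with a Laplacian prior; the paper derives different shrinkage functions for its Gaussian-plus-Poisson noise model and generalized priors. Data are synthetic.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 128
    h = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
    h /= h.sum()                                     # antenna-pattern-like blur
    x_true = np.zeros(n)
    x_true[[20, 64, 100]] = [5.0, 3.0, 4.0]          # sparse dominant scatterers
    y = np.convolve(x_true, h, mode='same') + rng.normal(0, 0.05, n)

    lam, step = 0.02, 1.0        # step <= 1 is safe: the kernel has unit L1 norm
    x = np.zeros(n)
    for _ in range(300):
        resid = np.convolve(x, h, mode='same') - y
        grad = np.convolve(resid, h[::-1], mode='same')   # adjoint of the blur
        z = x - step * grad                               # unregularized update
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    print(np.flatnonzero(x > 0.5))   # recovered peaks near indices 20, 64, 100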
Pitel, Mark L
2013-09-01
Despite numerous advances in composite resin technology over the course of many decades, shrinkage behavior and the resultant stresses inherent to direct placed composite restorations continue to challenge clinicians. This overview of composite resins includes a review of their history and development along with a discussion of strategies for reducing polymerization shrinkage. An assessment of the clinical significance of these materials is also provided, including a discussion of the differences between polymerization shrinkage and stress, incremental layering versus bulk placement, and the emergence of lower shrinkage stress monomer chemistry.
Devitrification and shrinkage behavior of silica fibers
NASA Technical Reports Server (NTRS)
Zaplatynsky, I.
1972-01-01
The devitrification and shrinkage of three batches of silica fibers were investigated in the temperature range 1200 to 1350 °C. Fibers with high water and impurity content devitrified rapidly to cristobalite and quartz and exhibited rapid, but the least amount of, shrinkage. A batch with low water and impurity content devitrified more slowly, to cristobalite only, and underwent severe shrinkage by a viscous-flow mechanism. A third batch, of intermediate purity and low water content, devitrified at a moderate rate, mainly to cristobalite, but shrank very rapidly. Completely devitrified silica fibers did not exhibit any further shrinkage.
Aerosol particle shrinkage event phenomenology in a South European suburban area during 2009-2015
NASA Astrophysics Data System (ADS)
Alonso-Blanco, E.; Gómez-Moreno, F. J.; Núñez, L.; Pujadas, M.; Cusack, M.; Artíñano, B.
2017-07-01
A high number of aerosol particle shrinkage cases (70) were identified and analyzed from an extensive and representative database of aerosol size distributions obtained between 2009 and 2015 at an urban background site in Madrid (Spain). A descriptive classification based on the process from which the shrinkage began is proposed, according to which shrinkage events were divided into three groups: (1) NPF + shrinkage (NPF + S) events, (2) aerosol particle growth + shrinkage (G + S) events, and (3) pure shrinkage (S) events. The largest number of shrinkages corresponded to the S type, followed by NPF + S, while G + S events were the least frequent group recorded. The duration of shrinkage events varied widely, from 0.75 to 8.5 h, and shrinkage rates ranged from -1.0 to -11.1 nm h^-1. These processes typically occurred in the afternoon, around 18:00 UTC, caused by two situations: (i) an increase in wind speed, usually associated with a change in wind direction (over 60% of the observations), and (ii) the reduction of photochemical activity at the end of the day. All shrinkages were detected during the warm period, mainly between May and August, when local meteorological conditions (high solar irradiance and temperature, low relative humidity), atmospheric processes (high photochemical activity) and the availability of aerosol-forming precursors were favorable for their development. As a consequence of these processes, particles in the Aitken mode shrank into the nucleation mode; the accumulation mode did not undergo significant changes. In some cases, a dilution of the particulate content of the ambient air was observed. This work goes further than previous studies of aerosol particle shrinkage by introducing a classification methodology for these processes and, compared to other studies, it is supported by a large and representative number of observations. It thus contributes to a better understanding of this type of atmospheric aerosol transformation and its features.
Alternative methods for determining shrinkage in restorative resin composites.
de Melo Monteiro, Gabriela Queiroz; Montes, Marcos Antonio Japiassú Resende; Rolim, Tiago Vieira; de Oliveira Mota, Cláudia Cristina Brainer; de Barros Correia Kyotoku, Bernardo; Gomes, Anderson Stevens Leônidas; de Freitas, Anderson Zanardi
2011-08-01
The purpose of this study was to evaluate the polymerization shrinkage of resin composites using a coordinate measuring machine and optical coherence tomography, alongside a more widely known method, the Archimedes principle. Two null hypotheses were tested: (1) there are no differences between the materials tested; (2) there are no differences between the methods used for polymerization shrinkage measurements. The polymerization shrinkage of seven resin-based dental composites (Filtek Z250™, Filtek Z350™, Filtek P90™/3M ESPE; Esthet-X™, TPH Spectrum™/Dentsply; 4 Seasons™, Tetric Ceram™/Ivoclar-Vivadent) was measured. For the coordinate measuring machine measurements, composites were applied to a cylindrical Teflon mold (7 mm × 2 mm), polymerized and removed from the mold. The difference between the volume of the mold and the volume of the specimen was calculated as a percentage. Optical coherence tomography was also used for linear shrinkage evaluations: the thickness of the specimens was measured before and after photoactivation. Polymerization shrinkage was additionally measured using the Archimedes principle of buoyancy (n = 5). Statistical analysis of the data was performed with ANOVA and the Games-Howell test. The results show that polymerization shrinkage values vary with the method used. Despite the numerical differences, the ranking of the resins was very similar across methods, with Filtek P90 presenting the lowest shrinkage values. Because of the variations in the results, reported values should only be used to compare materials within the same method; nevertheless, it is possible to rank composites for polymerization shrinkage and to relate data from the different test methods. Independently of the method used, reduced polymerization shrinkage was found for the silorane-based composite. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
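Comparing the OCT linear measurement with the volumetric methods requires a strain conversion. For small strains, free isotropic shrinkage gives dV/V of roughly 3 dL/L, while a thin bonded specimen shrinks mostly axially, so dV/V approaches dL/L; the numbers below are invented for illustration.

    def volumetric_from_linear(dl_over_l, mode='isotropic'):
        # isotropic free shrinkage: dV/V ~= 3 dL/L (small-strain approximation)
        # bonded thin disc: shrinkage is mostly axial, so dV/V ~= dL/L
        factor = 3.0 if mode == 'isotropic' else 1.0
        return factor * dl_over_l

    linear = 0.008                    # 0.8 % linear shrinkage, e.g., from OCT
    print(volumetric_from_linear(linear) * 100, '% (free, isotropic)')
    print(volumetric_from_linear(linear, 'axial') * 100, '% (bonded disc)')

The appropriate factor depends on specimen confinement, which is one reason shrinkage values from different test methods can only be compared with care.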
SUVI Thematic Maps: A new tool for space weather forecasting
NASA Astrophysics Data System (ADS)
Hughes, J. M.; Seaton, D. B.; Darnel, J.
2017-12-01
The new Solar Ultraviolet Imager (SUVI) instruments aboard NOAA's GOES-R series satellites collect continuous, high-quality imagery of the Sun in six wavelengths. SUVI imagers produce at least one image every 10 seconds, or 8,640 images per day, considerably more data than observers can digest in real time. Over the projected 20-year lifetime of the four GOES-R series spacecraft, SUVI will provide critical imagery for space weather forecasters and produce an extensive but unwieldy archive. In order to condense the database into a dynamic and searchable form, we have developed solar thematic maps: maps of the Sun with key features, such as coronal holes, flares, bright regions, quiet corona, and filaments, identified. Thematic maps will be used in NOAA's Space Weather Prediction Center to improve forecaster response time to solar events and to generate several derivative products. Likewise, scientists use thematic maps to find observations of interest more easily. Using an expert-trained naive Bayesian classifier to label each pixel, we create thematic maps in real time. We created software to collect expert classifications of solar features based on SUVI images. Using this software, we compiled a database of expert classifications, from which we could characterize the distribution of pixels associated with each theme. Given new images, the classifier assigns each pixel the most appropriate label according to the trained distribution. Here we describe the software used to collect expert training and the successes and limitations of the classifier. The algorithm identifies coronal holes very well but fails to consistently detect filaments and prominences. We compare the Bayesian classifier to an artificial neural network, one of our attempts to overcome the aforementioned limitations. These results are very promising and encourage future research into an ensemble classification approach.
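The per-pixel labeling step can be sketched with a Gaussian naive Bayes classifier: each pixel is a vector of intensities in the six SUVI channels, and training pixels carry expert labels. The data below are synthetic stand-ins for SUVI imagery and expert annotations.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(6)
    themes = {0: 'coronal hole', 1: 'quiet corona', 2: 'bright region'}
    means = np.array([[0.2] * 6, [1.0] * 6, [3.0] * 6])  # per-theme mean spectra

    # 500 expert-labeled training pixels per theme, six channels each
    X_train = np.vstack([rng.normal(means[t], 0.3, (500, 6)) for t in themes])
    y_train = np.repeat(list(themes), 500)

    clf = GaussianNB().fit(X_train, y_train)

    pixel = rng.normal(means[2], 0.3, (1, 6))    # one new six-channel pixel
    print(themes[int(clf.predict(pixel)[0])])    # -> 'bright region'

In production, the same predict call is applied to every pixel of an image to produce the thematic map.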
Thermoelectrically controlled device for studies of temperature-induced corneal shrinkage
NASA Astrophysics Data System (ADS)
Borja, David; Manns, Fabrice; Fernandez, Viviana; Lamar, Peggy; Soederberg, Per G.; Parel, Jean-Marie A.
2002-06-01
The purpose of this study was to design and calibrate a device to measure the dynamics of thermal shrinkage in corneal and scleral strips. The apparatus consists of a thermoelectric cell, controlled by a temperature controller, designed to generate temperatures up to 90 °C in rectangular corneal strips; a copper cuvette filled with Dextran solution that holds the corneal strip; and a displacement sensor that measures the change in length of the tissue during heat-induced shrinkage. The device was tested on corneal tissue from Florida Eye-Bank eyes that were cut into 2 × 4 mm rectangular strips. Preliminary results indicate that our system can reproducibly create and accurately measure thermally induced corneal shrinkage. Shrinkage experiments will be used to optimize laser parameters for corneal shrinkage during laser thermokeratoplasty and laser scleral buckling.
Lopes, Lawrence Gonzaga; Franco, Eduardo Batista; Pereira, José Carlos; Mondelli, Rafael Francisco Lia
2008-01-01
The aim of this study was to evaluate the polymerization shrinkage and shrinkage stress of composites polymerized with LED and quartz-tungsten-halogen (QTH) light sources. The LED was used in conventional mode (CM), and the QTH was used in both conventional and pulse-delay (PD) modes. The composite resins used were Z100, A110, SureFil and Bisfil 2B (chemical-cured). Composite deformation upon polymerization was measured by the strain-gauge method, and shrinkage stress was measured by photoelastic analysis. The polymerization shrinkage data were analyzed statistically using two-way ANOVA and the Tukey test (p ≤ 0.05), and the stress data were analyzed by one-way ANOVA and the Tukey test (p ≤ 0.05). The shrinkage and stress means of Bisfil 2B were significantly lower than those of Z100, A110 and SureFil. In general, the PD mode reduced the contraction and stress values compared with the CM, and the LED generated the same stress as the QTH in conventional mode. Regardless of the activation mode, SureFil produced lower contraction and stress values than the other light-cured resins; conversely, Z100 and A110 produced the greatest contraction and stress values. As expected, the chemically cured resin generated lower shrinkage and stress than the light-cured resins. In conclusion, the PD mode effectively decreased contraction stress for Z100 and A110, and the development of stress in light-cured resins depended on the shrinkage value.
Modeling dental composite shrinkage by digital image correlation and finite element methods
NASA Astrophysics Data System (ADS)
Chen, Terry Yuan-Fang; Huang, Pin-Sheng; Chuang, Shu-Fen
2014-10-01
Dental composites are light-curable resin-based materials with an inherent defect of polymerization shrinkage which may cause tooth deflection and debonding of restorations. This study aimed to combine digital image correlation (DIC) and finite element analysis (FEA) to model the shrinkage behaviors under different light curing regimens. Extracted human molars were prepared with proximal cavities for composite restorations, and then divided into three groups to receive different light curing protocols: regular intensity, low intensity, and step-curing consisting of low and high intensities. For each tooth, the composite fillings were consecutively placed under both unbonded and bonded conditions. At first, the shrinkage of the unbonded restorations was analyzed by DIC and adopted as the setting of FEA. The simulated shrinkage behaviors obtained from FEA were further validated by the measurements in the bonded cases. The results showed that different light curing regimens affected the shrinkage in unbonded restorations, with regular intensity showing the greatest shrinkage strain on the top surface. The shrinkage centers in the bonded cases were located closer to the cavity floor than those in the unbonded cases, and were less affected by curing regimens. The FEA results showed that the stress was modulated by the accumulated light energy density, while step-curing may alleviate the tensile stress along the cavity walls. In this study, DIC provides a complete description of the polymerization shrinkage behaviors of dental composites, which may facilitate the stress analysis in the numerical investigation.
Acoustic emission analysis of tooth-composite interfacial debonding.
Cho, N Y; Ferracane, J L; Lee, I B
2013-01-01
This study detected tooth-composite interfacial debonding during composite restoration by means of acoustic emission (AE) analysis and investigated the effects of composite properties and adhesives on AE characteristics. The polymerization shrinkage, peak shrinkage rate, flexural modulus, and shrinkage stress of a methacrylate-based universal hybrid, a flowable, and a silorane-based composite were measured. Class I cavities on 49 extracted premolars were restored with 1 of the 3 composites and 1 of the following adhesives: 2 etch-and-rinse adhesives, 2 self-etch adhesives, and an adhesive for the silorane-based composite. AE analysis was done for 2,000 sec during light-curing. The silorane-based composite exhibited the lowest shrinkage (rate), the longest time to peak shrinkage rate, the lowest shrinkage stress, and the fewest AE events. AE events were detected immediately after the beginning of light-curing in most composite-adhesive combinations, but not until 40 sec after light-curing began for the silorane-based composite. AE events were concentrated at the initial stage of curing in self-etch adhesives compared with etch-and-rinse adhesives. Reducing the shrinkage (rate) of composites resulted in reduced shrinkage stress and less debonding, as evidenced by fewer AE events. AE is an effective technique for monitoring, in real time, the debonding kinetics at the tooth-composite interface.
Kim, L U; Kim, J W; Kim, C K
2006-09-01
To prepare a dental composite with low curing shrinkage and excellent mechanical strength, various 2,2-bis[4-(2-hydroxy-3-methacryloyloxy propoxy) phenyl] propane (Bis-GMA) derivatives were synthesized via molecular structure design, and the properties of their mixtures were then explored. Bis-GMA derivatives obtained by substituting methyl groups for hydrogen on the phenyl ring of Bis-GMA exhibited lower curing shrinkage than Bis-GMA, although their viscosities were higher. Other Bis-GMA derivatives, which contained glycidyl methacrylate as a molecular end group, exhibited reduced curing shrinkage and viscosity. Methoxy substitution of the hydroxyl groups on the Bis-GMA derivatives was performed to further reduce the viscosity and curing shrinkage. Various resin mixtures with the same viscosity as a commercial one were prepared, and their curing shrinkage was examined. A resin mixture containing 2,2-bis[3,5-dimethyl-4-(2-methoxy-3-methacryloyloxy propoxy) phenyl] propane (TMBis-M-GMA) as the base resin and 4-tert-butylphenoxy-2-methoxypropyl methacrylate (t-BP-M-GMA) as the diluent exhibited the lowest curing shrinkage among them. The composite prepared from this resin mixture also exhibited the lowest curing shrinkage, along with enhanced mechanical properties.
Predictive model of outcome of targeted nodal assessment in colorectal cancer.
Nissan, Aviram; Protic, Mladjan; Bilchik, Anton; Eberhardt, John; Peoples, George E; Stojadinovic, Alexander
2010-02-01
Improvement in staging accuracy is the principal aim of targeted nodal assessment in colorectal carcinoma. Technical factors independently predictive of false-negative (FN) sentinel lymph node (SLN) mapping should be identified to facilitate operative decision making. The objectives were to define independent predictors of FN SLN mapping and to develop a predictive model that could support surgical decisions. Data were analyzed from two completed prospective clinical trials involving 278 patients with colorectal carcinoma undergoing SLN mapping. The clinical outcome of interest was FN SLN(s), defined as one(s) with no apparent tumor cells in the presence of non-SLN metastases. To assess the independent predictive effect of a covariate for a nominal response (FN SLN), a logistic regression model was constructed and parameters estimated using maximum likelihood. A probabilistic Bayesian model was also trained and cross-validated using 10-fold train-and-test sets to predict FN SLN mapping. The area under the curve (AUC) from receiver operating characteristic curves of these predictions was calculated to determine the predictive value of the model. The number of SLNs (<3; P = 0.03) and tumor-replaced nodes (P < 0.01) independently predicted FN SLN. Cross-validation of the model created with Bayesian network analysis effectively predicted FN SLN (AUC = 0.84-0.86). The positive and negative predictive values of the model are 83% and 97%, respectively. This study supports a minimum threshold of 3 nodes for targeted nodal assessment in colorectal cancer, and establishes sufficient basis to conclude that SLN mapping and biopsy cannot be justified in the presence of clinically apparent tumor-replaced nodes.
Comparative Study of Shrinkage and Non-Shrinkage Model of Food Drying
NASA Astrophysics Data System (ADS)
Shahari, N.; Jamil, N.; Rasmani, KA.
2016-08-01
A single-phase heat and mass transfer model has traditionally been used to represent the moisture and temperature distribution during the drying of food. Several effects of the drying process, such as physical and structural changes, have been considered in order to increase understanding of the movement of water and temperature. However, the comparison between heat and mass equations with and without structural change (in terms of shrinkage), which can affect the accuracy of the prediction model, has been little investigated. In this paper, two mathematical models describing heat and mass transfer in food, with and without the assumption of structural change, were analysed. The equations were solved using the finite difference method, with a transformed coordinate system introduced in the numerical computations for the shrinkage model. The results show that the shrinkage model predicts a higher temperature at a given time than the non-shrinkage model. Furthermore, the predicted moisture content decreased faster at a given time when the shrinkage effect was included in the model.
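A minimal sketch of the shrinkage model's numerical treatment, assuming a 1D slab, an explicit finite-difference scheme, and a linear shrinkage rule tied to mean moisture; none of these specifics come from the paper, and the parameters are purely illustrative.

    import numpy as np

    nx, nt = 51, 50000
    dxi, dt = 1.0 / (nx - 1), 2.0             # normalized grid spacing, time step (s)
    D = 1e-9                                  # moisture diffusivity (m^2/s), assumed
    L0 = 5e-3                                 # initial half-thickness (m), assumed
    beta = 0.3                                # shrinkage coefficient, assumed
    xi = np.linspace(0.0, 1.0, nx)            # normalized space coordinate x / L(t)
    M = np.ones(nx)                           # normalized moisture content
    M[-1] = 0.0                               # drying surface held dry

    L = L0
    for _ in range(nt):
        L_new = L0 * (1.0 - beta * (1.0 - M.mean()))  # shrink with mean drying
        Ldot = (L_new - L) / dt
        Mn = M.copy()
        # Transformed equation: dM/dt = D/L^2 d2M/dxi2 + xi*(Ldot/L) dM/dxi
        Mn[1:-1] = (M[1:-1]
                    + dt * D / L**2 * (M[2:] - 2.0 * M[1:-1] + M[:-2]) / dxi**2
                    + dt * xi[1:-1] * Ldot / L * (M[2:] - M[:-2]) / (2.0 * dxi))
        Mn[0] = Mn[1]                         # symmetry at the slab center
        M, L = Mn, L_new

    print(f"half-thickness {L * 1e3:.2f} mm, mean moisture {M.mean():.3f}")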
NASA Astrophysics Data System (ADS)
Santos, Jonnathan D.; Fajardo, Jorge I.; Cuji, Alvaro R.; García, Jaime A.; Garzón, Luis E.; López, Luis M.
2015-09-01
A polymeric natural-fiber-reinforced composite was developed by extrusion and injection molding. The shrinkage and warpage of high-density polyethylene reinforced with short natural fibers of Guadua angustifolia Kunth were analyzed by experimental measurements and computer simulations. Autodesk Moldflow® and SolidWorks® were employed to simulate the volumetric shrinkage and warpage of injected parts at different reinforcement levels: 0 wt.%, 20 wt.%, 30 wt.% and 40 wt.%. The restrictive effect of the reinforcement on the volumetric shrinkage and warpage of injected parts is evident. The results indicate that the volumetric shrinkage of the natural composite is reduced by up to 58% with increasing fiber content, whereas warpage shows a reduction of 79% to 86% at the highest fiber contents. These results suggest that natural fibers are highly beneficial for improving the assembly properties of polymeric natural-fiber-reinforced composites.
Rodhouse, T.J.; Irvine, K.M.; Vierling, K.T.; Vierling, L.A.
2011-01-01
Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on strength of statistical shrinkage, autocorrelation and topographic heterogeneity-a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach.
Ren, Xinyu; Lv, Yingying; Li, Mingshi
2017-03-01
Changes in forest ecosystem structure and functions are key research issues in landscape ecology. In this study, building on Forman's theory, we considered five spatially explicit processes associated with fragmentation, including perforation, dissection, subdivision, shrinkage, and attrition, and two processes associated with restoration, i.e., increment and expansion. Following this theory, a forest fragmentation and restoration process model that can detect the spatially explicit processes and ecological consequences of forest landscape change was developed and tested. Using the National Land Cover Databases (2001, 2006 and 2011), the model was applied to US western natural forests and southeastern plantation forests to quantify and classify forest patch losses into one of four fragmentation processes (the dissection process was merged into the subdivision process) and to classify newly gained forest patches based on the two restoration processes. The spatio-temporal differences in fragmentation and restoration patterns and trends between natural forests and plantations were then compared. By overlaying the forest fragmentation/restoration process maps with target-year land cover data and land ownership vectors, the outcomes of forest fragmentation and the contributors to forest restoration in federal and nonfederal lands were identified. Results showed that, in natural forests, the forest change patches were concentrated around urban/forest, cultivated/forest, and shrubland/forest interfaces, while plantation change patches were scattered sparsely and irregularly. Shrinkage was the most common fragmentation process, and its average patch size was the smallest. Expansion, the most common restoration process, was observed in both natural forests and plantations, and often occurred around previous expansion or covered previous subdivision or shrinkage processes. The overall temporal fragmentation pattern of natural forests followed a "perforation-subdivision/shrinkage-attrition" pathway, corresponding to Forman's landscape fragmentation rule, while plantation forests did not follow the rule strictly. The main land cover types resulting from forest fragmentation in natural and plantation forests were shrubland and herbaceous cover, produced mainly through the subdivision and shrinkage processes. The restoration processes in plantation forests were more diverse and efficient than those in natural forests, which were simpler and had a lower regrowth rate. Fragmentation mostly occurred in nonfederal lands. In natural forests, fragmentation patterns differed between land tenures, whereas plantation patterns were similar in federal and nonfederal lands. Copyright © 2016 Elsevier Ltd. All rights reserved.
Shrinkage stress in concrete under dry-wet cycles: an example with concrete column
NASA Astrophysics Data System (ADS)
Gao, Yuan; Zhang, Jun; Luosun, Yiming
2014-02-01
This paper focuses on the simulation of shrinkage stress in concrete structures under dry-wet environments. In the modeling, an integrative model for autogenous and drying shrinkage prediction of concrete under dry-wet cycles is introduced first. Second, a model taking both cement hydration and moisture diffusion into account simultaneously is used to calculate the distribution of interior humidity in concrete. Using these two models, the distributions of shrinkage strain and stress in concrete columns made of normal- and high-strength concrete under dry-wet cycles are calculated. The model results show that the shrinkage gradient along the radial direction of the column, from the center to the outer surface, increases with age as the outer circumference dries. Under drying conditions, the maximum and minimum shrinkage occur at the outer surface and the center of the column, respectively. As wetting starts, the shrinkage strain decreases with increasing interior humidity: the closer to the wetting face, the higher the humidity and the lower the shrinkage strain, as well as the lower the shrinkage stress. As a result of the dry-wet cycles acting on the outer circumference of the column, a cyclic stress state develops within the area close to the outer surface. The depth of the zone influenced by dry-wet cyclic action depends on concrete strength and the dry-wet regime: for low-strength concrete, a relatively deeper influence zone is expected compared with high-strength concrete. The models are verified by concrete-steel composite ring tests, and good agreement between model and test results is found.
Drying shrinkage problems in high PI subgrade soils.
DOT National Transportation Integrated Search
2014-01-01
The main objective of this study was to investigate the longitudinal cracking in pavements due to drying shrinkage of high PI subgrade soils. The study involved laboratory soil testing and modeling. The shrinkage cracks usually occur within the v...
Chen, Wenan; McDonnell, Shannon K; Thibodeau, Stephen N; Tillmans, Lori S; Schaid, Daniel J
2016-11-01
Functional annotations have been shown to improve both the discovery power and fine-mapping accuracy in genome-wide association studies. However, the optimal strategy to incorporate the large number of existing annotations is still not clear. In this study, we propose a Bayesian framework to incorporate functional annotations in a systematic manner. We compute the maximum a posteriori solution and use cross validation to find the optimal penalty parameters. By extending our previous fine-mapping method CAVIARBF into this framework, we require only summary statistics as input. We also derived an exact calculation of Bayes factors using summary statistics for quantitative traits, which is necessary when a large proportion of trait variance is explained by the variants of interest, such as in fine mapping expression quantitative trait loci (eQTL). We compared the proposed method with PAINTOR using different strategies to combine annotations. Simulation results show that the proposed method achieves the best accuracy in identifying causal variants among the different strategies and methods compared. We also find that for annotations with moderate effects from a large annotation pool, screening annotations individually and then combining the top annotations can produce overly optimistic results. We applied these methods on two real data sets: a meta-analysis result of lipid traits and a cis-eQTL study of normal prostate tissues. For the eQTL data, incorporating annotations significantly increased the number of potential causal variants with high probabilities. Copyright © 2016 by the Genetics Society of America.
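The paper derives an exact Bayes-factor calculation from summary statistics; as a rough illustration of the underlying idea, the sketch below computes the well-known Wakefield-style approximate Bayes factor from an effect estimate and its standard error. The prior variance W is an assumed value, and this is not the CAVIARBF computation itself.

    import numpy as np

    def approx_bayes_factor(beta_hat: float, se: float, W: float = 0.04) -> float:
        """BF for H1 (effect ~ N(0, W)) vs H0 (no effect), from summary stats only."""
        V = se**2                     # sampling variance of the estimate
        z = beta_hat / se             # usual z-score
        return np.sqrt(V / (V + W)) * np.exp(z**2 / 2.0 * W / (V + W))

    # Hypothetical variant: beta_hat = 0.12, se = 0.03 (z = 4) -> strong support
    print(approx_bayes_factor(beta_hat=0.12, se=0.03))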
NASA Astrophysics Data System (ADS)
Kim, J.; Jeong, H.; Ji, M.; Jeong, K.; Yun, C.; Lee, J.; Chung, H.
2015-09-01
This paper examines the characteristics of butt-welding joint shrinkage for the main plates of shipbuilding and marine structures. The shrinkage strain of a butt-welded joint, caused by the process of heat input and cooling, results in a difference between the actual parent metal dimensions and the design dimensions. This, in turn, leads to poor quality in the production of ship blocks, and reworking during a correction period impedes productivity improvement. Through experiments on the shrinkage strain of butt-welded joints in the main plates of large structures, the welding residual deformation for I-, Y-, and V-groove forms was obtained. In addition, the results indicate that shrinkage was limited to a range of 1-2 mm for 11t-21.5t plate thicknesses, and the heat transfer effect of the weld appeared to be limited to within 1000 mm of one side of the seam line, so the weight of the parent metal had limited impact on the shrinkage. Finally, it was found that the shrinkage margin needs to be applied differently according to the groove form in the design phase in order to minimize shrinkage.
Review and specification for shrinkage cracks of bridge decks : final report.
DOT National Transportation Integrated Search
2016-12-01
An existing standard method ASTM C157 is used to determine the length change or free shrinkage of an unrestrained concrete specimen. However, in bridge decks, the concrete is actually under restrained conditions, and thus free shrinkage test methods ...
CDMBE: A Case Description Model Based on Evidence
Zhu, Jianlin; Yang, Xiaoping; Zhou, Jing
2015-01-01
By combining the advantages of argument maps and Bayesian networks, a case description model based on evidence (CDMBE), suitable for the continental law system, is proposed to describe criminal cases. The model adopts credibility logic for reasoning and quantifies evidence-based reasoning from the available evidence. To be consistent with practical inference rules, five types of relationship and a set of rules are defined to calculate the credibility of assumptions based on the credibility and supportability of the related evidence. Experiments show that the model can capture users' ideas in a diagram, and the results calculated from CDMBE are in line with those from a Bayesian model. PMID:26421006
The two sides of the C-factor.
Fok, Alex S L; Aregawi, Wondwosen A
2018-04-01
The aim of this paper is to investigate the effects on shrinkage strain/stress development of the lateral constraints at the bonded surfaces of resin composite specimens used in laboratory measurement. Using three-dimensional (3D) Hooke's law, a recently developed shrinkage stress theory is extended to 3D to include the additional out-of-plane strain/stress induced by the lateral constraints at the bonded surfaces through the Poisson's ratio effect. The model contains a parameter that defines the relative thickness of the boundary layers, adjacent to the bonded surfaces, that are under such multiaxial stresses. The resulting differential equation is solved for the shrinkage stress under different boundary conditions. The accuracy of the model is assessed by comparing the numerical solutions with a wide range of experimental data, which include those from both shrinkage strain and shrinkage stress measurements. There is good agreement between theory and experiments. The model correctly predicts the different instrument-dependent effects that a specimen's configuration factor (C-factor) has on shrinkage stress. That is, for noncompliant stress-measuring instruments, shrinkage stress increases with the C-factor of the cylindrical specimen; while the opposite is true for compliant instruments. The model also provides a correction factor, which is a function of the C-factor, Poisson's ratio and boundary layer thickness of the specimen, for shrinkage strain measured using the bonded-disc method. For the resin composite examined, the boundary layers have a combined thickness that is ∼11.5% of the specimen's diameter. The theory provides a physical and mechanical basis for the C-factor using principles of engineering mechanics. The correction factor it provides allows the linear shrinkage strain of a resin composite to be obtained more accurately from the bonded-disc method. Published by Elsevier Ltd.
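For reference, the C-factor of a cylindrical specimen bonded at its two flat ends, the geometry discussed above, is the ratio of bonded to unbonded (lateral) surface area; a small sketch with hypothetical dimensions:

    import math

    def c_factor(diameter: float, height: float) -> float:
        """C-factor of a cylinder bonded on both flat faces (bonded/unbonded area)."""
        r = diameter / 2.0
        bonded = 2.0 * math.pi * r**2           # two bonded flat faces
        unbonded = 2.0 * math.pi * r * height   # free lateral surface
        return bonded / unbonded                # simplifies to r / height

    print(c_factor(diameter=8.0, height=1.0))   # thin bonded disc: C = 4.0
    print(c_factor(diameter=4.0, height=4.0))   # slender cylinder: C = 0.5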
Bayesian spatio-temporal discard model in a demersal trawl fishery
NASA Astrophysics Data System (ADS)
Grazia Pennino, M.; Muñoz, Facundo; Conesa, David; López-Quílez, Antonio; Bellido, José M.
2014-07-01
Spatial management of discards has recently been proposed as a useful tool for the protection of juveniles: it reduces discard rates and can serve as a buffer against management errors and recruitment failure. In this study, Bayesian hierarchical spatial models have been used to analyze about 440 trawl fishing operations of two different metiers, sampled between 2009 and 2012, in order to improve our understanding of factors that influence the quantity of discards and to identify their spatio-temporal distribution in the study area. Our analysis showed that the relative importance of each variable was different for each metier, with a few similarities. In particular, the random vessel effect and seasonal variability were identified as the main driving variables for both metiers. Predictive maps of the abundance of discards and maps of the posterior mean of the spatial component show several hot spots with high discard concentration for each metier. We argue that the seasonal/spatial effects, and the knowledge about the factors that influence discarding, could be exploited as mitigation measures in future fisheries management strategies. However, misidentification of hotspots and uncertain predictions can culminate in inappropriate mitigation practices which can sometimes be irreversible. The proposed Bayesian spatial method overcomes these issues, since it offers a unified approach which allows the incorporation of spatial random-effect terms, spatial correlation of the variables and the uncertainty of the parameters into the modeling process, resulting in a better quantification of uncertainty and more accurate predictions.
Pixel-based skin segmentation in psoriasis images.
George, Y; Aldeen, M; Garnavi, R
2016-08-01
In this paper, we present a detailed comparison study of skin segmentation methods for psoriasis images. Different techniques are modified and then applied to a set of psoriasis images acquired from the Royal Melbourne Hospital, Melbourne, Australia, with the aim of finding the technique best suited for application to psoriasis images. We investigate the effect of different colour transformations on skin detection performance. In this respect, explicit skin thresholding is evaluated with three different decision boundaries (CbCr, HS and rgHSV). A histogram-based Bayesian classifier is applied to extract skin probability maps (SPMs) for different colour channels. This is then followed by different approaches to derive a binary skin map (SM) image from the SPMs, including a binary decision tree (DT) and Otsu's thresholding. Finally, a set of morphological operations is applied to refine the resulting SM image. The paper provides a detailed analysis and comparison of the performance of the Bayesian classifier in five different colour spaces (YCbCr, HSV, RGB, XYZ and CIELab). The results show that the histogram-based Bayesian classifier is more effective than explicit thresholding when applied to psoriasis images. It is also found that the CbCr decision boundary outperforms HS and rgHSV, and that the SPMs of the Cb, Cr, H and B-CIELab colour bands yield the best SMs for psoriasis images. In this study, we used a set of 100 psoriasis images for training and testing the presented methods. True Positive (TP) and True Negative (TN) rates are used as statistical evaluation measures.
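A minimal sketch of a histogram-based Bayesian skin classifier in the CbCr plane, the decision boundary the study found most effective; the bin count, Laplace smoothing, and training-mask interface are assumptions, not the paper's implementation.

    import numpy as np

    def fit_skin_histograms(cbcr: np.ndarray, skin_mask: np.ndarray, bins: int = 32):
        """cbcr: (N, 2) uint8 pixel values; skin_mask: (N,) boolean labels.
        Returns the posterior P(skin | Cb, Cr) on a bins x bins grid."""
        rng_ = [[0, 256], [0, 256]]
        h_skin, _, _ = np.histogram2d(*cbcr[skin_mask].T, bins=bins, range=rng_)
        h_bg, _, _ = np.histogram2d(*cbcr[~skin_mask].T, bins=bins, range=rng_)
        p_skin = skin_mask.mean()
        lik_skin = (h_skin + 1) / (h_skin.sum() + bins**2)   # Laplace smoothing
        lik_bg = (h_bg + 1) / (h_bg.sum() + bins**2)
        return lik_skin * p_skin / (lik_skin * p_skin + lik_bg * (1 - p_skin))

    def skin_probability_map(cbcr_img: np.ndarray, posterior: np.ndarray):
        """cbcr_img: (H, W, 2) uint8 -> (H, W) skin probability map (SPM)."""
        bins = posterior.shape[0]
        idx = (cbcr_img.astype(int) * bins) // 256
        return posterior[idx[..., 0], idx[..., 1]]

    # Tiny synthetic demo; thresholding the SPM (Otsu, DT) then yields the SM.
    rng = np.random.default_rng(0)
    pixels = rng.integers(0, 256, size=(5000, 2)).astype(np.uint8)
    labels = pixels[:, 0] > 128          # fake "skin" region of the CbCr plane
    post = fit_skin_histograms(pixels, labels)
    img = rng.integers(0, 256, size=(4, 4, 2)).astype(np.uint8)
    print(skin_probability_map(img, post).round(2))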
Comparison of shrinkage related properties of various patch repair materials
NASA Astrophysics Data System (ADS)
Kristiawan, S. A.; Fitrianto, R. S.
2017-02-01
A patch repair material has been developed in the form of an unsaturated polyester resin (UPR) mortar. The performance and durability of this material are governed by its compatibility with the concrete being repaired. One of the compatibility issues that should be tackled is dimensional compatibility, owing to the differential shrinkage between the repair material and the concrete substrate. This research aims to evaluate the shrinkage-related properties of UPR-mortar and to compare them with those of other patch repair materials. The investigation covers free shrinkage, resistance to delamination, and cracking tendency. The results indicate that UPR-mortar exhibits lower free shrinkage and a lower risk of both delamination and cracking than the other repair materials.
Bayesian decoding using unsorted spikes in the rat hippocampus
Layton, Stuart P.; Chen, Zhe; Wilson, Matthew A.
2013-01-01
A fundamental task in neuroscience is to understand how neural ensembles represent information. Population decoding is a useful tool to extract information from neuronal populations based on the ensemble spiking activity. We propose a novel Bayesian decoding paradigm to decode unsorted spikes in the rat hippocampus. Our approach uses a direct mapping between spike waveform features and covariates of interest and avoids accumulation of spike sorting errors. Our decoding paradigm is nonparametric, encoding model-free for representing stimuli, and extracts information from all available spikes and their waveform features. We apply the proposed Bayesian decoding algorithm to a position reconstruction task for freely behaving rats based on tetrode recordings of rat hippocampal neuronal activity. Our detailed decoding analyses demonstrate that our approach is efficient and better utilizes the available information in the nonsortable hash than the standard sorting-based decoding algorithm. Our approach can be adapted to an online encoding/decoding framework for applications that require real-time decoding, such as brain-machine interfaces. PMID:24089403
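A toy sketch of the clusterless idea, assuming a single waveform-amplitude feature and kernel density estimates in place of the authors' estimator; spikes within a decoding window are combined naively under an independence assumption. Everything here (features, bandwidths, data) is illustrative.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    # Encoding data: unsorted spikes, each with a waveform amplitude and the
    # rat's position when it occurred (two synthetic "place fields").
    pos_a = rng.normal(20, 3, 500); amp_a = rng.normal(1.0, 0.1, 500)
    pos_b = rng.normal(80, 3, 500); amp_b = rng.normal(2.0, 0.1, 500)
    train = np.vstack([np.r_[amp_a, amp_b], np.r_[pos_a, pos_b]])  # (2, N)
    joint = gaussian_kde(train)          # p(amplitude, position)
    occupancy = gaussian_kde(train[1])   # p(position)

    def decode(amplitudes, grid=np.linspace(0, 100, 201), eps=1e-300):
        """Posterior over position from a window of unsorted spike amplitudes."""
        log_post = np.log(occupancy(grid) + eps)
        for a in amplitudes:
            pts = np.vstack([np.full_like(grid, a), grid])
            # log p(a | x) = log p(a, x) - log p(x)
            log_post += np.log(joint(pts) + eps) - np.log(occupancy(grid) + eps)
        post = np.exp(log_post - log_post.max())
        return grid, post / post.sum()

    grid, post = decode([1.95, 2.10, 2.02])             # spikes resembling field B
    print("decoded position:", grid[np.argmax(post)])   # near 80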
NASA Astrophysics Data System (ADS)
Gholamhoseini, Alireza
2016-03-01
Relatively little research has been reported on the time-dependent in-service behavior of composite concrete slabs with profiled steel decking as permanent formwork and little guidance is available for calculating long-term deflections. The drying shrinkage profile through the thickness of a composite slab is greatly affected by the impermeable steel deck at the slab soffit, and this has only recently been quantified. This paper presents the results of long-term laboratory tests on composite slabs subjected to both drying shrinkage and sustained loads. Based on laboratory measurements, a design model for the shrinkage strain profile through the thickness of a slab is proposed. The design model is based on some modifications to an existing creep and shrinkage prediction model B3. In addition, an analytical model is developed to calculate the time-dependent deflection of composite slabs taking into account the time-dependent effects of creep and shrinkage. The calculated deflections are shown to be in good agreement with the experimental measurements.
Shrinkage-stress kinetics of photopolymerised resin-composites
NASA Astrophysics Data System (ADS)
Satterthwaite, Julian D.
The use of directly placed restorative materials remains the technique of choice for preserving function and form in teeth with cavities. The current aesthetic restorative materials of choice are resin-composites, although these undergo molecular densification during polymerisation, which has deleterious effects. Although shrinkage-strain is the cause, it is the shrinkage-stress effects that may be seen as responsible for the clinically encountered problems with adhesive resin-based restorations: the bond may fail with separation of the material from the cavity wall, leading to marginal discolouration, pulpal irritation and subsequent necrosis, post-operative sensitivity, recurrent caries and eventual failure of restorations. Other outcomes include cohesive fracture of enamel or cusps, cuspal movement (strain) and persistent pain. The aims of this research were to characterise the effects of variations in resin-composite formulation on shrinkage-strain and shrinkage-stress kinetics. In particular, the influence of the size and morphology of the dispersed phase was investigated through the study of experimental formulations. Polymerisation shrinkage-strain kinetics were assessed with the bonded-disk method. It was found that resin-composites with spherical filler particles had significantly lower shrinkage-strain than those with irregular filler particles. Additionally, shrinkage-strain was found to depend on filler particle size, a trend related, in part, to differences in the degree of conversion. The data were also used to calculate the activation energy for each material, and a relationship between this and filler particle size for the irregular fillers was demonstrated. A fixed-compliance cantilever beam instrument (Bioman) was used for characterisation of shrinkage-stress kinetics. Significant differences were identified between materials in relation to filler particle size and morphology, and a hypothesis for these interactions, relating to surface-area effects, was presented. The complex interactions leading to the development of shrinkage-stress were investigated further. Shrinkage-stress over a 24-hour period was assessed and modelled through application of the Kohlrausch-Williams-Watts equation. The effects of variation in specimen dimensions were assessed, and it was shown that the relationship of specimen height and diameter to shrinkage-stress is a function not only of the C-factor (the ratio of bonded to unbonded surfaces), but also of how the C-factor is created. These relationships were characterised and descriptive equations fitted to the data. Shrinkage-stress measurements against a variety of test surfaces were also assessed, and the use of stainless steel as a test surface was validated. Finally, exploratory research was undertaken to develop a moiré interferometer for the measurement of in-plane displacements and strain arising in teeth due to polymerisation of resin-composite restorations.
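For reference, the Kohlrausch-Williams-Watts function mentioned above is the stretched exponential below; the particular parameterization for shrinkage-stress growth is assumed here for illustration, not taken from the thesis:

    \varphi(t) = \exp\!\big[-(t/\tau)^{\beta}\big], \qquad 0 < \beta \le 1,
    \qquad
    \sigma(t) \approx \sigma_{\infty}\Big\{1 - \exp\!\big[-(t/\tau)^{\beta}\big]\Big\}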
Kaisarly, Dalia; El Gezawi, Moataz; Xu, Xiaohui; Rösch, Peter; Kunzelmann, Karl-Heinz
2018-01-01
Polymerization shrinkage of dental resin composites leads to stress build-up at the tooth-restoration interface that predisposes the restoration to debonding. To avoid the heterogeneity of enamel and dentin, this study investigated the effect of boundary conditions using artificial cavity models of ceramic and Teflon. Ceramic serves as a homogeneous substrate that provides optimal bonding conditions, presented here as an etched and silanized ceramic cavity and as an etched, silanized and bonded ceramic cavity. In contrast, the Teflon cavity presented a non-adhesive boundary condition, an exaggerated case of poor bonding such as contamination during the application procedure or a poor bonding substrate such as sclerotic or deep dentin. The greatest 3D shrinkage vectors and movement in the axial direction were observed in the ceramic cavity with the bonding agent, followed by the silanized ceramic cavity, while the smallest shrinkage vectors and axial movements were observed in the Teflon cavity. The shrinkage vectors in the ceramic cavities exhibited downward movement toward the cavity bottom, with large downward shrinkage of the free surface. The shrinkage vectors in the Teflon cavity pointed toward the center of the restoration, with lateral movement greater at one side, denoting the site of first detachment from the cavity walls. These results show that the boundary conditions, in terms of bonding substrates, significantly influenced the shrinkage direction. Copyright © 2017 Elsevier Ltd. All rights reserved.
VizieR Online Data Catalog: X-ray sources in the AKARI NEP deep field (Krumpe+, 2015)
NASA Astrophysics Data System (ADS)
Krumpe, M.; Miyaji, T.; Brunner, H.; Hanami, H.; Ishigaki, T.; Takagi, T.; Markowitz, A. G.; Goto, T.; Malkan, M. A.; Matsuhara, H.; Pearson, C.; Ueda, Y.; Wada, T.
2015-06-01
The FITS images labelled SeMap* are the sensitivity maps, which give the minimum flux that would have caused a detection at each position. This flux depends on the maximum likelihood threshold chosen in the source detection run, the point spread function, and the background level at the chosen position. We create sensitivity maps in different energy bands (0.5-2, 0.5-7, 2-4, 2-7, and 4-7 keV) by searching for the flux needed to reject the null hypothesis that the flux at a given position is caused only by a background fluctuation. In a chosen energy band, we determine for each position in the survey the flux required to obtain a certain Poisson probability above the background counts. Since ML=-ln(P), we know from our ML=12 threshold the probability we are aiming for. In practice, we search for a value of -ln P_total that falls within Delta ML=+/-0.2 of our targeted ML threshold; this tolerance range corresponds to having one spurious source more or less in the whole survey. Note that outside the deep Subaru/Suprime-Cam imaging the sensitivity maps should be used with caution, since for their generation we assume ML=12 over the whole area covered by Chandra. More details on the procedure for producing the sensitivity maps, including the PSF-summed background map and PSF-weighted averaged exposure maps, are given in section 5.3 of the paper. The FITS images labelled u90* are the upper limit maps, in which the upper 90 per cent confidence flux limit is given at each position. We take a Bayesian approach following Kraft, Burrows & Nousek, 1991ApJ...374..344K. Consequently, we obtain the upper 90 per cent confidence flux limit by searching for the flux such that, given the observed counts, the Bayesian probability of having this flux or larger is 10 per cent. More details on the procedure for producing the upper 90 per cent flux limit maps are given in section 5.4 of the paper. (6 data files).
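A small sketch of the counts-space core of this sensitivity calculation, assuming a pure Poisson background with the ML = -ln(P) = 12 threshold; PSF weighting and the conversion from counts to flux via the exposure maps are omitted for brevity.

    import numpy as np
    from scipy.stats import poisson

    def min_detectable_counts(background: float, ml_threshold: float = 12.0) -> int:
        """Smallest total counts n with -ln P(X >= n | background) >= ml_threshold."""
        n = 1
        # poisson.sf(n - 1, mu) is the survival probability P(X >= n)
        while -np.log(poisson.sf(n - 1, background)) < ml_threshold:
            n += 1
        return n

    for b in [0.5, 2.0, 10.0]:
        n = min_detectable_counts(b)
        print(f"background = {b:5.1f} counts -> detection needs >= {n} counts")
    # Dividing the excess (n - background) by the local exposure and PSF
    # fraction would then give the minimum detectable flux at that position.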
Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation
Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain. This is in contrast to most current models in frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using minimal Kullback–Leiber (KL)-divergency criterion. The frequency domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, the gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance the speeches corrupted by the speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253
Development and implementation of a Bayesian-based aquifer vulnerability assessment in Florida
Arthur, J.D.; Wood, H.A.R.; Baker, A.E.; Cichon, J.R.; Raines, G.L.
2007-01-01
The Florida Aquifer Vulnerability Assessment (FAVA) was designed to provide a tool for environmental, regulatory, resource management, and planning professionals to facilitate protection of groundwater resources from surface sources of contamination. The FAVA project implements weights-of-evidence (WofE), a data-driven, Bayesian-probabilistic model, to generate a series of maps reflecting the relative aquifer vulnerability of Florida's principal aquifer systems. The vulnerability assessment process, from project design to map implementation, is described herein with reference to the Floridan aquifer system (FAS). The WofE model calculates weighted relationships between hydrogeologic data layers that influence aquifer vulnerability and ambient groundwater parameters in wells that reflect relative degrees of vulnerability. Statewide model input data layers (evidential themes) include soil hydraulic conductivity, density of karst features, thickness of aquifer confinement, and hydraulic head difference between the FAS and the water table. Wells with median dissolved nitrogen concentrations exceeding statistically established thresholds serve as training points in the WofE model. The resulting vulnerability map (response theme) reflects classified posterior probabilities based on spatial relationships between the evidential themes and training points. The response theme is subjected to extensive sensitivity and validation testing. Among the model validation techniques is calculation of a response theme based on a different water-quality indicator of relative recharge or vulnerability: dissolved oxygen. Successful implementation of the FAVA maps was facilitated by the overall project design, which included a needs assessment and iterative technical advisory committee input and review. Ongoing programs to protect Florida's springsheds have led to the development of larger-scale WofE-based vulnerability assessments. Additional applications of the maps include land-use planning amendments and prioritization of land purchases to protect groundwater resources. © International Association for Mathematical Geology 2007.
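A minimal weights-of-evidence sketch showing how the positive and negative weights behind such a posterior-probability map are computed for one binary evidential theme; the grid cells and training points below are synthetic, not FAVA data.

    import numpy as np

    def wofe_weights(evidence: np.ndarray, training: np.ndarray):
        """evidence, training: boolean arrays over the same grid cells.
        Returns (W+, W-, contrast) for the binary evidential theme."""
        d = training.astype(bool); e = evidence.astype(bool)
        p_e_d = (e & d).sum() / d.sum()        # P(evidence | training point)
        p_e_nd = (e & ~d).sum() / (~d).sum()   # P(evidence | no training point)
        w_plus = np.log(p_e_d / p_e_nd)        # weight where evidence is present
        w_minus = np.log((1 - p_e_d) / (1 - p_e_nd))  # weight where absent
        return w_plus, w_minus, w_plus - w_minus      # contrast = W+ - W-

    rng = np.random.default_rng(0)
    evidence = rng.random(10000) < 0.3
    # Synthetic training points preferentially located on the evidence pattern.
    training = rng.random(10000) < np.where(evidence, 0.02, 0.005)
    print(wofe_weights(evidence, training))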
Bayesian mapping of HIV infection among women of reproductive age in Rwanda.
Niragire, François; Achia, Thomas N O; Lyambabaje, Alexandre; Ntaganira, Joseph
2015-01-01
HIV prevalence is rising and has been consistently higher among women in Rwanda whereas a decreasing national HIV prevalence rate in the adult population has stabilised since 2005. Factors explaining the increased vulnerability of women to HIV infection are not currently well understood. A statistical mapping at smaller geographic units and the identification of key HIV risk factors are crucial for pragmatic and more efficient interventions. The data used in this study were extracted from the 2010 Rwanda Demographic and Health Survey data for 6952 women. A full Bayesian geo-additive logistic regression model was fitted to data in order to assess the effect of key risk factors and map district-level spatial effects on the risk of HIV infection. The results showed that women who had STIs, concurrent sexual partners in the 12 months prior to the survey, a sex debut at earlier age than 19 years, were living in a woman-headed or high-economic status household were significantly associated with a higher risk of HIV infection. There was a protective effect of high HIV knowledge and perception. Women occupied in agriculture, and those residing in rural areas were also associated with lower risk of being infected. This study provides district-level maps of the variation of HIV infection among women of child-bearing age in Rwanda. The maps highlight areas where women are at a higher risk of infection; the aspect that proximate and distal factors alone could not uncover. There are distinctive geographic patterns, although statistically insignificant, of the risk of HIV infection suggesting potential effectiveness of district specific interventions. The results also suggest that changes in sexual behaviour can yield significant results in controlling HIV infection in Rwanda.
DOT National Transportation Integrated Search
2012-10-01
The main objective of this study was to determine the effect on shrinkage, creep, and abrasion resistance of high-volume fly ash (HVFA) concrete. The HVFA concrete test program consisted of comparing the shrinkage, creep, and abrasion performance...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maruyama, I., E-mail: ippei@dali.nuac.nagoya-u.ac.jp; Teramoto, A.
Ultra-high-strength concrete with a large unit cement content undergoes considerable temperature increase inside members due to hydration heat, leading to a higher risk of internal cracking. Hence, the temperature dependence of autogenous shrinkage of cement pastes made with silica fume premixed cement with a water–binder ratio of 0.15 was studied extensively. Development of autogenous shrinkage showed different behaviors before and after the inflection point, and dependence on the temperature after mixing and on subsequent temperature histories. The difference in autogenous shrinkage behavior poses problems for winter construction, because autogenous shrinkage may increase with decreasing temperature after mixing before the inflection point and with increasing temperature inside concrete members with large cross sections.
Ge, Junhao; Trujillo, Marianela; Stansbury, Jeffrey
2005-12-01
This study was conducted to determine whether novel photopolymerizable formulations based on dimethacrylate monomers with bulky substituent groups could provide low polymerization shrinkage without sacrificing the degree of conversion or the mechanical properties of the polymers. Relatively high-molecular-weight dimethacrylate monomers were prepared from rigid bisphenol A core groups. Photopolymerization kinetics and shrinkage, as well as flexural strength and glass transition temperatures, were evaluated for various comonomer compositions. Copolymerization of the bulky monomers with TEGDMA shows higher conversion but similar shrinkage compared with Bis-GMA/TEGDMA controls. The resulting polymers have mechanical strength properties suitable for potential dental restorative applications. When copolymerized with PEGDMA, the bulky monomers show lower shrinkage, comparable conversion, and more homogeneous polymeric network structures compared with Bis-EMA/PEGDMA systems. The novel dimethacrylate monomers with reduced reactive group densities can decrease the polymerization shrinkage as anticipated, but there is no significant evidence that the bulky substituent groups have any additional effect on reducing shrinkage through physical interactions as polymer side chains. The bulky groups improve the double bond conversion and help maintain the mechanical properties of the resulting polymer, which would otherwise decrease rapidly due to the reduced crosslinking density. Further, it was found that bulky monomers help produce more homogeneous copolymer networks.
Bouhrara, Mustapha; Spencer, Richard G.
2015-01-01
Myelin water fraction (MWF) mapping with magnetic resonance imaging has led to the ability to directly observe myelination and demyelination in both the developing brain and in disease. Multicomponent driven equilibrium single pulse observation of T1 and T2 (mcDESPOT) has been proposed as a rapid approach for multicomponent relaxometry and has been applied to map MWF in human brain. However, even for the simplest two-pool signal model, consisting of myelin-associated and non-myelin-associated water, the dimensionality of the parameter space for obtaining MWF estimates remains high. This renders parameter estimation difficult, especially at low-to-moderate signal-to-noise ratios (SNR), due to the presence of local minima and the flatness of the fit residual energy surface used for parameter determination by conventional nonlinear least squares (NLLS)-based algorithms. In this study, we introduce three Bayesian approaches for analysis of the mcDESPOT signal model to determine MWF. Given the high-dimensional nature of the mcDESPOT signal model, and hence the high-dimensional marginalizations over nuisance parameters needed to derive the posterior probability distribution of the MWF, the introduced Bayesian analyses use different approaches to reduce the dimensionality of the parameter space. The first approach uses normalization by average signal amplitude and assumes that noise can be accurately estimated from signal-free regions of the image. The second approach likewise uses average amplitude normalization, but incorporates a full treatment of noise as an unknown variable through marginalization. The third approach does not use amplitude normalization and incorporates marginalization over both noise and signal amplitude. Through extensive Monte Carlo numerical simulations and analysis of in-vivo human brain datasets exhibiting a range of SNR and spatial resolution, we demonstrate markedly improved accuracy and precision in the estimation of MWF using these Bayesian methods as compared to the stochastic region contraction (SRC) implementation of NLLS. PMID:26499810
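A toy illustration of the marginalization strategy, using a two-parameter stand-in model rather than the actual mcDESPOT signal equations: the posterior for the parameter of interest is obtained by summing the likelihood over a grid of the nuisance parameter. All model details and values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    true_mwf, true_t2, sigma = 0.15, 50.0, 0.01
    t = np.linspace(1, 200, 40)
    # Stand-in two-pool signal: fast pool (T2 = 20) plus slow pool (nuisance T2)
    signal = lambda mwf, t2: mwf * np.exp(-t / 20.0) + (1 - mwf) * np.exp(-t / t2)
    data = signal(true_mwf, true_t2) + sigma * rng.normal(size=t.size)

    mwf_grid = np.linspace(0.01, 0.5, 100)
    t2_grid = np.linspace(30.0, 80.0, 100)        # nuisance parameter grid
    log_like = np.array([[-0.5 * np.sum((data - signal(m, T)) ** 2) / sigma**2
                          for T in t2_grid] for m in mwf_grid])
    like = np.exp(log_like - log_like.max())
    posterior = like.sum(axis=1)                  # marginalize out the nuisance
    posterior /= posterior.sum()
    print("posterior mean MWF:", (mwf_grid * posterior).sum())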
Kaufman, Jay S; MacLehose, Richard F; Torrone, Elizabeth A; Savitz, David A
2011-04-19
Previous research has documented heterogeneity in the effects of maternal education on adverse birth outcomes by nativity and Hispanic subgroup in the United States. In this article, we considered the risk of preterm birth (PTB) using 9 years of vital statistics birth data from New York City. We employed finer categorizations of exposure than used previously and estimated the risk dose-response across the range of education by nativity and ethnicity. Using Bayesian random effects logistic regression models with restricted quadratic spline terms for years of completed maternal education, we calculated and plotted the estimated posterior probabilities of PTB (gestational age < 37 weeks) for each year of education by ethnic and nativity subgroups adjusted for only maternal age, as well as with more extensive covariate adjustments. We then estimated the posterior risk difference between native and foreign born mothers by ethnicity over the continuous range of education exposures. The risk of PTB varied substantially by education, nativity and ethnicity. Native born groups showed higher absolute risk of PTB and declining risk associated with higher levels of education beyond about 10 years, as did foreign-born Puerto Ricans. For most other foreign born groups, however, risk of PTB was flatter across the education range. For Mexicans, Central Americans, Dominicans, South Americans and "Others", the protective effect of foreign birth diminished progressively across the educational range. Only for Puerto Ricans was there no nativity advantage for the foreign born, although small numbers of foreign born Cubans limited precision of estimates for that group. Using flexible Bayesian regression models with random effects allowed us to estimate absolute risks without strong modeling assumptions. Risk comparisons for any sub-groups at any exposure level were simple to calculate. Shrinkage of posterior estimates through the use of random effects allowed for finer categorization of exposures without restricting joint effects to follow a fixed parametric scale. Although foreign born Hispanic women with the least education appeared to generally have low risk, this seems likely to be a marker for unmeasured environmental and behavioral factors, rather than a causally protective effect of low education itself.
Covariance specification and estimation to improve top-down Green House Gas emission estimates
NASA Astrophysics Data System (ADS)
Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.
2015-12-01
The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods that quantify sources of greenhouse gas (GHG) emissions, as well as their uncertainties, in urban domains using a top-down inversion method. Top-down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e. the difference between observations and model-predicted observations). These covariance matrices are respectively referred to as the prior covariance matrix and the model-data mismatch covariance matrix. It is known that the choice of these covariances can have a large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e. sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions under different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability in the prior covariance and spatio-temporal variability in the model-data mismatch covariance, then we can compute more accurate posterior estimates. We discuss a few covariance models for introducing space-time interacting mismatches, along with estimation of the parameters involved. We then compare several candidate prior spatial covariance models from the Matern covariance class and estimate their parameters with specified mismatches. We find that the best-fitted prior covariances are not always best at recovering the truth; to achieve accuracy, we perform a sensitivity study to further tune the covariance parameters. Finally, we introduce a shrinkage-based sample covariance estimation technique for both prior and mismatch covariances. This technique allows us to achieve similar accuracy nonparametrically in a more efficient and automated way.
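The closing point, replacing hand-specified covariances with a shrinkage-based sample estimate, can be sketched with a standard estimator such as Ledoit-Wolf (named here only as an example; the abstract does not say which shrinkage estimator was used, and the data below are synthetic):

    import numpy as np
    from sklearn.covariance import LedoitWolf

    rng = np.random.default_rng(0)
    # e.g., rows = repeated model-data mismatch samples, columns = towers/hours
    samples = rng.normal(size=(60, 24))       # few samples, many dimensions
    lw = LedoitWolf().fit(samples)
    print("shrinkage intensity:", round(lw.shrinkage_, 3))
    empirical = np.cov(samples, rowvar=False)
    # Shrinkage pulls the estimate toward a well-conditioned target, which
    # stabilizes the small eigenvalues that make inversions ill-posed.
    print("min eigenvalue, empirical  :", np.linalg.eigvalsh(empirical).min())
    print("min eigenvalue, Ledoit-Wolf:", np.linalg.eigvalsh(lw.covariance_).min())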
Robust, Adaptive Functional Regression in Functional Mixed Model Framework.
Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S
2011-09-01
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.
Sparse Logistic Regression for Diagnosis of Liver Fibrosis in Rat by Using SCAD-Penalized Likelihood
Yan, Fang-Rong; Lin, Jin-Guan; Liu, Yu
2011-01-01
The objective of the present study is to find the quantitative relationship between the progression of liver fibrosis and the levels of certain serum markers using a mathematical model. We apply sparse logistic regression with the smoothly clipped absolute deviation (SCAD) penalty to diagnose liver fibrosis in rats. Not only does it give a sparse solution with high accuracy, it also provides the user with precise classification probabilities alongside the class information. In a simulated case and an experimental case, the proposed method is compared with stepwise linear discriminant analysis (SLDA) and with sparse logistic regression under the least absolute shrinkage and selection operator (LASSO) penalty, using receiver operating characteristic (ROC) analysis with Bayesian-bootstrap estimation of the area under the curve (AUC) to assess diagnostic sensitivity for the selected variables. Results show that the new approach provides a good correlation between serum marker levels and liver fibrosis induced by thioacetamide (TAA) in rats. Meanwhile, this approach might also be used in predicting the development of liver cirrhosis. PMID:21716672
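For reference, the SCAD penalty of Fan and Li (2001) referenced above has a standard piecewise form, transcribed directly below; the example values are arbitrary.

    import numpy as np

    def scad_penalty(beta, lam, a=3.7):
        """Elementwise SCAD penalty p_lambda(|beta|), Fan & Li (2001)."""
        b = np.abs(np.asarray(beta, dtype=float))
        small = b <= lam
        mid = (b > lam) & (b <= a * lam)
        pen = np.empty_like(b)
        pen[small] = lam * b[small]                      # L1-like near zero
        pen[mid] = (2 * a * lam * b[mid] - b[mid] ** 2 - lam**2) / (2 * (a - 1))
        pen[~small & ~mid] = lam**2 * (a + 1) / 2        # constant: no bias for
        return pen                                       # large coefficients

    print(scad_penalty([0.1, 1.0, 5.0], lam=0.5))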
NASA Astrophysics Data System (ADS)
Mansuy, N. R.; Paré, D.; Thiffault, E.
2015-12-01
Large-scale mapping of soil properties is increasingly important for environmental resource management. While forested areas play critical environmental roles at local and global scales, forest soil maps are typically at low resolution. The objective of this study was to generate continuous national maps of selected soil variables (C, N and soil texture) for the Canadian managed forest landbase at 250 m resolution. We produced these maps using the kNN method with a training dataset of 538 ground-plots from the National Forest Inventory (NFI) across Canada, and 18 environmental predictor variables. The best predictor variables (7 topographic and 5 climatic variables) were selected using the Least Absolute Shrinkage and Selection Operator method. On average, for all soil variables, topographic predictors explained 37% of the total variance versus 64% for the climatic predictors. The relative root mean square error (RMSE%) calculated with the leave-one-out cross-validation method gave values ranging between 22% and 99%, depending on the soil variable tested. RMSE values below 40% can be considered a good imputation in light of the low density of points used in this study. The study demonstrates strong capabilities for mapping forest soil properties at 250 m resolution, compared with the current Soil Landscape of Canada System, which is largely oriented towards the agricultural landbase. The methodology used here can potentially contribute to the national and international need for spatially explicit soil information in resource management science.
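A compact sketch of the pipeline as described, assuming scikit-learn stand-ins (LassoCV for the LASSO selection step, KNeighborsRegressor for the kNN imputation) and synthetic data with the study's plot and predictor counts; none of this is the authors' code.

    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_plots, n_predictors = 538, 18          # as in the study design
    X = rng.normal(size=(n_plots, n_predictors))
    # Synthetic soil-carbon response driven by the first five covariates.
    soil_c = X[:, :5] @ rng.normal(size=5) + 0.3 * rng.normal(size=n_plots)

    scaler = StandardScaler().fit(X)
    Xs = scaler.transform(X)
    lasso = LassoCV(cv=5).fit(Xs, soil_c)
    selected = np.flatnonzero(lasso.coef_ != 0)
    print("predictors kept by LASSO:", selected)

    knn = KNeighborsRegressor(n_neighbors=5).fit(Xs[:, selected], soil_c)
    # new_cells would be the covariates of the 250 m raster cells in a real run.
    new_cells = rng.normal(size=(3, n_predictors))
    new_s = scaler.transform(new_cells)
    print("imputed soil C:", knn.predict(new_s[:, selected]))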
Li, Jianying; Fok, Alex S L; Satterthwaite, Julian; Watts, David C
2009-05-01
The aim of this study was to measure the full-field polymerization shrinkage of dental composites using an optical image correlation method. Bar specimens with a cross-section of 4 mm × 2 mm and a length of approximately 10 mm were light cured with two irradiances, 450 mW/cm² and 180 mW/cm². The curing light was generated with an Optilux 501 (Kerr), and the two irradiances were achieved by adjusting the distance between the light tip and the specimen. A single-camera 2D measuring system was used to record the deformation of the composite specimen for 30 min at a frequency of 0.1 Hz. The specimen surface under observation was sprayed with paint to produce sufficient contrast to allow tracking of individual points on the surface. The curing light was applied to one end of the specimen for 40 s, during which the painted surface was fully covered. After curing, the cover was removed immediately so that deformation of the painted surface could be recorded by the camera. The images were then analyzed with specialist software and the volumetric shrinkage determined along the beam length. A typical shrinkage strain field obtained on a specimen surface was highly non-uniform, even at positions of constant distance from the irradiation surface, indicating possible heterogeneity in material composition and shrinkage behavior in the composite. The maximum volumetric shrinkage strain of approximately 1.5% occurred at a subsurface distance of about 1 mm, instead of at the irradiation surface. After reaching its peak value, the shrinkage strain gradually decreased with increasing distance along the beam length, before leveling off to a value of approximately 0.2% at a distance of 4-5 mm. The maximum volumetric shrinkage obtained agreed well with the value of 1.6% reported by the manufacturer for the composite examined in this work. Using an irradiance of 180 mW/cm² resulted in only slightly less polymerization shrinkage than using 450 mW/cm². Compared with other measurement methods, the image correlation method is capable of producing full-field information about the polymerization shrinkage behavior of dental composites.
Mechanism of Macrosegregation Formation in Continuous Casting Slab: A Numerical Simulation Study
NASA Astrophysics Data System (ADS)
Jiang, Dongbin; Wang, Weiling; Luo, Sen; Ji, Cheng; Zhu, Miaoyong
2017-12-01
Solidified shell bulging is generally supposed to be the main reason for slab center segregation, while the influence of thermal shrinkage has rarely been considered. In this article, a thermal shrinkage model coupled with a multiphase solidification model is developed to investigate the effects of thermal shrinkage, solidification shrinkage, grain sedimentation, and thermal flow on solute transport in the continuous casting slab. In this model, the initial equiaxed grains contract freely as the temperature decreases, while the coherent equiaxed grains and the columnar phase move directionally toward the slab surface. The results demonstrate that the center positive segregation, accompanied by negative segregation in the periphery zone, is mainly caused by thermal shrinkage. During the solidification process, the liquid phase first transports toward the slab surface to compensate for thermal shrinkage, which is similar to the case considering solidification shrinkage, and then moves in the opposite direction, toward the slab center, near the solidification end. This is attributed to the sharp decrease of the center temperature and the intense contraction of the solid phase, which cause the enriched liquid to be squeezed out. With the effect of grain sedimentation and thermal flow, the negative segregation at the external arc side (zone A1) and the positive segregation near the columnar-to-equiaxed transition at the inner arc side (position B1) come into being. Besides, it is found that grain sedimentation and thermal flow only influence solute transport before the equiaxed grains impinge on each other, while solidification and thermal shrinkage still affect solute redistribution in the later stage.
Leaf shrinkage with dehydration: coordination with hydraulic vulnerability and drought tolerance.
Scoffoni, Christine; Vuong, Christine; Diep, Steven; Cochard, Hervé; Sack, Lawren
2014-04-01
Leaf shrinkage with dehydration has attracted attention for over 100 years, especially as it becomes visibly extreme during drought. However, little has been known of its correlation with physiology. Computer simulations of the leaf hydraulic system showed that a reduction of hydraulic conductance of the mesophyll pathways outside the xylem would cause a strong decline of leaf hydraulic conductance (K(leaf)). For 14 diverse species, we tested the hypothesis that shrinkage during dehydration (i.e. in whole leaf, cell and airspace thickness, and leaf area) is associated with reduction in K(leaf) at declining leaf water potential (Ψ(leaf)). We tested hypotheses for the linkage of leaf shrinkage with structural and physiological water relations parameters, including modulus of elasticity, osmotic pressure at full turgor, turgor loss point (TLP), and cuticular conductance. Species originating from moist habitats showed substantial shrinkage during dehydration before reaching TLP, in contrast with species originating from dry habitats. Across species, the decline of K(leaf) with mild dehydration (i.e. the initial slope of the K(leaf) versus Ψ(leaf) curve) correlated with the decline of leaf thickness (the slope of the leaf thickness versus Ψ(leaf) curve), as expected based on predictions from computer simulations. Leaf thickness shrinkage before TLP correlated across species with lower modulus of elasticity and with less negative osmotic pressure at full turgor, as did leaf area shrinkage between full turgor and oven desiccation. These findings point to a role for leaf shrinkage in hydraulic decline during mild dehydration, with potential impacts on drought adaptation for cells and leaves, influencing plant ecological distributions.
Wavelet-Bayesian inference of cosmic strings embedded in the cosmic microwave background
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Feeney, S. M.; Peiris, H. V.; Wiaux, Y.; Ringeval, C.; Bouchet, F. R.
2017-12-01
Cosmic strings are a well-motivated extension to the standard cosmological model and could induce a subdominant component in the anisotropies of the cosmic microwave background (CMB), in addition to the standard inflationary component. The detection of strings, while observationally challenging, would provide a direct probe of physics at very high energy scales. We develop a framework for cosmic string inference from observations of the CMB made over the celestial sphere, performing a Bayesian analysis in wavelet space, where the string-induced CMB component has statistical properties distinct from those of the standard inflationary component. Our wavelet-Bayesian framework provides a principled approach to compute the posterior distribution of the string tension Gμ and the Bayesian evidence ratio comparing the string model to the standard inflationary model. Furthermore, we present a technique to recover an estimate of any string-induced CMB map embedded in observational data. Using Planck-like simulations, we demonstrate the application of our framework and evaluate its performance. The method is sensitive to Gμ ∼ 5 × 10^{-7} for Nambu-Goto string simulations that include an integrated Sachs-Wolfe contribution only and do not include any recombination effects, before any parameters of the analysis are optimized. The sensitivity of the method compares favourably with other techniques applied to the same simulations.
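As a toy illustration of the two quantities the framework computes — a posterior over the string tension Gμ and an evidence ratio against the string-free model — the sketch below fits a faint template of known shape in Gaussian noise. The data, template, prior range, and grid are all invented for illustration and stand in for the wavelet-space machinery of the real analysis.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data: Gaussian noise plus a faint "string" template of known shape.
n = 200
template = np.sin(np.linspace(0.0, 8.0 * np.pi, n))
Gmu_true, sigma = 0.3, 1.0
d = Gmu_true * template + rng.normal(scale=sigma, size=n)

def log_like(Gmu):
    r = d - Gmu * template
    return -0.5 * np.sum(r**2) / sigma**2

# Posterior over Gmu on a grid, under a flat prior on [0, 1].
grid = np.linspace(0.0, 1.0, 501)
ll = np.array([log_like(g) for g in grid])
post = np.exp(ll - ll.max())
post /= np.trapz(post, grid)

# Log evidence ratio: string model (Gmu marginalized) vs the Gmu = 0 model.
log_ev_string = np.log(np.trapz(np.exp(ll - ll.max()), grid)) + ll.max()
log_bayes = log_ev_string - log_like(0.0)
print(grid[post.argmax()], log_bayes)   # positive log_bayes favours strings
```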
NASA Astrophysics Data System (ADS)
Kiyan, Duygu; Rath, Volker; Delhaye, Robert
2017-04-01
The frequency- and time-domain airborne electromagnetic (AEM) data collected under the Tellus projects of the Geological Survey of Ireland (GSI) represent a wealth of information on the multi-dimensional electrical structure of Ireland's near-surface. Our project, which was funded by GSI under the framework of their Short Call Research Programme, aims to develop and implement inverse techniques based on various Bayesian methods for these densely sampled data. We have developed a highly flexible toolbox in the Python language for the one-dimensional inversion of AEM data along the flight lines. The computational core is an adapted frequency- and time-domain forward modelling engine derived from the well-tested open-source code AirBeo, which was developed by the CSIRO (Australia) and the AMIRA consortium. Three different inversion methods have been implemented: (i) Tikhonov-type inversion including optimal regularisation methods (Aster et al., 2012; Zhdanov, 2015), (ii) Bayesian MAP inversion in parameter and data space (e.g. Tarantola, 2005), and (iii) full Bayesian inversion with Markov Chain Monte Carlo (Sambridge and Mosegaard, 2002; Mosegaard and Sambridge, 2002), all including different forms of spatial constraints. The methods have been tested on synthetic and field data. This contribution will introduce the toolbox and present case studies on the AEM data from the Tellus projects.
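A minimal sketch of method (i) on a linearized 1D problem: Tikhonov inversion with a first-difference smoothing operator, solved through the normal equations. The dense forward matrix G here is a random stand-in for a linearized AEM kernel (in the toolbox this would come from the AirBeo-derived forward engine), and the regularization weight alpha is illustrative.

```python
import numpy as np

def tikhonov_inversion(G, d, alpha, L=None):
    """Minimise ||G m - d||^2 + alpha^2 ||L m||^2 via the normal equations."""
    n = G.shape[1]
    if L is None:
        L = np.diff(np.eye(n), axis=0)   # first-difference smoothing operator
    A = G.T @ G + alpha**2 * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)

# Toy linearised problem: 30 data values, 20 model layers.
rng = np.random.default_rng(1)
G = rng.normal(size=(30, 20))            # stand-in for a linearised AEM kernel
m_true = np.sin(np.linspace(0.0, np.pi, 20))
d = G @ m_true + rng.normal(scale=0.05, size=30)
m_est = tikhonov_inversion(G, d, alpha=1.0)
print(np.round(m_est - m_true, 2))       # small residuals for a good alpha
```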
Planetary micro-rover operations on Mars using a Bayesian framework for inference and control
NASA Astrophysics Data System (ADS)
Post, Mark A.; Li, Junquan; Quine, Brendan M.
2016-03-01
With the recent progress toward the application of commercially-available hardware to small-scale space missions, it is now becoming feasible for groups of small, efficient robots based on low-power embedded hardware to perform simple tasks on other planets in the place of large-scale, heavy and expensive robots. In this paper, we describe design and programming of the Beaver micro-rover developed for Northern Light, a Canadian initiative to send a small lander and rover to Mars to study the Martian surface and subsurface. For a small, hardware-limited rover to handle an uncertain and mostly unknown environment without constant management by human operators, we use a Bayesian network of discrete random variables as an abstraction of expert knowledge about the rover and its environment, and inference operations for control. A framework for efficient construction and inference into a Bayesian network using only the C language and fixed-point mathematics on embedded hardware has been developed for the Beaver to make intelligent decisions with minimal sensor data. We study the performance of the Beaver as it probabilistically maps a simple outdoor environment with sensor models that include uncertainty. Results indicate that the Beaver and other small and simple robotic platforms can make use of a Bayesian network to make intelligent decisions in uncertain planetary environments.
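As a toy of the discrete Bayesian-network inference the rover performs (the flight code does this in C with fixed-point arithmetic), the sketch below computes an exact posterior for a two-node network by enumeration; the variables and conditional probabilities are invented for illustration.

```python
# Toy two-node network: terrain hazard -> wheel-slip observation.
# P(hazard) and P(slip | hazard) are invented CPTs for illustration.
p_hazard = {True: 0.2, False: 0.8}
p_slip_given_hazard = {True: 0.7, False: 0.1}   # P(slip=True | hazard)

def posterior_hazard(slip_observed: bool) -> float:
    """P(hazard | slip) by exact enumeration (Bayes' rule)."""
    joint = {}
    for h in (True, False):
        p_slip = p_slip_given_hazard[h]
        likelihood = p_slip if slip_observed else 1.0 - p_slip
        joint[h] = p_hazard[h] * likelihood
    return joint[True] / (joint[True] + joint[False])

print(posterior_hazard(True))   # ~0.636: observing slip raises hazard belief
```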
Receptive Field Inference with Localized Priors
Park, Mijung; Pillow, Jonathan W.
2011-01-01
The linear receptive field describes a mapping from sensory stimuli to a one-dimensional variable governing a neuron's spike response. However, traditional receptive field estimators such as the spike-triggered average converge slowly and often require large amounts of data. Bayesian methods seek to overcome this problem by biasing estimates towards solutions that are more likely a priori, typically those with small, smooth, or sparse coefficients. Here we introduce a novel Bayesian receptive field estimator designed to incorporate locality, a powerful form of prior information about receptive field structure. The key to our approach is a hierarchical receptive field model that flexibly adapts to localized structure in both spacetime and spatiotemporal frequency, using an inference method known as empirical Bayes. We refer to our method as automatic locality determination (ALD), and show that it can accurately recover various types of smooth, sparse, and localized receptive fields. We apply ALD to neural data from retinal ganglion cells and V1 simple cells, and find it achieves error rates several times lower than standard estimators. Thus, estimates of comparable accuracy can be achieved with substantially less data. Finally, we introduce a computationally efficient Markov Chain Monte Carlo (MCMC) algorithm for fully Bayesian inference under the ALD prior, yielding accurate Bayesian confidence intervals for small or noisy datasets. PMID:22046110
Ha, Jung-Yun; Chun, Ju-Na; Son, Jun Sik; Kim, Kyo-Han
2014-01-01
Dental modeling resins have been developed for use in areas where highly precise resin structures are needed. The manufacturers claim that these polymethyl methacrylate/methyl methacrylate (PMMA/MMA) resins show little or no shrinkage after polymerization. This study examined the polymerization shrinkage of five dental modeling resins as well as one temporary PMMA/MMA resin (control). The morphology and the particle size of the prepolymerized PMMA powders were investigated by scanning electron microscopy and laser diffraction particle size analysis, respectively. Linear polymerization shrinkage strains of the resins were monitored for 20 minutes using a custom-made linometer, and the final values (at 20 minutes) were converted into volumetric shrinkages. The final volumetric shrinkage values of the modeling resins were statistically similar to (P > 0.05), or significantly larger than (P < 0.05), that of the control resin, and were related to the polymerization kinetics (P < 0.05) rather than to the PMMA bead size (P = 0.335). Therefore, optimal control of the polymerization kinetics seems to be more important for producing high-precision resin structures than the use of dental modeling resins as such. PMID:24779020
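The linear-to-volumetric conversion mentioned above has a simple closed form if the shrinkage is assumed isotropic; a worked sketch (the isotropy assumption is mine, though it is the standard basis for this conversion):

```python
def volumetric_from_linear(linear_strain):
    """Isotropic shrinkage: a unit cube shrinks to side (1 - e), so the
    volumetric strain is 1 - (1 - e)^3, approximately 3e for small e."""
    return 1.0 - (1.0 - linear_strain) ** 3

print(volumetric_from_linear(0.005))   # 0.5% linear -> ~1.49% volumetric
```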
Jan, Yih-Dean; Lee, Bor-Shiunn; Lin, Chun-Pin; Tseng, Wan-Yu
2014-04-01
Polymerization shrinkage is one of the main causes of dental restoration failure. This study tried to conjugate two diisocyanate side chains to dimethacrylate resins in order to reduce polymerization shrinkage and increase the hardness of composite resins. Diisocyanate, 2-hydroxyethyl methacrylate, and bisphenol A dimethacrylate were reacted in different ratios to form urethane-modified new resin matrices, and then mixed with 50 wt.% silica fillers. The viscosities of matrices, polymerization shrinkage, surface hardness, and degrees of conversion of experimental composite resins were then evaluated and compared with a non-modified control group. The viscosities of resin matrices increased with increasing diisocyanate side chain density. Polymerization shrinkage and degree of conversion, however, decreased with increasing diisocyanate side chain density. The surface hardness of all diisocyanate-modified groups was equal to or significantly higher than that of the control group. Conjugation of diisocyanate side chains to dimethacrylate represents an effective means of reducing polymerization shrinkage and increasing the surface hardness of dental composite resins. Copyright © 2012. Published by Elsevier B.V.
Drying Shrinkage of Mortar Incorporating High Volume Oil Palm Biomass Waste
NASA Astrophysics Data System (ADS)
Shukor Lim, Nor Hasanah Abdul; Samadi, Mostafa; Rahman Mohd. Sam, Abdul; Khalid, Nur Hafizah Abd; Nabilah Sarbini, Noor; Farhayu Ariffin, Nur; Warid Hussin, Mohd; Ismail, Mohammed A.
2018-03-01
This paper studies the drying shrinkage of mortar incorporating oil palm biomass waste, including palm oil fuel ash, oil palm kernel shell and oil palm fibre. Nano-sized palm oil fuel ash was used at up to 80% cement replacement by weight. The ash had been treated to improve the physical and chemical properties of the mortar. The mass ratio of sand to blended ashes was 3:1. The tests were carried out using 25 × 25 × 160 mm prisms for drying shrinkage and 70 × 70 × 70 mm specimens for compressive strength. The results show that the shrinkage value of the biomass mortar is reduced by 31% compared with OPC mortar, showing better performance in restraining deformation of the mortar, while the compressive strength increased by 24% compared with OPC mortar at later age. The study gives a better understanding of how the biomass waste affects mortar compressive strength and drying shrinkage behaviour. Overall, oil palm biomass waste can be used to produce a mortar with better later-age performance in terms of compressive strength and drying shrinkage.
Advanced shrink material for NTD process with lower Y/X shrinkage bias of elongated patterns
NASA Astrophysics Data System (ADS)
Miyamoto, Yoshihiro; Sekito, Takashi; Sagan, John; Horiba, Yuko; Kinuta, Takafumi; Nagahara, Tatsuro; Tarutani, Shinji
2015-03-01
Negative tone shrink materials (NSM) suitable for resolution enhancement of negative tone development (NTD) 193 nm immersion resists have been developed. While this technology is being expanded to integrated circuit (IC) manufacturing, two major problems remain in applying it to various processes. One is shrink ID bias, which means the shrink difference between isolated (I) and dense (D) CDs; the other is Y/X shrinkage bias, which means the shrinkage difference between the major axis (Y) and minor axis (X) of an elongated or oval pattern. While we presented improvements in shrink ID bias at SPIE 2014 [1], the reduction of Y/X shrinkage bias has remained an open issue for some time. In this paper, we present the Y/X shrinkage bias of the current NTD shrink material, a new material concept for Y/X bias reduction, and the results for the new shrink material. The current NTD shrink material shows a Y/X bias of 1.6 (Y shrink = 16 nm) at a mixing bake (MB) of 150°C on an AZ AX2110P NTD elongated pattern of X = 70 nm and Y = 210 nm ADI. This means that Y shrinks more than X, which makes the shrink material difficult to apply. We suspected that the characteristic shape of the elongated pattern was one of the root causes of the Y/X bias, and simulated how to achieve equivalent shrinkage along Y and X. We concluded that the available resist volume per unit length along Y and X was not equivalent and that a new shrink concept was needed to solve the Y/X bias. Based on our new concept, we prepared a new shrink material with a lower Y/X bias and a larger shrink amount than the current NTD shrink material. Finally, we reduced the Y/X bias from 1.6 to 1.1 at an MB of 150°C, while increasing the shrinkage from 10.1 nm to 16.7 nm compared with the current NTD shrink material.
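For concreteness, the bias figures quoted above follow directly from defining the Y/X bias as the ratio of the two shrink amounts; pairing the 10.1 nm figure with the X axis is my reading of the abstract, not something it states explicitly.

```python
def yx_bias(y_shrink_nm: float, x_shrink_nm: float) -> float:
    """Y/X shrinkage bias: shrink along the major axis over the minor axis."""
    return y_shrink_nm / x_shrink_nm

# Current material: Y shrink = 16 nm with X shrink ~ 10.1 nm gives ~1.6.
print(round(yx_bias(16.0, 10.1), 2))
```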
A Parallel and Incremental Approach for Data-Intensive Learning of Bayesian Networks.
Yue, Kun; Fang, Qiyu; Wang, Xiaoling; Li, Jin; Liu, Weiyi
2015-12-01
Bayesian network (BN) has been adopted as the underlying model for representing and inferring uncertain knowledge. As the basis of realistic applications centered on probabilistic inferences, learning a BN from data is a critical subject of machine learning, artificial intelligence, and big data paradigms. Currently, it is necessary to extend the classical methods for learning BNs with respect to data-intensive computing or in cloud environments. In this paper, we propose a parallel and incremental approach for data-intensive learning of BNs from massive, distributed, and dynamically changing data by extending the classical scoring and search algorithm and using MapReduce. First, we adopt the minimum description length as the scoring metric and give the two-pass MapReduce-based algorithms for computing the required marginal probabilities and scoring the candidate graphical model from sample data. Then, we give the corresponding strategy for extending the classical hill-climbing algorithm to obtain the optimal structure, as well as that for storing a BN by
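A toy sketch of the two-pass, MapReduce-style counting and minimum-description-length scoring the approach is built on: mappers count (child value, parent configuration) pairs over data chunks, a reducer merges the partial counts, and the MDL score combines the log-likelihood with a complexity penalty. The tiny two-variable structure A → B and the data are invented for illustration.

```python
import math
from collections import Counter
from functools import reduce

# Toy data chunks over binary variables A, B; candidate structure A -> B.
chunks = [
    [{'A': 0, 'B': 0}, {'A': 1, 'B': 1}],
    [{'A': 1, 'B': 1}, {'A': 0, 'B': 1}, {'A': 1, 'B': 0}],
]

def map_counts(chunk):
    """Map step: count (variable, value, parent configuration) triples."""
    c = Counter()
    for row in chunk:
        c[('B', row['B'], row['A'])] += 1   # child B with parent A
        c[('A', row['A'], None)] += 1       # root node A
    return c

def reduce_counts(c1, c2):
    """Reduce step: merge partial counts from the mappers."""
    return c1 + c2

counts = reduce(reduce_counts, map(map_counts, chunks))

# MDL = -log-likelihood + (log N / 2) * number of free parameters.
n = sum(v for (var, _, _), v in counts.items() if var == 'A')
loglik = 0.0
for (var, val, pa), k in counts.items():
    total = sum(v for (w, _, p), v in counts.items() if w == var and p == pa)
    loglik += k * math.log(k / total)
mdl = -loglik + (math.log(n) / 2) * 3   # 1 free param for A, 2 for B | A
print(mdl)                              # lower MDL = preferred structure
```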
Angelidou, E; Kostoulas, P; Leontides, L
2014-02-01
We validated a commercial (Idexx Pourquier, Montpellier, France) serum and milk indirect ELISA that detects antibodies against Mycobacterium avium ssp. paratuberculosis (MAP) in Greek dairy goats. Each goat was sampled 4 times, starting from kidding and covering early, mid, and late lactation. A total of 1,268 paired milk (or colostrum) and serum samples were collected during the 7-mo lactation period. Bayesian latent class models, which allow for the continuous interpretation of test results, were used to derive the distribution of the serum and milk ELISA response for healthy and MAP-infected individuals at each lactation stage. Both serum and milk ELISA, in all lactation stages, had average and similar overall discriminatory ability as measured by the area under the curve (AUC). For each test, the smallest overlap between the distribution of the healthy and MAP-infected does was in late lactation. At this stage, the AUC was 0.89 (95% credible interval: 0.70; 0.98) and 0.92 (0.74; 0.99) for the milk and serum ELISA, respectively. Both tests had comparable sensitivities and specificities at the recommended cutoffs across lactation. Lowering the cutoffs led to an increase in sensitivity without serious loss in specificity. In conclusion, the milk ELISA was as accurate as the serum ELISA. Therefore, it could serve as the diagnostic tool of choice, especially during the implementation of MAP control programs that require frequent testing, because milk sampling is a noninvasive, rapid, and easy process. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Alsing, Justin; Heavens, Alan; Jaffe, Andrew H.
2017-04-01
We apply two Bayesian hierarchical inference schemes to infer shear power spectra, shear maps and cosmological parameters from the Canada-France-Hawaii Telescope (CFHTLenS) weak lensing survey - the first application of this method to data. In the first approach, we sample the joint posterior distribution of the shear maps and power spectra by Gibbs sampling, with minimal model assumptions. In the second approach, we sample the joint posterior of the shear maps and cosmological parameters, providing a new, accurate and principled approach to cosmological parameter inference from cosmic shear data. As a first demonstration on data, we perform a two-bin tomographic analysis to constrain cosmological parameters and investigate the possibility of photometric redshift bias in the CFHTLenS data. Under the baseline ΛCDM (Λ cold dark matter) model, we constrain S_8 = σ_8(Ω_m/0.3)^{0.5} = 0.67^{+0.03}_{-0.03} (68 per cent), consistent with previous CFHTLenS analyses but in tension with Planck. Adding neutrino mass as a free parameter, we are able to constrain ∑m_ν < 4.6 eV (95 per cent) using CFHTLenS data alone. Including a linear redshift-dependent photo-z bias Δz = p_2(z - p_1), we find p_1 = -0.25^{+0.53}_{-0.60} and p_2 = -0.15^{+0.17}_{-0.15}, and tension with Planck is only alleviated under very conservative prior assumptions. Neither the non-minimal neutrino mass nor the photo-z bias model is significantly preferred by the CFHTLenS (two-bin tomography) data.
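A toy, one-parameter version of the first scheme — Gibbs sampling that alternates between drawing the signal map given the power and the power given the map — for white signal and noise, where both conditionals are available in closed form (Gaussian and inverse-gamma). The spherical-harmonic band-power machinery of the real analysis is replaced here by a single variance parameter, and the Jeffreys prior is my choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "map": white signal of unknown power P on n_pix pixels,
# observed with known white noise of standard deviation sig_n.
n_pix, P_true, sig_n = 500, 4.0, 1.0
d = rng.normal(scale=np.sqrt(P_true), size=n_pix) \
    + rng.normal(scale=sig_n, size=n_pix)

P, samples = 1.0, []
for it in range(2000):
    # Step 1: draw the map given the power (Wiener posterior per pixel).
    var = 1.0 / (1.0 / P + 1.0 / sig_n**2)
    s = rng.normal(loc=var * d / sig_n**2, scale=np.sqrt(var))
    # Step 2: draw the power given the map; with a Jeffreys prior 1/P the
    # conditional is inverse-gamma(n_pix/2, sum(s^2)/2).
    P = 0.5 * np.sum(s**2) / rng.gamma(0.5 * n_pix)
    samples.append(P)

print(np.mean(samples[500:]))   # posterior mean of the power, near P_true
```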
NASA Astrophysics Data System (ADS)
Law, Jane; Quick, Matthew
2013-01-01
This paper adopts a Bayesian spatial modeling approach to investigate the distribution of young offender residences in York Region, Southern Ontario, Canada, at the census dissemination area level. Few geographic studies have analyzed offender (as opposed to offense) data at a large map scale (i.e., using a relatively small areal unit of analysis) to minimize aggregation effects. Providing context is social disorganization theory, which hypothesizes that areas with economic deprivation, high population turnover, and high ethnic heterogeneity exhibit social disorganization and are expected to produce higher numbers of young offenders. Non-spatial and spatial Poisson models indicate that spatial methods are superior to non-spatial models with respect to model fit, and that the index of ethnic heterogeneity, residential mobility (1-year moving rate), and percentage of residents receiving government transfer payments are, respectively, the most significant explanatory variables related to young offender location. These findings provide overwhelming support for social disorganization theory as it applies to offender location in York Region, Ontario. Decomposing the estimated risk map to target areas where the prevalence of young offenders can or cannot be explained by social disorganization is helpful for dealing with juvenile offenders in the region. The results prompt discussion of geographically targeted police services and of young offender placement pertaining to the risk of recidivism. We discuss possible reasons for differences and similarities between previous findings (which analyzed offense data and/or were conducted at a smaller map scale) and ours, the limitations of our study, and practical outcomes of this research from a law enforcement perspective.
NASA Astrophysics Data System (ADS)
Furuhashi, Hiroshi; Aoki, Takerou; Okabe, Sayaka; Arai, Tsuyoshi; Seto, Masahiro; Yamabe, Masashi
L-shape is an important and fundamental shape for injection molded parts; revealing the corner angular deformation mechanism of this shape is therefore valuable for understanding the warpage mechanism of injection molded parts. In this study, we investigated the influence of the filling materials (fiber, talc, and unfilled) and of two kinds of anisotropic shrinkage factors, solidification shrinkage and shrinkage caused by the thermal expansion coefficient during cooling, on the angular deformation of L-shaped specimens, and the following conclusions were obtained: 1) The anisotropic solidification shrinkage between MD and TD and the anisotropic thermal expansion coefficient between MD and TD are considered to cause the angular deformation of L-shaped specimens, but the contribution ratios of these two anisotropies depend on the filling material. 2) The angular deformation of PP and PBT filled with glass fiber is mainly caused by the anisotropic thermal expansion coefficient, whereas that of PP and PBT without filling material is caused by anisotropic solidification shrinkage; both anisotropies contribute to the angular deformation of PP filled with talc. 3) The plate-thickness dependence of the angular deformation of PP filled with talc is a peculiar phenomenon; the plate-thickness dependence of the anisotropic solidification shrinkage of this material, which is also unusual, is considered to have an important influence on it.
Juliana, Philomin; Singh, Ravi P; Singh, Pawan K; Crossa, Jose; Rutkoski, Jessica E; Poland, Jesse A; Bergstrom, Gary C; Sorrells, Mark E
2017-07-01
The leaf spotting diseases in wheat, which include Septoria tritici blotch (STB) caused by Zymoseptoria tritici, Stagonospora nodorum blotch (SNB) caused by Parastagonospora nodorum, and tan spot (TS) caused by Pyrenophora tritici-repentis, pose challenges to breeding programs selecting for resistance. A promising approach that could enable selection prior to phenotyping is genomic selection, which uses genome-wide markers to estimate breeding values (BVs) for quantitative traits. To evaluate this approach for seedling and/or adult plant resistance (APR) to STB, SNB, and TS, we compared the predictive ability of a least-squares (LS) approach with genomic-enabled prediction models including the genomic best linear unbiased predictor (GBLUP), Bayesian ridge regression (BRR), Bayes A (BA), Bayes B (BB), Bayes Cπ (BC), the Bayesian least absolute shrinkage and selection operator (BL), and reproducing kernel Hilbert spaces with markers (RKHS-M), as well as a pedigree-based model (RKHS-P) and RKHS with markers and pedigree (RKHS-MP). We observed that LS gave the lowest prediction accuracies and RKHS-MP the highest. The genomic-enabled prediction models and RKHS-P gave similar accuracies. The increase in accuracy using genomic prediction models over LS was 48%. The mean genomic prediction accuracies were 0.45 for STB (APR), 0.55 for SNB (seedling), 0.66 for TS (seedling) and 0.48 for TS (APR). We also compared markers from two whole-genome profiling approaches, genotyping by sequencing (GBS) and diversity arrays technology sequencing (DArTseq), for prediction. While GBS markers performed slightly better than DArTseq markers, combining markers from the two approaches did not improve accuracies. We conclude that implementing GS in breeding for these diseases would help to achieve higher accuracies and rapid gains from selection. Copyright © 2017 Crop Science Society of America.
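A minimal sketch of the simplest of the listed models, ridge-regression BLUP (the marker-effects form equivalent to GBLUP): all markers are shrunk equally, whereas Bayes A/B/Cπ and the Bayesian LASSO differ mainly in the prior placed on marker effects. Data, dimensions, and the ridge parameter are synthetic and illustrative; in practice accuracy is assessed by cross-validation rather than the in-sample correlation printed here.

```python
import numpy as np

def rrblup_effects(M, y, lam):
    """Ridge estimate of marker effects: (M'M + lam*I)^{-1} M'y."""
    p = M.shape[1]
    return np.linalg.solve(M.T @ M + lam * np.eye(p), M.T @ y)

rng = np.random.default_rng(3)
n_lines, n_markers = 200, 1000
M = rng.choice([-1.0, 0.0, 1.0], size=(n_lines, n_markers))  # genotypes
beta_true = np.zeros(n_markers)
beta_true[:20] = rng.normal(size=20)            # 20 causal markers
y = M @ beta_true + rng.normal(size=n_lines)    # disease severity scores

beta_hat = rrblup_effects(M, y, lam=50.0)
gebv = M @ beta_hat                             # genomic estimated BVs
print(np.corrcoef(gebv, y)[0, 1])               # in-sample fit only
```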
Hardness, density, and shrinkage characteristics of silk-oak from Hawaii
R. L. Youngs
1964-01-01
Shrinkage, specific gravity, and hardness of two shipments of silk-oak (Grevillea robusta) from Hawaii were evaluated to provide basic information pertinent to the use of the wood for cabinet and furniture purposes. The wood resembles Hawaii-grown shamel ash (Fraxinus uhdei ) in the properties evaluated. Shrinkage compares well with that of black cherry, silver maple,...
DOT National Transportation Integrated Search
2011-04-01
The objective of this study was to determine the influence of admixtures on long term drying shrinkage and creep of high strength concrete (HSC). Creep and shrinkage of the mix utilized in segments of the Skyway Structure of the San Francisco-Oak...
Wainwright, Haruko M; Seki, Akiyuki; Chen, Jinsong; Saito, Kimiaki
2017-02-01
This paper presents a multiscale data integration method to estimate the spatial distribution of air dose rates in the regional scale around the Fukushima Daiichi Nuclear Power Plant. We integrate various types of datasets, such as ground-based walk and car surveys, and airborne surveys, all of which have different scales, resolutions, spatial coverage, and accuracy. This method is based on geostatistics to represent spatial heterogeneous structures, and also on Bayesian hierarchical models to integrate multiscale, multi-type datasets in a consistent manner. The Bayesian method allows us to quantify the uncertainty in the estimates, and to provide the confidence intervals that are critical for robust decision-making. Although this approach is primarily data-driven, it has great flexibility to include mechanistic models for representing radiation transport or other complex correlations. We demonstrate our approach using three types of datasets collected at the same time over Fukushima City in Japan: (1) coarse-resolution airborne surveys covering the entire area, (2) car surveys along major roads, and (3) walk surveys in multiple neighborhoods. Results show that the method can successfully integrate three types of datasets and create an integrated map (including the confidence intervals) of air dose rates over the domain in high resolution. Moreover, this study provides us with various insights into the characteristics of each dataset, as well as radiocaesium distribution. In particular, the urban areas show high heterogeneity in the contaminant distribution due to human activities as well as large discrepancy among different surveys due to such heterogeneity. Copyright © 2016 Elsevier Ltd. All rights reserved.
Atai, Mohammad; Ahmadi, Mehdi; Babanzadeh, Samal; Watts, David C
2007-08-01
The aim of the study was to synthesize and characterize an isophorone-based urethane dimethacrylate (IP-UDMA) resin monomer and to investigate its shrinkage and curing kinetics. The IP-UDMA monomer was synthesized through the reaction of polyethylene glycol 400 and isophorone diisocyanate, followed by reaction with HEMA to terminate it with methacrylate end groups. The reaction was followed using a standard back-titration method and FTIR spectroscopy. The final product was purified and characterized using FTIR, (1)H NMR, elemental analysis and refractive index measurement. The shrinkage-strain of specimens photopolymerized at ca. 700 mW/cm² was measured using the bonded-disk technique at 23, 35, and 45 °C. Initial shrinkage-strain rates were obtained by numerical differentiation of the shrinkage-strain data with respect to time. The degree of conversion of the specimens was measured using FTIR spectroscopy. The thermal curing kinetics of the monomer were also studied by differential scanning calorimetry (DSC). The characterization methods confirmed the suggested reaction route and the synthesized monomer. A low shrinkage-strain of about 4% was obtained for the new monomer. The results showed that the shrinkage-strain rate of the monomer followed the autocatalytic model of Kamal and Sourour [Kamal MR, Sourour S. Kinetic and thermal characterization of thermoset cure. Polym Eng Sci 1973;13(1):59-64], which is used to describe the reaction kinetics of thermoset resins. The model parameters were calculated by linearization of the equation. The model prediction was in good agreement with the experimental data. The properties of the new monomer compare favorably with those of commercially available resins.
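The Kamal-Sourour model referenced above writes the cure rate as dα/dt = (k1 + k2·α^m)(1 − α)^n, so the rate peaks at intermediate conversion, mirroring the measured shrinkage-strain-rate maximum. A sketch with invented parameter values (the paper fits its own constants by linearization):

```python
import numpy as np

def kamal_rate(alpha, k1, k2, m, n):
    """Kamal-Sourour autocatalytic cure rate: (k1 + k2*a^m) * (1 - a)^n."""
    return (k1 + k2 * alpha**m) * (1.0 - alpha)**n

k1, k2, m, n = 0.01, 0.5, 1.0, 2.0      # illustrative, not fitted, values
dt, alpha, history = 0.05, 0.0, []
for _ in range(4000):                   # simple Euler integration of alpha(t)
    alpha += dt * kamal_rate(alpha, k1, k2, m, n)
    history.append(alpha)

rates = [kamal_rate(a, k1, k2, m, n) for a in history]
peak = int(np.argmax(rates))
print(history[peak], max(rates))        # rate peaks at intermediate conversion
```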
Time dependence of composite shrinkage using halogen and LED light curing.
Uhl, Alexander; Mills, Robin W; Rzanny, Angelika E; Jandt, Klaus D
2005-03-01
The polymerization shrinkage of light cured dental composites presents the major drawback for these aesthetically adaptable restorative materials. LED based light curing technology has recently become commercially available. Therefore, the aim of the present study was to investigate whether there was a statistically significant difference in linear and volumetric composite shrinkage strain if an LED LCU is used for the light curing process rather than a conventional halogen LCU. The volumetric shrinkage strain was determined using the Archimedes buoyancy principle after 5, 10, 20, and 40 s of light curing, and after a further 120 s following the 40 s light curing period. The linear shrinkage strain was determined with a dynamic mechanical analyzer for the composites Z100, Spectrum, Solitaire2 and Definite polymerized with the LCUs Trilight (halogen), Freelight I (LED) and LED63 (LED LCU prototype). The changes in irradiance and spectra of the LCUs were measured after 0, 312 and 360 min of duty time. In general there was no considerable difference in shrinkage of the composites Z100, Spectrum or Solitaire2 when the LED63 was used instead of the Trilight. There was, however, a statistically significant difference in shrinkage strain when the composite Definite was polymerized with the LED63 instead of the Trilight. The spectrum of the Trilight changed considerably during the experiment, whereas the LED63 showed an almost constant light output. The Freelight I dropped considerably in irradiance and had to be withdrawn from the study because of technical problems. The composites containing only the photoinitiator camphorquinone showed similar shrinkage strain behaviour whether an LED or a halogen LCU was used for the polymerization. The irradiance of some LED LCUs can also decrease over time and should therefore be checked on a regular basis.
Ion density evolution in a high-power sputtering discharge with bipolar pulsing
NASA Astrophysics Data System (ADS)
Britun, N.; Michiels, M.; Godfroid, T.; Snyders, R.
2018-06-01
Time evolution of sputtered metal ions in high power impulse magnetron sputtering (HiPIMS) discharge with a positive voltage pulse applied after a negative one (regime called "bipolar pulse HiPIMS"—BPH) is studied using 2-D density mapping. It is demonstrated that the ion propagation dynamics is mainly affected by the amplitude and duration of the positive pulse. Such effects as ion repulsion from the cathode and the ionization zone shrinkage due to electron drift towards the cathode are clearly observed during the positive pulse. The BPH mode also alters the film crystallographic structure, as observed from X-ray diffraction analysis.
NASA Astrophysics Data System (ADS)
Teall, Oliver; Pilegis, Martins; Sweeney, John; Gough, Tim; Thompson, Glen; Jefferson, Anthony; Lark, Robert; Gardner, Diane
2017-04-01
The shrinkage force exerted by restrained shape memory polymers (SMPs) can potentially be used to close cracks in structural concrete. This paper describes the physical processing and experimental work undertaken to develop high shrinkage die-drawn polyethylene terephthalate (PET) SMP tendons for use within a crack closure system. The extrusion and die-drawing procedure used to manufacture a series of PET tendon samples is described. The results from a set of restrained shrinkage tests, undertaken at differing activation temperatures, are also presented along with the mechanical properties of the most promising samples. The stress developed within the tendons is found to be related to the activation temperature, the cross-sectional area and to the draw rate used during manufacture. Comparisons with commercially-available PET strip samples used in previous research are made, demonstrating an increase in restrained shrinkage stress by a factor of two for manufactured PET filament samples.
Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biros, George
Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model as well as the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10 petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
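A linear-Gaussian toy of the randomized maximum likelihood idea named in area 1: each sample perturbs the data and the prior mean and then solves a deterministic MAP-style optimization, and for a linear forward map this reproduces exact posterior draws. The forward operator, covariances, and sizes below are invented for illustration; in the extreme-scale setting the solve would be a PDE-constrained optimization rather than a dense linear solve.

```python
import numpy as np

rng = np.random.default_rng(4)

# Linear forward model d = G m + noise, with a Gaussian prior on m.
n_d, n_m = 40, 10
G = rng.normal(size=(n_d, n_m))
C_n = 0.1 * np.eye(n_d)                 # noise covariance
C_pr = np.eye(n_m)                      # prior covariance
m_prior = np.zeros(n_m)
m_true = rng.normal(size=n_m)
d = G @ m_true + rng.multivariate_normal(np.zeros(n_d), C_n)

Cn_inv, Cpr_inv = np.linalg.inv(C_n), np.linalg.inv(C_pr)
H = G.T @ Cn_inv @ G + Cpr_inv          # Hessian of the negative log-posterior

def rml_sample():
    """One RML draw: perturb data and prior mean, then solve the MAP problem."""
    d_pert = d + rng.multivariate_normal(np.zeros(n_d), C_n)
    m_pert = m_prior + rng.multivariate_normal(np.zeros(n_m), C_pr)
    return np.linalg.solve(H, G.T @ Cn_inv @ d_pert + Cpr_inv @ m_pert)

samples = np.array([rml_sample() for _ in range(2000)])
print(samples.mean(axis=0))             # approximates the posterior mean
```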
NASA Astrophysics Data System (ADS)
Franco, Ana Paula G. O.; Karam, Leandro Z.; Galvão, José R.; Kalinowski, Hypolito J.
2015-09-01
The aim of the present study was to evaluate the polymerization shrinkage and temperature of different acrylic resins used to splint transfer copings in the indirect impression technique. Two implants were placed in an artificial bone, and the transfer copings were joined with dental floss and acrylic resin; two dental resins were used. Measurements of deformation and temperature were performed with fiber Bragg grating sensors for 17 minutes. The results revealed that one type of resin shows greater polymerization shrinkage than the other; the pattern resins did not present the lower shrinkage values usually reported by the manufacturer.
3D full field strain analysis of polymerization shrinkage in a dental composite.
Martinsen, Michael; El-Hajjar, Rani F; Berzins, David W
2013-08-01
The objective of this research was to study the polymerization shrinkage in a dental composite using 3D digital image correlation (DIC). Using two coupled cameras, digital images were taken of bar-shaped composite (Premise Universal Composite; Kerr) specimens before light curing and for 10 min afterwards. Three-dimensional DIC was used to assess in-plane and out-of-plane deformations associated with polymerization shrinkage. The results show the polymerization shrinkage to be highly variable, with the peak values occurring 0.6-0.8 mm away from the surface. Volumetric shrinkage began to decrease significantly at 3.2 mm from the specimen surface and reached a minimum at 4 mm within the composite. Approximately 25-30% of the strain registered at 5 min occurred after light-activation. 3D DIC can be applied to dental problems without the need for assumptions about the deformation field. Understanding the local deformations and strain fields from the initial polymerization shrinkage can lead to a better understanding of the composite material and its interaction with surrounding tooth structure, aiding further development and clinical prognosis. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Non-Ionotropic NMDA Receptor Signaling Drives Activity-Induced Dendritic Spine Shrinkage.
Stein, Ivar S; Gray, John A; Zito, Karen
2015-09-02
The elimination of dendritic spine synapses is a critical step in the refinement of neuronal circuits during development of the cerebral cortex. Several studies have shown that activity-induced shrinkage and retraction of dendritic spines depend on activation of the NMDA-type glutamate receptor (NMDAR), which leads to influx of extracellular calcium ions and activation of calcium-dependent phosphatases that modify regulators of the spine cytoskeleton, suggesting that influx of extracellular calcium ions drives spine shrinkage. Intriguingly, a recent report revealed a novel non-ionotropic function of the NMDAR in the regulation of synaptic strength, which relies on glutamate binding but is independent of ion flux through the receptor (Nabavi et al., 2013). Here, we tested whether non-ionotropic NMDAR signaling could also play a role in driving structural plasticity of dendritic spines. Using two-photon glutamate uncaging and time-lapse imaging of rat hippocampal CA1 neurons, we show that low-frequency glutamatergic stimulation results in shrinkage of dendritic spines even in the presence of the NMDAR d-serine/glycine binding site antagonist 7-chlorokynurenic acid (7CK), which fully blocks NMDAR-mediated currents and Ca(2+) transients. Notably, application of 7CK or MK-801 also converts spine enlargement resulting from a high-frequency uncaging stimulus into spine shrinkage, demonstrating that strong Ca(2+) influx through the NMDAR normally overcomes a non-ionotropic shrinkage signal to drive spine growth. Our results support a model in which NMDAR signaling, independent of ion flux, drives structural shrinkage at spiny synapses. Dendritic spine elimination is vital for the refinement of neural circuits during development and has been linked to improvements in behavioral performance in the adult. Spine shrinkage and elimination have been widely accepted to depend on Ca(2+) influx through NMDA-type glutamate receptors (NMDARs) in conjunction with long-term depression (LTD) of synaptic strength. Here, we use two-photon glutamate uncaging and time-lapse imaging to show that non-ionotropic NMDAR signaling can drive shrinkage of dendritic spines, independent of NMDAR-mediated Ca(2+) influx. Signaling through p38 MAPK was required for this activity-dependent spine shrinkage. Our results provide fundamental new insights into the signaling mechanisms that support experience-dependent changes in brain structure. Copyright © 2015 the authors.
Risk assessment of mountain infrastructure destabilization in the French Alps
NASA Astrophysics Data System (ADS)
Duvillard, Pierre-Allain; Ravanel, Ludovic; Deline, Philip
2015-04-01
In the current context of imbalance of geosystems in connection with rising air temperatures over several decades, high mountain environments are especially affected by the shrinkage of glaciers and by permafrost degradation, which can trigger slope movements in rock slopes (rockfall, rock avalanches) or in superficial deposits (slides, rock glacier rupture, thermokarst). These processes generate a risk of direct destabilization of high mountain infrastructure (huts, cable-cars...) in addition to indirect risks to people and infrastructure located on the path of moving rock masses. We here focus on the direct risk of infrastructure destabilization due to permafrost degradation and/or glacier shrinkage in the French Alps. To help prevent these risks, an inventory of all the infrastructure was carried out with a GIS using different data layers, among which the Alpine Permafrost Index Map and inventories of the French Alps glaciers in 2006-2009, 1967-1971 and at the end of the Little Ice Age. 1769 infrastructures were identified in areas likely characterized by permafrost and/or possibly affected by glacier shrinkage. An index of the risk of destabilization was built to identify and rank infrastructure at risk. This theoretical risk index includes a characterization of hazards and a diagnosis of vulnerability. The value of the hazard depends on passive factors (topography, lithology, geomorphological context...) and on so-called active factors (the thermal state of the permafrost, and changing constraints on slopes related to glacier shrinkage). The diagnosis of vulnerability was meanwhile established by combining the level of potential damage to the exposed elements with their operational and financial values. The combination of hazard and vulnerability determines a degree of risk of infrastructure destabilization (from low to very high). Field work and several inventories of infrastructure damage were used to validate it. The application of this risk index to infrastructure in the French Alps indicates 999 infrastructures potentially at risk, of which 0.2% are characterized by a very high risk and 4.4% by a high risk of destabilization. The risk affects massifs unequally: 55% of the infrastructure at risk is in the Vanoise massif (Savoie), due to the large number of high-altitude ski resorts in this area. The Mont-Blanc massif (Haute-Savoie) includes only 6.5% of the infrastructure at risk. Furthermore, 71% of the exposed infrastructure are ski-lifts.
Pidlisecky, Adam; Haines, S.S.
2011-01-01
Conventional processing methods for seismic cone penetrometer data present several shortcomings, most notably the absence of a robust velocity model uncertainty estimate. We propose a new seismic cone penetrometer testing (SCPT) data-processing approach that employs Bayesian methods to map measured data errors into quantitative estimates of model uncertainty. We first calculate travel-time differences for all permutations of seismic trace pairs. That is, we cross-correlate each trace at each measurement location with every trace at every other measurement location to determine travel-time differences that are not biased by the choice of any particular reference trace and to thoroughly characterize data error. We calculate a forward operator that accounts for the different ray paths for each measurement location, including refraction at layer boundaries. We then use a Bayesian inversion scheme to obtain the most likely slowness (the reciprocal of velocity) and a distribution of probable slowness values for each model layer. The result is a velocity model that is based on correct ray paths, with uncertainty bounds that are based on the data error. © NRC Research Press 2011.
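A linear-Gaussian sketch of the final step described above: with ray-path lengths per layer collected in a matrix G, travel-time data d, Gaussian data error, and a Gaussian prior on layer slownesses, the posterior mean and per-layer uncertainty follow in closed form. The geometry, error levels, and prior here are invented, and the straight-ray G ignores the refraction handling of the real forward operator.

```python
import numpy as np

rng = np.random.default_rng(5)

# G[i, j] = length of ray i in layer j (straight rays for simplicity; the
# real forward operator traces refracted paths through layer boundaries).
G = np.array([[2.0, 0.0, 0.0],
              [2.0, 2.0, 0.0],
              [2.0, 2.0, 2.0]])
s_true = np.array([0.004, 0.0025, 0.002])       # layer slownesses, s/m
sigma_d = 1e-4                                  # travel-time error, s
d = G @ s_true + rng.normal(scale=sigma_d, size=3)

s0, sigma_s = np.full(3, 0.003), 0.002          # Gaussian prior on slowness

# Closed-form Gaussian posterior for the linear model d = G s + noise.
A = G.T @ G / sigma_d**2 + np.eye(3) / sigma_s**2
s_post = np.linalg.solve(A, G.T @ d / sigma_d**2 + s0 / sigma_s**2)
post_sd = np.sqrt(np.diag(np.linalg.inv(A)))    # per-layer uncertainty
print(s_post, post_sd)
```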
An efficient method for model refinement in diffuse optical tomography
NASA Astrophysics Data System (ADS)
Zirak, A. R.; Khademi, M.
2007-11-01
Diffuse optical tomography (DOT) is a non-linear, ill-posed boundary-value and optimization problem that necessitates regularization. Bayesian methods are also suitable because the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive, overdetermined system of equations whose model error must be refined by model-retrieval criteria, especially total least squares (TLS). TLS, however, is limited to linear systems, which are not obtained when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) to treat the linearized DOT problem with a maximum a posteriori (MAP) estimator and a Tikhonov regularizer. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations, and then applying RTLS to the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves image reconstruction performance and localizes the abnormality well.
A Hierarchical Bayesian Model for Crowd Emotions
Urizar, Oscar J.; Baig, Mirza S.; Barakova, Emilia I.; Regazzoni, Carlo S.; Marcenaro, Lucio; Rauterberg, Matthias
2016-01-01
Estimation of emotions is an essential aspect in developing intelligent systems intended for crowded environments. However, emotion estimation in crowds remains a challenging problem due to the complexity with which human emotions are manifested and the difficulty a system has in perceiving them under such conditions. This paper proposes a hierarchical Bayesian model to learn, in an unsupervised manner, the behavior of individuals and of the crowd as a single entity, and to explore the relation between behavior and emotions to infer emotional states. Information about the motion patterns of individuals is described using a self-organizing map, and a hierarchical Bayesian network builds probabilistic models to identify behaviors and infer the emotional state of individuals and the crowd. This model is trained and tested using data produced from simulated scenarios that resemble real-life environments. The conducted experiments tested the efficiency of our method to learn, detect and associate behaviors with emotional states, yielding accuracy levels of 74% for individuals and 81% for the crowd, similar in performance to existing methods for pedestrian behavior detection but with novel concepts regarding the analysis of crowds. PMID:27458366
Zollanvari, Amin; Dougherty, Edward R
2016-12-01
In classification, prior knowledge is incorporated in a Bayesian framework by assuming that the feature-label distribution belongs to an uncertainty class of feature-label distributions governed by a prior distribution. A posterior distribution is then derived from the prior and the sample data. An optimal Bayesian classifier (OBC) minimizes the expected misclassification error relative to the posterior distribution. From an application perspective, prior construction is critical. The prior distribution is formed by mapping a set of mathematical relations among the features and labels, the prior knowledge, into a distribution governing the probability mass across the uncertainty class. In this paper, we consider prior knowledge in the form of stochastic differential equations (SDEs). We consider a vector SDE in integral form involving a drift vector and dispersion matrix. Having constructed the prior, we develop the optimal Bayesian classifier between two models and examine, via synthetic experiments, the effects of uncertainty in the drift vector and dispersion matrix. We apply the theory to a set of SDEs for the purpose of differentiating the evolutionary history between two species.
Shrinkage strain-rates of dental resin-monomer and composite systems.
Atai, Mohammad; Watts, David C; Atai, Zahra
2005-08-01
The purpose of this study was to investigate the shrinkage strain rates of different monomers commonly used in dental composites, and the effect of monomer functionality and molecular mass on the rate. Bis-GMA, TEGDMA, UDMA, MMA, HEMA, HPMA and different ratios of Bis-GMA/TEGDMA were mixed with camphorquinone and dimethylaminoethyl methacrylate as the initiator system. The shrinkage strain of the samples, photopolymerized at ca. 550 mW/cm² and 23 °C, was measured using the bonded-disk technique of Watts and Cash (Meas. Sci. Technol. 2 (1991) 788-794), and initial shrinkage-strain rates were obtained by numerical differentiation. Shrinkage-strain rates rose rapidly to a maximum, and then fell rapidly upon vitrification. Strain and initial strain rate were dependent upon monomer functionality, molecular mass and viscosity. Strain rates correlated with Bis-GMA content in Bis-GMA/TEGDMA mixtures up to 75-80 w/w%, due to the higher molecular mass of Bis-GMA affecting termination reactions, and then decreased due to its higher viscosity affecting propagation reactions. Monofunctional monomers exhibited lower rates. UDMA, a difunctional monomer of medium viscosity, showed the highest shrinkage strain rate (P < 0.05). The shrinkage strain rate, related to the polymerization rate, is an important factor affecting the biomechanics and marginal integrity of composites cured in dental cavities. This study shows how it is related to monomer molecular structure and viscosity. The results are significant for the production, optimization and clinical application of dental composite restoratives.
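The numerical differentiation step above is straightforward to reproduce; a sketch on a synthetic exponential-rise strain trace (the trace and time constant are invented, and np.gradient is one reasonable differencing choice):

```python
import numpy as np

t = np.linspace(0.0, 60.0, 601)              # time, s
strain = 0.03 * (1.0 - np.exp(-t / 5.0))     # synthetic shrinkage-strain trace

rate = np.gradient(strain, t)                # numerical d(strain)/dt
print(rate.max(), t[rate.argmax()])          # here the initial rate is maximal
```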
Oh, Gye-Jeong; Yun, Kwi-Dug; Lee, Kwang-Min; Lim, Hyun-Pil; Park, Sang-Won
2010-09-01
The purpose of this study was to compare the linear sintering behavior of presintered zirconia blocks of various densities. The mechanical properties of the resulting sintered zirconia blocks were then analyzed. Three experimental groups of dental zirconia blocks, each with a different presintering density, were designed in the present study. Kavo Everest® ZS blanks (Kavo, Biberach, Germany) were used as a control group. The experimental group blocks were fabricated from commercial yttria-stabilized tetragonal zirconia powder (KZ-3YF (SD) Type A, KCM Corporation, Nagoya, Japan). The biaxial flexural strengths, microhardnesses, and microstructures of the sintered blocks were then investigated. The linear sintering shrinkages of the blocks were calculated and compared. Despite their different presintered densities, the sintered blocks of the control and experimental groups showed similar mechanical properties. However, the sintered blocks had different linear sintering shrinkage rates depending on the density of the presintered block: as the density of the presintered block increased, the linear sintering shrinkage decreased. In the experimental blocks, the three sectioned pieces of each block showed different linear shrinkage depending on the area; the tops of the experimental blocks showed the lowest linear sintering shrinkage, whereas the bottoms showed the highest. Within the limitations of this study, the density difference of the presintered zirconia block did not affect the mechanical properties of the sintered zirconia block, but did affect its linear sintering shrinkage.
The effect of mucosal cuff shrinkage around dental implants during healing abutment replacement.
Nissan, J; Zenziper, E; Rosner, O; Kolerman, R; Chaushu, L; Chaushu, G
2015-10-01
Soft tissue shrinkage during the course of restoring dental implants may result in biological and prosthodontic difficulties. This study was conducted to measure the continuous shrinkage of the mucosal cuff around dental implants following removal of the healing abutment, over a period of up to 60 s. Individuals treated with implant-supported fixed partial dentures were included. Implant data (location, type, length, diameter and healing abutment dimensions) were recorded. Mucosal cuff shrinkage following removal of the healing abutments was measured in the bucco-lingual direction at four time points: immediately and after 20, 40 and 60 s. ANOVA was used for statistical analysis. Eighty-seven patients (49 women and 38 men) with a total of 311 implants were evaluated (120 maxilla; 191 mandible; 291 posterior segments; 20 anterior segments). Two hundred and five (66%) implants displayed a thick and 106 (34%) a thin gingival biotype. Time was the sole statistically significant parameter affecting mucosal cuff shrinkage around dental implants (P < 0.001). From time 0 to 20, 40 and 60 s, the mean diameter changed from 4.1 to 4.07, 3.4 and 2.81 mm, respectively, corresponding to shrinkage of 1%, 17% and 31%. The gingival biotype had no statistically significant influence on mucosal cuff shrinkage (P = 0.672). The time required to replace a healing abutment with a prosthetic element should be minimised (up to 20-40 s) to avoid pain, discomfort and misfit. © 2015 John Wiley & Sons Ltd.
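The reported shrinkage percentages follow directly from the mean diameters given above; a quick check:

```python
# Quick check of the reported shrinkage percentages from the mean diameters.
diameters = {0: 4.10, 20: 4.07, 40: 3.40, 60: 2.81}  # time (s) -> mean diameter (mm)
d0 = diameters[0]
for t, d in diameters.items():
    print(f"t = {t:2d} s: {d:.2f} mm, shrinkage {100 * (d0 - d) / d0:.0f}%")
```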
NASA Astrophysics Data System (ADS)
Pattnaik, Rashmi Ranjan
2017-06-01
A finite element analysis (FEA) and an experimental study were conducted on composite beams of repair material and substrate concrete to investigate failures of the composite beam caused by the drying shrinkage of the repair materials. In the FEA, the stress distribution in the composite beam due to two concentrated loads and shrinkage of the repair materials was investigated, in addition to the deflected shape of the composite beam. The stress distributions and load-deflection shapes of the finite element model were examined to aid in the analysis of the experimental findings. Experimentally, mechanical properties such as compressive strength, split tensile strength, flexural strength, and load-deflection curves were studied, in addition to slant shear bond strength, drying shrinkage and the failure patterns of the composite beam specimens. The flexure test was conducted to simulate tensile stress at the interface between the repair material and substrate concrete. The FEA results were used to analyze the experimental results. It was observed that repair materials with low drying shrinkage showed compatible failure in the flexure test of the composite beam and deformed adequately in the load-deflection curves. Composite beams with low drying shrinkage repair materials also showed higher flexural strength than composite beams with higher drying shrinkage repair materials, even though the latter materials themselves were stronger.
Hong S. He; Daniel C. Dey; Xiuli Fan; Mevin B. Hooten; John M. Kabrick; Christopher K. Wikle; Zhaofei. Fan
2007-01-01
In the Midwestern United States, the General Land Office (GLO) survey records provide the only reasonably accurate data source of forest composition and tree species distribution at the time of pre-European settlement (circa late 1800 to early 1850). However, GLO data have two fundamental limitations: coarse spatial resolutions (the square mile section and half mile...
Assessment of Data Fusion Algorithms for Earth Observation Change Detection Processes.
Molina, Iñigo; Martinez, Estibaliz; Morillo, Carmen; Velasco, Jesus; Jara, Alvaro
2016-09-30
In this work, a parametric multi-sensor Bayesian data fusion approach and a Support Vector Machine (SVM) are used for a change detection problem. For this purpose, two sets of SPOT5-PAN images have been used, from which Change Detection Indices (CDIs) are calculated. For minimizing radiometric differences, a methodology based on zonal "invariant features" is suggested. The choice of one CDI over another for a change detection process is a subjective task, as each CDI is more or less sensitive to certain types of change. This idea can likewise be employed to create and improve a "change map" by exploiting the informational content of the CDIs. For this purpose, information metrics such as the Shannon entropy and "specific information" have been used to weight the change and no-change categories contained in a given CDI, and these weights are then introduced into the Bayesian information fusion algorithm. Furthermore, the parameters of the probability density functions (pdfs) that best fit the involved categories have also been estimated. Conversely, these considerations are not necessary for mapping procedures based on the discriminant functions of an SVM. This work has confirmed the capabilities of the probabilistic information fusion procedure under these circumstances.
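As a rough illustration of the entropy-weighting idea described above, the sketch below computes Shannon-entropy weights for a few change-detection indices. The CDI names and category probabilities are invented, and the paper's actual fusion rule is not reproduced.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Three hypothetical CDIs, each summarized by the probability of its
# change / no-change categories (all values invented for illustration).
cdis = {
    "ratio": [0.12, 0.88],
    "difference": [0.25, 0.75],
    "correlation": [0.40, 0.60],
}

# Weight each CDI by the informational content of its category distribution,
# then normalize; weights of this kind could enter a Bayesian fusion rule.
weights = {name: shannon_entropy(p) for name, p in cdis.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name}: weight {w / total:.2f}")
```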
Propagation of the velocity model uncertainties to the seismic event location
NASA Astrophysics Data System (ADS)
Gesret, A.; Desassis, N.; Noble, M.; Romary, T.; Maisons, C.
2015-01-01
Earthquake hypocentre locations are crucial in many domains of application (academic and industrial), as seismic event location maps are commonly used to delineate faults or fractures. The interpretation of these maps depends on location accuracy and on the reliability of the associated uncertainties. The largest contribution to location and uncertainty errors arises because velocity model errors are usually not correctly taken into account. We propose a new Bayesian formulation that properly integrates knowledge of the velocity model into the formulation of the probabilistic earthquake location. In this work, the velocity model uncertainties are first estimated with a Bayesian tomography of active shot data. We implement a Monte Carlo sampling algorithm to generate velocity models distributed according to the posterior distribution. In a second step, we propagate the velocity model uncertainties to the seismic event location in a probabilistic framework. This makes it possible to obtain more reliable hypocentre locations, together with associated uncertainties that account for both picking and velocity model errors. We illustrate the tomography results and the gain in accuracy of earthquake location for two synthetic examples and one real data case study in the context of induced microseismicity.
Garrido, Marta I; Rowe, Elise G; Halász, Veronika; Mattingley, Jason B
2018-05-01
Predictive coding posits that the human brain continually monitors the environment for regularities and detects inconsistencies. It is unclear, however, what effect attention has on expectation processes, as there have been relatively few studies and the results of these have yielded contradictory findings. Here, we employed Bayesian model comparison to adjudicate between 2 alternative computational models. The "Opposition" model states that attention boosts neural responses equally to predicted and unpredicted stimuli, whereas the "Interaction" model assumes that attentional boosting of neural signals depends on the level of predictability. We designed a novel, audiospatial attention task that orthogonally manipulated attention and prediction by playing oddball sequences in either the attended or unattended ear. We observed sensory prediction error responses, with electroencephalography, across all attentional manipulations. Crucially, posterior probability maps revealed that, overall, the Opposition model better explained scalp and source data, suggesting that attention boosts responses to predicted and unpredicted stimuli equally. Furthermore, Dynamic Causal Modeling showed that these Opposition effects were expressed in plastic changes within the mismatch negativity network. Our findings provide empirical evidence for a computational model of the opposing interplay of attention and expectation in the brain.
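Bayesian model comparison of this kind reduces, for two models with known evidences, to a simple posterior probability computation. The sketch below shows the generic calculation; the log-evidence values are hypothetical and not those of the study.

```python
import numpy as np

def posterior_model_probs(log_evidences, priors=None):
    """Posterior probability of each model from its log-evidence,
    assuming equal model priors unless given otherwise."""
    log_z = np.asarray(log_evidences, dtype=float)
    if priors is None:
        priors = np.ones_like(log_z) / log_z.size
    # Subtract the max for numerical stability before exponentiating.
    w = np.exp(log_z - log_z.max()) * priors
    return w / w.sum()

# Hypothetical log-evidences for the Opposition and Interaction models.
p_opp, p_int = posterior_model_probs([-1203.4, -1206.1])
print(f"P(Opposition | data) = {p_opp:.3f}, P(Interaction | data) = {p_int:.3f}")
```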
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator)
1984-01-01
The spatial, geometric, and radiometric qualities of LANDSAT 4 thematic mapper (TM) and multispectral scanner (MSS) data were evaluated by interpreting, through visual and computer means, film and digital products for selected agricultural and forest cover types in California. Multispectral analyses employing Bayesian maximum likelihood, discrete relaxation, and unsupervised clustering algorithms were used to compare the usefulness of TM and MSS data for discriminating individual cover types. Some of the significant results are as follows: (1) for maximizing the interpretability of agricultural and forest resources, TM color composites should contain spectral bands in the visible, near-reflectance infrared, and middle-reflectance infrared regions, namely TM 4 and TM 5, and must contain TM 4 in all cases, even at the expense of excluding TM 5; (2) using enlarged TM film products, planimetric accuracy of mapped points was within 91 meters (RMSE east) and 117 meters (RMSE north); (3) using TM digital products, planimetric accuracy of mapped points was within 12.0 meters (RMSE east) and 13.7 meters (RMSE north); and (4) applying a contextual classification algorithm to TM data provided classification accuracies competitive with Bayesian maximum likelihood.
A Bayesian and Physics-Based Ground Motion Parameters Map Generation System
NASA Astrophysics Data System (ADS)
Ramirez-Guzman, L.; Quiroz, A.; Sandoval, H.; Perez-Yanez, C.; Ruiz, A. L.; Delgado, R.; Macias, M. A.; Alcántara, L.
2014-12-01
We present the Ground Motion Parameters Map Generation (GMPMG) system developed by the Institute of Engineering at the National Autonomous University of Mexico (UNAM). The system delivers estimates of information associated with the social impact of earthquakes, engineering ground motion parameters (gmp), and macroseismic intensity maps. The gmp calculated are peak ground acceleration and velocity (pga and pgv) and response spectral acceleration (SA). The GMPMG relies on real-time data received from strong ground motion stations belonging to UNAM's networks throughout Mexico. Data are gathered via satellite and internet service providers, and managed with the data acquisition software Earthworm. The system is self-contained and can perform all calculations required for estimating gmp and intensity maps due to earthquakes, automatically or manually. Initial data processing, by baseline correcting and removing records containing glitches or low signal-to-noise ratio, is performed. The system then assigns a hypocentral location using first arrivals and a simplified 3D model, followed by a moment tensor inversion, which is performed using a pre-calculated Receiver Green's Tensor (RGT) database for a realistic 3D model of Mexico. A backup system to compute epicentral location and magnitude is in place. Bayesian kriging is employed to combine recorded values with grids of computed gmp. The latter are obtained by using appropriate ground motion prediction equations (for pgv, pga and SA with T=0.3, 0.5, 1 and 1.5 s) and numerical simulations performed in real time using the aforementioned RGT database (for SA with T=2, 2.5 and 3 s). Estimated intensity maps are then computed using SA(T=2 s) to Modified Mercalli Intensity correlations derived for central Mexico. The maps are made available to the institutions in charge of the disaster prevention systems. In order to analyze the accuracy of the maps, we compare them against observations not considered in the computations, and present some examples of recent earthquakes. We conclude that the system provides information with a fair goodness-of-fit against observations. This project is partially supported by DGAPA-PAPIIT (UNAM) project TB100313-RR170313.
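Bayesian kriging proper also models spatial correlation between stations, but its core idea of combining predicted and recorded values can be illustrated at a single grid node by precision weighting. The sketch below is a simplified, illustrative stand-in with invented pga values, not the system's actual algorithm.

```python
import numpy as np

def precision_weighted(pred, var_pred, obs, var_obs):
    """Combine a model prediction and an observation of the same quantity
    by inverse-variance (precision) weighting; returns mean and variance."""
    w_p, w_o = 1.0 / var_pred, 1.0 / var_obs
    mean = (w_p * pred + w_o * obs) / (w_p + w_o)
    return mean, 1.0 / (w_p + w_o)

# Hypothetical pga values (gal) at one grid node: a GMPE prediction and a
# nearby recorded value; all numbers are illustrative.
mean, var = precision_weighted(pred=45.0, var_pred=100.0, obs=62.0, var_obs=25.0)
print(f"updated pga estimate: {mean:.1f} gal (std {np.sqrt(var):.1f})")
```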
Khare, Sangeeta; Drake, Kenneth L.; Lawhon, Sara D.; Nunes, Jairo E. S.; Figueiredo, Josely F.; Rossetti, Carlos A.; Gull, Tamara; Everts, Robin E.; Lewin, Harris. A.; Adams, Leslie Garry
2016-01-01
It has long been a quest in ruminants to understand how two very similar mycobacterial species, Mycobacterium avium ssp. paratuberculosis (MAP) and Mycobacterium avium ssp. avium (MAA), lead to either a chronic persistent infection or a rapid, transient infection, respectively. Here, we hypothesized that when the host immune response is activated by MAP or MAA, the outcome of the infection depends on the early activation of signaling molecules and host temporal gene expression. To test our hypothesis, ligated jejuno-ileal loops including Peyer's patches in neonatal calves were inoculated with PBS, MAP, or MAA. A temporal analysis of the host transcriptome profile was conducted at several times post-infection (0.5, 1, 2, 4, 8 and 12 hours). When comparing the transcriptional responses of calves infected with MAA versus MAP, discordant patterns of mucosal expression were clearly evident, and the number of unique transcripts altered was moderately smaller for MAA-infected tissue than for mucosal tissues infected with MAP. To interpret these complex data, changes in gene expression were further analyzed by dynamic Bayesian analysis. Bayesian network modeling identified mechanistic genes, gene-to-gene relationships, pathways and Gene Ontology (GO) biological processes involved in specific cell activation during infection. MAP and MAA produced significantly different pathway perturbations at 0.5 and 12 hours post inoculation. Inverse processes were observed between the MAP and MAA responses for epithelial cell proliferation, negative regulation of chemotaxis, cell-cell adhesion mediated by integrin and regulation of cytokine-mediated signaling. MAP-inoculated tissue had significantly lower expression of phagocytosis receptors such as the mannose receptor and complement receptors. This study reveals that perturbation of genes and cellular pathways during MAP infection resulted in host evasion: weakening of the mucosal membrane barrier allowing entry into the ileum; inhibition of Ca signaling associated with decreased phagosome-lysosome fusion as well as inhibition of phagocytosis; and a bias toward a Th2 cell immune response accompanied by cell recruitment, cell proliferation and cell differentiation, leading to persistent infection. In contrast, MAA infection elicited cellular responses associated with activation of molecular pathways that release chemicals and cytokines involved in containment of infection, and a strong bias toward a Th1 immune response, resulting in a transient infection. PMID:27653506
Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations
Chaspari, Theodora; Tsiartas, Andreas; Tsilifis, Panagiotis; Narayanan, Shrikanth
2016-01-01
Parametric dictionaries can increase the ability of sparse representations to meaningfully capture and interpret the underlying signal information, such as that encountered in biomedical problems. Given a mapping function from the atom parameter space to the actual atoms, we propose a sparse Bayesian framework for learning the atom parameters, because of its ability to provide full posterior estimates, take uncertainty into account and generalize to unseen data. Inference is performed with Markov chain Monte Carlo, using block sampling to generate the variables of the Bayesian problem. Since the parameterization of dictionary atoms results in posteriors that cannot be computed analytically, we use a Metropolis-Hastings-within-Gibbs framework: variables with closed-form posteriors are generated with the Gibbs sampler, while the remaining ones are generated by Metropolis-Hastings steps from appropriate candidate-generating densities. We further show that the corresponding Markov chain is uniformly ergodic, ensuring its convergence to a stationary distribution independently of the initial state. Results on synthetic data and real biomedical signals indicate that our approach offers advantages in terms of signal reconstruction compared to the previously proposed Steepest Descent and Equiangular Tight Frame methods. This paper demonstrates the ability of Bayesian learning to generate parametric dictionaries that can reliably represent the exemplar data and provides the foundation for inferring the entire variable set of the sparse approximation problem for signal denoising, adaptation and other applications. PMID:28649173
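The Metropolis-Hastings-within-Gibbs scheme can be illustrated on a toy problem. The sketch below is not the paper's dictionary-learning sampler; it merely shows the general pattern, with a closed-form conditional handled by a Gibbs step and the remaining parameter by a random-walk Metropolis-Hastings step. All data and tuning values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.5, size=200)   # synthetic data (illustrative)
n = y.size

def log_post_sigma(sigma, mu):
    """Log posterior of sigma given mu (flat prior on log sigma)."""
    if sigma <= 0:
        return -np.inf
    return -n * np.log(sigma) - np.sum((y - mu) ** 2) / (2 * sigma**2)

mu, sigma = 0.0, 1.0
samples = []
for it in range(5000):
    # Gibbs step: mu | sigma has a closed-form normal posterior
    # (flat prior on mu, for simplicity).
    mu = rng.normal(y.mean(), sigma / np.sqrt(n))
    # Metropolis-Hastings step: sigma | mu via a random-walk proposal.
    prop = sigma + rng.normal(0.0, 0.1)
    if np.log(rng.uniform()) < log_post_sigma(prop, mu) - log_post_sigma(sigma, mu):
        sigma = prop
    samples.append((mu, sigma))

mu_s, sigma_s = np.array(samples[1000:]).T   # discard burn-in
print(f"posterior means: mu = {mu_s.mean():.2f}, sigma = {sigma_s.mean():.2f}")
```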
Tsujimoto, Akimasa; Barkmeier, Wayne W; Takamizawa, Toshiki; Latta, Mark A; Miyazaki, Masashi
2017-03-31
The purpose of this study was to investigate the depth of cure, flexural properties and volumetric shrinkage of low- and high-viscosity bulk-fill giomers and resin composites. Depth of cure and flexural properties were determined according to ISO 4049, and volumetric shrinkage was measured using a dilatometer. The depths of cure of giomers were significantly lower than those of resin composites, regardless of photopolymerization time. No difference in flexural strength or modulus was found among either the high- or low-viscosity bulk-fill materials. Volumetric shrinkage of the low- and high-viscosity bulk-fill resin composites was significantly less than that of the low- and high-viscosity giomers. Depth of cure of both low- and high-viscosity bulk-fill materials is time dependent. Flexural strength and modulus of high- or low-viscosity bulk-fill giomer and resin composite materials do not differ within their respective categories. Resin composites exhibited less polymerization shrinkage than giomers.
HELP: XID+, the probabilistic de-blender for Herschel SPIRE maps
NASA Astrophysics Data System (ADS)
Hurley, P. D.; Oliver, S.; Betancourt, M.; Clarke, C.; Cowley, W. I.; Duivenvoorden, S.; Farrah, D.; Griffin, M.; Lacey, C.; Le Floc'h, E.; Papadopoulos, A.; Sargent, M.; Scudder, J. M.; Vaccari, M.; Valtchanov, I.; Wang, L.
2017-01-01
We have developed a new prior-based source extraction tool, XID+, to carry out photometry in the Herschel SPIRE (Spectral and Photometric Imaging Receiver) maps at the positions of known sources. XID+ is built on a probabilistic Bayesian framework that provides a natural way to include prior information, and uses the Bayesian inference tool Stan to obtain the full posterior probability distribution on flux estimates. In this paper, we discuss the details of XID+ and demonstrate its basic capabilities and performance by running it on simulated SPIRE maps resembling the COSMOS field, and comparing it to the current prior-based source extraction tool DESPHOT. Not only do we show that XID+ performs better on metrics such as flux accuracy and flux uncertainty accuracy, but we also illustrate how obtaining the posterior probability distribution can help overcome some of the issues inherent in maximum-likelihood-based source extraction routines. We run XID+ on the COSMOS SPIRE maps from the Herschel Multi-Tiered Extragalactic Survey, using a 24-μm catalogue as a positional prior and a uniform flux prior ranging from 0.01 to 1000 mJy. We show the marginalized SPIRE colour-colour plot and the marginalized contribution to the cosmic infrared background at the SPIRE wavelengths. XID+ is a core tool arising from the Herschel Extragalactic Legacy Project (HELP), and we discuss how additional work within HELP providing prior information on fluxes can and will be utilized. The software is available at https://github.com/H-E-L-P/XID_plus. We also provide the data product for COSMOS. We believe this is the first time that the full posterior probability of galaxy photometry has been provided as a data product.
NASA Astrophysics Data System (ADS)
Öktem, H.
2012-01-01
Plastic injection molding plays a key role in the production of high-quality plastic parts. Shrinkage is one of the most significant quality problems of a plastic part in plastic injection molding. This article focuses on modeling and analyzing the effects of process parameters on shrinkage by evaluating the quality of the plastic part of a DVD-ROM cover made of acrylonitrile butadiene styrene (ABS) polymer. An effective regression model was developed to determine the mathematical relationship between the process parameters (mold temperature, melt temperature, injection pressure, injection time, and cooling time) and the volumetric shrinkage by utilizing the analysis data. Finite element (FE) analyses designed by a Taguchi (L27) orthogonal array were run in the Moldflow simulation program. Analysis of variance (ANOVA) was then performed to check the adequacy of the regression model and to determine the effect of the process parameters on shrinkage. Experiments were conducted to verify the accuracy of the regression model against the FE analyses obtained from Moldflow. The results show that the regression model agrees very well with the FE analyses and the experiments. From this, it can be concluded that this study succeeded in modeling the shrinkage problem in our application.
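A first-order regression model of this kind can be fitted by ordinary least squares. The sketch below uses an invented design in the five process parameters and invented shrinkage responses; it illustrates the model form only, not the study's fitted coefficients.

```python
import numpy as np

# Hypothetical design: each row is (mold temp degC, melt temp degC,
# injection pressure MPa, injection time s, cooling time s); responses are
# volumetric shrinkage (%). All values are invented for illustration.
X = np.array([
    [50, 230, 80, 1.0, 15],
    [50, 240, 90, 1.5, 20],
    [60, 230, 90, 2.0, 15],
    [60, 250, 80, 1.5, 25],
    [70, 240, 80, 2.0, 20],
    [70, 250, 90, 1.0, 25],
    [50, 250, 80, 2.0, 20],
    [70, 230, 90, 1.5, 15],
], dtype=float)
y = np.array([5.2, 4.8, 4.5, 4.9, 4.3, 4.6, 4.7, 4.4])

# Ordinary least squares fit of a first-order regression model.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and coefficients:", np.round(coef, 4))
print("predicted shrinkage:", np.round(A @ coef, 2))
```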
Geosynthetic clay liners shrinkage under simulated daily thermal cycles.
Sarabadani, Hamid; Rayhani, Mohammad T
2014-06-01
Geosynthetic clay liners are used as part of composite liner systems in municipal solid waste landfills and other applications to restrict the escape of contaminants into the surrounding environment. This is attainable provided that the geosynthetic clay liner panels continuously cover the subsoil. Previous case histories, however, have shown that some geosynthetic clay liner panels are prone to significant shrinkage and separation when an overlying geomembrane is exposed to solar radiation. Experimental models were initiated to evaluate the potential shrinkage of different geosynthetic clay liner products placed over sand and clay subsoils, subjected to simulated daily thermal cycles (60°C for 8 hours and 22°C for 16 hours) modelling field conditions in which the liner is exposed to solar radiation. The variation of geosynthetic clay liner shrinkage was evaluated at specified times by a photogrammetry technique. The manufacturing techniques, the initial moisture content, and the aspect ratio (ratio of length to width) of the geosynthetic clay liner were found to considerably affect the shrinkage of geosynthetic clay liners. The particle size distribution of the subsoil and the associated suction at the geosynthetic clay liner-subsoil interface was also found to have significant effects on the shrinkage of the geosynthetic clay liner. © The Author(s) 2014.
Equivalent Young's modulus of composite resin for simulation of stress during dental restoration.
Park, Jung-Hoon; Choi, Nak-Sam
2017-02-01
For shrinkage stress simulation in dental restoration, the elastic properties of composite resins should be acquired beforehand. This study proposes a formula to measure the equivalent Young's modulus of a composite resin through a calculation scheme for the shrinkage stress in dental restoration. Two types of composite resins with markedly different polymerization shrinkage strains were used for experimental verification: a methacrylate type (Clearfil AP-X) and a silorane type (Filtek P90). The linear shrinkage strains of the composite resins were obtained through the bonded-disk method. A formula to calculate the equivalent Young's modulus of a composite resin was derived on the basis of the restored ring substrate. Equivalent Young's moduli were measured for the two types of composite resins through the formula. These values were applied as input to a finite element analysis (FEA) for validation of the calculated shrinkage stress. Both of the moduli measured through the formula were appropriate for stress simulation of dental restoration, in that the shrinkage stresses calculated by the FEA agreed with the experimental values to within 3.5%. The concept of equivalent Young's modulus measured in this way could be applied to stress simulation of 2D and 3D dental restoration. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Shrinkage modeling of concrete reinforced by palm fibres in hot dry environments
NASA Astrophysics Data System (ADS)
Akchiche, Hamida; Kriker, Abdelouahed
2017-02-01
Cementitious materials such as concrete and conventional mortar offer very little resistance to tension and cracking; these hydraulic materials undergo large shrinkage, which induces cracking in structures. In hot dry environments, such as the Saharan regions of Algeria, concrete structures are particularly fragile and exhibit high shrinkage. Reinforcing these materials with fibres can provide technical solutions for improving their mechanical performance. The aim of this study is, first, to reduce the shrinkage of conventional concrete by reinforcing it with date palm fibres: Algeria has extraordinary resources of natural fibres (from palm, abaca, hemp) that remain unexploited in practical areas, especially in building materials. The second aim is to model the shrinkage behavior of concrete reinforced with date palm fibres. The literature offers several models for steel fibre concrete but few for natural fibre concretes; here, the Young-Chern model for steel fibre concrete was used. According to the results, reinforcement with date palm fibres reduced shrinkage, and the shrinkage of date palm reinforced concrete was well described by the modified Young-Chern model, with a good correlation between the experimental data and the model.
Some Issues of Shrinkage-Reducing Admixtures Application in Alkali-Activated Slag Systems
Bílek, Vlastimil; Kalina, Lukáš; Novotný, Radoslav; Tkacz, Jakub; Pařízek, Ladislav
2016-01-01
Significant drying shrinkage is one of the main limitations to the wider utilization of alkali-activated slag (AAS). A few previous works have shown that it is possible to reduce AAS drying shrinkage by the use of shrinkage-reducing admixtures (SRAs). However, these studies mainly considered SRAs based on polypropylene glycol, whereas, as shown in this paper, the behavior of an SRA based on 2-methyl-2,4-pentanediol can be significantly different. While 0.25% and 0.50% had only a minor effect on the AAS properties, 1.0% of this SRA reduced the drying shrinkage of waterglass-activated slag mortar by more than 80%, but at the same time greatly reduced early strengths. This behavior was further studied by isothermal calorimetry, mercury intrusion porosimetry (MIP) and scanning electron microscopy (SEM). Calorimetric experiments showed that 1% of SRA modified the second peak of the pre-induction period and delayed the maximum of the main hydration peak by several days, which corresponds well with the observed strength development as well as with the MIP and SEM results. These observations demonstrate a certain incompatibility of the SRA with the studied AAS system, because the drying shrinkage reduction was induced by strong retardation of hydration, resulting in a coarsening of the pore structure, rather than by the proper function of the SRA. PMID:28773584
Digital outlines and topography of the glaciers of the American West
Fountain, Andrew G.; Hoffman, Matthew; Jackson, Keith; Basagic, Hassan; Nylen, Thomas; Percy, David
2007-01-01
Alpine glaciers have generally receded during the past century (post-“Little Ice Age”) because of climate warming (Oerlemans and others, 1998; Mann and others, 1999; Dyurgerov and Meier, 2000; Grove, 2001). This general retreat has accelerated since the mid 1970s, when a shift in atmospheric circulation occurred (McCabe and Fountain, 1995; Dyurgerov and Meier, 2000). The loss in glacier cover has had several profound effects. First, the shrinkage of glaciers results in a net increase in stream flow, typically in late summer when water supplies are at the lowest levels (Fountain and Tangborn, 1985). This additional water is important to ecosystems (Hall and Fagre, 2003) and to human water needs (Tangborn, 1980). However, if shrinkage continues, the net contribution to stream flow will diminish, and the effect upon these beneficiaries will be adverse. Glacier shrinkage is also a significant factor in current sea level rise (Meier, 1984; Dyurgerov and Meier, 2000). Second, many of the glaciers in the West Coast States are located on stratovolcanoes, and continued recession will leave oversteepened river valleys. These valleys, once buttressed by ice, are now subject to failure, creating conditions for lahars (Walder and Driedger, 1994; O’Connor and others, 2001). Finally, the reduction or loss of glaciers reduces or eliminates glacial activity as an important geomorphic process in landscape evolution and alters erosion rates in high alpine areas (Hallet and others, 1996). Because of the importance of glaciers to studies of climate change, hazards, and landscape modification, glacier inventories have been published for Alaska (Manley, in press), China (http://wdcdgg.westgis.ac.cn/DATABASE/Glacier/Glacier.asp), Nepal (Mool and others, 2001), Switzerland (Paul and others, 2002), and the Tyrolian Alps of Austria (Paul, 2002), among other locales. To provide the necessary data for assessing the magnitude and rate of glacier change in the American West, exclusive of Alaska (fig. 1), we are constructing a geographic information system (GIS) database. The data on glacier location and change will be derived from maps, ground-based photographs, and aerial and satellite images. Our first step, reported here, is the compilation of a glacier inventory of the American West. The inventory is compiled from the 1:100,000 (100K) and 1:24,000 (24K)-scale topographic maps published by the U.S. Geological Survey (USGS) and U.S. Forest Service (USFS). The 24K-scale maps provide the most detailed mapping of perennial snow and ice features. This report informs users about the challenges we faced in compiling the data and discusses its errors and uncertainties. We rely on the expertise of the original cartographers in distinguishing “permanent snow and ice” from seasonal snow, although we know, through personal experience, of cartographic misjudgments. Whether “permanent” means indefinite or resident for several years is impossible to determine within the scope of this study. We do not discriminate between “glacier,” defined as permanent snow or ice that moves (Paterson, 1994), and stagnant snow and ice features. Therefore, we leave to future users the final determination of seasonal versus permanent snow features and the discrimination between true glaciers and stagnant snow and ice bodies. We believe that future studies of more regional focus and knowledge can most accurately refine our initial inventory.
For simplicity we refer to all snow and ice bodies in this report as glaciers, although we recognize that most probably do not strictly meet the requirements; many may be snow patches.
Robust adhesive precision bonding in automated assembly cells
NASA Astrophysics Data System (ADS)
Müller, Tobias; Haag, Sebastian; Bastuck, Thomas; Gisler, Thomas; Moser, Hansruedi; Uusimaa, Petteri; Axt, Christoph; Brecher, Christian
2014-03-01
Diode lasers are gaining importance, making their way to higher output powers along with improved BPP. The assembly of micro-optics for diode laser systems entails the highest requirements regarding assembly precision. Assembly costs for micro-optics are driven by the submicron alignment requirements and the corresponding challenges induced by adhesive bonding. For micro-optic assembly tasks, a major challenge in adhesive bonding at the highest precision level is that the bonding process is irreversible; accordingly, the first bonding attempt needs to be successful. Today's UV-curing adhesives exhibit shrinkage effects that are critical for the submicron tolerances of, e.g., FACs. The impact of the shrinkage effects can be tackled by a suitable bonding area design, such as minimal adhesive gaps and a shrinkage offset value adapted to the specific assembly parameters. Compensating shrinkage effects is difficult, as the shrinkage of UV-curing adhesives is not constant between two different lots and, as first test results indicate, varies over the storage period even under ideal circumstances. An up-to-date characterization of the adhesive therefore appears necessary for maximum precision in optics assembly, to reach the highest output yields, minimal tolerances and ideal beam-shaping results. A measurement setup to precisely determine the current level of shrinkage has therefore been set up. The goal is to provide the necessary information on current shrinkage to the operator or assembly cell, so that the compensation offset can be adjusted on a daily basis. This information is expected to yield improved beam-shaping results and first-time-right production.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wainwright, Haruko M.; Flores Orozco, Adrian; Bucker, Matthias
In floodplain environments, a naturally reduced zone (NRZ) is considered to be a common biogeochemical hot spot, having distinct microbial and geochemical characteristics. Although important for understanding their role in mediating floodplain biogeochemical processes, mapping the subsurface distribution of NRZs over the dimensions of a floodplain is challenging, as conventional wellbore data are typically spatially limited and the distribution of NRZs is heterogeneous. In this work, we present an innovative methodology for the probabilistic mapping of NRZs within a three-dimensional (3-D) subsurface domain using induced polarization imaging, which is a noninvasive geophysical technique. Measurements consist of surface geophysical surveys and drilling-recovered sediments at the U.S. Department of Energy field site near Rifle, CO (USA). Inversion of surface time domain-induced polarization (TDIP) data yielded 3-D images of the complex electrical resistivity, in terms of magnitude and phase, which are associated with mineral precipitation and other lithological properties. By extracting the TDIP data values colocated with wellbore lithological logs, we found that the NRZs have a different distribution of resistivity and polarization from the other aquifer sediments. To estimate the spatial distribution of NRZs, we developed a Bayesian hierarchical model to integrate the geophysical and wellbore data. In addition, the resistivity images were used to estimate hydrostratigraphic interfaces under the floodplain. Validation results showed that the integration of electrical imaging and wellbore data using a Bayesian hierarchical model was capable of mapping spatially heterogeneous interfaces and NRZ distributions, thereby providing a minimally invasive means to parameterize a hydrobiogeochemical model of the floodplain.
2014-01-01
Automatic reconstruction of metabolic pathways for an organism from genomics and transcriptomics data has been a challenging and important problem in bioinformatics. Traditionally, known reference pathways can be mapped onto organism-specific ones based on genome annotation and protein homology. However, this simple knowledge-based mapping method can produce incomplete pathways and generally cannot predict unknown new relations and reactions. In contrast, ab initio metabolic network construction methods can predict novel reactions and interactions, but their accuracy tends to be low, leading to many false positives. Here we combine existing pathway knowledge and a new ab initio Bayesian probabilistic graphical model in a novel fashion to improve the automatic reconstruction of metabolic networks. Specifically, we built a knowledge database containing known individual gene/protein interactions and metabolic reactions extracted from existing reference pathways. Known reactions and interactions were then used as constraints for Bayesian network learning methods to predict metabolic pathways. Using individual reactions and interactions extracted from different pathways of many organisms to guide pathway construction is new and improves both the coverage and accuracy of metabolic pathway construction. We applied this probabilistic knowledge-based approach to construct metabolic networks from yeast gene expression data and compared its results with 62 known metabolic networks in the KEGG database. The experiments showed that the method improved the coverage of metabolic network construction over the traditional reference pathway mapping method and was more accurate than pure ab initio methods. PMID:25374614
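As a toy illustration of combining pathway knowledge with data-driven inference, the sketch below treats known interactions as hard constraints and admits additional edges only when a simple mutual-information score supports them. The genes, data and threshold are invented, and the paper's actual Bayesian network learner is considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary expression data for four genes (invented): g0 regulates g1,
# and g1 together with g2 regulates g3.
n = 500
g0 = rng.integers(0, 2, n)
g1 = (g0 ^ (rng.random(n) < 0.1)).astype(int)
g2 = rng.integers(0, 2, n)
g3 = ((g1 & g2) ^ (rng.random(n) < 0.1)).astype(int)
data = {"g0": g0, "g1": g1, "g2": g2, "g3": g3}

def mutual_info(x, y):
    """Mutual information (nats) between two binary variables."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

# Known interactions from reference pathways act as hard constraints;
# remaining candidate edges are kept only with sufficient data support.
# Note that a purely data-driven criterion also picks up the indirect
# g0-g3 link: the kind of ab initio false positive that pathway
# knowledge helps control in practice.
known = {("g0", "g1")}
edges = set(known)
names = list(data)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if (a, b) not in known and mutual_info(data[a], data[b]) > 0.05:
            edges.add((a, b))
print("predicted network edges:", sorted(edges))
```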
Bayesian analysis of anisotropic cosmologies: Bianchi VIIh and WMAP
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Josset, T.; Feeney, S. M.; Peiris, H. V.; Lasenby, A. N.
2013-12-01
We perform a definitive analysis of Bianchi VIIh cosmologies with Wilkinson Microwave Anisotropy Probe (WMAP) observations of the cosmic microwave background (CMB) temperature anisotropies. Bayesian analysis techniques are developed to study anisotropic cosmologies using full-sky and partial-sky masked CMB temperature data. We apply these techniques to analyse the full-sky internal linear combination (ILC) map and a partial-sky masked W-band map of WMAP 9 yr observations. In addition to the physically motivated Bianchi VIIh model, we examine phenomenological models considered in previous studies, in which the Bianchi VIIh parameters are decoupled from the standard cosmological parameters. In the two phenomenological models considered, Bayes factors of 1.7 and 1.1 units of log-evidence favouring a Bianchi component are found in full-sky ILC data. The corresponding best-fitting Bianchi maps recovered are similar for both phenomenological models and are very close to those found in previous studies using earlier WMAP data releases. However, no evidence for a phenomenological Bianchi component is found in the partial-sky W-band data. In the physical Bianchi VIIh model, we find no evidence for a Bianchi component: WMAP data thus do not favour Bianchi VIIh cosmologies over the standard Λ cold dark matter (ΛCDM) cosmology. It is not possible to discount Bianchi VIIh cosmologies in favour of ΛCDM completely, but we are able to constrain the vorticity of physical Bianchi VIIh cosmologies to (ω/H)₀ < 8.6 × 10⁻¹⁰ with 95 per cent confidence.
Onwude, Daniel I; Hashim, Norhashila; Abdan, Khalina; Janius, Rimfiel; Chen, Guangnan
2018-03-01
Drying is a method used to preserve agricultural crops. During the drying of products with high moisture content, structural changes in shape, volume, area, density and porosity occur. These changes can affect the final quality of the dried product and also the effective design of drying equipment. This study therefore investigated a novel approach for monitoring and predicting the shrinkage of sweet potato during drying. Drying experiments were conducted at temperatures of 50-70 °C and sample thicknesses of 2-6 mm. The volume and surface area obtained from camera vision, and the perimeter and illuminated area from backscattered optical images, were analysed and used to evaluate the shrinkage of sweet potato during drying. The relationship between dimensionless moisture content and shrinkage of sweet potato in terms of volume, surface area, perimeter and illuminated area was found to be linear. The results also demonstrated that the shrinkage of sweet potato based on computer vision and backscattered optical parameters is affected by product thickness, drying temperature and drying time. A multilayer perceptron (MLP) artificial neural network with an input layer of three cells, two hidden layers (18 neurons) and an output layer of five cells was used to develop a model that can monitor, control and predict the shrinkage parameters and moisture content of sweet potato slices under different drying conditions. The developed ANN model satisfactorily predicted the shrinkage and dimensionless moisture content of sweet potato, with correlation coefficients greater than 0.95. Combined computer vision, laser light backscattering imaging and artificial neural networks can be used as a non-destructive, rapid and easily adaptable technique for in-line monitoring, prediction and control of the shrinkage and moisture changes of food and agricultural crops during drying. © 2017 Society of Chemical Industry.
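A network with the architecture described above can be sketched as follows, assuming scikit-learn is available and interpreting the two hidden layers as having 18 neurons each; the training data are synthetic stand-ins, since the study's measurements are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training data (invented): inputs are drying temperature (degC),
# slice thickness (mm) and drying time (min); the five outputs stand in for
# volume, surface area, perimeter, illuminated area and moisture content.
X = np.column_stack([
    rng.uniform(50, 70, 300),    # temperature
    rng.uniform(2, 6, 300),      # thickness
    rng.uniform(0, 300, 300),    # time
])
decay = np.exp(-X[:, 2] / (50.0 + 5.0 * X[:, 1]) * (X[:, 0] / 60.0))
Y = np.column_stack([decay, decay**0.7, decay**0.5, decay**0.6, decay]) \
    + rng.normal(0, 0.02, (300, 5))

# Two hidden layers of 18 neurons, mirroring the architecture described above.
model = MLPRegressor(hidden_layer_sizes=(18, 18), max_iter=5000, random_state=0)
model.fit(X, Y)
print("R^2 on training data:", round(model.score(X, Y), 3))
```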
Kireeva, N; Baskin, I I; Gaspar, H A; Horvath, D; Marcou, G; Varnek, A
2012-04-01
Here, the utility of Generative Topographic Maps (GTM) for data visualization, structure-activity modeling and database comparison is evaluated using subsets of the Database of Useful Decoys (DUD). Unlike other popular dimensionality reduction approaches such as Principal Component Analysis, Sammon Mapping or Self-Organizing Maps, the great advantage of GTMs is that they provide data probability distribution functions (PDFs), both in the high-dimensional space defined by the molecular descriptors and in the 2D latent space. PDFs for the molecules of different activity classes were successfully used to build classification models in the framework of the Bayesian approach. Because the PDFs are represented by a mixture of Gaussian functions, the Bhattacharyya kernel has been proposed as a measure of the overlap of datasets, which leads to an elegant method for the global comparison of chemical libraries. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
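For two Gaussian densities the Bhattacharyya kernel has a closed form; for the Gaussian mixtures produced by a GTM it can be assembled from such component-pair terms. The single-Gaussian case is sketched below with invented parameters.

```python
import numpy as np

def bhattacharyya_kernel(mu1, cov1, mu2, cov2):
    """Bhattacharyya kernel exp(-D_B) between two Gaussian densities."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    d_b = 0.125 * diff @ np.linalg.solve(cov, diff) + 0.5 * np.log(
        np.linalg.det(cov) / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))
    )
    return np.exp(-d_b)

# Two hypothetical single-Gaussian PDFs in a 2-D latent space.
k = bhattacharyya_kernel(
    np.array([0.0, 0.0]), np.eye(2),
    np.array([1.0, 0.5]), 1.5 * np.eye(2),
)
print(f"dataset overlap (Bhattacharyya kernel): {k:.3f}")
```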
NASA Astrophysics Data System (ADS)
Rizzo, D. M.; Fytilis, N.; Stevens, L.
2012-12-01
Environmental managers are increasingly required to monitor and forecast the long-term effects and vulnerability of biophysical systems to human-generated stresses. Ideally, a study involving both physical and biological assessments conducted concurrently (in space and time) could provide a better understanding of the mechanisms and complex relationships. However, the costs and resources associated with monitoring the complex linkages between the physical, geomorphic and habitat conditions and the biological integrity of stream reaches are prohibitive. Researchers have used classification techniques to place individual streams and rivers into a broader spatial context (hydrologic or health condition). Such efforts require environmental managers to gather multiple forms of information: quantitative, qualitative and subjective. We research and develop a novel classification tool that combines self-organizing maps with a Naïve Bayesian classifier to direct resources to the stream reaches most in need. The Vermont Agency of Natural Resources has developed and adopted protocols for physical stream geomorphic and habitat assessments throughout the state of Vermont. Separately from these assessments, the Vermont Department of Environmental Conservation monitors the biological communities and water quality in streams. Our initial hypothesis is that the geomorphic reach assessments and water quality data may be leveraged to reduce the error and uncertainty associated with predictions of biological integrity and stream health. We test our hypothesis using over 2500 Vermont stream reaches (~1371 stream miles) assessed by the two agencies. In this work, we combine a Naïve Bayesian classifier with a modified Kohonen Self-Organizing Map (SOM). The SOM is an unsupervised artificial neural network that autonomously analyzes inherent dataset properties using input data only; it is typically used to cluster data into similar categories when a priori classes do not exist. The Bayesian classifier allows existing knowledge and expert opinion to be incorporated explicitly into the data analysis. Since classification plays a leading role in the future development of data-enabled science and engineering, such a computational tool is applicable to a variety of proactive adaptive watershed management applications.
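A minimal Kohonen SOM of the kind used here can be written in a few lines. The sketch below clusters invented stream-reach feature vectors on a small map, with the Naïve Bayesian stage omitted; map size, decay schedules and features are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stream-reach features (invented): e.g. normalized geomorphic and
# water-quality scores for 200 reaches.
X = rng.random((200, 3))

# A 5x5 Kohonen self-organizing map trained with a shrinking neighbourhood.
grid = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
W = rng.random((25, 3))
for t in range(2000):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
    lr = 0.5 * np.exp(-t / 1000)                  # learning-rate decay
    radius = 2.5 * np.exp(-t / 1000)              # neighbourhood decay
    dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
    h = np.exp(-dist2 / (2 * radius**2))          # neighbourhood weights
    W += lr * h[:, None] * (x - W)

# Each reach is assigned to the cluster of its best-matching unit.
clusters = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
print("reaches per occupied SOM node:", np.bincount(clusters, minlength=25))
```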
Climate change impacts on glaciers and runoff in Tien Shan (Central Asia)
NASA Astrophysics Data System (ADS)
Sorg, A. F.; Bolch, T.; Stoffel, M.; Solomina, O.; Beniston, M.
2012-12-01
Climate-driven changes in glacier-fed streamflow regimes have direct implications on freshwater supply, irrigation and hydropower potential. Reliable information about current and future glaciation and runoff is crucial for water allocation and, hence, for social and ecological stability. Although the impacts of climate change on glaciation and runoff have been addressed in previous work undertaken in the Tien Shan (known as the 'water tower of Central Asia'), a coherent, regional perspective of these findings has not been presented until now. In our study, we explore the range of changes in glaciation in different climatic regions of the Tien Shan based on existing data. We show that the majority of Tien Shan glaciers have experienced accelerated wasting since the mid-1970s and that glacier shrinkage is most pronounced in peripheral, lower-elevation ranges near the densely populated forelands, where summers are dry and where snow and glacial meltwater are essential for water availability. The annual glacier area shrinkage rates since the middle of the twentieth century are 0.38-0.76% per year in the outer ranges, 0.15-0.40% per year in the inner ranges and 0.05-0.31% per year in the eastern ranges. This regionally non-uniform response to climate change implies that glacier shrinkage is less severe in the continental inner ranges than in the more humid outer ranges. Glaciers in the inner ranges react with larger time lags to climate change, because accumulation and thus mass turnover of the mainly cold glaciers are relatively small. Moreover, shrinkage is especially pronounced on small or fragmented glaciers, which are widely represented in the outer regions. The relative insensitivity of glaciers in the inner ranges is further accentuated by their higher average altitude, as the equilibrium line altitude rises from 3'500-3'600 masl in the outer ranges to 4'400 masl in the inner ranges. For our study, we used glacier change assessments based both on direct data (mass balance measurements) and on indirect data (aerial and satellite imagery, topographic maps). The latter can be plagued by high uncertainties and considerable errors. For instance, glaciated area was partly overestimated in the Soviet Glacier catalogue (published in 1973, with data from the 1940s and 1950s), probably as a result of misinterpreted seasonal snowcover on aerial photographs. Studies using the Soviet Glacier catalogue as a reference are thus prone to over-emphasizing glacier shrinkage. A valuable alternative is the use of continued in situ mass balance and ice thickness measurements, but these are currently conducted for only a few glaciers in the Tien Shan mountains. Efforts should therefore be encouraged to ensure the continuation and re-establishment of mass balance measurements on reference glaciers, as is currently the case at the Karabatkak, Abramov and Golubin glaciers. Only on the basis of sound data can past glacier changes be assessed with high precision and future glacier shrinkage be estimated according to different climate scenarios. Moreover, the impact of snowcover changes, black carbon and debris cover on glacier degradation needs to be studied in more detail. Only with such model approaches, reflecting transient changes in climate, snowcover, glaciation and runoff, can appropriate adaptation and mitigation strategies be developed within a realistic time horizon.
Shrinkage Stresses Generated during Resin-Composite Applications: A Review
Schneider, Luis Felipe J.; Cavalcante, Larissa Maria; Silikas, Nick
2010-01-01
Many developments have been made in the field of resin composites for dental applications. However, shrinkage due to the polymerization process continues to be a major problem. The material's shrinkage, associated with the dynamic development of its elastic modulus, creates stresses within the material and at its interface with the tooth structure. As a consequence, marginal failure and subsequent secondary caries, marginal staining, restoration displacement, tooth fracture and/or post-operative sensitivity are clinical drawbacks of resin-composite applications. The aim of the current paper is to present an overview of the shrinkage stresses created during resin-composite applications, their consequences, and recent advances. The paper is based on the results of the many studies available in the literature. PMID:20948573
ERIC Educational Resources Information Center
Marcet, Ana; Perea, Manuel
2018-01-01
Previous research has shown that early in the word recognition process, there is some degree of uncertainty concerning letter identity and letter position. Here, we examined whether this uncertainty also extends to the mapping of letter features onto letters, as predicted by the Bayesian Reader (Norris & Kinoshita, 2012). Indeed, anecdotal…
2015-07-01
…parameters in a joint hypothesis space. We develop scalable branch and bound and pruning mechanisms for searching (at multiple resolutions) over source…
NASA Astrophysics Data System (ADS)
Chakraborty, A.; Goto, H.
2017-12-01
The 2011 off the Pacific coast of Tohoku earthquake caused severe damage in many areas far inside the mainland because of site amplification. The Furukawa district in Miyagi Prefecture, Japan recorded significant spatial differences in ground motion even at sub-kilometer scales, and the site responses in the damage zone far exceeded the levels in the hazard maps. One reason for this mismatch is that such maps follow only the mean value at the measurement locations, with no regard to data uncertainties, and are thus not always reliable. Our research objective is to develop a methodology that incorporates data uncertainties into the mapping and yields a reliable map. The methodology is based on a hierarchical Bayesian model of normally distributed site responses in space, where the mean (μ), site-specific variance (σ²) and between-sites variance (s²) are treated as unknowns with prior distributions. The observation data are artificially created site responses with varying means and variances for 150 seismic events across 50 locations in one-dimensional space. Spatially autocorrelated random effects were added to the mean (μ) using a conditionally autoregressive (CAR) prior. Inferences on the unknown parameters are made with Markov chain Monte Carlo methods from the posterior distribution. The goal is to find reliable estimates of μ that are sensitive to uncertainties. During initial trials, we observed that the τ (=1/s²) parameter of the CAR prior controls the estimation of μ. Using the constraint s = 1/(k×σ), five spatial models with varying k-values were created. We define reliability as measured by the model likelihood and propose the maximum-likelihood model as highly reliable. The model with maximum likelihood was selected using a 5-fold cross-validation technique. The results show that the maximum-likelihood model (μ*) follows the site-specific mean at low uncertainties and converges to the model mean at higher uncertainties (Fig. 1). This result is significant, as it successfully incorporates the effect of data uncertainties in mapping, and the approach can be applied to any research field that uses mapping techniques. The methodology is now being applied to real records from a very dense seismic network in the Furukawa district, Miyagi Prefecture, Japan to generate a reliable map of the site responses.
Gao, Ying; Cronin, Neil J; Pesola, Arto J; Finni, Taija
2016-10-01
Reducing sitting time by means of sit-stand workstations is an emerging trend, but further evidence is needed regarding their health benefits. This cross-sectional study compared work time muscle activity patterns and spinal shrinkage between office workers (aged 24-62, 58.3% female) who used either a sit-stand workstation (Sit-Stand group, n = 10) or a traditional sit workstation (Sit group, n = 14) for at least the past three months. During one typical workday, muscle inactivity and activity from quadriceps and hamstrings were monitored using electromyography shorts, and spinal shrinkage was measured using stadiometry before and after the workday. Compared with the Sit group, the Sit-Stand group had less muscle inactivity time (66.2 ± 17.1% vs. 80.9 ± 6.4%, p = 0.014) and more light muscle activity time (26.1 ± 12.3% vs. 14.9 ± 6.3%, p = 0.019) with no significant difference in spinal shrinkage (5.62 ± 2.75 mm vs. 6.11 ± 2.44 mm). This study provides evidence that working with sit-stand workstations can promote more light muscle activity time and less inactivity without negative effects on spinal shrinkage. Practitioner Summary: This cross-sectional study compared the effects of using a sit-stand workstation to a sit workstation on muscle activity patterns and spinal shrinkage in office workers. It provides evidence that working with a sit-stand workstation can promote more light muscle activity time and less inactivity without negative effects on spinal shrinkage.
Røthe Arnesen, Marius; Paulsen Hellebust, Taran; Malinen, Eirik
2017-03-01
Tumour shrinkage occurs during fractionated radiotherapy and is regulated by radiation induced cellular damage, repopulation of viable cells and clearance of dead cells. In some cases additional tumour shrinkage during external beam therapy may be beneficial, particularly for locally advanced cervical cancer where a small tumour volume may simplify and improve brachytherapy. In the current work, a mathematical tumour model is utilized to investigate how local dose escalation affects tumour shrinkage, focusing on implications for brachytherapy. The iterative two-compartment model is based upon linear-quadratic radiation response, a doubling time for viable cells and a half-time for clearance of dead cells. The model was individually fitted to clinical tumour volume data from fractionated radiotherapy of 25 cervical cancer patients. Three different fractionation patterns for dose escalation, all with an additional dose of 12.2 Gy, were simulated and compared to standard fractionation in terms of tumour shrinkage. An adaptive strategy where dose escalation was initiated after one week of treatment was also considered. For 22 out of 25 patients, a good model fit was achieved to the observed tumour shrinkage. A large degree of inter-patient variation was seen in predicted volume reduction following dose escalation. For the 10 best responding patients, a mean tumour volume reduction of 34 ± 3% (relative to standard treatment) was estimated at the time of brachytherapy. Timing of initiating dose escalation had a larger impact than the number of fractions applied. In conclusion, the model was found useful in evaluating the impact from dose escalation on tumour shrinkage. The results indicate that dose escalation could be conducted from the start of external beam radiotherapy in order to obtain additional tumour shrinkage before brachytherapy.
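A minimal sketch of an iterative two-compartment model of this kind is shown below; the parameter values (α, β, doubling time, clearance half-time) and the daily 2 Gy schedule are illustrative assumptions, not the fitted patient values from the study.

```python
import numpy as np

# Two compartments: viable cells V and dead cells D; relative volume ~ V + D.
alpha, beta = 0.3, 0.03      # Gy^-1, Gy^-2 (assumed linear-quadratic parameters)
T_double = 5.0               # days, doubling time of viable cells (assumed)
T_clear = 10.0               # days, half-time for clearance of dead cells (assumed)

def simulate(doses, dt=1.0):
    """Iterate the model over a fractionation schedule (one dose per time step)."""
    V, D, volume = 1.0, 0.0, []
    for d in doses:
        sf = np.exp(-(alpha * d + beta * d**2))   # surviving fraction per fraction
        killed = V * (1.0 - sf)
        V, D = V - killed, D + killed
        V *= 2.0 ** (dt / T_double)               # repopulation between fractions
        D *= 0.5 ** (dt / T_clear)                # clearance of dead cells
        volume.append(V + D)
    return np.array(volume)

standard = simulate([2.0] * 25)                   # 25 x 2 Gy
escalated = simulate([2.0 + 12.2 / 25] * 25)      # additional 12.2 Gy spread over all fractions
print(f"relative volume at fraction 25: standard {standard[-1]:.2f}, "
      f"escalated {escalated[-1]:.2f}")
```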
Development and Validation of a Constitutive Model for Dental Composites during the Curing Process
Wickham Kolstad, Lauren
Debonding is a critical failure mode of the composites used for dental restorations. Debonding of a dental composite can be assessed by comparing the shrinkage stress of the composite to the debonding strength of the adhesive that bonds it to the tooth surface. It is difficult to measure shrinkage stress experimentally. In this study, finite element analysis is used to predict the stress in the composite during cure. A new constitutive law is presented that will allow composite developers to evaluate composite shrinkage stress at early stages in the material development. Shrinkage stress and shrinkage strain experimental data were gathered for three dental resins, Z250, Z350, and P90. The experimental data were used to develop a constitutive model for the Young's modulus of the dental composite as a function of time during cure. A Maxwell model, a spring and dashpot in series, was used to simulate the composite. The compliance of the shrinkage stress device was also taken into account by including a spring in series with the Maxwell model. A coefficient of thermal expansion was also determined for internal loading of the composite by dividing shrinkage strain by time. Three FEA models are presented. A spring-disk model validates that the constitutive law is self-consistent. A quarter cuspal deflection model uses separate experimental data to verify that the constitutive law is valid. Finally, an axisymmetric tooth model is used to predict interfacial stresses in the composite. These stresses are compared to the debonding strength to check whether the composite debonds. The new constitutive model accurately predicted cuspal deflection data. Predictions for interfacial bond stress in the tooth model compare favorably with debonding characteristics observed in practice for dental resins.
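The stress build-up in a Maxwell element with a cure-dependent modulus can be sketched as below; the modulus ramp, dashpot viscosity, and imposed shrinkage-strain history are hypothetical placeholders rather than the fitted Z250/Z350/P90 parameters.

```python
import numpy as np

# Maxwell element: spring E(t) in series with a dashpot of viscosity eta.
# Total strain splits as eps = eps_spring + eps_dashpot, with
#   sigma = E(t) * eps_spring   and   d(eps_dashpot)/dt = sigma / eta,
# so, treating E as constant over each small step,
#   d(sigma)/dt = E(t) * (d(eps)/dt - sigma / eta).

t = np.linspace(0.0, 100.0, 10001)                # s
dt = t[1] - t[0]
E = 10e9 * (1.0 - np.exp(-t / 20.0))              # Pa, modulus developing during cure (assumed)
eta = 5e11                                        # Pa*s, dashpot viscosity (assumed)
eps = -0.02 * (1.0 - np.exp(-t / 15.0))           # imposed shrinkage strain history (assumed)
deps = np.gradient(eps, dt)

sigma = np.zeros_like(t)
for i in range(1, len(t)):                        # explicit Euler integration
    dsigma = E[i] * (deps[i] - sigma[i - 1] / eta)
    sigma[i] = sigma[i - 1] + dsigma * dt

print(f"peak stress magnitude: {abs(sigma).max() / 1e6:.1f} MPa")
```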
Persson, N.; Ghisletta, P.; Dahle, C.L.; Bender, A.R.; Yang, Y.; Yuan, P.; Daugherty, A.M.; Raz, N.
2014-01-01
We examined regional changes in brain volume in healthy adults (N = 167, age 19-79 years at baseline; N = 90 at follow-up) over approximately two years. With latent change score models, we evaluated mean change and individual differences in rates of change in 10 anatomically defined and manually traced regions of interest (ROIs): lateral prefrontal cortex (LPFC), orbital frontal cortex (OF), prefrontal white matter (PFw), hippocampus (HC), parahippocampal gyrus (PhG), caudate nucleus (Cd), putamen (Pt), insula (In), cerebellar hemispheres (CbH), and primary visual cortex (VC). Significant mean shrinkage was observed in the HC, CbH, In, OF, and the PhG, and individual differences in change were noted in all regions except the OF. Pro-inflammatory genetic variants mediated shrinkage in PhG and CbH. Carriers of two T alleles of the interleukin-1β (IL-1βC-511T, rs16944) and a T allele of the methylenetetrahydrofolate reductase (MTHFRC677T, rs1801133) polymorphisms showed increased PhG shrinkage. No effects of a pro-inflammatory polymorphism for C-reactive protein (CRP-286C>A>T, rs3091244) or the apolipoprotein E (APOE) ε4 allele were noted. These results replicate the pattern of brain shrinkage observed in previous studies, with the notable exception of the LPFC, thus casting doubt on the unique importance of the prefrontal cortex in aging. Larger baseline volumes of CbH and In were associated with increased shrinkage, in conflict with the brain reserve hypothesis. Contrary to previous reports, we observed no significant linear effects of age and hypertension on regional brain shrinkage. Our findings warrant further investigation of the effects of neuroinflammation on structural brain change throughout the lifespan.
Does expanded polytetrafluoroethylene mesh really shrink after laparoscopic ventral hernia repair?
Carter, P R; LeBlanc, K A; Hausmann, M G; Whitaker, J M; Rhynes, V K; Kleinpeter, K P; Allain, B W
2012-06-01
The shrinkage of mesh has been cited as a possible explanation for hernia recurrence. Expanded polytetrafluoroethylene (ePTFE) is unique in that it can be visualized on computed tomography (CT). Some animal studies have shown a greater than 40% rate of contraction of ePTFE; however, very few human studies have been performed. A total of 815 laparoscopic incisional/ventral hernia (LIVH) repairs were performed by a single surgical group. DualMesh Plus (ePTFE) (WL Gore & Associates, Newark, DE) was placed in the majority of these patients using both transfascial sutures and tack fixation. Fifty-eight patients had postoperative CTs of the abdomen and pelvis with ePTFE and a known transverse diameter of the implanted mesh. The prosthesis was measured on the CT using the AquariusNet software program (TeraRecon, San Mateo, CA), which outlines the mesh and calculates the total length. Data were collected regarding the original mesh size, known linear dimension of mesh, seroma formation, and time interval since mesh implantation in months. The mean shrinkage rate was 6.7%. The duration of implantation ranged from 6 weeks to 78 months, with a median of 15 months. Seroma was seen in 8.6% (5) of patients. No relationship was identified between the percentage of shrinkage and the original mesh size (P = 0.78), duration of time implanted (P = 0.57), or seroma formation (P = 0.074). In 27.5% (16) of patients, no shrinkage of mesh was identified. Of the patients who did experience mesh shrinkage, the range of shrinkage was 2.6-25%. Our results are markedly different from animal studies and show that ePTFE has minimal shrinkage after LIVH repair. The use of transfascial sutures in addition to tack fixation may have implications for mesh contraction rates.
Hankins, Amanda D; Hatch, Robert H; Benson, Jarred H; Blen, Bernard J; Tantbirojn, Daranee; Versluis, Antheunis
2014-04-01
A nanofilled, resin-based light-cured coating (G-Coat Plus, GC America, Alsip, Ill.) may reduce water absorption by glass ionomers. The authors investigated this possibility by measuring cuspal flexure caused by swelling of glass ionomer-restored teeth. The authors cut large mesio-occlusodistal slots (4-millimeter wide, 4-mm deep) in 12 extracted premolars and restored them with a glass ionomer cement (Fuji IX GP Extra, GC America). Six teeth were coated, and the other six were uncoated controls. The authors digitized the teeth in three dimensions by using an optical scanner after preparation and restoration and during an eight-week storage in water. They calculated cuspal flexure and analyzed the results by using an analysis of variance and Student-Newman-Keuls post hoc tests (significance level .05). They used dye penetration along the interface to verify bonding. Inward cuspal flexure indicated restoration shrinkage. Coated restorations had significantly higher flexure (mean [standard deviation], -11.9 [3.5] micrometers) than did restorations without coating (-7.3 [1.5] μm). Flexure in both groups decreased significantly (P < .05) during water storage and, after eight weeks, it changed to expansion for uncoated control restorations. Dye penetration along the interfaces was not significant, which ruled out debonding as the cause of cuspal relaxation. Teeth restored with glass ionomer cement exhibited shrinkage, as seen by inward cuspal flexure. The effect of the protective coating on water absorption was evident in the slower shrinkage compensation. The study results show that teeth restored with glass ionomers exhibited setting shrinkage that deformed tooth cusps. Water absorption compensated for the shrinkage. Although the coating may be beneficial for reducing water absorption, it also slows the shrinkage compensation rate (that is, the rate that hygroscopic expansion compensates for cuspal flexure from shrinkage).
Henzel, Martin; Hamm, Klaus; Sitter, Helmut; Gross, Markus W; Surber, Gunnar; Kleinert, Gabriele; Engenhart-Cabillic, Rita
2009-09-01
Stereotactic radiosurgery (SRS) and fractionated stereotactic radiotherapy (SRT) offer high local control (LC) rates (> 90%). This study aimed to evaluate three-dimensional (3-D) tumor volume (TV) shrinkage and to assess quality of life (QoL) after SRS/SRT. From 1999 to 2005, 35/74 patients were treated with SRS and 39/74 with SRT. Median age was 60 years. Treatment was delivered by a linear accelerator. The median dose was 13 Gy in a single fraction (SRS) or 54 Gy in total (SRT). Patients were followed up ≥ 12 months after SRS/SRT. LC and toxicity were evaluated by clinical examinations and magnetic resonance imaging. 3-D TV shrinkage was evaluated with the planning system. QoL was assessed using the Short Form-36 questionnaire. Median follow-up was 50/36 months (SRS/SRT). Actuarial 5-year freedom from progression/overall survival was 88.1%/100% (SRS) and 87.5%/87.2% (SRT). TV shrinkage was 15.1%/40.7% (SRS/SRT; p = 0.01). Single dose (< 13 Gy) was the only determinant factor for TV shrinkage after SRS (p = 0.001). Age, gender, initial TV, and previous operations did not affect TV shrinkage. Acute or late toxicity (≥ grade 3) was never seen. Concerning QoL, no significant differences were observed after SRS/SRT. Previous operations and gender did not affect QoL (p > 0.05). Compared with the German normal population, patients had worse values for all domains except mental health. TV shrinkage was significantly higher after SRT than after SRS. Main symptoms were not affected by SRS/SRT. Retrospectively, QoL was affected neither by SRS nor by SRT.
Shrinkage and growth compensation in common sunflowers: refining estimates of damage
Sedgwick, James A.; Oldemeyer, John L.; Swenson, Elizabeth L.
1986-01-01
Shrinkage and growth compensation of artificially damaged common sunflowers (Helianthus annuus) were studied in central North Dakota during 1981-1982 in an effort to increase the accuracy of estimates of blackbird damage to sunflowers. In both years, as plants matured, damaged areas on seedheads shrank at a greater rate than the sunflower heads themselves. This differential shrinkage resulted in an underestimation of the area damaged. Sunflower head and damaged-area shrinkage varied widely by time and degree of damage and by size of the seedhead damaged. Because variation in shrinkage by time of damage was so large, predicting when blackbird damage occurs may be the most important factor in estimating seed loss. Yield per occupied seed area was greater (P < 0.05) for damaged than undamaged heads and tended to increase as the degree of damage inflicted increased, indicating that growth compensation was occurring in response to lost seeds. Yields of undamaged seeds in seedheads damaged during early seed development were higher than those of heads damaged later. This suggests that there was a period of maximal response to damage when plants were best able to redirect growth to seeds remaining in the head. Sunflowers appear to be able to compensate for damage of ≤ 15% of the total head area. Estimates of damage can be improved by applying empirical results of differential shrinkage and growth compensation.
Ghavami-Lahiji, Mehrsima; Hooshmand, Tabassom
2017-01-01
Resin-based composites are commonly used restorative materials in dentistry. Such tooth-colored restorations can adhere to the dental tissues. One drawback is that polymerization shrinkage and the stresses induced during the curing procedure are inherent properties of resin composite materials that might impair their performance. This review focuses on the significant developments of laboratory tools for measuring the polymerization shrinkage and stresses of dental resin-based materials during polymerization. An electronic search of publications from January 1977 to July 2016 was made using the ScienceDirect, PubMed, Medline, and Google Scholar databases. The search included only English-language articles. Only studies that used laboratory methods to evaluate the amount of polymerization shrinkage and/or stresses of dental resin-based materials during polymerization were selected. The results indicated that various techniques have been introduced with different mechanical/physical bases. Besides, there are factors that may contribute to the differences between the various methods in measuring the shrinkage and stresses of resin composites. The search for an ideal and standard apparatus for measuring the shrinkage stress and volumetric polymerization shrinkage of resin-based materials in dentistry is still ongoing. Researchers and clinicians must be aware of the differences between analytical methods to make proper interpretations and indications of each technique relevant to a clinical situation.
Blind Source Parameters for Performance Evaluation of Despeckling Filters.
Biradar, Nagashettappa; Dewal, M L; Rohit, ManojKumar; Gowre, Sanjaykumar; Gundge, Yogesh
2016-01-01
The speckle noise is inherent to transthoracic echocardiographic images. A standard noise-free reference echocardiographic image does not exist. The evaluation of filters based on the traditional parameters such as peak signal-to-noise ratio, mean square error, and structural similarity index may not reflect the true filter performance on echocardiographic images. Therefore, the performance of despeckling can be evaluated using blind assessment metrics like the speckle suppression index, speckle suppression and mean preservation index (SMPI), and beta metric. The need for noise-free reference image is overcome using these three parameters. This paper presents a comprehensive analysis and evaluation of eleven types of despeckling filters for echocardiographic images in terms of blind and traditional performance parameters along with clinical validation. The noise is effectively suppressed using the logarithmic neighborhood shrinkage (NeighShrink) embedded with Stein's unbiased risk estimation (SURE). The SMPI is three times more effective compared to the wavelet based generalized likelihood estimation approach. The quantitative evaluation and clinical validation reveal that the filters such as the nonlocal mean, posterior sampling based Bayesian estimation, hybrid median, and probabilistic patch based filters are acceptable whereas median, anisotropic diffusion, fuzzy, and Ripplet nonlinear approximation filters have limited applications for echocardiographic images.
Paternal occupation and birth defects: findings from the National Birth Defects Prevention Study.
Desrosiers, Tania A; Herring, Amy H; Shapira, Stuart K; Hooiveld, Mariëtte; Luben, Tom J; Herdt-Losavio, Michele L; Lin, Shao; Olshan, Andrew F
2012-08-01
Several epidemiological studies have suggested that certain paternal occupations may be associated with an increased prevalence of birth defects in offspring. Using data from the National Birth Defects Prevention Study, the authors investigated the association between paternal occupation and birth defects in a case-control study of cases comprising over 60 different types of birth defects (n=9998) and non-malformed controls (n=4066) with dates of delivery between 1997 and 2004. Using paternal occupational histories reported by mothers via telephone interview, jobs were systematically classified into 63 groups based on shared exposure profiles within occupation and industry. Data were analysed using Bayesian logistic regression with a hierarchical prior for dependent shrinkage to stabilise estimation with sparse data. Several occupations were associated with an increased prevalence of various birth defect categories, including mathematical, physical and computer scientists; artists; photographers and photo processors; food service workers; landscapers and groundskeepers; hairdressers and cosmetologists; office and administrative support workers; sawmill workers; petroleum and gas workers; chemical workers; printers; material moving equipment operators; and motor vehicle operators. Findings from this study might be used to identify specific occupations worthy of further investigation and to generate hypotheses about chemical or physical exposures common to such occupations.
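In the spirit of the hierarchical shrinkage used above, the sketch below shrinks noisy per-occupation log odds ratios toward the mean of their exposure-based group; the counts and grouping are fabricated for illustration, and the precision-weighted formula is a generic empirical-Bayes device, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated 2x2 counts per occupation: exposed/unexposed cases and controls.
n_occ = 12
groups = np.repeat([0, 1, 2, 3], 3)                    # 4 exposure-profile groups of 3
case_exp = rng.integers(5, 60, n_occ)
case_unexp = rng.integers(200, 400, n_occ)
ctrl_exp = rng.integers(5, 60, n_occ)
ctrl_unexp = rng.integers(200, 400, n_occ)

# Raw log odds ratios and their approximate variances (1/a + 1/b + 1/c + 1/d).
log_or = np.log((case_exp * ctrl_unexp) / (case_unexp * ctrl_exp))
var = 1.0 / case_exp + 1.0 / case_unexp + 1.0 / ctrl_exp + 1.0 / ctrl_unexp

tau2 = 0.05                                            # assumed between-occupation variance
shrunk = np.empty(n_occ)
for g in np.unique(groups):
    idx = groups == g
    group_mean = np.average(log_or[idx], weights=1.0 / var[idx])
    w = (1.0 / var[idx]) / (1.0 / var[idx] + 1.0 / tau2)   # weight on the raw estimate
    shrunk[idx] = w * log_or[idx] + (1.0 - w) * group_mean

for raw, s in zip(log_or, shrunk):
    print(f"raw OR {np.exp(raw):5.2f} -> shrunk OR {np.exp(s):5.2f}")
```

Occupations with sparse counts (large variance) are pulled most strongly toward their group mean, which is how borrowing strength stabilises estimates with sparse data.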
An investigation of CO2 laser scleral buckling using moiré interferometry.
Maswadi, Saher M; Dyer, Peter E; Verma, Dinesh; Jalabi, Wadah; Dave, Dinesh
2002-01-01
To demonstrate the suitability of moiré interferometry to assess and quantify laser-induced shrinkage of scleral collagen for buckling procedures. Scleral buckling of human cadaver eyes was investigated using a Coherent Ultrapulse CO2 laser. Projection moiré interferometry was employed to determine the out-of-plane displacement produced by laser exposure, and in-situ optical microscopy of reference markers on the eye was used to measure in-plane shrinkage. Measurements based on moiré interferometry allow a three-dimensional view of shape changes in the eye surface as laser treatment proceeds. Out-of-plane displacement reaches up to 1.5 mm with a single laser spot exposure. In-plane shrinkage reached a maximum of around 30%, which is similar to that reported by Sasoh et al (Ophthalmic Surg Lasers. 1998;29:410) for a Tm:YAG laser. The moiré technique is found to be suitable for quantifying the effects of CO2 laser scleral shrinkage and buckling. This can be further developed to provide a standardized method for experimental investigations of other laser sources for scleral shrinkage.
Danielson, Christian; Mehrnezhad, Ali; YekrangSafakar, Ashkan; Park, Kidong
2017-06-14
Self-folding or micro-origami technologies are actively investigated as a novel manufacturing process to fabricate three-dimensional macro/micro-structures. In this paper, we present a simple process to produce a self-folding structure with a biaxially oriented polystyrene sheet (BOPS) or Shrinky Dinks. A BOPS sheet is known to shrink to one-third of its original size in plane, when it is heated above 160 °C. A grid pattern is engraved on one side of the BOPS film with a laser engraver to decrease the thermal shrinkage of the engraved side. The thermal shrinkage of the non-engraved side remains the same and this unbalanced thermal shrinkage causes folding of the structure as the structure shrinks at high temperature. We investigated the self-folding mechanism and characterized how the grid geometry, the grid size, and the power of the laser engraver affect the bending curvature. The developed fabrication process to locally modulate thermomechanical properties of the material by engraving the grid pattern and the demonstrated design methodology to harness the unbalanced thermal shrinkage can be applied to develop complicated self-folding macro/micro structures.
Radiation Source Mapping with Bayesian Inverse Methods
Hykes, Joshua M.; Azmy, Yousry Y.
2017-03-22
In this work, we present a method to map the spectral and spatial distributions of radioactive sources using a limited number of detectors. Locating and identifying radioactive materials is important for border monitoring, in accounting for special nuclear material in processing facilities, and in cleanup operations following a radioactive material spill. Most methods to analyze these types of problems make restrictive assumptions about the distribution of the source. In contrast, the source mapping method presented here allows an arbitrary three-dimensional distribution in space and a gamma peak distribution in energy. To apply the method, the problem is cast as an inverse problem where the system's geometry and material composition are known and fixed, while the radiation source distribution is sought. A probabilistic Bayesian approach is used to solve the resulting inverse problem since the system of equations is ill-posed. The posterior is maximized with a Newton optimization method. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint, discrete ordinates flux solutions, obtained in this work by the Denovo code, is required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes form the linear mapping from the state space to the response space. The test of the method's success is simultaneously locating a set of 137Cs and 60Co gamma sources in a room. This test problem is solved using experimental measurements that we collected for this purpose. Because of the weak sources available for use in the experiment, some of the expected photopeaks were not distinguishable from the Compton continuum. However, by supplanting 14 flawed measurements (out of a total of 69) with synthetic responses computed by MCNP, the proof-of-principle source mapping was successful. The locations of the sources were predicted within 25 cm for two of the sources and 90 cm for the third, in a room with an ~4 m × 4 m floor plan. Finally, the predicted source intensities were within a factor of ten of their true value.
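A toy version of the MAP step can be sketched as follows: a known linear mapping A (standing in for the adjoint-flux response matrix) takes a source vector to expected detector counts, and a penalized Poisson log-likelihood is maximized under nonnegativity; the matrix, prior weight, and data here are synthetic assumptions, and an L-BFGS-B optimizer stands in for the Newton method named above.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

n_src, n_det = 30, 12
A = rng.uniform(0.0, 1.0, (n_det, n_src))     # stand-in response matrix (adjoint fluxes)
true_src = np.zeros(n_src)
true_src[[5, 21]] = 40.0                      # two point-like sources
counts = rng.poisson(A @ true_src + 1.0)      # detector data with background rate 1.0

lam = 0.1                                     # assumed strength of the quadratic prior

def neg_log_posterior(s):
    mean = A @ s + 1.0
    # Poisson negative log-likelihood plus a Gaussian prior on the source vector.
    return np.sum(mean - counts * np.log(mean)) + 0.5 * lam * np.sum(s**2)

res = minimize(neg_log_posterior, x0=np.ones(n_src),
               bounds=[(0.0, None)] * n_src, method="L-BFGS-B")
print("recovered peak source bins:", np.argsort(res.x)[-2:])
```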
Gerber, Brian D.; Kendall, William L.; Hooten, Mevin B.; Dubovsky, James A.; Drewien, Roderick C.
2015-01-01
Prediction is fundamental to scientific enquiry and application; however, ecologists tend to favour explanatory modelling. We discuss a predictive modelling framework to evaluate ecological hypotheses and to explore novel/unobserved environmental scenarios to assist conservation and management decision-makers. We apply this framework to develop an optimal predictive model for juvenile (<1 year old) sandhill crane Grus canadensis recruitment of the Rocky Mountain Population (RMP). We consider spatial climate predictors motivated by hypotheses of how drought across multiple time-scales and spring/summer weather affects recruitment. Our predictive modelling framework focuses on developing a single model that includes all relevant predictor variables, regardless of collinearity. This model is then optimized for prediction by controlling model complexity using a data-driven approach that marginalizes or removes irrelevant predictors from the model. Specifically, we highlight two approaches of statistical regularization, Bayesian least absolute shrinkage and selection operator (LASSO) and ridge regression. Our optimal predictive Bayesian LASSO and ridge regression models were similar and on average 37% superior in predictive accuracy to an explanatory modelling approach. Our predictive models confirmed a priori hypotheses that drought and cold summers negatively affect juvenile recruitment in the RMP. The effects of long-term drought can be alleviated by short-term wet spring–summer months; however, the alleviation of long-term drought has a much greater positive effect on juvenile recruitment. The number of freezing days and snowpack during the summer months can also negatively affect recruitment, while spring snowpack has a positive effect. Breeding habitat, mediated through climate, is a limiting factor on population growth of sandhill cranes in the RMP, which could become more limiting with a changing climate (i.e. increased drought). These effects are likely not unique to cranes. The alteration of hydrological patterns and water levels by drought may impact many migratory, wetland nesting birds in the Rocky Mountains and beyond. Generalizable predictive models (trained by out-of-sample fit and based on ecological hypotheses) are needed by conservation and management decision-makers. Statistical regularization improves predictions and provides a general framework for fitting models with a large number of predictors, even those with collinearity, to simultaneously identify an optimal predictive model while conducting rigorous Bayesian model selection. Our framework is important for understanding population dynamics under a changing climate and has direct applications for making harvest and habitat management decisions.
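A compact sketch of the regularized-prediction idea, using scikit-learn's LassoCV and RidgeCV to tune the penalty by cross-validation on synthetic, deliberately collinear climate-style predictors; the data and variable names are invented, and the study's fully Bayesian LASSO/ridge would instead place shrinkage priors on the coefficients.

```python
import numpy as np
from sklearn.linear_model import LassoCV, RidgeCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Synthetic, collinear predictors standing in for drought indices at several
# time-scales plus spring/summer weather variables.
n = 120
drought_long = rng.normal(size=n)
drought_short = 0.8 * drought_long + 0.2 * rng.normal(size=n)   # collinear with the above
freeze_days = rng.normal(size=n)
snowpack_spring = rng.normal(size=n)
X = np.column_stack([drought_long, drought_short, freeze_days, snowpack_spring])
y = (-0.6 * drought_long - 0.3 * freeze_days + 0.4 * snowpack_spring
     + rng.normal(scale=0.5, size=n))                            # recruitment index

X = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(X, y)       # L1: can zero out irrelevant predictors
ridge = RidgeCV(cv=5).fit(X, y)       # L2: shrinks all coefficients smoothly

print("lasso coefficients:", np.round(lasso.coef_, 2))
print("ridge coefficients:", np.round(ridge.coef_, 2))
```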
Quantitative trait nucleotide analysis using Bayesian model selection.
Blangero, John; Goring, Harald H H; Kent, Jack W; Williams, Jeff T; Peterson, Charles P; Almasy, Laura; Dyer, Thomas D
2005-10-01
Although much attention has been given to statistical genetic methods for the initial localization and fine mapping of quantitative trait loci (QTLs), little methodological work has been done to date on the problem of statistically identifying the most likely functional polymorphisms using sequence data. In this paper we provide a general statistical genetic framework, called Bayesian quantitative trait nucleotide (BQTN) analysis, for assessing the likely functional status of genetic variants. The approach requires the initial enumeration of all genetic variants in a set of resequenced individuals. These polymorphisms are then typed in a large number of individuals (potentially in families), and marker variation is related to quantitative phenotypic variation using Bayesian model selection and averaging. For each sequence variant a posterior probability of effect is obtained and can be used to prioritize additional molecular functional experiments. An example of this quantitative nucleotide analysis is provided using the GAW12 simulated data. The results show that the BQTN method may be useful for choosing the most likely functional variants within a gene (or set of genes). We also include instructions on how to use our computer program, SOLAR, for association analysis and BQTN analysis.
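A generic model-selection sketch in the spirit of BQTN: enumerate regressions over subsets of variant genotypes, convert BIC values into approximate posterior model probabilities, and sum them into a posterior probability of effect for each variant; the simulated genotypes and the BIC approximation are illustrative stand-ins for the full machinery implemented in SOLAR.

```python
import itertools
import numpy as np

rng = np.random.default_rng(6)

n, n_var = 300, 4
G = rng.integers(0, 3, size=(n, n_var)).astype(float)   # genotypes coded 0/1/2
y = 0.5 * G[:, 1] + rng.normal(size=n)                   # variant 1 is functional

def bic(X, y):
    """BIC of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X]) if X.shape[1] else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + X1.shape[1] * np.log(len(y))

# Enumerate all subsets of variants; approximate P(model | data) by BIC weights.
models = [m for k in range(n_var + 1) for m in itertools.combinations(range(n_var), k)]
bics = np.array([bic(G[:, list(m)], y) for m in models])
w = np.exp(-(bics - bics.min()) / 2.0)
w /= w.sum()

# Posterior probability of effect: total weight of models containing each variant.
for v in range(n_var):
    p = sum(wi for wi, m in zip(w, models) if v in m)
    print(f"variant {v}: posterior probability of effect = {p:.2f}")
```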
Fuzzy Bayesian Network-Bow-Tie Analysis of Gas Leakage during Biomass Gasification
Yan, Fang; Xu, Kaili; Yao, Xiwen; Li, Yang
2016-01-01
Biomass gasification technology has been rapidly developed recently, but fire and poisoning accidents caused by gas leakage restrict the development and promotion of biomass gasification. Therefore, probabilistic safety assessment (PSA) is necessary for biomass gasification systems. Accordingly, Bayesian network-bow-tie (BN-bow-tie) analysis was proposed by mapping bow-tie analysis into a Bayesian network (BN). The causes of gas leakage and the accidents triggered by gas leakage can be obtained by bow-tie analysis, and the BN was used to identify the critical nodes of accidents by introducing three corresponding importance measures. Meanwhile, occurrence probabilities of failure are needed in PSA. In view of the insufficient failure data for biomass gasification, occurrence probabilities of failure that cannot be obtained from standard reliability data sources were determined by fuzzy methods based on expert judgment. An improved approach that considers expert weighting to aggregate fuzzy numbers, including triangular and trapezoidal numbers, was proposed, and the occurrence probabilities of failure were obtained. Finally, safety measures were indicated based on the identified critical nodes. The theoretical occurrence probabilities in one year of gas leakage and of the accidents caused by it were reduced to 1/10.3 of their original values by these safety measures.
Stewart, G B; Mengersen, K; Meader, N
2014-03-01
Bayesian networks (BNs) are tools for representing expert knowledge or evidence. They are especially useful for synthesising evidence or belief concerning a complex intervention, assessing the sensitivity of outcomes to different situations or contextual frameworks and framing decision problems that involve alternative types of intervention. Bayesian networks are useful extensions to logic maps when initiating a review or to facilitate synthesis and bridge the gap between evidence acquisition and decision-making. Formal elicitation techniques allow development of BNs on the basis of expert opinion. Such applications are useful alternatives to 'empty' reviews, which identify knowledge gaps but fail to support decision-making. Where review evidence exists, it can inform the development of a BN. We illustrate the construction of a BN using a motivating example that demonstrates how BNs can ensure coherence, transparently structure the problem addressed by a complex intervention and assess sensitivity to context, all of which are critical components of robust reviews of complex interventions. We suggest that BNs should be utilised to routinely synthesise reviews of complex interventions or empty reviews where decisions must be made despite poor evidence.
Moradi, Milad; Ghadiri, Nasser
2018-01-01
Automatic text summarization tools help users in the biomedical domain to acquire their intended information from various textual resources more efficiently. Some biomedical text summarization systems put the basis of their sentence selection approach on the frequency of concepts extracted from the input text. However, it seems that exploring other measures rather than the raw frequency for identifying valuable contents within an input document, or considering correlations existing between concepts, may be more useful for this type of summarization. In this paper, we describe a Bayesian summarization method for biomedical text documents. The Bayesian summarizer initially maps the input text to the Unified Medical Language System (UMLS) concepts; then it selects the important ones to be used as classification features. We introduce six different feature selection approaches to identify the most important concepts of the text and select the most informative contents according to the distribution of these concepts. We show that with the use of an appropriate feature selection approach, the Bayesian summarizer can improve the performance of biomedical summarization. Using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) toolkit, we perform extensive evaluations on a corpus of scientific papers in the biomedical domain. The results show that when the Bayesian summarizer utilizes the feature selection methods that do not use the raw frequency, it can outperform the biomedical summarizers that rely on the frequency of concepts, domain-independent and baseline methods.
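The concept-distribution scoring idea can be sketched minimally as below: sentences are mapped to concept sets (here crude word tokens stand in for UMLS concepts), each concept gets a weight from a chosen informativeness measure rather than its raw frequency, and the top-scoring sentences form the summary; the weighting and tokenization are illustrative choices, not the paper's classifier.

```python
from collections import Counter
import math
import re

def summarize(text, k=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Stand-in for UMLS concept mapping: lower-cased word tokens.
    concepts_per_sentence = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
    freq = Counter(c for cs in concepts_per_sentence for c in cs)
    total = sum(freq.values())
    # One possible informativeness measure: -log probability instead of raw frequency.
    weight = {c: -math.log(n / total) for c, n in freq.items()}
    scores = [sum(weight[c] for c in cs) / max(len(cs), 1)
              for cs in concepts_per_sentence]
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return " ".join(sentences[i] for i in sorted(top))

doc = ("Gene expression changed under hypoxia. Hypoxia induced HIF1A expression. "
       "The weather was nice. HIF1A regulates angiogenesis in tumors.")
print(summarize(doc, k=2))
```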
Exoplanet Biosignatures: A Framework for Their Assessment.
Catling, David C; Krissansen-Totton, Joshua; Kiang, Nancy Y; Crisp, David; Robinson, Tyler D; DasSarma, Shiladitya; Rushby, Andrew J; Del Genio, Anthony; Bains, William; Domagal-Goldman, Shawn
2018-04-20
Finding life on exoplanets from telescopic observations is an ultimate goal of exoplanet science. Life produces gases and other substances, such as pigments, which can have distinct spectral or photometric signatures. Whether or not life is found with future data must be expressed with probabilities, requiring a framework of biosignature assessment. We present a framework in which we advocate using biogeochemical "Exo-Earth System" models to simulate potential biosignatures in spectra or photometry. Given actual observations, simulations are used to find the Bayesian likelihoods of those data occurring for scenarios with and without life. The latter includes "false positives" wherein abiotic sources mimic biosignatures. Prior knowledge of factors influencing planetary inhabitation, including previous observations, is combined with the likelihoods to give the Bayesian posterior probability of life existing on a given exoplanet. Four components of observation and analysis are necessary. (1) Characterization of stellar (e.g., age and spectrum) and exoplanetary system properties, including "external" exoplanet parameters (e.g., mass and radius), to determine an exoplanet's suitability for life. (2) Characterization of "internal" exoplanet parameters (e.g., climate) to evaluate habitability. (3) Assessment of potential biosignatures within the environmental context (components 1-2), including corroborating evidence. (4) Exclusion of false positives. We propose that resulting posterior Bayesian probabilities of life's existence map to five confidence levels, ranging from "very likely" (90-100%) to "very unlikely" (<10%) inhabited. Key Words: Bayesian statistics-Biosignatures-Drake equation-Exoplanets-Habitability-Planetary science.
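The posterior calculation at the heart of this framework is Bayes' rule over the two hypotheses; the sketch below maps a posterior onto five confidence bands (only the outer two band names and thresholds are given in the abstract, so the intermediate labels and cut points, and all numerical inputs, are made-up examples).

```python
def posterior_life(prior_life, p_data_given_life, p_data_given_no_life):
    """Bayes' rule for the two-hypothesis case: life vs. no life."""
    num = p_data_given_life * prior_life
    den = num + p_data_given_no_life * (1.0 - prior_life)
    return num / den

def confidence_level(p):
    # Five bands; the outer two follow the abstract, the rest are assumed.
    if p >= 0.9:  return "very likely inhabited"
    if p >= 0.7:  return "likely inhabited"
    if p >= 0.3:  return "inconclusive"
    if p >= 0.1:  return "unlikely inhabited"
    return "very unlikely inhabited"

# Example: a biosignature gas is detected; abiotic 'false positive' sources are
# judged 5x less likely to produce the data than a biosphere would be.
p = posterior_life(prior_life=0.3, p_data_given_life=0.5, p_data_given_no_life=0.1)
print(f"posterior = {p:.2f} -> {confidence_level(p)}")
```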
Minimum risk wavelet shrinkage operator for Poisson image denoising.
Cheng, Wu; Hirakawa, Keigo
2015-05-01
The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients--the modeling of coefficients is enabled by the Skellam distribution analysis. We extend these results by solving for shrinkage operators for Skellam that minimizes the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with minimum attainable L2 error.
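For intuition, the sketch below applies a single-level Haar transform and a generic soft-threshold shrinkage to a Poisson-noisy signal; the Skellam-optimal minimum-risk operator derived in the paper would replace the fixed soft threshold used here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Piecewise-constant intensity observed under Poisson noise.
intensity = np.repeat([5.0, 20.0, 10.0, 30.0], 64)
y = rng.poisson(intensity).astype(float)

# Single-level Haar analysis: scaling (sums) and wavelet (differences) coefficients.
s = (y[0::2] + y[1::2]) / np.sqrt(2.0)
d = (y[0::2] - y[1::2]) / np.sqrt(2.0)

# Generic soft-threshold shrinkage of the detail coefficients.
thr = 3.0                                       # assumed threshold
d_shrunk = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)

# Synthesis (inverse single-level Haar).
denoised = np.empty_like(y)
denoised[0::2] = (s + d_shrunk) / np.sqrt(2.0)
denoised[1::2] = (s - d_shrunk) / np.sqrt(2.0)

print(f"L2 error noisy:    {np.linalg.norm(y - intensity):.1f}")
print(f"L2 error denoised: {np.linalg.norm(denoised - intensity):.1f}")
```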
Glacier change and glacial lake outburst flood risk in the Bolivian Andes
NASA Astrophysics Data System (ADS)
Kougkoulos, Ioannis; Cook, Simon J.; Edwards, Laura A.; Dortch, Jason; Hoffmann, Dirk
2017-04-01
Glaciers of the Bolivian Andes represent an important water resource for Andean cities and mountain communities, yet relatively little work has assessed changes in their extent over recent decades. In many mountain regions, glacier recession has been accompanied by the development of proglacial lakes, which can pose a glacial lake outburst flood (GLOF) hazard. However, no studies have assessed the development of such lakes in Bolivia despite recent GLOF incidents here. Our mapping from satellite imagery reveals an overall areal shrinkage of 228.1 ± 22.8 km2 (43.1%) across the Bolivian Cordillera Oriental between 1986 and 2014. Shrinkage was greatest in the Tres Cruces region (47.3%), followed by the Cordillera Apolobamba (43.1%) and Cordillera Real (41.9%). A growing number of proglacial lakes have developed as glaciers have receded, in accordance with trends in most other deglaciating mountain ranges, although the number of ice-contact lakes has decreased. The reasons for this are unclear, but the pattern of lake change has varied significantly throughout the study period, suggesting that monitoring of future lake development is required as ice continues to recede. Ultimately, we use our 2014 database of proglacial lakes to assess GLOF risk across the Bolivian Andes. We identify 25 lakes that pose a potential GLOF threat to downstream communities and infrastructure. We suggest that further studies of potential GLOF impacts are urgently required.
A model for shrinkage strain in photo polymerization of dental composites.
Petrovic, Ljubomir M; Atanackovic, Teodor M
2008-04-01
We formulate a new model for the shrinkage strain developed during photo polymerization in dental composites. The model is based on a diffusion-type fractional order equation, since it has been proved that the polymerization reaction is diffusion controlled (Atai M, Watts DC. A new kinetic model for the photo polymerization shrinkage-strain of dental composites and resin-monomers. Dent Mater 2006;22:785-91). Our model strongly confirms the observations and experimental results of Atai and Watts. In that work the shrinkage strain is modeled by a nonlinear differential equation that must be solved numerically. In our approach, we use a linear fractional order differential equation to describe the strain rate due to photo polymerization, and this equation is solved exactly. As shrinkage is a consequence of the polymerization reaction, and the polymerization reaction is diffusion controlled, we postulate that the shrinkage strain rate is described by a diffusion-type equation. We find the explicit form of the solution to this equation and determine the strain in the resin monomers. Also, by using the equations of linear viscoelasticity, we determine the stresses in the polymer due to the shrinkage. The time evolution of the stresses implies that the maximal stresses develop at the very beginning of the polymerization process: the stress in a light-treated dental composite has its largest value shortly after the treatment starts. The strain settles to a constant value after about 100 s (for the cases treated by Atai and Watts). From the model developed here, the shrinkage strain of dental composites and resin monomers is analytically determined. The maximal value of the stresses is important, since this value must be smaller than the adhesive bond strength at the cavo-restoration interface. The maximum stress determined here depends on the diffusivity coefficient. Since the diffusivity coefficient increases as polymerization proceeds, it follows that the periods of light treatment should be shorter at the beginning of the treatment and longer at the end, with a dark interval between the initial low-intensity and the following high-intensity curing. This is because at the end of polymerization the stress relaxation cannot take place.
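As an illustrative form, consistent with the description above but an assumption rather than the paper's exact equation, a linear fractional-order relaxation law for the strain admits a closed-form solution via the Mittag-Leffler function:

```latex
% Assumed linear fractional-order model for the shrinkage strain \varepsilon(t):
% a Caputo derivative of order 0 < \alpha \le 1 drives relaxation toward the
% final shrinkage strain \varepsilon_\infty at rate k.
\[
  {}^{C}\!D_t^{\alpha}\,\varepsilon(t) \;=\; k\,\bigl(\varepsilon_\infty - \varepsilon(t)\bigr),
  \qquad \varepsilon(0) = 0,
\]
\[
  \varepsilon(t) \;=\; \varepsilon_\infty\,\Bigl(1 - E_\alpha\!\bigl(-k\,t^{\alpha}\bigr)\Bigr),
  \qquad
  E_\alpha(z) \;=\; \sum_{n=0}^{\infty} \frac{z^{n}}{\Gamma(\alpha n + 1)}.
\]
% For \alpha = 1 this reduces to the classical exponential saturation
% \varepsilon(t) = \varepsilon_\infty (1 - e^{-k t}).
```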
Updating categorical soil maps using limited survey data by Bayesian Markov chain cosimulation.
Li, Weidong; Zhang, Chuanrong; Dey, Dipak K; Willig, Michael R
2013-01-01
Updating categorical soil maps is necessary for providing current, higher-quality soil data to agricultural and environmental management but may not require a costly thorough field survey because latest legacy maps may only need limited corrections. This study suggests a Markov chain random field (MCRF) sequential cosimulation (Co-MCSS) method for updating categorical soil maps using limited survey data provided that qualified legacy maps are available. A case study using synthetic data demonstrates that Co-MCSS can appreciably improve simulation accuracy of soil types with both contributions from a legacy map and limited sample data. The method indicates the following characteristics: (1) if a soil type indicates no change in an update survey or it has been reclassified into another type that similarly evinces no change, it will be simply reproduced in the updated map; (2) if a soil type has changes in some places, it will be simulated with uncertainty quantified by occurrence probability maps; (3) if a soil type has no change in an area but evinces changes in other distant areas, it still can be captured in the area with unobvious uncertainty. We concluded that Co-MCSS might be a practical method for updating categorical soil maps with limited survey data.
Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.
Rad, Kamiar Rahnama; Paninski, Liam
2010-01-01
Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural errorbars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
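A minimal Gaussian-process smoothing of a 2-D rate map in the spirit described above, using scikit-learn; the spike counts are simulated, a Gaussian likelihood approximates the point-process likelihood, and the kernel length-scale (the map's prior smoothness) is fit by maximum marginal likelihood during fit(), as the abstract describes.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(5)

# Simulated place field: firing rate peaks at the centre of a unit arena.
grid = np.linspace(0.0, 1.0, 20)
X = np.array([(x, y) for x in grid for y in grid])
true_rate = 10.0 * np.exp(-((X[:, 0] - 0.5) ** 2 + (X[:, 1] - 0.5) ** 2) / 0.05)
counts = rng.poisson(true_rate).astype(float)          # observed spike counts per bin

# GP regression with an RBF kernel; hyperparameters are optimized by
# maximizing the marginal likelihood inside fit().
kernel = ConstantKernel(10.0) * RBF(length_scale=0.2)
gp = GaussianProcessRegressor(kernel=kernel, alpha=1.0, normalize_y=True)
gp.fit(X, counts)

rate_map, rate_sd = gp.predict(X, return_std=True)     # estimate plus natural errorbars
print(f"peak true rate {true_rate.max():.1f}, "
      f"peak estimated rate {rate_map.max():.1f} ± {rate_sd[np.argmax(rate_map)]:.1f}")
```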
Shrinkage and durability study of bridge deck concrete.
2010-12-01
The Mississippi Department of Transportation is incorporating changes to material : specifications and construction procedures for bridge decks in an effort to reduce shrinkage : cracking. These changes are currently being implemented into a limited ...
Resizing metal-coated nanopores using a scanning electron microscope.
Chansin, Guillaume A T; Hong, Jongin; Dusting, Jonathan; deMello, Andrew J; Albrecht, Tim; Edel, Joshua B
2011-10-04
Electron beam-induced shrinkage provides a convenient way of resizing solid-state nanopores in Si3N4 membranes. Here, a scanning electron microscope (SEM) has been used to resize a range of different focussed ion beam-milled nanopores in Al-coated Si3N4 membranes. Energy-dispersive X-ray spectra and SEM images acquired during resizing highlight that a time-variant carbon deposition process is the dominant mechanism of pore shrinkage, although granular structures on the membrane surface in the vicinity of the pores suggest that competing processes may occur. Shrinkage is observed on the Al side of the pore as well as on the Si3N4 side, while the shrinkage rate is observed to depend on a variety of factors.
Brewis, C; Pracy, J P; Albert, D M
2000-04-01
The treatments previously used for lymphangiomas of the head and neck in children, surgery and intralesional injection of sclerosants, are associated with significant morbidity. A new treatment, intralesional injection of OK-432, was used for lymphangiomas of the head and neck in 11 children. The results were total shrinkage in two, marked shrinkage in two, slight shrinkage in five and no response in two. The results were not affected by previous surgery nor by whether aspiration prior to injection was possible. There were no recurrences in those children in whom shrinkage occurred and no child had subsequent surgery following injection. The results of this series support those of previous series showing that OK-432 injection is an effective and safe treatment for lymphangiomas of the head and neck in children.
Gyulkhandanyan, Armen V; Mutlu, Asuman; Allen, David J; Freedman, John; Leytin, Valery
2014-01-01
Depolarization of mitochondrial inner transmembrane potential (ΔΨm) is a key biochemical manifestation of the intrinsic apoptosis pathway in anucleate platelets. Little is known, however, about the relationship between ΔΨm depolarization and downstream morphological manifestations of platelet apoptosis, cell shrinkage and microparticle (MP) formation. To elucidate this relationship in human platelets. Using flow cytometry, we analyzed ΔΨm depolarization, platelet shrinkage and MP formation in platelets treated with BH3-mimetic ABT-737 and calcium ionophore A23187, well-known inducers of intrinsic platelet apoptosis. We found that at optimal treatment conditions (90min, 37°C) both ABT-737 and A23187 induce ΔΨm depolarization in the majority (88-94%) of platelets and strongly increase intracellular free calcium. In contrast, effects of A23187 and ABT-737 on platelet shrinkage and MP formation are quite different. A23187 strongly stimulates cell shrinkage and MP formation, whereas ABT-737 only weakly induces these events (10-20% of the effect seen with A23187, P<0.0001). These data indicate that a high level of ΔΨm depolarization and intracellular free calcium does not obligatorily ensure strong platelet shrinkage and MP formation. Since ABT-737 efficiently induces clearance of platelets from the circulation, our results suggest that platelet clearance may occur in the absence of the morphological manifestations of apoptosis.
Skals, Marianne; Jensen, Uffe B.; Ousingsawat, Jiraporn; Kunzelmann, Karl; Leipziger, Jens; Praetorius, Helle A.
2010-01-01
α-Hemolysin from Escherichia coli (HlyA) readily lyse erythrocytes from various species. We have recently demonstrated that this pore-forming toxin provokes distinct shrinkage and crenation before it finally leads to swelling and lysis of erythrocytes. The present study documents the underlying mechanism for this severe volume reduction. We show that HlyA-induced shrinkage and crenation of human erythrocytes occur subsequent to a significant rise in [Ca2+]i. The Ca2+-activated K+ channel KCa3.1 (or Gardos channel) is essential for the initial shrinkage, because both clotrimazole and TRAM-34 prevent the shrinkage and potentiate hemolysis produced by HlyA. Notably, the recently described Ca2+-activated Cl− channel TMEM16A contributes substantially to HlyA-induced cell volume reduction. Erythrocytes isolated from TMEM16A−/− mice showed significantly attenuated crenation and increased lysis compared with controls. Additionally, we found that HlyA leads to acute exposure of phosphatidylserine in the outer leaflet of the plasma membrane. This exposure was considerably reduced by KCa3.1 antagonists. In conclusion, this study shows that HlyA triggers acute erythrocyte shrinkage, which depends on Ca2+-activated efflux of K+ via KCa3.1 and Cl− via TMEM16A, with subsequent phosphatidylserine exposure. This mechanism might potentially allow HlyA-damaged erythrocytes to be removed from the bloodstream by macrophages and thereby reduce the risk of intravascular hemolysis.
Domestic well locations and populations served in the contiguous U.S.: 1990
Johnson, Tyler; Belitz, Kenneth
2017-01-01
We estimate the location and population served by domestic wells in the contiguous United States in two ways: (1) the "Block Group Method" (BGM) uses data from the 1990 census, and (2) the "Road-Enhanced Method" (REM) refines the locations by using a buffer expansion and shrinkage technique along roadways to define areas where domestic wells exist. The fundamental assumption is that houses (and therefore domestic wells) are located near a named road. The results are presented as two nationally consistent domestic-well population datasets. While both methods can be considered valid, the REM map is more precise in locating domestic wells; the REM map has a smaller amount of spatial bias (Type 1 and Type 2 errors nearly equal, versus biased toward Type 1), total error (10.9% vs 23.7%), and distance error (2.0 km vs 2.7 km) when comparing the REM and BGM maps to a calibration map in California. However, the BGM map is more inclusive of all potential locations for domestic wells. Independent domestic well datasets from the USGS and the States of MN, NV, and TX show that the BGM captures about 5 to 10% more wells than the REM. One key difference between the BGM and the REM is the mapping of low-density areas. The REM reduces areas mapped as low density by 57%, concentrating populations into denser regions. Therefore, if one is trying to capture all of the potential areas of domestic-well usage, then the BGM map may be more applicable. If location is more imperative, then the REM map is better at identifying areas of the landscape with the highest probability of finding a domestic well. Depending on the purpose of a study, a combination of both maps can be used.
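The buffer expansion-and-shrinkage trick can be illustrated with shapely: buffering each road outward, merging the results, and then buffering back inward leaves a consolidated band near named roads where wells are assumed to sit; the distances and the toy road geometry here are invented for the example, not the method's calibrated values.

```python
from shapely.geometry import LineString
from shapely.ops import unary_union

# Toy named-road network (coordinates in metres).
roads = [
    LineString([(0, 0), (1000, 0)]),
    LineString([(0, 150), (1000, 150)]),   # a nearby parallel road
    LineString([(500, -800), (500, 0)]),
]

expand, shrink = 500.0, 300.0              # assumed buffer distances

# Expand each road buffer, merge, then shrink back: nearby buffers fuse into
# one area, yielding a smoothed zone near named roads where domestic wells
# are assumed to exist.
expanded = unary_union([r.buffer(expand) for r in roads])
well_zone = expanded.buffer(-shrink)

print(f"zone area: {well_zone.area / 1e6:.2f} km^2")
```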
Wiechmann, Thorsten; Pallagst, Karina M
2012-01-01
Many American and European cities have to deal with demographic and economic trajectories leading to urban shrinkage. According to official data, 13% of urban regions in the US and 54% of those in the EU have lost population in recent years. However, the extent and spatial distribution of declining populations differ significantly between Europe and the US. In Germany, the situation is driven by falling birth rates and the effects of German reunification. In the US, shrinkage is basically related to long-term industrial transformation. But the challenges of shrinking cities seldom appeared on the agendas of politicians and urban planners until recently. This article provides a critical overview of the development paths and local strategies of four shrinking cities: Schwedt and Dresden in eastern Germany; Youngstown and Pittsburgh in the US. A typology of urban growth and shrinkage, from economic and demographic perspectives, enables four types of city to be differentiated and the differences between the US and eastern Germany to be discussed. The article suggests that a new transatlantic debate on policy and planning strategies for restructuring shrinking cities is needed to overcome the dominant growth orientation that in most cases intensifies the negative consequences of shrinkage.
Prediction of Shrinkage Porosity Defect in Sand Casting Process of LM25
NASA Astrophysics Data System (ADS)
Rathod, Hardik; Dhulia, Jay K.; Maniar, Nirav P.
2017-08-01
In today's competitive global environment, foundries need to operate productively, with the fewest possible rejections, and produce castings in the shortest possible lead time. It has become extremely difficult for foundries to meet demands for defect-free castings on strict delivery schedules. Casting solidification is a complex process, and the prediction of shrinkage defects in metal casting is a critical concern for foundries and a potential research area in casting. Given the increasing pressure to improve quality and reduce cost, it is essential to upgrade the methodology currently used in foundries. In the present research work, a methodology for predicting the shrinkage porosity defect in the sand casting process of LM25, using experimentation and ANSYS, is proposed. The objectives achieved are the prediction of the shrinkage porosity distribution in Al-Si casting and the determination of the effectiveness of the investigated function for predicting shrinkage porosity, by correlating the results of the simulation studies with those obtained experimentally. The real-time applicability of the research is reflected in the fact that experiments were performed on 9 different Y-junctions at a foundry, and the practical data obtained from these experiments were used for the simulations.
NASA Astrophysics Data System (ADS)
Carneiro, Vanda S. M.; Mota, Cláudia C. B. O.; Souza, Alex F.; Cajazeira, Marlus R. R.; Gerbi, Marleny E. M. M.; Gomes, Anderson S. L.
2018-02-01
This study evaluated the polymerization shrinkage of two experimental flowable composite resins (CR) with different proportions of urethane dimethacrylate (UDMA)/triethylene glycol dimethacrylate (TEGDMA) monomers in the organic matrix (50:50 and 60:40, respectively). A commercially available flowable CR, Tetric N-Flow (Ivoclar Vivadent, Liechtenstein), was employed as the control group. The resins were inserted in a cylindrical Teflon mold (7 mm diameter, 0.6 mm height) and scanned with OCT before photoactivation, immediately after, and 15 minutes after light-curing (Radii-Cal, SDI, Australia, 1,200 mW/cm²) exposure. A Callisto SD-OCT system (Thorlabs Inc, USA), operating at a 930 nm central wavelength, was employed for image acquisition. Cross-sectional OCT images were captured with 8 mm transverse scanning (2000×512 matrix) and processed with the ImageJ software for comparison between the scanning times and between groups. Pearson correlation showed significant shrinkage for all groups at each time analyzed. The Kruskal-Wallis test showed greater polymerization shrinkage for the 50:50 UDMA/TEGDMA group (p=0.001), followed by the control group (p=0.018). TEGDMA concentration was proportionally related to the polymerization shrinkage of the flowable composite resins.
Combined Use of Shrinkage Reducing Admixture and CaO in Cement Based Materials
NASA Astrophysics Data System (ADS)
Tittarelli, Francesca; Giosuè, Chiara; Monosi, Saveria
2017-10-01
The combined addition of a shrinkage-reducing admixture (SRA) and a CaO-based expansive agent (CaO) has been found to have a synergistic effect in improving the dimensional stability of cement-based materials. In this work, aimed at further investigating this effect, mortar and self-compacting concrete specimens were prepared either without admixtures, as a reference, or with SRA alone and/or CaO. Their performance was compared in terms of compressive strength and free shrinkage measurements. The results confirmed the synergistic shrinkage-reducing effect in the specimens manufactured with both SRA and CaO. To clarify this phenomenon, the effect of SRA on the hydration of CaO as well as of cement was evaluated using different techniques. The results show that SRA induces a finer microstructure of the CaO hydration products and retards the microstructure development of cement-based materials. A more deformable mortar or concrete, due to the delay in microstructure development caused by SRA, coupled with a finer microstructure of the CaO hydration products, could allow higher early expansion, which might better counteract the subsequent drying shrinkage.
Concrete pavement mixture design and analysis (MDA) : factors influencing drying shrinkage.
DOT National Transportation Integrated Search
2014-10-01
This literature review focuses on factors influencing drying shrinkage of concrete. Although the factors are normally interrelated, they can be categorized into three groups: paste quantity, paste quality, and other factors.
Local coexistence of VO2 phases revealed by deep data analysis
Strelcov, Evgheni; Ievlev, Anton; Tselev, Alexander; ...
2016-07-07
We report a synergistic approach of micro-Raman spectroscopic mapping and deep data analysis to study the distribution of crystallographic phases and ferroelastic domains in a defected Al-doped VO2 microcrystal. Bayesian linear unmixing revealed an uneven distribution of the T phase, which is stabilized by surface defects and uneven local doping and which went undetected by other classical analysis techniques such as PCA and SIMPLISMA. This work demonstrates the impact of information recovery via statistical analysis and full mapping in spectroscopic studies of vanadium dioxide systems, where full mapping is commonly replaced by averaging or single-point probing, both of which suffer from information misinterpretation due to low resolving power.
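Bayesian linear unmixing itself is a specialized algorithm; as a rough stand-in, the sketch below contrasts a non-negativity-constrained factorization (sklearn's NMF) with PCA on synthetic Raman-like mixed spectra, illustrating why additive unmixing can isolate a minority phase that orthogonal PCA components smear across loadings. All spectra and peak positions are invented.

```python
# PCA components are orthogonal and can go negative (mixing phases),
# while a non-negative factorization recovers additive endmember-like
# spectra. NMF is only a simple proxy for Bayesian linear unmixing.
import numpy as np
from sklearn.decomposition import PCA, NMF

rng = np.random.default_rng(0)
wavenumbers = np.linspace(100, 700, 300)

def peak(center, width):                 # synthetic single-phase spectrum
    return np.exp(-((wavenumbers - center) / width) ** 2)

endmembers = np.array([peak(220, 15) + peak(610, 25),   # "M1-like" phase
                       peak(200, 12) + peak(645, 20)])  # "T-like" phase
abund = rng.dirichlet([1, 1], size=500)                 # per-pixel fractions
spectra = abund @ endmembers + 0.01 * rng.standard_normal((500, 300))

nmf = NMF(n_components=2, max_iter=500).fit(np.clip(spectra, 0, None))
pca = PCA(n_components=2).fit(spectra)
print("NMF components are non-negative:", (nmf.components_ >= 0).all())
print("PCA components go negative:    ", (pca.components_ < 0).any())
```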
A Bayesian modelling framework for tornado occurrences in North America
NASA Astrophysics Data System (ADS)
Cheng, Vincent Y. S.; Arhonditsis, George B.; Sills, David M. L.; Gough, William A.; Auld, Heather
2015-03-01
Tornadoes represent one of nature’s most hazardous phenomena that have been responsible for significant destruction and devastating fatalities. Here we present a Bayesian modelling approach for elucidating the spatiotemporal patterns of tornado activity in North America. Our analysis shows a significant increase in the Canadian Prairies and the Northern Great Plains during the summer, indicating a clear transition of tornado activity from the United States to Canada. The linkage between monthly-averaged atmospheric variables and likelihood of tornado events is characterized by distinct seasonality; the convective available potential energy is the predominant factor in the summer; vertical wind shear appears to have a strong signature primarily in the winter and secondarily in the summer; and storm relative environmental helicity is most influential in the spring. The present probabilistic mapping can be used to draw inference on the likelihood of tornado occurrence in any location in North America within a selected time period of the year.
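The covariate-to-occurrence linkage described here can be illustrated with a toy Bayesian Poisson regression: the sketch below, with simulated data and invented coefficients, fits monthly counts against standardized CAPE, wind shear, and helicity via a random-walk Metropolis sampler. The paper's actual model is a far richer spatiotemporal framework.

```python
# Toy Bayesian count model: Poisson regression with N(0, 10^2) priors,
# fitted by random-walk Metropolis. Data are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 400
X = np.column_stack([np.ones(n), rng.standard_normal((n, 3))])  # 1, CAPE, shear, SREH
beta_true = np.array([0.5, 0.8, 0.3, 0.4])
y = rng.poisson(np.exp(X @ beta_true))

def log_post(beta):                      # Poisson log-likelihood + Gaussian prior
    eta = X @ beta
    return np.sum(y * eta - np.exp(eta)) - 0.5 * np.sum(beta**2) / 100.0

beta, lp, samples = np.zeros(4), -np.inf, []
for it in range(20000):
    prop = beta + 0.02 * rng.standard_normal(4)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        beta, lp = prop, lp_prop
    if it > 5000:                              # discard burn-in
        samples.append(beta.copy())

print("posterior means:", np.mean(samples, axis=0).round(2))
```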
Brain white matter fiber estimation and tractography using Q-ball imaging and a Bayesian model.
Lu, Meng
2015-01-01
Diffusion tensor imaging (DTI) allows for the non-invasive in vivo mapping of brain tractography. However, fiber bundles have complex structures, such as fiber crossings, fiber branchings, and fibers with large curvatures, that DTI cannot accurately handle. This study presents a novel brain white matter tractography method using Q-ball imaging (QBI) as the data source instead of DTI, because QBI can provide accurate information about multiple fiber crossings and branchings in a single voxel using an orientation distribution function (ODF). The presented method also uses graph theory to construct a Bayesian model-based graph, so that fiber tracking between two voxels can be represented as the shortest path in the graph. Our experiments showed that the new method can accurately handle brain white matter fiber crossings and branchings, and reconstruct brain tractography in both phantom data and real brain data.
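A toy illustration of the shortest-path idea (not the paper's Bayesian construction): voxels become graph nodes, and edge weights are cheap when a step aligns with the local ODF peak direction, so Dijkstra's algorithm recovers a fiber-like path. The ODF field below is synthetic.

```python
# Fiber tracking as a shortest path: edges cost little along the local
# dominant fiber direction and much across it. 2-D grid, synthetic ODF.
import heapq
import numpy as np

shape = (10, 10)
odf_peak = np.tile(np.array([1.0, 0.0]), shape + (1,))  # x-aligned fiber everywhere

def neighbors(v):
    x, y = v
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]:
        nx_, ny_ = x + dx, y + dy
        if 0 <= nx_ < shape[0] and 0 <= ny_ < shape[1]:
            yield (nx_, ny_)

def edge_cost(u, v):
    step = np.array(v, dtype=float) - np.array(u, dtype=float)
    step /= np.linalg.norm(step)
    align = abs(step @ odf_peak[u])      # 1 = parallel to the fiber direction
    return 1.0 - 0.99 * align            # cheap along the fiber, dear across it

def dijkstra(src, dst):
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, np.inf):
            continue
        for v in neighbors(u):
            nd = d + edge_cost(u, v)
            if nd < dist.get(v, np.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

print(dijkstra((0, 5), (9, 5)))  # tracks straight along the x-aligned fiber
```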
Hasty retreat of glaciers in the Palena province of Chile
NASA Astrophysics Data System (ADS)
Paul, F.; Mölg, N.; Bolch, T.
2013-12-01
Mapping glacier extent from optical satellite data has become a highly efficient tool for creating or updating glacier inventories and determining glacier changes over time. A particularly valuable archive in this regard is the nearly 30-year time series of Landsat Thematic Mapper (TM) data that is freely available (already orthorectified) for most regions of the world from the USGS. One region with dramatic glacier shrinkage but no systematic assessment of changes is the Palena province in Chile, south of Puerto Montt. A major bottleneck for the accurate determination of glacier changes in this region is the huge amount of snow falling in this very maritime region, hiding the perimeters of glaciers throughout the year. Consequently, we found only three years with Landsat scenes that can be used to map glacier extent through time. We here present the results of a glacier change analysis from six Landsat scenes (path-rows 232-89/90) acquired in 1985, 2000 and 2011, covering the Palena district in Chile. Clean glacier ice was mapped automatically with a standard technique (TM3/TM5 band ratio), and manual editing was applied to remove wrongly classified lakes and to add debris-covered glacier parts. The digital elevation model (DEM) from SRTM was used to derive drainage divides, determine glacier-specific topographic parameters, and analyse the area changes in relation to topography. The scene from 2000 has the best snow conditions and was used to eliminate seasonal snow in the other two scenes by digital combination of the binary glacier masks. The observed changes show huge spatial variability with a strong dependence on elevation and glacier hypsometry. While small mountain glaciers at high elevations and on steep slopes show virtually no change over the 26-year period, ice at low elevations on large valley glaciers shows a dramatic decline (area and thickness loss). Some glaciers retreated more than 3 km over this time period or even disappeared completely. Typically, these glaciers lost contact with the accumulation areas of their tributaries and now consist of an ablation area only. Furthermore, numerous pro-glacial lakes formed or expanded rapidly, increasing the local hazard potential. On the other hand, some glaciers located on or near (still active) volcanoes have advanced in the same time period. Observed temperature trends (decreasing) are in contrast to the observed strong glacier shrinkage.
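A minimal sketch of the standard mapping chain, under stated assumptions: a red/SWIR (TM3/TM5) band-ratio threshold yields a clean-ice mask, and a snow-poor reference scene suppresses seasonal snow via digital mask combination. The rasters below are random placeholders, and the dilation-based combination is only a crude stand-in for the study's editing procedure.

```python
# Band-ratio glacier mapping with seasonal-snow suppression (sketch).
import numpy as np
from scipy.ndimage import binary_dilation

rng = np.random.default_rng(3)
shape = (400, 400)
tm3_1985, tm5_1985 = rng.uniform(0.05, 0.9, shape), rng.uniform(0.01, 0.6, shape)
tm3_2000, tm5_2000 = rng.uniform(0.05, 0.9, shape), rng.uniform(0.01, 0.6, shape)

RATIO_THRESHOLD = 2.0                               # assumed; typically scene-dependent
raw_1985 = (tm3_1985 / tm5_1985) > RATIO_THRESHOLD  # clean ice plus seasonal snow
ref_2000 = (tm3_2000 / tm5_2000) > RATIO_THRESHOLD  # snow-poor reference scene

# keep 1985 glacier pixels only near extents supported by the reference scene
cleaned_1985 = raw_1985 & binary_dilation(ref_2000, iterations=3)

PIXEL_KM2 = 0.03 * 0.03                             # 30 m Landsat TM pixels
print(f"raw 1985 area:     {raw_1985.sum() * PIXEL_KM2:.1f} km^2")
print(f"cleaned 1985 area: {cleaned_1985.sum() * PIXEL_KM2:.1f} km^2")
```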
Wu, Xiaorong; Sun, Yi; Xie, Weili; Liu, Yanju; Song, Xueyu
2010-05-01
Developing low-shrinkage dental composite resins has been a focus of the past ten years. A major difficulty in developing low-shrinkage dental materials is that deficiencies in their mechanical properties cannot satisfy clinical requirements. The aim of this study was to develop novel dental nanocomposites incorporating polyhedral oligomeric silsesquioxane (POSS), and in particular to evaluate their volumetric shrinkage and mechanical properties and to improve the shrinkage, working performance, and service life of dental composite resins. The effect of added POSS on the composites' mechanical properties was evaluated at POSS weight percentages of 0, 2, 5, 10 and 15 wt%. Fourier-transform infrared spectroscopy and X-ray diffraction were used to characterize the microstructures. The physico-mechanical properties investigated included volumetric shrinkage, flexural strength, flexural modulus, compressive strength, compressive modulus, Vickers hardness and fracture energy. Furthermore, the possible reinforcement mechanism is discussed. The shrinkage of the novel nanocomposites decreased from 3.53% to 2.18%. The nanocomposites incorporating POSS showed greatly improved mechanical properties; for example, with only 2 wt% POSS added, the nanocomposite's flexural strength increased by 15%, compressive strength by 12%, and hardness by 15%, and, unusually, even the toughness of the resins was clearly increased. With 5 wt% POSS polymerized, compressive strength increased from 192 MPa to 251 MPa and compressive modulus from 3.93 GPa to 6.62 GPa, but flexural strength began to decline from 87 MPa to 75 MPa. This finding indicates that the reinforcement mechanism in the flexural state may differ from that in the compressive state. The mechanical properties and volumetric shrinkage of dental composite resins polymerized with POSS can be improved significantly; in the current study, the nanocomposite incorporating 2 wt% POSS achieved the best improvement.
Shibasaki, S; Takamizawa, T; Nojiri, K; Imai, A; Tsujimoto, A; Endo, H; Suzuki, S; Suda, S; Barkmeier, W W; Latta, M A; Miyazaki, M
The present study determined the mechanical properties and volumetric polymerization shrinkage of different categories of resin composite. Three high-viscosity bulk fill resin composites were tested: Tetric EvoCeram Bulk Fill (TB, Ivoclar Vivadent), Filtek Bulk Fill posterior restorative (FB, 3M ESPE), and SonicFill (SF, Kerr Corp). Two low-shrinkage resin composites, Kalore (KL, GC Corp) and Filtek LS Posterior (LS, 3M ESPE), were used. Three conventional resin composites, Herculite Ultra (HU, Kerr Corp), Estelite Σ Quick (EQ, Tokuyama Dental), and Filtek Supreme Ultra (SU, 3M ESPE), were used as comparison materials. Following ISO Specification 4049, six specimens of each resin composite were used to determine flexural strength, elastic modulus, and resilience. Volumetric polymerization shrinkage was determined using a water-filled dilatometer. Data were evaluated using analysis of variance followed by Tukey's honestly significant difference test (α=0.05). The flexural strength of the resin composites ranged from 115.4 to 148.1 MPa, the elastic modulus from 5.6 to 13.4 GPa, and the resilience from 0.70 to 1.0 MJ/m³. There were significant differences in flexural properties between the materials but no clear outliers. Volumetric changes as a function of time over a duration of 180 seconds depended on the type of resin composite. However, for all the resin composites apart from LS, volumetric shrinkage began soon after the start of light irradiation, and a rapid decrease in volume during light irradiation followed by a slower decrease was observed. The low-shrinkage resin composites KL and LS showed significantly lower volumetric shrinkage than the other tested materials at the 180-second measuring point. In contrast, the three bulk fill resin composites showed a higher volumetric change than the other resin composites. The findings from this study provide clinicians with valuable information regarding the mechanical properties and polymerization kinetics of these categories of current resin composite.
Contour-Driven Atlas-Based Segmentation
Wachinger, Christian; Fritscher, Karl; Sharp, Greg; Golland, Polina
2016-01-01
We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images. PMID:26068202
Bridge decks : mitigation of cracking and increased durability.
DOT National Transportation Integrated Search
2013-07-01
This report discusses the application of expansive cements (Type K and Type G) and shrinkage-reducing admixtures (SRAs) in reducing the cracking due to drying shrinkage. The Type K expansive cement contained portland cement and calcium sulfoalumi...
Sealing of Cracks on Florida Bridge Decks with Steel Girders
DOT National Transportation Integrated Search
2012-08-01
One of the biggest problems affecting bridges is the transverse cracking and deterioration of concrete bridge decks. The causes of early age cracking are primarily attributed to plastic shrinkage, temperature effects, autogenous shrinkage, and drying...
Measurement of early age shrinkage of Virginia concrete mixtures.
DOT National Transportation Integrated Search
2008-01-01
Concrete volume changes throughout its service life. The total in-service volume change is the resultant of applied loads and shrinkage. When loaded, concrete undergoes an instantaneous elastic deformation and a slow inelastic deformation called cree...
Limitation of Shrinkage Porosity in Aluminum Rotor Die Casting
NASA Astrophysics Data System (ADS)
Kim, Young-Chan; Choi, Se-Weon; Kim, Cheol-Woo; Cho, Jae-Ik; Lee, Sung-Ho; Kang, Chang-Seog
Aluminum rotors are prone to many casting defects, especially large amounts of air and shrinkage porosity, which cause eccentricity, losses, and noise during motor operation. Many attempts have been made to develop methods of shrinkage porosity control, but some problems remain unsolved. In this research, a vacuum squeeze die casting process is proposed to limit these defects. Dies with six pin-point gates, capable of local squeezing at the end ring, were used. The influence of filling patterns in high-pressure die casting (HPDC) was evaluated, and the important process control parameters were high injection speed, squeeze length, venting, and process conditions. By using local squeezing and vacuum during filling and solidification, air and shrinkage porosity were significantly reduced, and the feeding efficiency at the upper end ring was improved by 10%. As a result of controlling the defects, the dynamometer test showed motor efficiency improved by more than 4%.
Effect of hot-dry environment on fiber-reinforced self-compacting concrete
NASA Astrophysics Data System (ADS)
Tioua, Tahar; Kriker, Abdelouahed; Salhi, Aimad; Barluenga, Gonzalo
2016-07-01
Drying shrinkage can be a major cause of the deterioration of concrete structures. Variations in ambient temperature and relative humidity cause changes in the properties of hardened concrete that can affect its mechanical and drying shrinkage characteristics. The present study investigated the mechanical strength and, particularly, the drying shrinkage properties of self-compacting concretes (SCC) reinforced with date palm fiber and exposed to a hot and dry environment. A total of nine different fiber-reinforced self-compacting concrete (FRSCC) mixtures and one mixture without fiber were prepared. The fiber volume fractions were 0.1, 0.2 and 0.3%, and the fiber lengths were 10, 20 and 30 mm. It was observed that drying shrinkage lessened when a low volume fraction of short fibers was added under curing conditions (T = 20 °C and RH = 50 ± 5%), but increased in the hot and dry environment.
Fugolin, Ana Paula Piovezan; Correr-Sobrinho, Lourenço; Correr, Américo Bortolazzo; Sinhoreti, Mário Alexandre Coelho; Guiraldo, Ricardo Danil; Consani, Simonides
2016-01-01
The purpose of this study was to investigate the influence of the irradiance emitted by a light-curing unit on microhardness, degree of conversion (DC), and gaps resulting from shrinkage of 2 dental composite resins. Cylinders of nanofilled and microhybrid composites were fabricated and light cured. After 24 hours, the tops and bottoms of the specimens were evaluated via indentation testing and Fourier transform infrared spectroscopy to determine Knoop hardness number (KHN) and DC, respectively. Gap width (representing polymerization shrinkage) was measured under a scanning electron microscope. The nanofilled composite specimens presented significantly greater KHNs than did the microhybrid specimens (P < 0.05). The microhybrid composite resin exhibited significantly greater DC and gap width than the nanofilled material (P < 0.05). Irradiance had a mostly material-dependent influence on the hardness and DC, but not the polymerization shrinkage, of composite resins.
Improvement of formability of high strength steel sheets in shrink flanging
NASA Astrophysics Data System (ADS)
Hamedon, Z.; Abe, Y.; Mori, K.
2016-02-01
In shrink flanging, wrinkling tends to occur due to compressive stress. Wrinkling causes difficulty in assembling parts, and severe wrinkling may lead to rupture of parts. In shrink flanging of ultra-high strength steel sheets, wrinkling not only degrades the product but also causes seizure and wear of the dies, shortening die life. In the present study, the shape of a punch having gradual contact was optimized in order to prevent wrinkling in shrink flanging of ultra-high strength steel sheets. The sheet was gradually bent from the corner of the sheet to reduce the compressive stress. Wrinkling in the shrink flanging of the ultra-high strength steel sheets was prevented by the punch having gradual contact. It was found that a punch having gradual contact is effective in preventing the occurrence of wrinkling in shrink flanging.
Possibilities of using aluminate cements in high-rise construction
NASA Astrophysics Data System (ADS)
Kaddo, Maria
2018-03-01
The article describes preferred uses of alternative binders based on aluminate cements for high-rise construction. Possible areas for the rational use of aluminate cements, with the aim of increasing the service life of materials and matching material durability to the required durability of the building, are analyzed. Results on the structure, shrinkage, and physical-mechanical properties of concrete obtained from dry mixes based on aluminate cements for self-leveling floors are presented. To study the shrinkage mechanism of curing binders and to evaluate the role of water evaporation in shrinkage development, an experiment was undertaken with simple unfilled systems: gypsum binder, portland cement, and corrosion-resistant high-alumina cement + gypsum. The feasibility in principle of a shrinkage-compensated binder based on aluminate cement, gypsum, and modern superplasticizers was established, with cracking resistance and corrosion resistance providing the durability of the composition.
Development of early age shrinkage stresses in reinforced concrete bridge decks
NASA Astrophysics Data System (ADS)
William, Gergis W.; Shoukry, Samir N.; Riad, Mourad Y.
2008-12-01
This paper describes the instrumentation and data analysis of a reinforced concrete bridge deck constructed on 3-span continuous steel girders in Evansville, West Virginia. An instrumentation system consisting of 232 sensors was developed and implemented specifically to measure strains and temperature in the concrete deck, strains in the longitudinal and transverse rebars, the overall contraction and expansion of the concrete deck, and crack openings. Data from all sensors were automatically collected every 30 minutes, starting at the time of placing the concrete deck. Measured strain and temperature time-histories were used to calculate the stresses, which were processed to attenuate the thermal effects of daily temperature changes and isolate the drying shrinkage component. The results indicated that most of the concrete shrinkage occurs during the first three days. Under the constraining effects of stay-in-place forms and reinforcement, early age shrinkage leads to elevated longitudinal stress, which is the main factor responsible for crack initiation.
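The thermal-attenuation step can be sketched as subtracting the thermal strain (expansion coefficient times temperature deviation) from the measured strain and smoothing over a 24-hour window; the expansion coefficient, sampling rate, and signals below are all assumed values, not the study's.

```python
# Separating drying shrinkage from daily thermal cycling (sketch).
import numpy as np

rng = np.random.default_rng(7)
ALPHA_CONCRETE = 10e-6          # assumed coefficient of thermal expansion, 1/degC
SAMPLES_PER_DAY = 48            # readings every 30 minutes

t = np.arange(30 * SAMPLES_PER_DAY) / SAMPLES_PER_DAY          # 30 days
temp = 20 + 8 * np.sin(2 * np.pi * t)                          # daily temperature swing
shrinkage_true = -300e-6 * (1 - np.exp(-t / 3.0))              # most shrinkage early on
measured = (shrinkage_true + ALPHA_CONCRETE * (temp - temp[0])
            + 5e-6 * rng.standard_normal(t.size))              # add sensor noise

thermal_corrected = measured - ALPHA_CONCRETE * (temp - temp[0])
kernel = np.ones(SAMPLES_PER_DAY) / SAMPLES_PER_DAY            # 24 h moving average
drying = np.convolve(thermal_corrected, kernel, mode="valid")
print(f"estimated 30-day drying shrinkage: {drying[-1] * 1e6:.0f} microstrain")
```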
Malaria Risk Mapping for Control in the Republic of Sudan
Noor, Abdisalan M.; ElMardi, Khalid A.; Abdelgader, Tarig M.; Patil, Anand P.; Amine, Ahmed A. A.; Bakhiet, Sahar; Mukhtar, Maowia M.; Snow, Robert W.
2012-01-01
Evidence shows that malaria risk maps are rarely tailored to address national control program ambitions. Here, we generate a malaria risk map adapted for malaria control in Sudan. Community Plasmodium falciparum parasite rate (PfPR) data from 2000 to 2010 were assembled and standardized to 2–10 years of age (PfPR2–10). Space-time Bayesian geostatistical methods were used to generate a map of malaria risk for 2010. Surfaces of aridity, urbanization, irrigation schemes, and refugee camps were combined with the PfPR2–10 map to tailor the epidemiological stratification for appropriate intervention design. In 2010, the majority of the geographical area of Sudan had a risk of < 1% PfPR2–10. Areas of meso- and hyperendemic risk were located in the south. About 80% of Sudan's population in 2011 lived in desert areas, urban centers, or areas where risk was < 1% PfPR2–10. Aggregated data suggest that risks have been declining in some high-transmission areas since the 1960s. PMID:23033400
NASA Astrophysics Data System (ADS)
Oommen, T.; Chatterjee, S.
2017-12-01
NASA and the Indian Space Research Organization (ISRO) are generating Earth surface feature data using the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) within the 380 to 2500 nm spectral range. This research focuses on the utilization of such data to better understand the mineral potential in India and to demonstrate the application of spectral data in rock type discrimination and mapping for mineral exploration using automated mapping techniques. The primary focus area of this research is the Hutti-Maski greenstone belt, located in Karnataka, India. The AVIRIS-NG data were integrated with field-analyzed data (laboratory-scale compositional analysis, mineralogy, and a spectral library) to characterize minerals and rock types. An expert system was developed to produce mineral maps from AVIRIS-NG data automatically. The ground truth data for the study areas were obtained from the existing literature and collaborators in India. A Bayesian spectral unmixing algorithm was applied to the AVIRIS-NG data for endmember selection. Classification maps of the minerals and rock types were developed using a support vector machine algorithm. The ground truth data were used to verify the mineral maps.
Control of polymerization shrinkage and stress in nanogel-modified monomer and composite materials
Moraes, Rafael R.; Garcia, Jeffrey W.; Barros, Matthew D.; Lewis, Steven H.; Pfeifer, Carmem S.; Liu, JianCheng; Stansbury, Jeffrey W.
2011-01-01
Objectives: This study demonstrates the effects of nano-scale prepolymer particles as additives to model dental monomer and composite formulations. Methods: Discrete nanogel particles were prepared by solution photopolymerization of isobornyl methacrylate and urethane dimethacrylate in the presence of a chain transfer agent, which also provided a means to attach reactive groups to the prepolymer. Nanogel was added to triethylene glycol dimethacrylate (TEGDMA) in increments between 5 and 40 wt% with resin viscosity, reaction kinetics, shrinkage, mechanical properties, stress and optical properties evaluated. Maximum loading of barium glass filler was determined as a function of nanogel content and composites with varied nanogel content but uniform filler loading were compared in terms of consistency, conversion, shrinkage and mechanical properties. Results: High conversion, high molecular weight internally crosslinked and cyclized nanogel prepolymer was efficiently prepared and redispersed into TEGDMA with an exponential rise in viscosity accompanying nanogel content. Nanogel addition at any level produced no deleterious effects on reaction kinetics, conversion or mechanical properties, as long as reactive nanogels were used. A reduction in polymerization shrinkage and stress was achieved in proportion to nanogel content. Even at high nanogel concentrations, the maximum loading of glass filler was only marginally reduced relative to the control and high strength composite materials with low shrinkage were obtained. Significance: The use of reactive nanogels offers a versatile platform from which resin and composite handling properties can be adjusted while the polymerization shrinkage and stress development that challenge the adhesive bonding of dental restoratives are controllably reduced. PMID:21388669
Polymerization stresses in low-shrinkage dental resin composites measured by crack analysis.
Yamamoto, Takatsugu; Kubota, Yu; Momoi, Yasuko; Ferracane, Jack L
2012-09-01
The objective of this study was to compare several dental restoratives currently advertised as low-shrinkage composites (Clearfil Majesty Posterior, Kalore, Reflexions XLS Dentin and Venus Diamond) with a microfill composite (Heliomolar) in terms of polymerization stress, polymerization shrinkage and elastic modulus. Cracks were made at several distances from the edge of a precision cavity in a soda-lime glass disk. The composites were placed into the cavity and lengths of the cracks were measured before and after light curing. Polymerization stresses generated in the glass at 2 and 10 min after the irradiation were calculated from the crack lengths and K(c) of the glass. Polymerization shrinkage and elastic modulus of the composites also were measured at 2 and 10 min after irradiation using a video-imaging device and a nanoindenter, respectively. The data were statistically analyzed by ANOVAs and Tukey's test (p<0.05). The stress was significantly affected by composite brand, distance and time. The stress was directly proportional to time and inversely proportional to distance from the edge of the cavity. Clearfil Majesty Posterior demonstrated the highest stress and it resulted in the fracture of the glass at 2 min. Venus Diamond and Heliomolar exhibited the greatest shrinkage at both times. The elastic moduli of Clearfil Majesty Posterior and Reflexions XLS Dentin were greatest at 2 and 10 min, respectively. Among the four low-shrinkage composites, two demonstrated significantly reduced polymerization stress compared to Heliomolar, which has previously been shown in in vitro tests to generate low curing stress.
Thermal shrinkage for shoulder instability.
Toth, Alison P; Warren, Russell F; Petrigliano, Frank A; Doward, David A; Cordasco, Frank A; Altchek, David W; O'Brien, Stephen J
2011-07-01
Thermal capsular shrinkage was popular for the treatment of shoulder instability, despite a paucity of outcomes data in the literature defining the indications for this procedure or supporting its long-term efficacy. The purpose of this study was to perform a clinical evaluation of radiofrequency thermal capsular shrinkage for the treatment of shoulder instability, with a minimum 2-year follow-up. From 1999 to 2001, 101 consecutive patients with mild to moderate shoulder instability underwent shoulder stabilization surgery with thermal capsular shrinkage using a monopolar radiofrequency device. Follow-up included a subjective outcome questionnaire and discussion of pain, instability, and activity level. Mean follow-up was 3.3 years (range 2.0-4.7 years). The thermal capsular shrinkage procedure failed due to instability and/or pain in 31% of shoulders at a mean time of 39 months. In patients with unidirectional anterior instability and those with concomitant labral repair, the procedure proved effective. Patients with multidirectional instability had moderate success. In contrast, four of five patients with isolated posterior instability failed. Thermal capsular shrinkage has been advocated for the treatment of shoulder instability, particularly mild to moderate capsular laxity. The ease of the procedure makes it attractive. However, our retrospective review revealed an overall failure rate of 31% in 80 patients with 2-year minimum follow-up. This mid- to long-term cohort study adds to the literature lacking support for thermal capsulorrhaphy in general, particularly for posterior instability.
Property evolution during vitrification of dimethacrylate photopolymer networks
Abu-Elenain, Dalia; Lewis, Steven H.; Stansbury, Jeffrey W.
2013-01-01
Objectives: This study seeks to correlate the interrelated properties of conversion, shrinkage, modulus and stress as dimethacrylate networks transition from rubbery to glassy states during photopolymerization. Methods: An unfilled BisGMA/TEGDMA resin was photocured for various irradiation intervals (7–600 s) to provide controlled levels of immediate conversion, which was monitored continuously for 10 min. Fiber optic near-infrared spectroscopy permitted coupling of real-time conversion measurement with dynamic polymerization shrinkage (linometer), modulus (dynamic mechanical analyzer) and stress (tensometer) development profiles. Results: The varied irradiation conditions produced final conversion ranging from 6% to more than 60%. Post-irradiation conversion (dark cure) was quite limited when photopolymerization was interrupted either at very low or very high levels of conversion, while significant dark cure contributions were possible for photocuring reactions suspended within the post-gel, rubbery regime. Analysis of conversion-based property evolution during and subsequent to photocuring demonstrated that the shrinkage rate increased significantly at about 40% conversion, followed by late-stage suppression in the conversion-dependent shrinkage rate beginning at about 45–50% conversion. The gradual vitrification process over this conversion range is evident from the broad but well-defined inflection in the modulus versus conversion data. As limiting conversion is approached, modulus and, to a somewhat lesser extent, stress rise precipitously as a result of vitrification, with the stress profile showing little if any late-stage suppression as seen with shrinkage. Significance: Near the limiting conversion for this model resin, the volumetric polymerization shrinkage rate slows while an exponential rise in modulus promotes the vitrification process that appears to largely dictate stress development. PMID:24080378
Kadarmideen, Haja N.; Janss, Luc L. G.
2005-01-01
Bayesian segregation analyses were used to investigate the mode of inheritance of osteochondral lesions (osteochondrosis, OC) in pigs. Data consisted of 1163 animals with OC, and their pedigrees included 2891 animals. Mixed-inheritance threshold models (MITM) and several variants of MITM, in conjunction with Markov chain Monte Carlo methods, were developed for the analysis of these (categorical) data. Results showed major genes with significant and substantially higher variances (range 1.384–37.81), compared to the polygenic variance ($\sigma_u^2$). Consequently, heritabilities under mixed inheritance (range 0.65–0.90) were much higher than the heritabilities from the polygenes alone. Disease allele frequencies ranged from 0.38 to 0.88. Additional analyses estimating the transmission probabilities of the major gene showed clear evidence for Mendelian segregation of a major gene affecting osteochondrosis. The variant MITM with an informative prior on $\sigma_u^2$ showed significant improvement in the marginal distributions and the accuracy of parameters. The MITM with a "reduced polygenic model" for parameterization of polygenic effects avoided the convergence problems and poor mixing encountered with an "individual polygenic model." In all cases, "shrinkage estimators" for fixed effects avoided unidentifiability of these parameters. The mixed-inheritance linear model (MILM) was also applied to all OC lesions and compared with the MITM. This is the first study to report evidence of major genes for osteochondral lesions in pigs; these results may also form a basis for underpinning the genetic inheritance of this disease in other animals as well as in humans. PMID:16020792
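A back-of-envelope illustration of the quoted heritabilities: on the liability scale of a threshold model the residual variance is fixed at 1, so a major-gene variance far exceeding the polygenic variance drives the mixed-inheritance heritability. The polygenic variance below is an assumed value, not one reported in the study.

```python
# h2 on the liability scale for a mixed (major gene + polygenes)
# threshold model, where the residual variance is 1 by convention.
def mixed_inheritance_h2(var_major: float, var_poly: float) -> float:
    return (var_major + var_poly) / (var_major + var_poly + 1.0)

# major-gene variances spanning the reported range, assumed var_poly = 0.5
for var_major in (1.384, 5.0, 37.81):
    print(f"var_major={var_major:6.2f} -> h2 = {mixed_inheritance_h2(var_major, 0.5):.2f}")
```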
DOT National Transportation Integrated Search
2012-08-01
Concrete specimens were fabricated for shrinkage, creep, and abrasion resistance testing. Variations of self-consolidating concrete (SCC) and conventional concrete were all tested. The results were compared to previous similar testing programs an...
Rooney, James P K; Tobin, Katy; Crampsie, Arlene; Vajda, Alice; Heverin, Mark; McLaughlin, Russell; Staines, Anthony; Hardiman, Orla
2015-10-01
Evidence of an association between areal ALS risk and population density has been previously reported. We aimed to examine ALS spatial incidence in Ireland using small areas, to compare this analysis with our previous analysis of larger areas, and to examine the associations between population density, social deprivation and ALS incidence. Residential area social deprivation has not previously been investigated as a risk factor for ALS. Using the Irish ALS register, we included all cases of ALS diagnosed in Ireland from 1995-2013. 2006 census data were used to calculate age- and sex-standardised expected cases per small area. Social deprivation was assessed using the Pobal HP deprivation index. Bayesian smoothing was used to calculate small area relative risk for ALS, whilst cluster analysis was performed using SaTScan. The effects of population density and social deprivation were tested in two ways: (1) as covariates in the Bayesian spatial model; (2) via post-Bayesian regression. 1701 cases were included. Bayesian smoothed maps of relative risk at small area resolution matched our previous analysis at larger area resolution closely. Cluster analysis identified two areas of significantly low risk. These areas did not correlate with population density or social deprivation indices. Two areas showing a low frequency of ALS have been identified in the Republic of Ireland. These areas do not correlate with population density or residential area social deprivation, indicating that other factors, such as genetic admixture, may account for the observed findings.
Predicting coastal cliff erosion using a Bayesian probabilistic model
Hapke, Cheryl J.; Plant, Nathaniel G.
2010-01-01
Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70–90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.
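A toy version of the discrete Bayesian-network idea: estimate P(retreat | cliff height, lithology, wave impact) by counting outcomes over binned historical transects, then query the smoothed table for a forecast. Variables, bins, and data below are invented for illustration.

```python
# Conditional probability table from binned transect records (sketch).
from collections import Counter, defaultdict
import random

random.seed(4)
records = []
for _ in range(2000):
    h = random.choice(["low", "high"])        # cliff height bin
    l = random.choice(["weak", "strong"])     # cliff-forming lithology
    w = random.choice(["calm", "stormy"])     # wave impact bin
    retreat = (w == "stormy" and l == "weak") or random.random() < 0.1
    records.append((h, l, w, retreat))

counts = defaultdict(Counter)
for h, l, w, retreat in records:
    counts[(h, l, w)][retreat] += 1

def p_retreat(h, l, w):
    c = counts[(h, l, w)]
    return (c[True] + 1) / (c[True] + c[False] + 2)   # Laplace-smoothed CPT entry

print(f"P(retreat | high, weak, stormy) = {p_retreat('high', 'weak', 'stormy'):.2f}")
print(f"P(retreat | low, strong, calm)  = {p_retreat('low', 'strong', 'calm'):.2f}")
```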
Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A
2013-11-01
Dual-energy computed tomography (DECT) makes it possible to obtain two basis-material fractions without segmentation: a soft-tissue-equivalent water fraction and a hard-matter-equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). Existing BHA correction approaches either require calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for beam polychromaticity show great potential for producing accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach that allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. The approach is based on a Gaussian noise model with unknown variance assigned directly to the projections, without taking the negative log. Following Bayesian inference, the decomposition fractions and the observation variance are estimated using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem simplifies to a single estimation problem: a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials, although accurate spectrum information about the source-detector system is necessary; when dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For materials between water and bone, separation errors of less than 5% are observed in the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model accounting for the full beam polychromaticity and applied directly to the projections without taking the negative log. Compared to approaches based on linear forward models and to BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
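The nonlinear forward model at the core of this approach can be sketched as the source spectrum integrated against Beer-Lambert attenuation by the water and bone fractions, with a Gaussian data-fit term on the raw projection (no negative log). The spectrum and attenuation curves below are crude placeholders, not calibrated physics.

```python
# Polychromatic forward model and Gaussian data-fit term (sketch).
import numpy as np

E = np.linspace(20, 120, 101)                      # keV grid
spectrum = np.exp(-0.5 * ((E - 60) / 20) ** 2)     # placeholder source spectrum S(E)
spectrum /= spectrum.sum()
mu_water = 0.3 * (30 / E) ** 1.5                   # placeholder attenuation curves
mu_bone = 0.9 * (30 / E) ** 2.0

def forward(a_water: float, a_bone: float) -> float:
    """Expected detector reading for path-integrated fractions (a_w, a_b)."""
    return float(np.sum(spectrum * np.exp(-(mu_water * a_water + mu_bone * a_bone))))

def neg_log_likelihood(a_water, a_bone, y, sigma2):
    # Gaussian fit on the raw projection -- no negative-log preprocessing
    return (y - forward(a_water, a_bone)) ** 2 / (2 * sigma2)

y_meas = forward(2.0, 0.5) * (1 + 0.01 * np.random.default_rng(5).standard_normal())
print(f"cost at truth:  {neg_log_likelihood(2.0, 0.5, y_meas, 1e-4):.4f}")
print(f"cost off truth: {neg_log_likelihood(2.5, 0.2, y_meas, 1e-4):.4f}")
```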
Spatial Modelling of Soil-Transmitted Helminth Infections in Kenya: A Disease Control Planning Tool
Pullan, Rachel L.; Gething, Peter W.; Smith, Jennifer L.; Mwandawiro, Charles S.; Sturrock, Hugh J. W.; Gitonga, Caroline W.; Hay, Simon I.; Brooker, Simon
2011-01-01
Background: Implementation of control of parasitic diseases requires accurate, contemporary maps that provide intervention recommendations at policy-relevant spatial scales. To guide control of soil transmitted helminths (STHs), maps are required of the combined prevalence of infection, indicating where this prevalence exceeds an intervention threshold of 20%. Here we present a new approach for mapping the observed prevalence of STHs, using the example of Kenya in 2009. Methods and Findings: Observed prevalence data for hookworm, Ascaris lumbricoides and Trichuris trichiura were assembled for 106,370 individuals from 945 cross-sectional surveys undertaken between 1974 and 2009. Ecological and climatic covariates were extracted from high-resolution satellite data and matched to survey locations. Bayesian space-time geostatistical models were developed for each species, and were used to interpolate the probability that infection prevalence exceeded the 20% threshold across the country for both 1989 and 2009. Maps for each species were integrated to estimate combined STH prevalence using the law of total probability and incorporating a correction factor to adjust for associations between species. Population census data were combined with risk models and projected to estimate the population at risk and requiring treatment in 2009. In most areas for 2009, there was high certainty that endemicity was below the 20% threshold, with areas of endemicity ≥20% located around the shores of Lake Victoria and on the coast. Comparison of the predicted distributions for 1989 and 2009 show how observed STH prevalence has gradually decreased over time. The model estimated that a total of 2.8 million school-age children live in districts which warrant mass treatment. Conclusions: Bayesian space-time geostatistical models can be used to reliably estimate the combined observed prevalence of STH and suggest that a quarter of Kenya's school-aged children live in areas of high prevalence and warrant mass treatment. As control is successful in reducing infection levels, updated models can be used to refine decision making in helminth control. PMID:21347451
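The species-combination step can be sketched with the law of total probability: under independence, combined prevalence is one minus the product of the per-species non-infection probabilities, and a correction factor (an assumed value here, not the study's) adjusts for positive association between species.

```python
# Combining per-species prevalence surfaces into one STH map (sketch).
import numpy as np

p_hookworm = np.array([0.12, 0.30, 0.05])   # per-pixel species prevalences (toy values)
p_ascaris = np.array([0.08, 0.22, 0.02])
p_trichuris = np.array([0.05, 0.18, 0.01])

independent = 1 - (1 - p_hookworm) * (1 - p_ascaris) * (1 - p_trichuris)
CORRECTION = 0.94                            # assumed adjustment for inter-species association
combined = CORRECTION * independent

needs_mass_treatment = combined >= 0.20      # the 20% intervention threshold
print("combined prevalence:", combined.round(3), "treat:", needs_mass_treatment)
```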
NASA Astrophysics Data System (ADS)
Babcock, C. R.; Finley, A. O.; Andersen, H. E.; Moskal, L. M.; Morton, D. C.; Cook, B.; Nelson, R.
2017-12-01
Upcoming satellite lidar missions, such as GEDI and ICESat-2, are designed to collect laser altimetry data from space for narrow bands along orbital tracts. As a result, lidar metric sets derived from these sources will not have complete spatial coverage. This lack of complete coverage, or sparsity, means traditional regression approaches that consider lidar metrics as explanatory variables (without error) cannot be used to generate wall-to-wall maps of forest inventory variables. We implement a coregionalization framework to jointly model sparsely sampled lidar information and point-referenced forest variable measurements to create wall-to-wall maps with full probabilistic uncertainty quantification of all inputs. We inform the model with USFS Forest Inventory and Analysis (FIA) in-situ forest measurements and GLAS lidar data to spatially predict aboveground forest biomass (AGB) across the contiguous US. We cast our model within a Bayesian hierarchical framework to better model complex space-varying correlation structures among the lidar metrics and FIA data, which yields improved prediction and uncertainty assessment. To circumvent the computational difficulties that arise when fitting complex geostatistical models to massive datasets, we use a Nearest Neighbor Gaussian process (NNGP) prior. Results indicate that a coregionalization modeling approach to leveraging sampled lidar data to improve AGB estimation is effective. Further, fitting the coregionalization model within a Bayesian mode of inference allows for AGB quantification across scales ranging from individual pixel estimates of AGB density to total AGB for the continental US, with uncertainty. The coregionalization framework examined here is directly applicable to future spaceborne lidar acquisitions from GEDI and ICESat-2. Pairing these lidar sources with the extensive FIA forest monitoring plot network using a joint prediction framework, such as the coregionalization model explored here, offers the potential to improve forest AGB accounting certainty and provide maps for post-model-fitting analysis of the spatial distribution of AGB.
High-resolution gravity model of Venus
NASA Technical Reports Server (NTRS)
Reasenberg, R. D.; Goldberg, Z. M.
1992-01-01
The anomalous gravity field of Venus shows high correlation with surface features revealed by radar. We extract gravity models from the Doppler tracking data from the Pioneer Venus Orbiter by means of a two-step process. In the first step, we solve the nonlinear spacecraft state estimation problem using a Kalman filter-smoother. The Kalman filter has been evaluated through simulations. This evaluation and some unusual features of the filter are discussed. In the second step, we perform a geophysical inversion using a linear Bayesian estimator. To allow an unbiased comparison between gravity and topography, we use a simulation technique to smooth and distort the radar topographic data so as to yield maps having the same characteristics as our gravity maps. The maps presented cover 2/3 of the surface of Venus and display the strong topography-gravity correlation previously reported. The topography-gravity scatter plots show two distinct trends.
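The first stage's state estimation can be illustrated with a generic linear Kalman filter update; the sketch below uses a 1-D constant-velocity system with invented noise levels, not the actual orbiter dynamics or Doppler observables.

```python
# One predict/update cycle of a linear Kalman filter (sketch).
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: position, velocity
H = np.array([[1.0, 0.0]])                # we observe position (range) only
Q = 1e-4 * np.eye(2)                      # process noise covariance
R = np.array([[0.25]])                    # measurement noise covariance

def kalman_step(x, P, z):
    x_pred = F @ x                         # predict state
    P_pred = F @ P @ F.T + Q               # predict covariance
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)  # measurement update
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
for z in [0.9, 2.1, 2.8, 4.2, 5.0]:        # noisy range measurements
    x, P = kalman_step(x, P, np.array([z]))
print("estimated position, velocity:", x.round(2))
```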
Mapping Land Cover Types in Amazon Basin Using 1km JERS-1 Mosaic
NASA Technical Reports Server (NTRS)
Saatchi, Sassan S.; Nelson, Bruce; Podest, Erika; Holt, John
2000-01-01
In this paper, the 100 meter JERS-1 Amazon mosaic image was used in a new classifier to generate a 1 km resolution land cover map. The inputs to the classifier were 1 km resolution mean backscatter and seven first-order texture measures derived from the 100 m data by using a 10 x 10 independent sampling window. The classification approach included two interdependent stages: 1) a supervised maximum a posteriori Bayesian approach to classify the mean backscatter image into 5 general land cover categories of forest, savannah, inundated, white sand, and anthropogenic vegetation classes, and 2) a texture measure decision rule approach to further discriminate subcategory classes based on taxonomic information and biomass levels. Fourteen classes were successfully separated at the 1 km scale. The results were verified by examining the accuracy of the approach in comparison with the IBGE and AVHRR 1 km resolution land cover maps.
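The first classification stage can be sketched as a per-pixel Gaussian MAP rule: class-conditional likelihoods on mean backscatter weighted by prior class probabilities. The class statistics and priors below are invented for illustration, not values from the paper.

```python
# Per-pixel maximum a posteriori classification on mean backscatter (sketch).
import numpy as np

classes = ["forest", "savannah", "inundated", "white sand", "anthropogenic"]
means = np.array([-6.5, -11.0, -3.5, -14.0, -9.0])    # class mean backscatter, dB (toy)
stds = np.array([1.0, 1.5, 1.2, 1.0, 1.8])
priors = np.array([0.55, 0.15, 0.10, 0.05, 0.15])

def map_classify(pixels_db: np.ndarray) -> np.ndarray:
    x = pixels_db[:, None]                             # (npix, 1) against (nclass,)
    log_like = -0.5 * ((x - means) / stds) ** 2 - np.log(stds)
    log_post = log_like + np.log(priors)               # MAP: likelihood x prior
    return np.argmax(log_post, axis=1)

pixels = np.array([-6.2, -13.8, -3.9, -10.5])
for db, k in zip(pixels, map_classify(pixels)):
    print(f"{db:6.1f} dB -> {classes[k]}")
```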
Dimensional stability of concrete slabs on grade.
DOT National Transportation Integrated Search
2012-10-01
Drying shrinkage is one of the major causes of cracking in concrete slabs on grade. The moisture difference between the top and bottom surfaces of the slabs causes a dimensional or shrinkage gradient to develop through the depth of the slabs...
Influence of gelatinous fibers on the shrinkage of silver maple
Donald G. Arganbright; Dwight W. Bensend; Floyd G. Manwiller
1970-01-01
The degree of lean was found to have a significant influence on the longitudinal and transverse shrinkage of three soft maple trees. This may be accounted for by differences in cell wall layer thickness and fibril angle.
Drying shrinkage problems in high-plastic clay soils in Oklahoma.
DOT National Transportation Integrated Search
2013-08-01
Longitudinal cracking in pavements due to drying shrinkage of high-plastic subgrade soils has been a major problem in Oklahoma. Annual maintenance to seal and repair these distress problems costs the state a significant amount of money. The long...
Mitigation strategies for early-age shrinkage cracking in bridge decks.
DOT National Transportation Integrated Search
2010-04-01
Early-age shrinkage cracking has been observed in many concrete bridge decks in Washington State and elsewhere around the U.S. The cracking increases the effects of freeze-thaw damage, spalling, and corrosion of steel reinforcement, thus resulting in...
ERIC Educational Resources Information Center
Betker, Edward
1998-01-01
Looks at Ethylene Propylene Diene Terpolymer rubber roof membranes and the potential problems associated with this material's shrinkage. Discusses how long such a roof should perform and issues affecting repair or replacement. Recommends that a building's function be considered in any roofing decision. (RJM)
Shrinkage Prediction for the Investment Casting of Stainless Steels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabau, Adrian S
2007-01-01
In this study, alloy shrinkage factors were obtained for the investment casting of 17-4PH stainless steel parts. For the investment casting process, unfilled wax was used for the patterns and fused silica with a zircon prime coat for the shell molds. Dimensions of the die tooling, wax pattern, and casting were measured using a coordinate measuring machine in order to obtain the actual tooling allowances. The alloy dimensions were obtained from numerical simulation results of the solidification, heat transfer, and deformation phenomena. The numerical simulation results for the shrinkage factors were compared with experimental results.
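The tooling-allowance bookkeeping reduces to linear shrinkage percentages between die, wax-pattern, and casting dimensions; a tiny helper with invented CMM readings:

```python
# Stage-wise linear shrinkage from measured dimensions (sketch).
def shrink_pct(dim_from: float, dim_to: float) -> float:
    return (dim_from - dim_to) / dim_from * 100.0

die, wax, casting = 100.00, 99.10, 97.30   # mm, invented CMM readings
print(f"wax shrinkage:   {shrink_pct(die, wax):.2f}%")
print(f"alloy shrinkage: {shrink_pct(wax, casting):.2f}%")
```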
Nam, Jeongsoo; Kim, Gyuyong; Yoo, Jaechul; Choe, Gyeongcheol; Kim, Hongseop; Choi, Hyeonggil; Kim, Youngduck
2016-02-26
This paper presents an experimental study conducted to investigate the effect of fiber reinforcement on the mechanical properties and shrinkage cracking of recycled fine aggregate concrete (RFAC) with two types of fiber: polyvinyl alcohol (PVA) and nylon. A small fiber volume fraction (0.05% or 0.1%) of PVA or nylon fibers was used in the RFAC for optimum efficiency at minimum quantity. Additionally, to make a comparative evaluation of the mechanical properties and shrinkage cracking, we examined natural fine aggregate concrete as well. The test results revealed that the addition of fibers and fine aggregates plays an important role in improving the mechanical performance of the investigated concrete specimens as well as controlling their cracking behavior. The mechanical properties such as compressive strength, splitting tensile strength, and flexural strength of fiber-reinforced RFAC were slightly better than those of non-fiber-reinforced RFAC. The shrinkage cracking behavior was examined using plat-ring-type and slab-type tests. The fiber-reinforced RFAC showed a greater reduction in surface cracks than non-fiber-reinforced concrete. The addition of fibers at a small volume fraction in RFAC is more effective for drying shrinkage cracks than for improving mechanical performance.
Rüttermann, Stefan; Krüger, Sören; Raab, Wolfgang H-M; Janda, Ralf
2007-10-01
To investigate the polymerization shrinkage and hygroscopic expansion of contemporary posterior resin-based filling materials. The densities of SureFil (SU), CeramXMono (CM), Clearfil AP-X (CF), Solitaire 2 (SO), TetricEvoCeram (TE), and Filtek P60 (FT) were measured using the Archimedes principle prior to and 15 min after curing for 20, 40 and 60 s, and after 1 h, 24 h, 7 d, and 30 d storage at 37 °C in water. Volumetric changes (ΔV) in percent after polymerization and after each storage period in water were calculated from the changes in density. Water sorption and solubility were determined after 30 d for all specimens and their curing times. Two-way ANOVA was calculated for shrinkage and repeated-measures ANOVA for hygroscopic expansion (p<0.05). ΔV depended on filler load but not on curing time (SU ≈ −2.0%, CM ≈ −2.6%, CF ≈ −2.1%, SO ≈ −3.3%, TE ≈ −1.7%, FT ≈ −1.8%). Hygroscopic expansion depended on water sorption and solubility. Except for SU, all materials showed ΔV ≈ +1% after water storage. Polymerization shrinkage depended on the type of resin-based filling material but not on curing time. Shrinkage was not compensated by hygroscopic expansion.
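Since the volumetric changes above are derived from density at constant mass (V = m/ρ), the calculation reduces to a ratio of densities. A minimal sketch, with illustrative densities chosen only to reproduce the reported order of magnitude:

```python
# Volumetric change from density change at constant mass: V = m / rho, so
# dV% = (rho_before / rho_after - 1) * 100. Densities are illustrative only.
def delta_v_percent(rho_before, rho_after):
    return (rho_before / rho_after - 1.0) * 100.0

rho_uncured, rho_cured = 2.050, 2.093   # g/cm^3, hypothetical values
print(f"polymerization shrinkage: {delta_v_percent(rho_uncured, rho_cured):.2f}%")
# a density decrease during water storage shows up as positive (expansion)
print(f"hygroscopic expansion:    {delta_v_percent(2.093, 2.072):.2f}%")
```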
SURE Estimates for a Heteroscedastic Hierarchical Model
Xie, Xianchao; Kou, S. C.; Brown, Lawrence D.
2014-01-01
Hierarchical models are extensively studied and widely used in statistics and many other scientific areas. They provide an effective tool for combining information from similar resources and achieving partial pooling of inference. Since the seminal work by James and Stein (1961) and Stein (1962), shrinkage estimation has become one major focus for hierarchical models. For the homoscedastic normal model, it is well known that shrinkage estimators, especially the James-Stein estimator, have good risk properties. The heteroscedastic model, though more appropriate for practical applications, is less well studied, and it is unclear what types of shrinkage estimators are superior in terms of the risk. We propose in this paper a class of shrinkage estimators based on Stein’s unbiased estimate of risk (SURE). We study asymptotic properties of various common estimators as the number of means to be estimated grows (p → ∞). We establish the asymptotic optimality property for the SURE estimators. We then extend our construction to create a class of semi-parametric shrinkage estimators and establish corresponding asymptotic optimality results. We emphasize that though the form of our SURE estimators is partially obtained through a normal model at the sampling level, their optimality properties do not heavily depend on such distributional assumptions. We apply the methods to two real data sets and obtain encouraging results. PMID:25301976
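A minimal numerical sketch of the SURE idea for the heteroscedastic normal-means problem X_i ~ N(θ_i, A_i): shrink each observation toward a common point b with variance-dependent weights, choosing the hyperparameter λ by minimizing Stein's unbiased risk estimate. The grid search and parameter names are illustrative, not the authors' implementation:

```python
import numpy as np

def sure(lmbda, x, A, b=0.0):
    # Unbiased risk estimate for the shrinkage rule
    # theta_hat_i = x_i - c_i * (x_i - b), with c_i = A_i / (lmbda + A_i).
    c = A / (lmbda + A)
    return np.sum(A) + np.sum(c**2 * (x - b)**2) - 2.0 * np.sum(A * c)

def sure_estimate(x, A, b=0.0, grid=None):
    # Pick the shrinkage hyperparameter by minimizing SURE on a grid.
    if grid is None:
        grid = np.logspace(-4, 4, 200)
    risks = [sure(l, x, A, b) for l in grid]
    lmbda = grid[int(np.argmin(risks))]
    c = A / (lmbda + A)
    return x - c * (x - b), lmbda

# Toy example: p means with known, unequal variances.
rng = np.random.default_rng(0)
p = 500
theta = rng.normal(0.0, 1.0, p)
A = rng.uniform(0.1, 2.0, p)
x = theta + rng.normal(0.0, np.sqrt(A))
theta_hat, lmbda = sure_estimate(x, A)
print("lambda:", round(lmbda, 3))
print("MSE shrinkage:", round(float(np.mean((theta_hat - theta) ** 2)), 3))
print("MSE raw MLE  :", round(float(np.mean((x - theta) ** 2)), 3))
```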
Nguyen, Thanh Khuong; Khalloufi, Seddik; Mondor, Martin; Ratti, Cristina
2018-01-01
In the present work, the impact of glass transition on shrinkage of non-cellular food systems (NCFS) during air-drying is assessed from experimental data and from the interpretation of a 'shrinkage' function involved in a mathematical model. Two NCFS made from a mixture of water/maltodextrin/agar (w/w/w: 1/0.15/0.015) were created out of maltodextrins with dextrose equivalent 19 (MD19) or 36 (MD36). The NCFS made with MD19 had a 30 °C higher Tg than those with MD36. This indicated that, during drying, the NCFS with MD19 would pass from the rubbery to the glassy state sooner than the NCFS with MD36, for which the glass transition only happens close to the end of drying. For the two NCFS, porosity and volume reduction as a function of moisture content were captured with high accuracy by the mathematical models previously developed. No significant differences in porosity or in maximum shrinkage between the two samples during drying were observed. Likewise, no change in the slope of the shrinkage curve as a function of moisture content was observed. These results indicate that glass transition alone is not a determinant factor in changes of porosity or volume during air-drying. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sant, Gaurav Niteen
The increased use of high-performance, low water-to-cement (w/c) ratio concretes has led to increased occurrences of early-age shrinkage cracking in civil engineering structures. To reduce the magnitude of early-age shrinkage and the potential for cracking, mitigation strategies using shrinkage reducing admixtures (SRAs), saturated lightweight aggregates, expansive cements and extended moist curing durations in construction have been recommended. However, to appropriately utilize these strategies, it is important to have a complete understanding of the driving forces of early-age volume change and how these methods work from a materials perspective to reduce shrinkage. This dissertation uses a first-principles approach to (1) understand the mechanism by which shrinkage reducing admixtures (SRAs) generate an expansion and mitigate shrinkage at early ages, (2) quantify the influence of a CaO-based expansive additive in reducing unrestrained shrinkage, residual stress development and the cracking potential at early ages, and (3) quantify the influence of SRAs and cement hydration (pore structure refinement) on the reduction induced in the fluid transport properties of the material. The effects of SRAs are described in terms of inducing autogenous expansions in cement pastes at early ages. An evaluation comprising measurements of autogenous deformation, x-ray diffraction (Rietveld analysis), pore solution and thermogravimetric analysis, and electron microscopy is performed to understand the chemical nature and physical effects of the expansion. Thermodynamic calculations performed on the measured liquid-phase compositions indicate that the SRA produces elevated Portlandite supersaturations in the pore solution, which result in crystallization-stress-driven expansions. The thermodynamic calculations are supported by deformation measurements performed on cement pastes mixed in solutions saturated with Portlandite or containing additional sodium hydroxide. Further, to quantify the influence of temperature on volume changes in SRA-containing materials, deformation measurements are performed at different temperatures. The results indicate that maturity transformations are incapable of simulating volume changes over any temperature regime, owing to the influence of temperature on salt solubility and pore solution composition, crystallization stresses and self-desiccation. The performance of a CaO-based expansive additive is evaluated over a range of additive concentrations and curing conditions to quantify the reduction in restrained and unrestrained volume changes effected in low w/c cement pastes. The results suggest that, under unrestrained sealed conditions, the additive generates an expansion and reduces the magnitude of total shrinkage experienced by the material. However, the extent of drying shrinkage developed is noted to be similar in all systems and independent of the additive dosage. Under restrained sealed conditions, the additive induces a significant compressive stress which delays tensile stress development in the system. However, a critical additive concentration (around four percent) needs to be exceeded to appreciably reduce the risk of cracking at early ages. The influence of SRAs is quantified in terms of their effects on fluid transport in cement-based materials.
The change in the cement paste's pore solution properties, i.e., the surface tension and fluid viscosity, induced by the addition of an SRA is observed to depress the fluid-sorption and wetting moisture diffusion coefficients, with the depression being a function of the SRA concentration. The experimental results are compared to analytical descriptions of water sorption, and a good correlation is observed. These results allow the change in pore-solution and fluid-transport properties to be incorporated from a fundamental perspective in models which aim to describe the service life of structures. Several experimental techniques, such as chemical shrinkage, low-temperature calorimetry and electrical impedance spectroscopy, are evaluated in terms of their suitability for identifying capillary porosity depercolation in cement pastes. The experiments indicate (1) that there exists a capillary porosity depercolation threshold at around 20% capillary porosity in cement pastes and (2) that low-temperature calorimetry is not suitable for detecting porosity depercolation in cement pastes containing SRAs. Finally, the influence of porosity depercolation is demonstrated in terms of the reduction effected in the transport properties (i.e., the fluid-sorption coefficient) of the material, as quantified using x-ray attenuation measurements. The study relates the connectivity of the pore structure to the fluid transport response, providing insights related to the development of curing technologies and the specification of wet curing regimes during construction.
NASA Astrophysics Data System (ADS)
Garcia Urquia, E. L.; Braun, A.; Yamagishi, H.
2016-12-01
Tegucigalpa, the capital city of Honduras, experiences rainfall-induced landslides on a yearly basis. The high precipitation regime and the rugged topography on which the city has been built combine with the lack of a proper urban expansion plan to contribute to the occurrence of landslides during the rainy season. Thousands of inhabitants live at risk of losing their belongings owing to the construction of precarious shelters in landslide-prone areas on mountainous terrain and next to riverbanks. The city therefore needs landslide susceptibility and hazard maps to aid in the regulation of future development. Major challenges in the context of highly dynamic urbanizing areas are the overlap of natural and anthropogenic slope-destabilizing factors, as well as the availability and accuracy of data. Data-driven multivariate techniques have proven powerful in discovering interrelations between factors, identifying important factors in large datasets, capturing non-linear problems and coping with noisy and incomplete data. This analysis focuses on the creation of a landslide susceptibility map using different methods from the field of data mining: Artificial Neural Networks (ANN), Bayesian Networks (BN) and Decision Trees (DT). The input dataset of the study contains geomorphological and hydrological factors derived from a digital elevation model with a 10 m resolution, lithological factors derived from a geological map, and anthropogenic factors, such as information on the development stage of the neighborhoods in Tegucigalpa and road density. Moreover, a landslide inventory map developed in 2014 through aerial photo interpretation was used as the target variable in the analysis. The analysis covers an area of roughly 100 km2, of which 8.95 km2 are occupied by landslides. In a first step, the dataset was explored by assessing and improving the data quality, identifying unimportant variables and finding interrelations. Then, based on a training partition of the dataset, the ANN, BN and DT were optimized for the prediction of landslides. The predictive power and ability to generalize of the resulting models were assessed on a test partition and evaluated using success rate curves, skill scores and the spatial plausibility of the prediction.
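A minimal sketch of the supervised step under stated assumptions: synthetic stand-ins for the DEM-derived and anthropogenic factors, a scikit-learn decision tree, and AUC on a held-out test partition as a simple skill score. The factor names and generating model are hypothetical:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
# Synthetic stand-ins for DEM-derived and anthropogenic factors.
slope = rng.uniform(0, 45, n)            # degrees
road_density = rng.uniform(0, 5, n)      # km / km^2
curvature = rng.normal(0, 1, n)
# Hypothetical landslide occurrence, more likely on steep, road-cut slopes.
p = 1 / (1 + np.exp(-(0.08 * slope + 0.4 * road_density + 0.3 * curvature - 4)))
y = rng.random(n) < p
X = np.column_stack([slope, road_density, curvature])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
# Susceptibility scores for the test partition; AUC as a simple skill score.
scores = clf.predict_proba(X_te)[:, 1]
print("test AUC:", round(roc_auc_score(y_te, scores), 3))
```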
NASA Astrophysics Data System (ADS)
Karmakar, Mampi; Maiti, Saumen; Singh, Amrita; Ojha, Maheswar; Maity, Bhabani Sankar
2017-07-01
Modeling and classification of subsurface lithology is very important for understanding the evolution of the earth system. However, precise classification and mapping of lithology using a single framework are difficult owing to the complexity and nonlinearity of the problem and the limited core sample information available. Here, we implement a joint approach by combining unsupervised and supervised methods in a single framework for better classification and mapping of rock types. In the unsupervised method, we use principal component analysis (PCA), K-means cluster analysis, dendrogram analysis, Fuzzy C-means (FCM) cluster analysis and the self-organizing map (SOM). In the supervised method, we use Bayesian neural networks (BNN) optimized by the Hybrid Monte Carlo (BNN-HMC) and the scaled conjugate gradient (BNN-SCG) techniques. We use P-wave velocity, density, neutron porosity, resistivity and gamma ray logs of well U1343E of the Integrated Ocean Drilling Program (IODP) Expedition 323 in the Bering Sea slope region. While the SOM algorithm allows us to visualize the clustering results in the spatial domain, the combined classification schemes (supervised and unsupervised) uncover the different patterns of lithology, such as clayey-silt, diatom-silt and silty-clay, from an un-cored section of the drilled hole. In addition, the BNN approach is capable of estimating uncertainty in the predictive modeling of the three rock types over the entire lithology section at site U1343. The alternating succession of clayey-silt, diatom-silt and silty-clay may be representative of crustal inhomogeneity in general and thus could be a basis for detailed study related to the productivity of methane gas in oceans worldwide. Moreover, at 530 m depth below seafloor (DSF), the transition from Pliocene to Pleistocene could be linked to the lithological alternation between the clayey-silt and the diatom-silt. The present results could provide the basis for detailed study to gain deeper insight into the Bering Sea's sediment deposition and sequence.
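A minimal sketch of the unsupervised leg of such a workflow: standardize the five logs, inspect structure with PCA, and partition samples with K-means. The log values and cluster centers below are synthetic placeholders, not IODP data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Synthetic stand-ins for Vp, density, neutron porosity, resistivity and
# gamma ray, drawn from three overlapping "lithology" populations.
centers = np.array([[1.60, 1.55, 0.62, 1.0, 55],   # clayey-silt (placeholder)
                    [1.55, 1.45, 0.68, 0.8, 40],   # diatom-silt (placeholder)
                    [1.65, 1.60, 0.58, 1.2, 65]])  # silty-clay (placeholder)
logs = np.vstack([c + rng.normal(0, [0.03, 0.04, 0.03, 0.1, 4], (400, 5))
                  for c in centers])

Z = StandardScaler().fit_transform(logs)       # put logs on a common scale
pcs = PCA(n_components=2).fit_transform(Z)     # unsupervised structure view
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
for k in range(3):
    print(f"cluster {k}: n = {int(np.sum(labels == k))}")
```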
Le Bras, Ronan J; Kuzma, Heidi; Sucic, Victor; Bokelmann, Götz
2016-05-01
A notable sequence of calls, spanning several days in January 2003, was encountered in the central part of the Indian Ocean on a hydrophone triplet recording acoustic data at a 250 Hz sampling rate. This paper presents signal processing methods applied to the waveform data to detect and group the recorded signals and to extract amplitude and bearing estimates for them. An approximate location for the source of the sequence of calls is inferred from the extracted waveform features. As the source approaches the hydrophone triplet, the source level (SL) of the calls is estimated at 187 ± 6 dB re 1 μPa at 1 m in the 15-60 Hz frequency range. The calls are attributed to a subgroup of blue whales, Balaenoptera musculus, with a characteristic acoustic signature. A Bayesian location method using probabilistic models for bearing and amplitude is demonstrated on the call sequence. The method is applied to the case of detection at a single triad of hydrophones and results in a probability distribution map for the origin of the calls. It can be extended to detections at multiple triads and, because of the Bayesian formulation, additional modeling complexity can be built in as needed.
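A minimal sketch of the single-triad idea: score every candidate origin on a grid by how well it explains one measured bearing and received level, assuming spherical spreading (TL = 20 log10 r) and Gaussian errors. All numbers are illustrative except the 187 dB source level quoted above:

```python
import numpy as np

# Single hydrophone triad at the origin; one call with a measured bearing
# and received level (RL). The spreading law and error scales are assumptions.
bearing_obs = np.deg2rad(40.0)   # measured bearing, radians from north
rl_obs = 120.0                   # received level, dB re 1 uPa (illustrative)
sl = 187.0                       # source level, dB re 1 uPa at 1 m
sigma_b, sigma_a = np.deg2rad(5.0), 6.0

# Cartesian grid of candidate source positions (km); origin excluded.
x, y = np.meshgrid(np.linspace(-200, 200, 401), np.linspace(-200, 200, 401))
r_m = np.hypot(x, y) * 1000.0
r_m[r_m == 0] = np.nan
bearing = np.arctan2(x, y)                               # compass bearing
dbear = np.angle(np.exp(1j * (bearing - bearing_obs)))   # wrapped residual
rl_pred = sl - 20.0 * np.log10(r_m)                      # spherical spreading

log_post = (-0.5 * (dbear / sigma_b) ** 2
            - 0.5 * ((rl_obs - rl_pred) / sigma_a) ** 2)
post = np.exp(log_post - np.nanmax(log_post))
post /= np.nansum(post)                                  # probability map
i, j = np.unravel_index(np.nanargmax(post), post.shape)
print(f"MAP origin: x = {x[i, j]:.0f} km, y = {y[i, j]:.0f} km")
```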
NASA Astrophysics Data System (ADS)
Yin, Ping; Mu, Lan; Madden, Marguerite; Vena, John E.
2014-10-01
Lung cancer is the second most commonly diagnosed cancer in both men and women in Georgia, USA. However, the spatio-temporal patterns of lung cancer risk in Georgia have not been fully studied. Hierarchical Bayesian models are used here to explore the spatio-temporal patterns of lung cancer incidence risk by race and gender in Georgia for the period of 2000-2007. With the census tract level as the spatial scale and the 2-year period aggregation as the temporal scale, we compare a total of seven Bayesian spatio-temporal models including two under a separate modeling framework and five under a joint modeling framework. One joint model outperforms others based on the deviance information criterion. Results show that the northwest region of Georgia has consistently high lung cancer incidence risk for all population groups during the study period. In addition, there are inverse relationships between the socioeconomic status and the lung cancer incidence risk among all Georgian population groups, and the relationships in males are stronger than those in females. By mapping more reliable variations in lung cancer incidence risk at a relatively fine spatio-temporal scale for different Georgian population groups, our study aims to better support healthcare performance assessment, etiological hypothesis generation, and health policy making.
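Model comparison above uses the deviance information criterion; for a toy normal model and a set of posterior draws, DIC can be computed as below (DIC = D̄ + pD with pD = D̄ − D(θ̄); the draws here are simulated stand-ins, not MCMC output from the paper's models):

```python
import numpy as np

# DIC for a toy normal model from posterior draws of (mu, sigma).
rng = np.random.default_rng(3)
y = rng.normal(1.0, 2.0, 200)                       # stand-in data

def deviance(mu, sigma):
    # D = -2 log L for i.i.d. normal observations.
    return -2.0 * np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                         - (y - mu) ** 2 / (2 * sigma**2))

# Pretend these are MCMC draws from the posterior (illustrative only).
mu_s = rng.normal(y.mean(), y.std() / np.sqrt(len(y)), 2000)
sig_s = np.abs(rng.normal(y.std(), 0.1, 2000))

d_bar = np.mean([deviance(m, s) for m, s in zip(mu_s, sig_s)])
d_hat = deviance(mu_s.mean(), sig_s.mean())         # deviance at post. mean
p_d = d_bar - d_hat                                 # effective n. of params
print(f"DIC = {d_hat + 2 * p_d:.1f} (pD = {p_d:.2f})")
```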
Kim, D; Burge, J; Lane, T; Pearlson, G D; Kiehl, K A; Calhoun, V D
2008-10-01
We utilized a discrete dynamic Bayesian network (dDBN) approach (Burge, J., Lane, T., Link, H., Qiu, S., Clark, V.P., 2007. Discrete dynamic Bayesian network analysis of fMRI data. Hum Brain Mapp.) to determine differences in brain regions between patients with schizophrenia and healthy controls on a measure of effective connectivity, termed the approximate conditional likelihood score (ACL) (Burge, J., Lane, T., 2005. Learning Class-Discriminative Dynamic Bayesian Networks. Proceedings of the International Conference on Machine Learning, Bonn, Germany, pp. 97-104.). The ACL score represents a class-discriminative measure of effective connectivity by measuring the relative likelihood of the correlation between brain regions in one group versus another. The algorithm is capable of finding non-linear relationships between brain regions because it uses discrete rather than continuous values and attempts to model temporal relationships with a first-order Markov and stationary assumption constraint (Papoulis, A., 1991. Probability, random variables, and stochastic processes. McGraw-Hill, New York.). Since Bayesian networks are overly sensitive to noisy data, we introduced an independent component analysis (ICA) filtering approach that attempted to reduce the noise found in fMRI data by unmixing the raw datasets into a set of independent spatial component maps. Components that represented noise were removed and the remaining components reconstructed into the dimensions of the original fMRI datasets. We applied the dDBN algorithm to a group of 35 patients with schizophrenia and 35 matched healthy controls using an ICA filtered and unfiltered approach. We determined that filtering the data significantly improved the magnitude of the ACL score. Patients showed the greatest ACL scores in several regions, most markedly the cerebellar vermis and hemispheres. Our findings suggest that schizophrenia patients exhibit weaker connectivity than healthy controls in multiple regions, including bilateral temporal, frontal, and cerebellar regions during an auditory paradigm.
Boulais, Christophe; Wacker, Ron; Augustin, Jean-Christophe; Cheikh, Mohamed Hedi Ben; Peladan, Fabrice
2011-07-01
Mycobacterium avium subsp. paratuberculosis (MAP) is the causal agent of paratuberculosis (Johne's disease) in cattle and other farm ruminants. The potential role of MAP in Crohn's disease in humans and the contribution of dairy products to human exposure to MAP continue to be the subject of scientific debate. The occurrence of MAP in bulk raw milk from dairy herds was assessed using a stochastic modeling approach. Raw milk samples were collected from bulk tanks in dairy plants and tested for the presence of MAP. Results from this analytical screening were used in a Bayesian network to update the model prediction. Of the 83 raw milk samples tested, 4 were positive for MAP by culture and PCR. We estimated that the level of MAP in bulk tanks ranged from 0 CFU/ml for the 2.5th percentile to 65 CFU/ml for the 97.5th percentile, with 95% credibility intervals of [0, 0] and [16, 326], respectively. The model was used to evaluate the effect of measures aimed at reducing the occurrence of MAP in raw milk. Reducing the prevalence of paratuberculosis has less of an effect on the occurrence of MAP in bulk raw milk than does managing clinically infected animals through good farming practices. Copyright © International Association for Food Protection
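The simplest analogue of using the 4/83 screening result to update a prior is a beta-binomial update on the proportion of MAP-positive bulk tanks; the uniform Beta(1, 1) prior is an assumption of this sketch, and the paper's Bayesian network model is considerably richer:

```python
from scipy.stats import beta

# Beta-binomial update for the proportion of MAP-positive bulk tanks.
# Beta(1, 1) prior is an assumption; the paper used a stochastic model
# updated through a Bayesian network, of which this is only the simplest
# analogue.
a0, b0 = 1.0, 1.0
n, k = 83, 4                        # samples tested / culture-PCR positive
a, b = a0 + k, b0 + (n - k)
print(f"posterior mean prevalence: {a / (a + b):.3f}")
print(f"95% credible interval: [{beta.ppf(0.025, a, b):.3f}, "
      f"{beta.ppf(0.975, a, b):.3f}]")
```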
Mapping malaria risk among children in Côte d'Ivoire using Bayesian geo-statistical models.
Raso, Giovanna; Schur, Nadine; Utzinger, Jürg; Koudou, Benjamin G; Tchicaya, Emile S; Rohner, Fabian; N'goran, Eliézer K; Silué, Kigbafori D; Matthys, Barbara; Assi, Serge; Tanner, Marcel; Vounatsou, Penelope
2012-05-09
In Côte d'Ivoire, an estimated 767,000 disability-adjusted life years are due to malaria, placing the country 14th with regard to the global burden of malaria. Risk maps are important to guide control interventions, and hence, the aim of this study was to predict the geographical distribution of malaria infection risk in children aged <16 years in Côte d'Ivoire at high spatial resolution. Using different data sources, a systematic review was carried out to compile and geo-reference survey data on Plasmodium spp. infection prevalence in Côte d'Ivoire, focusing on children aged <16 years. The period from 1988 to 2007 was covered. A suite of Bayesian geo-statistical logistic regression models was fitted to analyse malaria risk. Non-spatial models with and without exchangeable random effect parameters were compared to stationary and non-stationary spatial models. Non-stationarity was modelled assuming that the underlying spatial process is a mixture of separate stationary processes in each ecological zone. The best fitting model based on the deviance information criterion was used to predict Plasmodium spp. infection risk for the entire Côte d'Ivoire, including uncertainty. Overall, 235 data points at 170 unique survey locations with malaria prevalence data for individuals aged <16 years were extracted. Most data points (n = 182, 77.4%) were collected between 2000 and 2007. A Bayesian non-stationary regression model showed the best fit, with annualized rainfall and maximum land surface temperature identified as significant environmental covariates. This model was used to predict malaria infection risk at non-sampled locations. High-risk areas were mainly found in the north-central and western areas, while relatively low-risk areas were located in the north at the country border, in the north-east, in the south-east around Abidjan, and in the central-west between two high prevalence areas. The malaria risk map at high spatial resolution gives an important overview of the geographical distribution of the disease in Côte d'Ivoire. It is a useful tool for the national malaria control programme and can be utilized for spatial targeting of control interventions and rational resource allocation.
XMAP310: A Xenopus Rescue-promoting Factor Localized to the Mitotic Spindle
Andersen, Søren S.L.; Karsenti, Eric
1997-01-01
To understand the role of microtubule-associated proteins (MAPs) in the regulation of microtubule (MT) dynamics we have characterized MAPs prepared from Xenopus laevis eggs (Andersen, S.S.L., B. Buendia, J.E. Domínguez, A. Sawyer, and E. Karsenti. 1994. J. Cell Biol. 127:1289–1299). Here we report on the purification and characterization of a 310-kD MAP (XMAP310) that localizes to the nucleus in interphase and to mitotic spindle MTs in mitosis. XMAP310 is present in eggs, oocytes, a Xenopus tissue culture cell line, testis, and brain. We have purified XMAP310 to homogeneity from egg extracts. The purified protein cross-links pure MTs. Analysis of the effect of this protein on MT dynamics by time-lapse video microscopy has shown that it increases the rescue frequency 5–10-fold and decreases the shrinkage rate twofold. It has no effect on the growth rate or the catastrophe frequency. Microsequencing data suggest that XMAP230 and XMAP310 are novel MAPs. Although the three Xenopus MAPs characterized so far, XMAP215 (Vasquez, R.J., D.L. Gard, and L. Cassimeris. 1994. J. Cell Biol. 127:985–993), XMAP230, and XMAP310 are localized to the mitotic spindle, they have distinct effects on MT dynamics. While XMAP215 promotes rapid MT growth, XMAP230 decreases the catastrophe frequency and XMAP310 increases the rescue frequency. This may have important implications for the regulation of MT dynamics during spindle morphogenesis and chromosome segregation. PMID:9362515
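A minimal two-state (growth/shrinkage) dynamic-instability simulation shows how the reported effects would play out: raising the rescue frequency several-fold and halving the shrinkage rate markedly lengthens microtubules. All rate constants below are illustrative, not measured values from this study:

```python
import numpy as np

# Two-state dynamic instability model (Dogterom-Leibler style); parameters
# are illustrative, with XMAP310 mimicked as a ~7x rescue-frequency increase
# and a 2x slower shrinkage, per the effects reported above.
def simulate(v_g, v_s, f_cat, f_res, t_end=600.0, dt=0.1, seed=0):
    rng = np.random.default_rng(seed)
    length, growing, lengths = 0.0, True, []
    for _ in range(int(t_end / dt)):
        if growing:
            length += v_g * dt
            if rng.random() < f_cat * dt:      # catastrophe
                growing = False
        else:
            length = max(0.0, length - v_s * dt)
            if rng.random() < f_res * dt or length == 0.0:  # rescue
                growing = True
        lengths.append(length)
    return np.array(lengths)

control = simulate(v_g=0.03, v_s=0.30, f_cat=0.005, f_res=0.005)  # um/s, 1/s
with_map = simulate(v_g=0.03, v_s=0.15, f_cat=0.005, f_res=0.035)
print(f"mean length, control:  {control.mean():.2f} um")
print(f"mean length, +XMAP310: {with_map.mean():.2f} um")
```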
Transdimensional, hierarchical, Bayesian inversion of ambient seismic noise: Australia
NASA Astrophysics Data System (ADS)
Crowder, E.; Rawlinson, N.; Cornwell, D. G.
2017-12-01
We present models of crustal velocity structure in southeastern Australia obtained with a transdimensional, hierarchical Bayesian inversion approach applied to long-duration ambient noise cross-correlations. The study area of SE Australia is thought to represent the eastern margin of Gondwana. Conflicting tectonic models have been proposed to explain the formation of eastern Gondwana and the enigmatic geological relationships in Bass Strait, which separates Tasmania and the mainland. A geologically complex area of crustal accretion, Bass Strait may contain part of an exotic continental block entrained between colliding crusts. Ambient noise data recorded by an array of 24 seismometers are used to produce a high-resolution, 3D shear wave velocity model of Bass Strait. Phase velocity maps in the period range 2-30 s are produced and subsequently inverted for 3D shear wave velocity structure. The transdimensional, hierarchical Bayesian inversion technique proves far superior to linearised inversion: the model parameterization adapts dynamically during the inversion, implicitly controlled by the data, and noise is treated as an inversion unknown. The resulting shear wave velocity model shows three sedimentary basins in Bass Strait constrained by slow shear velocities (2.4-2.9 km/s) at 2-10 km depth. These failed rift basins from the breakup of Australia-Antarctica appear to overlie thinned crust, where typical mantle velocities of 3.8-4.0 km/s occur at depths greater than 20 km. High shear wave velocities (3.7-3.8 km/s) in our new model also match well with regions of high magnetic and gravity anomalies. Furthermore, we use both Rayleigh and Love wave phase data to construct Vsv and Vsh maps, which are used to estimate crustal radial anisotropy in Bass Strait. We interpret the structures delineated by our velocity models as supporting the presence and extent of an exotic Precambrian micro-continent (the Selwyn Block) that was most likely entrained during crustal accretion.
Brenner, Darren R.; Amos, Christopher I.; Brhane, Yonathan; Timofeeva, Maria N.; Caporaso, Neil; Wang, Yufei; Christiani, David C.; Bickeböller, Heike; Yang, Ping; Albanes, Demetrius; Stevens, Victoria L.; Gapstur, Susan; McKay, James; Boffetta, Paolo; Zaridze, David; Szeszenia-Dabrowska, Neonilia; Lissowska, Jolanta; Rudnai, Peter; Fabianova, Eleonora; Mates, Dana; Bencko, Vladimir; Foretova, Lenka; Janout, Vladimir; Krokan, Hans E.; Skorpen, Frank; Gabrielsen, Maiken E.; Vatten, Lars; Njølstad, Inger; Chen, Chu; Goodman, Gary; Lathrop, Mark; Vooder, Tõnu; Välk, Kristjan; Nelis, Mari; Metspalu, Andres; Broderick, Peter; Eisen, Timothy; Wu, Xifeng; Zhang, Di; Chen, Wei; Spitz, Margaret R.; Wei, Yongyue; Su, Li; Xie, Dong; She, Jun; Matsuo, Keitaro; Matsuda, Fumihiko; Ito, Hidemi; Risch, Angela; Heinrich, Joachim; Rosenberger, Albert; Muley, Thomas; Dienemann, Hendrik; Field, John K.; Raji, Olaide; Chen, Ying; Gosney, John; Liloglou, Triantafillos; Davies, Michael P.A.; Marcus, Michael; McLaughlin, John; Orlow, Irene; Han, Younghun; Li, Yafang; Zong, Xuchen; Johansson, Mattias; Liu, Geoffrey; Tworoger, Shelley S.; Le Marchand, Loic; Henderson, Brian E.; Wilkens, Lynne R.; Dai, Juncheng; Shen, Hongbing; Houlston, Richard S.; Landi, Maria T.; Brennan, Paul; Hung, Rayjean J.
2015-01-01
Large-scale genome-wide association studies (GWAS) have likely uncovered all common variants at the GWAS significance level. Additional variants within the suggestive range (0.0001 > P > 5×10−8) are, however, still of interest for identifying causal associations. This analysis aimed to apply novel variant prioritization approaches to identify additional lung cancer variants that may not reach the GWAS level. Effects were combined across studies with a total of 33456 controls and 6756 adenocarcinoma (AC; 13 studies), 5061 squamous cell carcinoma (SCC; 12 studies) and 2216 small cell lung cancer cases (9 studies). Based on prior information such as variant physical properties and functional significance, we applied stratified false discovery rates, hierarchical modeling and Bayesian false discovery probabilities for variant prioritization. We conducted a fine mapping analysis as validation of our methods by examining top-ranking novel variants in six independent populations with a total of 3128 cases and 2966 controls. Three novel loci in the suggestive range were identified based on our Bayesian framework analyses: KCNIP4 at 4p15.2 (rs6448050, P = 4.6×10−7) and MTMR2 at 11q21 (rs10501831, P = 3.1×10−6) with SCC, as well as GAREM at 18q12.1 (rs11662168, P = 3.4×10−7) with AC. Use of our prioritization methods validated two of the top three loci associated with SCC (P = 1.05×10−4 for KCNIP4, represented by rs9799795) and AC (P = 2.16×10−4 for GAREM, represented by rs3786309) in the independent fine mapping populations. This study highlights the utility of using prior functional data for sequence variants in prioritization analyses to search for robust signals in the suggestive range. PMID:26363033
NASA Astrophysics Data System (ADS)
Rope, R. C.; Ames, D. P.; Jerry, T. D.; Cherry, S. J.
2005-12-01
Invasive plant species, such as Bromus tectorum (cheatgrass), cost the United States over $36 billion per year and have encroached upon over 100 million acres, impacting range site productivity, disturbing wildlife habitat, altering the wildland fire regime and frequencies, and reducing biodiversity. Because of these adverse impacts, federal, tribal, state, and county land managers are faced with the challenge of prevention, early detection, management, and monitoring of invasive plants. Often these managers rely on the analysis of remotely sensed imagery as part of their management plan. However, it is difficult to predict the specific phenological events that allow for the spectral discrimination of invasive species using only remotely sensed imagery. To address this issue, tools are being developed to model and view optimal periods to collect high spatial and/or spectral resolution remotely sensed data for refined detection and mapping of invasive species and for use as a decision support tool for land managers. These tools involve the integration of historic and current climate data (cumulative growing days and precipitation), satellite imagery (MODIS), Bayesian Belief Networks, and a web ArcIMS application to distribute the information. The general approach is to issue an initial forecast early in the year based on the previous years' data. As the year progresses, air temperature, precipitation and newly acquired low-resolution MODIS satellite imagery are used to update the prediction. Updating is accomplished using a Bayesian Belief Network model that encodes the probabilistic relationships between prior years' conditions and those of the current year. These tools have specific application in providing a means by which land managers can efficiently and effectively detect, map, and monitor invasive plant species, specifically cheatgrass, in western rangelands. This information can then be integrated into management studies and plans to help land managers more accurately and completely determine areas infested with cheatgrass, aiding eradication practices and future management plans.
Spatial cluster detection using dynamic programming.
Sverchkov, Yuriy; Jiang, Xia; Cooper, Gregory F
2012-03-25
The task of spatial cluster detection involves finding spatial regions where some property deviates from the norm or the expected value. In a probabilistic setting, this task can be expressed as finding a region where some event is significantly more likely than usual. Spatial cluster detection is of interest in fields such as biosurveillance, mining of astronomical data, military surveillance, and analysis of fMRI images. In almost all such applications we are interested both in whether a cluster exists in the data and, if it exists, in finding the most accurate characterization of the cluster. We present a general dynamic programming algorithm for grid-based spatial cluster detection. The algorithm can be used both for Bayesian maximum a posteriori (MAP) estimation of the most likely spatial distribution of clusters and for Bayesian model averaging over a large space of spatial cluster distributions to compute the posterior probability of an unusual spatial clustering. The algorithm is explained and evaluated in the context of a biosurveillance application, specifically the detection and identification of influenza outbreaks based on emergency department visits. A relatively simple underlying model is constructed for the purpose of evaluating the algorithm, and the algorithm is evaluated using the model and semi-synthetic test data. When compared to baseline methods, tests indicate that the new algorithm can improve MAP estimates under certain conditions: the greedy algorithm we compared our method to was found to be more sensitive to smaller outbreaks, while as the size of the outbreaks increases, in terms of area affected and proportion of individuals affected, our method overtakes the greedy algorithm in spatial precision and recall. The new algorithm performs on par with baseline methods in the task of Bayesian model averaging. We conclude that the dynamic programming algorithm performs on par with other available methods for spatial cluster detection and point to its low computational cost and extendability as advantages in favor of further research and use of the algorithm.
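For orientation, a brute-force baseline for the same grid-based task: scan all axis-aligned rectangles and score each with the Kulldorff-style Poisson likelihood ratio. This is not the paper's dynamic programming algorithm, which searches a far larger space of cluster distributions efficiently; the counts below are synthetic:

```python
import numpy as np

# Kulldorff-style rectangular scan on a grid: a brute-force baseline for
# finding the region whose counts most exceed expectation.
def scan(counts, baseline):
    C, B = counts.sum(), baseline.sum()
    best, best_llr = None, 0.0
    n_r, n_c = counts.shape
    for r0 in range(n_r):
        for r1 in range(r0 + 1, n_r + 1):
            for c0 in range(n_c):
                for c1 in range(c0 + 1, n_c + 1):
                    c = counts[r0:r1, c0:c1].sum()
                    e = baseline[r0:r1, c0:c1].sum() * C / B
                    if c > e > 0 and c < C:
                        llr = (c * np.log(c / e)
                               + (C - c) * np.log((C - c) / (C - e)))
                        if llr > best_llr:
                            best, best_llr = (r0, r1, c0, c1), llr
    return best, best_llr

rng = np.random.default_rng(4)
baseline = np.full((12, 12), 5.0)               # expected ED visit counts
counts = rng.poisson(baseline)
counts[3:6, 7:10] += rng.poisson(6.0, (3, 3))   # injected synthetic outbreak
region, llr = scan(counts, baseline)
print("detected region (r0, r1, c0, c1):", region, "LLR:", round(llr, 1))
```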
Detailed Aggregate Resources Study, Dry Lake Valley, Nevada.
1981-05-29
Ledge-rock sources supplied coarse aggregates; local sand sources (generally collected within a few miles of corresponding ledge-rock sources) supplied fine aggregates. Drying shrinkage and compressive and tensile strength were measured on cylinders.
Influence of fly ash, slag cement and specimen curing on shrinkage of bridge deck concrete.
DOT National Transportation Integrated Search
2014-12-01
Cracks occur in bridge decks due to restrained shrinkage of concrete materials. Concrete materials shrink as cementitious materials hydrate and as water that is not chemically bonded to cementitious materials migrates from the high humid environm...
Espejo, L A; Zagmutt, F J; Groenendaal, H; Muñoz-Zanzi, C; Wells, S J
2015-11-01
The objective of this study was to evaluate the performance of bacterial culture of feces and serum ELISA to correctly identify cows with Mycobacterium avium ssp. paratuberculosis (MAP) at heavy, light, and non-fecal-shedding levels. A total of 29,785 parallel test results from bacterial culture of feces and serum ELISA were collected from 17 dairy herds in Minnesota, Pennsylvania, and Colorado. Samples were obtained from adult cows from dairy herds enrolled for up to 10 yr in the National Johne's Disease Demonstration Herd Project. A Bayesian latent class model was fitted to estimate the probabilities that bacterial culture of feces (using 72-h sedimentation or 30-min centrifugation methods) and serum ELISA results correctly identified cows as high positive, low positive, or negative given that cows were heavy, light, and non-shedders, respectively. The model assumed that no gold standard test was available and that conditional independence existed between diagnostic tests. The estimated conditional probabilities that bacterial culture of feces correctly identified heavy shedders, light shedders, and non-shedders were 70.9, 32.0, and 98.5%, respectively. The same values for the serum ELISA were 60.6, 18.7, and 99.5%, respectively. Differences in diagnostic test performance were observed among states. These results improve the interpretation of results from bacterial culture of feces and serum ELISA for detection of MAP and MAP antibody, respectively, which can support on-farm infection control decisions and can be used to evaluate disease-testing strategies, taking into account the accuracy of these tests. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
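Once such conditional probabilities are estimated, they can be turned into a cow-level posterior by Bayes' rule. In the sketch below the diagonal entries are the correct-classification probabilities reported above, while the off-diagonal splits, the prior, and the conditional-independence assumption are illustrative choices of this sketch only:

```python
import numpy as np

# Bayes-rule illustration. Diagonals are the reported correct-classification
# probabilities; off-diagonal splits and the prior are assumed for this
# sketch (the paper fits a full latent class model instead).
states = ["heavy", "light", "non-shedder"]
prior = np.array([0.02, 0.08, 0.90])

# P(culture result | true state); columns: high pos. / low pos. / negative.
culture = np.array([[0.709, 0.200, 0.091],
                    [0.100, 0.320, 0.580],
                    [0.005, 0.010, 0.985]])
# P(ELISA result | true state), same layout.
elisa = np.array([[0.606, 0.250, 0.144],
                  [0.050, 0.187, 0.763],
                  [0.001, 0.004, 0.995]])

# A cow with a high-positive culture (col 0) and low-positive ELISA (col 1),
# assuming conditional independence of the two tests given the true state.
post = prior * culture[:, 0] * elisa[:, 1]
post /= post.sum()
for s, p in zip(states, post):
    print(f"P({s} | results) = {p:.3f}")
```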
NASA Astrophysics Data System (ADS)
Miller, Albert E.
Early-age shrinkage of cementitious systems can result in an increased potential for cracking, which can lead to a reduction in service life. Early-age shrinkage cracking can be particularly problematic for high strength concretes, which are often specified for their strength and low permeability. However, these concretes frequently exhibit a reduction in internal relative humidity (RH) due to the hydration reaction (chemical shrinkage) and self-desiccation, which results in a bulk shrinkage, termed autogenous shrinkage, that is substantial at early ages. Due to the low permeability of these concretes, standard external curing is not always effective in addressing this reduction in internal RH, since the penetration of water can be limited. Internal curing has been developed to reduce autogenous shrinkage. Internally cured mixtures use internal reservoirs filled with fluid (generally water) that release this fluid at appropriate times to counteract the effects of self-desiccation, thereby maintaining a high internal RH. Internally cured concrete is frequently produced in North America using pre-wetted lightweight aggregate. One important aspect of preparing quality internally cured concrete is determining the absorbed moisture and surface moisture of the lightweight aggregate, which enables aggregate moisture corrections to be made for the concrete mixture. This thesis presents work performed to develop a test method using a centrifuge to determine the moisture state of pre-wetted fine lightweight aggregate. The results of the test method are then used in a series of worksheets developed to assist field technicians when performing the tests and applying the results to a mixture design. Additionally, research was performed on superabsorbent polymers to assess their ability to be used as an internal curing reservoir.
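The batching arithmetic that such a moisture measurement feeds is standard mix-design practice and can be sketched as below; the absorption and moisture values are hypothetical, and this is the correction calculation, not the thesis's centrifuge procedure itself:

```python
# Aggregate moisture correction arithmetic (standard mix-design practice;
# all numbers are hypothetical).
absorption = 0.175        # absorbed water, fraction of oven-dry LWA mass
total_moisture = 0.225    # total moisture from the centrifuge test
m_lwa_dry = 300.0         # oven-dry lightweight aggregate, kg/m^3
w_batch_design = 150.0    # design batch water, kg/m^3

surface_moisture = total_moisture - absorption   # free water on surfaces
free_water = surface_moisture * m_lwa_dry        # counts toward mix water
print(f"surface moisture: {surface_moisture:.3f} kg/kg dry")
print(f"adjusted batch water: {w_batch_design - free_water:.1f} kg/m^3")
print(f"internal-curing reservoir: {absorption * m_lwa_dry:.1f} kg/m^3")
```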
Predicting shrinkage and warpage in injection molding: Towards automatized mold design
NASA Astrophysics Data System (ADS)
Zwicke, Florian; Behr, Marek; Elgeti, Stefanie
2017-10-01
It is an inevitable part of any plastics molding process that the material undergoes some shrinkage during solidification. Mainly due to unavoidable inhomogeneities in the cooling process, the overall shrinkage cannot be assumed to be homogeneous in all volumetric directions. The direct consequence is warpage. The accurate prediction of such shrinkage and warpage effects has been the subject of a considerable amount of research, but it is important to note that this behavior depends greatly on the type of material that is used as well as on the process details. Without limiting ourselves to any specific properties of certain materials or process designs, we aim to develop a method for the automatized design of a mold cavity that will produce correctly shaped moldings after solidification. Essentially, this can be stated as a shape optimization problem, where the cavity shape is optimized to fulfill some objective function that measures defects in the molding shape. In order to be able to develop and evaluate such a method, we first require simulation methods for the different steps involved in the injection molding process that can represent the phenomena responsible for shrinkage and warpage in a sufficiently accurate manner. As a starting point, we consider the solidification of purely amorphous materials. In this case, the material slowly transitions from fluid-like to solid-like behavior as it cools down. This behavior is modeled using adjusted viscoelastic material models. Once the material has passed a certain temperature threshold during cooling, any viscous effects are neglected and the behavior is assumed to be fully elastic. Non-linear elastic laws are used to predict the shrinkage and warpage that occur after this point. We present the current state of these simulation methods and show some first approaches towards optimizing the mold cavity shape based on these methods.
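The shape optimization loop can be illustrated with a fixed-point sketch: enlarge the cavity by the current shape error until the predicted molding matches the target. The anisotropic shrinkage "simulator" below is a stand-in for the viscoelastic/elastic simulation chain described above:

```python
# Fixed-point cavity compensation: iteratively adjust the cavity until the
# predicted molding matches the target shape.
def molded_dims(cavity):
    # Hypothetical direction-dependent shrinkage: 2% in x, 3.5% in y.
    # Stands in for the full solidification/warpage simulation.
    return [cavity[0] * 0.980, cavity[1] * 0.965]

target = [100.0, 50.0]           # required molding dimensions, mm
cavity = list(target)            # initial guess: cavity = target
for it in range(20):
    molded = molded_dims(cavity)
    error = [t - m for t, m in zip(target, molded)]
    if max(abs(e) for e in error) < 1e-6:
        break
    cavity = [c + e for c, e in zip(cavity, error)]
print(f"converged after {it} iterations: cavity = "
      f"[{cavity[0]:.3f}, {cavity[1]:.3f}] mm")
```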
Property evolution during vitrification of dimethacrylate photopolymer networks.
Abu-elenain, Dalia A; Lewis, Steven H; Stansbury, Jeffrey W
2013-11-01
This study seeks to correlate the interrelated properties of conversion, shrinkage, modulus and stress as dimethacrylate networks transition from rubbery to glassy states during photopolymerization. An unfilled BisGMA/TEGDMA resin was photocured for various irradiation intervals (7-600 s) to provide controlled levels of immediate conversion, which was monitored continuously for 10 min. Fiber optic near-infrared spectroscopy permitted coupling of real-time conversion measurement with dynamic polymerization shrinkage (linometer), modulus (dynamic mechanical analyzer) and stress (tensometer) development profiles. The varied irradiation conditions produced final conversion ranging from 6% to more than 60%. Post-irradiation conversion (dark cure) was quite limited when photopolymerization was interrupted either at very low or very high levels of conversion while significant dark cure contributions were possible for photocuring reactions suspended within the post-gel, rubbery regime. Analysis of conversion-based property evolution during and subsequent to photocuring demonstrated that the shrinkage rate increased significantly at about 40% conversion followed by late-stage suppression in the conversion-dependent shrinkage rate that begins at about 45-50% conversion. The gradual vitrification process over this conversion range is evident based on the broad but well-defined inflection in the modulus versus conversion data. As limiting conversion is approached, modulus and, to a somewhat lesser extent, stress rise precipitously as a result of vitrification with the stress profile showing little if any late-stage suppression as seen with shrinkage. Near the limiting conversion for this model resin, the volumetric polymerization shrinkage rate slows while an exponential rise in modulus promotes the vitrification process that appears to largely dictate stress development. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Effect of intrinsic and extrinsic factors on the simulated D-band length of type I collagen.
Varma, Sameer; Botlani, Mohsen; Hammond, Jeff R; Scott, H Larry; Orgel, Joseph P R O; Schieber, Jay D
2015-10-01
A signature feature of collagen is its axial periodicity, visible in TEM as alternating dark and light bands. In mature, type I collagen, this repeating unit, D, is 67 nm long. This periodicity reflects an underlying packing of the constituent triple-helix polypeptide monomers wherein the dark bands represent gaps between axially adjacent monomers. This organization is visible distinctly in the microfibrillar model of collagen obtained from fiber diffraction. However, to date, no atomistic simulations of this diffraction model under zero-stress conditions have reported a preservation of this structural feature. Such a demonstration is important as it provides the baseline for inferring response functions of physiological stimuli. In contrast, simulations predict a considerable shrinkage of the D-band (11-19%). Here we systematically evaluate the effect of several factors on D-band shrinkage. Using force fields employed in previous studies, we find that irrespective of the temperature/pressure coupling algorithms, assumed salt concentration or hydration level, and whether or not the monomers are cross-linked, the D-band shrinks considerably. This shrinkage is associated with the bending and widening of individual monomers, but employing a force field whose backbone dihedral energy landscape matches more closely our computed CCSD(T) values produces a small D-band shrinkage of < 3%. Since this force field also performs better against other experimental data, it appears that the large shrinkage observed in earlier simulations is a force-field artifact. The residual shrinkage could be due to the absence of certain atomic-level details, such as glycosylation sites, for which we do not yet have suitable data. © 2015 Wiley Periodicals, Inc.
Prananingrum, Widyasri; Tomotake, Yoritoki; Naito, Yoshihito; Bae, Jiyoung; Sekine, Kazumitsu; Hamada, Kenichi; Ichikawa, Tetsuo
2016-08-01
The prosthetic applications of titanium have been challenging because titanium does not possess suitable properties for the conventional casting method using the lost wax technique. We have developed a production method for biomedical porous titanium using a moldless process. This study aimed to evaluate the physical and mechanical properties of porous titanium using various particle sizes, shapes, and mixing ratios of titanium powder to wax binder for use in prosthesis production. CP Ti powders with different particle sizes, shapes, and mixing ratios were divided into five groups. A 90:10 wt% mixture of titanium powder and wax binder was prepared manually at 70°C. After debinding at 380°C, the specimens were sintered in Ar at 1100°C without a mold for 1 h. The linear shrinkage ratio of the sintered specimens ranged from 2.5% to 14.2%, and increased with decreasing particle size. While the linear shrinkage ratios of Groups 3, 4, and 5 were approximately 2%, Group 1 showed the highest shrinkage of all. The bending strength ranged from 106 to 428 MPa, depending on the porosity: Groups 1 and 2 presented low porosity and correspondingly higher strength. The shear bond strength ranged from 32 to 100 MPa and was also particle-size dependent. A decrease in porosity increased the linear shrinkage ratio and bending strength. The shrinkage and mechanical strength required for prostheses depended on the particle size and shape of the titanium powders. These findings suggest that this production method can be applied to prosthetic frameworks through appropriate material design. Copyright © 2016 Elsevier Ltd. All rights reserved.
Shrinkage of Dental Composite in Simulated Cavity Measured with Digital Image Correlation
Li, Jianying; Thakur, Preetanjali; Fok, Alex S. L.
2014-01-01
Polymerization shrinkage of dental resin composites can lead to restoration debonding or cracked tooth tissues in composite-restored teeth. In order to understand where and how shrinkage strain and stress develop in such restored teeth, Digital Image Correlation (DIC) was used to provide a comprehensive view of the displacement and strain distributions within model restorations that had undergone polymerization shrinkage. Specimens with model cavities were made of cylindrical glass rods with both diameter and length being 10 mm. The dimensions of the mesial-occlusal-distal (MOD) cavity prepared in each specimen measured 3 mm and 2 mm in width and depth, respectively. After filling the cavity with resin composite, the surface under observation was sprayed with first a thin layer of white paint and then fine black charcoal powder to create high-contrast speckles. Pictures of that surface were then taken before curing and 5 min after. Finally, the two pictures were correlated using DIC software to calculate the displacement and strain distributions. The resin composite shrank vertically towards the bottom of the cavity, with the top center portion of the restoration having the largest downward displacement. At the same time, it shrank horizontally towards its vertical midline. Shrinkage of the composite stretched the material in the vicinity of the “tooth-restoration” interface, resulting in cuspal deflections and high tensile strains around the restoration. Material close to the cavity walls or floor had direct strains mostly in the directions perpendicular to the interfaces. Summation of the two direct strain components showed a relatively uniform distribution around the restoration, and its magnitude equaled approximately the volumetric shrinkage strain of the material. PMID:25079865
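The core DIC operation is subset matching by cross-correlation. A minimal sketch, tracking one speckle subset between a reference and a deformed image by maximizing zero-normalized cross-correlation over integer shifts (production DIC adds subpixel interpolation and full displacement fields):

```python
import numpy as np

# Minimal DIC building block: track one speckle subset between a reference
# and a deformed image via zero-normalized cross-correlation (ZNCC).
def zncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

rng = np.random.default_rng(5)
ref = rng.random((200, 200))                            # synthetic speckles
defimg = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)   # known shift (3, -2)

y0, x0, half, search = 100, 100, 15, 6
subset = ref[y0 - half:y0 + half, x0 - half:x0 + half]
best = max(((zncc(subset,
                  defimg[y0 + dy - half:y0 + dy + half,
                         x0 + dx - half:x0 + dx + half]), dy, dx)
            for dy in range(-search, search + 1)
            for dx in range(-search, search + 1)))
print(f"estimated displacement: dy = {best[1]}, dx = {best[2]}")
```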
NASA Astrophysics Data System (ADS)
Olugboji, T. M.; Lekic, V.; McDonough, W.
2017-07-01
We present a new approach for evaluating existing crustal models using ambient noise data sets and their associated uncertainties. We use a transdimensional hierarchical Bayesian approach to invert ambient noise surface wave phase dispersion maps for Love and Rayleigh waves, using measurements obtained from Ekström (2014). Spatiospectral analysis shows that our results are comparable to a linear least squares inverse approach (except at higher harmonic degrees), but the procedure has additional advantages: (1) it yields an autoadaptive parameterization that follows Earth structure without making restrictive assumptions about model resolution (regularization or damping) and data errors; (2) it can recover non-Gaussian phase velocity probability distributions while quantifying the sources of uncertainties in the data measurements and modeling procedure; and (3) it enables statistical assessments of different crustal models (e.g., CRUST1.0, LITHO1.0, and NACr14) using variable resolution residual and standard deviation maps estimated from the ensemble. These assessments show that in the stable old crust of the Archean, the misfits are statistically negligible, requiring no significant update to crustal models from the ambient noise data set. In other regions of the U.S., significant updates to regionalization and crustal structure are expected, especially in the shallow sedimentary basins and the tectonically active regions, where the differences between model predictions and data are statistically significant.
A Bayesian analysis of redshifted 21-cm H I signal and foregrounds: simulations for LOFAR
NASA Astrophysics Data System (ADS)
Ghosh, Abhik; Koopmans, Léon V. E.; Chapman, E.; Jelić, V.
2015-09-01
Observations of the epoch of reionization (EoR) using the 21-cm hyperfine emission of neutral hydrogen (H I) promise to open an entirely new window on the formation of the first stars, galaxies and accreting black holes. In order to characterize the weak 21-cm signal, we need to develop imaging techniques that can reconstruct the extended emission very precisely. Here, we present an inversion technique for LOw Frequency ARray (LOFAR) baselines at the North Celestial Pole (NCP), based on a Bayesian formalism with optimal spatial regularization, which is used to reconstruct the diffuse foreground map directly from the simulated visibility data. We notice that the spatial regularization de-noises the images to a large extent, allowing one to recover the 21-cm power spectrum over a considerable k⊥-k∥ space in the range 0.03 Mpc-1 < k⊥ < 0.19 Mpc-1 and 0.14 Mpc-1 < k∥ < 0.35 Mpc-1 without subtracting the noise power spectrum. We find that, in combination with using generalized morphological component analysis (GMCA), a non-parametric foreground removal technique, we can mostly recover the spherical average power spectrum within 2σ statistical fluctuations for an input Gaussian random root-mean-square noise level of 60 mK in the maps after 600 h of integration over a 10-MHz bandwidth.
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured by parameters such as temperature, air pressure, wind velocity, humidity and other atmospheric phenomena. Extreme weather due to global warming can lead to drought, flood, hurricanes and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with distinctive output, particularly a GIS-based mapping process that reports the current weather status at the coordinates of each region and can forecast seven days ahead. The data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. Forecasting error is calculated with the mean square error (MSE). The error value for minimum temperature is 0.28 and for maximum temperature 0.15. Meanwhile, the error value for minimum humidity is 0.38 and for maximum humidity 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error, the better the accuracy.
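The following minimal Python sketch illustrates the BMA idea under a Gaussian-error assumption: member forecasts are weighted by approximate posterior model probabilities estimated from a training period. The member models, observations and numbers are hypothetical.

# Minimal sketch of Bayesian Model Averaging (BMA) for point forecasts:
# member models are weighted by approximate posterior probabilities
# derived from their training-period likelihoods (Gaussian errors
# assumed). Data and member forecasts below are made up.
import numpy as np

def bma_weights(train_obs, train_forecasts):
    """Posterior model weights from Gaussian log-likelihoods."""
    resid = train_forecasts - train_obs          # (n_models, n_times)
    sigma2 = resid.var(axis=1) + 1e-12
    n = resid.shape[1]
    loglik = -0.5 * n * np.log(2 * np.pi * sigma2) \
             - 0.5 * (resid ** 2).sum(axis=1) / sigma2
    w = np.exp(loglik - loglik.max())            # stabilise before normalising
    return w / w.sum()

# Three hypothetical member models forecasting daily max temperature (deg C)
obs = np.array([31.0, 32.5, 30.8, 33.1, 31.9])
fcsts = np.array([[30.5, 32.0, 31.0, 33.5, 31.5],   # model A
                  [32.0, 33.5, 32.0, 34.0, 33.0],   # model B
                  [31.2, 32.4, 30.5, 33.0, 32.1]])  # model C

w = bma_weights(obs, fcsts)
new_member_forecasts = np.array([32.0, 33.0, 31.5])
bma_forecast = w @ new_member_forecasts
print("weights:", np.round(w, 3), " BMA forecast:", round(bma_forecast, 2))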
Bayesian Non-Stationary Index Gauge Modeling of Gridded Precipitation Extremes
NASA Astrophysics Data System (ADS)
Verdin, A.; Bracken, C.; Caldwell, J.; Balaji, R.; Funk, C. C.
2017-12-01
We propose a Bayesian non-stationary model to generate watershed scale gridded estimates of extreme precipitation return levels. The Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) dataset is used to obtain gridded seasonal precipitation extremes over the Taylor Park watershed in Colorado for the period 1981-2016. For each year, grid cells within the Taylor Park watershed are aggregated to a representative "index gauge," which is input to the model. Precipitation-frequency curves for the index gauge are estimated for each year, using climate variables with significant teleconnections as proxies. Such proxies enable short-term forecasting of extremes for the upcoming season. Disaggregation ratios of the index gauge to the grid cells within the watershed are computed for each year and preserved to translate the index gauge precipitation-frequency curve to gridded precipitation-frequency maps for select return periods. Gridded precipitation-frequency maps are of the same spatial resolution as CHIRPS (0.05° x 0.05°). We verify that the disaggregation method preserves spatial coherency of extremes in the Taylor Park watershed. Validation of the index gauge extreme precipitation-frequency method consists of ensuring extreme value statistics are preserved on a grid cell basis. To this end, a non-stationary extreme precipitation-frequency analysis is performed on each grid cell individually, and the resulting frequency curves are compared to those produced by the index gauge disaggregation method.
Gros, Sébastien A A; Xu, William; Roeske, John C; Choi, Mehe; Emami, Bahman; Surucu, Murat
2017-03-01
To develop a novel method to monitor external anatomical changes in head and neck cancer patients in order to triage possible adaptive radiotherapy needs. The presented approach aims to provide information on internal anatomical changes based on variations observed in the external anatomy. Setup Cone Beam Computed Tomography (CBCT) images are processed to produce an accurate external contour of the patient's skin. After registering the CBCTs to the reference planning CT, the external contours from each CBCT are transferred to the initial (first-week) CBCT. Contour radii, defined as the distances between an external contour and the isocenter projection in each CBCT slice, are calculated for each scan over the full 360 degrees. The changes in external anatomy are then quantified by the difference in radial distance between the external contours of any secondary CBCT relative to the initial CBCT. Finally, the radial difference is displayed in cylindrical coordinates as a 2D intensity map to highlight regions of interest with significant changes. Weekly CBCT scans from 15 head and neck patients were retrospectively analyzed to demonstrate the utility of this approach as a proof of principle. External changes suggested by the 2D radial difference map of an example patient after 23 fractions were then correlated with the changes in the gross tumor volumes and organs at risk, and the resulting dosimetric effects were evaluated. An interactive standalone software application has been developed to facilitate the generation and interpretation of the 2D intensity map. The 2D radial difference maps provided qualitative and quantitative information, such as the location and magnitude of external contour changes and the rate at which these deviations occur. Out of the 15 patients, 10 presented clear evidence of general external volume shrinkage due to weight loss, and 9 had at least one site of local shrinkage. Only two patients showed no signs of anatomical change during their entire treatment course. For the example patient, the mean (±σ) radial difference was 6.7 (±3.0) mm for the left parotid and 7.3 (±2.5) mm for the right parotid. The mean dose to the left and right parotids increased from 20.1 Gy to 30 Gy and from 16.3 Gy to 29.6 Gy, respectively. This novel method provides an efficient tool to visualize 3D external anatomical changes on a single 2D map. It quickly pinpoints the location of differences in anatomy during the course of radiotherapy, which can help physicians determine whether a treatment plan needs to be adapted. The interactive graphical user interface developed in this study will be evaluated in an adaptive radiotherapy workflow for head and neck patients in a future prospective trial. © 2016 American Association of Physicists in Medicine.
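A minimal sketch of how such a 2D radial-difference map can be assembled, assuming contours have already been reduced to per-slice radii about the isocenter projection; the contour values and the simulated local shrinkage are invented.

# Sketch of the 2D radial-difference map idea: for each CBCT slice, the
# external contour is reduced to radii r(theta) about the isocenter
# projection; differences relative to the first-week CBCT are stacked
# into a (slice x angle) intensity map. Contours here are synthetic.
import numpy as np

n_slices, n_angles = 80, 360
theta = np.deg2rad(np.arange(n_angles))

# Hypothetical initial contour: ellipse-like radii per slice (mm)
r_initial = 90 + (10 * np.cos(2 * theta))[None, :] * np.ones((n_slices, 1))

# Later CBCT: uniform 3 mm shrinkage plus a local 5 mm loss near
# theta ~ 90 deg on slices 30-50 (e.g. a shrinking parotid region)
r_later = r_initial - 3.0
local = np.zeros((n_slices, n_angles))
local[30:50, 70:110] = 5.0
r_later = r_later - local

radial_diff = r_later - r_initial        # negative = inward shrinkage
print("mean radial difference: %.1f mm" % radial_diff.mean())
print("largest local shrinkage: %.1f mm" % radial_diff.min())
# radial_diff can be shown with e.g. matplotlib's imshow as the 2D map.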
Role of IGF-1 in cortical plasticity and functional deficit induced by sensorimotor restriction.
Mysoet, Julien; Dupont, Erwan; Bastide, Bruno; Canu, Marie-Hélène
2015-09-01
In the adult rat, sensorimotor restriction by hindlimb unloading (HU) is known to induce impairments in motor behavior as well as a disorganization of the somatosensory cortex (shrinkage of the cortical representation of the hindpaw, enlargement of the cutaneous receptive fields, decreased cutaneous sensibility threshold). Recently, our team demonstrated that the IGF-1 level was decreased in the somatosensory cortex of rats submitted to a 14-day period of HU. To determine whether IGF-1 is involved in these plastic mechanisms, a chronic cortical infusion of this substance was performed by means of an osmotic minipump. When administered in control rats, IGF-1 affects the size of receptive fields and the cutaneous threshold, but has no effect on the somatotopic map. In addition, when injected during the whole HU period, IGF-1 is notably involved in the cortical changes due to hypoactivity: the shrinkage of the somatotopic representation of the hindlimb is prevented, and the enlargement of receptive fields is reduced. IGF-1 has no effect on the increase in neuronal response to peripheral stimulation. We also explored the functional consequences of IGF-1 level restoration on tactile sensory discrimination. In HU rats, the percentage of paw withdrawal after a light tactile stimulation was decreased, whereas it was similar to the control level in HU-IGF-1 rats. Taken together, the data clearly indicate that IGF-1 plays a key role in cortical plastic mechanisms and in the behavioral alterations induced by a decrease in sensorimotor activity. Copyright © 2015 Elsevier B.V. All rights reserved.
Craig, Marlies H; Sharp, Brian L; Mabaso, Musawenkosi LH; Kleinschmidt, Immo
2007-01-01
Background Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa) project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Those correlated with higher-ranking relatives of the same environmental theme were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally a Bayesian geo-statistical model using Markov Chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country. Conclusion We have produced a highly plausible and parsimonious model of historical malaria risk for Botswana from point-referenced data from a 1961/2 prevalence survey of malaria infection in 1–14-year-old children. After starting with a list of 50 potential variables we ended with three highly plausible predictors, by applying a systematic and repeatable staged variable selection procedure that included a spatial analysis, which has application for other environmentally determined infectious diseases. All this was accomplished using general-purpose statistical software. PMID:17892584
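The bootstrap ranking stage can be sketched as below, assuming a plain forward stepwise search by AIC over hypothetical predictor stand-ins; this is a schematic of the staged procedure, not the study's code.

# Sketch of the bootstrap variable-ranking step: forward stepwise
# selection by AIC is run on many bootstrap samples and predictors are
# ranked by how often they are selected. Variables are hypothetical
# stand-ins for the environmental themes used in the study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 5))          # rainfall, temperature, altitude, ...
names = ["rain", "temp", "alt", "ndvi", "dist_water"]
logit_p = 0.8 * X[:, 0] + 0.6 * X[:, 1] - 0.7 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

def forward_aic(Xb, yb):
    chosen, remaining = [], list(range(Xb.shape[1]))
    best_aic = np.inf
    while remaining:
        aics = []
        for j in remaining:
            design = sm.add_constant(Xb[:, chosen + [j]])
            aics.append(sm.Logit(yb, design).fit(disp=0).aic)
        if min(aics) >= best_aic:
            break                    # no predictor improves the AIC
        best_aic = min(aics)
        j_best = remaining[int(np.argmin(aics))]
        chosen.append(j_best)
        remaining.remove(j_best)
    return chosen

counts = np.zeros(5)
for _ in range(200):                 # 1000 in the study; fewer here for speed
    idx = rng.integers(0, n, n)
    counts[forward_aic(X[idx], y[idx])] += 1

for name, c in sorted(zip(names, counts), key=lambda t: -t[1]):
    print(f"{name}: selected in {c:.0f}/200 bootstrap samples")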
Minimal entropy approximation for cellular automata
NASA Astrophysics Data System (ADS)
Fukś, Henryk
2014-02-01
We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim.
Investigation into shrinkage of high-performance concrete used for Iowa bridge decks and overlays.
DOT National Transportation Integrated Search
2013-09-01
High-performance concrete (HPC) overlays have been used increasingly as an effective and economical method for bridge decks in Iowa and other states. However, due to its high cementitious material content, HPC often displays high shrinkage cracking p...
DOT National Transportation Integrated Search
2009-01-01
Early-age cracking, typically caused by drying shrinkage (and often coupled with autogenous and thermal : shrinkage), can have several detrimental effects on long-term behavior and durability. Cracking can also provide : ingress of water that can dri...
Optimization of injection molding process parameters for a plastic cell phone housing component
NASA Astrophysics Data System (ADS)
Rajalingam, Sokkalingam; Vasant, Pandian; Khe, Cheng Seong; Merican, Zulkifli; Oo, Zeya
2016-11-01
Injection molding is one of the most widely used processes for producing thin-walled plastic items. However, setting optimal process parameters is difficult, as poor settings may produce defects such as shrinkage in the molded part. This study aims to determine optimum injection molding process parameters that reduce shrinkage defects in a plastic cell phone cover. The machine settings currently in use produced shrinkage, with length and width dimensions below the specified limits. Thus, further experiments were needed to identify optimum process parameters that keep the length and width close to their targets with minimal variation. The mold temperature, injection pressure and screw rotation speed are used as process parameters in this research. Response Surface Methodology (RSM) is applied to find the optimal molding process parameters. The major factors influencing the responses were identified using the analysis of variance (ANOVA) technique. Verification runs showed that the shrinkage defect can be minimized with the optimal settings found by RSM.
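A minimal sketch of the RSM step, assuming coded factors on [-1, 1], a full second-order model and invented shrinkage measurements; the optimum of the fitted quadratic surface is then located numerically.

# Minimal response-surface sketch: fit a second-order model of shrinkage
# as a function of mold temperature, injection pressure and screw speed
# (coded to [-1, 1]) and locate the settings minimising predicted
# shrinkage. The measurements below are invented for illustration.
import numpy as np
from itertools import product
from scipy.optimize import minimize

# Full factorial design at three levels, coded units
X = np.array(list(product([-1, 0, 1], repeat=3)), dtype=float)
rng = np.random.default_rng(1)
# Hypothetical true surface: shrinkage (%) with a minimum inside the region
y = (0.9 + 0.25 * X[:, 0] - 0.2 * X[:, 1] + 0.1 * X[:, 2]
     + 0.3 * X[:, 0] ** 2 + 0.2 * X[:, 1] ** 2
     + 0.1 * X[:, 0] * X[:, 1] + rng.normal(0, 0.02, len(X)))

def quad_terms(x):
    x1, x2, x3 = x
    return np.array([1, x1, x2, x3, x1*x1, x2*x2, x3*x3,
                     x1*x2, x1*x3, x2*x3])

A = np.array([quad_terms(row) for row in X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

res = minimize(lambda x: quad_terms(x) @ beta,
               x0=np.zeros(3), bounds=[(-1, 1)] * 3)
print("optimal coded settings:", np.round(res.x, 2))
print("predicted minimum shrinkage: %.3f %%" % res.fun)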
Three-dimensional aspects of the shrinking phenomenon of ArF resist
NASA Astrophysics Data System (ADS)
Laufer, Ido; Eytan, Giora E.; Dror, Ophir
2002-07-01
Previous studies of the interaction of electron beams with different types of ArF resists have shown the undesired phenomenon of resist shrinkage. The lateral component of this shrinkage has been detected and quantified easily by SEM CD measurements. However, the vertical extent of this phenomenon has to date remained unknown. In this work we present measurements of the changes in height and sidewall angle of an ArF line using a new e-beam tilting capability of the Vera SEM 3D. The 3D measurement results show that the height of the line shrinks in similar proportion to the top and bottom CDs, though with a different magnitude. Because the penetration depth of the e-beam is higher on the top of the line than on the sidewall, the vertical shrinkage reaches steady state more rapidly than the lateral shrinkage. We also found a slight reduction in sidewall angle, which is less than one degree even under high e-beam exposure.
Rahbar, Mohammad H; Choi, Sangbum; Hong, Chuan; Zhu, Liang; Jeon, Sangchoon; Gardiner, Joseph C
2018-01-01
We propose a nonparametric shrinkage estimator for the median survival times from several independent samples of right-censored data, which combines the samples and hypothesis information to improve efficiency. We compare the efficiency of the proposed shrinkage estimation procedure to the unrestricted estimator and the combined estimator through extensive simulation studies. Our results indicate that the performance of these estimators depends on the strength of homogeneity of the medians. When homogeneity holds, the combined estimator is the most efficient estimator. However, it becomes inconsistent when homogeneity fails. On the other hand, the proposed shrinkage estimator remains efficient. Its efficiency decreases as the survival medians deviate from equality, but it is expected to remain at least as efficient as the unrestricted estimator. Our simulation studies also indicate that the proposed shrinkage estimator is robust to moderate levels of censoring. We demonstrate application of these methods to estimating the median time for trauma patients to receive red blood cells in the Prospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study.
Compact reflection holographic recording system with high angle multiplexing
NASA Astrophysics Data System (ADS)
Kanayasu, Mayumi; Yamada, Takehumi; Takekawa, Shunsuke; Akieda, Kensuke; Goto, Akiyo; Yamamoto, Manabu
2011-02-01
Holographic memory systems have been widely researched since 1963. However, the size of the drives required and the deterioration of reconstructed data resulting from shrinkage of the medium have made practical use of hologram memory difficult. In light of this, we propose a novel holographic recording/reconstructing system: a dual-reference beam reflection system that is smaller than conventional systems such as the off-axis or co-axis types, and which is expected to increase the multiplexing number in angle-multiplexed recording. In this multiplex recording system, two laser beams are used as reference beams, and the recorded data are reconstructed stably even if there is shrinkage of the recording medium. In this paper, the reflection holographic memory system is explained in detail. In addition, the change in angle selectivity resulting from shrinkage of the medium is analyzed using the laminated-film three-dimensional simulation method. As a result, we demonstrate that the dual-reference beam multiplex recording system is effective in reducing the influence of medium shrinkage.
Improved Silica Aerogel Composite Materials
NASA Technical Reports Server (NTRS)
Paik, Jong-Ah; Sakamoto, Jeffrey; Jones, Steven
2008-01-01
A family of aerogel-matrix composite materials having thermal-stability and mechanical- integrity properties better than those of neat aerogels has been developed. Aerogels are known to be excellent thermal- and acoustic-insulation materials because of their molecular-scale porosity, but heretofore, the use of aerogels has been inhibited by two factors: (1) Their brittleness makes processing and handling difficult. (2) They shrink during production and shrink more when heated to high temperatures during use. The shrinkage and the consequent cracking make it difficult to use them to encapsulate objects in thermal-insulation materials. The underlying concept of aerogel-matrix composites is not new; the novelty of the present family of materials lies in formulations and processes that result in superior properties, which include (1) much less shrinkage during a supercritical-drying process employed in producing a typical aerogel, (2) much less shrinkage during exposure to high temperatures, and (3) as a result of the reduction in shrinkage, much less or even no cracking.
Improving the Incoherence of a Learned Dictionary via Rank Shrinkage.
Ubaru, Shashanka; Seghouane, Abd-Krim; Saad, Yousef
2017-01-01
This letter considers the problem of dictionary learning for sparse signal representation whose atoms have low mutual coherence. To learn such dictionaries, at each step, we first update the dictionary using the method of optimal directions (MOD) and then apply a dictionary rank shrinkage step to decrease its mutual coherence. In the rank shrinkage step, we first compute a rank 1 decomposition of the column-normalized least squares estimate of the dictionary obtained from the MOD step. We then shrink the rank of this learned dictionary by transforming the problem of reducing the rank to a nonnegative garrotte estimation problem and solving it using a path-wise coordinate descent approach. We establish theoretical results that show that the rank shrinkage step included will reduce the coherence of the dictionary, which is further validated by experimental results. Numerical experiments illustrating the performance of the proposed algorithm in comparison to various other well-known dictionary learning algorithms are also presented.
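The two-step update can be sketched as follows; the garrote-style shrinkage of singular values is a simplified stand-in for the paper's path-wise coordinate descent solution, and the training data are random.

# Simplified sketch of the two-step update described above: a MOD
# dictionary update followed by a rank-shrinkage step. The garrote-style
# shrinkage of singular values here is a simplified stand-in for the
# paper's path-wise coordinate descent solution.
import numpy as np

def mod_update(Y, A):
    """Method of Optimal Directions: least-squares dictionary update."""
    D = Y @ A.T @ np.linalg.pinv(A @ A.T)
    return D / np.linalg.norm(D, axis=0, keepdims=True)  # column-normalise

def rank_shrink(D, lam=0.5):
    """Shrink singular values with nonnegative-garrote factors."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    c = np.maximum(0.0, 1.0 - lam / (s ** 2))            # garrote factors
    D_shrunk = (U * (c * s)) @ Vt
    return D_shrunk / np.linalg.norm(D_shrunk, axis=0, keepdims=True)

def mutual_coherence(D):
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(0)
Y = rng.normal(size=(20, 500))          # training signals
A = rng.normal(size=(40, 500)) * (rng.random((40, 500)) < 0.1)  # sparse codes

D = mod_update(Y, A)
print("coherence after MOD:          %.3f" % mutual_coherence(D))
print("coherence after rank shrink:  %.3f" % mutual_coherence(rank_shrink(D)))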
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wyrzykowski, Mateusz, E-mail: mateusz.wyrzykowski@empa.ch; Lodz University of Technology, Department of Building Physics and Building Materials, Lodz; Trtik, Pavel
2015-07-15
Water transport in fresh, highly permeable concrete and rapid water evaporation from the concrete surface during the first few hours after placement are the key parameters influencing plastic shrinkage cracking. In this work, neutron tomography was used to determine both the water loss from the concrete surface due to evaporation and the redistribution of fluid that occurs in fresh mortars exposed to external drying. In addition to the reference mortar with a water to cement ratio (w/c) of 0.30, a mortar with the addition of pre-wetted lightweight aggregates (LWA) and a mortar with a shrinkage reducing admixture (SRA) were tested. The addition of SRA reduced the evaporation rate from the mortar at the initial stages of drying and reduced the total water loss. The pre-wetted LWA released a large part of the absorbed water as a consequence of capillary pressure developing in the fresh mortar due to evaporation.
Vu, Lien T; Chen, Chao-Chang A; Lee, Chia-Cheng; Yu, Chia-Wei
2018-04-20
This study aims to develop a compensating method to minimize the shrinkage error of the shell mold (SM) in the injection molding (IM) process to obtain uniform optical power in the central optical zone of soft axially symmetric multifocal contact lenses (CL). The Z-shrinkage error along the Z (axial) axis of the anterior SM, corresponding to the anterior surface of a dry contact lens in the IM process, can be minimized by optimizing IM process parameters and then compensating for additional (Add) powers in the central zone of the original lens design. First, the shrinkage error is minimized by optimizing three levels of four IM parameters (mold temperature, injection velocity, packing pressure, and cooling time) in 18 IM simulations based on an L18(2^1 × 3^4) orthogonal array. Then, based on the Z-shrinkage error from the IM simulations, three new contact lens designs are obtained by increasing the Add power in the central zone of the original multifocal CL design to compensate for the optical power errors. Results from the IM process simulations and the optical simulations show that the new CL design with a 0.1 D increase in Add power has the shrinkage profile closest to the original anterior SM profile, reducing the absolute Z-shrinkage error by 55%, and gives more uniform power in the central zone than the other two cases. Moreover, actual IM experiments of SMs for casting soft multifocal CLs have been performed, and wet CLs were produced for both the original and the new design. Optical performance results have verified the improvement of the compensated CL design. The feasibility of this compensating method has been proven by measurements of the produced soft multifocal CLs of the new design. The results of this study can be further applied to predict or compensate for the total optical power errors of soft multifocal CLs.
NASA Astrophysics Data System (ADS)
Li, J.; Warner, T.; Bao, A.
2017-12-01
Central Asia is one of the world's most vulnerable areas in its response to global change. Lakes in arid regions of Central Asia remain sensitive to climatic change and fluctuate with temperature and precipitation variations. Studies have shown that some Central Asian inland lakes have shrunk in area or vanished over the last decades. Quantitative analysis of the spatio-temporal processes of lake volume change will improve our understanding of water resource utilization in arid regions and of lake responses to regional climate change. However, owing to the lack of lake bathymetry or observation data, the volumes of these lakes remain unknown. In this paper, three lakes in Central Asia, Chaiwopu Lake, Alik Lake and Selectyteniz Lake, are used to reconstruct lake volume changes. Firstly, stereo mapping technologies based on ZY-3 high resolution data are used to map high-precision 3-D lake bathymetry and to create "area-level-volume" curves from the bathymetric contours. Secondly, time series of lake areas over the last 50 years are mapped with multi-source and multi-temporal remote sensing images. Based on the lake storage curves and the time series of lake areas, lake volumes over the last five decades can be reconstructed, and the spatio-temporal characteristics of lake volume changes and their mechanisms are analyzed. The results show that high-precision lake hydrological elements can be reconstructed for drying lakes in arid regions through the application of stereo mapping technology to remote sensing.
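A sketch of the "area-level-volume" construction, assuming a hypsometric (level-area) curve already extracted from the bathymetric map; the levels, areas and epoch areas are invented for illustration.

# Sketch of an "area-level-volume" relation built from bathymetric
# contours: integrating the level-area curve gives storage, after which
# a remotely sensed lake area can be converted to a volume. Numbers are
# illustrative, not from the three study lakes.
import numpy as np

# Hypothetical hypsometry from a 3-D bathymetric map:
# water level above lake bottom (m) and corresponding surface area (km^2)
level = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
area = np.array([0.0, 1.2, 3.5, 6.0, 8.8, 12.0])

# Volume at each level by integrating area over level (km^2 * m -> 1e6 m^3)
volume = np.concatenate([[0.0], np.cumsum(
    0.5 * (area[1:] + area[:-1]) * np.diff(level))])

def area_to_volume(a_obs):
    """Convert a remotely sensed lake area to storage via the curve."""
    return np.interp(a_obs, area, volume)

# Satellite-derived areas for three epochs (km^2), made up for illustration
for year, a in [(1975, 10.5), (1995, 7.1), (2015, 4.0)]:
    print(f"{year}: area {a:5.1f} km^2 -> volume {area_to_volume(a):6.1f} x 10^6 m^3")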
NASA Astrophysics Data System (ADS)
García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.
2018-07-01
In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics, but they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of the fractal processes in the wavelet domain. This method has been validated over simulated signals and over real signals with economical and biological origin. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems, and uncovering interesting patterns present in time series.
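A simplified sketch of wavelet-domain shrinkage for separating a smooth deterministic component from a stochastic one, using PyWavelets; a universal soft threshold stands in for the paper's Bayesian shrinkage rule, and the synthetic random-walk component is only a crude proxy for fractional Brownian motion.

# Illustrative wavelet-shrinkage decomposition of a series into a smooth
# deterministic estimate plus a stochastic residual. A universal soft
# threshold stands in for the paper's Bayesian shrinkage rule, which
# adapts to the self-similar scaling of the fractal component.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n) / n

# Synthetic signal: band-limited deterministic part + random-walk noise
# (a crude stand-in for fractional Brownian motion with H = 0.5)
deterministic = np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 9 * t)
stochastic = np.cumsum(rng.normal(0, 0.05, n))
signal = deterministic + stochastic

coeffs = pywt.wavedec(signal, "db4", level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale estimate
thr = sigma * np.sqrt(2 * np.log(n))                  # universal threshold
shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                        for c in coeffs[1:]]

estimate = pywt.waverec(shrunk, "db4")[:n]
rmse = np.sqrt(np.mean((estimate - deterministic) ** 2))
print(f"RMSE of deterministic estimate: {rmse:.3f}")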
Paternal occupation and birth defects: findings from the National Birth Defects Prevention Study
Desrosiers, Tania A.; Herring, Amy H.; Shapira, Stuart K.; Hooiveld, Mariette; Luben, Tom J.; Herdt-Losavio, Michele L.; Lin, Shao; Olshan, Andrew F.
2013-01-01
Objectives Several epidemiologic studies have suggested that certain paternal occupations may be associated with an increased prevalence of birth defects in offspring. Using data from the National Birth Defects Prevention Study, we investigated the association between paternal occupation and birth defects in a case-control study of cases comprising over 60 different types of birth defects (n = 9998) and non-malformed controls (n = 4066) with dates of delivery between 1997 and 2004. Methods Using paternal occupational histories reported by mothers via telephone interview, jobs were systematically classified into 63 groups based on shared exposure profiles within occupation and industry. Data were analyzed using Bayesian logistic regression with a hierarchical prior for dependent shrinkage to stabilize estimation with sparse data. Results Several occupations were associated with an increased prevalence of various birth defect categories, including: mathematical, physical and computer scientists; artists; photographers and photo processors; food service workers; landscapers and groundskeepers; hairdressers and cosmetologists; office and administrative support workers; sawmill workers; petroleum and gas workers; chemical workers; printers; material moving equipment operators; and motor vehicle operators. Conclusions Findings from this study might be used to identify specific occupations worthy of further investigation, and to generate hypotheses about chemical or physical exposures common to such occupations. PMID:22782864
Huang, Chi-Chun; Hsu, Tsai-Wen; Wang, Hao-Ven; Liu, Zin-Huang; Chen, Yi-Yen; Chiu, Chi-Te; Huang, Chao-Li; Hung, Kuo-Hsiang; Chiang, Tzen-Yuh
2016-01-01
Postglacial climate changes alter the geographical distributions and diversity of species. Such ongoing changes often force species to migrate along latitude/altitude gradients. Altitudinal gradients represent an assemblage of environmental, especially climatic, factors that influence plant distributions. Global warming that triggered upward migrations has therefore impacted the alpine plants of an island. In this study, we examined the genetic structure of Juniperus morrisonicola, a dominant alpine species in Taiwan, and inferred historical demographic dynamics based on multilocus analyses. Lower levels of genetic diversity in the north indicated that populations at higher latitudes were vulnerable to climate change, possibly related to historical alpine glaciers. Neither organellar DNA nor nuclear genes displayed geographical subdivisions, indicating that populations were likely interconnected before migrating upward to isolated mountain peaks, which leave low possibilities of seed/pollen dispersal across mountain ranges. Bayesian skyline plots suggested steady population growth of J. morrisonicola followed by recent demographic contraction. In contrast, most lower-elevation plants experienced recent demographic expansion as a result of global warming. The endemic alpine conifer may have experienced dramatic climate changes over the alternation of glacial and interglacial periods, as indicated by the trend of decreasing genetic diversity with the altitudinal gradient, together with the fact of upward migration.
A close examination of double filtering with fold change and t test in microarray analysis
2009-01-01
Background Many researchers use the double filtering procedure with fold change and t test to identify differentially expressed genes, in the hope that the double filtering will provide extra confidence in the results. Due to its simplicity, the double filtering procedure has been popular with applied researchers despite the development of more sophisticated methods. Results This paper, for the first time to our knowledge, provides theoretical insight into the drawback of the double filtering procedure. We show that fold change assumes all genes have a common variance while the t statistic assumes gene-specific variances. The two statistics are based on contradicting assumptions. Under the assumption that gene variances arise from a mixture of a common variance and gene-specific variances, we develop the theoretically most powerful likelihood ratio test statistic. We further demonstrate that the posterior inference based on a Bayesian mixture model and the widely used significance analysis of microarrays (SAM) statistic are better approximations to the likelihood ratio test than the double filtering procedure. Conclusion We demonstrate through hypothesis testing theory, simulation studies and real data examples that well constructed shrinkage testing methods, which can be united under the mixture gene variance assumption, can considerably outperform the double filtering procedure. PMID:19995439
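The contrast can be made concrete with a toy simulation in which gene variances follow the mixture assumption above; the fold-change cut-off, sample sizes and the SAM-like statistic below are illustrative choices, not the paper's exact procedure.

# Toy illustration of the two filters and a SAM-like shrinkage statistic.
# Gene variances are drawn from a mixture of a common variance and
# gene-specific variances, matching the assumption discussed above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
genes, reps = 2000, 5
de = np.zeros(genes, dtype=bool)
de[:100] = True                                  # 100 truly DE genes

sd = np.where(rng.random(genes) < 0.5, 0.4,      # common variance...
              rng.uniform(0.2, 1.0, genes))      # ...or gene-specific
diff = np.where(de, 1.0, 0.0)
x = rng.normal(0, sd[:, None], (genes, reps))              # condition 1
y = rng.normal(diff[:, None], sd[:, None], (genes, reps))  # condition 2

fold = y.mean(1) - x.mean(1)                     # log-scale fold change
t, p = stats.ttest_ind(y, x, axis=1)

# Double filtering: |fold change| > 0.5 AND t-test p < 0.05
double = (np.abs(fold) > 0.5) & (p < 0.05)

# SAM-like shrinkage statistic: add a small constant s0 to the denominator
se = np.sqrt(x.var(1, ddof=1) / reps + y.var(1, ddof=1) / reps)
s0 = np.median(se)
d = fold / (se + s0)
sam = np.abs(d) > np.quantile(np.abs(d), 1 - double.mean())  # same call count

for name, call in [("double filter", double), ("SAM-like", sam)]:
    tp = (call & de).sum(); fp = (call & ~de).sum()
    print(f"{name}: {tp} true positives, {fp} false positives")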
Condition Number Regularized Covariance Estimation
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2012-01-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
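A sketch of the truncation form of such an estimator: sample eigenvalues are clipped into an interval [u, kappa*u], with u chosen here by a simple one-dimensional likelihood search as a stand-in for the paper's exact solution.

# Sketch of condition-number-regularized covariance estimation: sample
# eigenvalues are clipped into an interval [u, kappa*u], with u chosen to
# minimise the Gaussian negative log-likelihood. This mirrors the
# truncation form of the paper's estimator; the 1-D search here is a
# simplified stand-in for its exact solution.
import numpy as np
from scipy.optimize import minimize_scalar

def cond_reg_cov(X, kappa=30.0):
    S = np.cov(X, rowvar=False)
    l, V = np.linalg.eigh(S)

    def nll(u):
        lam = np.clip(l, u, kappa * u)
        return np.sum(np.log(lam) + l / lam)

    res = minimize_scalar(nll, bounds=(1e-8, l.max()), method="bounded")
    lam = np.clip(l, res.x, kappa * res.x)
    return (V * lam) @ V.T

rng = np.random.default_rng(0)
p, n = 50, 40                       # "large p, small n": singular sample cov
X = rng.normal(size=(n, p))

S = np.cov(X, rowvar=False)
Sig = cond_reg_cov(X, kappa=30.0)
print("cond(sample):      %.2e" % np.linalg.cond(S))
print("cond(regularized): %.2e" % np.linalg.cond(Sig))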
NASA Astrophysics Data System (ADS)
Schwenker, Megan; Marlowe, Robert; Lee, Scott; Rupprecht, Allan
2005-03-01
Highly oriented, wet-spun films of DNA expand in the direction perpendicular to the helical axis as the hydration of the film is increased. CsDNA films with a high CsCl content show an unexpected shrinkage at a relative humidity of 92%. In our most recent experiments, we measured the perpendicular dimension of CsDNA films as a function of both hydration and CsCl concentration. Our preliminary results show that no shrinkage is observed at low CsCl contents, indicating that the CsCl plays an integral role in the shrinkage phenomenon.
Use of Empirical Estimates of Shrinkage in Multiple Regression: A Caution.
ERIC Educational Resources Information Center
Kromrey, Jeffrey D.; Hines, Constance V.
1995-01-01
The accuracy of four empirical techniques to estimate shrinkage in multiple regression was studied through Monte Carlo simulation. None of the techniques provided unbiased estimates of the population squared multiple correlation coefficient, but the normalized jackknife and bootstrap techniques demonstrated marginally acceptable performance with…
DOT National Transportation Integrated Search
2002-08-01
The purpose of this research is to evaluate the effectiveness of soil cement shrinkage crack mitigation techniques. Ten test sections, 1000 feet long, were constructed on LA 89 in Vermilion Parish. The shrinkage crack mitigation methods being evaluat...
NASA Astrophysics Data System (ADS)
Khromova, T. E.; Dyurgerov, M. B.; Barry, R. G.
2003-08-01
Global analysis of glacier regimes reveals widespread wastage since the late 1970s, with a marked acceleration in the late 1980s. We investigate changes in the heavily glacierized Ak-shirak Range, central Tien Shan plateau (43°N, 75°E), using air photo mapping surveys (1943 and 1977), ASTER imagery (2001), and long-term glaciological and meteorological observations. The wasting of the Ak-shirak glacier system features a decrease in average glacier size and an increase in the area of outcrops. A small shrinkage during 1943-1977 was followed by a greater than 20% reduction during 1977-2001 in response to increases in summer and annual air temperature and decreases in annual precipitation.
Analysis of the shrinkage at the thick plate part using response surface methodology
NASA Astrophysics Data System (ADS)
Hatta, N. M.; Azlan, M. Z.; Shayfull, Z.; Roselina, S.; Nasir, S. M.
2017-09-01
Injection moulding is a well-known manufacturing process, especially for producing plastic products. To ensure final product quality, many precautions must be taken, such as the parameter settings at the initial stage of the process. If these parameters are set up wrongly, defects may occur, and one of the best-known defects in the injection moulding process is shrinkage. To overcome this problem, the parameter settings need to be optimally adjusted at the precaution stage, and this paper focuses on analysing the shrinkage of a thick plate part by optimising the parameters with the help of Response Surface Methodology (RSM) and ANOVA analysis. In previous studies, the outstanding parameter for minimising shrinkage of the moulded part was packing pressure. Therefore, with reference to the previous literature, packing pressure was selected as a parameter setting for this study, together with three other parameters: melt temperature, cooling time and mould temperature. The analysis of the process was obtained from simulation with the Autodesk Moldflow Insight (AMI) software, and the material used for the moulded part was Acrylonitrile Butadiene Styrene (ABS). The analysis found that the shrinkage can be minimised and that the significant parameters are packing pressure, mould temperature and melt temperature.
Automatic measurement for dimensional changes of woven fabrics based on texture
NASA Astrophysics Data System (ADS)
Liu, Jihong; Jiang, Hongxia; Liu, X.; Chai, Zhilei
2014-01-01
Dimensional change, or shrinkage, is an important functional attribute of woven fabrics that affects their basic function and market price. This paper presents a machine vision system that evaluates the shrinkage of woven fabrics by analyzing changes in fabric construction. The proposed measurement method has three features: (i) no shrinkage markers are stained onto the fabric specimen, in contrast to the existing measurement method; (ii) the system can be used on fabric of reduced area; and (iii) the system can be installed and used as a laboratory or industrial application system. The method processes the fabric image in four steps: acquiring an image from the woven fabric sample; obtaining a gray image and segmenting the warp and weft of the fabric based on the fast Fourier transform and inverse fast Fourier transform; calculating the spacing of the warp or weft sets by the gray projection method; and characterizing the shrinkage of the woven fabric by the average spacing, the coefficient of variation of the spacing, and so on. Experimental results on virtual and physical woven fabrics indicated that the method can obtain detailed shrinkage information for woven fabric. The method was programmed in Matlab, and a graphical user interface was built in Delphi. The program has potential for practical use in the textile industry.
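The spacing measurement at the heart of the method can be sketched as follows, assuming a synthetic woven texture; shrinkage is then the relative change in the FFT-estimated yarn spacing.

# Sketch of the spacing measurement: the gray image is projected onto
# one axis, the dominant spatial frequency of the projection is found by
# FFT, and the average yarn spacing follows as its reciprocal. Shrinkage
# would then be the relative change in spacing before and after washing.
import numpy as np

def yarn_spacing(image, axis=0):
    profile = image.mean(axis=axis)              # gray projection
    profile = profile - profile.mean()
    spectrum = np.abs(np.fft.rfft(profile))
    spectrum[0] = 0.0
    freq = np.fft.rfftfreq(profile.size)         # cycles per pixel
    return 1.0 / freq[np.argmax(spectrum)]       # pixels per yarn

# Synthetic fabric: 12-pixel weft spacing before, 11 px after relaxation
_, x = np.mgrid[0:600, 0:600]
before = 0.5 + 0.5 * np.cos(2 * np.pi * x / 12.0)
after = 0.5 + 0.5 * np.cos(2 * np.pi * x / 11.0)

s0, s1 = yarn_spacing(before), yarn_spacing(after)
shrinkage = 100.0 * (s0 - s1) / s0
print(f"spacing: {s0:.2f} -> {s1:.2f} px, shrinkage {shrinkage:.1f}%")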
Ritz, Ludivine; Segobin, Shailendra; Lannuzel, Coralie; Boudehent, Céline; Vabret, François; Eustache, Francis; Beaunieux, Hélène; Pitel, Anne L
2016-09-01
Alcoholism is associated with widespread brain structural abnormalities affecting mainly the frontocerebellar and the Papez's circuits. Brain glucose metabolism has received limited attention; the few available studies used a region-of-interest approach and showed reduced global brain metabolism, predominantly in the frontal and parietal lobes. Even though these studies have examined the relationship between grey matter shrinkage and hypometabolism, none has performed a direct voxel-by-voxel comparison between the degrees of structural and metabolic abnormalities. Seventeen alcoholic patients and 16 control subjects underwent both structural magnetic resonance imaging and (18)F-2-fluoro-deoxy-glucose-positron emission tomography examinations. Structural abnormalities and hypometabolism were examined in alcoholic patients compared with control subjects using two-sample t-tests. Then, these two patterns of brain damage were directly compared with a paired t-test. Compared to controls, alcoholic patients had grey matter shrinkage and hypometabolism in the fronto-cerebellar circuit and several nodes of Papez's circuit. The direct comparison revealed greater shrinkage than hypometabolism in the cerebellum, cingulate cortex, thalamus and hippocampus and parahippocampal gyrus. Conversely, hypometabolism was more severe than shrinkage in the dorsolateral, premotor and parietal cortices. The distinct profiles of abnormalities found within the Papez's circuit, the fronto-cerebellar circuit and the parietal gyrus in chronic alcoholism suggest the involvement of different pathological mechanisms. © The Author(s) 2015.
SEM-induced shrinkage and site-selective modification of single-crystal silicon nanopores
NASA Astrophysics Data System (ADS)
Chen, Qi; Wang, Yifan; Deng, Tao; Liu, Zewen
2017-07-01
Solid-state nanopores with feature sizes around 5 nm play a critical role in bio-sensing fields, especially in single molecule detection and sequencing of DNA, RNA and proteins. In this paper we present a systematic study on shrinkage and site-selective modification of single-crystal silicon nanopores with a conventional scanning electron microscope (SEM). Square nanopores with measurable sizes as small as 8 nm × 8 nm and rectangular nanopores with feature sizes (the smaller of length and width) down to 5 nm have been obtained using the SEM-induced shrinkage technique. Analysis by energy dispersive x-ray spectroscopy and the recovery of the pore size and morphology reveal that the material grown along the edge of the nanopore results from the deposition of hydrocarbon compounds, without structural damage during the shrinking process. A simplified model for pore shrinkage has been developed based on observation of the cross-sectional morphology of the shrunk nanopore. The main factors affecting controlled shrinking of the nanopores, such as the accelerating voltage, spot size, scanned area of the e-beam, and the initial pore size, are discussed. It is found that single-crystal silicon nanopores shrink linearly with time under localized SEM e-beam irradiation in all cases, and the pore shrinkage rate is inversely proportional to the initial equivalent diameter of the pore under the same e-beam conditions.
Self-healing of drying shrinkage cracks in cement-based materials incorporating reactive MgO
NASA Astrophysics Data System (ADS)
Qureshi, T. S.; Al-Tabbaa, A.
2016-08-01
Excessive drying shrinkage is one of the major concerns for the longevity and strength performance of concrete structures, as it can cause the formation of cracks in the concrete. This research aims to improve the autogenous self-healing capacity of traditional Portland cement (PC) systems with respect to drying shrinkage crack healing by adding expansive minerals such as reactive magnesium oxide (MgO). Two different reactive grades (high 'N50' and moderately high '92-200') of MgO were added to PC. Cracks were induced in restrained-end prism samples through natural drying shrinkage over 28 days after casting. Samples were then cured under water for 28 and 56 days, and the self-healing capacity was investigated in terms of mechanical strength recovery, crack sealing efficiency and improvement in durability. Finally, the microstructures of the healing materials were investigated using FT-IR, XRD and SEM-EDX. Overall, N50 mixes showed higher expansion and drying shrinkage than 92-200 mixes. The autogenous self-healing performance of the MgO-containing samples was much higher than that of the control (PC only) mixes. Cracks up to 500 μm were sealed in most MgO-containing samples after 28 days. In the microstructural investigations, highly expansive Mg-rich hydro-carbonate bridges were found along with traditional calcium-based self-healing compounds (calcite, portlandite, calcium silicate hydrates and ettringite).
Gerber, Brian D; Kendall, William L; Hooten, Mevin B; Dubovsky, James A; Drewien, Roderick C
2015-09-01
1. Prediction is fundamental to scientific enquiry and application; however, ecologists tend to favour explanatory modelling. We discuss a predictive modelling framework to evaluate ecological hypotheses and to explore novel/unobserved environmental scenarios to assist conservation and management decision-makers. We apply this framework to develop an optimal predictive model for juvenile (<1 year old) sandhill crane Grus canadensis recruitment of the Rocky Mountain Population (RMP). We consider spatial climate predictors motivated by hypotheses of how drought across multiple time-scales and spring/summer weather affects recruitment. 2. Our predictive modelling framework focuses on developing a single model that includes all relevant predictor variables, regardless of collinearity. This model is then optimized for prediction by controlling model complexity using a data-driven approach that marginalizes or removes irrelevant predictors from the model. Specifically, we highlight two approaches of statistical regularization, Bayesian least absolute shrinkage and selection operator (LASSO) and ridge regression. 3. Our optimal predictive Bayesian LASSO and ridge regression models were similar and on average 37% superior in predictive accuracy to an explanatory modelling approach. Our predictive models confirmed a priori hypotheses that drought and cold summers negatively affect juvenile recruitment in the RMP. The effects of long-term drought can be alleviated by short-term wet spring-summer months; however, the alleviation of long-term drought has a much greater positive effect on juvenile recruitment. The number of freezing days and snowpack during the summer months can also negatively affect recruitment, while spring snowpack has a positive effect. 4. Breeding habitat, mediated through climate, is a limiting factor on population growth of sandhill cranes in the RMP, which could become more limiting with a changing climate (i.e. increased drought). These effects are likely not unique to cranes. The alteration of hydrological patterns and water levels by drought may impact many migratory, wetland nesting birds in the Rocky Mountains and beyond. 5. Generalizable predictive models (trained by out-of-sample fit and based on ecological hypotheses) are needed by conservation and management decision-makers. Statistical regularization improves predictions and provides a general framework for fitting models with a large number of predictors, even those with collinearity, to simultaneously identify an optimal predictive model while conducting rigorous Bayesian model selection. Our framework is important for understanding population dynamics under a changing climate and has direct applications for making harvest and habitat management decisions. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
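A schematic of the regularized predictive-modelling workflow on simulated, deliberately collinear predictors; scikit-learn's LassoCV and BayesianRidge are used here as accessible stand-ins for the fully Bayesian LASSO and ridge regression models fitted in the paper.

# Sketch of the regularized predictive-modelling idea: all candidate
# (possibly collinear) climate predictors enter one model, shrinkage
# controls complexity, and out-of-sample error picks the final model.
# Data are simulated; the predictors are hypothetical climate covariates.
import numpy as np
from sklearn.linear_model import LassoCV, BayesianRidge, LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 120, 12                       # years x candidate climate predictors
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.3 * rng.normal(size=n)   # deliberately collinear pair
y = 0.6*X[:, 0] - 0.5*X[:, 2] + 0.3*X[:, 5] + rng.normal(0, 1.0, n)

models = {"OLS (explanatory)": LinearRegression(),
          "LASSO (shrinkage)": LassoCV(cv=5),
          "Bayesian ridge": BayesianRidge()}

for name, model in models.items():
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name:20s} out-of-sample MSE: {mse:.3f}")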
Predicting individual brain functional connectivity using a Bayesian hierarchical model.
Dai, Tian; Guo, Ying
2017-02-15
Network-oriented analysis of functional magnetic resonance imaging (fMRI), especially resting-state fMRI, has revealed important association between abnormal connectivity and brain disorders such as schizophrenia, major depression and Alzheimer's disease. Imaging-based brain connectivity measures have become a useful tool for investigating the pathophysiology, progression and treatment response of psychiatric disorders and neurodegenerative diseases. Recent studies have started to explore the possibility of using functional neuroimaging to help predict disease progression and guide treatment selection for individual patients. These studies provide the impetus to develop statistical methodology that would help provide predictive information on disease progression-related or treatment-related changes in neural connectivity. To this end, we propose a prediction method based on Bayesian hierarchical model that uses individual's baseline fMRI scans, coupled with relevant subject characteristics, to predict the individual's future functional connectivity. A key advantage of the proposed method is that it can improve the accuracy of individualized prediction of connectivity by combining information from both group-level connectivity patterns that are common to subjects with similar characteristics as well as individual-level connectivity features that are particular to the specific subject. Furthermore, our method also offers statistical inference tools such as predictive intervals that help quantify the uncertainty or variability of the predicted outcomes. The proposed prediction method could be a useful approach to predict the changes in individual patient's brain connectivity with the progression of a disease. It can also be used to predict a patient's post-treatment brain connectivity after a specified treatment regimen. Another utility of the proposed method is that it can be applied to test-retest imaging data to develop a more reliable estimator for individual functional connectivity. We show there exists a nice connection between our proposed estimator and a recently developed shrinkage estimator of connectivity measures in the neuroimaging community. We develop an expectation-maximization (EM) algorithm for estimation of the proposed Bayesian hierarchical model. Simulations studies are performed to evaluate the accuracy of our proposed prediction methods. We illustrate the application of the methods with two data examples: the longitudinal resting-state fMRI from ADNI2 study and the test-retest fMRI data from Kirby21 study. In both the simulation studies and the fMRI data applications, we demonstrate that the proposed methods provide more accurate prediction and more reliable estimation of individual functional connectivity as compared with alternative methods. Copyright © 2017 Elsevier Inc. All rights reserved.
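The shrinkage intuition behind the hierarchical model can be sketched with a method-of-moments version on simulated test-retest data; this simplified weighting is not the paper's EM-fitted model, but it shows why combining group-level and individual-level information improves the estimate.

# Sketch of the shrinkage idea behind the model: an individual's
# predicted connectivity is a weighted average of their own baseline
# estimate and the group mean, with weights from between- and
# within-subject variance components (method-of-moments estimates from
# repeated scans). A simplified stand-in for the full hierarchical
# model and its EM algorithm; all data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_scan, n_edge = 30, 2, 100

truth = rng.normal(0.4, 0.15, (n_sub, 1, n_edge))             # subject effects
scans = truth + rng.normal(0, 0.10, (n_sub, n_scan, n_edge))  # scan noise

subj_mean = scans.mean(axis=1)                 # per-subject estimate
group_mean = subj_mean.mean(axis=0)            # group-level pattern

# Variance components per edge (method of moments)
within = scans.var(axis=1, ddof=1).mean(axis=0) / n_scan
between = np.maximum(subj_mean.var(axis=0, ddof=1) - within, 0.0)
w = between / (between + within + 1e-12)       # reliability weight

shrunk = w * subj_mean + (1 - w) * group_mean  # per-subject prediction

mse_raw = np.mean((subj_mean - truth[:, 0, :]) ** 2)
mse_shrunk = np.mean((shrunk - truth[:, 0, :]) ** 2)
print(f"MSE raw: {mse_raw:.4f}  MSE shrunk: {mse_shrunk:.4f}")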
DOT National Transportation Integrated Search
2017-02-01
The two focus areas of this research address longstanding problems of (1) cracking of concrete slabs due to creep and shrinkage and (2) high performance compositions for grouting and joining precast concrete structural elements. Cracking of bridge de...
2011-04-01
thus they should only be used when experienced operators have been trained on using the material with the mixer. ABC Cement was suited for various... autogenous shrinkage, all of which occur during hydration. Shrinkage potential is important because repair materials that shrink excessively are more
An optimal strategy for functional mapping of dynamic trait loci.
Jin, Tianbo; Li, Jiahan; Guo, Ying; Zhou, Xiaojing; Yang, Runqing; Wu, Rongling
2010-02-01
As an emerging powerful approach for mapping quantitative trait loci (QTLs) responsible for dynamic traits, functional mapping models the time-dependent mean vector with biologically meaningful equations and is likely to generate biologically relevant and interpretable results. Given the autocorrelated nature of a dynamic trait, functional mapping requires models for the structure of the covariance matrix. In this article, we provide a comprehensive set of approaches for modelling the covariance structure and incorporate each of these approaches into the framework of functional mapping. Bayesian information criterion (BIC) values are used as a model selection criterion to choose the optimal combination of submodels for the mean vector and covariance structure. In an example of leaf age growth from a rice molecular genetics project, the best submodel combination was found to be the Gaussian model for the correlation structure, a power equation of order 1 for the variance, and the power curve for the mean vector. Under this combination, several significant QTLs for leaf age growth trajectories were detected on different chromosomes. Our model can be readily used to study the genetic architecture of dynamic traits of agricultural value.
A new high resolution permafrost map of Iceland from Earth Observation data
NASA Astrophysics Data System (ADS)
Barnie, Talfan; Conway, Susan; Balme, Matt; Graham, Alastair
2017-04-01
High resolution maps of permafrost are required for ongoing monitoring of environmental change and the resulting hazards to ecosystems, people and infrastructure. However, permafrost maps are difficult to construct - direct observations require maintaining networks of sensors and boreholes in harsh environments and are thus limited in extent in space and time, and indirect observations require models or assumptions relating the measurements (e.g. weather station air temperature, basal snow temperature) to ground temperature. Operationally produced Land Surface Temperature (LST) maps from Earth Observation data can be used to make spatially contiguous estimates of mean annual skin temperature, which has been used as a proxy for the presence of permafrost. However, these maps are subject to biases due to (i) selective sampling during the day due to limited satellite overpass times, (ii) selective sampling over the year due to seasonally varying cloud cover, (iii) selective sampling of LST only during clear-sky conditions, (iv) errors in cloud masking, (v) errors in temperature-emissivity separation, and (vi) smoothing over spatial variability. In this study we attempt to compensate for some of these problems using a Bayesian modelling approach and high resolution topography-based downscaling.
Manifold absolute pressure estimation using neural network with hybrid training algorithm
Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli
2017-01-01
In a modern small gasoline engine fuel injection system, the load of the engine is estimated from the measurement of the manifold absolute pressure (MAP) sensor, which is located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM) algorithm, the Bayesian Regularization (BR) algorithm and the Particle Swarm Optimization (PSO) algorithm. Based on the results of 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant of the hybrid algorithm, LM, LM with BR, and PSO, estimating the MAP closest to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions, showing MAP estimates closer to the actual values. PMID:29190779
Mental maps and travel behaviour: meanings and models
NASA Astrophysics Data System (ADS)
Hannes, Els; Kusumastuti, Diana; Espinosa, Maikel León; Janssens, Davy; Vanhoof, Koen; Wets, Geert
2012-04-01
In this paper, the "mental map" concept is first positioned with regard to individual travel behaviour. Based on Ogden and Richards' triangle of meaning (The meaning of meaning: a study of the influence of language upon thought and of the science of symbolism. International library of psychology, philosophy and scientific method. Routledge and Kegan Paul, London, 1966), distinct thoughts, referents and symbols originating from different scientific disciplines are identified and explained in order to clear up the notion's fuzziness. Next, the use of this concept in two major areas of research relevant to travel demand modelling is indicated and discussed in detail: spatial cognition and decision-making. The relevance of these constructs for understanding and modelling individual travel behaviour is explained, and current research efforts to implement these concepts in travel demand models are addressed. Furthermore, these mental map notions are specified in two types of computational models, i.e. a Bayesian Inference Network (BIN) and a Fuzzy Cognitive Map (FCM). Both models are explained, and a numerical and a real-life example are provided. Both approaches yield a detailed quantitative representation of the mental map of decision-making problems in travel behaviour.
On the structure of Bayesian network for Indonesian text document paraphrase identification
NASA Astrophysics Data System (ADS)
Prayogo, Ario Harry; Syahrul Mubarok, Mohamad; Adiwijaya
2018-03-01
Paraphrase identification is an important process within natural language processing. The idea is to automatically recognize phrases that have different forms but the same meaning. For example, if we input the query “causing fire hazard”, the computer has to recognize that this query has the same meaning as “the cause of fire hazard”. Paraphrasing is an activity that reveals the meaning of an expression, writing, or speech using different words or forms, especially to achieve greater clarity. In this research we focus on classifying whether two Indonesian sentences are paraphrases of each other. There are four steps: preprocessing, feature extraction, classifier building, and performance evaluation. Preprocessing consists of tokenization, non-alphanumeric character removal, and stemming. After preprocessing we conduct feature extraction to build new features from the given dataset. There are two kinds of features: syntactic features and semantic features. Syntactic features consist of a normalized Levenshtein distance feature, a term-frequency-based cosine similarity feature, and an LCS (Longest Common Subsequence) feature. Semantic features consist of a Wu and Palmer feature and a Shortest Path feature. We use Bayesian networks to train the classifier, with parameters estimated by MAP (maximum a posteriori) estimation. For learning the structure of the Bayesian network, a directed acyclic graph (DAG), we use the BDeu (Bayesian Dirichlet equivalent uniform) scoring function, and to find the DAG with the best BDeu score we use the K2 algorithm. In the evaluation step we perform cross-validation. The average results from testing the classifier are as follows: precision 75.2%, recall 76.5%, F1-measure 75.8% and accuracy 75.6%.
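Two of the syntactic features above are simple to compute. The sketch below, illustrative only, implements the normalized Levenshtein distance and the term-frequency cosine similarity for a sentence pair (tokenization here is naive whitespace splitting; the paper's preprocessing also stems Indonesian words):

```python
import math
from collections import Counter

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def normalized_levenshtein(a, b):
    """Edit distance scaled to [0, 1] by the longer string's length."""
    return levenshtein(a, b) / max(len(a), len(b), 1)

def tf_cosine(a, b):
    """Term-frequency cosine similarity between two whitespace-tokenized strings."""
    ta, tb = Counter(a.split()), Counter(b.split())
    dot = sum(ta[w] * tb[w] for w in ta)
    na = math.sqrt(sum(v * v for v in ta.values()))
    nb = math.sqrt(sum(v * v for v in tb.values()))
    return dot / (na * nb) if na and nb else 0.0

s1, s2 = "causing fire hazard", "the cause of fire hazard"
print(normalized_levenshtein(s1, s2), tf_cosine(s1, s2))
```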
Wavelet extractor: A Bayesian well-tie and wavelet extraction program
NASA Astrophysics Data System (ADS)
Gunning, James; Glinsky, Michael E.
2006-06-01
We introduce a new open-source toolkit for the well-tie or wavelet extraction problem of estimating seismic wavelets from seismic data, time-to-depth information, and well-log suites. The wavelet extraction model is formulated as a Bayesian inverse problem, and the software will simultaneously estimate wavelet coefficients, other parameters associated with uncertainty in the time-to-depth mapping, positioning errors in the seismic imaging, and useful amplitude-variation-with-offset (AVO) related parameters in multi-stack extractions. It is capable of multi-well, multi-stack extractions, and uses continuous seismic data-cube interpolation to cope with the problem of arbitrary well paths. Velocity constraints in the form of checkshot data, interpreted markers, and sonic logs are integrated in a natural way. The Bayesian formulation allows computation of full posterior uncertainties of the model parameters, and the important problem of the uncertain wavelet span is addressed using a multi-model posterior developed from Bayesian model selection theory. The wavelet extraction tool is distributed as part of the Delivery seismic inversion toolkit. A simple log and seismic viewing tool is included in the distribution. The code is written in Java, and thus platform independent, but the Seismic Unix (SU) data model makes the inversion particularly suited to Unix/Linux environments. It is a natural companion piece of software to Delivery, having the capacity to produce maximum likelihood wavelet and noise estimates, but will also be of significant utility to practitioners wanting to produce wavelet estimates for other inversion codes or purposes. The generation of full parameter uncertainties is a crucial function for workers wishing to investigate questions of wavelet stability before proceeding to more advanced inversion studies.
Quantum Bayesian networks with application to games displaying Parrondo's paradox
NASA Astrophysics Data System (ADS)
Pejic, Michael
Bayesian networks and their accompanying graphical models are widely used for prediction and analysis across many disciplines. We will reformulate these in terms of linear maps. This reformulation will suggest a natural extension, which we will show is equivalent to standard textbook quantum mechanics. Therefore, this extension will be termed quantum. However, the term quantum should not be taken to imply that this extension is necessarily only of utility in situations traditionally thought of as in the domain of quantum mechanics. In principle, it may be employed in any modelling situation, say forecasting the weather or the stock market---it is up to experiment to determine if this extension is useful in practice. Even restricting to the domain of quantum mechanics, with this new formulation the advantages of Bayesian networks can be maintained for models incorporating quantum and mixed classical-quantum behavior. The use of these will be illustrated by various basic examples. Parrondo's paradox refers to the situation where two multi-round games with a fixed winning criterion, both with probability greater than one-half for one player to win, are combined. Using a possibly biased coin to determine the rule to employ for each round, paradoxically, the previously losing player now wins the combined game with probability greater than one-half. Using the extended Bayesian networks, we will formulate and analyze classical observed, classical hidden, and quantum versions of a game that displays this paradox, finding bounds for the discrepancy from naive expectations for the occurrence of the paradox. A quantum paradox inspired by Parrondo's paradox will also be analyzed. We will prove a bound for the discrepancy from naive expectations for this paradox as well. Games involving quantum walks that achieve this bound will be presented.
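The classical form of the paradox is easy to reproduce numerically. The sketch below simulates the textbook capital-dependent version (game A: a coin with win probability 1/2 - ε; game B: win probability 1/10 - ε when capital is divisible by 3, else 3/4 - ε). It illustrates the classical paradox only, not the dissertation's quantum Bayesian-network versions:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.005

def play(strategy, rounds=100_000):
    """Simulate capital under game A, game B, or a random mix of the two."""
    capital = 0
    for _ in range(rounds):
        game = strategy if strategy in ("A", "B") else rng.choice(["A", "B"])
        if game == "A":
            p = 0.5 - eps                          # slightly losing coin
        else:                                      # game B: rule depends on capital
            p = (0.1 - eps) if capital % 3 == 0 else (0.75 - eps)
        capital += 1 if rng.random() < p else -1
    return capital

for s in ("A", "B", "random"):
    print(s, play(s))
# Typically: A and B each drift negative on their own,
# while the random mix drifts positive -- Parrondo's paradox.
```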
2010-01-01
Background The information provided by dense genome-wide markers using high throughput technology is of considerable potential in human disease studies and livestock breeding programs. Genome-wide association studies relate individual single nucleotide polymorphisms (SNP) from dense SNP panels to individual measurements of complex traits, with the underlying assumption being that any association is caused by linkage disequilibrium (LD) between SNP and quantitative trait loci (QTL) affecting the trait. Often SNP are in genomic regions of no trait variation. Whole genome Bayesian models are an effective way of incorporating this and other important prior information into modelling. However a full Bayesian analysis is often not feasible due to the large computational time involved. Results This article proposes an expectation-maximization (EM) algorithm called emBayesB which allows only a proportion of SNP to be in LD with QTL and incorporates prior information about the distribution of SNP effects. The posterior probability of being in LD with at least one QTL is calculated for each SNP along with estimates of the hyperparameters for the mixture prior. A simulated example of genomic selection from an international workshop is used to demonstrate the features of the EM algorithm. The accuracy of prediction is comparable to a full Bayesian analysis but the EM algorithm is considerably faster. The EM algorithm was accurate in locating QTL which explained more than 1% of the total genetic variation. A computational algorithm for very large SNP panels is described. Conclusions emBayesB is a fast and accurate EM algorithm for implementing genomic selection and predicting complex traits by mapping QTL in genome-wide dense SNP marker data. Its accuracy is similar to Bayesian methods but it takes only a fraction of the time. PMID:20969788
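The flavor of such an EM algorithm can be sketched on a simplified marginal problem: given single-SNP effect estimates with known sampling variance, a two-component mixture (pure noise vs. QTL-tagging) is fitted by EM, yielding each SNP's posterior probability of tagging a QTL. This illustrates the mixture-prior idea only; it is not the emBayesB algorithm itself, which works jointly on genotype data:

```python
import numpy as np
from scipy.stats import norm

def em_mixture(b, s2, n_iter=100):
    """EM for a two-component mixture on SNP effect estimates b_j:
    with probability pi the SNP tags a QTL (extra effect variance v),
    otherwise it is pure noise with sampling variance s2.
    Returns pi, v and each SNP's posterior probability of tagging a QTL."""
    pi, v = 0.05, np.var(b)                         # crude starting values
    for _ in range(n_iter):
        like1 = norm.pdf(b, 0.0, np.sqrt(s2 + v))   # QTL component
        like0 = norm.pdf(b, 0.0, np.sqrt(s2))       # noise component
        post = pi * like1 / (pi * like1 + (1 - pi) * like0)  # E-step
        pi = post.mean()                                     # M-step
        v = max(np.sum(post * b**2) / np.sum(post) - s2, 1e-8)
    return pi, v, post

rng = np.random.default_rng(1)
s2 = 0.01                                           # known sampling variance
effects = np.where(rng.random(2000) < 0.02, rng.normal(0, 0.3, 2000), 0.0)
b = effects + rng.normal(0, np.sqrt(s2), 2000)
pi, v, post = em_mixture(b, s2)
print(pi, v, (post > 0.5).sum())                    # mixture weight, variance, hits
```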
Parallelized Bayesian inversion for three-dimensional dental X-ray imaging.
Kolehmainen, Ville; Vanne, Antti; Siltanen, Samuli; Järvenpää, Seppo; Kaipio, Jari P; Lassas, Matti; Kalke, Martti
2006-02-01
Diagnostic and operational tasks based on dental radiology often require three-dimensional (3-D) information that is not available in a single X-ray projection image. Comprehensive 3-D information about tissues can be obtained by computerized tomography (CT) imaging. However, in dental imaging a conventional CT scan may not be available or practical because of high radiation dose, low resolution, or the cost of the CT scanner equipment. In this paper, we consider a novel type of 3-D imaging modality for dental radiology. We consider situations in which projection images of the teeth are taken from a few sparsely distributed projection directions using the dentist's regular (digital) X-ray equipment and the 3-D X-ray attenuation function is reconstructed. A complication in these experiments is that the reconstruction of the 3-D structure based on a few projection images becomes an ill-posed inverse problem. Bayesian inversion is a well-suited framework for reconstruction from such incomplete data. In Bayesian inversion, the ill-posed reconstruction problem is formulated in a well-posed probabilistic form in which a priori information is used to compensate for the incomplete information of the projection data. In this paper we propose a Bayesian method for 3-D reconstruction in dental radiology, based in part on Kolehmainen et al. (2003). The prior model for dental structures consists of a weighted l1 and total variation (TV) prior together with a positivity prior. The inverse problem is stated as finding the maximum a posteriori (MAP) estimate. To make the 3-D reconstruction computationally feasible, a parallelized version of an optimization algorithm is implemented for a Beowulf cluster computer. The method is tested with projection data from dental specimens and patient data. Tomosynthetic reconstructions are given as a reference for the proposed method.
Combining Multiple Types of Intelligence to Generate Probability Maps of Moving Targets
2013-09-01
This entry is an extraction fragment from a report on combining multiple intelligence sources: it describes a normalization coefficient k similar to that in Dempster-Shafer's combination rule, and a "mass mean" rule of combination stated to be the most straightforward one; the updated (unnormalized) distribution given in the report's Eq. (3.3) is not recoverable from the extraction. Recoverable references include Chen, Z. (2003), Bayesian filtering: From Kalman filters to particle filters and beyond, Technical report, McMaster University.
Learning Probabilistic Features for Robotic Navigation Using Laser Sensors
Aznar, Fidel; Pujol, Francisco A.; Pujol, Mar; Rizo, Ramón; Pujol, María-José
2014-01-01
SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within the map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Some previous SLAM implementations had computational complexities ranging from O(N log N) to O(N²), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates those areas that potentially match the sections that have been previously learned. After the training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor. Thus, it perceives different sections of its world. In addition, in order to make our system usable in a low-cost robot, low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers are used. PMID:25415377
Markon, C.J.; Wesser, Sara
1998-01-01
A land cover map of the National Park Service northwest Alaska management area was produced using digitally processed Landsat data. These and other environmental data were incorporated into a geographic information system to provide baseline information about the nature and extent of resources present in this northwest Alaskan environment. This report details the methodology, depicts vegetation profiles of the surrounding landscape, and describes the different vegetation types mapped. Portions of nine Landsat satellite (multispectral scanner and thematic mapper) scenes were used to produce a land cover map of the Cape Krusenstern National Monument and Noatak National Preserve and to update an existing land cover map of Kobuk Valley National Park. A Bayesian multivariate classifier was applied to the multispectral data sets, followed by the application of ancillary data (elevation, slope, aspect, soils, watersheds, and geology) to enhance the spectral separation of classes into more meaningful vegetation types. The resulting land cover map contains six major land cover categories (forest, shrub, herbaceous, sparse/barren, water, other) and 19 subclasses encompassing 7 million hectares. General narratives of the distribution of the subclasses throughout the project area are given, along with vegetation profiles showing common relationships between topographic gradients and vegetation communities.
Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.
Kim, Soohwan; Kim, Jonghyuk
2013-10-01
Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most of the applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. In particular, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from a high computational complexity of O(n³) + O(n²m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with the huge training sets that are common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we treat the Gaussian processes as implicit functions and extract iso-surfaces from the resulting scalar fields, the continuous occupancy maps, using marching cubes. In this way, we are able to build two types of map representations within a single framework of Gaussian processes. Experimental results with 2-D simulated data show that the accuracy of our approximated method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
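A minimal sketch of the divide-and-conquer idea, using scikit-learn in place of the paper's own implementation: training points are partitioned with k-means (standing in for the coarse-to-fine clustering), one GP classifier is fitted per cluster, and queries are routed to the nearest cluster. All data here are synthetic and the kernel choice is illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical 2-D training data: point locations with occupied/free labels.
X = rng.uniform(0, 10, size=(3000, 2))
y = (np.sin(X[:, 0]) + 0.8 * rng.normal(size=3000) > 0).astype(int)

# Partition the map into local regions; fitting one GP per cluster makes each
# inversion O(n_k^3) on a small n_k instead of O(n^3) on the whole data set.
k = 10
km = KMeans(n_clusters=k, n_init=10).fit(X)
gps = [GaussianProcessClassifier(kernel=1.0 * RBF(1.0))
       .fit(X[km.labels_ == c], y[km.labels_ == c]) for c in range(k)]

# Query: route each test point to its nearest cluster's local GP.
Xq = rng.uniform(0, 10, size=(5, 2))
occ = [gps[c].predict_proba(Xq[i:i + 1])[0, 1] for i, c in enumerate(km.predict(Xq))]
print(np.round(occ, 2))   # continuous occupancy probabilities
```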
Evaluation of spatio-temporal Bayesian models for the spread of infectious diseases in oil palm.
Denis, Marie; Cochard, Benoît; Syahputra, Indra; de Franqueville, Hubert; Tisné, Sébastien
2018-02-01
In the field of epidemiology, studies are often focused on mapping diseases in relation to time and space. Hierarchical modeling is a common flexible and effective tool for modeling problems related to disease spread. In the context of oil palm plantations infected by the fungal pathogen Ganoderma boninense, we propose and compare two spatio-temporal hierarchical Bayesian models addressing the lack of information on propagation modes and transmission vectors. We investigate two alternative process models to study the unobserved mechanism driving the infection process. The models help gain insight into the spatio-temporal dynamic of the infection by identifying a genetic component in the disease spread and by highlighting a spatial component acting at the end of the experiment. In this challenging context, we propose models that provide assumptions on the unobserved mechanism driving the infection process while making short-term predictions using ready-to-use software. Copyright © 2018 Elsevier Ltd. All rights reserved.
Stelzenmüller, V; Lee, J; Garnacho, E; Rogers, S I
2010-10-01
For the UK continental shelf we developed a Bayesian Belief Network-GIS framework to visualise relationships between cumulative human pressures, sensitive marine landscapes and landscape vulnerability, to assess the consequences of potential marine planning objectives, and to map uncertainty-related changes in management measures. Results revealed that the spatial assessment of footprints and intensities of human activities had more influence on landscape vulnerabilities than the type of landscape sensitivity measure used. We addressed questions regarding the consequences of potential planning targets and the necessary management measures through spatially explicit assessment. We conclude that the BN-GIS framework is a practical tool that allows for the visualisation of relationships, the spatial assessment of uncertainty related to spatial management scenarios, and the engagement of different stakeholder views, and that enables quick updates with new spatial data and relationships. Ultimately, such BN-GIS based tools can support the decision-making process used in adaptive marine management. Copyright © 2010 Elsevier Ltd. All rights reserved.
Bayesian multivariate hierarchical transformation models for ROC analysis.
O'Malley, A James; Zou, Kelly H
2006-02-15
A Bayesian multivariate hierarchical transformation model (BMHTM) is developed for receiver operating characteristic (ROC) curve analysis based on clustered continuous diagnostic outcome data with covariates. Two special features of this model are that it incorporates non-linear monotone transformations of the outcomes and that multiple correlated outcomes may be analysed. The mean, variance, and transformation components are all modelled parametrically, enabling a wide range of inferences. The general framework is illustrated by focusing on two problems: (1) analysis of the diagnostic accuracy of a covariate-dependent univariate test outcome requiring a Box-Cox transformation within each cluster to map the test outcomes to a common family of distributions; (2) development of an optimal composite diagnostic test using multivariate clustered outcome data. In the second problem, the composite test is estimated using discriminant function analysis and compared to the test derived from logistic regression analysis where the gold standard is a binary outcome. The proposed methodology is illustrated on prostate cancer biopsy data from a multi-centre clinical trial.
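To make the first illustration concrete: the sketch below, on simulated data, Box-Cox transforms a skewed diagnostic outcome and summarizes accuracy by the area under the ROC curve. It is a simplification (no covariates, no clustering, a single common transformation) of the hierarchical model described above:

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical skewed biomarker values for healthy vs diseased subjects.
healthy = rng.lognormal(mean=0.0, sigma=0.6, size=200)
diseased = rng.lognormal(mean=0.7, sigma=0.6, size=200)
y = np.r_[np.zeros(200), np.ones(200)]
x = np.r_[healthy, diseased]

# Box-Cox maps the skewed outcomes toward normality, the distributional
# family a parametric (binormal) ROC model assumes.
x_t, lam = stats.boxcox(x)
print("lambda:", round(lam, 2))

# AUC is invariant under monotone transforms, so it matches the raw-scale AUC;
# the transformation matters for the parametric curve fit, not the ranking.
print("AUC:", round(roc_auc_score(y, x_t), 3))
```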
Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.
Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan
2016-04-28
This paper presents a novel inverse synthetic aperture radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to improve performance on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed method, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computational cost. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
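The effect of a sparse prior in MAP estimation can be viewed as a coefficient-wise shrinkage operator. The sketch below solves the scalar MAP problem numerically for a Laplacian penalty and a generic log-shaped penalty; the penalty forms and constants are illustrative, not the paper's exact logarithmic Laplacian functional, but they reproduce the qualitative point that a log-type prior shrinks small coefficients hard while leaving large ones nearly untouched:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def map_shrink(y, sigma2, penalty, tau):
    """Per-coefficient MAP update: shrink a noisy coefficient y by solving
    argmin_x (x - y)^2 / (2 sigma2) + tau * penalty(x)."""
    res = minimize_scalar(lambda x: (x - y) ** 2 / (2 * sigma2) + tau * penalty(x),
                          bounds=(-abs(y) - 1, abs(y) + 1), method="bounded")
    return res.x

laplace = lambda x: abs(x)                       # Laplacian prior -> soft threshold
log_type = lambda x: np.log(1.0 + abs(x) / 0.1)  # a log-shaped sparse penalty

for y in (0.05, 0.3, 2.0):
    print(y, round(map_shrink(y, 0.01, laplace, 1.0), 3),
             round(map_shrink(y, 0.01, log_type, 1.0), 3))
# Small inputs are zeroed far more aggressively by the log-shaped penalty,
# while large inputs pass through almost unshrunk (the heavier tails).
```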
Spatio-temporal Bayesian model selection for disease mapping
Carroll, R; Lawson, AB; Faes, C; Kirby, RS; Aregay, M; Watjou, K
2016-01-01
Spatio-temporal analysis of small area health data often involves choosing a fixed set of predictors prior to the final model fit. In this paper, we propose a spatio-temporal approach of Bayesian model selection to implement model selection for certain areas of the study region as well as certain years in the study time line. Here, we examine the usefulness of this approach by way of a large-scale simulation study accompanied by a case study. Our results suggest that a special case of the model selection methods, a mixture model allowing a weight parameter to indicate if the appropriate linear predictor is spatial, spatio-temporal, or a mixture of the two, offers the best option to fitting these spatio-temporal models. In addition, the case study illustrates the effectiveness of this mixture model within the model selection setting by easily accommodating lifestyle, socio-economic, and physical environmental variables to select a predominantly spatio-temporal linear predictor. PMID:28070156
Palumbo, Giovanna; Iadicicco, Agostino; Messina, Francesco; Ferone, Claudio; Campopiano, Stefania; Cioffi, Raffaele; Colangelo, Francesco
2017-12-22
This paper reports results related to early-age temperature and shrinkage measurements by means of fiber Bragg gratings (FBGs) embedded in geopolymer matrices. The sensors were properly packaged in order to discriminate between different shrinkage behavior and temperature development. Metakaolin-based geopolymer systems were investigated, involving different commercial aluminosilicate precursors and siliceous filler contents. The proposed measuring system allows very accurate monitoring of the early-age phases of the binding systems made of metakaolin geopolymer. A series of experiments were conducted on different compositions; moreover, rheological issues related to the proposed experimental method were also assessed.
Suspended, Shrinkage-Free, Electrospun PLGA Nanofibrous Scaffold for Skin Tissue Engineering.
Ru, Changhai; Wang, Feilong; Pang, Ming; Sun, Lining; Chen, Ruihua; Sun, Yu
2015-05-27
Electrospinning is a technique for creating continuous nanofibrous networks that can be architecturally similar to the structure of the extracellular matrix (ECM). However, the shrinkage of electrospun mats is unfavorable for triggering cell adhesion and further growth. In this work, electrospun PLGA nanofiber assemblies are utilized to create a scaffold. Aided by a polypropylene auxiliary supporter, the scaffold is able to maintain long-term integrity without dimensional shrinkage. This scaffold can also be suspended in cell culture medium; hence, keratinocyte cells seeded on the scaffold are exposed to air as required in skin tissue engineering. Experiments also show that human skin keratinocytes can proliferate on the scaffold and infiltrate into it.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lorut, F.; Imbert, G.; Roggero, A.
In this paper, we investigate the tendency of porous low-K dielectrics (also named ultra low-K, ULK) to shrink when exposed to the electron beam of a scanning electron microscope. Various experimental electron beam conditions have been used for irradiating ULK thin films, and the resulting shrinkage has been measured using an atomic force microscope. We report the shrinkage to be a fast, cumulative, and dose-dependent effect. Correlation of the shrinkage with incident electron beam energy loss has also been observed. The chemical modification of the ULK films within the interaction volume has been demonstrated, with a densification of the layer and a loss of carbon and hydrogen elements being observed.
NASA Astrophysics Data System (ADS)
Alevizos, Evangelos; Snellen, Mirjam; Simons, Dick; Siemes, Kerstin; Greinert, Jens
2018-06-01
This study applies three classification methods exploiting the angular dependence of acoustic seafloor backscatter, along with high resolution sub-bottom profiling, for seafloor sediment characterization in the Eckernförde Bay, Baltic Sea, Germany. This area is well suited for acoustic backscatter studies due to its shallowness, its smooth bathymetry and the presence of a wide range of sediment types. Backscatter data were acquired using a Seabeam1180 (180 kHz) multibeam echosounder and sub-bottom profiler data were recorded using a SES-2000 parametric sonar transmitting at 6 and 12 kHz. The high density of seafloor soundings allowed backscatter layers to be extracted for five beam angles over a large part of the surveyed area. A Bayesian probability method was employed for sediment classification based on the backscatter variability at a single incidence angle, whereas Maximum Likelihood Classification (MLC) and Principal Components Analysis (PCA) were applied to the multi-angle layers. The Bayesian approach was used for identifying the optimum number of acoustic classes because cluster validation is carried out prior to class assignment and class outputs are ordinal categorical values. The method is based on the principle that backscatter values from a single incidence angle follow a normal distribution for a particular sediment type. The resulting Bayesian classes were well correlated to median grain sizes and the percentage of coarse material. The MLC method uses angular response information from five layers of training areas extracted from the Bayesian classification map. The subsequent PCA is based on the transformation of these five layers into two principal components that comprise most of the data variability. These principal components were clustered into five classes after running an external cluster validation test. In general, both MLC and PCA separated the various sediment types effectively, showing good agreement (kappa > 0.7) with the Bayesian approach, which also correlates well with ground truth data (r² > 0.7). In addition, sub-bottom data were used in conjunction with the Bayesian classification results to characterize acoustic classes with respect to their geological and stratigraphic interpretation. The joint interpretation of seafloor and sub-seafloor data sets proved to be an efficient approach for better understanding seafloor backscatter patchiness and for discriminating acoustically similar classes in different geological/bathymetric settings.
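The single-angle Bayesian classification logic, fit per-sediment Gaussians to the backscatter values at one incidence angle and validate the number of classes before assigning them, can be sketched with a Gaussian mixture selected by BIC. The data and class count below are synthetic stand-ins:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical backscatter strengths (dB) at a single incidence angle:
# three sediment types with different mean scattering levels.
bs = np.r_[rng.normal(-32, 1.5, 4000), rng.normal(-26, 1.5, 3000),
           rng.normal(-20, 1.5, 3000)].reshape(-1, 1)

# Choose the number of acoustic classes by penalized fit (BIC), in the
# spirit of validating clusters before assigning classes.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(bs)
          for k in range(1, 7)}
best_k = min(models, key=lambda k: models[k].bic(bs))
gm = models[best_k]

# Relabel components by mean backscatter so classes are ordinal
# (softer to harder seafloor), as on the Bayesian classification map.
order = np.argsort(gm.means_.ravel())
labels = np.argsort(order)[gm.predict(bs)]
print("classes:", best_k, "first labels:", labels[:10])
```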
Gonzalez-Redin, Julen; Luque, Sandra; Poggio, Laura; Smith, Ron; Gimona, Alessandro
2016-01-01
An integrated methodology, based on linking Bayesian belief networks (BBN) with GIS, is proposed for combining available evidence to help forest managers evaluate implications and trade-offs between forest production and conservation measures to preserve biodiversity in forested habitats. A Bayesian belief network is a probabilistic graphical model that represents variables and their dependencies through specified probabilistic relationships. In spatially explicit decision problems where it is difficult to choose appropriate combinations of interventions, the proposed integration of a BBN with GIS helped to facilitate shared understanding of the human-landscape relationships, while fostering collective management that can be incorporated into landscape planning processes. Trade-offs become increasingly relevant in landscape contexts where the participation of many and varied stakeholder groups is indispensable. With these challenges in mind, our integrated approach incorporates GIS-based data with expert knowledge to consider two different land use interests - biodiversity value for conservation and timber production potential - with the focus on a complex mountain landscape in the French Alps. The spatial models produced provided alternative sets of suitable sites that can be used by policy makers to support conservation priorities while addressing management options. The approach provided a common reasoning language among experts from different backgrounds while helping to identify spatially explicit areas of conflict. Copyright © 2015 Elsevier Inc. All rights reserved.
Cholinergic stimulation enhances Bayesian belief updating in the deployment of spatial attention.
Vossel, Simone; Bauer, Markus; Mathys, Christoph; Adams, Rick A; Dolan, Raymond J; Stephan, Klaas E; Friston, Karl J
2014-11-19
The exact mechanisms whereby the cholinergic neurotransmitter system contributes to attentional processing remain poorly understood. Here, we applied computational modeling to psychophysical data (obtained from a spatial attention task) under a psychopharmacological challenge with the cholinesterase inhibitor galantamine (Reminyl). This allowed us to characterize the cholinergic modulation of selective attention formally, in terms of hierarchical Bayesian inference. In a placebo-controlled, within-subject, crossover design, 16 healthy human subjects performed a modified version of Posner's location-cueing task in which the proportion of validly and invalidly cued targets (percentage of cue validity, % CV) changed over time. Saccadic response speeds were used to estimate the parameters of a hierarchical Bayesian model to test whether cholinergic stimulation affected the trial-wise updating of probabilistic beliefs that underlie the allocation of attention or whether galantamine changed the mapping from those beliefs to subsequent eye movements. Behaviorally, galantamine led to a greater influence of probabilistic context (% CV) on response speed than placebo. Crucially, computational modeling suggested this effect was due to an increase in the rate of belief updating about cue validity (as opposed to the increased sensitivity of behavioral responses to those beliefs). We discuss these findings with respect to cholinergic effects on hierarchical cortical processing and in relation to the encoding of expected uncertainty or precision. Copyright © 2014 the authors.
Bayesian data fusion for spatial prediction of categorical variables in environmental sciences
NASA Astrophysics Data System (ADS)
Gengler, Sarah; Bogaert, Patrick
2014-12-01
First developed to predict continuous variables, Bayesian Maximum Entropy (BME) has become a complete framework in the context of space-time prediction since it has been extended to predict categorical variables and mixed random fields. This method proposes solutions for combining several sources of data whatever the nature of the information. However, the various attempts that were made to adapt the BME methodology to categorical variables and mixed random fields faced some limitations, such as a high computational burden. The main objective of this paper is to overcome this limitation by generalizing the Bayesian Data Fusion (BDF) theoretical framework to categorical variables; BDF is, in essence, a simplification of the BME method based on a convenient conditional independence hypothesis. The BDF methodology for categorical variables is first described and then applied to a practical case study: the estimation of soil drainage classes using a soil map and point observations in the sandy area of Flanders around the city of Mechelen (Belgium). The BDF approach is compared to BME along with more classical approaches, such as Indicator CoKriging (ICK) and logistic regression. Estimators are compared using various indicators, namely the Percentage of Correctly Classified locations (PCC) and the Average Highest Probability (AHP). Although the BDF methodology for categorical variables is a simplification of the BME approach, both methods lead to similar results and have strong advantages compared to ICK and logistic regression.
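Under the conditional independence hypothesis, the fusion step reduces to multiplying the prior by each source's likelihood and renormalizing. The sketch below shows this for a hypothetical three-class drainage example; all class probabilities are invented for illustration:

```python
import numpy as np

def fuse_categorical(prior, likelihoods):
    """Bayesian fusion of conditionally independent categorical sources:
    posterior ∝ prior * Π_i p(observation_i | class), the simplification
    that makes BDF cheap to compute."""
    post = np.asarray(prior, dtype=float)
    for like in likelihoods:
        post = post * np.asarray(like)
    return post / post.sum()

# Hypothetical drainage classes: poor / moderate / well drained.
prior = [1 / 3, 1 / 3, 1 / 3]
soil_map = [0.6, 0.3, 0.1]        # p(mapped soil unit | true class)
point_obs = [0.5, 0.4, 0.1]       # p(nearby borehole label | true class)
print(fuse_categorical(prior, [soil_map, point_obs]))
# -> ~[0.70, 0.28, 0.02]: the 'poor' class dominates after combining sources
```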
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Ma, Jun; Yang, Gang; Du, Bo; Zhang, Liangpei
2017-06-01
A new Bayesian method named Poisson Nonnegative Matrix Factorization with Parameter Subspace Clustering Constraint (PNMF-PSCC) is presented to extract endmembers from Hyperspectral Imagery (HSI). First, the method integrates the linear spectral mixture model with the Bayesian framework, formulating endmember extraction as a Bayesian inference problem. Second, the Parameter Subspace Clustering Constraint (PSCC) is incorporated into the statistical program to consider the clustering of all pixels in the parameter subspace. The PSCC enlarges differences among ground objects and helps find endmembers with smaller spectrum divergences. Meanwhile, the PNMF-PSCC method utilizes the Poisson distribution as prior knowledge of the spectral signals to better explain the quantum nature of light in an imaging spectrometer. Third, the optimization problem of PNMF-PSCC is formulated as maximizing the joint density via the maximum a posteriori (MAP) estimator. The program is solved by iteratively optimizing two sub-problems via the Alternating Direction Method of Multipliers (ADMM) framework and the FURTHESTSUM initialization scheme. Five state-of-the-art methods are implemented for comparison with the performance of PNMF-PSCC on both synthetic and real HSI datasets. Experimental results show that PNMF-PSCC outperforms all five methods in Spectral Angle Distance (SAD) and Root-Mean-Square Error (RMSE), and in particular it identifies good endmembers for ground objects with smaller spectrum divergences.
Evolution of the cerebellum as a neuronal machine for Bayesian state estimation
NASA Astrophysics Data System (ADS)
Paulin, M. G.
2005-09-01
The cerebellum evolved in association with the electric sense and vestibular sense of the earliest vertebrates. Accurate information provided by these sensory systems would have been essential for precise control of orienting behavior in predation. A simple model shows that individual spikes in electrosensory primary afferent neurons can be interpreted as measurements of prey location. Using this result, I construct a computational neural model in which the spatial distribution of spikes in a secondary electrosensory map forms a Monte Carlo approximation to the Bayesian posterior distribution of prey locations given the sense data. The neural circuit that emerges naturally to perform this task resembles the cerebellar-like hindbrain electrosensory filtering circuitry of sharks and other electrosensory vertebrates. The optimal filtering mechanism can be extended to handle dynamical targets observed from a dynamical platform; that is, to construct an optimal dynamical state estimator using spiking neurons. This may provide a generic model of cerebellar computation. Vertebrate motion-sensing neurons have specific fractional-order dynamical characteristics that allow Bayesian state estimators to be implemented elegantly and efficiently, using simple operations with asynchronous pulses, i.e. spikes. The computational neural models described in this paper represent a novel kind of particle filter, using spikes as particles. The models are specific and make testable predictions about computational mechanisms in cerebellar circuitry, while providing a plausible explanation of cerebellar contributions to aspects of motor control, perception and cognition.
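The spikes-as-particles idea can be caricatured in a few lines: each afferent spike is read as one Monte Carlo sample of prey position drawn around the spiking neuron's preferred location, so the spike distribution across the map approximates the posterior. The tuning curves, rates and 1-D setting below are all invented for illustration and are a loose sketch, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D prey localization: each afferent neuron has a Gaussian
# tuning curve, and a spike is read as a noisy measurement of prey position.
true_pos = 2.3
centers = np.linspace(-5, 5, 50)                 # preferred positions
rate = 20 * np.exp(-0.5 * ((centers - true_pos) / 1.0) ** 2)
spikes = rng.poisson(rate * 0.05)                # spike counts in a short window

# Spikes as particles: each spike contributes one sample near its neuron's
# preferred position, so the pooled samples approximate the posterior over
# prey location (flat prior, Gaussian tuning of width 1.0).
samples = np.repeat(centers, spikes)
jittered = samples + rng.normal(0, 1.0, size=samples.size)
print("posterior mean:", jittered.mean(), "sd:", jittered.std())
```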
A Commercial IOTV Cleaning Study
2010-04-12
This report snippet concerns the cost and shrinkage analysis of IOTV cleaning methods: costs were based on the manufacturer's list price without taking possible volume discounts into consideration, and equipment depreciation cost was calculated based on… The recoverable table titles list shrinkage statistical data for traditional wet laundering (with and without prewash spot cleaning), computer-controlled wet cleaning (without prewash spot cleaning), and liquid CO2 cleaning.
DOT National Transportation Integrated Search
2015-12-01
This report summarizes the findings of an experimental investigation into shrinkage, and the mitigation thereof, in alkali-activated fly ash and slag binders and concrete. The early-age (chemical and autogenous) and later-age (drying and carbonat...
DOT National Transportation Integrated Search
2001-07-01
This work pertains to preparation of concrete drying shrinkage data for proposed concrete mixtures during normal concrete trial batch verification. Selected concrete mixtures will include PennDOT Classes AAA and AA and will also include the use of ...
Evaluating shrinkage of wood propellers in a high-temperature environment
Richard Bergman; Robert J. Ross
2008-01-01
Minimizing wood shrinkage is a priority for many wood products in use, particularly engineered products manufactured to close tolerances, such as wood propellers for unmanned surveillance aircraft used in military operations. Those currently in service in the Middle East are experiencing performance problems as a consequence of wood shrinking during long-term storage...
DOT National Transportation Integrated Search
2005-02-01
MoDOT RDT Research Project R-I00-002 HPC for Bridge A6130 Route 412 Pemiscot County was recently completed in June of 2004 [Myers and Yang, 2004]. Among other research tasks, part of this research study investigated the creep, shrinkage and...
Reducing Shrinkage in Convenience Stores by the Use of the PSI.
ERIC Educational Resources Information Center
Terris, William; Jones, John W.
Employee theft of money and property is a rapidly growing problem. Successful strategies are needed to reduce employee theft; new loss prevention techniques need to be developed and evaluated. Two loss prevention programs aimed at reducing employees' theft were compared by the measure of shrinkage rates. Initially, a…
Individualized FAC on bottom tab subassemblies to minimize adhesive gap between emitter and optics
NASA Astrophysics Data System (ADS)
Sauer, Sebastian; Müller, Tobias; Haag, Sebastian; Beleke, Andreas; Zontar, Daniel; Baum, Christoph; Brecher, Christian
2017-02-01
High Power Diode Laser (HPDL) systems with short focal length fast-axis collimators (FAC) require submicron assembly precision. Conventional FAC-lens assembly processes require adhesive gaps of 50 microns or more in order to compensate for component tolerances (e.g. deviation of back focal length) and previous assembly steps. In order to control the volumetric shrinkage of fast-curing UV-adhesives, shrinkage compensation is mandatory. The novel approach described in this paper aims to minimize the impact of volumetric shrinkage due to the adhesive gap between HPDL edge emitters and the FAC lens. Firstly, the FAC is actively aligned to the edge emitter without adhesives or bottom tab. The relative position and orientation of the FAC to the emitter are measured and stored. Subsequently, an individual subassembly of FAC and bottom tab is assembled on Fraunhofer IPT's mounting station with a precision of +/-1 micron. Translational and lateral offsets can be compensated, so that a narrow and uniform glue gap results for the subsequent bonding of the bottom tab to the heatsink (Figure 4). Accordingly, FAC and bottom tab are mounted to the heatsink without major shrinkage compensation. Fraunhofer IPT's department for assembly of optical systems and automation has published several papers on active alignment of FAC lenses [SPIE LASE 8241-12], volumetric shrinkage compensation [SPIE LASE 9730-28] and FAC-on-bottom-tab assembly [SPIE LASE 9727-31] in automated production environments. The approach described in this paper combines these and is the logical continuation of that work towards higher quality HPDLs.
Karaman, E; Ozgunaltay, G
2014-01-01
To determine the volumetric polymerization shrinkage of four different types of composite resin and to evaluate microleakage of these materials in class II (MOD) cavities with and without a resin-modified glass ionomer cement (RMGIC) liner, in vitro. One hundred twenty-eight extracted human upper premolar teeth were used. After the teeth were divided into eight groups (n=16), standardized MOD cavities were prepared. Then the teeth were restored with different resin composites (Filtek Supreme XT, Filtek P 60, Filtek Silorane, Filtek Z 250) with and without a RMGIC liner (Vitrebond). The restorations were finished and polished after 24 hours. Following thermocycling, the teeth were immersed in 0.5% basic fuchsin for 24 hours, then midsagitally sectioned in a mesiodistal plane and examined for microleakage using a stereomicroscope. The volumetric polymerization shrinkage of materials was measured using a video imaging device (Acuvol, Bisco, Inc). Data were statistically analyzed with Kruskal-Wallis and Mann-Whitney U-tests. All teeth showed microleakage, but placement of RMGIC liner reduced microleakage. No statistically significant differences were found in microleakage between the teeth restored without RMGIC liner (p>0.05). Filtek Silorane showed significantly less volumetric polymerization shrinkage than the methacrylate-based composite resins (p<0.05). The use of RMGIC liner with both silorane- and methacrylate-based composite resin restorations resulted in reduced microleakage. The volumetric polymerization shrinkage was least with the silorane-based composite.
Manojlovic, Dragica; Dramićanin, Miroslav D; Milosevic, Milos; Zeković, Ivana; Cvijović-Alagić, Ivana; Mitrovic, Nenad; Miletic, Vesna
2016-01-01
This study investigated the degree of conversion, depth of cure, Vickers hardness, flexural strength, flexural modulus and volumetric shrinkage of experimental composite containing a low shrinkage monomer FIT-852 (FIT; Esstech Inc.) and photoinitiator 2,4,6-trimethylbenzoyldiphenylphosphine oxide (TPO; Sigma Aldrich) compared to conventional composite containing Bisphenol A-glycidyl methacrylate (BisGMA) and camphorquinone-amine photoinitiator system. The degree of conversion was generally higher in FIT-based composites (45-64% range) than in BisGMA-based composites (34-58% range). Vickers hardness, flexural strength and modulus were higher in BisGMA-based composites. A polywave light-curing unit was generally more efficient in terms of conversion and hardness of experimental composites than a monowave unit. FIT-based composite containing TPO showed the depth of cure below 2mm irrespective of the curing light. The depth of cure of FIT-based composite containing CQ and BisGMA-based composites with either photoinitiator was in the range of 2.8-3.0mm. Volumetric shrinkage of FIT-based composite (0.9-5.7% range) was lower than that of BisGMA-based composite (2.2-12% range). FIT may be used as a shrinkage reducing monomer compatible with the conventional CQ-amine system as well as the alternative TPO photoinitiator. However, the depth of cure of FIT_TPO composite requires boosting to achieve clinically recommended thickness of 2mm. Copyright © 2015 Elsevier B.V. All rights reserved.
Xu, Ye-Sheng; Xie, Wen-Jia; Yao, Yu-Feng
2017-06-01
To report surgical management and favorable outcome in a case with delayed repair of traumatic laser in situ keratomileusis (LASIK) flap dislocation with shrinkage and folds. A 30-year-old man with a five-year history of bilateral LASIK experienced blunt trauma to his right eye followed by decreased vision for 5 weeks. The surgical management included initially softening the flap by irrigation with balanced salt solution (BSS). The shrinkage folds were carefully and gently stretched by scraping with a 26-gauge cannula accompanied by BSS irrigation. All of the epithelial ingrowth on the flap inner surface and on the bed was thoroughly debrided by scraping and irrigation. After the flap was repositioned to match its original margin, a soft bandage contact lens was placed. At his initial visit, slit-lamp microscopy and optical coherence tomography (OCT) showed shrinkage of the LASIK flap with an elevated margin approximately 3 mm above the original position. The flap covered half of the pupil and had multiple horizontal folds. Two months after surgery, the flap remained well positioned with only faint streaks in the anterior stroma. The uncorrected visual acuity of the right eye was 20/20 with a manifest refraction of Plano. For delayed repair of traumatically dislocated LASIK flaps, sufficient softening by BSS, stretching the shrinkage folds, and thorough debridement of ingrowth epithelium enable resetting the flap and provide satisfactory results.
He, Xiaobo; Zhang, Yang; Ma, Yuxiang; Zhou, Ting; Zhang, Jianwei; Hong, Shaodong; Sheng, Jin; Zhang, Zhonghan; Yang, Yunpeng; Huang, Yan; Zhang, Li; Zhao, Hongyun
2016-08-01
Epidermal growth factor receptor (EGFR) tyrosine kinase inhibitors (TKIs) are used as standard therapies for advanced non-small cell lung cancer (NSCLC) patients who are EGFR mutation positive. Because these targeted therapies can cause tumor necrosis and shrinkage, the purpose of this study was to identify an optimal tumor shrinkage value as an appropriate indicator of outcome for advanced NSCLC. A total of 88 NSCLC enrollees of 3 clinical trials (the IRESSA registration clinical trial, the TRUST study and the ZD6474 study), who received Gefitinib (250 mg, QD), Erlotinib (150 mg, QD), and ZD6474 (100 mg, QD), respectively, during December 2003 and October 2007, were retrospectively analyzed. The Response Evaluation Criteria in Solid Tumors (RECIST) were used to identify responders, who had complete response (CR) or partial response (PR), and nonresponders, who had stable disease (SD) or progressive disease (PD). Receiver operating characteristic (ROC) analysis was used to find the optimal tumor shrinkage indicator of therapeutic outcome. Univariate and multivariate Cox regression analyses were performed to compare progression-free survival (PFS) and overall survival (OS) between responders and nonresponders stratified by the radiologic criteria. Among the 88 NSCLC patients, 26 were responders and 62 were nonresponders based on RECIST 1.0. ROC analysis indicated that an 8.32% decrease in the sum of the longest tumor diameters (SLD) was the optimal cutoff for tumor shrinkage outcomes, giving 46 responders (change ≤ -8.32%) and 42 nonresponders (change > -8.32%). Univariate and multivariate Cox regression analyses indicated that (1) the responders (≤ -8.32%) and nonresponders (> -8.32%) differed significantly in median PFS (13.40 vs 1.17 months, P < 0.001) and OS (19.80 vs 7.90 months, P < 0.001) and (2) -8.32% in SLD could be used as the optimal threshold for PFS (hazard ratio [HR], 8.11, 95% CI, 3.75 to 17.51, P < 0.001) and OS (HR, 2.36, 95% CI, 1.41 to 3.96, P = 0.001). Thus, an 8.32% tumor diameter shrinkage is validated as a reliable outcome predictor for advanced NSCLC patients receiving EGFR-TKI therapies and may provide a practical measure to guide therapeutic decisions.
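A sketch of the cutoff-finding step on simulated data (the numbers below are invented, not the study's, which found -8.32%): ROC analysis of percent SLD change against a binary outcome, with the optimal threshold chosen by Youden's J statistic:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Hypothetical percent change in sum of longest diameters (negative = shrinkage)
# and a binary indicator of good outcome (e.g. long PFS).
change = np.r_[rng.normal(-20, 12, 40), rng.normal(5, 12, 48)]
good = np.r_[np.ones(40), np.zeros(48)]

# Score with -change so that larger score = more shrinkage = predicted benefit;
# Youden's J picks the cutoff maximizing sensitivity + specificity - 1.
fpr, tpr, thr = roc_curve(good, -change)
best = np.argmax(tpr - fpr)
print("optimal cutoff: %.2f%% change in SLD" % (-thr[best]))
```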
Can pulpal floor debonding be detected from occlusal surface displacement in composite restorations?
Novaes, João Batista; Talma, Elissa; Las Casas, Estevam Barbosa; Aregawi, Wondwosen; Kolstad, Lauren Wickham; Mantell, Sue; Wang, Yan; Fok, Alex
2018-01-01
Polymerization shrinkage of resin composite restorations can cause debonding at the tooth-restoration interface. Theory based on the mechanics of materials predicts that debonding at the pulpal floor would halve the shrinkage displacement at the occlusal surface. The aim of this study was to test this theory and to examine the possibility of detecting subsurface debonding of resin composite restorations by measuring superficial shrinkage displacements. A commercial dental resin composite with a linear shrinkage strain of 0.8% was used to restore 2 groups of 5 model Class-II cavities (8 mm long, 4 mm wide, and 4 mm deep) in aluminum blocks (8 mm thick, 10 mm wide, and 14 mm tall). In Group I the restorations were bonded to all cavity surfaces, while in Group II the restorations were left unbonded at the cavity floor to simulate debonding. One proximal surface of each specimen was sprayed with fine carbon powder to allow surface displacement measurement by digital image correlation. Images of the speckled surface were taken before and after cure for displacement calculation. The experiment was simulated using finite element analysis (FEA) for comparison. Group I showed a maximum occlusal displacement of 34.7±6.7 μm and a center of contraction (COC) near the pulpal floor. Group II had a COC coinciding with the geometric center and showed a maximum occlusal displacement of 17.4±3.8 μm. The difference between the two groups was statistically significant (p = 0.0007). Similar results were obtained by FEA. The theoretical shrinkage displacements were 44.6 and 22.3 μm for Groups I and II, respectively; the lower experimental displacements were probably caused by slumping of the resin composite before cure and by deformation of the adhesive layer. The results confirm that the occlusal shrinkage displacement of a resin composite restoration is reduced significantly by pulpal floor debonding. Recent in vitro studies indicate that this reduction in shrinkage displacement could be detected by the most accurate intraoral scanners currently available. Thus, subject to clinical validation, the occlusal displacement of a resin composite restoration may be used to assess its interfacial integrity. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
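The mechanics-of-materials prediction can be illustrated with a back-of-the-envelope sketch: if the occlusal shrinkage displacement is proportional to the distance from the center of contraction (COC) to the occlusal surface, moving the COC from the pulpal floor to the geometric center halves that distance. The effective COC-to-surface distance below is chosen only to reproduce the quoted theoretical 44.6 μm; it is not an input of the paper's FEA:

```python
# Sketch of the halving argument: occlusal displacement ~ strain * distance
# from the center of contraction (COC) to the occlusal surface.
# The effective depth is an illustrative assumption, back-solved from the
# paper's theoretical value; the paper's FEA used a full 3D model.
linear_strain = 0.008          # 0.8% linear shrinkage strain
effective_depth_mm = 5.58      # assumed COC-to-surface distance when bonded

bonded = linear_strain * effective_depth_mm * 1000          # COC at pulpal floor
debonded = linear_strain * (effective_depth_mm / 2) * 1000  # COC at center
print(f"bonded: {bonded:.1f} um, debonded: {debonded:.1f} um")  # 44.6 vs 22.3
```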
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods, and iterative calculation of the likelihood, as required in model selection, remains time-consuming even though effective dynamic programming algorithms exist. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, although an oversimplified space degrades learning results. We therefore mathematically investigate the conditions under which a feature map, referred to as a vicarious map, yields an asymptotically equivalent convergence point of the estimated parameters. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data and derive the length necessary for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
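As a concrete instance of the length-limiting feature map discussed above, the following minimal sketch evaluates the HMM likelihood of truncated observation prefixes with the scaled forward algorithm (a toy two-state model; the paper's derivation of the necessary length is not reproduced here):

```python
# Scaled forward algorithm: log-likelihood of a (possibly truncated) sequence
# under an HMM. Truncation to length T is one simplifying feature map.
import numpy as np

def forward_loglik(pi, A, B, obs):
    """pi: (K,) initial probs, A: (K,K) transitions, B: (K,M) emissions,
    obs: sequence of symbol indices. Returns log p(obs)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum(); alpha /= c          # scale to avoid underflow
    loglik = np.log(c)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum(); alpha /= c
        loglik += np.log(c)
    return loglik

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.3, 0.7]])
obs = [0, 1, 1, 0, 1, 1, 1, 0]
for T in (2, 4, 8):                      # length-limited feature maps
    print(T, forward_loglik(pi, A, B, obs[:T]))
```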
Stochastic DT-MRI connectivity mapping on the GPU.
McGraw, Tim; Nadar, Mariappan
2007-01-01
We present a method for stochastic fiber tract mapping from diffusion tensor MRI (DT-MRI) implemented on graphics hardware. From the simulated fibers we compute a connectivity map that gives an indication of the probability that two points in the dataset are connected by a neuronal fiber path. A Bayesian formulation of the fiber model is given, and it is shown that the inversion method can be used to construct plausible connectivity maps. An implementation of this fiber model on the graphics processing unit (GPU) is presented. Since the fiber paths can be generated stochastically and independently of one another, the algorithm is highly parallelizable, which allows us to exploit the data-parallel nature of the GPU fragment processors. We also present a framework for the connectivity computation on the GPU. Our implementation allows the user to interactively select regions of interest and observe the evolving connectivity results during computation. Results are presented from the stochastic generation of over 250,000 fiber steps per iteration at interactive frame rates on consumer-grade graphics hardware.
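To make the per-step logic concrete, here is a CPU sketch of one stochastic tract-propagation step: perturb the principal eigenvector of the local tensor and advance the streamline. The isotropic noise model is an illustrative stand-in, not the paper's Bayesian fiber model, and the GPU parallelization is omitted:

```python
# One stochastic tractography step on the CPU. The GPU version in the paper
# runs many such independent walks in parallel on fragment processors.
import numpy as np

rng = np.random.default_rng(1)

def step(pos, tensor, prev_dir, step_len=0.5, noise=0.2):
    w, v = np.linalg.eigh(tensor)        # eigenvalues in ascending order
    d = v[:, -1]                         # principal diffusion direction
    if np.dot(d, prev_dir) < 0:          # keep a consistent orientation
        d = -d
    d = d + noise * rng.standard_normal(3)   # stochastic perturbation
    d /= np.linalg.norm(d)
    return pos + step_len * d, d

pos, d = np.zeros(3), np.array([1.0, 0.0, 0.0])
D = np.diag([1.7, 0.3, 0.3])             # prolate tensor along x
for _ in range(5):
    pos, d = step(pos, D, d)
```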
Smith, H A; White, B J; Kundert, P; Cheng, C; Romero-Severson, J; Andolfatto, P; Besansky, N J
2015-01-01
Although freshwater (FW) is the ancestral habitat for larval mosquitoes, multiple species have independently evolved the ability to survive in saltwater (SW). Here, we use quantitative trait locus (QTL) mapping to investigate the genetic architecture of osmoregulation in Anopheles mosquitoes, vectors of human malaria. We analyzed 1134 backcross progeny from a cross between the obligate FW species An. coluzzii and its closely related euryhaline sibling species An. merus. Tests of 2387 markers with Bayesian interval mapping and machine learning (random forests) yielded six genomic regions associated with SW tolerance. The overlap in QTL regions identified by the two approaches enhances confidence in the QTLs. Evidence exists for synergistic as well as disruptive epistasis among loci. Intriguingly, one QTL region containing ion transporters spans the 2Rop chromosomal inversion that distinguishes these species. Rather than a simple trait controlled by one or a few loci, our data are most consistent with a complex, polygenic mode of inheritance. PMID:25920668
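The random-forest half of this two-pronged mapping strategy might look like the following sketch, which ranks simulated backcross markers by importance for a toy SW-tolerance trait (the genotype codes and the epistatic trait below are invented for illustration):

```python
# Hedged sketch: rank genetic markers by random-forest importance. Marker
# genotypes and phenotypes are simulated; the study tested 2387 markers on
# 1134 backcross progeny.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(1134, 2387))        # backcross genotypes (0/1)
# Toy trait with epistasis between two loci plus noise:
y = (((X[:, 10] == 1) & (X[:, 500] == 1)) | (rng.random(1134) < 0.1)).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:10]
print("candidate QTL markers:", top)
```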
Yasuda, Akihito; Onuki, Yoshinori; Obata, Yasuko; Takayama, Kozo
2015-01-01
The "quality by design" concept in pharmaceutical formulation development requires the establishment of a science-based rationale and design space. In this article, we integrate thin-plate spline (TPS) interpolation, Kohonen's self-organizing map (SOM), and a Bayesian network (BN) to visualize the latent structure underlying causal factors and pharmaceutical responses. As a model pharmaceutical product, theophylline tablets were prepared using a standard formulation. We measured tensile strength and disintegration time as response variables, and the compressibility, cohesion, and dispersibility of the pretableting blend as latent variables. We predicted these variables quantitatively using nonlinear TPS, generated a large amount of data on pretableting blends and tablets, and clustered these data using a SOM. Our results show that the experimental values of the latent and response variables can be predicted with a high degree of accuracy and that the tablet data can be classified into several distinct clusters. In addition, to visualize the latent structure between the causal and latent factors and the response variables, we applied a BN method to the SOM clustering results. We found that, despite the insertion of latent variables between the causal factors and the response variables, the relations remained consistent with the SOM clustering results, so the underlying latent structure can be explained. Consequently, this technique provides a better understanding of the relationships between causal factors and pharmaceutical responses in theophylline tablet formulation.
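The TPS interpolation step can be sketched with SciPy's thin-plate-spline radial basis interpolator: fit the response surface on a small designed experiment, then generate a dense grid of virtual formulations to feed the SOM. Factor values and responses here are placeholders, not the study's data:

```python
# Minimal TPS sketch: interpolate a pharmaceutical response surface and
# generate a dense grid of virtual formulations for later SOM clustering.
import numpy as np
from scipy.interpolate import RBFInterpolator

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0],
              [0.5, 0.5], [0.25, 0.75], [0.75, 0.25]])   # causal factors
y = np.array([1.2, 0.8, 2.1, 1.6, 1.5, 1.1, 1.9])        # e.g. tensile strength

tps = RBFInterpolator(X, y, kernel='thin_plate_spline')
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                            np.linspace(0, 1, 50)), axis=-1).reshape(-1, 2)
predicted = tps(grid)          # large virtual dataset to cluster with a SOM
```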
NASA Astrophysics Data System (ADS)
Waldmann, Ingo
2016-10-01
Radiative transfer retrievals have become the standard in modelling of exoplanetary transmission and emission spectra. Analysing currently available observations of exoplanetary atmospheres often involves large, correlated parameter spaces that can be difficult to map or constrain. To address these issues, we have developed the Tau-REx (tau-retrieval of exoplanets) retrieval framework and the RobERt spectral recognition algorithm. Tau-REx is a Bayesian atmospheric retrieval framework using Nested Sampling and cluster computing to fully map these large, correlated parameter spaces. Nonetheless, data volumes can become prohibitively large, and we must often select a subset of potential molecular/atomic absorbers in an atmosphere. In the era of open-source, automated, and self-sufficient retrieval algorithms, such manual input should be avoided: user-dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is built to address these issues. RobERt is a deep belief network (DBN) trained to accurately recognise molecular signatures for a wide range of planets, atmospheric thermal profiles, and compositions. Using these deep neural networks, we work towards retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data, and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process. In this talk I will discuss how neural networks and Bayesian Nested Sampling can be used to solve highly degenerate spectral retrieval problems and what 'dreaming' neural networks can tell us about atmospheric characteristics.
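A generic nested-sampling loop of the kind Tau-REx relies on can be sketched with the dynesty package (this is not the Tau-REx code; the toy likelihood merely mimics a correlated retrieval posterior):

```python
# Generic nested-sampling sketch using dynesty: maps a correlated posterior
# and returns the Bayesian evidence alongside posterior samples.
import numpy as np
from dynesty import NestedSampler

def loglike(theta):
    a, b = theta
    # Toy correlated likelihood standing in for a radiative-transfer model fit
    return -0.5 * ((a - b) ** 2 / 0.01 + (a + b - 1.0) ** 2)

def prior_transform(u):
    return 4.0 * u - 2.0        # map unit cube to uniform priors on [-2, 2]

sampler = NestedSampler(loglike, prior_transform, ndim=2, nlive=500)
sampler.run_nested()
results = sampler.results       # posterior samples + evidence estimate
```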
Bayesian uncertainty quantification in linear models for diffusion MRI.
Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans
2018-03-29
Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification. Copyright © 2018 Elsevier Inc. All rights reserved.
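The closed-form result for affine derived quantities is short enough to state as code: with a Gaussian prior on the coefficients and Gaussian noise, the posterior of q = w·c + b is Gaussian with mean w·μ + b and variance wᵀΣw. A minimal sketch with a generic design matrix, not a specific dMRI basis:

```python
# Bayesian linear model: closed-form posterior of an affine derived quantity.
# Phi, w, sigma2, tau2 are generic placeholders, not values from the paper.
import numpy as np

def affine_posterior(Phi, y, w, b=0.0, sigma2=1.0, tau2=10.0):
    """Phi: (n, p) design matrix, y: (n,) signal, w: (p,) affine weights,
    sigma2: noise variance, tau2: prior variance on each coefficient.
    Returns the posterior mean and variance of q = w @ c + b."""
    p = Phi.shape[1]
    Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.eye(p) / tau2)  # post. cov
    mu = Sigma @ Phi.T @ y / sigma2                                 # post. mean
    return w @ mu + b, w @ Sigma @ w                                # mean, var
```

Quantiles of the derived quantity then follow directly from the Gaussian, which is what allows the theoretical quantiles to be checked against bootstrap estimates.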
Wainwright, Haruko M; Seki, Akiyuki; Mikami, Satoshi; Saito, Kimiaki
2018-09-01
In this study, we quantify the temporal changes of air dose rates on the regional scale around the Fukushima Dai-ichi Nuclear Power Plant in Japan, and predict the spatial distribution of air dose rates in the future. We first apply the Bayesian geostatistical method developed by Wainwright et al. (2017) to integrate multiscale datasets, including ground-based walk and car surveys and airborne surveys, all of which have different scales, resolutions, spatial coverage, and accuracy. This method uses geostatistics to represent spatially heterogeneous structures and Bayesian hierarchical models to integrate multiscale, multi-type datasets in a consistent manner. We apply this method to datasets from three years, 2014 to 2016. The temporal changes among the three integrated maps enable us to characterize the spatiotemporal dynamics of radiation air dose rates. A data-driven ecological decay model is then coupled with the integrated map to predict future dose rates. Results show that the air dose rates are decreasing consistently across the region. While slower in the forested region, the decrease is particularly significant in the town area, and decontamination has contributed to a significant reduction in air dose rates. The air dose rates are predicted to continue decreasing, such that by 2026 the area above 3.8 μSv/h will be almost fully contained within the non-residential forested zone. Copyright © 2018 Elsevier Ltd. All rights reserved.
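A projection of the kind described can be sketched by combining the physical decay of Cs-134 and Cs-137 with an effective ecological decay term. The half-lives are physical constants, but the isotope dose fraction and ecological half-life below are illustrative assumptions, not the study's fitted values:

```python
# Hedged sketch of a dose-rate projection. T_CS134/T_CS137 are physical
# half-lives; f134 and t_eco are illustrative assumptions only.
T_CS134, T_CS137 = 2.06, 30.17          # physical half-lives [years]

def dose_rate(d0, t, f134=0.3, t_eco=20.0):
    """d0: initial air dose rate, t: years elapsed, f134: assumed Cs-134
    fraction of the dose at t=0, t_eco: assumed ecological half-life [years]."""
    phys = f134 * 0.5 ** (t / T_CS134) + (1 - f134) * 0.5 ** (t / T_CS137)
    return d0 * phys * 0.5 ** (t / t_eco)

print(dose_rate(3.8, 10.0))             # projected rate after a decade
```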
The Educational Strategies of Rural School Students
ERIC Educational Resources Information Center
Abankina, T. V.; Krasilova, A. N.; Iastrebov, G. A.
2012-01-01
Over the past two decades, Russia has been characterized by a demographic slump, a drastic decline in the number of school students, and, accordingly, a shrinkage of the system of education. The magnitude of the shrinkage in rural areas is not the 5-10 percent to which the education system could adapt, but about 30 percent, which requires systemic changes.…
Stiffness and shrinkage of green and dry joists
Lyman W. Wood; Lawrence A. Soltis
1964-01-01
This report gives information on the edgewise modulus of elasticity, stiffness, and shrinkage of 360 joists in three species, three grades, and two sizes, each species obtained from two sources. Each joist was evaluated nondestructively at four moisture content values ranging from the green condition to about 11 percent. Information is also given on specific gravity,...
Image-based modeling of tumor shrinkage in head and neck radiation therapy
Chao, Ming; Xie, Yaoqin; Moros, Eduardo G.; Le, Quynh-Thu; Xing, Lei
2010-01-01
Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in the quantitative assessment of therapeutics and the realization of adaptive radiation therapy. This article presents a novel framework for image-based modeling of tumor change and demonstrates its performance with synthetic images and clinical cases. Methods: Because of significant changes in tumor tissue content, similarity-based models are not suitable for describing the process of tumor volume change. Under the hypothesis that tissue features in a tumor volume or at the boundary region are partially preserved, the kinetic change was modeled in two steps: (1) autodetection of homologous tissue features shared by two input images using the scale invariant feature transform (SIFT) method; and (2) establishment of a voxel-to-voxel correspondence between the images for the remaining spatial points by interpolation. The correctness of the tissue feature correspondence was assured by a bidirectional association procedure, in which SIFT features were mapped from template to target images and in reverse. A series of digital phantom experiments and five head and neck clinical cases were used to assess the performance of the proposed technique. Results: The proposed technique faithfully identified the known changes introduced when constructing the digital phantoms. The subsequent feature-guided thin plate spline calculation reproduced the “ground truth” with an accuracy better than 1.5 mm. For the clinical cases, the new algorithm worked reliably for volume changes as large as 30%. Conclusions: An image-based tumor kinetic algorithm was developed to model tumor response to radiation therapy. The technique provides a practical framework for future application in adaptive radiation therapy. PMID:20527569
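The bidirectional association step corresponds to cross-checked matching in OpenCV: only SIFT correspondences that are mutual nearest neighbours in both directions are kept. A 2D sketch with placeholder image files (the paper's implementation details differ):

```python
# Sketch of bidirectional SIFT feature association with OpenCV. File names
# are placeholders; crossCheck=True enforces the mutual (template->target
# and target->template) nearest-neighbor test.
import cv2

template = cv2.imread("planning_ct_slice.png", cv2.IMREAD_GRAYSCALE)
target = cv2.imread("treatment_ct_slice.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(target, None)

matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = matcher.match(des1, des2)

pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]
# 'pairs' would then anchor a thin plate spline interpolation for the
# remaining spatial points.
```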
Palumbo, Giovanna; Iadicicco, Agostino; Messina, Francesco; Campopiano, Stefania; Cioffi, Raffaele; Colangelo, Francesco
2017-01-01
This paper reports results on early-age temperature and shrinkage measurements obtained by means of fiber Bragg gratings (FBGs) embedded in geopolymer matrices. The sensors were packaged so as to discriminate between shrinkage behavior and temperature development. Metakaolin-based geopolymer systems were investigated, covering different commercial aluminosilicate precursors and siliceous filler contents. The proposed measuring system allows very accurate monitoring of the early-age phases of metakaolin geopolymer binding systems. A series of experiments was conducted on different compositions; rheological issues related to the proposed experimental method were also assessed. PMID:29271912
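Discriminating shrinkage strain from temperature with packaged FBGs reduces to a small linear problem once one grating is isolated from strain. The sensitivity coefficients below are typical textbook values for silica FBGs near 1550 nm, not calibration data from the paper:

```python
# Hedged sketch of strain/temperature discrimination with two FBGs: one
# grating packaged to sense temperature only, one bonded to sense both.
# K_EPS and K_T are typical values (~1.2 pm/ustrain, ~10 pm/degC at 1550 nm).
K_EPS = 1.2e-3      # strain sensitivity [nm per microstrain]
K_T = 10e-3         # temperature sensitivity [nm per degC]

dl_bonded = -0.045  # measured shift of the strain+temperature grating [nm]
dl_free = 0.020     # measured shift of the temperature-only grating [nm]

dT = dl_free / K_T                       # temperature change [degC]
eps = (dl_bonded - K_T * dT) / K_EPS     # shrinkage strain [microstrain]
print(f"dT = {dT:.1f} degC, strain = {eps:.0f} ustrain")
```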
Anand, Vibha; Rosenman, Marc B; Downs, Stephen M
2013-09-01
To develop a map of disease associations exclusively using two publicly available genetic sources: the catalog of single nucleotide polymorphisms (SNPs) from the HapMap and the catalog of Genome Wide Association Studies (GWAS) from the NHGRI, and to evaluate it with a large, long-standing electronic medical record (EMR). A computational model, In Silico Bayesian Integration of GWAS (IsBIG), was developed to learn associations among diseases within a Bayesian network (BN) framework, using only genetic data. The IsBIG model (I-Model) was re-trained using data from our EMR (M-Model). Separately, another clinical model (C-Model) was learned from this training dataset. The I-Model was compared with both the M-Model and the C-Model for power to discriminate a disease given other diseases, using a test dataset from our EMR. The area under the receiver operating characteristic curve was used as the performance measure. Direct associations between diseases in the I-Model were also searched for in the PubMed database and in classes of the Human Disease Network (HDN). On the basis of genetic information alone, the I-Model linked a third of the diseases in our EMR. When compared to the M-Model, the I-Model predicted diseases given other diseases with 94% specificity, 33% sensitivity, and 80% positive predictive value. The I-Model contained 117 direct associations between diseases. Of those associations, 20 (17%) were absent from searches of the PubMed database; one of these was present in the C-Model. Of these 20 novel associations, 7 (35%) were also absent from disease classes of the HDN. Using only publicly available genetic sources, we have mapped associations in the GWAS to a human disease map using an in silico approach, and we have validated this disease map using phenotypic data from our EMR. Models predicting disease associations on the basis of known genetic associations alone are specific but not sensitive; genetic data, as they currently exist, can explain only a fraction of the risk of a disease. Our approach makes a quantitative statement about how much disease variation in an EMR can be explained on the basis of the genetic associations described in the GWAS. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
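The discrimination figures quoted (94% specificity, 33% sensitivity, 80% PPV) come from standard confusion-matrix and AUC calculations, sketched here with simulated labels and scores in place of the EMR data:

```python
# Sketch of the evaluation metrics: specificity, sensitivity, PPV, and AUC
# from binary predictions. Labels and scores are simulated placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, 1000)                 # disease present in EMR?
scores = 0.25 * y_true + 0.75 * rng.random(1000)  # model's predicted risk
y_pred = scores > 0.7                             # thresholded prediction

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"specificity={tn/(tn+fp):.2f} sensitivity={tp/(tp+fn):.2f} "
      f"PPV={tp/(tp+fp):.2f} AUC={roc_auc_score(y_true, scores):.2f}")
```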