Smoothed Particle Hydrodynamics: Applications Within DSTO
2006-10-01
Most SPH codes use either an improved Euler method (a mid-point predictor-corrector method) [50] or a leapfrog predictor-corrector algorithm for... in the next section we used the predictor-corrector leapfrog algorithm for time stepping. If we write the set of equations describing the change in... predictor-corrector or leapfrog method is used when solving the equations. Monaghan has also noted [53] that, with a correctly chosen time step, total
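The excerpt above names the predictor-corrector time-stepping schemes only in passing. The following is a minimal, self-contained sketch (not taken from the DSTO SPH code) of one improved-Euler / mid-point predictor-corrector step of the kind mentioned, applied to a generic system dy/dt = f(y); the harmonic-oscillator example and the step size are illustrative choices.

```python
import numpy as np

def predictor_corrector_step(y, f, dt, n_corr=1):
    """One improved-Euler (mid-point predictor-corrector) step for dy/dt = f(y)."""
    rate = f(y)
    y_new = y + dt * rate                    # predictor: explicit Euler estimate
    for _ in range(n_corr):                  # corrector: advance with the averaged rate
        y_new = y + dt * 0.5 * (rate + f(y_new))
    return y_new

# Toy example: harmonic oscillator, y = [x, v], dx/dt = v, dv/dt = -x
f = lambda y: np.array([y[1], -y[0]])
y, dt = np.array([1.0, 0.0]), 0.01
for _ in range(1000):                        # integrate to t = 10
    y = predictor_corrector_step(y, f, dt)
print(y)                                     # close to [cos(10), -sin(10)]
```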
Smoothed Residual Plots for Generalized Linear Models. Technical Report #450.
ERIC Educational Resources Information Center
Brant, Rollin
Methods for examining the viability of assumptions underlying generalized linear models are considered. By appealing to the likelihood, a natural generalization of the raw residual plot for normal theory models is derived and is applied to investigating potential misspecification of the linear predictor. A smooth version of the plot is also…
NASA Astrophysics Data System (ADS)
Mulyani, Sri; Andriyana, Yudhie; Sudartianto
2017-03-01
Mean regression is a statistical method for explaining the relationship between a response variable and predictor variables through the central tendency (mean) of the response. Parameter estimation in mean regression (by Ordinary Least Squares, OLS) becomes problematic when the data are asymmetric, fat-tailed, or contain outliers. An alternative method is therefore needed for such data, for example quantile regression. Quantile regression is robust to outliers and can describe the relationship between the response and the predictor not only at the central tendency of the data (the median) but also at various quantiles, giving more complete information about that relationship. In this study, quantile regression is developed with a nonparametric approach, namely smoothing splines. A nonparametric approach is used when a model is difficult to prespecify and the relationship between the two variables follows an unknown function. We apply the proposed method to poverty data, estimating the Percentage of Poor People as the response variable with the Human Development Index (HDI) as the predictor variable.
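As a rough illustration of the kind of smoothing-spline quantile regression described above (not the authors' implementation), the sketch below fits a cubic B-spline expansion by minimizing the pinball (check) loss plus a second-difference roughness penalty. The data, basis size, penalty weight, and the use of a generic derivative-free optimizer (rather than the linear-programming solvers normally used for quantile regression) are all illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

def bspline_basis(x, n_basis=10, degree=3):
    """Cubic B-spline design matrix on [min(x), max(x)] with equally spaced knots."""
    xl, xr = x.min(), x.max()
    inner = np.linspace(xl, xr, n_basis - degree + 1)[1:-1]
    knots = np.r_[[xl] * (degree + 1), inner, [xr] * (degree + 1)]
    return np.column_stack([BSpline(knots, np.eye(n_basis)[j], degree)(x)
                            for j in range(n_basis)])

def spline_quantile_fit(x, y, tau=0.5, lam=1.0, n_basis=10):
    """Penalized-spline quantile fit: pinball loss plus second-difference penalty."""
    B = bspline_basis(x, n_basis)
    D = np.diff(np.eye(n_basis), n=2, axis=0)          # roughness penalty matrix
    def objective(beta):
        r = y - B @ beta
        pinball = np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))
        return pinball + lam * beta @ (D.T @ D) @ beta
    res = minimize(objective, np.zeros(n_basis), method="Powell")
    return B, res.x

# Illustrative data: heteroscedastic noise, estimate the 90th-percentile curve
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + (0.1 + 0.3 * x) * rng.standard_normal(200)
B, beta = spline_quantile_fit(x, y, tau=0.9, lam=0.5)
q90 = B @ beta                                          # fitted 0.9-quantile curve
print(q90[:5])
```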
Penalized nonparametric scalar-on-function regression via principal coordinates
Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu
2016-01-01
A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
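A stripped-down sketch of the principal coordinate ridge regression idea described above, assuming plain Euclidean distance between sampled curves instead of the dynamic time warping distance used in the signature application, and a fixed ridge penalty instead of the automatic tuning provided by the authors' GAM-based implementation.

```python
import numpy as np

def principal_coordinates(D, n_coord=5):
    """Classical MDS: principal coordinates from a pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J                      # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(G)
    order = np.argsort(vals)[::-1][:n_coord]
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

# Toy functional predictors: phase-shifted noisy sine curves on a common grid
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
shift = rng.uniform(0, 1, 40)
X = np.sin(2 * np.pi * (t[None, :] + shift[:, None])) + 0.1 * rng.standard_normal((40, 100))
y = 3 * np.sin(2 * np.pi * shift) + 0.2 * rng.standard_normal(40)

D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))   # stand-in distance between curves
Z = principal_coordinates(D, n_coord=5)

# Ridge regression of the scalar response on the leading principal coordinates
lam = 1.0
Zc, yc = Z - Z.mean(0), y - y.mean()
beta = np.linalg.solve(Zc.T @ Zc + lam * np.eye(Z.shape[1]), Zc.T @ yc)
fitted = y.mean() + Zc @ beta
print(np.round(np.corrcoef(fitted, y)[0, 1], 2))              # in-sample fit quality
```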
The piecewise-linear predictor-corrector code - A Lagrangian-remap method for astrophysical flows
NASA Technical Reports Server (NTRS)
Lufkin, Eric A.; Hawley, John F.
1993-01-01
We describe a time-explicit finite-difference algorithm for solving the nonlinear fluid equations. The method is similar to existing Eulerian schemes in its use of operator-splitting and artificial viscosity, except that we solve the Lagrangian equations of motion with a predictor-corrector and then remap onto a fixed Eulerian grid. The remap is formulated to eliminate errors associated with coordinate singularities, with a general prescription for remaps of arbitrary order. We perform a comprehensive series of tests on standard problems. Self-convergence tests show that the code has a second-order rate of convergence in smooth, two-dimensional flow, with pressure forces, gravity, and curvilinear geometry included. While not as accurate on idealized problems as high-order Riemann-solving schemes, the predictor-corrector Lagrangian-remap code has great flexibility for application to a variety of astrophysical problems.
Sequential limiting in continuous and discontinuous Galerkin methods for the Euler equations
NASA Astrophysics Data System (ADS)
Dobrev, V.; Kolev, Tz.; Kuzmin, D.; Rieben, R.; Tomov, V.
2018-03-01
We present a new predictor-corrector approach to enforcing local maximum principles in piecewise-linear finite element schemes for the compressible Euler equations. The new element-based limiting strategy is suitable for continuous and discontinuous Galerkin methods alike. In contrast to synchronized limiting techniques for systems of conservation laws, we constrain the density, momentum, and total energy in a sequential manner which guarantees positivity preservation for the pressure and internal energy. After the density limiting step, the total energy and momentum gradients are adjusted to incorporate the irreversible effect of density changes. Antidiffusive corrections to bounds-compatible low-order approximations are limited to satisfy inequality constraints for the specific total and kinetic energy. An accuracy-preserving smoothness indicator is introduced to gradually adjust lower bounds for the element-based correction factors. The employed smoothness criterion is based on a Hessian determinant test for the density. A numerical study is performed for test problems with smooth and discontinuous solutions.
ERIC Educational Resources Information Center
Imfeld, Thomas N.; And Others
1995-01-01
A method for predicting high dental caries increments for children, based on previous research, is presented. Three clinical findings were identified as predictors: number of sound primary molars, number of discolored pits/fissures on first permanent molars, and number of buccal and lingual smooth surfaces of first permanent molars with white…
Spatio-temporal modeling of chronic PM10 exposure for the Nurses' Health Study
NASA Astrophysics Data System (ADS)
Yanosky, Jeff D.; Paciorek, Christopher J.; Schwartz, Joel; Laden, Francine; Puett, Robin; Suh, Helen H.
2008-06-01
Chronic epidemiological studies of airborne particulate matter (PM) have typically characterized the chronic PM exposures of their study populations using city- or county-wide ambient concentrations, which limit the studies to areas where nearby monitoring data are available and which ignore within-city spatial gradients in ambient PM concentrations. To provide more spatially refined and precise chronic exposure measures, we used a Geographic Information System (GIS)-based spatial smoothing model to predict monthly outdoor PM10 concentrations in the northeastern and midwestern United States. This model included monthly smooth spatial terms and smooth regression terms of GIS-derived and meteorological predictors. Using cross-validation and other pre-specified selection criteria, terms for distance to road by road class, urban land use, block group and county population density, point- and area-source PM10 emissions, elevation, wind speed, and precipitation were found to be important determinants of PM10 concentrations and were included in the final model. Final model performance was strong (cross-validation R2=0.62), with little bias (-0.4 μg m-3) and high precision (6.4 μg m-3). The final model (with monthly spatial terms) performed better than a model with seasonal spatial terms (cross-validation R2=0.54). The addition of GIS-derived and meteorological predictors improved predictive performance over spatial smoothing (cross-validation R2=0.51) or inverse distance weighted interpolation (cross-validation R2=0.29) methods alone and increased the spatial resolution of predictions. The model performed well in both rural and urban areas, across seasons, and across the entire time period. The strong model performance demonstrates its suitability as a means to estimate individual-specific chronic PM10 exposures for large populations.
Smooth Scalar-on-Image Regression via Spatial Bayesian Variable Selection
Goldsmith, Jeff; Huang, Lei; Crainiceanu, Ciprian M.
2013-01-01
We develop scalar-on-image regression models when images are registered multidimensional manifolds. We propose a fast and scalable Bayes inferential procedure to estimate the image coefficient. The central idea is the combination of an Ising prior distribution, which controls a latent binary indicator map, and an intrinsic Gaussian Markov random field, which controls the smoothness of the nonzero coefficients. The model is fit using a single-site Gibbs sampler, which allows fitting within minutes for hundreds of subjects with predictor images containing thousands of locations. The code is simple and is provided in less than one page in the Appendix. We apply this method to a neuroimaging study where cognitive outcomes are regressed on measures of white matter microstructure at every voxel of the corpus callosum for hundreds of subjects. PMID:24729670
Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.
Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen
2011-04-01
Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.
Development of an Automatic Grid Generator for Multi-Element High-Lift Wings
NASA Technical Reports Server (NTRS)
Eberhardt, Scott; Wibowo, Pratomo; Tu, Eugene
1996-01-01
The procedure to generate the grid around a complex wing configuration is presented in this report. The automatic grid generation utilizes the Modified Advancing Front Method as a predictor and an elliptic scheme as a corrector. The scheme will advance the surface grid one cell outward and the newly obtained grid is corrected using the Laplace equation. The predictor-corrector step ensures that the grid produced will be smooth for every configuration. The predictor-corrector scheme is extended for a complex wing configuration. A new technique is developed to deal with the grid generation in the wing-gaps and on the flaps. It will create the grids that fill the gap on the wing surface and the gap created by the flaps. The scheme recognizes these configurations automatically so that minimal user input is required. By utilizing an appropriate sequence in advancing the grid points on a wing surface, the automatic grid generation for complex wing configurations is achieved.
A comparison of regional flood frequency analysis approaches in a simulation framework
NASA Astrophysics Data System (ADS)
Ganora, D.; Laio, F.
2016-07-01
Regional frequency analysis (RFA) is a well-established methodology for estimating the flood frequency curve at ungauged (or scarcely gauged) sites. Different RFA approaches exist, depending on the way information is transferred to the site of interest, but it is not clear from the literature whether a specific method systematically outperforms the others. The aim of this study is to provide a framework within which to carry out the intercomparison, by building a virtual environment based on synthetically generated data. The considered regional approaches include: (i) a unique regional curve for the whole region; (ii) a multiple-region model where homogeneous subregions are determined through cluster analysis; (iii) a Region-of-Influence model which defines a homogeneous subregion for each site; (iv) a spatially smooth estimation procedure where the parameters of the regional model vary continuously in space. Virtual environments are generated considering different patterns of heterogeneity, including step changes and smooth variations. If the region is heterogeneous, with the parent distribution changing continuously within the region, the spatially smooth regional approach outperforms the others, with overall errors 10-50% lower than the other methods. In the case of a step change, the spatially smooth and clustering procedures perform similarly if the heterogeneity is moderate, while clustering procedures work better when the step change is severe. To extend our findings, an extensive sensitivity analysis has been performed to investigate the effect of sample length, number of virtual stations, return period of the predicted quantile, variability of the scale parameter of the parent distribution, number of predictor variables, and different parent distributions. Overall, the spatially smooth approach appears to be the most robust, as its performance is more stable across different patterns of heterogeneity, especially when short records are considered.
Smoking and Female Sex: Independent Predictors of Human Vascular Smooth Muscle Cells Stiffening
Dinardo, Carla Luana; Santos, Hadassa Campos; Vaquero, André Ramos; Martelini, André Ricardo; Dallan, Luis Alberto Oliveira; Alencar, Adriano Mesquita; Krieger, José Eduardo; Pereira, Alexandre Costa
2015-01-01
Aims Recent evidence shows the rigidity of vascular smooth muscle cells (VSMC) contributes to vascular mechanics. Arterial rigidity is an independent cardiovascular risk factor whose associated modifications in VSMC viscoelasticity have never been investigated. This study's objective was to evaluate whether the arterial rigidity risk factors aging, African ancestry, female sex, smoking and diabetes mellitus are associated with VSMC stiffening in an experimental model using a human-derived vascular smooth muscle primary cell line repository. Methods Eighty patients subjected to coronary artery bypass surgery were enrolled. VSMCs were extracted from internal thoracic artery fragments and mechanically evaluated using an Optical Magnetic Twisting Cytometry assay. The obtained mechanical variables were correlated with the clinical variables: age, gender, African ancestry, smoking and diabetes mellitus. Results The mechanical variables Gr, G'r and G"r had a normal distribution, demonstrating an inter-individual variability of VSMC viscoelasticity, which has never been reported before. Female sex and smoking were independently associated with VSMC stiffening: Gr (apparent cell stiffness) p = 0.022 and p = 0.018, R2 0.164; G'r (elastic modulus) p = 0.019 and p = 0.009, R2 0.184; and G"r (dissipative modulus) p = 0.011 and p = 0.66, R2 0.141. Conclusion Female sex and smoking are independent predictors of VSMC stiffening. This pro-rigidity effect represents an important element for understanding the vascular rigidity observed in post-menopausal females and smokers, as well as a potential therapeutic target to be explored in the future. There is a significant inter-individual variation of VSMC viscoelasticity, which is slightly modulated by clinical variables and probably relies on molecular factors. PMID:26661469
Estimation of retinal vessel caliber using model fitting and random forests
NASA Astrophysics Data System (ADS)
Araújo, Teresa; Mendonça, Ana Maria; Campilho, Aurélio
2017-03-01
Retinal vessel caliber changes are associated with several major diseases, such as diabetes and hypertension. These caliber changes can be evaluated using eye fundus images. However, the clinical assessment is tiresome and prone to errors, motivating the development of automatic methods. An automatic method based on vessel cross-section intensity profile model fitting for the estimation of vessel caliber in retinal images is herein proposed. First, vessels are segmented from the image, vessel centerlines are detected and individual segments are extracted and smoothed. Intensity profiles are extracted perpendicularly to the vessel, and the profile lengths are determined. Then, model fitting is applied to the smoothed profiles. A novel parametric model (DoG-L7) is used, consisting of a Difference-of-Gaussians multiplied by a line, which is able to describe profile asymmetry. Finally, the parameters of the best-fit model are used for determining the vessel width through regression using ensembles of bagged regression trees with random sampling of the predictors (random forests). The method is evaluated on the REVIEW public dataset. A precision close to the observers is achieved, outperforming other state-of-the-art methods. The method is robust and reliable for width estimation in images with pathologies and artifacts, with performance independent of the range of diameters.
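To make the profile-fitting step concrete, here is an illustrative least-squares fit of a difference-of-Gaussians-times-line profile with scipy. The parameterization, the synthetic profile, and the starting values are assumptions; this is not the exact DoG-L7 model, and the subsequent random-forest width regression is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def dog_times_line(x, a1, s1, a2, s2, mu, m, c):
    """Illustrative 7-parameter profile: difference of Gaussians multiplied by a line."""
    dog = (a1 * np.exp(-0.5 * ((x - mu) / s1) ** 2)
           - a2 * np.exp(-0.5 * ((x - mu) / s2) ** 2))
    return dog * (m * x + c)

# Synthetic cross-sectional intensity profile (dark vessel on a sloping background)
rng = np.random.default_rng(1)
x = np.linspace(-10, 10, 81)
y = dog_times_line(x, -1.0, 2.5, -0.4, 1.0, 0.5, 0.02, 1.0) + 0.02 * rng.standard_normal(x.size)

p0 = (-0.8, 2.0, -0.3, 1.5, 0.0, 0.0, 1.0)                  # rough starting values
params, _ = curve_fit(dog_times_line, x, y, p0=p0, maxfev=20000)
print(np.round(params, 3))   # best-fit parameters, which a width regressor would consume
```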
Cox Regression Models with Functional Covariates for Survival Data.
Gellar, Jonathan E; Colantuoni, Elizabeth; Needham, Dale M; Crainiceanu, Ciprian M
2015-06-01
We extend the Cox proportional hazards model to cases when the exposure is a densely sampled functional process, measured at baseline. The fundamental idea is to combine penalized signal regression with methods developed for mixed effects proportional hazards models. The model is fit by maximizing the penalized partial likelihood, with smoothing parameters estimated by a likelihood-based criterion such as AIC or EPIC. The model may be extended to allow for multiple functional predictors, time varying coefficients, and missing or unequally-spaced data. Methods were inspired by and applied to a study of the association between time to death after hospital discharge and daily measures of disease severity collected in the intensive care unit, among survivors of acute respiratory distress syndrome.
Parallel/Vector Integration Methods for Dynamical Astronomy
NASA Astrophysics Data System (ADS)
Fukushima, T.
Progress in parallel/vector computers has driven us to develop numerical integrators that exploit their computational power to the full while remaining independent of the size of the system to be integrated. Unfortunately, parallel versions of Runge-Kutta type integrators are known to be relatively inefficient. Recently we developed a parallel version of the extrapolation method (Ito and Fukushima 1997), which allows variable timesteps and still gives an acceleration factor of 3-4 for general problems, while the vector-mode usage of the Picard-Chebyshev method (Fukushima 1997a, 1997b) leads to an acceleration factor of the order of 1000 for smooth problems such as planetary/satellite orbit integration. The success of the multiple-correction PECE mode of the time-symmetric implicit Hermitian integrator (Kokubo 1998) seems to highlight Milankar's so-called "pipelined predictor-corrector method", which is expected to yield an acceleration factor of 3-4. We review these directions and discuss future prospects.
Improved disturbance rejection for predictor-based control of MIMO linear systems with input delay
NASA Astrophysics Data System (ADS)
Shi, Shang; Liu, Wenhui; Lu, Junwei; Chu, Yuming
2018-02-01
In this paper, we are concerned with the predictor-based control of multi-input multi-output (MIMO) linear systems with input delay and disturbances. By taking the future values of disturbances into consideration, a new improved predictive scheme is proposed. Compared with the existing predictive schemes, our proposed predictive scheme can achieve a finite-time exact state prediction for some smooth disturbances including the constant disturbances, and a better disturbance attenuation can also be achieved for a large class of other time-varying disturbances. The attenuation of mismatched disturbances for second-order linear systems with input delay is also investigated by using our proposed predictor-based controller.
Dynamic prediction in functional concurrent regression with an application to child growth.
Leroux, Andrew; Xiao, Luo; Crainiceanu, Ciprian; Checkley, William
2018-04-15
In many studies, it is of interest to predict the future trajectory of subjects based on their historical data, referred to as dynamic prediction. Mixed effects models have traditionally been used for dynamic prediction. However, the commonly used random intercept and slope model is often not sufficiently flexible for modeling subject-specific trajectories. In addition, there may be useful exposures/predictors of interest that are measured concurrently with the outcome, complicating dynamic prediction. To address these problems, we propose a dynamic functional concurrent regression model to handle the case where both the functional response and the functional predictors are irregularly measured. Currently, such a model cannot be fit by existing software. We apply the model to dynamically predict children's length conditional on prior length, weight, and baseline covariates. Inference on model parameters and subject-specific trajectories is conducted using the mixed effects representation of the proposed model. An extensive simulation study shows that the dynamic functional regression model provides more accurate estimation and inference than existing methods. Methods are supported by fast, flexible, open source software that uses heavily tested smoothing techniques. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Rough versus smooth topography along oceanic hotspot tracks: Observations and scaling analysis
NASA Astrophysics Data System (ADS)
Orellana-Rovirosa, Felipe; Richards, Mark
2017-05-01
Some hotspot tracks are topographically smooth and broad (Nazca, Carnegie/Cocos/Galápagos, Walvis, Iceland), while others are rough and discontinuous (Easter/Sala y Gomez, Tristan-Gough, Louisville, St. Helena, Hawaiian-Emperor). Smooth topography occurs when the lithospheric age at emplacement is young, favoring intrusive magmatism, whereas rough topography is due to isolated volcanic edifices constructed on older/thicker lithosphere. The main controls on the balance of intrusive versus extrusive magmatism are expected to be the hotspot swell volume flux Qs, plate-hotspot relative speed v, and lithospheric elastic thickness Te, which can be combined as a dimensionless parameter R = (Qs/v)^(1/2)/Te, which represents the ratio of the plume heat to the lithospheric heat capacity. Observational constraints show that, except for the Ninetyeast Ridge, R is a good predictor of topographic character: for R < 1.5 hotspot tracks are topographically rough and dominated by volcanic edifices, whereas for R > 3 they are smooth and dominated by intrusion.
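As a small worked example of the scaling parameter (with entirely hypothetical input values, not ones from the paper), the dimensionless ratio can be computed directly:

```python
import numpy as np

# Hypothetical inputs, for illustration only
Qs = 1.0                       # hotspot swell volume flux, m^3/s
v = 0.05 / 3.156e7             # plate-hotspot relative speed: 5 cm/yr in m/s
Te = 15e3                      # lithospheric elastic thickness, m

R = np.sqrt(Qs / v) / Te
print(round(R, 2))             # ~1.7 here; R < 1.5 suggests rough, R > 3 smooth topography
```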
Use of generalised additive models to categorise continuous variables in clinical prediction
2013-01-01
Background In medical practice many, essentially continuous, clinical parameters tend to be categorised by physicians for ease of decision-making. Indeed, categorisation is a common practice both in medical research and in the development of clinical prediction rules, particularly where the ensuing models are to be applied in daily clinical practice to support clinicians in the decision-making process. Since the number of categories into which a continuous predictor must be categorised depends partly on the relationship between the predictor and the outcome, the need for more than two categories must be borne in mind. Methods We propose a categorisation methodology for clinical-prediction models, using Generalised Additive Models (GAMs) with P-spline smoothers to determine the relationship between the continuous predictor and the outcome. The proposed method consists of creating at least one average-risk category along with high- and low-risk categories based on the GAM smooth function. We applied this methodology to a prospective cohort of patients with exacerbated chronic obstructive pulmonary disease. The predictors selected were respiratory rate and partial pressure of carbon dioxide in the blood (PCO2), and the response variable was poor evolution. An additive logistic regression model was used to show the relationship between the covariates and the dichotomous response variable. The proposed categorisation was compared to the continuous predictor as the best option, using the AIC and AUC evaluation parameters. The sample was divided into a derivation (60%) and validation (40%) samples. The first was used to obtain the cut points while the second was used to validate the proposed methodology. Results The three-category proposal for the respiratory rate was ≤ 20;(20,24];> 24, for which the following values were obtained: AIC=314.5 and AUC=0.638. The respective values for the continuous predictor were AIC=317.1 and AUC=0.634, with no statistically significant differences being found between the two AUCs (p =0.079). The four-category proposal for PCO2 was ≤ 43;(43,52];(52,65];> 65, for which the following values were obtained: AIC=258.1 and AUC=0.81. No statistically significant differences were found between the AUC of the four-category option and that of the continuous predictor, which yielded an AIC of 250.3 and an AUC of 0.825 (p =0.115). Conclusions Our proposed method provides clinicians with the number and location of cut points for categorising variables, and performs as successfully as the original continuous predictor when it comes to developing clinical prediction rules. PMID:23802742
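A minimal sketch of the categorisation idea, assuming the pygam Python package as a stand-in for the GAM-with-P-splines machinery used in the paper; the simulated data, the ±5-percentage-point band that defines the average-risk category, and the grid-crossing rule for locating cut points are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from pygam import LogisticGAM, s

rng = np.random.default_rng(0)
resp_rate = rng.uniform(10, 40, 600)                       # hypothetical continuous predictor
p_true = 1 / (1 + np.exp(-(-6 + 0.25 * resp_rate)))        # true risk rises with the predictor
poor_evolution = rng.binomial(1, p_true)

gam = LogisticGAM(s(0, n_splines=15)).fit(resp_rate.reshape(-1, 1), poor_evolution)

XX = gam.generate_X_grid(term=0)                           # grid over the predictor
risk = gam.predict_proba(XX)                               # smooth estimated risk curve
mean_risk = poor_evolution.mean()
band = 0.05                                                # width of the average-risk band

def crossings(level):
    """Grid points where the smooth risk curve crosses a given risk level."""
    idx = np.where(np.diff(np.sign(risk - level)))[0]
    return XX[idx, 0]

cuts = np.sort(np.r_[crossings(mean_risk - band), crossings(mean_risk + band)])
print(np.round(cuts, 1))      # cut points delimiting low-, average- and high-risk categories
```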
Smoothing Forecasting Methods for Academic Library Circulations: An Evaluation and Recommendation.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Forys, John W., Jr.
1986-01-01
Circulation time-series data from 50 midwest academic libraries were used to test 110 variants of 8 smoothing forecasting methods. Data and methodologies and illustrations of two recommended methods--the single exponential smoothing method and Brown's one-parameter linear exponential smoothing method--are given. Eight references are cited. (EJS)
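For reference, the two recommended forecasting methods are easy to state in code. The sketch below (with made-up circulation figures and an arbitrary smoothing constant) shows single exponential smoothing and Brown's one-parameter linear exponential smoothing, which adds a trend term derived from a second smoothing pass.

```python
import numpy as np

def single_exponential(x, alpha):
    """Single exponential smoothing; returns the one-step-ahead forecast."""
    s = x[0]
    for xt in x[1:]:
        s = alpha * xt + (1 - alpha) * s
    return s

def brown_linear(x, alpha, m=1):
    """Brown's one-parameter linear exponential smoothing forecast, m steps ahead."""
    s1 = s2 = x[0]
    for xt in x[1:]:
        s1 = alpha * xt + (1 - alpha) * s1     # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2     # second smoothing
    a = 2 * s1 - s2                            # level
    b = alpha / (1 - alpha) * (s1 - s2)        # trend
    return a + b * m

# Hypothetical monthly circulation counts with a mild upward trend
circ = np.array([410, 395, 430, 445, 440, 460, 475, 470, 490, 505.0])
print(single_exponential(circ, alpha=0.3))   # ignores the trend
print(brown_linear(circ, alpha=0.3))         # extrapolates the trend one month ahead
```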
Automatic prediction of protein domains from sequence information using a hybrid learning system.
Nagarajan, Niranjan; Yona, Golan
2004-06-12
We describe a novel method for detecting the domain structure of a protein from sequence information alone. The method is based on analyzing multiple sequence alignments that are derived from a database search. Multiple measures are defined to quantify the domain information content of each position along the sequence and are combined into a single predictor using a neural network. The output is further smoothed and post-processed using a probabilistic model to predict the most likely transition positions between domains. The method was assessed using the domain definitions in SCOP and CATH for proteins of known structure and was compared with several other existing methods. Our method performs well both in terms of accuracy and sensitivity. It improves significantly over the best methods available, even some of the semi-manual ones, while being fully automatic. Our method can also be used to suggest and verify domain partitions based on structural data. A few examples of predicted domain definitions and alternative partitions, as suggested by our method, are also discussed. An online domain-prediction server is available at http://biozon.org/tools/domains/
Robust Smoothing: Smoothing Parameter Selection and Applications to Fluorescence Spectroscopy
Lee, Jong Soo; Cox, Dennis D.
2009-01-01
Fluorescence spectroscopy has emerged in recent years as an effective way to detect cervical cancer. Investigation of the data preprocessing stage uncovered a need for a robust smoothing to extract the signal from the noise. Various robust smoothing methods for estimating fluorescence emission spectra are compared and data driven methods for the selection of smoothing parameter are suggested. The methods currently implemented in R for smoothing parameter selection proved to be unsatisfactory, and a computationally efficient procedure that approximates robust leave-one-out cross validation is presented. PMID:20729976
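The abstract describes the general recipe, a robust smoother plus a data-driven, leave-one-out-style choice of smoothing parameter, without giving code. The sketch below is a generic stand-in, not the authors' procedure or their R implementation: a Gaussian-kernel smoother with bisquare robustness weights, with the bandwidth chosen to minimize the median absolute leave-one-out residual. The simulated "spectrum", candidate bandwidths, and nearest-neighbour prediction shortcut are assumptions.

```python
import numpy as np

def robust_kernel_smooth(x, y, h, n_iter=3):
    """Gaussian-kernel smoother with lowess-style bisquare robustness re-weighting."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    w = np.ones_like(y)
    for _ in range(n_iter):
        W = K * w[None, :]
        fit = (W @ y) / W.sum(axis=1)
        r = y - fit
        s = np.median(np.abs(r)) + 1e-12
        w = np.clip(1 - (r / (6 * s)) ** 2, 0, None) ** 2
    return fit

def loo_criterion(x, y, h):
    """Median absolute leave-one-out residual for bandwidth h (robust CV score)."""
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        fit = robust_kernel_smooth(x[mask], y[mask], h)
        j = np.argmin(np.abs(x[mask] - x[i]))   # crude prediction at the left-out point
        errs.append(abs(y[i] - fit[j]))
    return np.median(errs)

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 120)
y = np.sin(3 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
y[rng.integers(0, x.size, 6)] += 3.0                 # spiky outliers, as in raw spectra

bandwidths = [0.01, 0.02, 0.05, 0.1]
best_h = min(bandwidths, key=lambda h: loo_criterion(x, y, h))
smooth = robust_kernel_smooth(x, y, best_h)
print(best_h)
```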
NASA Astrophysics Data System (ADS)
Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.
2013-12-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
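A condensed sketch of the adaptive (nearest-neighbour) smoothing idea described above, not the USGS implementation: epicentres are smoothed with per-event Gaussian kernels whose bandwidth is the distance to the n-th nearest neighbour, so dense clusters get short smoothing distances and sparse background seismicity gets long ones. The synthetic catalogue, planar-kilometre coordinates, neighbour number, and minimum bandwidth are all assumptions.

```python
import numpy as np

def adaptive_smoothed_rates(epicenters, grid, n_neighbor=5, d_min=5.0):
    """Adaptively smoothed seismicity-rate map with per-event bandwidths."""
    # pairwise distances between epicenters (planar approximation)
    d = np.sqrt(((epicenters[:, None, :] - epicenters[None, :, :]) ** 2).sum(-1))
    d.sort(axis=1)
    h = np.maximum(d[:, n_neighbor], d_min)           # distance to n-th nearest neighbour

    # sum one 2-D Gaussian kernel per epicenter over the grid
    rates = np.zeros(len(grid))
    for (x0, y0), hi in zip(epicenters, h):
        r2 = ((grid[:, 0] - x0) ** 2 + (grid[:, 1] - y0) ** 2) / hi ** 2
        rates += np.exp(-0.5 * r2) / (2 * np.pi * hi ** 2)
    return rates          # expected events per unit area over the catalogue duration

rng = np.random.default_rng(3)
cluster = rng.normal([50, 50], 5, size=(200, 2))      # dense cluster of epicentres, km
background = rng.uniform(0, 200, size=(50, 2))        # sparse background seismicity
eq = np.vstack([cluster, background])

gx, gy = np.meshgrid(np.linspace(0, 200, 81), np.linspace(0, 200, 81))
grid = np.column_stack([gx.ravel(), gy.ravel()])
rate_map = adaptive_smoothed_rates(eq, grid).reshape(gx.shape)
print(rate_map.max(), rate_map.min())                 # high near the cluster, low elsewhere
```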
A coupling method for a cardiovascular simulation model which includes the Kalman filter.
Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya
2012-01-01
Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate different phenomena. However, coupling methods require a significant amount of computation, since a system of non-linear equations must be solved at each timestep. We therefore propose a coupling method which decreases the amount of computation by using the Kalman filter. In our method, the Kalman filter calculates approximations of the solution to the system of non-linear equations at each timestep. These approximations are then used as initial values for solving the system of non-linear equations. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method. When compared with a smoothing spline predictor, the proposed method required 49.4% fewer iterations.
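A toy illustration of the coupling strategy (not the authors' cardiovascular model): at each timestep a one-dimensional nonlinear coupling equation is solved with fsolve, and the number of function evaluations is compared between simply reusing the previous solution and seeding the solver with a prediction from a fixed-gain alpha-beta filter, used here as a simple stand-in for the Kalman filter. The equation, gains, and step count are arbitrary.

```python
import numpy as np
from scipy.optimize import fsolve

def coupling_residual(p, t):
    """Toy nonlinear coupling equation linking two sub-models through one unknown p."""
    return p ** 3 + 2.0 * p - (5.0 + 4.0 * np.sin(0.4 * t))

def predict_last(history):
    """Baseline predictor: reuse the previous timestep's solution."""
    return history[-1] if history else 1.0

def predict_alpha_beta(history, alpha=0.85, beta=0.3):
    """Fixed-gain alpha-beta filter (a crude Kalman-filter stand-in), re-run over the history."""
    if len(history) < 2:
        return predict_last(history)
    x, v = history[0], 0.0
    for z in history[1:]:
        x = x + v                              # predict
        r = z - x                              # innovation
        x, v = x + alpha * r, v + beta * r     # update
    return x + v                               # one-step-ahead prediction

def run(predictor, n_steps=200, dt=0.05):
    """Solve the coupling equation at every timestep, counting solver function evaluations."""
    total_nfev, history = 0, []
    for k in range(n_steps):
        p0 = predictor(history)
        sol, info, _, _ = fsolve(coupling_residual, p0, args=(dt * k,), full_output=True)
        total_nfev += info["nfev"]
        history.append(sol[0])
    return total_nfev

print(run(predict_last), run(predict_alpha_beta))   # total evaluations for each seeding strategy
```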
Siddiqui, Hasib; Bouman, Charles A
2007-03-01
Conventional halftoning methods employed in electrophotographic printers tend to produce Moiré artifacts when used for printing images scanned from printed material, such as books and magazines. We present a novel approach for descreening color scanned documents aimed at providing an efficient solution to the Moiré problem in practical imaging devices, including copiers and multifunction printers. The algorithm works by combining two nonlinear image-processing techniques, resolution synthesis-based denoising (RSD), and modified smallest univalue segment assimilating nucleus (SUSAN) filtering. The RSD predictor is based on a stochastic image model whose parameters are optimized beforehand in a separate training procedure. Using the optimized parameters, RSD classifies the local window around the current pixel in the scanned image and applies filters optimized for the selected classes. The output of the RSD predictor is treated as a first-order estimate to the descreened image. The modified SUSAN filter uses the output of RSD for performing an edge-preserving smoothing on the raw scanned data and produces the final output of the descreening algorithm. Our method does not require any knowledge of the screening method, such as the screen frequency or dither matrix coefficients, that produced the printed original. The proposed scheme not only suppresses the Moiré artifacts, but, in addition, can be trained with intrinsic sharpening for deblurring scanned documents. Finally, once optimized for a periodic clustered-dot halftoning method, the same algorithm can be used to inverse halftone scanned images containing stochastic error diffusion halftone noise.
Power prediction in mobile communication systems using an optimal neural-network structure.
Gao, X M; Gao, X Z; Tanskanen, J A; Ovaska, S J
1997-01-01
Presents a novel neural-network-based predictor for received power level prediction in direct sequence code division multiple access (DS/CDMA) systems. The predictor consists of an adaptive linear element (Adaline) followed by a multilayer perceptron (MLP). An important but difficult problem in designing such a cascade predictor is to determine the complexity of the networks. We solve this problem by using the predictive minimum description length (PMDL) principle to select the optimal numbers of input and hidden nodes. This approach results in a predictor with both good noise attenuation and excellent generalization capability. The optimized neural networks are used for predictive filtering of very noisy Rayleigh fading signals with 1.8 GHz carrier frequency. Our results show that the optimal neural predictor can provide smoothed in-phase and quadrature signals with signal-to-noise ratio (SNR) gains of about 12 and 7 dB at the urban mobile speeds of 5 and 50 km/h, respectively. The corresponding power signal SNR gains are about 11 and 5 dB. Therefore, the neural predictor is well suited to power control applications where "delayless" noise attenuation and efficient reduction of fast fading are required.
NASA Astrophysics Data System (ADS)
Sugio, Tetsuya; Yamamoto, Masayoshi; Funabiki, Shigeyuki
The use of an SMES (Superconducting Magnetic Energy Storage) for smoothing power fluctuations in a railway substation has been discussed. This paper proposes a smoothing control method based on fuzzy reasoning for reducing the SMES capacity at substations along high-speed railways. The proposed smoothing control method comprises three countermeasures for reduction of the SMES capacity. The first countermeasure involves modification of rule 1 for smoothing out the fluctuating electric power to its average value. The other countermeasures involve the modification of the central value of the stored energy control in the SMES and revision of the membership function in rule 2 for reduction of the SMES capacity. The SMES capacity in the proposed smoothing control method is reduced by 49.5% when compared to that in the nonrevised control method. It is confirmed by computer simulations that the proposed control method is suitable for smoothing out power fluctuations in substations along high-speed railways and for reducing the SMES capacity.
Simulated Annealing in the Variable Landscape
NASA Astrophysics Data System (ADS)
Hasegawa, Manabu; Kim, Chang Ju
An experimental analysis is conducted to test whether the appropriate introduction of the smoothness-temperature schedule enhances the optimizing ability of the MASSS method, the combination of the Metropolis algorithm (MA) and the search-space smoothing (SSS) method. The test is performed on two types of random traveling salesman problems. The results show that the optimization performance of the MA is substantially improved by a single smoothing alone and slightly more by a single smoothing with cooling and by a de-smoothing process with heating. The performance is compared to that of the parallel tempering method and a clear advantage of the idea of smoothing is observed depending on the problem.
Walia, Rasna R; Caragea, Cornelia; Lewis, Benjamin A; Towfic, Fadi; Terribilini, Michael; El-Manzalawy, Yasser; Dobbs, Drena; Honavar, Vasant
2012-05-10
RNA molecules play diverse functional and structural roles in cells. They function as messengers for transferring genetic information from DNA to proteins, as the primary genetic material in many viruses, as catalysts (ribozymes) important for protein synthesis and RNA processing, and as essential and ubiquitous regulators of gene expression in living organisms. Many of these functions depend on precisely orchestrated interactions between RNA molecules and specific proteins in cells. Understanding the molecular mechanisms by which proteins recognize and bind RNA is essential for comprehending the functional implications of these interactions, but the recognition 'code' that mediates interactions between proteins and RNA is not yet understood. Success in deciphering this code would dramatically impact the development of new therapeutic strategies for intervening in devastating diseases such as AIDS and cancer. Because of the high cost of experimental determination of protein-RNA interfaces, there is an increasing reliance on statistical machine learning methods for training predictors of RNA-binding residues in proteins. However, because of differences in the choice of datasets, performance measures, and data representations used, it has been difficult to obtain an accurate assessment of the current state of the art in protein-RNA interface prediction. We provide a review of published approaches for predicting RNA-binding residues in proteins and a systematic comparison and critical assessment of protein-RNA interface residue predictors trained using these approaches on three carefully curated non-redundant datasets. We directly compare two widely used machine learning algorithms (Naïve Bayes (NB) and Support Vector Machine (SVM)) using three different data representations in which features are encoded using either sequence- or structure-based windows. Our results show that (i) Sequence-based classifiers that use a position-specific scoring matrix (PSSM)-based representation (PSSMSeq) outperform those that use an amino acid identity based representation (IDSeq) or a smoothed PSSM (SmoPSSMSeq); (ii) Structure-based classifiers that use smoothed PSSM representation (SmoPSSMStr) outperform those that use PSSM (PSSMStr) as well as sequence identity based representation (IDStr). PSSMSeq classifiers, when tested on an independent test set of 44 proteins, achieve performance that is comparable to that of three state-of-the-art structure-based predictors (including those that exploit geometric features) in terms of Matthews Correlation Coefficient (MCC), although the structure-based methods achieve substantially higher Specificity (albeit at the expense of Sensitivity) compared to sequence-based methods. We also find that the expected performance of the classifiers on a residue level can be markedly different from that on a protein level. Our experiments show that the classifiers trained on three different non-redundant protein-RNA interface datasets achieve comparable cross-validation performance. However, we find that the results are significantly affected by differences in the distance threshold used to define interface residues. Our results demonstrate that protein-RNA interface residue predictors that use a PSSM-based encoding of sequence windows outperform classifiers that use other encodings of sequence windows. 
While structure-based methods that exploit geometric features can yield significant increases in the Specificity of protein-RNA interface residue predictions, such increases are offset by decreases in Sensitivity. These results underscore the importance of comparing alternative methods using rigorous statistical procedures, multiple performance measures, and datasets that are constructed based on several alternative definitions of interface residues and redundancy cutoffs as well as including evaluations on independent test sets into the comparisons.
Tian, Xinyu; Wang, Xuefeng; Chen, Jun
2014-01-01
The classic multinomial logit model, commonly used in multiclass regression problems, is restricted to few predictors and does not take into account the relationships among variables. It has limited use for genomic data, where the number of genomic features far exceeds the sample size. Genomic features such as gene expressions are usually related by an underlying biological network. Efficient use of the network information is important to improve classification performance as well as biological interpretability. We propose a multinomial logit model that is capable of addressing both the high dimensionality of the predictors and the underlying network information. Group lasso is used to induce model sparsity, and a network constraint is imposed to induce smoothness of the coefficients with respect to the underlying network structure. To deal with the non-smoothness of the objective function in optimization, we developed a proximal gradient algorithm for efficient computation. The proposed model was compared to models with no prior structure information in both simulations and a problem of cancer subtype prediction with real TCGA (The Cancer Genome Atlas) gene expression data. The network-constrained model outperformed the traditional ones in both cases.
An adaptive segment method for smoothing lidar signal based on noise estimation
NASA Astrophysics Data System (ADS)
Wang, Yuzhao; Luo, Pingping
2014-10-01
An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise level is defined as 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed; the ASSM instead creates changing end points for different signals, so the smoothing windows can be set adaptively. The windows are always set to half of the segment length and an average (moving-mean) smoothing is then applied within each segment. An iterative process is required to reduce the end-point aberration effect of the average smoothing, and two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, which means that frequency-domain disturbances are avoided. A lidar echo was simulated in the experimental work. The echo was assumed to be created by a space-borne lidar (e.g. CALIOP), and white Gaussian noise was added to the echo to represent the random noise arising from the environment and the detector. The novel method, ASSM, was applied to the noisy echo to filter the noise. In the test, N was set to 3 and the number of iterations was two. The results show that the signal can be smoothed adaptively by the ASSM, but N and the number of iterations may need to be optimized when the ASSM is applied to a different lidar.
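The segmentation rule above is simple enough to sketch directly. The code below is a rough stand-in for the ASSM, not the authors' implementation: the break-point detection and half-segment moving-average window follow the description, while the reflect-padding, the simulated echo shape, and the background region used to estimate σ are assumptions.

```python
import numpy as np

def assm_smooth(signal, background, N=3, n_iter=2):
    """Sketch of adaptive segmentation smoothing (ASSM).

    The noise level is 3*sigma of a background region; wherever two adjacent
    samples differ by more than 3*N*sigma the signal is split, and each segment
    is smoothed separately with a moving average whose window is half the
    segment length, repeated n_iter times.
    """
    sigma = np.std(background)
    breaks = np.where(np.abs(np.diff(signal)) > 3 * N * sigma)[0] + 1
    edges = np.r_[0, breaks, len(signal)]

    out = signal.astype(float).copy()
    for a, b in zip(edges[:-1], edges[1:]):
        seg = out[a:b]
        win = max(len(seg) // 2, 1)
        kernel = np.ones(win) / win
        for _ in range(n_iter):
            pad = np.r_[seg[win - 1::-1], seg, seg[:-win - 1:-1]]   # reflect the ends
            seg = np.convolve(pad, kernel, mode="same")[win:win + len(seg)]
        out[a:b] = seg
    return out

# Simulated echo: decaying molecular return, a sharp cloud layer, and white noise
rng = np.random.default_rng(4)
z = np.arange(1000)
clean = 5.0 * np.exp(-z / 300.0)
clean[400:430] += 2.0
noisy = clean + 0.2 * rng.standard_normal(z.size)

smoothed = assm_smooth(noisy, background=noisy[-200:], N=3, n_iter=2)
print(np.abs(smoothed - clean).mean(), np.abs(noisy - clean).mean())
```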
Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking
NASA Astrophysics Data System (ADS)
Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.
2009-08-01
The Interacting Multiple Model (IMM) estimator has been proven to be effective in tracking agile targets. Smoothing or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of target states. Various methods have been proposed for multiple model smoothing in the literature. In this paper, a new smoothing method, which involves forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM, is proposed. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode-conditioned smoother uses the standard Kalman smoothing recursion. The resulting algorithm provides improved but delayed estimates of the target states. Simulation studies are performed to demonstrate the improved performance with a maneuvering target scenario. The comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique for accounting for model switching in smoothing is key to improving the performance.
Wang, Yueyan; Ponce, Ninez A; Wang, Pan; Opsomer, Jean D; Yu, Hongjian
2015-12-01
We propose a method to meet challenges in generating health estimates for granular geographic areas in which the survey sample size is extremely small. Our generalized linear mixed model predicts health outcomes using both individual-level and neighborhood-level predictors. The model's feature of nonparametric smoothing function on neighborhood-level variables better captures the association between neighborhood environment and the outcome. Using 2011 to 2012 data from the California Health Interview Survey, we demonstrate an empirical application of this method to estimate the fraction of residents without health insurance for Zip Code Tabulation Areas (ZCTAs). Our method generated stable estimates of uninsurance for 1519 of 1765 ZCTAs (86%) in California. For some areas with great socioeconomic diversity across adjacent neighborhoods, such as Los Angeles County, the modeled uninsured estimates revealed much heterogeneity among geographically adjacent ZCTAs. The proposed method can increase the value of health surveys by providing modeled estimates for health data at a granular geographic level. It can account for variations in health outcomes at the neighborhood level as a result of both socioeconomic characteristics and geographic locations.
A smoothed two- and three-dimensional interface reconstruction method
Mosso, Stewart; Garasi, Christopher; Drake, Richard
2008-04-22
The Patterned Interface Reconstruction algorithm reduces the discontinuity between material interfaces in neighboring computational elements. This smoothing improves the accuracy of the reconstruction for smooth bodies. The method can be used in two- and three-dimensional Cartesian and unstructured meshes. Planar interfaces will be returned for planar volume fraction distributions. Finally, the algorithm is second-order accurate for smooth volume fraction distributions.
Spline-Based Smoothing of Airfoil Curvatures
NASA Technical Reports Server (NTRS)
Li, W.; Krist, S.
2008-01-01
Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS, as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance. CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increased curvature smoothness by eliminating curvature oscillations and bumps.
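The deviation-from-smoothness measure is easy to reproduce. The sketch below (an idealized example, not NASA's CFACS code) computes the sum of squared jumps in the third derivative of a cubic-spline interpolant and contrasts an exactly smooth profile with a slightly wiggly one; the constrained minimization that CFACS builds around this measure is not shown.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def third_derivative_jump_measure(x, y):
    """Sum of squared jumps, across interior knots, of the cubic spline's third derivative."""
    cs = CubicSpline(x, y)
    d3 = 6.0 * cs.c[0]              # third derivative is constant on each interval
    return float(np.sum(np.diff(d3) ** 2))

x = np.linspace(0.0, 1.0, 40)
smooth_profile = 0.5 * x * (1.0 - x)                 # a cubic spline reproduces this exactly
wiggly_profile = smooth_profile + 1e-4 * np.sin(60 * x)

print(third_derivative_jump_measure(x, smooth_profile))   # essentially zero
print(third_derivative_jump_measure(x, wiggly_profile))   # orders of magnitude larger
```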
An approach for spherical harmonic analysis of non-smooth data
NASA Astrophysics Data System (ADS)
Wang, Hansheng; Wu, Patrick; Wang, Zhiyong
2006-12-01
A method is proposed to evaluate the spherical harmonic coefficients of a global or regional, non-smooth, observable dataset sampled on an equiangular grid. The method is based on an integration strategy using new recursion relations. Because a bilinear function is used to interpolate points within the grid cells, this method is suitable for non-smooth data; the slope of the data may be piecewise continuous, with extreme changes at the boundaries. In order to validate the method, the coefficients of an axisymmetric model are computed, and compared with the derived analytical expressions. Numerical results show that this method is indeed reasonable for non-smooth models, and that the maximum degree for spherical harmonic analysis should be empirically determined by several factors including the model resolution and the degree of non-smoothness in the dataset, and it can be several times larger than the total number of latitudinal grid points. It is also shown that this method is appropriate for the approximate analysis of a smooth dataset. Moreover, this paper provides the program flowchart and an internet address where the FORTRAN code with program specifications are made available.
Conference on Satellite Meteorology and Oceanography, 6th, Atlanta, GA, Jan. 5-10, 1992, Preprints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The present volume on satellite meteorology and oceanography discusses cloud retrieval from collocated IR sounder data and imaging systems, satellite retrievals of marine stratiform cloud systems, multispectral analysis of satellite observations of smoke and dust, and image and graphical analysis of principal components of satellite sounding channels. Attention is given to an evaluation of results from classification retrieval methods, the use of TOVS radiances, estimation of path radiance on the basis of remotely sensed data, and a reexamination of SST as a predictor for tropical storm intensity. Topics addressed include optimal smoothing of GOES VAS for upper-atmosphere thermal waves, obtaining cloud motion vectors from polar orbiting satellites, the use of cloud relative animation in the analysis of satellite data, and investigations of a polar low using geostationary satellite data.
Waller, Niels G
2016-01-01
For a fixed set of standardized regression coefficients and a fixed coefficient of determination (R-squared), an infinite number of predictor correlation matrices will satisfy the implied quadratic form. I call such matrices fungible correlation matrices. In this article, I describe an algorithm for generating positive definite (PD), positive semidefinite (PSD), or indefinite (ID) fungible correlation matrices that have a random or fixed smallest eigenvalue. The underlying equations of this algorithm are reviewed from both algebraic and geometric perspectives. Two simulation studies illustrate that fungible correlation matrices can be profitably used in Monte Carlo research. The first study uses PD fungible correlation matrices to compare penalized regression algorithms. The second study uses ID fungible correlation matrices to compare matrix-smoothing algorithms. R code for generating fungible correlation matrices is presented in the supplemental materials.
Fast function-on-scalar regression with penalized basis expansions.
Reiss, Philip T; Huang, Lei; Mennes, Maarten
2010-01-01
Regression models for functional responses and scalar predictors are often fitted by means of basis functions, with quadratic roughness penalties applied to avoid overfitting. The fitting approach described by Ramsay and Silverman in the 1990s amounts to a penalized ordinary least squares (P-OLS) estimator of the coefficient functions. We recast this estimator as a generalized ridge regression estimator, and present a penalized generalized least squares (P-GLS) alternative. We describe algorithms by which both estimators can be implemented, with automatic selection of optimal smoothing parameters, in a more computationally efficient manner than has heretofore been available. We discuss pointwise confidence intervals for the coefficient functions, simultaneous inference by permutation tests, and model selection, including a novel notion of pointwise model selection. P-OLS and P-GLS are compared in a simulation study. Our methods are illustrated with an analysis of age effects in a functional magnetic resonance imaging data set, as well as a reanalysis of a now-classic Canadian weather data set. An R package implementing the methods is publicly available.
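The P-OLS estimator discussed above has the familiar penalized least-squares form. A minimal sketch of that form for a single scalar response curve, with a hypothetical truncated-power basis and a second-difference roughness penalty (not the authors' implementation or their functional-response setting), is:

    import numpy as np

    def penalized_ols(B, y, lam):
        """Penalized OLS: minimize ||y - B @ beta||^2 + lam * ||D @ beta||^2,
        where D is a second-difference penalty matrix (a common roughness penalty)."""
        k = B.shape[1]
        D = np.diff(np.eye(k), n=2, axis=0)          # second-difference operator
        lhs = B.T @ B + lam * (D.T @ D)
        return np.linalg.solve(lhs, B.T @ y)

    # Hypothetical example: smooth a noisy curve with a truncated-power basis.
    t = np.linspace(0, 1, 200)
    knots = np.linspace(0, 1, 20)
    B = np.column_stack([np.ones_like(t), t] +
                        [np.maximum(t - kn, 0.0) ** 3 for kn in knots])
    y = np.sin(2 * np.pi * t) + 0.1 * np.random.randn(t.size)
    beta = penalized_ols(B, y, lam=1.0)
    fit = B @ beta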
Functional Additive Mixed Models
Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja
2014-01-01
We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework are based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592
NASA Astrophysics Data System (ADS)
Huang, X.; Hu, K.; Ling, X.; Zhang, Y.; Lu, Z.; Zhou, G.
2017-09-01
This paper introduces a novel global patch matching method that focuses on how to remove fronto-parallel bias and obtain continuous smooth surfaces, assuming that the scenes covered by the stereo pairs are piecewise continuous. First, the simple linear iterative clustering (SLIC) method is used to segment the base image into a series of patches. Then, a global energy function, which consists of a data term and a smoothness term, is built on the patches. The data term is the second-order Taylor expansion of correlation coefficients, and the smoothness term is built by combining connectivity constraints and coplanarity constraints. Finally, the global energy function is obtained by combining the data term and the smoothness term. We rewrite the global energy function as a quadratic matrix function and use least squares methods to obtain the optimal solution. Experiments on the Adirondack and Motorcycle stereo pairs of the Middlebury benchmark show that the proposed method removes fronto-parallel bias effectively and produces continuous smooth surfaces.
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
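As a loose sketch of an S-system-type equation with a power-law efflux term of the kind mentioned above (all parameter values and the influx profile are assumptions, not taken from the paper):

    import numpy as np
    from scipy.integrate import solve_ivp

    # S-system-type smoothing equation with a power-law efflux term:
    #   dX/dt = influx(t) - alpha * X**g
    def make_rhs(influx, alpha, g):
        return lambda t, x: influx(t) - alpha * x ** g

    influx = lambda t: 1.0 + 0.5 * np.sin(0.5 * t)   # assumed influx profile
    rhs = make_rhs(influx, alpha=0.8, g=1.2)

    sol = solve_ivp(rhs, t_span=(0.0, 20.0), y0=[1.0], dense_output=True)
    t_grid = np.linspace(0.0, 20.0, 200)
    x_smooth = sol.sol(t_grid)[0]   # smooth curve; rhs gives its slope as well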
Du, Shouqiang; Chen, Miao
2018-01-01
We consider a kind of nonsmooth optimization problem with [Formula: see text]-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The method is based on the Polak-Ribière-Polyak conjugate gradient method. Because the Polak-Ribière-Polyak conjugate gradient method has good numerical properties, the proposed method possesses the sufficient descent property without any line searches and is also proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.
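A common smoothing approximation used to turn an l1-type term into a differentiable objective, which may differ from the authors' exact smoothing function, is sqrt(x^2 + mu^2); a minimal sketch:

    import numpy as np

    def smooth_abs(x, mu):
        """A common smoothing approximation of |x|: sqrt(x^2 + mu^2)."""
        return np.sqrt(x * x + mu * mu)

    def smooth_abs_grad(x, mu):
        """Gradient of the smoothed absolute value (well defined at x = 0)."""
        return x / np.sqrt(x * x + mu * mu)

    # The smoothed l1 penalty sum_i smooth_abs(x_i, mu) can then be minimized
    # with any gradient-based method (e.g., a conjugate gradient iteration).
    x = np.array([-1.5, 0.0, 0.3])
    print(smooth_abs(x, mu=0.1), smooth_abs_grad(x, mu=0.1))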
Suppression of stochastic pulsation in laser-plasma interaction by smoothing methods
NASA Astrophysics Data System (ADS)
Hora, Heinrich; Aydin, Meral
1992-04-01
The control of the very complex behavior of a plasma with laser interaction by smoothing with induced spatial incoherence or other methods was related to improving the lateral uniformity of the irradiation. While this is important, it is shown from numerical hydrodynamic studies that the very strong temporal pulsation (stuttering) will mostly be suppressed by these smoothing methods too.
Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.
Du, Pang; Tang, Liansheng
2009-01-30
When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.
Nonequilibrium flows with smooth particle applied mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kum, Oyeon
1995-07-01
Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expression for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities, which require second derivatives in space when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel link to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for the three weighting functions: B spline, Lucy, and cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Benard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number where grid-based methods fail.
SSD with generalized phase modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rothenberg, J.
1996-01-09
Smoothing by spectral dispersion (SSD) with standard frequency modulation (FM), although simple to implement, has the disadvantage that low spatial frequencies present in the spectrum of the target illumination are not smoothed as effectively as with a more general smoothing method (e.g., the induced spatial incoherence method). The reduced smoothing performance of standard FM-SSD can result in spectral power of the speckle noise at these low spatial frequencies as much as one order of magnitude larger than that achieved with a more general method. In fact, at small integration times FM-SSD has no smoothing effect at all for a broad band of low spatial frequencies. This effect may have important implications for both direct and indirect drive ICF.
A Pragmatic Smoothing Method for Improving the Quality of the Results in Atomic Spectroscopy
NASA Astrophysics Data System (ADS)
Bennun, Leonardo
2017-07-01
A new smoothing method is presented for improving the identification and quantification of spectral functions, based on prior knowledge of the signals that are expected to be quantified. These signals are used as weighted coefficients in the smoothing algorithm. This smoothing method was conceived to be applied in atomic and nuclear spectroscopies, preferably in techniques where net counts are proportional to acquisition time, such as particle-induced X-ray emission (PIXE) and other X-ray fluorescence spectroscopic methods. This algorithm, when properly applied, does not distort the form or the intensity of the signal, so it is well suited for all kinds of spectroscopic techniques. The method is extremely effective at reducing high-frequency noise in the signal, much more efficiently than a single rectangular smooth of the same width. As with all smoothing techniques, the proposed method improves the precision of the results, but in this case we also found a systematic improvement in the accuracy of the results. We still have to evaluate the improvement in the quality of the results when this method is applied to real experimental data. We expect better characterization of the net area quantification of the peaks and smaller detection and quantification limits. We have applied this method to signals that obey Poisson statistics, but with the same ideas and criteria it could be applied to time series. In general, when this algorithm is applied to experimental results, the sought characteristic functions required for this weighted smoothing method should be obtained from a system with strong stability. If the sought signals are not perfectly clean, this method should be applied carefully.
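The abstract does not spell out the algorithm; one plausible reading is a weighted moving average whose weights follow the expected signal shape. The sketch below assumes a Gaussian peak profile as the weighting kernel and a synthetic Poisson spectrum:

    import numpy as np

    def shape_weighted_smooth(counts, kernel):
        """Weighted moving average whose weights follow an expected signal shape."""
        kernel = np.asarray(kernel, dtype=float)
        kernel = kernel / kernel.sum()          # normalize so counts are preserved
        return np.convolve(counts, kernel, mode="same")

    # Hypothetical expected peak shape (Gaussian) used as the weighting kernel.
    half_width = 5
    i = np.arange(-half_width, half_width + 1)
    kernel = np.exp(-0.5 * (i / 2.0) ** 2)

    # Poisson-distributed spectrum: a peak on a flat background.
    channels = np.arange(200)
    expected = 20 + 100 * np.exp(-0.5 * ((channels - 100) / 4.0) ** 2)
    spectrum = np.random.poisson(expected)
    smoothed = shape_weighted_smooth(spectrum, kernel)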
Empirical Bayes methods for smoothing data and for simultaneous estimation of many parameters.
Yanagimoto, T; Kashiwagi, N
1990-01-01
A recent successful development is found in a series of innovative, new statistical methods for smoothing data that are based on the empirical Bayes method. This paper emphasizes their practical usefulness in medical sciences and their theoretically close relationship with the problem of simultaneous estimation of parameters, depending on strata. The paper also presents two examples of analyzing epidemiological data obtained in Japan using the smoothing methods to illustrate their favorable performance. PMID:2148512
NASA Astrophysics Data System (ADS)
Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi
2017-10-01
When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicate that the ASC is superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
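The classic Laplacian smoothness constraint (LSC) that the ASC builds on amounts to Tikhonov regularization. A minimal sketch for a 1-D slip model with a second-difference Laplacian (the ASC itself and Helmert variance component estimation are not implemented; the Green's function matrix is hypothetical):

    import numpy as np

    def regularized_slip_inversion(G, d, lam):
        """Solve min ||G m - d||^2 + lam^2 ||L m||^2 with a 1-D Laplacian L
        (the classic smoothness constraint; the adaptive ASC is not shown here)."""
        n = G.shape[1]
        L = np.diff(np.eye(n), n=2, axis=0)          # discrete second differences
        A = np.vstack([G, lam * L])
        b = np.concatenate([d, np.zeros(L.shape[0])])
        m, *_ = np.linalg.lstsq(A, b, rcond=None)
        return m

    # Hypothetical Green's function matrix and observations.
    rng = np.random.default_rng(0)
    G = rng.standard_normal((50, 30))
    m_true = np.sin(np.linspace(0, np.pi, 30))
    d = G @ m_true + 0.05 * rng.standard_normal(50)
    m_est = regularized_slip_inversion(G, d, lam=1.0)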
ERIC Educational Resources Information Center
Cui, Zhongmin; Kolen, Michael J.
2009-01-01
This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…
NASA Astrophysics Data System (ADS)
Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Makhno, D. E.; Fedotov, K. V.
2018-03-01
The paper aims to analyze vibrations of the dynamic system equivalent of the suspension system, with regard to the tyre's ability to smooth road irregularities. The research is based on the static dynamics of linear automated control systems and on methods of correlation, spectral, and numerical analysis. Introducing new data on the smoothing effect of the pneumatic tyre, reflecting changes in the contact area between the wheel and the road under suspension vibrations, makes the system non-linear and requires numerical analysis methods. Taking the variable smoothing ability of the tyre into account when calculating suspension vibrations brings calculated results closer to experimental ones and improves on the assumption of a constant smoothing ability of the tyre.
Smoothing of climate time series revisited
NASA Astrophysics Data System (ADS)
Mann, Michael E.
2008-08-01
We present an easily implemented method for smoothing climate time series, generalizing upon an approach previously described by Mann (2004). The method adaptively weights the three lowest order time series boundary constraints to optimize the fit with the raw time series. We apply the method to the instrumental global mean temperature series from 1850-2007 and to various surrogate global mean temperature series from 1850-2100 derived from the CMIP3 multimodel intercomparison project. These applications demonstrate that the adaptive method systematically out-performs certain widely used default smoothing methods, and is more likely to yield accurate assessments of long-term warming trends.
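The adaptive weighting of boundary constraints is not reproduced here; as a loose illustration of the underlying idea, lowpass smoothing whose endpoint behavior depends on the assumed boundary treatment, the sketch below compares several boundary paddings of a Butterworth filter and keeps the one that best fits the raw series (all filter settings and data are assumptions):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def smooth_with_best_boundary(y, cutoff=0.1, order=4):
        """Lowpass-smooth a series, trying several boundary treatments and
        keeping the one with the smallest misfit to the raw series."""
        b, a = butter(order, cutoff)                 # normalized cutoff (Nyquist = 1)
        best = None
        for padtype in ("odd", "even", "constant"):  # three boundary treatments
            s = filtfilt(b, a, y, padtype=padtype)
            mse = np.mean((s - y) ** 2)
            if best is None or mse < best[0]:
                best = (mse, s, padtype)
        return best[1], best[2]

    # Hypothetical annual-mean temperature anomalies.
    t = np.arange(1850, 2008)
    y = 0.005 * (t - 1850) + 0.2 * np.random.randn(t.size)
    smoothed, chosen = smooth_with_best_boundary(y)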
Pavement smoothness indices : research brief.
DOT National Transportation Integrated Search
1998-08-01
Many in the asphalt industry believe that initial pavement smoothness directly relates to : pavement life. Public perception of smoothness is also important. Oregon is interested in : determining the appropriate method of measurement to quantify smoo...
NASA Astrophysics Data System (ADS)
Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng
2017-06-01
A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency performance is poorer than expected, and existing smoothing models cannot explicitly specify the methods to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were o
NASA Astrophysics Data System (ADS)
Žáček, K.
The only way to make an excessively complex velocity model suitable for application of ray-based methods, such as the Gaussian beam or Gaussian packet methods, is to smooth it. We have smoothed the Marmousi model by choosing a coarser grid and by minimizing the second spatial derivatives of the slowness. This was done by minimizing the relevant Sobolev norm of slowness. We show that minimizing the relevant Sobolev norm of slowness is a suitable technique for preparing the optimum models for asymptotic ray theory methods. However, the price we pay for a model suitable for ray tracing is an increase of the difference between the smoothed and original model. Similarly, the estimated error in the travel time also increases due to the difference between the models. In smoothing the Marmousi model, we have found the estimated error of travel times at the verge of acceptability. Due to the low frequencies in the wavefield of the original Marmousi data set, we have found the Gaussian beams and Gaussian packets at the verge of applicability even in models sufficiently smoothed for ray tracing.
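A discrete analogue of the idea above, penalizing second spatial derivatives of slowness while staying close to the original model, can be sketched with a sparse second-difference penalty (this is an illustrative stand-in, not the authors' Sobolev-norm implementation, and the slowness grid is hypothetical):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def smooth_slowness(s0, lam):
        """Minimize ||s - s0||^2 + lam * (||Dxx s||^2 + ||Dyy s||^2)
        on a regular grid, where Dxx, Dyy are second-difference operators."""
        ny, nx = s0.shape

        def second_diff(n):
            return sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))

        Ix, Iy = sp.identity(nx), sp.identity(ny)
        Dxx = sp.kron(Iy, second_diff(nx))           # d^2/dx^2 on the flattened grid
        Dyy = sp.kron(second_diff(ny), Ix)           # d^2/dy^2 on the flattened grid
        A = sp.identity(nx * ny) + lam * (Dxx.T @ Dxx + Dyy.T @ Dyy)
        s = spsolve(A.tocsc(), s0.ravel())
        return s.reshape(ny, nx)

    # Hypothetical rough slowness model (not the Marmousi model).
    rng = np.random.default_rng(1)
    s0 = 0.4 + 0.05 * rng.standard_normal((60, 80))
    s_smooth = smooth_slowness(s0, lam=10.0)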
Smoothing optimization of supporting quadratic surfaces with Zernike polynomials
NASA Astrophysics Data System (ADS)
Zhang, Hang; Lu, Jiandong; Liu, Rui; Ma, Peifu
2018-03-01
A new optimization method to obtain a smooth freeform optical surface from an initial surface generated by the supporting quadratic method (SQM) is proposed. To smooth the initial surface, a 9-vertex system from the neighboring quadratic surfaces and Zernike polynomials are employed to establish a linear equation system. A locally optimized surface for the 9-vertex system can be built by solving the equations. Finally, a continuous smooth optimized surface is constructed by stitching together the results of applying the above algorithm over the whole initial surface. The spot corresponding to the optimized surface is no longer a set of discrete pixels but a continuous distribution.
Strappini, Francesca; Gilboa, Elad; Pitzalis, Sabrina; Kay, Kendrick; McAvoy, Mark; Nehorai, Arye; Snyder, Abraham Z
2017-03-01
Temporal and spatial filtering of fMRI data is often used to improve statistical power. However, conventional methods, such as smoothing with fixed-width Gaussian filters, remove fine-scale structure in the data, necessitating a tradeoff between sensitivity and specificity. Specifically, smoothing may increase sensitivity (reduce noise and increase statistical power) but at the cost of specificity, in that fine-scale structure in neural activity patterns is lost. Here, we propose an alternative smoothing method based on Gaussian process (GP) regression for single-subject fMRI experiments. This method adapts the level of smoothing on a voxel-by-voxel basis according to the characteristics of the local neural activity patterns. GP-based fMRI analysis has heretofore been impractical owing to computational demands. Here, we demonstrate a new implementation of GP that makes it possible to handle the massive data dimensionality of the typical fMRI experiment. We demonstrate how GP can be used as a drop-in replacement for conventional preprocessing steps for temporal and spatial smoothing in a standard fMRI pipeline. We present simulated and experimental results that show increased sensitivity and specificity compared to conventional smoothing strategies. Hum Brain Mapp 38:1438-1459, 2017. © 2016 Wiley Periodicals, Inc.
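The paper's large-scale GP implementation is not shown here; the sketch below merely illustrates GP regression as a smoother on a single hypothetical 1-D time course using scikit-learn, with the kernel choice and noise level as assumptions:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical noisy BOLD-like time course for one voxel.
    t = np.linspace(0, 100, 200)[:, None]
    signal = np.sin(2 * np.pi * t[:, 0] / 25.0)
    y = signal + 0.3 * np.random.randn(t.shape[0])

    # RBF kernel with a learned length scale plus a white-noise term; the
    # amount of smoothing adapts to the data through hyperparameter fitting.
    kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(t, y)
    y_smooth, y_std = gp.predict(t, return_std=True)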
Performance of time-series methods in forecasting the demand for red blood cell transfusion.
Pereira, Arturo
2004-05-01
Planning future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care, university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the younger to test the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)12 model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the +/- 10 percent interval of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rate for the three methods was 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within the +/- 10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in the planning of blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
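A minimal sketch of the seasonal exponential smoothing and seasonal ARIMA fits discussed above, using statsmodels on a hypothetical monthly demand series (the ARIMA orders follow the abstract; everything else is assumed):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Hypothetical monthly RBC demand with trend and yearly seasonality.
    idx = pd.date_range("1988-01-01", "2002-12-01", freq="MS")
    rng = np.random.default_rng(0)
    y = pd.Series(1000 + 2 * np.arange(len(idx))
                  + 80 * np.sin(2 * np.pi * np.arange(len(idx)) / 12)
                  + 30 * rng.standard_normal(len(idx)), index=idx)

    train, test = y[:-12], y[-12:]

    # Holt-Winters exponential smoothing (additive trend and seasonality).
    hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                              seasonal_periods=12).fit()
    hw_forecast = hw.forecast(12)

    # Seasonal ARIMA(0,1,1)(0,1,1)12, the model selected in the study.
    sarima = SARIMAX(train, order=(0, 1, 1),
                     seasonal_order=(0, 1, 1, 12)).fit(disp=False)
    sarima_forecast = sarima.forecast(12)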
Rigid shape matching by segmentation averaging.
Wang, Hongzhi; Oliensis, John
2010-04-01
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.
Alternative Attitude Commanding and Control for Precise Spacecraft Landing
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2004-01-01
A report proposes an alternative method of control for precision landing on a remote planet. In the traditional method, the attitude of a spacecraft is required to track a commanded translational acceleration vector, which is generated at each time step by solving a two-point boundary value problem. No requirement of continuity is imposed on the acceleration. The translational acceleration does not necessarily vary smoothly. Tracking of a non-smooth acceleration causes the vehicle attitude to exhibit undesirable transients and poor pointing stability behavior. In the alternative method, the two-point boundary value problem is not solved at each time step. A smooth reference position profile is computed. The profile is recomputed only when the control errors get sufficiently large. The nominal attitude is still required to track the smooth reference acceleration command. A steering logic is proposed that controls the position and velocity errors about the reference profile by perturbing the attitude slightly about the nominal attitude. The overall pointing behavior is therefore smooth, greatly reducing the degree of pointing instability.
NASA Astrophysics Data System (ADS)
Hasegawa, Manabu; Hiramatsu, Kotaro
2013-10-01
The effectiveness of the Metropolis algorithm (MA) (constant-temperature simulated annealing) in optimization by the method of search-space smoothing (SSS) (potential smoothing) is studied on two types of random traveling salesman problems. The optimization mechanism of this hybrid approach (MASSS) is investigated by analyzing the exploration dynamics observed in the rugged landscape of the cost function (energy surface). The results show that the MA can be successfully utilized as a local search algorithm in the SSS approach. It is also clarified that the optimization characteristics of these two constituent methods are improved in a mutually beneficial manner in the MASSS run. Specifically, the relaxation dynamics generated by employing the MA work effectively even in a smoothed landscape and more advantage is taken of the guiding function proposed in the idea of SSS; this mechanism operates in an adaptive manner in the de-smoothing process and therefore the MASSS method maintains its optimization function over a wider temperature range than the MA.
Multigrid methods for isogeometric discretization
Gahalaut, K.P.S.; Kraus, J.K.; Tomar, S.K.
2013-01-01
We present (geometric) multigrid methods for isogeometric discretization of scalar second order elliptic problems. The smoothing property of the relaxation method and the approximation property of the intergrid transfer operators are analyzed. These properties, when used in the framework of classical multigrid theory, imply uniform convergence of two-grid and multigrid methods. Supporting numerical results are provided for the smoothing property, the approximation property, the convergence factor and iteration counts for V-, W- and F-cycles, and the linear dependence of V-cycle convergence on the smoothing steps. For two dimensions, numerical results include problems with variable coefficients, simple multi-patch geometry, a quarter annulus, and the dependence of convergence behavior on refinement levels ℓ, whereas for three dimensions, only the constant coefficient problem in a unit cube is considered. The numerical results are complete up to polynomial order p=4, and for C^0 and C^(p-1) smoothness. PMID:24511168
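The analysis above concerns isogeometric discretizations; as a generic illustration of the smoothing property of a relaxation method (not the paper's smoother), damped Jacobi iteration on a 1-D Poisson problem quickly damps a high-frequency error component:

    import numpy as np

    def damped_jacobi(A, b, x, omega=2.0 / 3.0, sweeps=5):
        """A few damped Jacobi sweeps: x <- x + omega * D^{-1} (b - A x)."""
        d = np.diag(A)
        for _ in range(sweeps):
            x = x + omega * (b - A @ x) / d
        return x

    # 1-D Poisson matrix (Dirichlet boundaries) on n interior points.
    n = 63
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.zeros(n)

    # Start from a highly oscillatory error; the exact solution is zero.
    x0 = np.sin(np.pi * np.arange(1, n + 1) * 48 / (n + 1))
    x5 = damped_jacobi(A, b, x0.copy())
    print(np.max(np.abs(x0)), np.max(np.abs(x5)))   # high-frequency error shrinks fast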
[Application of exponential smoothing method in prediction and warning of epidemic mumps].
Shi, Yun-ping; Ma, Jia-qi
2010-06-01
To analyze daily data on epidemic mumps in a province from 2004 to 2008 and to set up an exponential smoothing model for prediction. Epidemic mumps in 2008 was predicted and warned against by calculating a 7-day moving summation of daily reported mumps cases during 2005-2008, removing the effect of weekends, and applying exponential smoothing to the data from 2005 to 2007. The performance of Holt-Winters exponential smoothing was good: the warning sensitivity was 76.92%, the specificity 83.33%, and the timeliness rate 80%. It is practicable to use the exponential smoothing method to warn against epidemic mumps.
Smoothness of In vivo Spectral Baseline Determined by Mean Squared Error
Zhang, Yan; Shen, Jun
2013-01-01
Purpose A nonparametric smooth line is usually added to the spectral model to account for background signals in in vivo magnetic resonance spectroscopy (MRS). The assumed smoothness of the baseline significantly influences quantitative spectral fitting. In this paper, a method is proposed to minimize baseline influences on estimated spectral parameters. Methods The non-parametric baseline function with a given smoothness was treated as a function of spectral parameters. Its uncertainty was measured by root-mean-squared error (RMSE). The proposed method was demonstrated with a simulated spectrum and with in vivo spectra of both short echo time (TE) and averaged echo times. The estimated in vivo baselines were compared with the metabolite-nulled spectra and the LCModel-estimated baselines. The accuracies of the estimated baseline and metabolite concentrations were further verified by cross-validation. Results An optimal smoothness condition was found that led to the minimal baseline RMSE. In this condition, the best fit was balanced against minimal baseline influences on metabolite concentration estimates. Conclusion Baseline RMSE can be used to indicate estimated baseline uncertainties and serve as the criterion for determining the baseline smoothness of in vivo MRS. PMID:24259436
Predicting Academic Library Circulations: A Forecasting Methods Competition.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Forys, John W., Jr.
Based on sample data representing five years of monthly circulation totals from 50 academic libraries in Illinois, Iowa, Michigan, Minnesota, Missouri, and Ohio, a study was conducted to determine the most efficient smoothing forecasting methods for academic libraries. Smoothing forecasting methods were chosen because they have been characterized…
A method for smoothing segmented lung boundary in chest CT images
NASA Astrophysics Data System (ADS)
Yim, Yeny; Hong, Helen
2007-03-01
To segment low-density lung regions in chest CT images, most methods use the difference in gray-level values of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan line search to track the points on the lung contour and efficiently find rapidly changing curvature. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method has been evaluated in terms of visual inspection, accuracy, and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.
Introduction to multigrid methods
NASA Technical Reports Server (NTRS)
Wesseling, P.
1995-01-01
These notes were written for an introductory course on the application of multigrid methods to elliptic and hyperbolic partial differential equations for engineers, physicists and applied mathematicians. The use of more advanced mathematical tools, such as functional analysis, is avoided. The course is intended to be accessible to a wide audience of users of computational methods. We restrict ourselves to finite volume and finite difference discretization. The basic principles are given. Smoothing methods and Fourier smoothing analysis are reviewed. The fundamental multigrid algorithm is studied. The smoothing and coarse grid approximation properties are discussed. Multigrid schedules and structured programming of multigrid algorithms are treated. Robustness and efficiency are considered.
Zurales, Katie; DeMott, Trina K.; Kim, Hogene; Allet, Lara; Ashton-Miller, James A.; Richardson, James K.
2015-01-01
Objective To determine which gait measures on smooth and uneven surfaces predict falls and fall-related injuries in older subjects with diabetic peripheral neuropathy (DPN). Design Twenty-seven subjects (12 women) with a spectrum of peripheral nerve function ranging from normal to moderately severe DPN walked on smooth and uneven surfaces, with gait parameters determined by optoelectronic kinematic techniques. Falls and injuries were then determined prospectively over the following year. Results Seventeen subjects (62.9%) fell and 12 (44.4%) sustained a fall-related injury. As compared to non-fallers, the subject group reporting any fall, as well as the subject group reporting fall-related injury, demonstrated decreased speed, greater step width (SW), shorter step length (SL) and greater step-width-to-step-length ratio (SW:SL) on both surfaces. Uneven surface SW:SL was the strongest predictor of falls (pseudo-R2 = 0.65; p = .012) and remained so with inclusion of other relevant variables into the model. Post-hoc analysis comparing injured with non-injured fallers showed no difference in any gait parameter. Conclusion SW:SL on an uneven surface is the strongest predictor of falls and injuries in older subjects with a spectrum of peripheral neurologic function. Given the relationship between SW:SL and efficiency, older neuropathic patients at increased fall risk appear to sacrifice efficiency for stability on uneven surfaces. PMID:26053187
ERIC Educational Resources Information Center
Moses, Tim; Liu, Jinghua
2011-01-01
In equating research and practice, equating functions that are smooth are typically assumed to be more accurate than equating functions with irregularities. This assumption presumes that population test score distributions are relatively smooth. In this study, two examples were used to reconsider common beliefs about smoothing and equating. The…
A method for reducing sampling jitter in digital control systems
NASA Technical Reports Server (NTRS)
Anderson, T. O.; Hurd, W. J.
1969-01-01
A digital phase-locked loop system is designed by smoothing the proportional control with a low-pass filter. This method does not significantly affect the loop dynamics when the smoothing filter bandwidth is wide compared to the loop bandwidth.
Bentzon, Jacob F; Falk, Erling
2010-01-01
Smooth muscle cells play a critical role in the development of atherosclerosis and its clinical complications. They were long thought to derive entirely from preexisting smooth muscle cells in the arterial wall, but this understanding has been challenged by the claim that circulating bone marrow-derived smooth muscle progenitor cells are an important source of plaque smooth muscle cells in human and experimental atherosclerosis. This theory is today accepted by many cardiovascular researchers and authors of contemporary review articles. Recently, however, we and others have refuted the existence of bone marrow-derived smooth muscle cells in animal models of atherosclerosis and other arterial diseases based on new experiments with high-resolution microscopy and improved techniques for smooth muscle cell identification and tracking. These studies have also pointed to a number of methodological deficiencies in some of the seminal papers in the field. For those unaccustomed with the methods used in this research area, it must be difficult to decide what to believe and why to do so. In this review, we summarize current knowledge about the origin of smooth muscle cells in atherosclerosis and direct the reader's attention to the methodological challenges that have contributed to the confusion in the field. 2009 Elsevier Inc. All rights reserved.
A method of smoothed particle hydrodynamics using spheroidal kernels
NASA Technical Reports Server (NTRS)
Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.
1995-01-01
We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods
Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José
2015-01-01
The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared. PMID:26413547
Recognition of Similar Shaped Handwritten Marathi Characters Using Artificial Neural Network
NASA Astrophysics Data System (ADS)
Jane, Archana P.; Pund, Mukesh A.
2012-03-01
The growing need for handwritten Marathi character recognition in Indian offices such as passport and railway offices has made it a vital area of research. Similarly shaped characters are more prone to misclassification. In this paper a novel method is provided to recognize handwritten Marathi characters based on feature extraction and an adaptive smoothing technique. Feature selection methods avoid unnecessary patterns in an image, whereas the adaptive smoothing technique forms smooth shapes of characters. The combination of both these approaches leads to better results. Previous studies show that no single technique achieves 100% accuracy in handwritten character recognition. This approach of combining adaptive smoothing and feature extraction gives better results (approximately 75-100) and expected outcomes.
A Comparison of Methods for Nonparametric Estimation of Item Characteristic Curves for Binary Items
ERIC Educational Resources Information Center
Lee, Young-Sun
2007-01-01
This study compares the performance of three nonparametric item characteristic curve (ICC) estimation procedures: isotonic regression, smoothed isotonic regression, and kernel smoothing. Smoothed isotonic regression, employed along with an appropriate kernel function, provides better estimates and also satisfies the assumption of strict…
Kernel PLS Estimation of Single-trial Event-related Potentials
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.
2004-01-01
Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in the visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative for the estimation of single-trial ERPs and the improvement of ERP averages.
Beta-function B-spline smoothing on triangulations
NASA Astrophysics Data System (ADS)
Dechevsky, Lubomir T.; Zanaty, Peter
2013-03-01
In this work we investigate a novel family of C^k-smooth rational basis functions on triangulations for fitting, smoothing, and denoising geometric data. The introduced basis function is closely related to a recently introduced general method utilizing generalized expo-rational B-splines, which provides C^k-smooth convex resolutions of unity on very general disjoint partitions and overlapping covers of multidimensional domains with complex geometry. One of the major advantages of this new triangular construction is its locality with respect to the star-1 neighborhood of the vertex on which the basis provides Hermite interpolation. This locality of the basis functions can in turn be utilized in adaptive methods, where, for instance, a local refinement of the underlying triangular mesh affects only the refined domain, whereas in other methods one needs to investigate what changes occur outside of the refined domain. Both the triangular and the general smooth constructions have the potential to become a new versatile tool of Computer Aided Geometric Design (CAGD), Finite and Boundary Element Analysis (FEA/BEA), and Isogeometric Analysis (IGA).
NASA Astrophysics Data System (ADS)
Myszkowski, Karol; Tawara, Takehiro; Seidel, Hans-Peter
2002-06-01
In this paper, we consider applications of perception-based video quality metrics to improve the performance of global lighting computations for dynamic environments. For this purpose we extend the Visible Difference Predictor (VDP) developed by Daly to handle computer animations. We incorporate into the VDP the spatio-velocity CSF model developed by Kelly. The CSF model requires data on the velocity of moving patterns across the image plane. We use the 3D image warping technique to compensate for the camera motion, and we conservatively assume that the motion of animated objects (usually strong attractors of the visual attention) is fully compensated by the smooth pursuit eye motion. Our global illumination solution is based on stochastic photon tracing and takes advantage of temporal coherence of lighting distribution, by processing photons both in the spatial and temporal domains. The VDP is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. As a result a perceptually-consistent quality across all animation frames is obtained.
Federico, Alejandro; Kaufmann, Guillermo H
2003-12-10
We evaluate the use of a smoothed space-frequency distribution (SSFD) to retrieve optical phase maps in digital speckle pattern interferometry (DSPI). The performance of this method is tested by use of computer-simulated DSPI fringes. Phase gradients are found along a pixel path from a single DSPI image, and the phase map is finally determined by integration. This technique does not need the application of a phase unwrapping algorithm or the introduction of carrier fringes in the interferometer. It is shown that a Wigner-Ville distribution with a smoothing Gaussian kernel gives more accurate results than methods based on the continuous wavelet transform. We also discuss the influence of filtering on the smoothing of the DSPI fringes and some additional limitations that emerge when this technique is applied. The performance of the SSFD method for processing experimental data is then illustrated.
Point Set Denoising Using Bootstrap-Based Radial Basis Function.
Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad
2016-01-01
This paper examines the application of bootstrap test error estimation for radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue in point set models generated by 3D scanning devices, and hence point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest-neighbour search and then projects the point set onto the approximated thin-plate spline surface. The denoising process is thus achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
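The bootstrap selection of the smoothing parameter and the k-nearest-neighbour projection are not reproduced here; the sketch only shows a thin-plate spline RBF fit with a smoothing parameter using scipy, on hypothetical scattered data:

    import numpy as np
    from scipy.interpolate import Rbf

    # Hypothetical noisy height field sampled at scattered points.
    rng = np.random.default_rng(2)
    x, y = rng.uniform(-1, 1, 300), rng.uniform(-1, 1, 300)
    z = np.exp(-(x ** 2 + y ** 2)) + 0.05 * rng.standard_normal(300)

    # Thin-plate spline RBF; smooth > 0 relaxes exact interpolation and
    # acts as the smoothing (denoising) parameter.
    tps = Rbf(x, y, z, function="thin_plate", smooth=1e-2)
    z_smooth = tps(x, y)      # denoised values at the original points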
A new axial smoothing method based on elastic mapping
NASA Astrophysics Data System (ADS)
Yang, J.; Huang, S. C.; Lin, K. P.; Czernin, J.; Wolfenden, P.; Dahlbom, M.; Hoh, C. K.; Phelps, M. E.
1996-12-01
New positron emission tomography (PET) scanners have higher axial and in-plane spatial resolutions, but at the expense of reduced per-plane sensitivity, which prevents the higher resolution from being fully realized. Normally, Gaussian-weighted interplane axial smoothing is used to reduce noise. In this study, the authors developed a new algorithm that first elastically maps adjacent planes; the mapped images are then smoothed axially to reduce the image noise level. Compared to those obtained by the conventional axial smoothing method, the images produced by the new method have an improved signal-to-noise ratio. To quantify the signal-to-noise improvement, both simulated and real cardiac PET images were studied. Hanning reconstruction filters with cutoff frequencies of 0.5, 0.7, and 1.0 times the Nyquist frequency, as well as a ramp filter, were tested on simulated images. Effective in-plane resolution was measured by the effective global Gaussian resolution (EGGR), and noise reduction was evaluated by the cross-correlation coefficient. Results showed that the new method was robust to various noise levels and indicated larger noise reduction, or better image feature preservation (i.e., smaller EGGR), than the conventional method.
Generalized Scalar-on-Image Regression Models via Total Variation.
Wang, Xiao; Zhu, Hongtu
2017-01-01
The use of imaging markers to predict clinical outcomes can have a great impact on public health. The aim of this paper is to develop a class of generalized scalar-on-image regression models via total variation (GSIRM-TV), in the sense of generalized linear models, for a scalar response and an imaging predictor in the presence of scalar covariates. A key novelty of GSIRM-TV is the assumption that the slope function (or image) of GSIRM-TV belongs to the space of bounded total variation, in order to explicitly account for the piecewise smooth nature of most imaging data. We develop an efficient penalized total variation optimization to estimate the unknown slope function and other parameters. We also establish nonasymptotic error bounds on the excess risk. These bounds are explicitly specified in terms of sample size, image size, and image smoothness. Our simulations demonstrate a superior performance of GSIRM-TV against many existing approaches. We apply GSIRM-TV to the analysis of hippocampus data obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset.
Tajima, Shogo; Waki, Michihiko; Fukuyama, Masashi
2016-12-01
Although primary leiomyosarcoma of the kidney is extremely rare, it is the most common sarcoma of the kidney. Leiomyosarcoma with a large pleomorphic component is designated as pleomorphic leiomyosarcoma. The pleomorphic component is usually similar to undifferentiated high-grade pleomorphic sarcoma, although it variably expresses smooth muscle markers on immunohistochemistry. In the few reported cases of pleomorphic leiomyosarcoma of the kidney, cases with the pleomorphic component showing distinct nodularity similar to dedifferentiated leiomyosarcoma have not been described, to the best of our knowledge. Herein, we present a case of a 49-year-old woman with pleomorphic leiomyosarcoma in the kidney showing distinct nodularity of smooth muscle marker-expressing pleomorphic cells within a background of classic leiomyosarcoma. Along with the classification as a pleomorphic leiomyosarcoma, suggesting aggressive clinical behavior, the renal origin itself might also be a predictor of poor prognosis, as shown in a previous study. This case also involved concomitant distant metastases, already present during the initial detection of the renal tumor.
Choi, Hyuck Jae; Lee, Joo-Hyuk; Seo, Sang-Soo; Lee, Sun; Kim, Seok Ki; Kim, Joo-Young; Lee, Jong Seok; Park, Sang-Yoon; Kim, Young Hoon
2005-01-01
The computed tomography (CT) findings of ovarian metastases from colon cancer were evaluated and compared with those of primary malignant ovarian tumors. Sixteen patients with 21 masses from colon cancer and 20 patients with 31 primary malignant ovarian tumors were included in this study. The CT findings (laterality, size, margin, shape, mass characteristics, strong enhancement of the cyst wall, enhancement of the solid portion, amount of ascites, peritoneal seeding, lymph node enlargement, and metastasis) and the ages of the patients in both groups were compared. Univariate analysis, the Pearson chi-square test, and the independent-samples t test were used to distinguish them. A smooth margin of the tumor (odds ratio = 24.3, 95% confidence interval: 2.9-204.2) and a cystic nature of the mass (Pearson chi-square = 12.96, P = 0.005) were strong predictors of ovarian metastasis from colon cancer. Ovarian metastases from colon cancer show a smooth margin and a more cystic nature on CT compared with primary malignant ovarian tumors.
Chemical method for producing smooth surfaces on silicon wafers
Yu, Conrad
2003-01-01
An improved method for producing optically smooth surfaces in silicon wafers during wet chemical etching involves a pre-treatment rinse of the wafers before etching and a post-etching rinse. The pre-treatment with an organic solvent provides a well-wetted surface that ensures uniform mass transfer during etching, which results in optically smooth surfaces. The post-etching treatment with an acetic acid solution stops the etching instantly, preventing any uneven etching that leads to surface roughness. This method can be used to etch silicon surfaces to a depth of 200 µm or more, while the finished surfaces have a surface roughness of only 15-50 Å (RMS).
NASA Technical Reports Server (NTRS)
Lambert, Winifred; Wheeler, Mark
2007-01-01
This report describes the work done by the Applied Meteorology Unit (AMU) to update the lightning probability forecast equations developed in Phase I. In the time since the Phase I equations were developed, new ideas regarding certain predictors were formulated and a desire to make the tool more automated was expressed by 45 WS forecasters. Five modifications were made to the data: 1) increased the period of record from 15 to 17 years, 2) modified the valid area to match the lightning warning areas, 3) added the 1000 UTC CCAFS sounding to the other soundings in determining the flow regime, 4) used a different smoothing function for the daily climatology, and 5) determined the optimal relative humidity (RH) layer to use as a predictor. The new equations outperformed the Phase I equations in several tests, and improved the skill of the forecast over the Phase I equations by 8%. A graphical user interface (GUI) was created in the Meteorological Interactive Data Display System (MIDDS) that gathers the predictor values for the equations automatically. The GUI was transitioned to operations in May 2007 for the 2007 warm season.
Multiscale measurement error models for aggregated small area health data.
Aregay, Mehreteab; Lawson, Andrew B; Faes, Christel; Kirby, Russell S; Carroll, Rachel; Watjou, Kevin
2016-08-01
Spatial data are often aggregated from a finer (smaller) to a coarser (larger) geographical level. The process of data aggregation induces a scaling effect which smoothes the variation in the data. To address the scaling problem, multiscale models that link the convolution models at different scale levels via a shared random effect have been proposed. One of the main goals in aggregated health data is to investigate the relationship between predictors and an outcome at different geographical levels. In this paper, we extend multiscale models to examine whether a predictor effect at a finer level holds true at a coarser level. To adjust for predictor uncertainty due to aggregation, we applied measurement error models in the framework of the multiscale approach. To assess the benefit of using multiscale measurement error models, we compare the performance of multiscale models with and without measurement error in both real and simulated data. We found that ignoring the measurement error in multiscale models underestimates the regression coefficient, while it overestimates the variance of the spatially structured random effect. On the other hand, accounting for the measurement error in multiscale models provides a better model fit and unbiased parameter estimates. © The Author(s) 2016.
A long-term earthquake rate model for the central and eastern United States from smoothed seismicity
Moschetti, Morgan P.
2015-01-01
I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
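As a rough illustration of the adaptive smoothing idea described above (not the author's implementation), the sketch below assigns each epicenter a Gaussian kernel whose bandwidth is the distance to its n-th nearest neighbour and sums the kernels on a rate grid; the function and parameter names are invented for the example.

```python
import numpy as np

def adaptive_smoothed_rate(quake_lon, quake_lat, grid_lon, grid_lat,
                           n_neighbor=2, min_dist_km=5.0, years=50.0):
    """Adaptive Gaussian-kernel seismicity rate (events per unit area per year).

    Each earthquake's kernel bandwidth is the epicentral distance to its
    n-th nearest neighbour, floored at min_dist_km; this is a simplification
    of the adaptive smoothing described in the abstract.
    """
    deg2km = 111.2  # rough conversion; ignores convergence of meridians
    qx, qy = quake_lon * deg2km, quake_lat * deg2km
    gx, gy = grid_lon * deg2km, grid_lat * deg2km

    # pairwise distances between events, used to set adaptive bandwidths
    d_events = np.hypot(qx[:, None] - qx[None, :], qy[:, None] - qy[None, :])
    d_events.sort(axis=1)                               # column 0 is the self-distance
    bandwidth = np.maximum(d_events[:, n_neighbor], min_dist_km)

    # distance from every grid node to every event
    d_grid = np.hypot(gx[:, None] - qx[None, :], gy[:, None] - qy[None, :])
    kernel = np.exp(-0.5 * (d_grid / bandwidth) ** 2)
    kernel /= 2.0 * np.pi * bandwidth ** 2              # normalise each 2-D Gaussian
    return kernel.sum(axis=1) / years
```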
Eliseyev, Andrey; Aksenova, Tetiana
2016-01-01
In the current paper the decoding algorithms for motor-related BCI systems for continuous upper limb trajectory prediction are considered. Two methods for the smooth prediction, namely Sobolev and Polynomial Penalized Multi-Way Partial Least Squares (PLS) regressions, are proposed. The methods are compared to the Multi-Way Partial Least Squares and Kalman Filter approaches. The comparison demonstrated that the proposed methods combined the prediction accuracy of the algorithms of the PLS family and trajectory smoothness of the Kalman Filter. In addition, the prediction delay is significantly lower for the proposed algorithms than for the Kalman Filter approach. The proposed methods could be applied in a wide range of applications beyond neuroscience. PMID:27196417
Testing local anisotropy using the method of smoothed residuals I — methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appleby, Stephen; Shafieloo, Arman, E-mail: stephen.appleby@apctp.org, E-mail: arman@apctp.org
2014-03-01
We discuss some details regarding the method of smoothed residuals, which has recently been used to search for anisotropic signals in low-redshift distance measurements (Supernovae). In this short note we focus on some details regarding the implementation of the method, particularly the issue of effectively detecting signals in data that are inhomogeneously distributed on the sky. Using simulated data, we argue that the original method proposed in Colin et al. [1] will not detect spurious signals due to incomplete sky coverage, and that introducing additional Gaussian weighting to the statistic as in [2] can hinder its ability to detect a signal. Issues related to the width of the Gaussian smoothing are also discussed.
Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon
2017-12-01
Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference that suffers serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves the computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.
Tang, Cuong Q; Humphreys, Aelys M; Fontaneto, Diego; Barraclough, Timothy G; Paradis, Emmanuel
2014-01-01
Coalescent-based species delimitation methods combine population genetic and phylogenetic theory to provide an objective means for delineating evolutionarily significant units of diversity. The generalised mixed Yule coalescent (GMYC) and the Poisson tree process (PTP) are methods that use ultrametric (GMYC or PTP) or non-ultrametric (PTP) gene trees as input, intended for use mostly with single-locus data such as DNA barcodes. Here, we assess how robust the GMYC and PTP are to different phylogenetic reconstruction and branch smoothing methods. We reconstruct over 400 ultrametric trees using up to 30 different combinations of phylogenetic and smoothing methods and perform over 2000 separate species delimitation analyses across 16 empirical data sets. We then assess how variable diversity estimates are, in terms of richness and identity, with respect to species delimitation, phylogenetic and smoothing methods. The PTP method generally generates diversity estimates that are more robust to different phylogenetic methods. The GMYC is more sensitive, but provides consistent estimates for BEAST trees. The lower consistency of GMYC estimates is likely a result of differences among gene trees introduced by the smoothing step. Unresolved nodes (real anomalies or methodological artefacts) affect both GMYC and PTP estimates, but have a greater effect on GMYC estimates. Branch smoothing is a difficult step and perhaps an underappreciated source of bias that may be widespread among studies of diversity and diversification. Nevertheless, careful choice of phylogenetic method does produce equivalent PTP and GMYC diversity estimates. We recommend simultaneous use of the PTP model with any model-based gene tree (e.g. RAxML) and GMYC approaches with BEAST trees for obtaining species hypotheses. PMID:25821577
Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information
NASA Astrophysics Data System (ADS)
Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.
2018-04-01
The aims of this research are to model hotspots and to forecast 2017 hotspots in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performed better than Holt's exponential smoothing. The hotspot models obtained with the Box-Jenkins method were the Autoregressive Integrated Moving Average models ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results of all methods used in this research on the basis of Root Mean Squared Error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the smallest RMSE. Thus the Loess decomposition model was used to forecast the number of hotspots. The forecasts indicate that hotspots tend to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remain stationary in East Kutai.
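The abstract compares exponential smoothing variants against ARIMA by RMSE. A hedged sketch of that kind of comparison is shown below, assuming the statsmodels library (parameter names follow recent releases) and a hypothetical monthly hotspot series; it is not the authors' original workflow or data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.arima.model import ARIMA

def rmse(actual, fitted):
    return float(np.sqrt(np.mean((np.asarray(actual) - np.asarray(fitted)) ** 2)))

# hypothetical monthly hotspot-count series for one regency
hotspots = pd.Series(np.random.poisson(40, 60),
                     index=pd.date_range("2012-01", periods=60, freq="MS"))

candidates = {
    "Holt (additive trend)": ExponentialSmoothing(hotspots, trend="add").fit(),
    "Holt damped trend":     ExponentialSmoothing(hotspots, trend="add",
                                                  damped_trend=True).fit(),
    "Holt-Winters additive": ExponentialSmoothing(hotspots, trend="add",
                                                  seasonal="add",
                                                  seasonal_periods=12).fit(),
    "ARIMA(1,1,0)":          ARIMA(hotspots, order=(1, 1, 0)).fit(),
}

# rank the fitted models by in-sample RMSE and forecast 12 months ahead
scores = {name: rmse(hotspots, fit.fittedvalues) for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(scores)
print("lowest in-sample RMSE:", best)
print(candidates[best].forecast(12))
```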
Automated, on-board terrain analysis for precision landings
NASA Technical Reports Server (NTRS)
Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.
2006-01-01
Advances in space robotics technology hinge to a large extent upon the development and deployment of sophisticated new vision-based methods for automated in-space mission operations and scientific survey. To this end, we have developed a new concept for automated terrain analysis that is based upon a generic image enhancement platform: multi-scale retinex (MSR) and visual servo (VS) processing. This pre-conditioning with the MSR and the VS produces a "canonical" visual representation that is largely independent of lighting variations and exposure errors. Enhanced imagery is then processed with a biologically inspired two-channel edge detection process, followed by a smoothness-based criterion for image segmentation. Landing sites can be automatically determined by examining the results of the smoothness-based segmentation which shows those areas in the image that surpass a minimum degree of smoothness. Though the MSR has proven to be a very strong enhancement engine, the other elements of the approach (the VS, terrain map generation, and smoothness-based segmentation) are in early stages of development. Experimental results on data from the Mars Global Surveyor show that the imagery can be processed to automatically obtain smooth landing sites. In this paper, we describe the method used to obtain these landing sites, and also examine the smoothness criteria in terms of the imager and scene characteristics. Several examples of applying this method to simulated and real imagery are shown.
Joint Smoothed l₀-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar.
Liu, Jing; Zhou, Weidong; Juwono, Filbert H
2017-05-08
Direction-of-arrival (DOA) estimation is usually confronted with a multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed l0-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To eliminate the white or colored Gaussian noises, the new method first obtains a low-complexity high-order cumulants based data matrix. Then, the proposed algorithm designs a joint smoothed function tailored for the MMV case, based on which the joint smoothed l0-norm sparse representation framework is constructed. Finally, for the MMV-based joint smoothed function, the corresponding gradient-based sparse signal reconstruction is designed, thus the DOA estimation can be achieved. The proposed method is a fast sparse representation algorithm, which can solve the MMV problem and perform well for both white and colored Gaussian noises. The proposed joint algorithm is about two orders of magnitude faster than the l1-norm minimization based methods, such as l1-SVD (singular value decomposition), RV (real-valued) l1-SVD and RV l1-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.
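The joint MMV algorithm itself is not reproduced here, but the underlying smoothed l0 idea can be illustrated with the classical single-vector SL0 recipe: replace the l0 norm by a sum of Gaussians, maximise the surrogate by projected gradient steps, and gradually shrink sigma. The sketch below follows that standard recipe with invented parameter names; it only shows the smoothing trick, not the paper's high-order-cumulant, multi-vector extension.

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decay=0.7, inner_iters=3, mu=2.0):
    """Basic smoothed-l0 sparse recovery for a single measurement vector.

    The l0 "norm" is replaced by the smooth surrogate
        F_sigma(x) = sum_i exp(-x_i**2 / (2*sigma**2)),
    maximised on the affine set {x : A x = y} by projected gradient ascent
    while sigma decreases.
    """
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                      # minimum-norm feasible starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = x * np.exp(-x ** 2 / (2.0 * sigma ** 2))
            x = x - mu * delta                       # gradient step on the surrogate
            x = x - A_pinv @ (A @ x - y)             # project back onto A x = y
        sigma *= sigma_decay
    return x

# tiny demonstration on a random underdetermined system
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.8]
print(np.round(sl0(A, A @ x_true), 2)[[3, 17, 42]])
```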
A Novel Method for Modeling Neumann and Robin Boundary Conditions in Smoothed Particle Hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, Emily M.; Tartakovsky, Alexandre M.; Amon, Cristina
2010-08-26
In this paper we present an improved method for handling Neumann or Robin boundary conditions in smoothed particle hydrodynamics. The Neumann and Robin boundary conditions are common to many physical problems (such as heat/mass transfer), and can prove challenging to model in volumetric modeling techniques such as smoothed particle hydrodynamics (SPH). A new SPH method for diffusion type equations subject to Neumann or Robin boundary conditions is proposed. The new method is based on the continuum surface force model [1] and allows an efficient implementation of the Neumann and Robin boundary conditions in the SPH method for geometrically complex boundaries. The paper discusses the details of the method and the criteria needed to apply the model. The model is used to simulate diffusion and surface reactions and its accuracy is demonstrated through test cases for boundary conditions describing different surface reactions.
Single image super-resolution based on approximated Heaviside functions and iterative refinement
Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian
2018-01-01
One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iteration including l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298
Colloidal nanocrystals and method of making
Kahen, Keith
2015-10-06
A tight confinement nanocrystal comprises a homogeneous center region having a first composition and a smoothly varying region having a second composition wherein a confining potential barrier monotonically increases and then monotonically decreases as the smoothly varying region extends from the surface of the homogeneous center region to an outer surface of the nanocrystal. A method of producing the nanocrystal comprises forming a first solution by combining a solvent and at most two nanocrystal precursors; heating the first solution to a nucleation temperature; adding to the first solution, a second solution having a solvent, at least one additional and different precursor to form the homogeneous center region and at most an initial portion of the smoothly varying region; and lowering the solution temperature to a growth temperature to complete growth of the smoothly varying region.
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-05-13
In this paper, a novel nonlinear framework of smoothing method, non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted in order to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed-delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on the real-world data, which is collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS has significant improvement on the vehicle state accuracy and outperforms the existing filtering and smoothing methods.
Multiple-Primitives Hierarchical Classification of Airborne Laser Scanning Data in Urban Areas
NASA Astrophysics Data System (ADS)
Ni, H.; Lin, X. G.; Zhang, J. X.
2017-09-01
A hierarchical classification method for Airborne Laser Scanning (ALS) data of urban areas is proposed in this paper. This method is composed of three stages among which three types of primitives are utilized, i.e., smooth surface, rough surface, and individual point. In the first stage, the input ALS data is divided into smooth surfaces and rough surfaces by employing a step-wise point cloud segmentation method. In the second stage, classification based on smooth surfaces and rough surfaces is performed. Points in the smooth surfaces are first classified into ground and buildings based on semantic rules. Next, features of rough surfaces are extracted. Then, points in rough surfaces are classified into vegetation and vehicles based on the derived features and Random Forests (RF). In the third stage, point-based features are extracted for the ground points, and then, an individual point classification procedure is performed to classify the ground points into bare land, artificial ground and greenbelt. Moreover, the shortcomings of the existing studies are analyzed, and experiments show that the proposed method overcomes these shortcomings and handles more types of objects.
Visual enhancement of unmixed multispectral imagery using adaptive smoothing
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2004-01-01
Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process which results in a gray scale image. This paper discusses modifications to the AS method for application to multi-band data which results in a color segmented image. The process was used to visually enhance the three most distinct abundance fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which has importance to subsequent operations involving data classification.
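Since the AS method is described as an implementation of anisotropic diffusion, a minimal single-band Perona-Malik sketch may help fix ideas. It is not the authors' multi-band modification, and the conductance function and constants are assumptions.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, lam=0.2):
    """Minimal Perona-Malik anisotropic diffusion for one grey-scale band.

    Uniform regions are smoothed while strong edges (large gradients)
    diffuse slowly; kappa sets the edge threshold (image assumed scaled
    to [0, 1]) and lam the step size (stable for lam <= 0.25).
    """
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conductance: ~1 in flat regions
    for _ in range(n_iter):
        # one-sided differences to the four neighbours (periodic at borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```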
A Bayesian inversion for slip distribution of 1 Apr 2007 Mw8.1 Solomon Islands Earthquake
NASA Astrophysics Data System (ADS)
Chen, T.; Luo, H.
2013-12-01
On 1 Apr 2007 the megathrust Mw8.1 Solomon Islands earthquake occurred in the southeast Pacific along the New Britain subduction zone. 102 vertical displacement measurements over the southeastern end of the rupture zone from two field surveys after this event provide a unique constraint for slip distribution inversion. In conventional inversion methods (such as bounded variable least squares) the smoothing parameter that determines the relative weight placed on fitting the data versus smoothing the slip distribution is often subjectively selected at the bend of the trade-off curve. Here a fully probabilistic inversion method [Fukuda, 2008] is applied to estimate the distributed slip and the smoothing parameter objectively. The joint posterior probability density function of distributed slip and the smoothing parameter is formulated under a Bayesian framework and sampled with a Markov chain Monte Carlo method. We estimate the spatial distribution of dip slip associated with the 1 Apr 2007 Solomon Islands earthquake with this method. Early results show a shallower dip angle than a previous study and highly variable dip slip both along-strike and down-dip.
NASA Astrophysics Data System (ADS)
Tanaka, Takuro; Takahashi, Hisashi
In some motor applications, it is very difficult to attach a position sensor to the motor in its housing. One example of such applications is the dental handpiece motor. In those designs, it is necessary to drive the motor with high efficiency at low speed and under variable load conditions without a position sensor. We developed a method to control a motor efficiently and smoothly at low speed without a position sensor. In this paper, a method in which a permanent magnet synchronous motor is controlled smoothly and efficiently by using torque angle control in synchronized operation is presented. Its usefulness is confirmed by experimental results. In conclusion, the proposed sensorless control method achieves highly efficient and smooth operation.
Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS
NASA Astrophysics Data System (ADS)
Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin
2015-08-01
Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors. Quantifying the smoothing effect allows improvements in efficiency when finishing precision optics. A series of experiments in spin motion was performed to study the smoothing effect in correcting mid-spatial-frequency errors. Some of them used the same pitch tool at different spinning speeds, and others used different tools at the same spinning speed. Shu's model was introduced and improved to describe and compare the smoothing efficiency for different spinning speeds and different tools. From the experimental results, the mid-spatial-frequency errors on the initial surface were nearly smoothed out after the spin-motion process, and the number of smoothing passes can be estimated by the model before the process. This method was also applied to smooth an aspherical component with an obvious mid-spatial-frequency error after Magnetorheological Finishing. As a result, a high-precision aspheric optical component was obtained with PV=0.1λ and RMS=0.01λ.
Smooth halos in the cosmic web
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaite, José, E-mail: jose.gaite@upm.es
Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economy and with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in cold dark matter N-body simulations. These entropic measures provide us with a quantitative description of the transition from the small scales portrayed as a distribution of halos to the larger scales portrayed as a cosmic web and, therefore, allow us to assign definite sizes to halos. However, these "smoothness sizes" have no direct relation to the virial radii. Finally, we discuss the influence of N-body discreteness parameters on smoothness.
Computer programs for smoothing and scaling airfoil coordinates
NASA Technical Reports Server (NTRS)
Morgan, H. L., Jr.
1983-01-01
Detailed descriptions are given of the theoretical methods and associated computer codes of a program to smooth and a program to scale arbitrary airfoil coordinates. The smoothing program utilizes both least-squares polynomial and least-squares cubic spline techniques to iteratively smooth the second derivatives of the y-axis airfoil coordinates with respect to a transformed x-axis system which unwraps the airfoil and stretches the nose and trailing-edge regions. The corresponding smooth airfoil coordinates are then determined by solving a tridiagonal matrix of simultaneous cubic-spline equations relating the y-axis coordinates and their corresponding second derivatives. A technique for computing the camber and thickness distribution of the smoothed airfoil is also discussed. The scaling program can then be used to scale the thickness distribution generated by the smoothing program to a specific maximum thickness which is then combined with the camber distribution to obtain the final scaled airfoil contour. Computer listings of the smoothing and scaling programs are included.
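The report's programs are Fortran codes that smooth second derivatives in a transformed coordinate system; as a loose stand-in, a least-squares smoothing cubic spline from SciPy can illustrate the effect of a smoothing tolerance on one airfoil surface. The function below is a sketch under that assumption, not a port of the NASA code.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_airfoil_surface(x, y, smoothing=1e-5):
    """Fit a smoothing cubic spline to one airfoil surface (upper or lower).

    `smoothing` plays the role of a least-squares tolerance: larger values
    give a smoother contour, 0 reproduces the input points exactly.  This is
    only a schematic stand-in for the report's iterative second-derivative
    smoothing in a transformed coordinate system.
    """
    order = np.argsort(x)                      # spline fitting needs increasing x
    spl = UnivariateSpline(x[order], y[order], k=3, s=smoothing)
    return spl(x), spl.derivative(2)(x)        # smoothed ordinates and a curvature proxy

# usage: y_smooth, y2 = smooth_airfoil_surface(x_upper, y_upper)
```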
NASA Astrophysics Data System (ADS)
Davis, J. K.; Vincent, G. P.; Hildreth, M.; Kightlinger, L.; Carlson, C.; Wimberly, M. C.
2017-12-01
South Dakota has the highest annual incidence of human cases of West Nile virus (WNV) in all US states, and human cases can vary wildly among years; predicting WNV risk in advance is a necessary exercise if public health officials are to respond efficiently and effectively to risk. Case counts are associated with environmental factors that affect mosquitoes, avian hosts, and the virus itself. They are also correlated with entomological risk indices obtained by trapping and testing mosquitoes. However, neither weather nor insect data alone provide a sufficient basis to make timely and accurate predictions, and combining them into models of human disease is not necessarily straightforward. Here we present lessons learned in three years of making real-time forecasts of this threat to public health. Various methods of integrating data from NASA's North American Land Data Assimilation System (NLDAS) with mosquito surveillance data were explored in a model comparison framework. We found that a model of human disease summarizing weather data (by polynomial distributed lags with seasonally-varying coefficients) and mosquito data (by a mixed-effects model that smooths out these sparse and highly-variable data) made accurate predictions of risk, and was generalizable enough to be recommended in similar applications. A model based on lagged effects of temperature and humidity provided the most accurate predictions. We also found that model accuracy was improved by allowing coefficients to vary smoothly throughout the season, giving different weights to different predictor variables during different parts of the season.
Surface Wave Tomography with Spatially Varying Smoothing Based on Continuous Model Regionalization
NASA Astrophysics Data System (ADS)
Liu, Chuanming; Yao, Huajian
2017-03-01
Surface wave tomography based on continuous regionalization of model parameters is widely used to invert for 2-D phase or group velocity maps. An inevitable problem is that the distribution of ray paths is far from homogeneous due to the spatially uneven distribution of stations and seismic events, which often affects the spatial resolution of the tomographic model. We present an improved tomographic method with a spatially varying smoothing scheme that is based on the continuous regionalization approach. The smoothness of the inverted model is constrained by the Gaussian a priori model covariance function with spatially varying correlation lengths based on ray path density. In addition, a two-step inversion procedure is used to suppress the effects of data outliers on tomographic models. Both synthetic and real data are used to evaluate this newly developed tomographic algorithm. In the synthetic tests, when the contrived model has different scales of anomalies but with uneven ray path distribution, we compare the performance of our spatially varying smoothing method with the traditional inversion method, and show that the new method is capable of improving the recovery in regions of dense ray sampling. For real data applications, the resulting phase velocity maps of Rayleigh waves in SE Tibet produced using the spatially varying smoothing method show similar features to the results with the traditional method. However, the new results contain more detailed structures and appear to better resolve the amplitude of anomalies. From both synthetic and real data tests we demonstrate that our new approach is useful to achieve spatially varying resolution when used in regions with heterogeneous ray path distribution.
Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat
2008-01-01
Background: Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods: In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results: Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion: Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided. PMID:19036144
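The core quantity plotted in a decision curve is the net benefit at each threshold probability, net benefit = TP/n - FP/n * pt/(1 - pt). A minimal sketch of that calculation is given below; it is not the authors' published software, and the function and variable names are invented.

```python
import numpy as np

def net_benefit(y_true, predicted_prob, thresholds):
    """Net benefit of a prediction model across threshold probabilities.

    y_true is a 0/1 outcome vector and predicted_prob the model's predicted
    risks; the returned values are what is plotted on a decision curve.
    """
    y = np.asarray(y_true)
    p = np.asarray(predicted_prob)
    n = len(y)
    out = []
    for pt in thresholds:
        treat = p >= pt
        tp = np.sum(treat & (y == 1))
        fp = np.sum(treat & (y == 0))
        out.append(tp / n - fp / n * pt / (1.0 - pt))
    return np.array(out)

# compare the model with "treat all" and "treat none" over 1%-50% thresholds:
# thresholds = np.linspace(0.01, 0.50, 50)
# nb_model = net_benefit(y, risk, thresholds)
# nb_all   = net_benefit(y, np.ones_like(risk), thresholds)   # treat everyone
# nb_none  = 0.0                                              # treat no one
```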
Magnesium Counteracts Vascular Calcification: Passive Interference or Active Modulation?
Ter Braake, Anique D; Shanahan, Catherine M; de Baaij, Jeroen H F
2017-08-01
Over the last decade, an increasing number of studies report a close relationship between serum magnesium concentration and cardiovascular disease risk in the general population. In end-stage renal disease, an association was found between serum magnesium and survival. Hypomagnesemia was identified as a strong predictor for cardiovascular disease in these patients. A substantial body of in vitro and in vivo studies has identified a protective role for magnesium in vascular calcification. However, the precise mechanisms and its contribution to cardiovascular protection remain unclear. There are currently 2 leading hypotheses: first, magnesium may bind phosphate and delay calcium phosphate crystal growth in the circulation, thereby passively interfering with calcium phosphate deposition in the vessel wall. Second, magnesium may regulate vascular smooth muscle cell transdifferentiation toward an osteogenic phenotype by active cellular modulation of factors associated with calcification. Here, the data supporting these major hypotheses are reviewed. The literature supports both a passive inorganic phosphate-buffering role reducing hydroxyapatite formation and an active cell-mediated role, directly targeting vascular smooth muscle transdifferentiation. However, current evidence relies on basic experimental designs that are often insufficient to delineate the underlying mechanisms. The field requires more advanced experimental design, including determination of intracellular magnesium concentrations and the identification of the molecular players that regulate magnesium concentrations in vascular smooth muscle cells. © 2017 American Heart Association, Inc.
Method for producing smooth inner surfaces
Cooper, Charles A.
2016-05-17
The invention provides a method for preparing superconducting cavities, the method comprising causing polishing media to tumble by centrifugal barrel polishing within the cavities for a time sufficient to attain a surface smoothness of less than 15 nm root mean square roughness over approximately a 1 mm.sup.2 scan area. The method also provides for a method for preparing superconducting cavities, the method comprising causing polishing media bound to a carrier to tumble within the cavities. The method also provides for a method for preparing superconducting cavities, the method comprising causing polishing media in a slurry to tumble within the cavities.
Hanson, Erik A; Lundervold, Arvid
2013-11-01
Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.
Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.
ERIC Educational Resources Information Center
Butler, Ronald W.
The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
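As background for the filtering and smoothing terminology, a minimal local-level Kalman filter with a Rauch-Tung-Striebel smoothing pass is sketched below; the adaptive, likelihood-based estimation of the noise variances that the report develops is not reproduced, and all names are illustrative.

```python
import numpy as np

def local_level_filter_smoother(y, q, r, m0=0.0, p0=1e6):
    """Kalman filter and RTS smoother for the local-level (random-walk) model.

    State:       x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
    Observation: y_t = x_t     + v_t,  v_t ~ N(0, r)
    Returns the filtered and smoothed state means.
    """
    n = len(y)
    m_f, p_f = np.zeros(n), np.zeros(n)
    m_pred, p_pred = np.zeros(n), np.zeros(n)
    m, p = m0, p0
    for t in range(n):
        m_pred[t], p_pred[t] = m, p + q              # predict
        k = p_pred[t] / (p_pred[t] + r)              # Kalman gain
        m = m_pred[t] + k * (y[t] - m_pred[t])       # update with observation
        p = (1.0 - k) * p_pred[t]
        m_f[t], p_f[t] = m, p
    m_s = m_f.copy()
    for t in range(n - 2, -1, -1):                   # Rauch-Tung-Striebel pass
        g = p_f[t] / p_pred[t + 1]
        m_s[t] = m_f[t] + g * (m_s[t + 1] - m_pred[t + 1])
    return m_f, m_s
```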
Data preparation for functional data analysis of PM10 in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Shaadan, Norshahida; Jemain, Abdul Aziz; Deni, Sayang Mohd
2014-07-01
The use of curves or functional data in study analysis is gaining momentum in various fields of research. The statistical method used to analyze such data is known as functional data analysis (FDA). The first step in FDA is to convert the observed data points, which are repeatedly recorded over a period of time or space, into either a rough (raw) or a smooth curve. In the case of the smooth curve, basis function expansion is one of the methods used for the data conversion. The data can be converted into a smooth curve by using either the regression smoothing or the roughness penalty smoothing approach. With the regression smoothing approach, the degree of smoothness of the curve depends strongly on the number of basis functions k; with the roughness penalty approach, the smoothness depends on a roughness coefficient given by the parameter λ. Based on previous studies, researchers often used the rather time-consuming trial-and-error or cross-validation method to estimate the appropriate number of basis functions. Thus, this paper proposes a statistical procedure to construct functional data, or curves, from hourly and daily recorded data. The Bayesian Information Criterion is used to determine the number of basis functions, while the Generalized Cross Validation criterion is used to identify the parameter λ. The proposed procedure is then applied to a ten-year (2001-2010) period of PM10 data from 30 air quality monitoring stations located in Peninsular Malaysia. It was found that the number of basis functions required for the construction of the PM10 daily curve in Peninsular Malaysia was in the interval between 14 and 20 with an average value of 17; the first percentile is 15 and the third percentile is 19. Meanwhile, the initial value of the roughness coefficient was in the interval between 10⁻⁵ and 10⁻⁷ and the mode was 10⁻⁶. An example of the functional descriptive analysis is also shown.
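A hedged sketch of the basis-expansion step follows: a Fourier basis with a second-difference roughness penalty, with the penalty parameter chosen by Generalized Cross Validation. It illustrates only the GCV part of the procedure (the BIC choice of the number of basis functions is omitted), and the basis and penalty are simplifications of the paper's setup.

```python
import numpy as np

def fourier_basis(t, n_basis):
    """Fourier basis on [0, 1]: a constant plus sine/cosine pairs."""
    cols = [np.ones_like(t)]
    for k in range(1, (n_basis + 1) // 2 + 1):
        cols.append(np.sin(2 * np.pi * k * t))
        cols.append(np.cos(2 * np.pi * k * t))
    return np.column_stack(cols[:n_basis])

def penalized_smooth(y, n_basis=17, lambdas=np.logspace(-8, 0, 30)):
    """Roughness-penalised basis smoothing with GCV choice of lambda.

    Coefficients solve (B'B + lambda * D'D) c = B'y, where D is a
    second-difference penalty on the coefficients; the smoothing level is
    picked by minimising GCV(lambda) = n * RSS / (n - tr(H))^2.
    """
    n = len(y)
    t = np.linspace(0.0, 1.0, n)
    B = fourier_basis(t, n_basis)
    D = np.diff(np.eye(n_basis), n=2, axis=0)
    best = None
    for lam in lambdas:
        M = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T)
        H = B @ M                                  # hat matrix
        fit = H @ y
        rss = np.sum((y - fit) ** 2)
        gcv = n * rss / (n - np.trace(H)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, fit)
    return best[1], best[2]                        # chosen lambda and smoothed curve
```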
Simple data-smoothing and noise-suppression technique
NASA Technical Reports Server (NTRS)
Duty, R. L.
1970-01-01
An algorithm, based on the Borel method of summing divergent sequences, is used for smoothing noisy data where knowledge of the frequency content is not required. The technique's effectiveness is demonstrated by a series of graphs.
Federico, Alejandro; Kaufmann, Guillermo H
2005-05-10
We evaluate the use of smoothing splines with a weighted roughness measure for local denoising of the correlation fringes produced in digital speckle pattern interferometry. In particular, we also evaluate the performance of the multiplicative correlation operation between two speckle patterns that is proposed as an alternative procedure to generate the correlation fringes. It is shown that the application of a normalization algorithm to the smoothed correlation fringes reduces the excessive bias generated in the previous filtering stage. The evaluation is carried out by use of computer-simulated fringes that are generated for different average speckle sizes and intensities of the reference beam, including decorrelation effects. A comparison with filtering methods based on the continuous wavelet transform is also presented. Finally, the performance of the smoothing method in processing experimental data is illustrated.
Adaptive Fuzzy Bounded Control for Consensus of Multiple Strict-Feedback Nonlinear Systems.
Wang, Wei; Tong, Shaocheng
2018-02-01
This paper studies the adaptive fuzzy bounded control problem for leader-follower multiagent systems, where each follower is modeled by an uncertain nonlinear strict-feedback system. Combining the fuzzy approximation with the dynamic surface control, an adaptive fuzzy control scheme is developed to guarantee the output consensus of all agents under directed communication topologies. Different from the existing results, the bounds of the control inputs are known a priori, and they can be determined by the feedback control gains. To realize smooth and fast learning, a predictor is introduced to estimate each error surface, and the corresponding predictor error is employed to learn the optimal fuzzy parameter vector. It is proved that the developed adaptive fuzzy control scheme guarantees the uniform ultimate boundedness of the closed-loop systems, and the tracking error converges to a small neighborhood of the origin. The simulation results and comparisons are provided to show the validity of the control strategy presented in this paper.
Lee, B; Lee, J-R; Na, S
2009-06-01
The administration of short-acting opioids can be a reliable and safe method to prevent coughing during emergence from anaesthesia but the proper dose or effect site concentration of remifentanil for this purpose has not been reported. We therefore investigated the effect site concentration (Ce) of remifentanil for preventing cough during emergence from anaesthesia with propofol-remifentanil target-controlled infusion. Twenty-three ASA I-II grade female patients, aged 23-66 yr undergoing elective thyroidectomy were enrolled in this study. EC50 and EC95 of remifentanil for preventing cough were determined using Dixon's up-and-down method and probit analysis. Propofol effect site concentration at extubation, mean arterial pressure, and heart rate (HR) were compared in patients with smooth emergence and without smooth emergence. Three out of 11 patients with remifentanil Ce of 1.5 ng ml⁻¹ and all seven patients with Ce of 2.0 ng ml⁻¹ did not cough during emergence; the EC50 of remifentanil that suppressed coughing was 1.46 ng ml⁻¹ by Dixon's up-and-down method, and EC95 was 2.14 ng ml⁻¹ by probit analysis. Effect site concentration of propofol at awakening was similar in patients with a smooth emergence and those without smooth emergence, but HR and arterial pressure were higher in those who coughed during emergence. Clinically significant hypoventilation was not seen in any patient. We found that the EC95 of effect site concentration of remifentanil to suppress coughing at emergence from anaesthesia was 2.14 ng ml⁻¹. Maintaining an established Ce of remifentanil is a reliable method of abolishing cough and thereby targeting smooth emergence from anaesthesia.
Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P
2014-01-01
Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to count neuronal distribution, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps but the smoothing parameters used may affect the outcome.
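The paper's comparison is implemented as an R-script; as a small illustration of the Gaussian kernel option only, the Python sketch below smooths sampled densities onto a regular grid. The function name, bandwidth handling and grid layout are assumptions, not the authors' code.

```python
import numpy as np

def gaussian_kernel_map(x, y, counts, grid_x, grid_y, bandwidth):
    """Smooth sampled cell-density counts onto a regular grid.

    Nadaraya-Watson style Gaussian kernel smoothing: each grid node receives
    a weighted average of the sampled densities, with weights decaying with
    distance.  A larger bandwidth gives a smoother (but flatter) map.
    """
    gx, gy = np.meshgrid(grid_x, grid_y)
    d2 = (gx[..., None] - x) ** 2 + (gy[..., None] - y) ** 2
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    return (w * counts).sum(axis=-1) / w.sum(axis=-1)
```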
How bootstrap can help in forecasting time series with more than one seasonal pattern
NASA Astrophysics Data System (ADS)
Cordeiro, Clara; Neves, M. Manuela
2012-09-01
The search for the future is an appealing challenge in time series analysis. The diversity of forecasting methodologies is inevitable and is still expanding. Exponential smoothing methods are the launch platform for modelling and forecasting in time series analysis. Recently this methodology has been combined with bootstrapping, revealing good performance. The algorithm (Boot.EXPOS), which uses exponential smoothing and bootstrap methodologies, has shown promising results for forecasting time series with one seasonal pattern. For the case of more than one seasonal pattern, the double seasonal Holt-Winters methods and the corresponding exponential smoothing methods were developed. A new challenge is to combine these seasonal methods with the bootstrap and carry over a resampling scheme similar to that used in the Boot.EXPOS procedure. The performance of such a partnership is illustrated for some well-known data sets available in software.
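A hedged sketch of the general Boot.EXPOS idea (fit an exponential smoothing model, bootstrap its residuals, refit and forecast each pseudo-series, then aggregate) is shown below. It assumes the Python statsmodels library rather than the authors' R implementation, handles only a single seasonal pattern, and the function and parameter names are invented.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def bootstrap_ets_forecast(y, season_length=12, horizon=12, n_boot=200, seed=1):
    """Bootstrap-aggregated Holt-Winters forecasts (Boot.EXPOS-like idea).

    y is a 1-D array of observations.  Fit an additive Holt-Winters model,
    resample its one-step residuals, rebuild pseudo-series, refit and
    forecast each, then average the forecast paths.
    """
    rng = np.random.default_rng(seed)
    base = ExponentialSmoothing(y, trend="add", seasonal="add",
                                seasonal_periods=season_length).fit()
    resid = y - base.fittedvalues
    paths = []
    for _ in range(n_boot):
        pseudo = base.fittedvalues + rng.choice(resid, size=len(y), replace=True)
        fit = ExponentialSmoothing(pseudo, trend="add", seasonal="add",
                                   seasonal_periods=season_length).fit()
        paths.append(fit.forecast(horizon))
    return np.mean(paths, axis=0)   # point forecast; path percentiles give intervals
```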
An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization
2012-08-17
the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not...significantly outperforming several state-of-the-art solvers on most tested problems. The resulting MATLAB solver, called TVAL3, has been posted online [23].
Rowell, Amber E.; Aughey, Robert J.; Hopkins, William G.; Esmaeili, Alizera; Lazarus, Brendan H.; Cormack, Stuart J.
2018-01-01
Introduction: Training load and other measures potentially related to match performance are routinely monitored in team-sport athletes. The aim of this research was to examine the effect of training load on such measures and on match performance during a season of professional football. Materials and Methods: Training load was measured daily as session duration times perceived exertion in 23 A-League football players. Measures of exponentially weighted cumulative training load were calculated using decay factors representing time constants of 3–28 days. Players performed a countermovement jump for estimation of a measure of neuromuscular recovery (ratio of flight time to contraction time, FT:CT), and provided a saliva sample for measurement of testosterone and cortisol concentrations 1-day prior to each of 34 matches. Match performance was assessed via ratings provided by five coaching and fitness staff on a 5-point Likert scale. Effects of training load on FT:CT, hormone concentrations and match performance were modeled as quadratic predictors and expressed as changes in the outcome measure for a change in the predictor of one within-player standard deviation (1 SD) below and above the mean. Changes in each of five playing positions were assessed using standardization and magnitude-based inference. Results: The largest effects of training were generally observed in the 3- to 14-day windows. Center defenders showed a small reduction in coach rating when 14-day smoothed load increased from −1 SD to the mean (-0.31, ±0.15; mean, ±90% confidence limits), whereas strikers and wide midfielders displayed a small increase in coach rating when load increased 1 SD above the mean. The effects of training load on FT:CT were mostly unclear or trivial, but effects of training load on hormones included a large increase in cortisol (102, ±58%) and moderate increase in testosterone (24, ±18%) in center defenders when 3-day smoothed training load increased 1 SD above the mean. A 1 SD increase in training load above the mean generally resulted in substantial reductions in the testosterone:cortisol ratio. Conclusion: The effects of recent training on match performance and hormones in A-League football players highlight the importance of position-specific monitoring and training. PMID:29930514
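The Methods describe exponentially weighted cumulative loads with decay time constants of 3-28 days. One common way to implement such a load (the authors' exact decay formula may differ) is the recursive exponentially weighted average sketched below; the function name and the lambda = exp(-1/tau) parameterisation are assumptions.

```python
import numpy as np

def ewma_load(daily_load, time_constant_days):
    """Exponentially weighted cumulative training load.

    daily_load: session-RPE x duration for each day (0 on rest days).
    The decay factor is lambda = exp(-1 / time_constant), so a 3-day
    constant weights recent sessions heavily while a 28-day constant
    tracks chronic load.
    """
    lam = np.exp(-1.0 / time_constant_days)
    smoothed = np.zeros(len(daily_load))
    acc = 0.0
    for i, load in enumerate(daily_load):
        acc = lam * acc + (1.0 - lam) * load
        smoothed[i] = acc
    return smoothed

# e.g. ewma_load(loads, 3) and ewma_load(loads, 14) for acute vs. chronic load
```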
Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat
2008-11-26
Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided.
Investigation of noise in gear transmissions by the method of mathematical smoothing of experiments
NASA Technical Reports Server (NTRS)
Sheftel, B. T.; Lipskiy, G. K.; Ananov, P. P.; Chernenko, I. K.
1973-01-01
A rotatable central component smoothing method is used to analyze rotating gear noise spectra. A matrix is formulated in which the randomized rows correspond to various tests and the columns to factor values. Canonical analysis of the obtained regression equation permits the calculation of optimal speed and load at a previously assigned noise level.
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-01-01
In this paper, a novel nonlinear framework of smoothing method, non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student’s t-distribution is adopted in order to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed-delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on the real-world data, which is collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS has significant improvement on the vehicle state accuracy and outperforms the existing filtering and smoothing methods. PMID:27187405
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C; Adcock, A; Azevedo, S
2010-12-28
Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and de Hoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
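For readers who want to experiment with a GCV-selected cubic smoothing spline outside the NIF software: recent SciPy releases (1.10+) appear to provide make_smoothing_spline, which selects the penalty by generalized cross-validation when lam is None. The sketch below uses it after averaging redundant time samples, a simplification relative to the paper's handling of redundant samples and per-channel noise weights; the wrapper name is invented.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

def stitch_and_smooth(times, values):
    """GCV-selected cubic smoothing spline over stitched channel data.

    `times`/`values` are lists of per-channel sample arrays.  Redundant time
    samples are averaged before fitting because make_smoothing_spline expects
    unique, increasing abscissae; the paper's algorithm instead keeps
    redundant samples and per-channel noise weights.
    """
    t = np.concatenate(times)
    v = np.concatenate(values)
    order = np.argsort(t)
    t, v = t[order], v[order]
    t_unique, inverse = np.unique(t, return_inverse=True)
    v_mean = np.bincount(inverse, weights=v) / np.bincount(inverse)
    return make_smoothing_spline(t_unique, v_mean, lam=None)  # lam=None -> GCV

# spline = stitch_and_smooth([t_chan1, t_chan2], [v_chan1, v_chan2]); spline(t_grid)
```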
Improved Outcomes Following a Single Session Web-Based Intervention for Problem Gambling.
Rodda, S N; Lubman, D I; Jackson, A C; Dowling, N A
2017-03-01
Research suggests online interventions can have instant impact; however, this is yet to be tested with help-seeking adults and in particular those with problem gambling. This study seeks to determine the immediate impact of a single session web-based intervention for problem gambling, and to examine whether sessions evaluated positively by clients are associated with greater improvement. The current study involved 229 participants classified as problem gamblers who agreed to participate after accessing Gambling Help Online between November 2010 and February 2012. Almost half were aged under 35 years (45 %); most were male (57 %) and first-time treatment seekers (62 %). Participants completed measures of readiness to change and distress both prior to and post-counselling. Following the provision of a single session of counselling, participants completed ratings of the character of the session (i.e., degree of depth and smoothness) post-counselling. A significant increase in confidence to resist an urge to gamble and a significant decrease in distress (moderate effect sizes; d = .56 and .63, respectively) were observed after receiving online counselling. A hierarchical regression indicated the character of the session was a significant predictor of change in confidence; however, only the smoothness sub-scale was a significant predictor of change in distress. This was the case even after controlling for pre-session distress, session word count and client characteristics (gender, age, preferred gambling activity, preferred mode of gambling, gambling severity, and preferred mode of help-seeking). These findings suggest that single session web-based counselling for problem gambling can have immediate benefits, although further research is required to examine the impact on longer-term outcomes.
Gosse, Philippe; Cremer, Antoine; Pereira, Helena; Bobrie, Guillaume; Chatellier, Gilles; Chamontin, Bernard; Courand, Pierre-Yves; Delsart, Pascal; Denolle, Thierry; Dourmap, Caroline; Ferrari, Emile; Girerd, Xavier; Michel Halimi, Jean; Herpin, Daniel; Lantelme, Pierre; Monge, Matthieu; Mounier-Vehier, Claire; Mourad, Jean-Jacques; Ormezzano, Olivier; Ribstein, Jean; Rossignol, Patrick; Sapoval, Marc; Vaïsse, Bernard; Zannad, Faiez; Azizi, Michel
2017-03-01
The DENERHTN trial (Renal Denervation for Hypertension) confirmed the blood pressure (BP) lowering efficacy of renal denervation added to a standardized stepped-care antihypertensive treatment for resistant hypertension at 6 months. We report here the effect of denervation on 24-hour BP and its variability and look for parameters that predicted the BP response. Patients with resistant hypertension were randomly assigned to denervation plus stepped-care treatment or treatment alone (control). The average and standard deviation of 24-hour, daytime, and nighttime BP and the smoothness index were calculated on recordings performed at randomization and 6 months. Responders were defined by a 6-month 24-hour systolic BP reduction ≥20 mm Hg. Analyses were performed on the per-protocol population. The significantly greater BP reduction in the denervation group was associated with a higher smoothness index (P=0.02). Variability of 24-hour, daytime, and nighttime BP did not change significantly from baseline to 6 months in either group. The number of responders was greater in the denervation group (20/44, 44.5%) than in the control group (11/53, 20.8%; P=0.01). In the discriminant analysis, baseline average nighttime systolic BP and standard deviation were significant predictors of the systolic BP response in the denervation group only, allowing adequate responder classification of 70% of the patients. Our results show that denervation lowers ambulatory BP homogeneously over 24 hours in patients with resistant hypertension and suggest that nighttime systolic BP and variability are predictors of the BP response to denervation. URL: https://www.clinicaltrials.gov. Unique identifier: NCT01570777. © 2017 American Heart Association, Inc.
Rapid Structured Volume Grid Smoothing and Adaption Technique
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
2006-01-01
A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.
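The grid-smoothing record above cites Taubin's signal-processing approach. As a rough, hedged illustration of that idea (not the Volume Grid Manipulator implementation), the sketch below applies Taubin-style lambda|mu filtering to a one-dimensional coordinate signal, which stands in for a grid line's Cartesian coordinate sampled along arclength; the parameter values are conventional defaults, not values from the paper.

```python
import numpy as np

def taubin_smooth(signal, lam=0.5, mu=-0.53, passes=50):
    """Taubin lambda|mu smoothing of a 1-D signal. Alternating a positive
    (shrinking) and a negative (inflating) Laplacian step low-pass filters
    the signal without the shrinkage of plain Laplacian smoothing."""
    x = np.asarray(signal, dtype=float).copy()
    for _ in range(passes):
        for factor in (lam, mu):
            lap = np.zeros_like(x)
            lap[1:-1] = 0.5 * (x[:-2] + x[2:]) - x[1:-1]   # umbrella Laplacian
            x[1:-1] += factor * lap[1:-1]                   # end points held fixed
    return x

# Example: remove high-frequency stretching noise from a grid-line coordinate.
s = np.linspace(0.0, 1.0, 101)
noisy = s**2 + 0.01 * np.sin(80 * np.pi * s)
print(np.abs(taubin_smooth(noisy) - s**2).max())
```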
Surface smoothness: cartilage biomarkers for knee OA beyond the radiologist
NASA Astrophysics Data System (ADS)
Tummala, Sudhakar; Dam, Erik B.
2010-03-01
Fully automatic imaging biomarkers may allow quantification of patho-physiological processes that a radiologist would not be able to assess reliably. This can introduce new insight but is problematic to validate due to lack of meaningful ground truth expert measurements. Rather than quantification accuracy, such novel markers must therefore be validated against clinically meaningful end-goals such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method used on tibial and femoral cartilage compartments resulting from an automatic segmentation scheme. These smoothness estimates are validated for their ability to diagnose osteoarthritis and compared to smoothness estimates based on manual expert segmentations and to conventional cartilage volume quantification. We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations, and in addition provide a diagnostic marker superior to the evaluated semi-manual markers.
Functional mixed effects spectral analysis
KRAFTY, ROBERT T.; HALL, MARTICA; GUO, WENSHENG
2011-01-01
In many experiments, time series data can be collected from multiple units and multiple time series segments can be collected from the same unit. This article introduces a mixed effects Cramér spectral representation which can be used to model the effects of design covariates on the second-order power spectrum while accounting for potential correlations among the time series segments collected from the same unit. The transfer function is composed of a deterministic component to account for the population-average effects and a random component to account for the unit-specific deviations. The resulting log-spectrum has a functional mixed effects representation where both the fixed effects and random effects are functions in the frequency domain. It is shown that, when the replicate-specific spectra are smooth, the log-periodograms converge to a functional mixed effects model. A data-driven iterative estimation procedure is offered for the periodic smoothing spline estimation of the fixed effects, penalized estimation of the functional covariance of the random effects, and unit-specific random effects prediction via the best linear unbiased predictor. PMID:26855437
Microscopic morphology evolution during ion beam smoothing of Zerodur® surfaces.
Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin
2014-01-13
Ion sputtering of Zerodur material often results in the formation of nanoscale microstructures on the surfaces, which seriously degrades optical surface quality. In this paper, we describe the microscopic morphology evolution during ion sputtering of Zerodur surfaces through experimental research and theoretical analysis, which shows that preferential sputtering together with curvature-dependent sputtering overcomes ion-induced smoothing mechanisms, leading to the formation of granular nanopatterns and the coarsening of the surface. Consequently, we propose a new method for ion beam smoothing (IBS) of Zerodur optics assisted by deterministic ion beam material adding (IBA) technology. With this method, Zerodur optics with surface roughness down to the 0.15 nm root mean square (RMS) level were obtained in the experimental investigation, which demonstrates the feasibility of the proposed method.
Control Strategies for Smoothing of Output Power of Wind Energy Conversion Systems
NASA Astrophysics Data System (ADS)
Pratap, Alok; Urasaki, Naomitsu; Senju, Tomonobu
2013-10-01
This article presents a control method for output power smoothing of a wind energy conversion system (WECS) with a permanent magnet synchronous generator (PMSG) using the inertia of the wind turbine and pitch control. The WECS used in this article adopts an AC-DC-AC converter system. The generator-side converter controls the torque of the PMSG, while the grid-side inverter controls the DC-link and grid voltages. For the generator-side converter, the torque command is determined using fuzzy logic. The inputs of the fuzzy logic are the operating point of the rotational speed of the PMSG and the difference between the wind turbine torque and the generator torque. By means of the proposed method, the generator torque is smoothed, and the kinetic energy stored in the inertia of the wind turbine can be utilized to smooth the output power fluctuations of the PMSG. In addition, the wind turbine's shaft stress is mitigated compared to conventional maximum power point tracking control. The effectiveness of the proposed method is verified by numerical simulations.
Error detection and data smoothing based on local procedures
NASA Technical Reports Server (NTRS)
Guerra, V. M.
1974-01-01
An algorithm is presented which is able to locate isolated bad points and correct them without contaminating the rest of the good data. This work has been greatly influenced and motivated by what is currently done in the manual loft. It is not within the scope of this work to handle small random errors characteristic of a noisy system, and it is therefore assumed that the bad points are isolated and relatively few when compared with the total number of points. Motivated by the desire to imitate the loftsman, a visual experiment was conducted to determine what is considered smooth data. This criterion is used to determine how much the data should be smoothed and to prove that this method produces such data. The method ultimately converges to a set of points that lies on the polynomial that interpolates the first and last points; however, convergence to such a set is definitely not the purpose of our algorithm. The proof of convergence is necessary to demonstrate that oscillation does not take place and that in a finite number of steps the method produces a set as smooth as desired.
Inverse analysis and regularisation in conditional source-term estimation modelling
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.
2014-05-01
Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter and the uncertainty in the solution is determined based on the width of the credible intervals. The width of the credible intervals is significantly reduced with the inclusion of a smoothing prior and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals are not dependent upon a previous solution and better predict characteristics for higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
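Since the abstract contrasts zeroth- and first-order Tikhonov regularisation for a Fredholm equation of the first kind, a generic sketch may help fix ideas. This is not the CSE implementation; it solves the regularised least-squares problem via an augmented system in plain NumPy, with `order=1` playing the role of the smoothing prior.

```python
import numpy as np

def tikhonov_solve(A, b, lam, order=0):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 for a discretised Fredholm
    equation of the first kind. order=0 uses L = I (penalises magnitude);
    order=1 uses a first-difference L (penalises non-smoothness in x)."""
    n = A.shape[1]
    L = np.eye(n) if order == 0 else np.diff(np.eye(n), axis=0)
    A_aug = np.vstack([A, lam * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Example: a smooth, nearly singular kernel makes the unregularised problem
# ill-posed; first-order regularisation recovers a smooth profile from noisy data.
z = np.linspace(0.0, 1.0, 80)
A = np.exp(-((z[:, None] - z[None, :]) / 0.1) ** 2)
x_true = np.sin(np.pi * z)
b = A @ x_true + 1e-3 * np.random.default_rng(1).standard_normal(z.size)
x_rec = tikhonov_solve(A, b, lam=1e-2, order=1)
print("max error:", np.abs(x_rec - x_true).max())
```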
Norman, Matthew R.
2014-11-24
New Hermite Weighted Essentially Non-Oscillatory (HWENO) interpolants are developed and investigated within the Multi-Moment Finite-Volume (MMFV) formulation using the ADER-DT time discretization. Whereas traditional WENO methods interpolate pointwise, function-based WENO methods explicitly form a non-oscillatory, high-order polynomial over the cell in question. This study chooses a function-based approach and details how fast convergence to optimal weights for smooth flow is ensured. Methods of sixth-, eighth-, and tenth-order accuracy are developed. We compare these against traditional single-moment WENO methods of fifth-, seventh-, ninth-, and eleventh-order accuracy to compare against more familiar methods from the literature. The new HWENO methods improve upon existing HWENO methods (1) by giving a better resolution of unreinforced contact discontinuities and (2) by only needing a single HWENO polynomial to update both the cell mean value and cell mean derivative. Test cases to validate and assess these methods include 1-D linear transport, the 1-D inviscid Burgers' equation, and the 1-D inviscid Euler equations. Smooth and non-smooth flows are used for evaluation. These HWENO methods performed better than comparable literature-standard WENO methods for all regimes of discontinuity and smoothness in all tests herein. They exhibit improved optimal accuracy due to the use of derivatives, and they collapse to solutions similar to typical WENO methods when limiting is required. The study concludes that the new HWENO methods are robust and effective when used in the ADER-DT MMFV framework. Finally, these results are intended to demonstrate capability rather than exhaust all possible implementations.
Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M
2006-04-21
Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN) and several non-parametric methods, which include the set association approach, combinatorial partitioning method (CPM), restricted partitioning method (RPM), multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods to approach association studies with large numbers of predictor variables. GPNN on the other hand may be a useful approach to select and model important predictors, but its performance to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset of predictors with an important contribution to disease. The combinatorial methods give more insight in combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association studies using the case-control design, the application of a combination of several methods, including the set association approach, MDR and the random forests approach, will likely be a useful strategy to find the important genes and interaction patterns involved in complex diseases.
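Of the approaches surveyed above, the random forests approach is easy to demonstrate. The sketch below is a generic scikit-learn example (not from the commentary) showing how feature importances can reduce a large set of SNP-like predictors to a candidate subset; the data are simulated and all names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Simulated case-control data: 200 subjects, 500 SNP-like predictors coded 0/1/2,
# with only the first two predictors affecting disease risk.
rng = np.random.default_rng(42)
X = rng.integers(0, 3, size=(200, 500))
logit = 0.9 * X[:, 0] + 0.9 * X[:, 1] - 1.8
y = rng.random(200) < 1.0 / (1.0 + np.exp(-logit))

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[::-1][:10]
print("top-ranked predictors:", top)   # the causal predictors should rank near the top
```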
Enhancement of surface definition and gridding in the EAGLE code
NASA Technical Reports Server (NTRS)
Thompson, Joe F.
1991-01-01
Algorithms for smoothing of curves and surfaces for the EAGLE grid generation program are presented. The method uses an existing automated technique which detects undesirable geometric characteristics by using a local fairness criterion. The geometry entity is then smoothed by repeated removal and insertion of spline knots in the vicinity of the geometric irregularity. The smoothing algorithm is formulated for use with curves in Beta spline form and tensor product B-spline surfaces.
GEE-Smoothing Spline in Semiparametric Model with Correlated Nominal Data
NASA Astrophysics Data System (ADS)
Ibrahim, Noor Akma; Suliadi
2010-11-01
In this paper we propose GEE-Smoothing spline for the estimation of semiparametric models with correlated nominal data. The method can be seen as an extension of the parametric generalized estimating equation approach to semiparametric models. The nonparametric component is estimated using a smoothing spline, specifically the natural cubic spline. We use a profile algorithm in the estimation of both the parametric and nonparametric components. The properties of the estimators are evaluated using simulation studies.
Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)
Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K
2011-01-01
To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069
Accurate interlaminar stress recovery from finite element analysis
NASA Technical Reports Server (NTRS)
Tessler, Alexander; Riggs, H. Ronald
1994-01-01
The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing strains of superior accuracy and their first gradients. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of the equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply-supported rectangular plate under a doubly sinusoidal load. The problem has an exact analytic solution which serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.
NASA Astrophysics Data System (ADS)
Divakov, D.; Sevastianov, L.; Nikolaev, N.
2017-01-01
The paper deals with a numerical solution of the problem of waveguide propagation of polarized light in a smoothly irregular transition between closed regular waveguides using the incomplete Galerkin method. This method consists in a change of variables that reduces the Helmholtz equation to a system of ordinary differential equations by the Kantorovich method, and in the formulation of boundary conditions for the resulting system. The boundary problem for the ODE system is formulated in the computer algebra system Maple. The stated boundary problem is then solved using Maple's libraries of numerical methods.
NASA Astrophysics Data System (ADS)
Raymond, Samuel J.; Jones, Bruce; Williams, John R.
2018-01-01
A strategy is introduced to allow coupling of the material point method (MPM) and smoothed particle hydrodynamics (SPH) for numerical simulations. This new strategy partitions the domain into SPH and MPM regions; particles carry all state variables, so no special treatment is required for the transition between regions. The aim of this work is to derive and validate the coupling methodology between MPM and SPH. Such coupling allows general boundary conditions to be used in an SPH simulation without further augmentation. Additionally, because SPH is a purely particle-based method whereas MPM combines particles and a mesh, this coupling also permits a smooth transition from particle methods to mesh methods, where further coupling to mesh methods could in future provide an effective farfield boundary treatment for the SPH method. The coupling technique is introduced and described alongside a number of simulations in 1D and 2D to validate and contextualize the potential of using these two methods in a single simulation. The strategy shown here is capable of fully coupling the two methods without any complicated algorithms to transfer information from one method to another.
A robust method of thin plate spline and its application to DEM construction
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Li, Yanyan
2012-11-01
In order to avoid the ill-conditioning problem of the thin plate spline (TPS), the orthogonal least squares (OLS) method was introduced, and a modified OLS (MOLS) was developed. The MOLS version of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also easily compute the weights of the knots by back-substitution. For interpolating large numbers of sampling points, we developed a local TPS-M, in which some neighbouring sampling points around the point being estimated are selected for computation. Numerical tests indicate that, irrespective of sampling noise level, the average performance of TPS-M compares favourably with that of smoothing TPS. For the same simulation accuracy, the computational time of TPS-M decreases as the number of sampling points increases. The smooth fitting results on lidar-derived noisy data indicate that TPS-M has an obvious smoothing effect, on par with smoothing TPS. The example of constructing a series of large-scale DEMs, located in Shandong province, China, was employed to comparatively analyse the estimation accuracies of the two versions of TPS and the classical interpolation methods, including inverse distance weighting (IDW), ordinary kriging (OK) and universal kriging with a second-order drift function (UK). Results show that, regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except for the smoothing TPS at the finest sampling interval of 20 m and the two versions of kriging at the spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is considered a robust method for DEM construction.
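For contrast with the knot-selecting TPS-M described above, a standard smoothing thin plate spline can be fit directly with SciPy's `RBFInterpolator` (SciPy 1.7 or later assumed). This sketch is not the authors' MOLS algorithm; it only illustrates the interpolating-versus-smoothing TPS distinction on made-up scattered elevation samples.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Noisy elevation samples at scattered (x, y) locations.
rng = np.random.default_rng(3)
pts = rng.uniform(0.0, 100.0, size=(400, 2))
z = np.sin(pts[:, 0] / 15.0) * np.cos(pts[:, 1] / 20.0) + 0.05 * rng.standard_normal(400)

# smoothing=0 interpolates exactly (honours the noise); smoothing>0 relaxes the fit,
# analogous to a smoothing TPS.  TPS-M instead controls complexity by selecting knots.
tps_exact  = RBFInterpolator(pts, z, kernel='thin_plate_spline', smoothing=0.0)
tps_smooth = RBFInterpolator(pts, z, kernel='thin_plate_spline', smoothing=1.0)

# Evaluate both surfaces on a coarse DEM grid.
gx, gy = np.meshgrid(np.linspace(0, 100, 51), np.linspace(0, 100, 51))
grid = np.column_stack([gx.ravel(), gy.ravel()])
dem_exact, dem_smooth = tps_exact(grid), tps_smooth(grid)
print(dem_exact.shape, dem_smooth.shape)
```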
1985-04-01
AFHRL-TR-84-64, Air Force Human Resources Laboratory (OCR-damaged record). Equipercentile test equating: the effects of presmoothing and ... a combined or compound presmoother and a presmoothing method based on a particular model of test scores. Of the seven methods of presmoothing the score ... unsmoothed distributions, the smoothing of that sequence of differences by the same compound method, and, finally, adding the smoothed differences back ...
Smooth Sensor Motion Planning for Robotic Cyber Physical Social Sensing (CPSS)
Tang, Hong; Li, Liangzhi; Xiao, Nanfeng
2017-01-01
Although many researchers have begun to study the area of Cyber Physical Social Sensing (CPSS), few are focused on robotic sensors. We successfully utilize robots in CPSS and propose a sensor trajectory planning method in this paper. Trajectory planning is a fundamental problem in mobile robotics. However, traditional methods are not suited to robotic sensors because of their low efficiency, instability, and the non-smooth paths they generate. This paper adopts an optimizing function to generate several intermediate points and regresses these discrete points to a quintic polynomial that outputs a smooth trajectory for the robotic sensor. Simulations demonstrate that our approach is robust and efficient, and can be well applied in the CPSS field. PMID:28218649
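A minimal sketch of the quintic-regression step described above: intermediate way-points are regressed onto a fifth-degree polynomial, whose first and second derivatives give smooth velocity and acceleration profiles. The optimisation that generates the way-points is not reproduced here; the numbers are made up.

```python
import numpy as np

def quintic_trajectory(t_points, q_points):
    """Regress discrete intermediate way-points onto a quintic polynomial q(t),
    giving a trajectory with continuous velocity and acceleration."""
    coeffs = np.polyfit(t_points, q_points, deg=5)
    q = np.poly1d(coeffs)
    return q, q.deriv(1), q.deriv(2)        # position, velocity, acceleration

# Example: way-points assumed to come from some earlier optimisation step.
t_wp = np.linspace(0.0, 4.0, 9)
x_wp = np.array([0.0, 0.4, 1.1, 2.0, 2.9, 3.6, 4.1, 4.4, 4.5])
pos, vel, acc = quintic_trajectory(t_wp, x_wp)
t = np.linspace(0.0, 4.0, 200)
print("peak speed:", np.max(np.abs(vel(t))), "peak accel:", np.max(np.abs(acc(t))))
```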
Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Martin, Daniel F.
2011-07-01
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
2015-01-01
Many commonly used coarse-grained models for proteins are based on simplified interaction sites and consequently may suffer from significant limitations, such as the inability to properly model protein secondary structure without the addition of restraints. Recent work on a benzene fluid (Lettieri, S.; Zuckerman, D. M. J. Comput. Chem. 2012, 33, 268−275) suggested an alternative strategy of tabulating and smoothing fully atomistic orientation-dependent interactions among rigid molecules or fragments. Here we report our initial efforts to apply this approach to the polar and covalent interactions intrinsic to polypeptides. We divide proteins into nearly rigid fragments, construct distance and orientation-dependent tables of the atomistic interaction energies between those fragments, and apply potential energy smoothing techniques to those tables. The amount of smoothing can be adjusted to give coarse-grained models that range from the underlying atomistic force field all the way to a bead-like coarse-grained model. For a moderate amount of smoothing, the method is able to preserve about 70–90% of the α-helical structure while providing a factor of 3–10 improvement in sampling per unit computation time (depending on how sampling is measured). For a greater amount of smoothing, multiple folding–unfolding transitions of the peptide were observed, along with a factor of 10–100 improvement in sampling per unit computation time, although the time spent in the unfolded state was increased compared with less smoothed simulations. For a β hairpin, secondary structure is also preserved, albeit for a narrower range of the smoothing parameter and, consequently, for a more modest improvement in sampling. We have also applied the new method in a “resolution exchange” setting, in which each replica runs a Monte Carlo simulation with a different degree of smoothing. We obtain exchange rates that compare favorably to our previous efforts at resolution exchange (Lyman, E.; Zuckerman, D. M. J. Chem. Theory Comput. 2006, 2, 656−666). PMID:25400525
NASA Astrophysics Data System (ADS)
Garcia, Daniel D.; van de Pol, Corina; Barsky, Brian A.; Klein, Stanley A.
1999-06-01
Many current corneal topography instruments (called videokeratographs) provide an `acuity index' based on corneal smoothness to analyze expected visual acuity. However, post-refractive surgery patients often exhibit better acuity than is predicted by such indices. One reason for this is that visual acuity may not necessarily be determined by overall corneal smoothness but rather by having some part of the cornea able to focus light coherently onto the fovea. We present a new method of representing visual acuity by measuring the wavefront aberration, using principles from both ray and wave optics. For each point P on the cornea, we measure the size of the associated coherence area whose optical path length (OPL), from a reference plane to P's focus, is within a certain tolerance of the OPL for P. We measured the topographies and vision of 62 eyes of patients who had undergone the corneal refractive surgery procedures of photorefractive keratectomy (PRK) and photorefractive astigmatic keratectomy (PARK). In addition to high contrast visual acuity, our vision tests included low contrast and low luminance to test the contribution of the PRK transition zone. We found our metric for visual acuity to be better than all other metrics at predicting the acuity of low contrast and low luminance. However, high contrast visual acuity was poorly predicted by all of the indices we studied, including our own. The indices provided by current videokeratographs sometimes fail for corneas whose shape differs from simple ellipsoidal models. This is the case with post-PRK and post-PARK refractive surgery patients. Our alternative representation that displays the coherence area of the wavefront has considerable advantages, and promises to be a better predictor of low contrast and low luminance visual acuity than current shape measures.
Mitigating Short-Term Variations of Photovoltaic Generation Using Energy Storage with VOLTTRON
NASA Astrophysics Data System (ADS)
Morrissey, Kevin
A smart-building communications system performs smoothing on photovoltaic (PV) power generation using a battery energy storage system (BESS). The system runs using VOLTTRON(TM), a multi-agent python-based software platform dedicated to power systems. The VOLTTRON(TM) system designed for this project runs synergistically with the larger University of Washington VOLTTRON(TM) environment, which is designed to operate UW device communications and databases as well as to perform real-time operations for research. One such research algorithm that operates simultaneously with this PV Smoothing System is an energy cost optimization system which optimizes net demand and associated cost throughout a day using the BESS. The PV Smoothing System features an active low-pass filter with an adaptable time constant, as well as adjustable limitations on the output power and accumulated battery energy of the BESS contribution. The system was analyzed using 26 days of PV generation at 1-second resolution. PV smoothing was studied with unconstrained BESS contribution as well as under a broad range of BESS constraints analogous to variable-sized storage. It was determined that a large inverter output power was more important for PV smoothing than a large battery energy capacity. Two methods of selecting the time constant in real time, static and adaptive, are studied for their impact on system performance. It was found that both systems provide a high level of PV smoothing performance, within 8% of the ideal case where the best time constant is known ahead of time. The system was run in real time using VOLTTRON(TM) with BESS limitations of 5 kW/6.5 kWh and an adaptive update period of 7 days. The system behaved as expected given the BESS parameters and time constant selection methods, providing smoothing on the PV generation and updating the time constant periodically using the adaptive time constant selection method.
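A hedged sketch of the smoothing idea described above: a first-order (single-pole) low-pass filter sets the smoothed output, and the battery supplies the difference subject to inverter power and energy-capacity limits. This is not the VOLTTRON(TM) implementation, the adaptive time-constant selection is omitted, and the parameter names and half-charged initial state are assumptions.

```python
import numpy as np

def smooth_pv(pv, dt=1.0, tau=300.0, p_max=5.0, e_max=6.5 * 3600.0):
    """First-order low-pass smoothing of PV power with battery limits.
    pv: PV generation [kW] at dt-second resolution; tau: filter time constant [s];
    p_max: inverter power limit [kW]; e_max: usable battery energy [kJ]."""
    alpha = dt / (tau + dt)
    out = np.empty_like(pv)
    out[0] = pv[0]
    energy = 0.5 * e_max                                       # start half charged [kJ]
    for k in range(1, pv.size):
        target = out[k - 1] + alpha * (pv[k] - out[k - 1])     # low-pass filter target
        batt = np.clip(target - pv[k], -p_max, p_max)          # >0 discharge, <0 charge [kW]
        batt = np.clip(batt, -(e_max - energy) / dt, energy / dt)  # respect state of charge
        energy -= batt * dt
        out[k] = pv[k] + batt
    return out

# Example: one hour of 1-s PV data with a passing-cloud dip.
t = np.arange(3600)
pv = 4.0 - 2.5 * np.exp(-((t - 1800) / 120.0) ** 2) \
     + 0.2 * np.random.default_rng(7).standard_normal(t.size)
print("raw std: %.3f kW, smoothed std: %.3f kW" % (pv.std(), smooth_pv(pv).std()))
```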
Drews, Ulrich; Renz, Matthias; Busch, Christian; Reisenauer, Christl
2012-11-01
In a previous study we observed impaired smooth muscle in the uterosacral ligament (USL) of patients with pelvic organ prolapse. The aims of the study were to describe the novel microperfusion system and to determine the normal function and pharmacology of smooth muscle in the USL. Samples from the USL were obtained during hysterectomy for benign reasons. Small stretches of connective tissue were mounted in a perfusion chamber under the stereomicroscope. Isotonic contractions of smooth muscle were monitored by digital time-lapse video and quantified by image processing. Constant perfusion with carbachol elicited tonic contractions, and pulse stimulation with carbachol and oxytocin elicited rhythmic contractions, of smooth muscle in the ground reticulum. Under constant perfusion with relaxin, the tonic contraction after carbachol was abolished. With the novel microperfusion system, isotonic contractions of smooth muscle in the USL can be recorded and quantified in the tissue microenvironment at the microscopic level. The USL smooth muscle is cholinergic, stimulated by oxytocin and modulated by relaxin. Copyright © 2012 Wiley Periodicals, Inc.
Steady-state shear flows via nonequilibrium molecular dynamics and smooth-particle applied mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Posch, H.A.; Hoover, W.G.; Kum, O.
1995-08-01
We simulate both microscopic and macroscopic shear flows in two space dimensions using nonequilibrium molecular dynamics and smooth-particle applied mechanics. The time-reversible microscopic equations of motion are isomorphic to the smooth-particle description of inviscid macroscopic continuum mechanics. The corresponding microscopic particle interactions are relatively weak and long ranged. Though conventional Green-Kubo theory suggests instability or divergence in two-dimensional flows, we successfully define and measure a finite shear viscosity coefficient by simulating stationary plane Couette flow. The special nature of the weak long-ranged smooth-particle functions corresponds to an unusual kind of microscopic transport. This microscopic analog is mainly kinetic, even at high density. For the soft Lucy potential which we use in the present work, nearly all the system energy is potential, but the resulting shear viscosity is nearly all kinetic. We show that the measured shear viscosities can be understood in terms of a simple weak-scattering model, and that this understanding is useful in assessing the usefulness of continuum simulations using the smooth-particle method. We apply that method to the Rayleigh-Benard problem of thermally driven convection in a gravitational field.
Liu, Wenjie; Hu, Xiaolong; Zou, Qiushun; Wu, Shaoying; Jin, Chongjun
2018-06-15
External light sources are mostly employed to functionalize the plasmonic components, resulting in a bulky footprint. Electrically driven integrated plasmonic devices, combining ultra-compact critical feature sizes with extremely high transmission speeds and low power consumption, can link plasmonics with the present-day electronic world. In an effort to achieve this prospect, suppressing the losses in the plasmonic devices becomes a pressing issue. In this work, we developed a novel polymethyl methacrylate 'bond and peel' method to fabricate metal films with sub-nanometer smooth surfaces on semiconductor wafers. Based on this method, we further fabricated a compact plasmonic source containing a metal-insulator-metal (MIM) waveguide with an ultra-smooth metal surface on a GaAs-based light-emitting diode wafer. An increase in propagation length of the SPP mode by a factor of 2.95 was achieved as compared with the conventional device containing a relatively rough metal surface. Numerical calculations further confirmed that the propagation length is comparable to the theoretical prediction on the MIM waveguide with perfectly smooth metal surfaces. This method facilitates low-loss and high-integration of electrically driven plasmonic devices, thus provides an immediate opportunity for the practical application of on-chip integrated plasmonic circuits.
A simple scaling model for smooth vs. rough bathymetry along hotspot tracks
NASA Astrophysics Data System (ADS)
Orellana Rovirosa, F.; Richards, M. A.
2016-12-01
Oceanic hotspot tracks exhibit a remarkable variety of morphologies, both in terms of volcanic seamounts/ocean islands and in terms of broader bathymetric swells. A conspicuous feature is that although most hotspot tracks are characterized by "rough" topography, due mainly to volcanic construction, a number are much "smoother," and likely dominated more by the thermal/dynamic swell and crustal intrusion. Examples of relatively smooth tracks include the Nazca Ridge, Carnegie/Cocos/Galápagos, Walvis Ridge, Rio Grande Rise, Iceland, Kerguelen and much of the Ninety-east Ridge; these contrast with rough and discontinuous seamount chains such as Easter/Sala y Gomez, Tristan-Gough, Louisville, Emperor, and much of the Hawaiian ridge. Previous studies have pointed out the roles of age, lithospheric thickness, and plume strength in the style of the associated bathymetry. Here, we take a systematic approach that emphasizes remarkable along-track changes from smooth to rough topography, e.g., the rough Sala y Gomez and smooth Nazca Ridge portions of the Easter Island hotspot track. Considering the primary controls to be the hotspot swell volume flux Qs, the plate-hotspot relative speed v, and the lithospheric elastic thickness D, we suggest that such transitions are controlled by the dimensionless parameter R = sqrt(Qs / v) / D, which is roughly a measure of the heat available from the plume relative to the heat necessary to thermally attenuate the overlying lithosphere. For very thin (young) lithosphere, such as at the Galápagos platform, igneous intrusion into the hot, weak lithosphere and lower crust may dominate the topographic expression of the hotspot, whereas older lithosphere will support large volcanoes built from magmas passing through more intact lithosphere. Using data from observational studies on mantle-plume buoyancy fluxes, gravity, bathymetry, and tectonic reconstructions, we show that R is a good predictor of bathymetric style: for R<2 hotspot tracks are rough, and for R>3 they are smooth. This analysis therefore gives a straightforward and quantitative framework for interpreting the topographic/bathymetric expressions of oceanic hotspot tracks.
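The classification rule stated in the abstract is simple enough to encode directly. The sketch below assumes SI units for the swell volume flux, plate speed, and elastic thickness, and the example numbers are hypothetical.

```python
import math

def bathymetry_style(Qs_m3_per_s, plate_speed_m_per_s, elastic_thickness_m):
    """Dimensionless ratio R = sqrt(Qs / v) / D from the abstract: R < 2 suggests a
    rough (volcano-dominated) track, R > 3 a smooth (swell/intrusion-dominated) one."""
    R = math.sqrt(Qs_m3_per_s / plate_speed_m_per_s) / elastic_thickness_m
    if R < 2.0:
        style = "rough"
    elif R > 3.0:
        style = "smooth"
    else:
        style = "transitional"
    return R, style

# Illustrative (made-up) numbers: a vigorous plume over slow-moving, thin lithosphere.
Qs = 20.0                        # hotspot swell volume flux, m^3/s (hypothetical)
v = 0.05 / 3.15e7                # plate-hotspot speed: 5 cm/yr expressed in m/s
D = 10.0e3                       # elastic thickness, m
print(bathymetry_style(Qs, v, D))
```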
Garza-Gisholt, Eduardo; Hemmi, Jan M.; Hart, Nathan S.; Collin, Shaun P.
2014-01-01
Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed ‘by eye’. With the use of a stereological approach to count neuronal distributions, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation ‘respects’ the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the ‘noise’ caused by artefacts and permits a clearer representation of the dominant, ‘real’ distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome. PMID:24747568
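As a rough illustration of the interpolation-versus-smoothing contrast drawn above (the original analysis is an R script; this sketch substitutes SciPy's `griddata` linear interpolation and a Gaussian filter for the Akima, thin-plate-spline, and kernel methods it compares), consider synthetic cell-count data:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.interpolate import griddata

# Hypothetical stereological counts: cell densities at scattered retinal locations.
rng = np.random.default_rng(11)
xy = rng.uniform(0.0, 10.0, size=(300, 2))                    # sample positions (mm)
density = 8000 * np.exp(-((xy[:, 0] - 5) ** 2 + (xy[:, 1] - 5) ** 2) / 8.0) \
          + 400 * rng.standard_normal(300)                    # cells/mm^2 plus noise

gx, gy = np.mgrid[0:10:200j, 0:10:200j]
# Interpolation honours every count, noise included ...
interp_map = griddata(xy, density, (gx, gy), method='linear')
# ... whereas kernel smoothing of the gridded map suppresses sampling noise.
filled = np.nan_to_num(interp_map, nan=float(np.nanmean(interp_map)))
smooth_map = gaussian_filter(filled, sigma=5.0)
print(np.nanmax(interp_map), smooth_map.max())
```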
Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P
2015-03-01
Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line and planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malashko, Ya I; Khabibulin, V M
We have derived analytical expressions, verified by the methods of numerical simulation, to evaluate the angular divergence of nondiffractive laser beams containing smooth aberrations, i.e., spherical defocusing, astigmatism and toroid. Using these expressions we have formulated the criteria for admissible values of smooth aberrations. (laser applications and other topics in quantum electronics)
Functional overestimation due to spatial smoothing of fMRI data.
Liu, Peng; Calhoun, Vince; Chen, Zikuan
2017-11-01
Pearson correlation (hereafter simply correlation) is a basic technique for neuroimage function analysis. It has been observed that spatial smoothing may cause functional overestimation, which, however, remains incompletely understood. Herein, we present a theoretical explanation from the perspective of correlation scale invariance. For a task-evoked spatiotemporal functional dataset, we can extract the functional spatial map by calculating the temporal correlations (tcorr) of voxel timecourses against the task timecourse. From the relationship between image noise level (changed through spatial smoothing) and the tcorr map calculation, we show that spatial smoothing reduces noise, which in turn smooths the tcorr map and leads to a spatial expansion of the estimated neuroactivity blobs. Through numerical simulations and subject experiments, we show that spatial smoothing of fMRI data may overestimate activation spots in the correlation functional map. Our results suggest only a small spatial smoothing (a kernel with a full width at half maximum (FWHM) of no more than two voxels) in fMRI data processing for correlation-based functional mapping. COMPARISON WITH EXISTING METHODS: In extreme noiselessness, the scale-invariance property of correlation defines a meaningless binary tcorr map. In reality, a functional activity blob in a tcorr map is shaped by the spoilage of correlative responses by image noise. We may reduce the data noise level by smoothing, which in turn poses a smoothing effect on the correlation. This logic allows us to understand the noise dependence and the smoothing effect of correlation-based fMRI data analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
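The overestimation effect described above can be reproduced with a few lines of synthetic data: compute the tcorr map before and after spatial smoothing and count suprathreshold voxels. The threshold, blob size, and noise level below are arbitrary choices for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic task-evoked data: a 32x32 slice, 100 time points, one small active blob.
rng = np.random.default_rng(5)
task = np.tile([1.0, 1.0, 0.0, 0.0], 25)                  # boxcar task timecourse
data = rng.standard_normal((100, 32, 32))
data[:, 14:18, 14:18] += 0.8 * task[:, None, None]        # 4x4 truly active region

def tcorr_map(vols, regressor):
    """Pearson correlation of each voxel timecourse against the task timecourse."""
    v = (vols - vols.mean(0)) / vols.std(0)
    r = (regressor - regressor.mean()) / regressor.std()
    return (v * r[:, None, None]).mean(0)

raw_map = tcorr_map(data, task)
smoothed = gaussian_filter(data, sigma=(0, 2.0, 2.0))     # spatial smoothing only
smoothed_map = tcorr_map(smoothed, task)
thr = 0.3
print("suprathreshold voxels: raw %d, smoothed %d"
      % ((raw_map > thr).sum(), (smoothed_map > thr).sum()))  # smoothing inflates the blob
```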
Gao, Bo-Cai; Liu, Ming
2013-01-01
Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which captures the minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
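A hedged sketch of the gain-curve idea described above (not the authors' code): each model-derived spectrum is fit with a cubic smoothing spline, the smoothed-to-original ratios are averaged over the scene to form a common gain curve, and that gain is applied to every spectrum. The smoothing factor `s` and the toy scene are assumptions.

```python
import numpy as np
from scipy.interpolate import splrep, splev

def gain_curve(wavelengths, reflectance_cube, s=0.01):
    """Average ratio of spline-smoothed to original spectra over all pixels:
    shared narrow artifacts show up in the gain, scene features largely cancel."""
    ratios = []
    for spec in reflectance_cube.reshape(-1, wavelengths.size):
        tck = splrep(wavelengths, spec, s=s)          # cubic smoothing spline fit
        ratios.append(splev(wavelengths, tck) / np.clip(spec, 1e-6, None))
    return np.mean(ratios, axis=0)

# Example: a tiny 3x3-pixel "scene" with a shared narrow artifact at one band.
wl = np.linspace(0.4, 2.5, 211)
base = 0.3 + 0.1 * np.sin(3 * wl)
cube = np.tile(base, (3, 3, 1)) * (1.0 + 0.02 * np.random.default_rng(2).standard_normal((3, 3, 211)))
cube[..., 100] *= 0.9                                  # common residual absorption dip
smoothed_cube = cube * gain_curve(wl, cube)            # apply the common gain everywhere
print(smoothed_cube.shape)
```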
Cook, Daniel P.; Rector, Michael V.; Bouzek, Drake C.; Michalski, Andrew S.; Gansemer, Nicholas D.; Reznikov, Leah R.; Li, Xiaopeng; Stroik, Mallory R.; Ostedgaard, Lynda S.; Abou Alaiwa, Mahmoud H.; Thompson, Michael A.; Prakash, Y. S.; Krishnan, Ramaswamy; Meyerholz, David K.; Seow, Chun Y.
2016-01-01
Rationale: An asthma-like airway phenotype has been described in people with cystic fibrosis (CF). Whether these findings are directly caused by loss of CF transmembrane conductance regulator (CFTR) function or secondary to chronic airway infection and/or inflammation has been difficult to determine. Objectives: Airway contractility is primarily determined by airway smooth muscle. We tested the hypothesis that CFTR is expressed in airway smooth muscle and directly affects airway smooth muscle contractility. Methods: Newborn pigs, both wild type and with CF (before the onset of airway infection and inflammation), were used in this study. High-resolution immunofluorescence was used to identify the subcellular localization of CFTR in airway smooth muscle. Airway smooth muscle function was determined with tissue myography, intracellular calcium measurements, and regulatory myosin light chain phosphorylation status. Precision-cut lung slices were used to investigate the therapeutic potential of CFTR modulation on airway reactivity. Measurements and Main Results: We found that CFTR localizes to the sarcoplasmic reticulum compartment of airway smooth muscle and regulates airway smooth muscle tone. Loss of CFTR function led to delayed calcium reuptake following cholinergic stimulation and increased myosin light chain phosphorylation. CFTR potentiation with ivacaftor decreased airway reactivity in precision-cut lung slices following cholinergic stimulation. Conclusions: Loss of CFTR alters porcine airway smooth muscle function and may contribute to the airflow obstruction phenotype observed in human CF. Airway smooth muscle CFTR may represent a therapeutic target in CF and other diseases of airway narrowing. PMID:26488271
Design and simulation of origami structures with smooth folds
Peraza Hernandez, E. A.; Lagoudas, D. C.
2017-01-01
Origami has enabled new approaches to the fabrication and functionality of multiple structures. Current methods for origami design are restricted to the idealization of folds as creases of zeroth-order geometric continuity. Such an idealization is not proper for origami structures of non-negligible fold thickness or maximum curvature at the folds restricted by material limitations. For such structures, folds are not properly represented as creases but rather as bent regions of higher-order geometric continuity. Such fold regions of arbitrary order of continuity are termed as smooth folds. This paper presents a method for solving the following origami design problem: given a goal shape represented as a polygonal mesh (termed as the goal mesh), find the geometry of a single planar sheet, its pattern of smooth folds, and the history of folding motion allowing the sheet to approximate the goal mesh. The parametrization of the planar sheet and the constraints that allow for a valid pattern of smooth folds are presented. The method is tested against various goal meshes having diverse geometries. The results show that every determined sheet approximates its corresponding goal mesh in a known folded configuration having fold angles obtained from the geometry of the goal mesh. PMID:28484322
Restoring a smooth function from its noisy integrals
NASA Astrophysics Data System (ADS)
Goulko, Olga; Prokof'ev, Nikolay; Svistunov, Boris
2018-05-01
Numerical (and experimental) data analysis often requires the restoration of a smooth function from a set of sampled integrals over finite bins. We present the bin hierarchy method that efficiently computes the maximally smooth function from the sampled integrals using essentially all the information contained in the data. We perform extensive tests with different classes of functions and levels of data quality, including Monte Carlo data suffering from a severe sign problem and physical data for the Green's function of the Fröhlich polaron.
An impact analysis of forecasting methods and forecasting parameters on bullwhip effect
NASA Astrophysics Data System (ADS)
Silitonga, R. Y. H.; Jelly, N.
2018-04-01
The bullwhip effect is an increase in the variance of demand fluctuations from the downstream to the upstream end of a supply chain. Forecasting methods and forecasting parameters have been recognized as factors that affect the bullwhip phenomenon. To study these factors, we can develop simulations. Previous studies have simulated the bullwhip effect in several ways, such as mathematical equation modelling, information control modelling, and computer programs, among others. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show the change in the bullwhip effect ratio caused by differences in forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving average period, smoothing parameter, signalling factor, and safety stock factor. The results show that decreasing the moving average period, increasing the smoothing parameter, or increasing the signalling factor can create a larger bullwhip effect ratio. Meanwhile, the safety stock factor had no impact on the bullwhip effect.
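To make the dependence on the smoothing parameter concrete, the sketch below (not Bullwhip Explorer) computes a bullwhip ratio for an order-up-to policy driven by exponential-smoothing forecasts; the inventory policy details are simplified assumptions.

```python
import numpy as np

def bullwhip_ratio(demand, alpha=0.3, lead_time=2, safety_factor=1.0):
    """Order-up-to policy with exponential-smoothing forecasts.
    Bullwhip ratio = Var(orders) / Var(demand); a larger alpha reacts more strongly
    to the latest demand and, as in the study, tends to inflate the ratio."""
    forecast = demand[0]
    orders, prev_target = [], None
    for d in demand:
        forecast = alpha * d + (1 - alpha) * forecast              # exponential smoothing
        target = (lead_time + safety_factor) * forecast            # order-up-to level
        if prev_target is not None:
            orders.append(max(0.0, d + target - prev_target))      # replenish demand + level change
        prev_target = target
    return np.var(orders) / np.var(demand)

demand = 100 + 10 * np.random.default_rng(9).standard_normal(2000)
for a in (0.1, 0.3, 0.6):
    print(f"alpha={a}: bullwhip ratio = {bullwhip_ratio(demand, alpha=a):.2f}")
```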
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, Alexandre M.; Trask, Nathaniel; Pan, K.
2016-03-11
Smoothed Particle Hydrodynamics (SPH) is a Lagrangian method based on a meshless discretization of partial differential equations. In this review, we present SPH discretization of the Navier-Stokes and Advection-Diffusion-Reaction equations, implementation of various boundary conditions, and time integration of the SPH equations, and we discuss applications of the SPH method for modeling pore-scale multiphase flows and reactive transport in porous and fractured media.
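The SPH discretization described in the review replaces field values by kernel-weighted sums over neighboring particles. As a minimal, hedged illustration (not the authors' code), the sketch below computes particle densities with a standard 2D cubic spline kernel; the kernel constants, smoothing length, and toy particle set are assumptions for the example.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 2D cubic spline SPH kernel W(r, h)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    w = np.where(q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """Density at each particle via the SPH summation rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

# toy example: 200 equal-mass particles scattered on the unit square
pos = np.random.default_rng(1).random((200, 2))
rho = sph_density(pos, np.full(200, 1.0 / 200), h=0.1)
print(rho.mean())
```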
DeFeo, T T; Morgan, K G
1985-05-01
A modified method for enzymatically isolating mammalian vascular smooth muscle cells has been developed and tested for ferret portal vein smooth muscle. This method produces a high proportion of fully relaxed cells and these cells appear to have normal pharmacological responsiveness. The ED50 values for both alpha stimulation and potassium depolarization are not significantly different in the isolated cells from those obtained from intact strips of ferret portal vein, suggesting that the enzymatic treatment does not destroy receptors or alter the electrical responsiveness of the cells. It was also possible to demonstrate a vasodilatory action of papaverine, nitroprusside and adenosine directly on the isolated cells indicating that the pathways involved are intact in the isolated cells. This method should be of considerable usefulness, particularly in combination with the new fluorescent indicators and cell sorter techniques which require isolated cells.
NASA Technical Reports Server (NTRS)
Beutter, Brent R.; Stone, Leland S.
1997-01-01
Although numerous studies have examined the relationship between smooth-pursuit eye movements and motion perception, it remains unresolved whether a common motion-processing system subserves both perception and pursuit. To address this question, we simultaneously recorded perceptual direction judgments and the concomitant smooth eye movement response to a plaid stimulus that we have previously shown generates systematic perceptual errors. We measured the perceptual direction biases psychophysically and the smooth eye-movement direction biases using two methods (standard averaging and oculometric analysis). We found that the perceptual and oculomotor biases were nearly identical, suggesting that pursuit and perception share a critical motion processing stage, perhaps in area MT or MST of extrastriate visual cortex.
NASA Technical Reports Server (NTRS)
Beutter, B. R.; Stone, L. S.
1998-01-01
Although numerous studies have examined the relationship between smooth-pursuit eye movements and motion perception, it remains unresolved whether a common motion-processing system subserves both perception and pursuit. To address this question, we simultaneously recorded perceptual direction judgments and the concomitant smooth eye-movement response to a plaid stimulus that we have previously shown generates systematic perceptual errors. We measured the perceptual direction biases psychophysically and the smooth eye-movement direction biases using two methods (standard averaging and oculometric analysis). We found that the perceptual and oculomotor biases were nearly identical, suggesting that pursuit and perception share a critical motion processing stage, perhaps in area MT or MST of extrastriate visual cortex.
Isentropic compressive wave generator impact pillow and method of making same
Barker, Lynn M.
1985-01-01
An isentropic compressive wave generator and method of making same. The wave generator comprises a disk or flat "pillow" member having component materials of different shock impedances formed in a configuration resulting in a smooth shock impedance gradient over the thickness thereof for interpositioning between an impactor member and a target specimen for producing a shock wave of a smooth predictable rise time. The method of making the pillow member comprises the reduction of the component materials to a powder form and forming the pillow member by sedimentation and compressive techniques.
Isentropic compressive wave generator and method of making same
Barker, L.M.
An isentropic compressive wave generator and method of making same are disclosed. The wave generator comprises a disk or flat pillow member having component materials of different shock impedances formed in a configuration resulting in a smooth shock impedance gradient over the thickness thereof for interpositioning between an impactor member and a target specimen for producing a shock wave of a smooth predictable rise time. The method of making the pillow member comprises the reduction of the component materials to a powder form and forming the pillow member by sedimentation and compressive techniques.
Method for smoothing the surface of a protective coating
Sangeeta, D.; Johnson, Curtis Alan; Nelson, Warren Arthur
2001-01-01
A method for smoothing the surface of a ceramic-based protective coating which exhibits roughness is disclosed. The method includes the steps of applying a ceramic-based slurry or gel coating to the protective coating surface; heating the slurry/gel coating to remove volatile material; and then further heating the slurry/gel coating to cure the coating and bond it to the underlying protective coating. The slurry/gel coating is often based on yttria-stabilized zirconia, and precursors of an oxide matrix. Related articles of manufacture are also described.
DOT National Transportation Integrated Search
2013-06-01
The Indiana Department of Transportation (INDOT) is currently utilizing a profilograph and the profile index for measuring smoothness assurance for newly constructed pavements. However, there are benefits to implementing a new IRI-based smoothness ...
Ng, Valerie Y.; Morisseau, Christophe; Falck, John R.; Hammock, Bruce D.; Kroetz, Deanna L.
2007-01-01
Objective Proliferation of smooth muscle cells is implicated in cardiovascular complications. Previously, a urea-based soluble epoxide hydrolase inhibitor was shown to attenuate smooth muscle cell proliferation. We examined the possibility that urea-based alkanoic acids activate the nuclear receptor peroxisome proliferator-activated receptor α (PPARα) and the role of PPARα in smooth muscle cell proliferation. Methods and Results Alkanoic acids transactivated PPARα, induced binding of PPARα to its response element, and significantly induced the expression of PPARα-responsive genes, showing their function as PPARα agonists. Furthermore, the alkanoic acids attenuated platelet-derived growth factor–induced smooth muscle cell proliferation via repression of cyclin D1 expression. Using small interfering RNA to decrease endogenous PPARα expression, it was determined that PPARα was partially involved in the cyclin D1 repression. The antiproliferative effects of alkanoic acids may also be attributed to their inhibitory effects on soluble epoxide hydrolase, because epoxyeicosatrienoic acids alone inhibited smooth muscle cell proliferation. Conclusions These results show that attenuation of smooth muscle cell proliferation by urea-based alkanoic acids is mediated, in part, by the activation of PPARα. These acids may be useful for designing therapeutics to treat diseases characterized by excessive smooth muscle cell proliferation. PMID:16917105
Nonmuscle myosin is regulated during smooth muscle contraction.
Yuen, Samantha L; Ogut, Ozgur; Brozovich, Frank V
2009-07-01
The participation of nonmuscle myosin in force maintenance is controversial. Furthermore, its regulation is difficult to examine in a cellular context, as the light chains of smooth muscle and nonmuscle myosin comigrate under native and denaturing electrophoresis techniques. Therefore, the regulatory light chains of smooth muscle myosin (SM-RLC) and nonmuscle myosin (NM-RLC) were purified, and these proteins were resolved by isoelectric focusing. Using this method, intact mouse aortic smooth muscle homogenates demonstrated four distinct RLC isoelectric variants. These spots were identified as phosphorylated NM-RLC (most acidic), nonphosphorylated NM-RLC, phosphorylated SM-RLC, and nonphosphorylated SM-RLC (most basic). During smooth muscle activation, NM-RLC phosphorylation increased. During depolarization, the increase in NM-RLC phosphorylation was unaffected by inhibition of either Rho kinase or PKC. However, inhibition of Rho kinase blocked the angiotensin II-induced increase in NM-RLC phosphorylation. Additionally, force for angiotensin II stimulation of aortic smooth muscle from heterozygous nonmuscle myosin IIB knockout mice was significantly less than that of wild-type littermates, suggesting that, in smooth muscle, activation of nonmuscle myosin is important for force maintenance. The data also demonstrate that, in smooth muscle, the activation of nonmuscle myosin is regulated by Ca(2+)-calmodulin-activated myosin light chain kinase during depolarization and a Rho kinase-dependent pathway during agonist stimulation.
Some practical observations on the predictor jump method for solving the Laplace equation
NASA Astrophysics Data System (ADS)
Duque-Carrillo, J. F.; Vega-Fernández, J. M.; Peña-Bernal, J. J.; Rossell-Bueno, M. A.
1986-01-01
The best conditions for the application of the predictor jump (PJ) method in the solution of the Laplace equation are discussed and some practical considerations for applying this new iterative technique are presented. The PJ method was remarked on in a previous article entitled ``A new way for solving Laplace's problem (the predictor jump method)'' [J. M. Vega-Fernández, J. F. Duque-Carrillo, and J. J. Peña-Bernal, J. Math. Phys. 26, 416 (1985)].
Zaretzki, Jed; Bergeron, Charles; Rydberg, Patrik; Huang, Tao-wei; Bennett, Kristin P; Breneman, Curt M
2011-07-25
This article describes RegioSelectivity-Predictor (RS-Predictor), a new in silico method for generating predictive models of P450-mediated metabolism for drug-like compounds. Within this method, potential sites of metabolism (SOMs) are represented as "metabolophores": A concept that describes the hierarchical combination of topological and quantum chemical descriptors needed to represent the reactivity of potential metabolic reaction sites. RS-Predictor modeling involves the use of metabolophore descriptors together with multiple-instance ranking (MIRank) to generate an optimized descriptor weight vector that encodes regioselectivity trends across all cases in a training set. The resulting pathway-independent (O-dealkylation vs N-oxidation vs Csp(3) hydroxylation, etc.), isozyme-specific regioselectivity model may be used to predict potential metabolic liabilities. In the present work, cross-validated RS-Predictor models were generated for a set of 394 substrates of CYP 3A4 as a proof-of-principle for the method. Rank aggregation was then employed to merge independently generated predictions for each substrate into a single consensus prediction. The resulting consensus RS-Predictor models were shown to reliably identify at least one observed site of metabolism in the top two rank-positions on 78% of the substrates. Comparisons between RS-Predictor and previously described regioselectivity prediction methods reveal new insights into how in silico metabolite prediction methods should be compared.
Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping
2011-01-01
Background Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. Results In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Conclusions Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset. PMID:21978359
Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping.
Hampton, Kristen H; Serre, Marc L; Gesink, Dionne C; Pilcher, Christopher D; Miller, William C
2011-10-06
Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset.
Background: Simulation studies have previously demonstrated that time-series analyses using smoothing splines correctly model null health-air pollution associations. Methods: We repeatedly simulated season, meteorology and air quality for the metropolitan area of Atlanta from cyc...
Zhang, Kun; Zhang, Yinyin; Feng, Weijing; Chen, Renhua; Chen, Jie; Touyz, Rhian M; Wang, Jingfeng; Huang, Hui
2017-10-01
Vascular calcification (VC) is an important predictor of cardiovascular morbidity and mortality. Osteogenic differentiation of vascular smooth muscle cells (VSMCs) is a key mechanism of VC. Recent studies show that IL-18 (interleukin-18) favors VC while TRPM7 (transient receptor potential melastatin 7) channel upregulation inhibits VC. However, the relationship between IL-18 and TRPM7 is unclear. We questioned whether IL-18 enhances VC and osteogenic differentiation of VSMCs through TRPM7 channel activation. Coronary artery calcification and serum IL-18 were measured in patients by computed tomographic scanning and enzyme-linked immunosorbent assay, respectively. Calcification of primary rat VSMCs was induced by high inorganic phosphate, and the cells were exposed to IL-18. VSMCs were also treated with the TRPM7 antagonist 2-aminoethoxy-diphenylborate or TRPM7 small interfering RNA to block TRPM7 channel activity and expression. TRPM7 currents were recorded by patch-clamp. Human studies showed that serum IL-18 levels were positively associated with coronary artery calcium scores (r=0.91; P<0.001). In VSMCs, IL-18 significantly decreased expression of the contractile markers α-smooth muscle actin and smooth muscle 22α, and increased calcium deposition, alkaline phosphatase activity, and expression of the osteogenic differentiation markers bone morphogenetic protein-2, Runx2 (runt-related transcription factor 2), and osteocalcin (P<0.05). IL-18 increased TRPM7 expression through ERK1/2 (extracellular signal-regulated kinase 1/2) signaling activation, and TRPM7 currents were augmented by IL-18 treatment. Inhibition of the TRPM7 channel by 2-aminoethoxy-diphenylborate or TRPM7 small interfering RNA prevented IL-18-enhanced osteogenic differentiation and VSMC calcification. These findings suggest that coronary artery calcification is associated with increased IL-18 levels. IL-18 enhances VSMC osteogenic differentiation and subsequent VC induced by β-glycerophosphate via TRPM7 channel activation. Accordingly, IL-18 may contribute to VC in proinflammatory conditions. © 2017 American Heart Association, Inc.
Balancing aggregation and smoothing errors in inverse models
Turner, A. J.; Jacob, D. J.
2015-06-30
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
Balancing aggregation and smoothing errors in inverse models
NASA Astrophysics Data System (ADS)
Turner, A. J.; Jacob, D. J.
2015-01-01
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
Penalized spline estimation for functional coefficient regression models.
Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan
2010-04-01
The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application.
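To make the P-spline idea concrete, the following sketch implements a simple ridge-type penalized spline smoother in Python/NumPy. The truncated power basis, knot count, and fixed smoothing parameter λ are illustrative assumptions, not the authors' implementation, which additionally handles functional coefficients and data-driven choice of λ via MCV, GCV, EBBS, or REML.

```python
import numpy as np

def pspline_fit(x, y, n_knots=20, degree=3, lam=1.0):
    """Ridge-type penalized spline smoother: truncated power basis with a
    ridge penalty on the knot coefficients only (illustrative)."""
    knots = np.linspace(x.min(), x.max(), n_knots + 2)[1:-1]
    # design matrix: polynomial part + truncated power functions at the knots
    B = np.column_stack([x**d for d in range(degree + 1)] +
                        [np.clip(x - k, 0, None)**degree for k in knots])
    P = np.zeros(B.shape[1])
    P[degree + 1:] = 1.0                     # penalize only the knot coefficients
    beta = np.linalg.solve(B.T @ B + lam * np.diag(P), B.T @ y)
    return B @ beta

# toy example: smooth a noisy sine curve with a fixed smoothing parameter
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.3 * np.random.default_rng(2).standard_normal(200)
yhat = pspline_fit(x, y, lam=0.5)
```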
NASA Technical Reports Server (NTRS)
Zeng, S.; Wesseling, P.
1993-01-01
The performance of a linear multigrid method using four smoothing methods, called SCGS (Symmetrical Coupled Gauss-Seidel), CLGS (Collective Line Gauss-Seidel), SILU (Scalar ILU), and CILU (Collective ILU), is investigated for the incompressible Navier-Stokes equations in general coordinates, in association with Galerkin coarse grid approximation. Robustness and efficiency are measured and compared by application to test problems. The numerical results show that CILU is the most robust, SILU the least, with CLGS and SCGS in between. CLGS is the best in efficiency, SCGS and CILU follow, and SILU is the worst.
Investigation on filter method for smoothing spiral phase plate
NASA Astrophysics Data System (ADS)
Zhang, Yuanhang; Wen, Shenglin; Luo, Zijian; Tang, Caixue; Yan, Hao; Yang, Chunlin; Liu, Mincai; Zhang, Qinghua; Wang, Jian
2018-03-01
The spiral phase plate (SPP) for generating vortex hollow beams has high efficiency in various applications. However, it is difficult to obtain an ideal spiral phase plate because of its continuously varying helical phase and discontinuous phase step. This paper describes the demonstration of a continuous spiral phase plate using filter methods. Numerical simulations indicate that different filter methods, including spatial domain and frequency domain filters, have distinct impacts on the surface topography of the SPP and on the optical vortex characteristics. The experimental results reveal that the spatial Gaussian filter method for smoothing the SPP is suitable for the Computer Controlled Optical Surfacing (CCOS) technique and yields good optical properties.
Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments.
Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke E
2018-03-01
Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode, to explain the maximum variance of the data. Functional PARAFAC permits the entities in different modes to be smooth functions or curves, varying over a continuum, rather than a collection of unconnected responses. The existing functional PARAFAC methods handle functions of a one-dimensional argument (e.g., time) only. In this paper, we propose a new extension of functional PARAFAC for handling three-way data whose responses are sequenced along both a two-dimensional domain (e.g., a plane with x- and y-axis coordinates) and a one-dimensional argument. Technically, the proposed method combines PARAFAC with basis function expansion approximations, using a set of piecewise quadratic finite element basis functions for estimating two-dimensional smooth functions and a set of one-dimensional basis functions for estimating one-dimensional smooth functions. In a simulation study, the proposed method appeared to outperform the conventional PARAFAC. We apply the method to EEG data to demonstrate its empirical usefulness.
Modeling Electrokinetic Flows by the Smoothed Profile Method
Luo, Xian; Beskok, Ali; Karniadakis, George Em
2010-01-01
We propose an efficient modeling method for electrokinetic flows based on the Smoothed Profile Method (SPM) [1–4] and spectral element discretizations. The new method allows for arbitrary differences in the electrical conductivities between the charged surfaces and the surrounding electrolyte solution. The electrokinetic forces are included in the flow equations so that the Poisson-Boltzmann and electric charge continuity equations are cast into forms suitable for SPM. The method is validated by benchmark problems of electroosmotic flow in straight channels and electrophoresis of charged cylinders. We also present simulation results of electrophoresis of charged microtubules, and show that the simulated electrophoretic mobility and anisotropy agree with the experimental values. PMID:20352076
Morgenstern, Hai; Rafaely, Boaz
2018-02-01
Spatial analysis of room acoustics is an ongoing research topic. Microphone arrays have been employed for spatial analyses with an important objective being the estimation of the direction-of-arrival (DOA) of direct sound and early room reflections using room impulse responses (RIRs). An optimal method for DOA estimation is the multiple signal classification algorithm. When RIRs are considered, this method typically fails due to the correlation of room reflections, which leads to rank deficiency of the cross-spectrum matrix. Preprocessing methods for rank restoration, which may involve averaging over frequency, for example, have been proposed exclusively for spherical arrays. However, these methods fail in the case of reflections with equal time delays, which may arise in practice and could be of interest. In this paper, a method is proposed for systems that combine a spherical microphone array and a spherical loudspeaker array, referred to as multiple-input multiple-output systems. This method, referred to as modal smoothing, exploits the additional spatial diversity for rank restoration and succeeds where previous methods fail, as demonstrated in a simulation study. Finally, combining modal smoothing with a preprocessing method is proposed in order to increase the number of DOAs that can be estimated using low-order spherical loudspeaker arrays.
The use of generalised additive models (GAM) in dentistry.
Helfenstein, U; Steiner, M; Menghini, G
1997-12-01
Ordinary multiple regression and logistic multiple regression are widely applied statistical methods which allow a researcher to 'explain' or 'predict' a response variable from a set of explanatory variables or predictors. In these models it is usually assumed that quantitative predictors such as age enter linearly into the model. During recent years these methods have been further developed to allow more flexibility in the way explanatory variables 'act' on a response variable. The methods are called 'generalised additive models' (GAM). The rigid linear terms characterising the association between response and predictors are replaced in an optimal way by flexible curved functions of the predictors (the 'profiles'). Plotting the 'profiles' allows the researcher to visualise easily the shape by which predictors 'act' over the whole range of values. The method facilitates detection of particular shapes such as 'bumps', 'U-shapes', 'J-shapes, 'threshold values' etc. Information about the shape of the association is not revealed by traditional methods. The shapes of the profiles may be checked by performing a Monte Carlo simulation ('bootstrapping'). After the presentation of the GAM a relevant case study is presented in order to demonstrate application and use of the method. The dependence of caries in primary teeth on a set of explanatory variables is investigated. Since GAMs may not be easily accessible to dentists, this article presents them in an introductory condensed form. It was thought that a nonmathematical summary and a worked example might encourage readers to consider the methods described. GAMs may be of great value to dentists in allowing visualisation of the shape by which predictors 'act' and obtaining a better understanding of the complex relationships between predictors and response.
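A minimal way to see how a GAM replaces rigid linear terms with flexible per-predictor "profiles" is backfitting: each predictor's smooth function is repeatedly re-estimated from the partial residuals. The Python sketch below is an illustrative Gaussian-response version with a crude running-mean smoother; a logistic GAM as used in the dental example would additionally need a link function and iteratively reweighted fits. All names and data are hypothetical.

```python
import numpy as np

def running_mean_smooth(x, y, window=15):
    """Crude scatterplot smoother: running mean of y along x-sorted values."""
    order = np.argsort(x)
    ys = np.convolve(y[order], np.ones(window) / window, mode="same")
    out = np.empty_like(y)
    out[order] = ys
    return out

def backfit_additive(X, y, n_iter=20):
    """Backfitting for an additive model y ~ alpha + f1(x1) + ... + fp(xp),
    Gaussian case: each f_j is re-estimated by smoothing the partial residuals."""
    n, p = X.shape
    f = np.zeros((n, p))
    alpha = y.mean()
    for _ in range(n_iter):
        for j in range(p):
            partial = y - alpha - f[:, [k for k in range(p) if k != j]].sum(axis=1)
            f[:, j] = running_mean_smooth(X[:, j], partial)
            f[:, j] -= f[:, j].mean()          # center for identifiability
    return alpha, f

# toy example: two predictors acting through a sine and a quadratic profile
rng = np.random.default_rng(9)
X = rng.uniform(-2, 2, (300, 2))
y = np.sin(X[:, 0]) + X[:, 1]**2 + 0.2 * rng.standard_normal(300)
alpha, f = backfit_additive(X, y)
# plotting f[:, j] against X[:, j] gives the estimated 'profile' of each predictor
```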
Kakaboura, A; Vougiouklakis, G; Argiri, G
1989-01-01
Finishing and polishing an amalgam restoration is considered an important and necessary step of the restorative procedure. Various polishing techniques have been recommended to achieve a smooth amalgam surface. The aim of this study was to investigate the influence of three different polishing treatments on the marginal integrity and surface smoothness of restorations made of three commercially available amalgams and a glass-cermet cement. The materials used were the amalgams Amalcap (Vivadent), Dispersalloy (Johnson and Johnson), and Duralloy (Degussa), and the glass-cermet Katac-Silver (ESPE). The occlusal surfaces of the restorations were polished by three methods: I) a No. 4 round bur, a rubber cup, and zinc oxide paste on a small brush; II) a No. 4 round bur followed by brown, green and super green (Shofu) polishing cups and points in succession; and III) a 12-blade amalgam polishing bur followed by a smooth amalgam polishing bur. Photographs of unpolished and polished surfaces of the restorations were taken with a scanning electron microscope to evaluate the polishing techniques. An improvement in marginal integrity and surface smoothness of all amalgam restorations was observed after the specimens had been polished with the three techniques. Method II, which included the Shofu polishers, gave the best results in comparison with methods I and III. Polishing of the glass-cermet cement was impossible with the examined techniques.
NASA Astrophysics Data System (ADS)
Kaneko, Naoki; Mashiko, Toshihiro; Ohnishi, Taihei; Ohta, Makoto; Namba, Katsunari; Watanabe, Eiju; Kawai, Kensuke
2016-12-01
Patient-specific vascular replicas are essential to the simulation of endovascular treatment or for vascular research. The inside of silicone replica is required to be smooth for manipulating interventional devices without resistance. In this report, we demonstrate the fabrication of patient-specific silicone vessels with a low-cost desktop 3D printer. We show that the surface of an acrylonitrile butadiene styrene (ABS) model printed by the 3D printer can be smoothed by a single dipping in ABS solvent in a time-dependent manner, where a short dip has less effect on the shape of the model. The vascular mold is coated with transparent silicone and then the ABS mold is dissolved after the silicone is cured. Interventional devices can pass through the inside of the smoothed silicone vessel with lower pushing force compared to the vessel without smoothing. The material cost and time required to fabricate the silicone vessel is about USD $2 and 24 h, which is much lower than the current fabrication methods. This fast and low-cost method offers the possibility of testing strategies before attempting particularly difficult cases, while improving the training of endovascular therapy, enabling the trialing of new devices, and broadening the scope of vascular research.
Boron hydride polymer coated substrates
Pearson, R.K.; Bystroff, R.I.; Miller, D.E.
1986-08-27
A method is disclosed for coating a substrate with a uniformly smooth layer of a boron hydride polymer. The method comprises providing a reaction chamber which contains the substrate and the boron hydride plasma. A boron hydride feed stock is introduced into the chamber simultaneously with the generation of a plasma discharge within the chamber. A boron hydride plasma of ions, electrons and free radicals which is generated by the plasma discharge interacts to form a uniformly smooth boron hydride polymer which is deposited on the substrate.
Boron hydride polymer coated substrates
Pearson, Richard K.; Bystroff, Roman I.; Miller, Dale E.
1987-01-01
A method is disclosed for coating a substrate with a uniformly smooth layer of a boron hydride polymer. The method comprises providing a reaction chamber which contains the substrate and the boron hydride plasma. A boron hydride feed stock is introduced into the chamber simultaneously with the generation of a plasma discharge within the chamber. A boron hydride plasma of ions, electrons and free radicals which is generated by the plasma discharge interacts to form a uniformly smooth boron hydride polymer which is deposited on the substrate.
Method For Identifying Sedimentary Bodies From Images And Its Application To Mineral Exploration
NASA Technical Reports Server (NTRS)
Wilkinson, Murray Justin (Inventor)
2006-01-01
A method is disclosed for identifying a sediment accumulation from an image of a part of the earth's surface. The method includes identifying a topographic discontinuity from the image. A river which crosses the discontinuity is identified from the image. From the image, paleocourses of the river are identified which diverge from a point where the river crosses the discontinuity. The paleocourses are disposed on a topographically low side of the discontinuity. A smooth surface which emanates from the point is identified. The smooth surface is also disposed on the topographically low side of the point.
Tan, Jun; Nie, Zaiping
2018-05-12
Direction of Arrival (DOA) estimation of low-altitude targets is difficult due to the multipath coherent interference from the ground reflection image of the targets, especially for very high frequency (VHF) radars, which have antennae that are severely restricted in terms of aperture and height. In this paper, a polarization smoothing generalized multiple signal classification (MUSIC) algorithm, which combines polarization smoothing with the generalized MUSIC algorithm for polarization sensitive arrays (PSAs), is proposed to solve this problem. Firstly, the polarization smoothing pre-processing was exploited to eliminate the coherence between the direct and the specular signals. Secondly, we constructed the generalized MUSIC algorithm for low-angle estimation. Finally, based on the geometry of the symmetric multipath model, the proposed algorithm was introduced to convert the two-dimensional search into a one-dimensional search, thus reducing the computational burden. Numerical results were provided to verify the effectiveness of the proposed method, showing that the proposed algorithm has significantly improved angle estimation performance in the low-angle area compared with the available methods, especially when the grazing angle is near zero.
Alger, Katrina; Bunting, Elizabeth; Schuler, Krysten; Whipps, Christopher M
2017-07-01
Lymphoproliferative disease virus (LPDV) is an oncogenic avian retrovirus that was previously thought to exclusively infect domestic turkeys but was recently shown to be widespread in Wild Turkeys ( Meleagris gallopavo ) throughout most of the eastern US. In commercial flocks, the virus spreads between birds housed in close quarters, but there is little information about potential risk factors for infection in wild birds. Initial studies focused on distribution of LPDV nationally, but investigation of state-level data is necessary to assess potential predictors of infection and detect patterns in disease prevalence and distribution. We tested wild turkey bone marrow samples (n=2,538) obtained from hunter-harvested birds in New York State from 2012 to 2014 for LPDV infection. Statewide prevalence for those 3 yr was 55% with a 95% confidence interval (CI) of 53-57%. We evaluated a suite of demographic, anthropogenic, and land cover characteristics with logistic regression to identify potential predictors for infection based on odds ratio (OR). Age (OR=0.16, 95% CI=0.13-0.19) and sex (OR=1.3, 95% CI=1.03-1.24) were strong predictors of LPDV infection, with juveniles less likely to test positive than adults, and females more likely to test positive than males. The number of birds released during the state's 40-yr translocation program (OR=0.993, 95% CI=0.990-0.997) and the ratio of agriculture to forest cover (OR=1.13, 95% CI=1.03-1.19) were also predictive of LPDV infection. Prevalence distribution was analyzed using dual kernel density smoothing to produce a risk surface map, combined with Kulldorff's spatial scan statistic and the Anselin Local Moran's I to identify statistically significant geographic clusters of high or low prevalence. These methods revealed the prevalence of LPDV was high (>50%) throughout New York State, with regions of variation and several significant clusters. We revealed new information about the risk factors and distribution of LPDV in New York State, which may be beneficial to game bird managers and producers of organic or pasture-raised poultry.
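For readers wanting to reproduce this kind of analysis, the sketch below shows how odds ratios and their confidence intervals fall out of a logistic regression fit. The predictors, coefficients, and data are simulated stand-ins chosen to mirror the reported directions of effect (assuming the statsmodels package); they are not the study's dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 500
# hypothetical predictors: juvenile (0 = adult, 1 = juvenile), female (0 = male,
# 1 = female), number of birds released, agriculture:forest cover ratio
X = np.column_stack([rng.integers(0, 2, n), rng.integers(0, 2, n),
                     rng.uniform(0, 300, n), rng.uniform(0, 3, n)])
logit_p = -0.5 - 1.8 * X[:, 0] + 0.2 * X[:, 1] - 0.007 * X[:, 2] + 0.12 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)   # simulated infection status

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
odds_ratios = np.exp(fit.params)        # one OR per column (intercept first)
or_ci = np.exp(fit.conf_int())          # 95% confidence intervals on the OR scale
print(odds_ratios)
```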
Ma, Junshui; Wang, Shubing; Raubertas, Richard; Svetnik, Vladimir
2010-07-15
With the increasing popularity of using electroencephalography (EEG) to reveal the treatment effect in drug development clinical trials, the vast volume and complex nature of EEG data compose an intriguing, but challenging, topic. In this paper the statistical analysis methods recommended by the EEG community, along with methods frequently used in the published literature, are first reviewed. A straightforward adjustment of the existing methods to handle multichannel EEG data is then introduced. In addition, based on the spatial smoothness property of EEG data, a new category of statistical methods is proposed. The new methods use a linear combination of low-degree spherical harmonic (SPHARM) basis functions to represent a spatially smoothed version of the EEG data on the scalp, which is close to a sphere in shape. In total, seven statistical methods, including both the existing and the newly proposed methods, are applied to two clinical datasets to compare their power to detect a drug effect. Contrary to the EEG community's recommendation, our results suggest that (1) the nonparametric method does not outperform its parametric counterpart; and (2) including baseline data in the analysis does not always improve the statistical power. In addition, our results recommend that (3) simple paired statistical tests should be avoided due to their poor power; and (4) the proposed spatially smoothed methods perform better than their unsmoothed versions. Copyright 2010 Elsevier B.V. All rights reserved.
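A hedged sketch of the proposed spatial-smoothing idea: project per-channel EEG values onto a low-degree spherical harmonic (SPHARM) basis by least squares and reconstruct the smoothed values on the scalp. The electrode angles, degree cutoff, and use of scipy.special.sph_harm are assumptions made for illustration, not the authors' code.

```python
import numpy as np
from scipy.special import sph_harm

def spharm_design(theta, phi, lmax):
    """Real spherical-harmonic basis evaluated at electrode angles
    (theta: azimuth, phi: polar angle), degrees 0..lmax."""
    cols = []
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(abs(m), l, theta, phi)
            cols.append(np.real(Y) if m >= 0 else np.imag(Y))
    return np.column_stack(cols)

def spharm_smooth(values, theta, phi, lmax=3):
    """Spatially smooth a per-channel EEG measure by least-squares projection
    onto a low-degree SPHARM basis and reconstruction."""
    B = spharm_design(theta, phi, lmax)
    coef, *_ = np.linalg.lstsq(B, values, rcond=None)
    return B @ coef

# toy example: 32 pseudo-electrodes scattered over the upper hemisphere
rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 32)   # azimuth
phi = rng.uniform(0, np.pi / 2, 32)     # polar angle (upper half of the sphere)
smoothed = spharm_smooth(rng.standard_normal(32), theta, phi)
```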
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Qian; Liu, G. R.; Khoo, Boo Cheong
2013-02-01
A three-dimensional immersed smoothed finite element method (3D IS-FEM) using four-node tetrahedral element is proposed to solve 3D fluid-structure interaction (FSI) problems. The 3D IS-FEM is able to determine accurately the physical deformation of the nonlinear solids placed within the incompressible viscous fluid governed by Navier-Stokes equations. The method employs the semi-implicit characteristic-based split scheme to solve the fluid flows and smoothed finite element methods to calculate the transient dynamics responses of the nonlinear solids based on explicit time integration. To impose the FSI conditions, a novel, effective and sufficiently general technique via simple linear interpolation is presented based on Lagrangian fictitious fluid meshes coinciding with the moving and deforming solid meshes. In the comparisons to the referenced works including experiments, it is clear that the proposed 3D IS-FEM ensures stability of the scheme with the second order spatial convergence property; and the IS-FEM is fairly independent of a wide range of mesh size ratio.
Drying of Pigment-Cellulose Nanofibril Substrates
Timofeev, Oleg; Torvinen, Katariina; Sievänen, Jenni; Kaljunen, Timo; Kouko, Jarmo; Ketoja, Jukka A.
2014-01-01
A new substrate containing cellulose nanofibrils and inorganic pigment particles has been developed for printed electronics applications. The studied composite structure contains 80% fillers and is mechanically stable and flexible. Before drying, the solids content can be as low as 20% due to the high water binding capacity of the cellulose nanofibrils. We have studied several drying methods and their effects on the substrate properties. The aim is to achieve a tight, smooth surface keeping the drying efficiency simultaneously at a high level. The methods studied include: (1) drying on a hot metal surface; (2) air impingement drying; and (3) hot pressing. Somewhat surprisingly, drying rates measured for the pigment-cellulose nanofibril substrates were quite similar to those for the reference board sheets. Very high dewatering rates were observed for the hot pressing at high moisture contents. The drying method had significant effects on the final substrate properties, especially on short-range surface smoothness. The best smoothness was obtained with a combination of impingement and contact drying. The mechanical properties of the sheets were also affected by the drying method and associated temperature. PMID:28788220
A supervoxel-based segmentation method for prostate MR images
NASA Astrophysics Data System (ADS)
Tian, Zhiqiang; Liu, LiZhi; Fei, Baowei
2015-03-01
Accurate segmentation of the prostate has many applications in prostate cancer diagnosis and therapy. In this paper, we propose a "supervoxel"-based method for prostate segmentation. The prostate segmentation problem is considered as assigning a label to each supervoxel. An energy function with data and smoothness terms is used to model the labeling process. The data term estimates the likelihood that a supervoxel belongs to the prostate according to a shape feature. The geometric relationship between two neighboring supervoxels is used to construct a smoothness term. A three-dimensional (3D) graph cut method is used to minimize the energy function in order to segment the prostate. A 3D level set is then used to get a smooth surface based on the output of the graph cut. The performance of the proposed segmentation algorithm was evaluated with respect to the manual segmentation ground truth. The experimental results on 12 prostate volumes showed that the proposed algorithm yields a mean Dice similarity coefficient of 86.9% +/- 3.2%. The segmentation method can be used not only for the prostate but also for other organs.
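The labeling energy described above, a per-supervoxel data term plus a pairwise smoothness term, can be written compactly. The toy sketch below evaluates such an energy for a given labeling; a real pipeline would minimize it with a max-flow/min-cut solver as in the abstract, and the costs and adjacency here are made-up numbers.

```python
import numpy as np

def labeling_energy(labels, data_cost, neighbors, beta=1.0):
    """Energy of a supervoxel labeling: sum of per-supervoxel data costs plus a
    Potts-style smoothness penalty whenever adjacent supervoxels disagree.
    labels: (N,) array of 0/1 labels; data_cost: (N, 2) cost of each label per
    supervoxel; neighbors: list of (i, j) index pairs of adjacent supervoxels."""
    data_term = data_cost[np.arange(labels.size), labels].sum()
    smooth_term = sum(beta for i, j in neighbors if labels[i] != labels[j])
    return data_term + smooth_term

# toy example: four supervoxels in a chain, two clearly prostate-like (label 1)
data_cost = np.array([[0.2, 0.9], [0.3, 0.8], [0.7, 0.4], [0.9, 0.1]])
neighbors = [(0, 1), (1, 2), (2, 3)]
print(labeling_energy(np.array([0, 0, 1, 1]), data_cost, neighbors))
```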
Matrix Metalloproteinase-1 Activation Contributes to Airway Smooth Muscle Growth and Asthma Severity
Naveed, Shams-un-nisa; Clements, Debbie; Jackson, David J.; Philp, Christopher; Billington, Charlotte K.; Soomro, Irshad; Reynolds, Catherine; Harrison, Timothy W.; Johnston, Sebastian L.; Shaw, Dominick E.
2017-01-01
Rationale: Matrix metalloproteinase-1 (MMP-1) and mast cells are present in the airways of people with asthma. Objectives: To investigate whether MMP-1 could be activated by mast cells and increase asthma severity. Methods: Patients with stable asthma and healthy control subjects underwent spirometry, methacholine challenge, and bronchoscopy, and their airway smooth muscle cells were grown in culture. A second asthma group and control subjects had symptom scores, spirometry, and bronchoalveolar lavage before and after rhinovirus-induced asthma exacerbations. Extracellular matrix was prepared from decellularized airway smooth muscle cultures. MMP-1 protein and activity were assessed. Measurements and Main Results: Airway smooth muscle cells generated pro–MMP-1, which was proteolytically activated by mast cell tryptase. Airway smooth muscle treated with activated mast cell supernatants produced extracellular matrix, which enhanced subsequent airway smooth muscle growth by 1.5-fold (P < 0.05) in a manner dependent on MMP-1 activation. In asthma, airway pro–MMP-1 was 5.4-fold higher than in control subjects (P = 0.002). Mast cell numbers were associated with airway smooth muscle proliferation, and MMP-1 protein was associated with bronchial hyperresponsiveness. During exacerbations, MMP-1 activity increased and was associated with a fall in FEV1 and worsening asthma symptoms. Conclusions: MMP-1 is activated by mast cell tryptase resulting in a proproliferative extracellular matrix. In asthma, mast cells are associated with airway smooth muscle growth, MMP-1 levels are associated with bronchial hyperresponsiveness, and MMP-1 activation is associated with exacerbation severity. Our findings suggest that airway smooth muscle/mast cell interactions contribute to asthma severity by transiently increasing MMP activation, airway smooth muscle growth, and airway responsiveness. PMID:27967204
Retrieving relevant factors with exploratory SEM and principal-covariate regression: A comparison.
Vervloet, Marlies; Van den Noortgate, Wim; Ceulemans, Eva
2018-02-12
Behavioral researchers often linearly regress a criterion on multiple predictors, aiming to gain insight into the relations between the criterion and predictors. Obtaining this insight from the ordinary least squares (OLS) regression solution may be troublesome, because OLS regression weights show only the effect of a predictor on top of the effects of other predictors. Moreover, when the number of predictors grows larger, it becomes likely that the predictors will be highly collinear, which makes the regression weights' estimates unstable (i.e., the "bouncing beta" problem). Among other procedures, dimension-reduction-based methods have been proposed for dealing with these problems. These methods yield insight into the data by reducing the predictors to a smaller number of summarizing variables and regressing the criterion on these summarizing variables. Two promising methods are principal-covariate regression (PCovR) and exploratory structural equation modeling (ESEM). Both simultaneously optimize reduction and prediction, but they are based on different frameworks. The resulting solutions have not yet been compared; it is thus unclear what the strengths and weaknesses are of both methods. In this article, we focus on the extents to which PCovR and ESEM are able to extract the factors that truly underlie the predictor scores and can predict a single criterion. The results of two simulation studies showed that for a typical behavioral dataset, ESEM (using the BIC for model selection) in this regard is successful more often than PCovR. Yet, in 93% of the datasets PCovR performed equally well, and in the case of 48 predictors, 100 observations, and large differences in the strengths of the factors, PCovR even outperformed ESEM.
Determination of wall shear stress from mean velocity and Reynolds shear stress profiles
NASA Astrophysics Data System (ADS)
Volino, Ralph J.; Schultz, Michael P.
2018-03-01
An analytical method is presented for determining the Reynolds shear stress profile in steady, two-dimensional wall-bounded flows using the mean streamwise velocity. The method is then utilized with experimental data to determine the local wall shear stress. The procedure is applicable to flows on smooth and rough surfaces with arbitrary pressure gradients. It is based on the streamwise component of the boundary layer momentum equation, which is transformed into inner coordinates. The method requires velocity profiles from at least two streamwise locations, but the formulation of the momentum equation reduces the dependence on streamwise gradients. The method is verified through application to laminar flow solutions and turbulent DNS results from both zero and nonzero pressure gradient boundary layers. With strong favorable pressure gradients, the method is shown to be accurate for finding the wall shear stress in cases where the Clauser fit technique loses accuracy. The method is then applied to experimental data from the literature from zero pressure gradient studies on smooth and rough walls, and favorable and adverse pressure gradient cases on smooth walls. Data from very near the wall are not required for determination of the wall shear stress. Wall friction velocities obtained using the present method agree with those determined in the original studies, typically to within 2%.
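For context, the sketch below implements the standard Clauser-type log-law fit for the friction velocity, i.e. the baseline the proposed momentum-equation method improves upon in strong pressure gradients. The log-law constants, the viscosity, and the synthetic velocity profile are assumed for illustration; the wall shear stress follows from tau_w = rho * u_tau**2.

```python
import numpy as np
from scipy.optimize import curve_fit

KAPPA, B = 0.41, 5.0          # standard log-law constants (assumed)
NU = 1.5e-5                   # kinematic viscosity, m^2/s (assumed, air)

def log_law(y, u_tau):
    """Mean velocity in the log layer: U = u_tau * (ln(y * u_tau / nu) / kappa + B)."""
    return u_tau * (np.log(y * u_tau / NU) / KAPPA + B)

def clauser_fit(y, U):
    """Clauser-type fit of the friction velocity u_tau from mean-velocity data
    taken in the log region of a smooth-wall boundary layer."""
    popt, _ = curve_fit(log_law, y, U, p0=[0.5], bounds=(1e-6, 10.0))
    return popt[0]

# synthetic smooth-wall profile in the log region with slight measurement noise
y = np.linspace(0.002, 0.02, 15)                     # wall-normal positions (m)
U = log_law(y, 0.4) + 0.01 * np.random.default_rng(4).standard_normal(15)
print(f"fitted u_tau ~ {clauser_fit(y, U):.3f} m/s")
```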
An improved multi-paths optimization method for video stabilization
NASA Astrophysics Data System (ADS)
Qin, Tao; Zhong, Sheng
2018-03-01
For video stabilization, the difference between the original camera motion path and the optimized one is proportional to the cropping ratio and warping ratio. A good optimized path should preserve the moving tendency of the original one, while the cropping ratio and warping ratio of each frame should be kept in a proper range. In this paper we use an improved warping-based motion representation model and propose a Gaussian-based multi-path optimization method to obtain a smooth path and a stabilized video. The proposed video stabilization method consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform between adjacent frames according to the warping-based motion representation model. It works well on some challenging videos where most previous 2D or 3D methods fail for lack of long feature trajectories. The multi-path optimization method deals well with parallax: we calculate the space-time correlation of adjacent grid cells, and a Gaussian kernel is then used to weight the motion of the adjacent grid cells. The multiple paths are then smoothed while minimizing the crop ratio and the distortion. We test our method on a large variety of consumer videos, which have casual jitter and parallax, and achieve good results.
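As a simplified illustration of the path-smoothing step (a single global path rather than the paper's grid of multiple paths), the sketch below smooths a camera-motion parameter sequence with a Gaussian kernel and derives the per-frame stabilizing correction; the kernel width and the toy trajectory are assumptions.

```python
import numpy as np

def gaussian_smooth_path(path, sigma=10.0, radius=30):
    """Smooth a 1D camera-motion parameter sequence (e.g., cumulative x-translation)
    with a truncated Gaussian kernel; the stabilizing warp for each frame is the
    difference between the smoothed and original paths."""
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()
    padded = np.pad(path, radius, mode="edge")     # avoid shrinkage at the ends
    return np.convolve(padded, kernel, mode="valid")

# toy example: jittery cumulative translation over 300 frames
rng = np.random.default_rng(5)
path = np.cumsum(rng.standard_normal(300))
smooth = gaussian_smooth_path(path)
correction = smooth - path                         # per-frame stabilizing offset
```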
Okorokova, Elizaveta; Lebedev, Mikhail; Linderman, Michael; Ossadtchi, Alex
2015-01-01
In recent years, several assistive devices have been proposed to reconstruct arm and hand movements from electromyographic (EMG) activity. Although simple to implement and potentially useful to augment many functions, such myoelectric devices still need improvement before they become practical. Here we considered the problem of reconstruction of handwriting from multichannel EMG activity. Previously, linear regression methods (e.g., the Wiener filter) have been utilized for this purpose with some success. To improve reconstruction accuracy, we implemented the Kalman filter, which allows to fuse two information sources: the physical characteristics of handwriting and the activity of the leading hand muscles, registered by the EMG. Applying the Kalman filter, we were able to convert eight channels of EMG activity recorded from the forearm and the hand muscles into smooth reconstructions of handwritten traces. The filter operates in a causal manner and acts as a true predictor utilizing the EMGs from the past only, which makes the approach suitable for real-time operations. Our algorithm is appropriate for clinical neuroprosthetic applications and computer peripherals. Moreover, it is applicable to a broader class of tasks where predictive myoelectric control is needed. PMID:26578856
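The sketch below shows a causal Kalman filter of the kind the abstract contrasts with the Wiener filter: a constant-velocity pen model updated from noisy observations. The 1D state, the assumption that EMG features have already been mapped linearly to a position observation, and the noise levels are illustrative simplifications, not the authors' eight-channel implementation.

```python
import numpy as np

def kalman_track(z, dt=0.01, q=1e-3, r=1e-1):
    """Causal Kalman filter with a constant-velocity model (state: position and
    velocity for one pen axis). z holds noisy position observations, e.g. a linear
    read-out of EMG features."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    H = np.array([[1.0, 0.0]])                     # observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.zeros(2)
    P = np.eye(2)
    out = np.empty(len(z))
    for k, zk in enumerate(z):
        x = F @ x                                   # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
        x = x + (K @ (zk - H @ x)).ravel()          # update with the new observation
        P = (np.eye(2) - K @ H) @ P
        out[k] = x[0]
    return out

# toy usage: noisy observations of a smooth pen trajectory
t = np.linspace(0, 2 * np.pi, 500)
z = np.sin(t) + 0.05 * np.random.default_rng(10).standard_normal(t.size)
smooth_pos = kalman_track(z, dt=t[1] - t[0])
```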
NASA Astrophysics Data System (ADS)
Polprasert, Jirawadee; Ongsakul, Weerakorn; Dieu, Vo Ngoc
2011-06-01
This paper proposes a self-organizing hierarchical particle swarm optimization (SPSO) with time-varying acceleration coefficients (TVAC) for solving the economic dispatch (ED) problem with non-smooth functions, including multiple fuel options (MFO) and valve-point loading effects (VPLE). The proposed SPSO with TVAC is a new optimization approach with good performance for solving ED problems. It handles premature convergence by re-initializing the velocity whenever particles stagnate in the search space. TVAC is included to properly control both local and global exploration of the swarm during the optimization process. The proposed method is tested on different ED problems with non-smooth cost functions and the obtained results are compared to those from many other methods in the literature. The results reveal that the proposed SPSO with TVAC is effective in finding higher quality solutions for non-smooth ED problems than many other methods.
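A minimal sketch of PSO with time-varying acceleration coefficients applied to a toy two-unit dispatch problem with a valve-point (non-smooth) cost term. The cost coefficients, the penalty handling of the power-balance constraint, and the omission of the SPSO velocity re-initialization step are assumptions made to keep the example short; it illustrates the TVAC schedule rather than the full proposed method.

```python
import numpy as np

rng = np.random.default_rng(6)
PMIN, PMAX = np.array([10.0, 10.0]), np.array([125.0, 125.0])
DEMAND = 150.0

def cost(P):
    """Toy non-smooth fuel cost with a valve-point ripple term plus a penalty
    enforcing the power balance (illustrative coefficients)."""
    a, b, c = np.array([0.002, 0.003]), np.array([10.0, 8.0]), np.array([100.0, 120.0])
    e, f = np.array([50.0, 40.0]), np.array([0.06, 0.04])
    fuel = (a * P**2 + b * P + c + np.abs(e * np.sin(f * (PMIN - P)))).sum()
    return fuel + 1e3 * abs(P.sum() - DEMAND)

def pso_tvac(n_particles=30, n_iter=200, w=0.7):
    """Basic PSO with TVAC: the cognitive weight c1 decays while the social
    weight c2 grows over the run, shifting from exploration to exploitation."""
    X = rng.uniform(PMIN, PMAX, (n_particles, 2))
    V = np.zeros_like(X)
    pbest, pbest_val = X.copy(), np.array([cost(x) for x in X])
    g = pbest[pbest_val.argmin()].copy()
    for it in range(n_iter):
        c1 = 2.5 - 2.0 * it / n_iter            # 2.5 -> 0.5
        c2 = 0.5 + 2.0 * it / n_iter            # 0.5 -> 2.5
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, PMIN, PMAX)
        vals = np.array([cost(x) for x in X])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = X[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

print(pso_tvac())
```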
Neutrophilic infiltration within the airway smooth muscle in patients with COPD
Baraldo, S; Turato, G; Badin, C; Bazzan, E; Beghe, B; Zuin, R; Calabrese, F; Casoni, G; Maestrelli, P; Papi, A; Fabbri, L; Saetta, M
2004-01-01
Background: COPD is an inflammatory disorder characterised by chronic airflow limitation, but the extent to which airway inflammation is related to functional abnormalities is still uncertain. The interaction between inflammatory cells and airway smooth muscle may have a crucial role. Methods: To investigate the microlocalisation of inflammatory cells within the airway smooth muscle in COPD, surgical specimens obtained from 26 subjects undergoing thoracotomy (eight smokers with COPD, 10 smokers with normal lung function, and eight non-smoking controls) were examined. Immunohistochemical analysis was used to quantify the number of neutrophils, macrophages, mast cells, CD4+ and CD8+ cells localised within the smooth muscle of peripheral airways. Results: Smokers with COPD had an increased number of neutrophils and CD8+ cells in the airway smooth muscle compared with non-smokers. Smokers with normal lung function also had a neutrophilic infiltration in the airway smooth muscle, but to a lesser extent. When all the subjects were analysed as one group, neutrophilic infiltration was inversely related to forced expiratory volume in 1 second (% predicted). Conclusions: Microlocalisation of neutrophils and CD8+ cells in the airway smooth muscle in smokers with COPD suggests a possible role for these cells in the pathogenesis of smoking induced airflow limitation. PMID:15047950
Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease
Jie, Biao; Liu, Mingxia; Liu, Jun
2016-01-01
Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. Actually, multiple time-points of data are often available in brain imaging applications, which can be used in some longitudinal analysis methods to better uncover the disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method aiming for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model by using the imaging data from multiple time-points, where a group regularization term is first employed to group the weights for the same brain region across different time-points together. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function, i.e., one fused smoothness term which requires that the differences between two successive weight vectors from adjacent time-points should be small, and another output smoothness term which requires the differences between outputs of two successive models from adjacent time-points should also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method can achieve improved regression performance and also help in discovering disease-related biomarkers. PMID:27093313
Matsumoto, Hisako; Moir, Lyn M; Oliver, Brian G G; Burgess, Janette K; Roth, Michael; Black, Judith L; McParland, Brent E
2007-10-01
Exaggerated bronchial constriction is the most significant and life threatening response of patients with asthma to inhaled stimuli. However, few studies have investigated the contractility of airway smooth muscle (ASM) from these patients. The purpose of this study was to establish a method to measure contraction of ASM cells by embedding them into a collagen gel, and to compare the contraction between subjects with and without asthma. Gel contraction to histamine was examined in floating gels containing cultured ASM cells from subjects with and without asthma following overnight incubation while unattached (method 1) or attached (method 2) to casting plates. Smooth muscle myosin light chain kinase protein levels were also examined. Collagen gels containing ASM cells reduced in size when stimulated with histamine in a concentration-dependent manner and reached a maximum at a mean (SE) of 15.7 (1.2) min. This gel contraction was decreased by inhibitors for phospholipase C (U73122), myosin light chain kinase (ML-7) and Rho kinase (Y27632). When comparing the two patient groups, the maximal decreased area of gels containing ASM cells from patients with asthma was 19 (2)% (n = 8) using method 1 and 22 (3)% (n = 6) using method 2, both of which were greater than that of cells from patients without asthma: 13 (2)% (n = 9, p = 0.05) and 10 (4)% (n = 5, p = 0.024), respectively. Smooth muscle myosin light chain kinase levels were not different between the two groups. The increased contraction of asthmatic ASM cells may be responsible for exaggerated bronchial constriction in asthma.
de Carvalho, Wellington Roberto Gomes; de Moraes, Anderson Marques; Roman, Everton Paulo; Santos, Keila Donassolo; Medaets, Pedro Augusto Rodrigues; Veiga-Junior, Nélio Neves; Coelho, Adrielle Caroline Lace de Moraes; Krahenbühl, Tathyane; Sewaybricker, Leticia Esposito; Barros-Filho, Antonio de Azevedo; Morcillo, Andre Moreno; Guerra-Júnior, Gil
2015-01-01
Aims To establish normative data for phalangeal quantitative ultrasound (QUS) measures in Brazilian students. Methods The sample was composed of 6870 students (3688 females and 3182 males), aged 6 to 17 years. The bone status parameter, Amplitude Dependent Speed of Sound (AD-SoS), was assessed by QUS of the phalanges using DBM Sonic BP (IGEA, Carpi, Italy) equipment. Skin color was obtained by self-evaluation. The LMS method was used to derive smoothed percentile reference charts for AD-SoS according to sex, age, height and weight, and to generate the L, M, and S parameters. Results Girls showed higher AD-SoS values than boys in the age groups 7–16 (p<0.001). There were no differences in AD-SoS Z-scores according to skin color. In both sexes, the obese group showed lower AD-SoS Z-scores than subjects classified as thin or normal weight. Age (r2 = 0.48) and height (r2 = 0.35) were independent predictors of AD-SoS in females and males, respectively. Conclusion AD-SoS values in Brazilian children and adolescents were influenced by sex, age and weight status, but not by skin color. Our normative data could be used for monitoring AD-SoS in children or adolescents aged 6–17 years. PMID:26043082
The Investigation of Serum Vaspin Level in Atherosclerotic Coronary Artery Disease
Kobat, Mehmet Ali; Celik, Ahmet; Balin, Mehmet; Altas, Yakup; Baydas, Adil; Bulut, Musa; Aydin, Suleyman; Dagli, Necati; Yavuzkir, Mustafa Ferzeyn; Ilhan, Selcuk
2012-01-01
Background It has been speculated that adipocytokines originating from fatty tissue may play a role in the pathogenesis of atherosclerosis. These adipocytokines may alter vascular homeostasis by affecting endothelial cells, arterial smooth muscle cells and macrophages. Vaspin is a newly described member of the adipocytokine family. We aimed to investigate whether the plasma vaspin level has any predictive value in coronary artery disease (CAD). Methods Forty patients with at least single-vessel stenosis of ≥ 70% demonstrated angiographically and 40 subjects with normal coronary anatomy were included in the study. Vaspin levels were measured by the ELISA method from serum obtained by centrifugation of blood and stored at -20 °C. The height, weight and body mass index of the patients were measured. Biochemical parameters including total cholesterol, low density lipoprotein, high density lipoprotein, creatinine, sodium, potassium, hemoglobin, uric acid and fasting glucose were also measured. Results Biochemical marker levels were similar in both groups. Serum vaspin levels were significantly lower in CAD patients than in the control group (256 ± 219 pg/ml vs. 472 ± 564 pg/ml, respectively, P < 0.02). In addition, serum vaspin levels were lower in control subjects with high systolic blood pressure. Conclusion Serum vaspin levels were significantly lower in patients with CAD than in age-matched subjects with normal coronary anatomy. Vaspin may be used as a predictor of CAD. Keywords Coronary artery disease; Vaspin; Adipokine PMID:22505983
Smoothed particle hydrodynamics method from a large eddy simulation perspective
NASA Astrophysics Data System (ADS)
Di Mascio, A.; Antuono, M.; Colagrossi, A.; Marrone, S.
2017-03-01
The Smoothed Particle Hydrodynamics (SPH) method, often used for the modelling of the Navier-Stokes equations by a meshless Lagrangian approach, is revisited from the point of view of Large Eddy Simulation (LES). To this aim, the LES filtering procedure is recast in a Lagrangian framework by defining a filter that moves with the positions of the fluid particles at the filtered velocity. It is shown that the SPH smoothing procedure can be reinterpreted as a sort of LES Lagrangian filtering, and that, besides the terms coming from the LES convolution, additional contributions (never accounted for in the SPH literature) appear in the equations when formulated in a filtered fashion. Appropriate closure formulas are derived for the additional terms and a preliminary numerical test is provided to show the main features of the proposed LES-SPH model.
Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K
2015-05-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
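The weighted eigenfunction expansion described above can be sketched as follows, assuming a precomputed discrete Laplacian; a simple chain-graph Laplacian stands in here for the Laplace-Beltrami operator of a surface mesh, so this only illustrates the expansion, not the paper's surface implementation.

```python
import numpy as np

def heat_kernel_smooth(L, y, t, k=50):
    """Smooth scalar data y with the heat kernel built from the first k eigenpairs
    of a symmetric (mesh or graph) Laplacian L:
    smoothed = sum_i exp(-lam_i * t) * <y, psi_i> * psi_i."""
    lam, psi = np.linalg.eigh(L)            # eigenvalues/eigenvectors of L
    lam, psi = lam[:k], psi[:, :k]
    coeffs = psi.T @ y                      # projection onto the eigenfunctions
    return psi @ (np.exp(-lam * t) * coeffs)

# Usage on a toy chain-graph Laplacian standing in for a surface mesh.
n = 200
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
y = np.sin(np.linspace(0, 4 * np.pi, n)) + 0.3 * np.random.default_rng(0).normal(size=n)
y_smooth = heat_kernel_smooth(L, y, t=5.0)
```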
Global image analysis to determine suitability for text-based image personalization
NASA Astrophysics Data System (ADS)
Ding, Hengzhou; Bala, Raja; Fan, Zhigang; Bouman, Charles A.; Allebach, Jan P.
2012-03-01
Image personalization has recently become a topic of growing interest. Images with variable elements such as text usually appear much more appealing to the recipients. In this paper, we describe a method to pre-analyze the image and automatically suggest to the user the most suitable regions within an image for text-based personalization. The method is based on input gathered from experiments conducted with professional designers. It has been observed that regions that are spatially smooth and regions with existing text (e.g. signage, banners, etc.) are the best candidates for personalization. This gives rise to two sets of corresponding algorithms: one for identifying smooth areas, and one for locating text regions. Furthermore, based on the smooth and text regions found in the image, we derive an overall metric to rate the image in terms of its suitability for personalization (SFP).
Uniform hydrogen fuel layers for inertial fusion targets by microgravity
NASA Technical Reports Server (NTRS)
Parks, P. B.; Fagaly, Robert L.
1994-01-01
A critical concern in the fabrication of targets for inertial confinement fusion (ICF) is ensuring that the hydrogenic (D(sub 2) or DT) fuel layer maintains spherical symmetry. Solid layered targets have structural integrity, but lack the needed surface smoothness. Liquid targets are inherently smooth, but suffer from gravitationally induced sagging. One method to reduce the effective gravitational field environment is freefall insertion into the target chamber. Another method to counterbalance field gravitational force is to use an applied magnetic field combined with a gradient field to induce a magnetic dipole force on the liquid fuel layer. Based on time dependent calculations of the dynamics of the liquid fuel layer in microgravity environments, we show that it may be possible to produce a liquid layered ICF target that satisfies both smoothness and symmetry requirements.
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
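The remark that the median function simplifies the coding of the monotonicity constraint can be illustrated with a generic minmod-type limiter for the reconstruction step; this is a simplified sketch, not Huynh's exact second-order-preserving constraint or slope-steepening technique.

```python
import numpy as np

def median3(a, b, c):
    """Elementwise median of three arrays; note median3(0, a, b) equals minmod(a, b)."""
    return a + b + c - np.maximum.reduce([a, b, c]) - np.minimum.reduce([a, b, c])

def limited_slopes(u):
    """Monotonicity-preserving cell slopes for a piecewise-linear reconstruction,
    written with the median function (a basic minmod constraint for illustration)."""
    du_minus = u[1:-1] - u[:-2]             # backward differences
    du_plus = u[2:] - u[1:-1]               # forward differences
    s = median3(np.zeros_like(du_minus), du_minus, du_plus)
    return np.concatenate(([0.0], s, [0.0]))  # zero slope in the boundary cells

# Interface values: uL_{i+1/2} = u_i + s_i/2, uR_{i+1/2} = u_{i+1} - s_{i+1}/2.
u = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # a step profile stays monotone
s = limited_slopes(u)
uL = u[:-1] + 0.5 * s[:-1]
uR = u[1:] - 0.5 * s[1:]
```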
Liu, Dong-Hai; Huang, Xu; Guo, Xin; Meng, Xiang-Min; Wu, Yi-Song; Lu, Hong-Li; Zhang, Chun-Mei; Kim, Young-chul; Xu, Wen-Xie
2014-01-01
Partial obstruction of the small intestine causes obvious hypertrophy of smooth muscle cells and motility disorder in the bowel proximal to the obstruction. To identify electric remodeling of hypertrophic smooth muscles in partially obstructed murine small intestine, patch-clamp and intracellular microelectrode recording methods were used to identify possible electric remodeling, and Western blot, immunofluorescence and immunoprecipitation were utilized to examine changes in channel protein expression and phosphorylation levels. After 14 days of obstruction, partial obstruction caused obvious smooth muscle hypertrophy in the proximally located intestine. The slow waves of intestinal smooth muscles in the dilated region were significantly suppressed: their amplitude and frequency were reduced, whilst the resting membrane potentials were depolarized compared with normal and sham animals. The current density of the voltage dependent potassium channel (KV) was significantly decreased in the hypertrophic smooth muscle cells and the voltage sensitivity of KV activation was altered. The sensitivity of KV currents (IKV) to TEA, a nonselective potassium channel blocker, increased significantly, but the sensitivity of IKV to 4-AP, a KV blocker, remained the same. The protein levels of KV4.3 and KV2.2 were up-regulated in the hypertrophic smooth muscle cell membrane. The serine and threonine phosphorylation levels of KV4.3 and KV2.2 were significantly increased in the hypertrophic smooth muscle cells. Thus, this study represents the first identification of KV channel remodeling in murine small intestinal smooth muscle hypertrophy induced by partial obstruction. The enhanced phosphorylation of KV4.3 and KV2.2 may be involved in this process.
Spatial analysis on human brucellosis incidence in mainland China: 2004–2010
Zhang, Junhui; Yin, Fei; Zhang, Tao; Yang, Chao; Zhang, Xingyu; Feng, Zijian; Li, Xiaosong
2014-01-01
Objectives China has experienced a sharply increasing rate of human brucellosis in recent years. Effective spatial monitoring of human brucellosis incidence is very important for successful implementation of control and prevention programmes. The purpose of this paper is to apply exploratory spatial data analysis (ESDA) methods and the empirical Bayes (EB) smoothing technique to monitor county-level incidence rates for human brucellosis in mainland China from 2004 to 2010 by examining spatial patterns. Methods ESDA methods were used to characterise spatial patterns of EB smoothed incidence rates for human brucellosis based on county-level data obtained from the China Information System for Disease Control and Prevention (CISDCP) in mainland China from 2004 to 2010. Results EB smoothed incidence rates for human brucellosis were spatially dependent during 2004–2010. The local Moran test identified significantly high-risk clusters of human brucellosis (all p values <0.01), which persisted during the 7-year study period. High-risk counties were centred in the Inner Mongolia Autonomous Region and other Northern provinces (ie, Hebei, Shanxi, Jilin and Heilongjiang provinces) around the border with the Inner Mongolia Autonomous Region where animal husbandry was highly developed. The number of high-risk counties increased from 25 in 2004 to 54 in 2010. Conclusions ESDA methods and the EB smoothing technique can assist public health officials in identifying high-risk areas. Allocating more resources to high-risk areas is an effective way to reduce human brucellosis incidence. PMID:24713215
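A simplified stand-in for the EB smoothing step described above: a global (Marshall-type) empirical Bayes smoother that shrinks unstable small-population county rates toward the overall mean. The exact estimator used with the CISDCP data may differ.

```python
import numpy as np

def eb_smooth(cases, pop):
    """Global empirical Bayes (Marshall-type) smoothing of area incidence rates."""
    cases, pop = np.asarray(cases, float), np.asarray(pop, float)
    raw = cases / pop
    m = cases.sum() / pop.sum()                        # overall mean rate (prior mean)
    nbar = pop.mean()
    # Method-of-moments estimate of the between-area (prior) variance
    s2 = np.sum(pop * (raw - m) ** 2) / pop.sum() - m / nbar
    s2 = max(s2, 0.0)
    w = s2 / (s2 + m / pop)                            # shrinkage weights
    return w * raw + (1 - w) * m                       # smoothed rates

# Toy counties: small populations are pulled strongly toward the overall rate.
rates = eb_smooth(cases=[2, 15, 150, 0], pop=[1_000, 20_000, 120_000, 500])
```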
Ma, Liyan; Qiu, Bo; Cui, Mingyue; Ding, Jianwei
2017-01-01
Depth image-based rendering (DIBR), which is used to render virtual views with a color image and the corresponding depth map, is one of the key techniques in the 2D to 3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR in the 2D to 3D conversion process inevitably leads to holes in the resulting 3D image as a result of newly exposed areas. In this paper, we propose a structure-aided depth map preprocessing framework in the transformed domain, inspired by the recently proposed domain transform for its low complexity and high efficiency. Firstly, our framework integrates hybrid constraints, including scene structure, edge consistency and visual saliency information, in the transformed domain to improve the performance of depth map preprocessing in an implicit way. Then, adaptive smoothing localization is incorporated in the proposed framework to further reduce over-smoothing and enhance optimization in the non-hole regions. Unlike other similar methods, the proposed method can simultaneously achieve hole filling, edge correction and local smoothing for typical depth maps in a unified framework. Thanks to these advantages, it can yield visually satisfactory results with less computational complexity for high quality 2D to 3D conversion. Numerical experimental results demonstrate the excellent performance of the proposed method. PMID:28407027
Fast focus estimation using frequency analysis in digital holography.
Oh, Seungtaik; Hwang, Chi-Young; Jeong, Il Kwon; Lee, Sung-Keun; Park, Jae-Hyeung
2014-11-17
A novel fast frequency-based method to estimate the focus distance of a digital hologram for a single object is proposed. The focus distance is computed by analyzing the distribution of intersections of smoothed rays. The smoothed rays are determined by the directions of energy flow, which are computed from the local spatial frequency spectrum based on the windowed Fourier transform. Our method therefore uses only the intrinsic frequency information of the optical field on the hologram and does not require sequential numerical reconstructions or the focus detection techniques of conventional photography, both of which are essential parts of previous methods. To show the effectiveness of our method, numerical results and analysis are presented.
NASA Technical Reports Server (NTRS)
Shiau, Jyh-Jen; Wahba, Grace; Johnson, Donald R.
1986-01-01
A new method, based on partial spline models, is developed for including specified discontinuities in otherwise smooth two- and three-dimensional objective analyses. The method is appropriate for including tropopause height information in two- and three-dimensional temperature analyses, using the O'Sullivan-Wahba physical variational method for analysis of satellite radiance data, and may in principle be used in a combined variational analysis of observed, forecast, and climate information. A numerical method for its implementation is described and a prototype two-dimensional analysis based on simulated radiosonde and tropopause height data is shown. The method may also be appropriate for other geophysical problems, such as modeling the ocean thermocline, fronts, discontinuities, etc.
Random function theory revisited - Exact solutions versus the first order smoothing conjecture
NASA Technical Reports Server (NTRS)
Lerche, I.; Parker, E. N.
1975-01-01
We remark again that the mathematical conjecture known as first order smoothing or the quasi-linear approximation does not give the correct dependence on correlation length (time) in many cases, although it gives the correct limit as the correlation length (time) goes to zero. In this sense, then, the method is unreliable.
Optimal Bandwidth Selection in Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Häggström, Jenny; Wiberg, Marie
2014-01-01
The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…
The performance of the spatiotemporal Kalman filter and LORETA in seizure onset localization.
Hamid, Laith; Sarabi, Masoud; Japaridze, Natia; Wiegand, Gert; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas; Siniatchkin, Michael
2015-08-01
The assumption of spatial-smoothness is often used to solve the bioelectric inverse problem during electroencephalographic (EEG) source imaging, e.g., in low resolution electromagnetic tomography (LORETA). Since the EEG data show a temporal structure, the combination of the temporal-smoothness and the spatial-smoothness constraints may improve the solution of the EEG inverse problem. This study investigates the performance of the spatiotemporal Kalman filter (STKF) method, which is based on spatial and temporal smoothness, in the localization of a focal seizure's onset and compares its results to those of LORETA. The main finding of the study was that the STKF with an autoregressive model of order two significantly outperformed LORETA in the accuracy and consistency of the localization, provided that the source space consists of a whole-brain volumetric grid. In the future, these promising results will be confirmed using data from more patients and performing statistical analyses on the results. Furthermore, the effects of the temporal smoothness constraint will be studied using different types of focal seizures.
NASA Astrophysics Data System (ADS)
Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen
2013-08-01
We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. This method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales are significantly different. We can therefore easily distinguish them in the wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has uncorrected response errors, our method also works significantly better than the spectral fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
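The scale-separation idea can be sketched with a discrete wavelet decomposition (the paper itself uses a continuous wavelet transform): the coarse approximation approximates the smooth foreground, and the residual keeps the rapidly varying 21 cm-like structure. The wavelet choice, decomposition level and toy spectrum below are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def separate_foreground(spectrum, wavelet="sym8", level=4):
    """Split a frequency spectrum into a smooth (foreground-like) component and a
    rapidly varying residual by zeroing all detail coefficients of a discrete
    wavelet decomposition and reconstructing only the coarse approximation."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    smooth_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    foreground = pywt.waverec(smooth_only, wavelet)[: len(spectrum)]
    return foreground, spectrum - foreground

# Toy example: smooth power-law foreground plus a small saw-tooth-like signal.
freq = np.linspace(100.0, 200.0, 512)                 # MHz, illustrative band
foreground_true = 1e3 * (freq / 150.0) ** -2.6
signal_true = 0.05 * np.sign(np.sin(3.0 * freq))
fg_est, sig_est = separate_foreground(foreground_true + signal_true)
```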
Near atomically smooth alkali antimonide photocathode thin films
Feng, Jun; Karkare, Siddharth; Nasiatka, James; ...
2017-01-24
Nano-roughness is one of the major factors degrading the emittance of electron beams that can be generated by high efficiency photocathodes, such as the thermally reacted alkali antimonide thin films. In this paper, we demonstrate a co-deposition based method for producing alkali antimonide cathodes that produce near atomic smoothness with high reproducibility. Here, we calculate the effect of the surface roughness on the emittance and show that such smooth cathode surfaces are essential for operation of alkali antimonide cathodes in high field, low emittance radio frequency electron guns and to obtain ultracold electrons for ultrafast electron diffraction applications.
Methods and electrolytes for electrodeposition of smooth films
Zhang, Jiguang; Xu, Wu; Graff, Gordon L; Chen, Xilin; Ding, Fei; Shao, Yuyan
2015-03-17
Electrodeposition involving an electrolyte having a surface-smoothing additive can result in self-healing, instead of self-amplification, of initial protuberant tips that give rise to roughness and/or dendrite formation on the substrate and/or film surface. For electrodeposition of a first conductive material (C1) on a substrate from one or more reactants in an electrolyte solution, the electrolyte solution is characterized by a surface-smoothing additive containing cations of a second conductive material (C2), wherein cations of C2 have an effective electrochemical reduction potential in the solution lower than that of the reactants.
The computation of Laplacian smoothing splines with examples
NASA Technical Reports Server (NTRS)
Wendelberger, J. G.
1982-01-01
Laplacian smoothing splines (LSS) are presented as generalizations of graduation, cubic and thin plate splines. The method of generalized cross validation (GCV) to choose the smoothing parameter is described. The GCV is used in the algorithm for the computation of LSS's. An outline of a computer program which implements this algorithm is presented along with a description of the use of the program. Examples in one, two and three dimensions demonstrate how to obtain estimates of function values with confidence intervals and estimates of first and second derivatives. Probability plots are used as a diagnostic tool to check for model inadequacy.
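A one-dimensional, discrete (Whittaker-type) analog of the smoothing-spline-with-GCV idea, kept deliberately small; it is not Wendelberger's LSS program, but it shows how the GCV criterion selects the smoothing parameter from the data.

```python
import numpy as np

def whittaker_gcv(y, lambdas=np.logspace(-2, 6, 40)):
    """Pick the smoothing parameter of a second-difference-penalized smoother by
    generalized cross validation: GCV(lam) = n * RSS / (n - trace(H))^2."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)            # second-difference penalty matrix
    best = None
    for lam in lambdas:
        H = np.linalg.solve(np.eye(n) + lam * (D.T @ D), np.eye(n))  # hat matrix
        resid = y - H @ y
        gcv = n * (resid @ resid) / (n - np.trace(H)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, H @ y)
    return best[1], best[2]                        # selected lambda, smoothed values

x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + 0.2 * np.random.default_rng(0).normal(size=100)
lam_opt, y_hat = whittaker_gcv(y)
```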
High reflectivity mirrors and method for making same
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heikman, Sten; Jacob-Mitos, Matthew; Li, Ting
2016-06-07
A composite high reflectivity mirror (CHRM) with at least one relatively smooth interior surface interface. The CHRM includes a composite portion, for example dielectric and metal layers, on a base element. At least one of the internal surfaces is polished to achieve a smooth interface. The polish can be performed on the surface of the base element, on various layers of the composite portion, or both. The resulting smooth interface(s) reflect more of the incident light in an intended direction. The CHRMs may be integrated into light emitting diode (LED) devices to increase optical output efficiency.
Sufficient Dimension Reduction for Longitudinally Measured Predictors
Pfeiffer, Ruth M.; Forzani, Liliana; Bura, Efstathia
2013-01-01
We propose a method to combine several predictors (markers) that are measured repeatedly over time into a composite marker score without assuming a model and only requiring a mild condition on the predictor distribution. Assuming that the first and second moments of the predictors can be decomposed into a time and a marker component via a Kronecker product structure, that accommodates the longitudinal nature of the predictors, we develop first moment sufficient dimension reduction techniques to replace the original markers with linear transformations that contain sufficient information for the regression of the predictors on the outcome. These linear combinations can then be combined into a score that has better predictive performance than the score built under a general model that ignores the longitudinal structure of the data. Our methods can be applied to either continuous or categorical outcome measures. In simulations we focus on binary outcomes and show that our method outperforms existing alternatives using the AUC, the area under the receiver-operator characteristics (ROC) curve, as a summary measure of the discriminatory ability of a single continuous diagnostic marker for binary disease outcomes. PMID:22161635
Poster - 52: Smoothing constraints in Modulated Photon Radiotherapy (XMRT) fluence map optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGeachy, Philip; Villarreal-Barajas, Jose Eduardo
Purpose: Modulated Photon Radiotherapy (XMRT), which simultaneously optimizes photon beamlet energy (6 and 18 MV) and fluence, has recently shown dosimetric improvement in comparison to conventional IMRT. That said, the degree of smoothness of the resulting fluence maps (FMs) has yet to be investigated and could impact the deliverability of XMRT. This study investigates FM smoothness and imposes a smoothing constraint in the fluence map optimization. Methods: Smoothing constraints were modeled in the XMRT algorithm with the sum of positive gradient (SPG) technique. XMRT solutions, with and without SPG constraints, were generated for a clinical prostate scan using standard dosimetric prescriptions, constraints, and a seven coplanar beam arrangement. The smoothness, with and without SPG constraints, was assessed by looking at the absolute and relative maximum SPG scores for each fluence map. Dose volume histograms were utilized when evaluating the impact on the dose distribution. Results: Imposing SPG constraints reduced the absolute and relative maximum SPG values by factors of up to 5 and 2, respectively, when compared with their non-SPG constrained counterparts. This leads to a more seamless conversion of FMs to their respective MLC sequences. This improved smoothness resulted in an increase in organ at risk (OAR) dose; however, the increase is not clinically significant. Conclusions: For a clinical prostate case, there was a noticeable improvement in the smoothness of the XMRT FMs when SPG constraints were applied, with a minor increase in dose to OARs. This increase in OAR dose is not clinically meaningful.
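One common formulation of the sum-of-positive-gradient (SPG) score referred to above sums, for each MLC leaf row of a fluence map, the positive jumps between neighbouring beamlets (including the rise from zero at the field edge); the sketch below uses that formulation, which may differ in detail from the constraint implemented in the XMRT optimizer.

```python
import numpy as np

def spg_score(fluence):
    """Per-row sum of positive beamlet-to-beamlet gradients of a fluence map,
    plus the maximum over rows as a deliverability surrogate (illustrative)."""
    f = np.asarray(fluence, float)
    padded = np.concatenate([np.zeros((f.shape[0], 1)), f], axis=1)  # zero field edge
    pos_grad = np.maximum(np.diff(padded, axis=1), 0.0)
    row_spg = pos_grad.sum(axis=1)
    return row_spg.max(), row_spg

fm = np.array([[0.0, 1.0, 0.2, 1.5],
               [0.5, 0.4, 0.3, 0.2]])
max_spg, per_row = spg_score(fm)   # the first, jumpier row has the larger SPG
```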
Hu, Pingsha; Maiti, Tapabrata
2011-01-01
Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request. PMID:21611181
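The mean-variance smoothing step can be sketched with a lowess fit of per-gene variance against per-gene mean (lowess standing in for the paper's smoother); the subsequent shrinkage estimation of posterior means that NPMVS performs is not reproduced here, and the expression matrix below is a synthetic placeholder.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
expr = rng.normal(loc=8.0, scale=1.0, size=(2000, 6))   # toy log-expression matrix
gene_mean = expr.mean(axis=1)
gene_var = expr.var(axis=1, ddof=1)

# Fit a nonlinear smooth curve of variance versus mean, then read smoothed
# variances for every gene off that curve.
fit = lowess(gene_var, gene_mean, frac=0.3, return_sorted=True)  # columns: mean, smoothed var
smoothed_var = np.interp(gene_mean, fit[:, 0], fit[:, 1])
```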
He, Xiyang; Zhang, Xiaohong; Tang, Long; Liu, Wanke
2015-12-22
Many applications, such as marine navigation, land vehicle location, etc., require real-time precise positioning under medium or long baseline conditions. In this contribution, we develop a model of real-time kinematic decimeter-level positioning with BeiDou Navigation Satellite System (BDS) triple-frequency signals over medium distances. The ambiguities of two extra-wide-lane (EWL) combinations are fixed first, and then a wide lane (WL) combination is reformed based on the two EWL combinations for positioning. A theoretical and empirical analysis is given of the ambiguity fixing rate and the positioning accuracy of the presented method. The results indicate that the ambiguity fixing rate can be more than 98% when using BDS medium baseline observations, which is much higher than that of the dual-frequency Hatch-Melbourne-Wübbena (HMW) method. As for positioning accuracy, decimeter-level accuracy can be achieved with this method, which is comparable to that of the carrier-smoothed code differential positioning method. A signal interruption simulation experiment indicates that the proposed method can realize fast high-precision positioning, whereas the carrier-smoothed code differential positioning method needs several hundred seconds to obtain high-precision results. We can conclude that a relatively high accuracy and high fixing rate can be achieved for the triple-frequency WL method with single-epoch observations, displaying a significant advantage compared to the traditional carrier-smoothed code differential positioning method.
A detail-preserved and luminance-consistent multi-exposure image fusion algorithm
NASA Astrophysics Data System (ADS)
Wang, Guanquan; Zhou, Yue
2018-04-01
When irradiance across a scene varies greatly, camera constraints make it difficult to capture an image of the scene without over- or underexposed areas. Multi-exposure image fusion (MEF) is an effective way to deal with this problem by fusing multi-exposure images of a static scene. A novel MEF method is described in this paper. In the proposed algorithm, coarser-scale luminance consistency is preserved by contribution adjustment using the luminance information between blocks, while a detail-preserving smoothing filter stitches blocks smoothly without losing details. Experimental results show that the proposed method performs well in preserving luminance consistency and details.
Compressive Sensing via Nonlocal Smoothed Rank Function
Fan, Ya-Ru; Liu, Jun; Zhao, Xi-Le
2016-01-01
Compressive sensing (CS) theory asserts that we can reconstruct signals and images with only a small number of samples or measurements. Recent works exploiting the nonlocal similarity have led to better results in various CS studies. To better exploit the nonlocal similarity, in this paper, we propose a non-convex smoothed rank function based model for CS image reconstruction. We also propose an efficient alternating minimization method to solve the proposed model, which reduces a difficult and coupled problem to two tractable subproblems. Experimental results have shown that the proposed method performs better than several existing state-of-the-art CS methods for image reconstruction. PMID:27583683
NASA Astrophysics Data System (ADS)
Torgoev, Almaz; Havenith, Hans-Balder
2016-07-01
A 2D elasto-dynamic modelling of the pure topographic seismic response is performed for six models with a total length of around 23.0 km. These models are reconstructed from the real topographic settings of the landslide-prone slopes situated in the Mailuu-Suu River Valley, Southern Kyrgyzstan. The main studied parameter is the Arias Intensity (Ia, m/sec), which is applied in the GIS-based Newmark method to regionally map the seismically-induced landslide susceptibility. This method maps the Ia values via empirical attenuation laws, and our studies investigate the potential to include topographic input in them. Numerical studies analyse several signals with varying shape and changing central frequency values. All tests demonstrate that the spectral amplification patterns directly affect the amplification of the Ia values. These results make it possible to link the 2D distribution of the topographically amplified Ia values with a parameter called the smoothed curvature. The amplification values for the low-frequency signals are better correlated with the curvature smoothed over a larger spatial extent, while those for the high-frequency signals are more closely linked to the curvature with a smaller smoothing extent. The best predictions are provided by the curvature smoothed over the extent calculated according to Geli's law. Sample equations predicting the Ia amplification from the smoothed curvature are presented for sinusoid-shaped input signals. These laws cannot be directly implemented in the regional Newmark method, as 3D amplification of the Ia values involves additional complexities not studied here. Nevertheless, our 2D results prepare the theoretical framework which can potentially be applied to the 3D domain and, therefore, represent a robust basis for these future research targets.
Alegría, Margarita; Kessler, Ronald C.; McLaughlin, Katie A.; Gruber, Michael J.; Sampson, Nancy A.; Zaslavsky, Alan M.
2014-01-01
We evaluate the precision of a model estimating school prevalence of SED using a small area estimation method based on readily-available predictors from area-level census block data and school principal questionnaires. Adolescents at 314 schools participated in the National Comorbidity Supplement, a national survey of DSM-IV disorders among adolescents. A multilevel model indicated that predictors accounted for under half of the variance in school-level SED and even less when considering block-group predictors or principal report alone. While Census measures and principal questionnaires are significant predictors of individual-level SED, associations are too weak to generate precise school-level predictions of SED prevalence. PMID:24740174
Impedance computed tomography using an adaptive smoothing coefficient algorithm.
Suzuki, A; Uchiyama, A
2001-01-01
In impedance computed tomography, a fixed coefficient regularization algorithm has frequently been used to improve the ill-conditioning problem of the Newton-Raphson algorithm. However, a large amount of experimental data and a long computation time are needed to determine a good smoothing coefficient, because it has to be chosen manually from a number of candidates and remains constant across iterations. Thus, the fixed coefficient regularization algorithm sometimes distorts the information or fails to produce any effect. In this paper, a new adaptive smoothing coefficient algorithm is proposed. This algorithm automatically calculates the smoothing coefficient from the eigenvalue of the ill-conditioned matrix. Therefore, effective images can be obtained within a short computation time. The smoothing coefficient is also automatically adjusted using information related to the real resistivity distribution and the data collection method. In our impedance system, we have reconstructed the resistivity distributions of two phantoms using this algorithm. As a result, this algorithm needs only one-fifth of the computation time of the fixed coefficient regularization algorithm. The image is thus obtained more rapidly, making the method applicable to real-time monitoring of blood vessels.
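A hedged sketch of the adaptive idea: a regularized Gauss-Newton update in which the smoothing coefficient is recomputed at every iteration from the eigenvalues of the ill-conditioned matrix J^T J (here simply a small fraction of its largest eigenvalue). The paper's actual rule also uses the resistivity distribution and the data collection method, which are not modelled below.

```python
import numpy as np

def regularized_newton_step(J, residual, rho, alpha=1e-3):
    """One Gauss-Newton update with an adaptive smoothing coefficient taken from
    the spectrum of J^T J; alpha and the eigenvalue rule are illustrative."""
    JtJ = J.T @ J
    lam = alpha * np.linalg.eigvalsh(JtJ).max()        # adaptive smoothing coefficient
    delta = np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), J.T @ residual)
    return rho + delta, lam

# Toy usage with a random sensitivity (Jacobian) matrix.
rng = np.random.default_rng(0)
J = rng.normal(size=(64, 32))          # measurements x resistivity elements
rho = np.ones(32)                      # current resistivity estimate
residual = rng.normal(size=64)         # measured minus predicted voltages
rho_new, lam_used = regularized_newton_step(J, residual, rho)
```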
Takahashi, Hiro; Kobayashi, Takeshi; Honda, Hiroyuki
2005-01-15
When establishing prognostic predictors of various diseases using DNA microarray analysis technology, it is desirable to select significant genes for constructing the prognostic model, and it is also necessary to eliminate non-specific or erroneous genes before constructing the model. We applied projective adaptive resonance theory (PART) to gene screening for DNA microarray data. Genes selected by PART were subjected to our FNN-SWEEP modeling method for the construction of a cancer class prediction model. The model performance was evaluated through comparison with a conventional screening signal-to-noise (S2N) method or the nearest shrunken centroids (NSC) method. The FNN-SWEEP predictor with PART screening could discriminate classes of acute leukemia in blinded data with 97.1% accuracy and classes of lung cancer with 90.0% accuracy, while the accuracy of the predictor with S2N was only 85.3 and 70.0% and that of the predictor with NSC was 88.2 and 90.0%, respectively. The results have proven that PART was superior for gene screening. The software is available upon request from the authors. honda@nubio.nagoya-u.ac.jp
A Smoothed Eclipse Model for Solar Electric Propulsion Trajectory Optimization
NASA Technical Reports Server (NTRS)
Aziz, Jonathan D.; Scheeres, Daniel J.; Parker, Jeffrey S.; Englander, Jacob A.
2017-01-01
Solar electric propulsion (SEP) is the dominant design option for employing low-thrust propulsion on a space mission. Spacecraft solar arrays power the SEP system but are subject to blackout periods during solar eclipse conditions. Discontinuity in power available to the spacecraft must be accounted for in trajectory optimization, but gradient-based methods require a differentiable power model. This work presents a power model that smooths the eclipse transition from total eclipse to total sunlight with a logistic function. Example trajectories are computed with differential dynamic programming, a second-order gradient-based method.
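A minimal sketch of the smoothed eclipse idea, assuming a signed shadow variable d (negative in umbra, positive in full sunlight) and a sharpness parameter k; both are illustrative stand-ins, not the paper's exact shadow geometry or power model.

```python
import numpy as np

def smoothed_shadow_factor(d, k=50.0):
    """Smoothed eclipse power factor: near 0 in umbra, near 1 in sunlight, with the
    sharp transition replaced by a logistic function so that gradient-based
    trajectory optimizers see a differentiable power model."""
    return 1.0 / (1.0 + np.exp(-k * d))

d = np.linspace(-0.2, 0.2, 5)                         # signed shadow variable (assumed)
available_power = 10.0 * smoothed_shadow_factor(d)    # e.g. a hypothetical 10 kW array
```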
in Mapping of Gastric Cancer Incidence in Iran
Asmarian, Naeimehossadat; Jafari-Koshki, Tohid; Soleimani, Ali; Taghi Ayatollahi, Seyyed Mohammad
2016-10-01
Background: In many countries gastric cancer has the highest incidence among the gastrointestinal cancers, and it is the second most common cancer in Iran. The aim of this study was to identify and map high-risk gastric cancer regions at the county level in Iran. Methods: In this study we analyzed gastric cancer data for Iran in the years 2003-2010. Area-to-area Poisson kriging and Besag, York and Mollie (BYM) spatial models were applied to smooth the standardized incidence ratios of gastric cancer for the 373 counties surveyed in this study. The two methods were compared in terms of accuracy and precision in identifying high-risk regions. Results: The highest smoothed standardized incidence rate (SIR) according to area-to-area Poisson kriging was in Meshkinshahr county in Ardabil province in north-western Iran (2.4, SD = 0.05), while the highest smoothed SIR according to the BYM model was in Ardabil, the capital of that province (2.9, SD = 0.09). Conclusion: Both methods of mapping, area-to-area Poisson kriging and BYM, showed the gastric cancer incidence rate to be highest in north and north-west Iran. However, area-to-area Poisson kriging was more precise than the BYM model and required less smoothing. According to the results obtained, preventive measures and treatment programs should be focused on particular counties of Iran.
Giorio, Chiara; Moyroud, Edwige; Glover, Beverley J; Skelton, Paul C; Kalberer, Markus
2015-10-06
Plant cuticle, which is the outermost layer covering the aerial parts of all plants including petals and leaves, can present a wide range of patterns that, combined with cell shape, can generate unique physical, mechanical, or optical properties. For example, arrays of regularly spaced nanoridges have been found on the dark (anthocyanin-rich) portion at the base of the petals of Hibiscus trionum. Those ridges act as a diffraction grating, producing an iridescent effect. Because the surface of the distal white region of the petals is smooth and noniridescent, a selective chemical characterization of the surface of the petals on different portions (i.e., ridged vs smooth) is needed to understand whether distinct cuticular patterns correlate with distinct chemical compositions of the cuticle. In the present study, a rapid screening method has been developed for the direct surface analysis of Hibiscus trionum petals using liquid extraction surface analysis (LESA) coupled with high-resolution mass spectrometry. The optimized method was used to characterize a wide range of plant metabolites and cuticle monomers on the upper (adaxial) surface of the petals on both the white/smooth and anthocyanic/ridged regions, and on the lower (abaxial) surface, which is entirely smooth. The main components detected on the surface of the petals are low-molecular-weight organic acids, sugars, and flavonoids. The ridged portion on the upper surface of the petal is enriched in long-chain fatty acids, which are constituents of the wax fraction of the cuticle. These compounds were not detected on the white/smooth region of the upper petal surface or on the smooth lower surface.
Kim, Dae-Hee; Choi, Jae-Hun; Lim, Myung-Eun; Park, Soo-Jun
2008-01-01
This paper suggests a method of correcting the distance between an ambient intelligence display and a user based on linear regression and a smoothing method, by which distance information of a user who approaches the display can be accurately output even under unanticipated conditions using a passive infrared (PIR) sensor and an ultrasonic device. The developed system consists of an ambient intelligence display, an ultrasonic transmitter, and a sensor gateway. The modules communicate with each other through RF (radio frequency) communication. The ambient intelligence display includes an ultrasonic receiver and a PIR sensor for motion detection. In particular, this system dynamically selects between smoothing and linear regression for processing the current input data, through a judgment process based on the previous reliable data stored in a queue. In addition, we implemented GUI software in JAVA for real-time location tracking on the ambient intelligence display.
Gradient approach to quantify the gradation smoothness for output media
NASA Astrophysics Data System (ADS)
Kim, Youn Jin; Bang, Yousun; Choh, Heui-Keun
2010-01-01
We aim to quantify the perception of color gradation smoothness using objectively measurable properties. We propose a model to compute the smoothness of hardcopy color-to-color gradations. It is a gradient-based method computed as a function of the 95th percentile of the second derivative for the tone-jump estimator and the 5th percentile of the first derivative for the tone-clipping estimator. The performance of the model and of a previously suggested method was assessed psychophysically, and their prediction accuracies were compared. Our model showed a stronger Pearson correlation with the corresponding visual data, with the magnitude of the correlation reaching up to 0.87. Its statistical significance was verified through analysis of variance. Color variations of the representative memory colors (blue sky, green grass and Caucasian skin) were rendered as gradational scales and used as the test stimuli.
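The two estimators named above can be computed directly from a measured gradation ramp; the sketch below uses absolute finite differences and does not reproduce how the paper combines the two estimators into its final smoothness score.

```python
import numpy as np

def gradation_metrics(ramp):
    """Gradient-based smoothness estimators for a measured gradation ramp (e.g.
    lightness sampled along a printed color-to-color sweep): the 95th percentile
    of the absolute second derivative flags tone jumps, and the 5th percentile of
    the absolute first derivative flags tone clipping (flat spots)."""
    first = np.abs(np.diff(ramp))
    second = np.abs(np.diff(ramp, n=2))
    tone_jump = np.percentile(second, 95)
    tone_clip = np.percentile(first, 5)
    return tone_jump, tone_clip

# Toy ramp: a nearly linear sweep with measurement noise.
ramp = np.linspace(20.0, 80.0, 64) + np.random.default_rng(0).normal(0, 0.1, 64)
jump, clip = gradation_metrics(ramp)
```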
Signal processing method and system for noise removal and signal extraction
Fu, Chi Yung; Petrich, Loren
2009-04-14
A signal processing method and system combining smooth level wavelet pre-processing together with artificial neural networks all in the wavelet domain for signal denoising and extraction. Upon receiving a signal corrupted with noise, an n-level decomposition of the signal is performed using a discrete wavelet transform to produce a smooth component and a rough component for each decomposition level. The nth-level smooth component is then inputted into a corresponding neural network pre-trained to filter out noise in that component by pattern recognition in the wavelet domain. Additional rough components, beginning at the highest level, may also be retained and inputted into corresponding neural networks pre-trained to filter out noise in those components also by pattern recognition in the wavelet domain. In any case, an inverse discrete wavelet transform is performed on the combined output from all the neural networks to recover a clean signal back in the time domain.
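A sketch of the decompose / process-per-component / inverse-transform pipeline described in the patent, with simple soft-thresholding standing in for the pre-trained neural networks purely to show the data flow; the wavelet, decomposition level and threshold rule are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_domain_denoise(signal, wavelet="db4", level=4, keep_rough=2):
    """n-level DWT, per-component processing, inverse DWT. In the patent the
    smooth component and the retained rough components each go through their own
    pre-trained network; here the smooth component is passed through unchanged
    and the retained rough components are soft-thresholded instead."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    smooth, roughs = coeffs[0], coeffs[1:]
    processed = [smooth]
    for i, c in enumerate(roughs):                     # roughs[0] is the highest level
        if i < keep_rough:
            thr = np.median(np.abs(c)) / 0.6745 * np.sqrt(2 * np.log(len(signal)))
            processed.append(pywt.threshold(c, thr, mode="soft"))
        else:
            processed.append(np.zeros_like(c))         # discard the finest scales
    return pywt.waverec(processed, wavelet)[: len(signal)]

t = np.linspace(0, 1, 1024)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
denoised = wavelet_domain_denoise(noisy)
```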
A generalized transport-velocity formulation for smoothed particle hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chi; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A.
The standard smoothed particle hydrodynamics (SPH) method suffers from tensile instability. In fluid-dynamics simulations this instability leads to particle clumping and void regions when negative pressure occurs. In solid-dynamics simulations, it results in unphysical structure fragmentation. In this work the transport-velocity formulation of Adami et al. (2013) is generalized to provide a solution to this long-standing problem. Rather than imposing a global background pressure, a variable background pressure is used to modify the particle transport velocity and eliminate the tensile instability completely. Furthermore, such a modification is localized by defining a shortened smoothing length. The generalized formulation is suitable for fluid and solid materials with and without free surfaces. The results of extensive numerical tests on both fluid and solid dynamics problems indicate that the new method provides a unified approach for multi-physics SPH simulations.
Clinical predictors of vestibulo-ocular dysfunction in pediatric sports-related concussion.
Ellis, Michael J; Cordingley, Dean M; Vis, Sara; Reimer, Karen M; Leiter, Jeff; Russell, Kelly
2017-01-01
OBJECTIVE There were 2 objectives of this study. The first objective was to identify clinical variables associated with vestibulo-ocular dysfunction (VOD) detected at initial consultation among pediatric patients with acute sports-related concussion (SRC) and postconcussion syndrome (PCS). The second objective was to reexamine the prevalence of VOD in this clinical cohort and evaluate the effect of VOD on length of recovery and the development of PCS. METHODS A retrospective review was conducted for all patients with acute SRC and PCS who were evaluated at a pediatric multidisciplinary concussion program from September 2013 to May 2015. Acute SRC was defined as presenting < 30 days postinjury, and PCS was defined according to the International Classification of Diseases, 10th Revision criteria and included being symptomatic 30 days or longer postinjury. The initial assessment included clinical history and physical examination performed by 1 neurosurgeon. Patients were assessed for VOD, defined as the presence of more than 1 subjective vestibular and oculomotor complaint (dizziness, diplopia, blurred vision, etc.) and more than 1 objective physical examination finding (abnormal near point of convergence, smooth pursuits, saccades, or vestibulo-ocular reflex testing). Poisson regression analysis was used to identify factors that increased the risk of VOD at initial presentation and the development of PCS. RESULTS Three hundred ninety-nine children, including 306 patients with acute SRC and 93 with PCS, were included. Of these patients, 30.1% of those with acute SRC (65.0% male, mean age 13.9 years) and 43.0% of those with PCS (41.9% male, mean age 15.4 years) met the criteria for VOD at initial consultation. Independent predictors of VOD at initial consultation included female sex, preinjury history of depression, posttraumatic amnesia, and presence of dizziness, blurred vision, or difficulty focusing at the time of injury. Independent predictors of PCS among patients with acute SRC included the presence of VOD at initial consultation, preinjury history of depression, and posttraumatic amnesia at the time of injury. CONCLUSIONS This study identified important potential risk factors for the development of VOD following pediatric SRC. These results provide confirmatory evidence that VOD at initial consultation is associated with prolonged recovery and is an independent predictor for the development of PCS. Future studies examining clinical prediction rules in pediatric concussion should include VOD. Additional research is needed to elucidate the natural history of VOD following SRC and establish evidence-based indications for targeted vestibular rehabilitation.
NASA Technical Reports Server (NTRS)
Petot, D.; Loiseau, H.
1982-01-01
Unsteady aerodynamic methods adopted for the study of aeroelasticity in helicopters are considered with focus on the development of a semiempirical model of unsteady aerodynamic forces acting on an oscillating profile at high incidence. The successive smoothing algorithm described leads to the model's coefficients in a very satisfactory manner.
Runoff Potentiality of a Watershed through SCS and Functional Data Analysis Technique
Adham, M. I.; Shirazi, S. M.; Othman, F.; Rahman, S.; Yusop, Z.; Ismail, Z.
2014-01-01
Runoff potentiality of a watershed was assessed based on the curve number (CN), soil conservation service (SCS), and functional data analysis (FDA) techniques. Daily discrete rainfall data were collected from weather stations in the study area and analyzed with the lowess method to obtain a smoothed curve. As runoff data represent a periodic pattern in each watershed, a Fourier series was introduced to fit the smooth curve of the eight watersheds. Seven Fourier terms were used for watersheds 5 and 8, while 8 terms were used for the remaining watersheds to best fit the data. Bootstrapped smooth curve analysis reveals that watersheds 1, 2, 3, 6, 7, and 8 have monthly mean runoffs of 29, 24, 22, 23, 26, and 27 mm, respectively, and would likely contribute to surface runoff in the study area. The purpose of this study was to transform runoff data into a smooth curve representing the surface runoff pattern and mean runoff of each watershed through statistical methods. This study provides information on the runoff potentiality of each watershed and also provides input data for hydrological modeling. PMID:25152911
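A minimal sketch of the Fourier-series fitting step, assuming daily runoff values over one year and a least-squares fit of a 7-term series; the data below are synthetic placeholders, not the study's watershed records.

```python
import numpy as np

def fit_fourier(t, y, n_terms=7, period=365.0):
    """Least-squares fit of a truncated Fourier series to a runoff time series,
    returning the coefficients and the fitted smooth curve."""
    cols = [np.ones_like(t)]
    for k in range(1, n_terms + 1):
        cols += [np.cos(2 * np.pi * k * t / period), np.sin(2 * np.pi * k * t / period)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef

# Synthetic daily runoff with an annual cycle plus noise.
days = np.arange(365, dtype=float)
runoff = 25 + 5 * np.sin(2 * np.pi * days / 365) + np.random.default_rng(0).normal(0, 1, 365)
coef, smooth_curve = fit_fourier(days, runoff)
```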
SU-E-T-314: Dosimetric Effect of Smooth Drilling On Proton Compensators in Prostate Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reyhan, M; Yue, N; Zou, J
2015-06-15
Purpose: To evaluate the dosimetric effect of smooth drilling of proton compensators in proton prostate plans when compared to typical plunge drilling settings. Methods: Twelve prostate patients were planned in the Eclipse treatment planning system using three different drill settings: Smooth, Plunge drill A, and Plunge drill B. The differences between A and B were: spacing X [cm]: 0.4 (A), 0.1 (B); spacing Y [cm]: 0.35 (A), 0.1 (B); row offset [cm]: 0.2 (A), 0 (B). Planning parameters were kept consistent between the different plans, which used a two opposed lateral beam arrangement. Mean absolute differences in dosimetry for OAR constraints are presented. Results: The smooth-drilled compensator plans yielded target coverage equivalent to the plans generated with drill settings A and B. Overall, the smooth compensators reduced dose to the majority of organs at risk compared to settings A and B. Constraints were reduced for the following OAR: rectal V75 by 2.12 and 2.48%, V70 by 2.45 and 2.91%, V65 by 2.85 and 3.37%, V50 by 2.3 and 5.1%, bladder V65 by 4.49 and 3.67%, penile bulb mean by 3.7 and 4.2 Gy, and the maximum plan dose by 5.3 and 7.4 Gy, for option A vs smooth and option B vs smooth, respectively. The femoral head constraint (V50 < 5%) was met by all plans, but it was not consistently lower for the smooth drilling plan. Conclusion: Smooth-drilled compensators provide equivalent target coverage and overall slightly cooler plans for the majority of organs at risk; they also minimize the potential dosimetric impacts caused by patient positioning uncertainty.
Stability of smooth and rough mini-implants: clinical and biomechanical evaluation - an in vivo study
Vilani, Giselle Naback Lemes; Ruellas, Antônio Carlos de Oliveira; Elias, Carlos Nelson; Mattos, Cláudia Trindade
2015-01-01
Objective: To compare in vivo orthodontic mini-implants (MI) of smooth (machined) and rough (acid etched) surfaces, assessing primary and secondary stability. Methods: Thirty-six (36) MI were inserted in the mandibles of six (6) dogs. Each animal received six (6) MI. In the right hemiarch, three (3) MI without surface treatment (smooth) were inserted, whereas in the left hemiarch, another three (3) MI with acid etched surfaces (rough) were inserted. The two distal MI in each hemiarch received an immediate load of 1.0 N for 16 weeks, whereas the MI in the mesial extremity was not subject to loading. Stability was measured by insertion and removal torque, initial and final mobility and by inter mini-implant distance. Results: There was no statistical behavioral difference between smooth and rough MI. High insertion torque and reduced initial mobility were observed in all groups, as well as a reduction in removal torques in comparison with insertion torque. Rough MI presented higher removal torque and lower final mobility in comparison to smooth MI. MI did not remain static, with displacement of rough MI being smaller in comparison with smooth MI, but with no statistical difference. Conclusions: MI primary stability was greater than stability measured at removal. There was no difference in stability between smooth and rough MI when assessing mobility, displacement and insertion as well as removal torques. PMID:26560819
Remote sensing of soil moisture content over bare fields at 1.4 GHz frequency
NASA Technical Reports Server (NTRS)
Wang, J. R.; Choudhury, B. J.
1980-01-01
A simple method of estimating moisture content (W) of a bare soil from the observed brightness temperature (T_B) at 1.4 GHz is discussed. The method is based on a radiative transfer model calculation, which has been successfully used in the past to account for many observational results, with some modifications to take into account the effect of surface roughness. Besides the measured T_B's, the three additional inputs required by the method are the effective soil thermodynamic temperature, the precise relation between W and the smooth field brightness temperature T_B, and a parameter specifying the surface roughness characteristics. The soil effective temperature can be readily measured, and the procedures for estimating the surface roughness parameter and obtaining the relation between W and smooth field brightness temperature are discussed in detail. Dual polarized radiometric measurements at an off-nadir incident angle are sufficient to estimate both the surface roughness parameter and W, provided that the relation between W and smooth field brightness temperature at the same angle is known. The method of W estimation is demonstrated with two sets of experimental data, one from a controlled field experiment by a mobile tower and the other from aircraft overflight. The results from both data sets are encouraging when the estimated W's are compared with the acquired ground truth of W's in the top 2 cm layer. An offset between the estimated and the measured W's exists in the results of the analyses, but that can be accounted for by the presently poor knowledge of the relationship between W and smooth field brightness temperature for various types of soils. An approach to quantify this relationship for different soils and thus improve the method of W estimation is suggested.
NASA Astrophysics Data System (ADS)
García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin
2014-10-01
Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has a second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefacted zones of fluids while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overload. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography
Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai
2016-01-01
EEG source imaging enables us to reconstruct current density in the brain from the electrical measurements with excellent temporal resolution (~ ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than that of the potential dipole locations, as well as noise contamination in the recorded signals. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the relevant total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortex surface. In addition, the ℓ1−2 regularization is utilized to promote sparsity on the current density itself. We demonstrate that ℓ1−2 regularization enhances sparsity and accelerates computations compared with ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529
NASA Technical Reports Server (NTRS)
Lewis, Michael
1994-01-01
Statistical encoding techniques enable the reduction of the number of bits required to encode a set of symbols, and are derived from their probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange Multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
NASA Astrophysics Data System (ADS)
Jiang, Jiamin; Younis, Rami M.
2017-06-01
The first-order methods commonly employed in reservoir simulation for computing the convective fluxes introduce excessive numerical diffusion leading to severe smoothing of displacement fronts. We present a fully-implicit cell-centered finite-volume (CCFV) framework that can achieve second-order spatial accuracy on smooth solutions, while at the same time maintain robustness and nonlinear convergence performance. A novel multislope MUSCL method is proposed to construct the required values at edge centroids in a straightforward and effective way by taking advantage of the triangular mesh geometry. In contrast to the monoslope methods in which a unique limited gradient is used, the multislope concept constructs specific scalar slopes for the interpolations on each edge of a given element. Through the edge centroids, the numerical diffusion caused by mesh skewness is reduced, and optimal second order accuracy can be achieved. Moreover, an improved smooth flux-limiter is introduced to ensure monotonicity on non-uniform meshes. The flux-limiter provides high accuracy without degrading nonlinear convergence performance. The CCFV framework is adapted to accommodate a lower-dimensional discrete fracture-matrix (DFM) model. Several numerical tests with discrete fractured system are carried out to demonstrate the efficiency and robustness of the numerical model.
Li, Qing; Liang, Steven Y
2018-04-20
Microstructure images of metallic materials play a significant role in industrial applications. To address image degradation problem of metallic materials, a novel image restoration technique based on K-means singular value decomposition (KSVD) and smoothing penalty sparse representation (SPSR) algorithm is proposed in this work, the microstructure images of aluminum alloy 7075 (AA7075) material are used as examples. To begin with, to reflect the detail structure characteristics of the damaged image, the KSVD dictionary is introduced to substitute the traditional sparse transform basis (TSTB) for sparse representation. Then, due to the image restoration, modeling belongs to a highly underdetermined equation, and traditional sparse reconstruction methods may cause instability and obvious artifacts in the reconstructed images, especially reconstructed image with many smooth regions and the noise level is strong, thus the SPSR (here, q = 0.5) algorithm is designed to reconstruct the damaged image. The results of simulation and two practical cases demonstrate that the proposed method has superior performance compared with some state-of-the-art methods in terms of restoration performance factors and visual quality. Meanwhile, the grain size parameters and grain boundaries of microstructure image are discussed before and after they are restored by proposed method.
Intermittent Demand Forecasting in a Tertiary Pediatric Intensive Care Unit.
Cheng, Chen-Yang; Chiang, Kuo-Liang; Chen, Meng-Yin
2016-10-01
Forecasts of the demand for medical supplies both directly and indirectly affect the operating costs and the quality of the care provided by health care institutions. Specifically, overestimating demand induces an inventory surplus, whereas underestimating demand possibly compromises patient safety. Uncertainty in forecasting the consumption of medical supplies generates intermittent demand events. The intermittent demand patterns for medical supplies are generally classified as lumpy, erratic, smooth, and slow-moving demand. This study was conducted with the purpose of advancing a tertiary pediatric intensive care unit's efforts to achieve a high level of accuracy in its forecasting of the demand for medical supplies. On this point, several demand forecasting methods were compared in terms of the forecast accuracy of each. The results confirm that applying Croston's method combined with a single exponential smoothing method yields the most accurate results for forecasting lumpy, erratic, and slow-moving demand, whereas the Simple Moving Average (SMA) method is the most suitable for forecasting smooth demand. In addition, when the classification of demand consumption patterns were combined with the demand forecasting models, the forecasting errors were minimized, indicating that this classification framework can play a role in improving patient safety and reducing inventory management costs in health care institutions.
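As a concrete illustration of the forecasting approach the abstract recommends for lumpy, erratic, and slow-moving demand, the sketch below implements Croston's method with single exponential smoothing. The demand series and smoothing constant are assumptions for illustration, not data from the study.

```python
# Sketch: Croston's method for an intermittent demand series.
import numpy as np

def croston(demand, alpha=0.1):
    """Return a one-step-ahead demand-rate forecast for an intermittent series."""
    demand = np.asarray(demand, dtype=float)
    z = None   # smoothed non-zero demand size
    p = None   # smoothed inter-demand interval
    q = 1      # periods since the last non-zero demand
    for d in demand:
        if d > 0:
            z = d if z is None else alpha * d + (1 - alpha) * z
            p = q if p is None else alpha * q + (1 - alpha) * p
            q = 1
        else:
            q += 1
    return 0.0 if z is None else z / p

weekly_usage = [0, 0, 5, 0, 0, 0, 7, 0, 3, 0, 0, 6]   # assumed usage of one supply item
print("forecast per period:", round(croston(weekly_usage), 2))
```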
Diet and scavenging habits of the smooth skate Dipturus innominatus.
Forman, J S; Dunn, M R
2012-04-01
The diet of smooth skate Dipturus innominatus was determined from examination of stomach contents of 321 specimens of 29·3-152·0 cm pelvic length, sampled from research and commercial trawlers at depths of 231-789 m on Chatham Rise, New Zealand. The diet was dominated by the benthic decapods Metanephrops challengeri and Munida gracilis, the natant decapod Campylonotus rathbunae and fishes from 17 families, of which hoki Macruronus novaezelandiae, sea perch Helicolenus barathri, various Macrouridae and a variety of discarded fishes were the most important. Multivariate analyses indicated the best predictors of diet variability were D. innominatus length and a spatial model. The diet of small D. innominatus was predominantly small crustaceans, with larger crustaceans, fishes and then scavenged discarded fishes increasing in importance as D. innominatus got larger. Scavenged discards were obvious as fish heads or tails only, or skeletal remains after filleting, often from pelagic species. Demersal fish prey were most frequent on the south and west Chatham Rise, in areas where commercial fishing was most active. Dipturus innominatus are highly vulnerable to overfishing, but discarding practices by commercial fishing vessels may provide a positive feedback to populations through improved scavenging opportunities. © 2012 NIWA. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.
Jahani, Sahar; Setarehdan, Seyed K; Boas, David A; Yücel, Meryem A
2018-01-01
Motion artifact contamination in near-infrared spectroscopy (NIRS) data has become an important challenge in realizing the full potential of NIRS for real-life applications. Various motion correction algorithms have been used to alleviate the effect of motion artifacts on the estimation of the hemodynamic response function. While smoothing methods, such as wavelet filtering, are excellent in removing motion-induced sharp spikes, the baseline shifts in the signal remain after this type of filtering. Methods such as spline interpolation, on the other hand, can properly correct baseline shifts; however, they leave residual high-frequency spikes. We propose a hybrid method that takes advantage of different correction algorithms. This method first identifies the baseline shifts and corrects them using a spline interpolation method or targeted principal component analysis. The remaining spikes, on the other hand, are corrected by smoothing methods: Savitzky-Golay (SG) filtering or robust locally weighted regression and smoothing. We have compared our new approach with the existing correction algorithms in terms of hemodynamic response function estimation using the following metrics: mean-squared error, peak-to-peak error, Pearson's correlation, and the area under the receiver operator characteristic curve. We found that the spline-SG hybrid method provides reasonable improvements in all these metrics with a relatively short computational time. The dataset and the code used in this study are made available online for the use of all interested researchers.
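The hybrid idea above (correct baseline shifts first, then smooth residual spikes) can be sketched as follows. This is not the authors' released code; the shift-detection rule, the synthetic NIRS-like signal, and the Savitzky-Golay settings are illustrative assumptions.

```python
# Sketch: crude baseline-shift correction followed by Savitzky-Golay smoothing.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 600)
signal = np.sin(0.3 * t) + rng.normal(0, 0.05, t.size)
signal[300:] += 1.5                       # simulated abrupt baseline shift
signal[150] += 0.8                        # simulated motion spike

# Stage 1: correct the baseline at the largest jump in the signal
jump = np.argmax(np.abs(np.diff(signal)))
corrected = signal.copy()
corrected[jump + 1:] -= signal[jump + 1] - signal[jump]

# Stage 2: SG smoothing of the remaining high-frequency spikes
clean = savgol_filter(corrected, window_length=21, polyorder=3)
print("residual spike amplitude after smoothing:", round(abs(clean[150] - clean[149]), 3))
```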
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1981-01-01
A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.
Methods and energy storage devices utilizing electrolytes having surface-smoothing additives
Xu, Wu; Zhang, Jiguang; Graff, Gordon L; Chen, Xilin; Ding, Fei
2015-11-12
Electrodeposition and energy storage devices utilizing an electrolyte having a surface-smoothing additive can result in self-healing, instead of self-amplification, of initial protuberant tips that give rise to roughness and/or dendrite formation on the substrate and anode surface. For electrodeposition of a first metal (M1) on a substrate or anode from one or more cations of M1 in an electrolyte solution, the electrolyte solution is characterized by a surface-smoothing additive containing cations of a second metal (M2), wherein cations of M2 have an effective electrochemical reduction potential in the solution lower than that of the cations of M1.
Curvilinear grids for WENO methods in astrophysical simulations
NASA Astrophysics Data System (ADS)
Grimm-Strele, H.; Kupka, F.; Muthsam, H. J.
2014-03-01
We investigate the applicability of curvilinear grids in the context of astrophysical simulations and WENO schemes. With the non-smooth mapping functions from Calhoun et al. (2008), we can tackle many astrophysical problems which were out of scope with the standard grids in numerical astrophysics. We describe the difficulties occurring when implementing curvilinear coordinates into our WENO code, and how we overcome them. We illustrate the theoretical results with numerical data. The WENO finite difference scheme works only for high Mach number flows and smooth mapping functions, whereas the finite volume scheme gives accurate results even for low Mach number flows and on non-smooth grids.
Automated Knowledge Discovery From Simulators
NASA Technical Reports Server (NTRS)
Burl, Michael; DeCoste, Dennis; Mazzoni, Dominic; Scharenbroich, Lucas; Enke, Brian; Merline, William
2007-01-01
A computational method, SimLearn, has been devised to facilitate efficient knowledge discovery from simulators. Simulators are complex computer programs used in science and engineering to model diverse phenomena such as fluid flow, gravitational interactions, coupled mechanical systems, and nuclear, chemical, and biological processes. SimLearn uses active-learning techniques to efficiently address the "landscape characterization problem." In particular, SimLearn tries to determine which regions in "input space" lead to a given output from the simulator, where "input space" refers to an abstraction of all the variables going into the simulator, e.g., initial conditions, parameters, and interaction equations. Landscape characterization can be viewed as an attempt to invert the forward mapping of the simulator and recover the inputs that produce a particular output. Given that a single simulation run can take days or weeks to complete even on a large computing cluster, SimLearn attempts to reduce costs by reducing the number of simulations needed to effect discoveries. Unlike conventional data-mining methods that are applied to static predefined datasets, SimLearn involves an iterative process in which a most informative dataset is constructed dynamically by using the simulator as an oracle. On each iteration, the algorithm models the knowledge it has gained through previous simulation trials and then chooses which simulation trials to run next. Running these trials through the simulator produces new data in the form of input-output pairs. The overall process is embodied in an algorithm that combines support vector machines (SVMs) with active learning. SVMs use learning from examples (the examples are the input-output pairs generated by running the simulator) and a principle called maximum margin to derive predictors that generalize well to new inputs. In SimLearn, the SVM plays the role of modeling the knowledge that has been gained through previous simulation trials. Active learning is used to determine which new input points would be most informative if their output were known. The selected input points are run through the simulator to generate new information that can be used to refine the SVM. The process is then repeated. SimLearn carefully balances exploration (semi-randomly searching around the input space) versus exploitation (using the current state of knowledge to conduct a tightly focused search). During each iteration, SimLearn uses not one, but an ensemble of SVMs. Each SVM in the ensemble is characterized by different hyper-parameters that control various aspects of the learned predictor - for example, whether the predictor is constrained to be very smooth (nearby points in input space lead to similar output predictions) or whether the predictor is allowed to be "bumpy." The various SVMs will have different preferences about which input points they would like to run through the simulator next. SimLearn includes a formal mechanism for balancing the ensemble SVM preferences so that a single choice can be made for the next set of trials.
NUMERICAL CONVERGENCE IN SMOOTHED PARTICLE HYDRODYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Qirong; Li, Yuexing; Hernquist, Lars
2015-02-10
We study the convergence properties of smoothed particle hydrodynamics (SPH) using numerical tests and simple analytic considerations. Our analysis shows that formal numerical convergence is possible in SPH only in the joint limit N → ∞, h → 0, and N_nb → ∞, where N is the total number of particles, h is the smoothing length, and N_nb is the number of neighbor particles within the smoothing volume used to compute smoothed estimates. Previous work has generally assumed that the conditions N → ∞ and h → 0 are sufficient to achieve convergence, while holding N_nb fixed. We demonstrate that if N_nb is held fixed as the resolution is increased, there will be a residual source of error that does not vanish as N → ∞ and h → 0. Formal numerical convergence in SPH is possible only if N_nb is increased systematically as the resolution is improved. Using analytic arguments, we derive an optimal compromise scaling for N_nb by requiring that this source of error balance that present in the smoothing procedure. For typical choices of the smoothing kernel, we find N_nb ∝ N^0.5. This means that if SPH is to be used as a numerically convergent method, the required computational cost does not scale with particle number as O(N), but rather as O(N^(1+δ)), where δ ≈ 0.5, with a weak dependence on the form of the smoothing kernel.
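The cost argument in the abstract can be made concrete with a few lines of arithmetic: if the neighbor number is scaled as N_nb ∝ N^0.5, the per-step work grows roughly as N · N_nb, i.e. O(N^1.5). The reference neighbor count of 50 at N = 10^5 in the sketch below is an assumed value for illustration only.

```python
# Sketch: cost scaling when the neighbour number grows as N^0.5.
import numpy as np

N = np.array([1e5, 1e6, 1e7, 1e8])
N_nb = 50.0 * np.sqrt(N / 1e5)        # assumed 50 neighbours at N = 1e5, scaled as N^0.5
cost = N * N_nb                        # work per step ∝ N * N_nb, i.e. O(N^1.5)
for n, nb, c in zip(N, N_nb, cost):
    print(f"N={n:.0e}  N_nb={nb:6.0f}  relative cost={c / cost[0]:8.1f}")
```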
Deep Laser-Assisted Lamellar Anterior Keratoplasty with Microkeratome-Cut Grafts
Yokogawa, Hideaki; Tang, Maolong; Li, Yan; Liu, Liang; Chamberlain, Winston; Huang, David
2016-01-01
Background The goals of this laboratory study were to evaluate the interface quality in laser-assisted lamellar anterior keratoplasty (LALAK) with microkeratome-cut grafts, and to achieve good graft–host apposition. Methods Simulated LALAK surgeries were performed on six pairs of eye bank corneoscleral discs. Anterior lamellar grafts were precut with microkeratomes. Deep femtosecond (FS) laser cuts were performed on host corneas followed by excimer laser smoothing. Different parameters of FS laser cuts and excimer laser smoothing were tested. OCT was used to measure corneal pachymetry and evaluate graft-host apposition. The interface quality was quantified in a masked fashion using a 5-point scale based on scanning electron microscopy images. Results Deep FS laser cuts at 226–380 μm resulted in visible ridges on the host bed. Excimer laser smoothing with central ablation depth of 29 μm and saline as a smoothing agent did not adequately reduce ridges (score = 4.0). Deeper excimer laser ablation of 58 μm and Optisol-GS as a smoothing agent smoothed ridges to an acceptable level (score = 2.1). Same sizing of the graft and host cut diameters with an approximately 50 μm deeper host side-cut relative to the central graft thickness provided the best graft–host fit. Conclusions Deep excimer laser ablation with a viscous smoothing agent was needed to remove ridges after deep FS lamellar cuts. The host side cut should be deep enough to accommodate thicker graft peripheral thickness compared to the center. This LALAK design provides smooth lamellar interfaces, moderately thick grafts, and good graft-host fits. PMID:26890667
Digital relief generation from 3D models
NASA Astrophysics Data System (ADS)
Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian
2016-09-01
It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions with different gestures and combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under the appropriate scaling factors.
Defining window-boundaries for genomic analyses using smoothing spline techniques
Beissinger, Timothy M.; Rosa, Guilherme J.M.; Kaeppler, Shawn M.; ...
2015-04-17
High-density genomic data is often analyzed by combining information over windows of adjacent markers. Interpretation of data grouped in windows versus at individual locations may increase statistical power, simplify computation, reduce sampling noise, and reduce the total number of tests performed. However, use of adjacent marker information can result in over- or under-smoothing, undesirable window boundary specifications, or highly correlated test statistics. We introduce a method for defining windows based on statistically guided breakpoints in the data, as a foundation for the analysis of multiple adjacent data points. This method involves first fitting a cubic smoothing spline to the data and then identifying the inflection points of the fitted spline, which serve as the boundaries of adjacent windows. This technique does not require prior knowledge of linkage disequilibrium, and therefore can be applied to data collected from individual or pooled sequencing experiments. Moreover, in contrast to existing methods, an arbitrary choice of window size is not necessary, since these are determined empirically and allowed to vary along the genome.
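A minimal sketch of the boundary-finding idea described above: fit a cubic smoothing spline to a noisy per-marker statistic and take the sign changes of its second derivative as window boundaries. The synthetic data and the smoothing factor are assumptions; this is not the published implementation.

```python
# Sketch: window boundaries from the inflection points of a cubic smoothing spline.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
pos = np.arange(0, 2000)                                    # marker positions (assumed)
stat = np.sin(pos / 150.0) + rng.normal(0, 0.3, pos.size)   # noisy per-marker statistic

spline = UnivariateSpline(pos, stat, k=3, s=len(pos) * 0.1)  # cubic smoothing spline
curvature = spline.derivative(n=2)(pos)

# Inflection points: positions where the second derivative changes sign
boundaries = pos[np.where(np.diff(np.sign(curvature)) != 0)[0]]
print("number of windows:", len(boundaries) + 1)
```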
NASA Astrophysics Data System (ADS)
Huang, Chengcheng; Zheng, Xiaogu; Tait, Andrew; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Li, Tao; Wang, Zhonglei
2014-01-01
Partial thin-plate smoothing spline model is used to construct the trend surface. Correction of the spline estimated trend surface is often necessary in practice. Cressman weight is modified and applied in residual correction. The modified Cressman weight performs better than the Cressman weight. A method for estimating the error covariance matrix of the gridded field is provided.
The Existence of Smooth Densities for the Prediction, Filtering and Smoothing Problems
1990-12-20
The report's appended publications include "Martingale Representation and Hedging Policies" by D. B. Colwell, R. J. Elliott, and P. E. Kopp (Stochastic Processes and their Applications, accepted), in which the representation is determined by elementary methods in the Markov situation, with applications to hedging portfolios in finance.
Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve
1987-01-01
Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.
Kongsted, A; Jørgensen, L V; Bendix, T; Korsholm, L; Leboeuf-Yde, C
2007-11-01
To evaluate whether smooth pursuit eye movements differed between patients with long-lasting whiplash-associated disorders and controls when using a purely computerized method for the eye movement analysis. Cross-sectional study comparing patients with whiplash-associated disorders and controls who had not been exposed to head or neck trauma and had no notable neck complaints. Smooth pursuit eye movements were registered while the subjects were seated with and without rotated cervical spine. Thirty-four patients with whiplash-associated disorders with symptoms more than six months after a car collision and 60 controls. Smooth pursuit eye movements were almost identical in patients with chronic whiplash-associated disorders and controls, both when the neck was rotated and in the neutral position. Disturbed smooth pursuit eye movements do not appear to be a distinct feature in patients with chronic whiplash-associated disorders. This is in contrast to results of previous studies and may be due to the fact that analyses were performed in a computerized and objective manner. Other possible reasons for the discrepancy to previous studies are discussed.
Non-smooth Hopf-type bifurcations arising from impact–friction contact events in rotating machinery
Mora, Karin; Budd, Chris; Glendinning, Paul; Keogh, Patrick
2014-01-01
We analyse the novel dynamics arising in a nonlinear rotor dynamic system by investigating the discontinuity-induced bifurcations corresponding to collisions with the rotor housing (touchdown bearing surface interactions). The simplified Föppl/Jeffcott rotor with clearance and mass unbalance is modelled by a two degree of freedom impact–friction oscillator, as appropriate for a rigid rotor levitated by magnetic bearings. Two types of motion observed in experiments are of interest in this paper: no contact and repeated instantaneous contact. We study how these are affected by damping and stiffness present in the system using analytical and numerical piecewise-smooth dynamical systems methods. By studying the impact map, we show that these types of motion arise at a novel non-smooth Hopf-type bifurcation from a boundary equilibrium bifurcation point for certain parameter values. A local analysis of this bifurcation point allows us a complete understanding of this behaviour in a general setting. The analysis identifies criteria for the existence of such smooth and non-smooth bifurcations, which is an essential step towards achieving reliable and robust controllers that can take compensating action. PMID:25383034
Perry, Thomas Ernest; Zha, Hongyuan; Zhou, Ke; Frias, Patricio; Zeng, Dadan; Braunstein, Mark
2014-02-01
Electronic health records possess critical predictive information for machine-learning-based diagnostic aids. However, many traditional machine learning methods fail to simultaneously integrate textual data into the prediction process because of its high dimensionality. In this paper, we present a supervised method using Laplacian Eigenmaps to enable existing machine learning methods to estimate both low-dimensional representations of textual data and accurate predictors based on these low-dimensional representations at the same time. We present a supervised Laplacian Eigenmap method to enhance predictive models by embedding textual predictors into a low-dimensional latent space, which preserves the local similarities among textual data in high-dimensional space. The proposed implementation performs alternating optimization using gradient descent. For the evaluation, we applied our method to over 2000 patient records from a large single-center pediatric cardiology practice to predict if patients were diagnosed with cardiac disease. In our experiments, we consider relatively short textual descriptions because of data availability. We compared our method with latent semantic indexing, latent Dirichlet allocation, and local Fisher discriminant analysis. The results were assessed using four metrics: the area under the receiver operating characteristic curve (AUC), Matthews correlation coefficient (MCC), specificity, and sensitivity. The results indicate that supervised Laplacian Eigenmaps was the highest performing method in our study, achieving 0.782 and 0.374 for AUC and MCC, respectively. Supervised Laplacian Eigenmaps showed an increase of 8.16% in AUC and 20.6% in MCC over the baseline that excluded textual data and a 2.69% and 5.35% increase in AUC and MCC, respectively, over unsupervised Laplacian Eigenmaps. As a solution, we present a supervised Laplacian Eigenmap method to embed textual predictors into a low-dimensional Euclidean space. This method allows many existing machine learning predictors to effectively and efficiently capture the potential of textual predictors, especially those based on short texts.
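The general pipeline (embed textual predictors into a low-dimensional space, then fit a predictor on the embedding) can be sketched with the standard unsupervised Laplacian Eigenmaps available in scikit-learn as SpectralEmbedding; the supervised variant and its alternating gradient-descent optimization from the paper are not reproduced here. The toy notes, labels, and neighbor settings are assumptions.

```python
# Sketch: embed short clinical texts with (unsupervised) Laplacian Eigenmaps,
# then fit a simple classifier on the low-dimensional representation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import SpectralEmbedding
from sklearn.linear_model import LogisticRegression

notes = ["murmur noted on exam", "normal sinus rhythm", "ventricular septal defect",
         "no cardiac findings", "patent ductus arteriosus", "routine well visit"]
labels = [1, 0, 1, 0, 1, 0]           # toy labels: 1 = cardiac disease diagnosed

X_text = TfidfVectorizer().fit_transform(notes).toarray()
X_low = SpectralEmbedding(n_components=2, affinity="nearest_neighbors",
                          n_neighbors=3).fit_transform(X_text)
clf = LogisticRegression().fit(X_low, labels)
print("training accuracy:", clf.score(X_low, labels))
```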
NASA Astrophysics Data System (ADS)
Zeng, Fanhai; Zhang, Zhongqiang; Karniadakis, George Em
2017-12-01
Starting with the asymptotic expansion of the error equation of the shifted Grünwald–Letnikov formula, we derive a new modified weighted shifted Grünwald–Letnikov (WSGL) formula by introducing appropriate correction terms. We then apply one special case of the modified WSGL formula to solve multi-term fractional ordinary and partial differential equations, and we prove the linear stability and second-order convergence for both smooth and non-smooth solutions. We show theoretically and numerically that numerical solutions up to certain accuracy can be obtained with only a few correction terms. Moreover, the correction terms can be tuned according to the fractional derivative orders without explicitly knowing the analytical solutions. Numerical simulations verify the theoretical results and demonstrate that the new formula leads to better performance compared to other known numerical approximations with similar resolution.
On The Calculation Of Derivatives From Digital Information
NASA Astrophysics Data System (ADS)
Pettett, Christopher G.; Budney, David R.
1982-02-01
Biomechanics analysis frequently requires cinematographic studies as a first step toward understanding the essential mechanics of a sport or exercise. In order to understand the exertion by the athlete, cinematography is used to establish the kinematics from which the energy exchanges can be considered and the equilibrium equations can be studied. Errors in the raw digital information necessitate smoothing of the data before derivatives can be obtained. Researchers employ a variety of curve-smoothing techniques including filtering and polynomial spline methods. It is essential that the researcher understands the accuracy which can be expected in velocities and accelerations obtained from smoothed digital information. This paper considers particular types of data inherent in athletic motion and the expected accuracy of calculated velocities and accelerations using typical error distributions in the raw digital information. Included in this paper are high acceleration, impact and smooth motion types of data.
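As a small illustration of why digitized displacement data must be smoothed before differentiation, the sketch below recovers velocity and acceleration from noisy free-fall positions with a Savitzky-Golay filter (one of several smoothing options; the paper also discusses splines and other filters). The sampling rate and noise level are assumptions.

```python
# Sketch: velocity and acceleration from noisy digitised displacement data.
import numpy as np
from scipy.signal import savgol_filter

dt = 1 / 100.0                                    # assumed 100 Hz digitisation rate
t = np.arange(0, 2, dt)
true_pos = 0.5 * 9.81 * t**2                      # smooth motion (free fall)
noisy_pos = true_pos + np.random.default_rng(3).normal(0, 0.005, t.size)

# Smoothed derivatives via a Savitzky-Golay filter
vel = savgol_filter(noisy_pos, window_length=31, polyorder=3, deriv=1, delta=dt)
acc = savgol_filter(noisy_pos, window_length=31, polyorder=3, deriv=2, delta=dt)
print("estimated acceleration (mean):", acc.mean().round(2), "m/s^2")
```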
THE EFFECT OF SMOOTH MUSCLE ON THE INTERCELLULAR SPACES IN TOAD URINARY BLADDER
DiBona, Donald R.; Civan, Mortimer M.
1970-01-01
Phase microscopy of toad urinary bladder has demonstrated that vasopressin can cause an enlargement of the epithelial intercellular spaces under conditions of no net transfer of water or sodium. The suggestion that this phenomenon is linked to the hormone's action as a smooth muscle relaxant has been tested and verified with the use of other agents affecting smooth muscle: atropine and adenine compounds (relaxants), K+ and acetylcholine (contractants). Furthermore, it was possible to reduce the size and number of intercellular spaces, relative to a control, while increasing the rate of osmotic water flow. A method for quantifying these results has been developed and shows that they are, indeed, significant. It is concluded, therefore, that the configuration of intercellular spaces is not a reliable index of water flow across this epithelium and that such a morphologic-physiologic relationship is tenuous in any epithelium supported by a submucosa rich in smooth muscle. PMID:4915450
Smoothed dissipative particle dynamics with angular momentum conservation
NASA Astrophysics Data System (ADS)
Müller, Kathrin; Fedosov, Dmitry A.; Gompper, Gerhard
2015-01-01
Smoothed dissipative particle dynamics (SDPD) combines two popular mesoscopic techniques, the smoothed particle hydrodynamics and dissipative particle dynamics (DPD) methods, and can be considered as an improved dissipative particle dynamics approach. Despite several advantages of the SDPD method over the conventional DPD model, the original formulation of SDPD by Español and Revenga (2003) [9], lacks angular momentum conservation, leading to unphysical results for problems where the conservation of angular momentum is essential. To overcome this limitation, we extend the SDPD method by introducing a particle spin variable such that local and global angular momentum conservation is restored. The new SDPD formulation (SDPD+a) is directly derived from the Navier-Stokes equation for fluids with spin, while thermal fluctuations are incorporated similarly to the DPD method. We test the new SDPD method and demonstrate that it properly reproduces fluid transport coefficients. Also, SDPD with angular momentum conservation is validated using two problems: (i) the Taylor-Couette flow with two immiscible fluids and (ii) a tank-treading vesicle in shear flow with a viscosity contrast between inner and outer fluids. For both problems, the new SDPD method leads to simulation predictions in agreement with the corresponding analytical theories, while the original SDPD method fails to capture properly physical characteristics of the systems due to violation of angular momentum conservation. In conclusion, the extended SDPD method with angular momentum conservation provides a new approach to tackle fluid problems such as multiphase flows and vesicle/cell suspensions, where the conservation of angular momentum is essential.
Adaptive non-local smoothing-based weberface for illumination-insensitive face recognition
NASA Astrophysics Data System (ADS)
Yao, Min; Zhu, Changming
2017-07-01
Compensating the illumination of a face image is an important process for achieving effective face recognition under severe illumination conditions. This paper presents a novel illumination normalization method which specifically considers removing the illumination boundaries as well as reducing the regional illumination. We begin with an analysis of the commonly used reflectance model and then detail the hybrid usage of adaptive non-local smoothing and the local information coding based on Weber's law. The effectiveness and advantages of this combination are evidenced visually and experimentally. Results on the Extended YaleB database show better performance than several other well-known methods.
Nonparametric methods for doubly robust estimation of continuous treatment effects.
Kennedy, Edward H; Ma, Zongming; McHugh, Matthew D; Small, Dylan S
2017-09-01
Continuous treatments (e.g., doses) arise often in practice, but many available causal effect estimators are limited by either requiring parametric models for the effect curve, or by not allowing doubly robust covariate adjustment. We develop a novel kernel smoothing approach that requires only mild smoothness assumptions on the effect curve, and still allows for misspecification of either the treatment density or outcome regression. We derive asymptotic properties and give a procedure for data-driven bandwidth selection. The methods are illustrated via simulation and in a study of the effect of nurse staffing on hospital readmissions penalties.
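A plain (non-doubly-robust) kernel smoother conveys the basic estimation step: estimate the outcome at each treatment level as a kernel-weighted average. The sketch below uses a Nadaraya-Watson smoother with an assumed bandwidth and synthetic dose-outcome data; the doubly robust weighting and data-driven bandwidth selection of the paper are not reproduced.

```python
# Sketch: Nadaraya-Watson kernel smoothing of outcome vs. continuous treatment.
import numpy as np

rng = np.random.default_rng(4)
dose = rng.uniform(0, 10, 500)
outcome = np.sin(dose) + 0.1 * dose + rng.normal(0, 0.3, dose.size)

def nw_smooth(x0, x, y, bandwidth=0.8):
    """Gaussian-kernel weighted mean of y at evaluation point x0."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    return np.sum(w * y) / np.sum(w)

grid = np.linspace(0, 10, 21)
curve = np.array([nw_smooth(g, dose, outcome) for g in grid])
print(curve.round(2))
```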
A monolithic homotopy continuation algorithm with application to computational fluid dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.; Zingg, David W.
2016-09-01
A new class of homotopy continuation methods is developed suitable for globalizing quasi-Newton methods for large sparse nonlinear systems of equations. The new continuation methods, described as monolithic homotopy continuation, differ from the classical predictor-corrector algorithm in that the predictor and corrector phases are replaced with a single phase which includes both a predictor and corrector component. Conditional convergence and stability are proved analytically. Using a Laplacian-like operator to construct the homotopy, the new algorithm is shown to be more efficient than the predictor-corrector homotopy continuation algorithm as well as an implementation of the widely-used pseudo-transient continuation algorithm for some inviscid and turbulent, subsonic and transonic external aerodynamic flows over the ONERA M6 wing and the NACA 0012 airfoil using a parallel implicit Newton-Krylov finite-difference flow solver.
Mean phase predictor for maximum a posteriori demodulator
NASA Technical Reports Server (NTRS)
Altes, Richard A. (Inventor)
1996-01-01
A system and method for optimal maximum a posteriori (MAP) demodulation using a novel mean phase predictor. The mean phase predictor conducts cumulative averaging over multiple blocks of phase samples to provide accurate prior mean phases, to be input into a MAP phase estimator.
Predictor symbology in computer-generated pictorial displays
NASA Technical Reports Server (NTRS)
Grunwald, A. J.
1981-01-01
The display under investigation is a tunnel display for the four-dimensional commercial aircraft approach-to-landing under instrument flight rules. It is investigated whether more complex predictive information, such as a three-dimensional perspective vehicle symbol predicting the future vehicle position as well as future vehicle attitude angles, contributes to a better system response, and suitable predictor laws for the predictor motions are formulated. Methods for utilizing the predictor symbol in controlling the forward velocity of the aircraft in four-dimensional approaches are investigated. The simulator tests show that the complex perspective vehicle symbol yields improved damping in the lateral response as compared to a flat two-dimensional predictor cross, but yields generally larger vertical deviations. Methods of using the predictor symbol in controlling the forward velocity of the vehicle are shown to be effective. The tunnel display with superimposed perspective vehicle symbol yields very satisfactory results and pilot acceptance in the lateral control but is found to be unsatisfactory in the vertical control, as a result of too large vertical path-angle deviations.
Differential rotation in Jupiter: A comparison of methods
NASA Astrophysics Data System (ADS)
Wisdom, J.; Hubbard, W. B.
2016-03-01
Whether Jupiter rotates as a solid body or has some element of differential rotation along concentric cylinders is unknown. But Jupiter's zonal wind is not north/south symmetric so at most some average of the north/south zonal winds could be an expression of cylinders. Here we explore the signature in the gravitational moments of such a smooth differential rotation. We carry out this investigation with two general methods for solving for the interior structure of a differentially rotating planet: the CMS method of Hubbard (Hubbard, W.B. [2013]. Astrophys. J. 768, 1-8) and the CLC method of Wisdom (Wisdom, J. [1996]. Non-Perturbative Hydrostatic Equilibrium. http://web.mit.edu/wisdom/www/interior.pdf). The two methods are in remarkable agreement. We find that for smooth differential rotation the moments do not level off as they do for strong differential rotation.
Embedded WENO: A design strategy to improve existing WENO schemes
NASA Astrophysics Data System (ADS)
van Lith, Bart S.; ten Thije Boonkkamp, Jan H. M.; IJzerman, Wilbert L.
2017-02-01
Embedded WENO methods utilise all adjacent smooth substencils to construct a desirable interpolation. Conventional WENO schemes under-use this possibility close to large gradients or discontinuities. We develop a general approach for constructing embedded versions of existing WENO schemes. Embedded methods based on the WENO schemes of Jiang and Shu [1] and on the WENO-Z scheme of Borges et al. [2] are explicitly constructed. Several possible choices are presented that result in either better spectral properties or a higher order of convergence for sufficiently smooth solutions. However, these improvements carry over to discontinuous solutions. The embedded methods are demonstrated to be indeed improvements over their standard counterparts by several numerical examples. All the embedded methods presented have no added computational effort compared to their standard counterparts.
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
Using Perturbation Theory to Reduce Noise in Diffusion Tensor Fields
Bansal, Ravi; Staib, Lawrence H.; Xu, Dongrong; Laine, Andrew F.; Liu, Jun; Peterson, Bradley S.
2009-01-01
We propose the use of Perturbation theory to reduce noise in Diffusion Tensor (DT) fields. Diffusion Tensor Imaging (DTI) encodes the diffusion of water molecules along different spatial directions in a positive-definite, 3 × 3 symmetric tensor. Eigenvectors and eigenvalues of DTs allow the in vivo visualization and quantitative analysis of white matter fiber bundles across the brain. The validity and reliability of these analyses are limited, however, by the low spatial resolution and low Signal-to-Noise Ratio (SNR) in DTI datasets. Our procedures can be applied to improve the validity and reliability of these quantitative analyses by reducing noise in the tensor fields. We model a tensor field as a three-dimensional Markov Random Field and then compute the likelihood and the prior terms of this model using Perturbation theory. The prior term constrains the tensor field to be smooth, whereas the likelihood term constrains the smoothed tensor field to be similar to the original field. Thus, the proposed method generates a smoothed field that is close in structure to the original tensor field. We evaluate the performance of our method both visually and quantitatively using synthetic and real-world datasets. We quantitatively assess the performance of our method by computing the SNR for eigenvalues and the coherence measures for eigenvectors of DTs across tensor fields. In addition, we quantitatively compare the performance of our procedures with the performance of one method that uses a Riemannian distance to compute the similarity between two tensors, and with another method that reduces noise in tensor fields by anisotropically filtering the diffusion weighted images that are used to estimate diffusion tensors. These experiments demonstrate that our method significantly increases the coherence of the eigenvectors and the SNR of the eigenvalues, while simultaneously preserving the fine structure and boundaries between homogeneous regions, in the smoothed tensor field. PMID:19540791
Kundu, Suman; Mazumdar, Madhu; Ferket, Bart
2017-04-19
The area under the ROC curve (AUC) of risk models is known to be influenced by differences in case-mix and effect size of predictors. The impact of heterogeneity in correlation among predictors has, however, been underinvestigated. We sought to evaluate how correlation among predictors affects the AUC in development and external populations. We simulated hypothetical populations using two different methods based on means, standard deviations, and correlation of two continuous predictors. In the first approach, the distribution and correlation of predictors were assumed for the total population. In the second approach, these parameters were modeled conditional on disease status. In both approaches, multivariable logistic regression models were fitted to predict disease risk in individuals. Each risk model developed in a population was validated in the remaining populations to investigate external validity. For both approaches, we observed that the magnitude of the AUC in the development and external populations depends on the correlation among predictors. Lower AUCs were estimated in scenarios of both strong positive and negative correlation, depending on the direction of predictor effects and the simulation method. However, when adjusted effect sizes of predictors were specified in the opposite directions, increasingly negative correlation consistently improved the AUC. AUCs in external validation populations were higher or lower than in the derivation cohort, even in the presence of similar predictor effects. Discrimination of risk prediction models should be assessed in various external populations with different correlation structures to make better inferences about model generalizability.
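The first simulation approach described above can be sketched directly: draw two correlated continuous predictors for the total population, generate disease status from a logistic model, and compute the AUC of the linear predictor. The correlation, effect sizes, and intercept below are illustrative assumptions.

```python
# Sketch: effect of predictor correlation on the AUC of a logistic risk model.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
rho = 0.6                                        # assumed correlation between predictors
cov = [[1.0, rho], [rho, 1.0]]
X = rng.multivariate_normal(mean=[0, 0], cov=cov, size=10_000)

beta = np.array([0.8, 0.8])                      # assumed adjusted predictor effects
p = 1.0 / (1.0 + np.exp(-(X @ beta - 1.0)))      # intercept -1.0 sets the prevalence
y = rng.binomial(1, p)

risk = X @ beta                                  # linear predictor used as risk score
print("AUC at rho=0.6:", round(roc_auc_score(y, risk), 3))
```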
Relations among Socioeconomic Status, Age, and Predictors of Phonological Awareness
ERIC Educational Resources Information Center
McDowell, Kimberly D.; Lonigan, Christopher J.; Goldstein, Howard
2007-01-01
Purpose: This study simultaneously examined predictors of phonological awareness within the framework of 2 theories: the phonological distinctness hypothesis and the lexical restructuring model. Additionally, age as a moderator of the relations between predictor variables and phonological awareness was examined. Method: This cross-sectional…
NASA Astrophysics Data System (ADS)
Mahadewi, Alfiani Guntari; Christina, Daisy; Hermansyah, Heri; Wijanarko, Anondho; Farida, Siti; Adawiyah, Robiatul; Rohmatin, Etin; Sahlan, Muhamad
2018-02-01
The increase in fungal resistance against the antifungal drugs available in the market reduces the effectiveness of treatment for candidiasis. Propolis contains various compounds with antifungal activity against Candida albicans, but the content of each type is very diverse. The samples used were Sulawesi propolis of the smooth type (taken from inside the nest), the rough type (taken from outside the hive), and a mix (a combination of both). An anti-C. albicans marker molecule is a marker compound for selecting propolis with the ability to overcome candidiasis. The initial step was to measure flavonoid and phenolic contents by UV-Vis spectrometry; no single sample was superior in every substance, so the propolis could not be selected directly. The mix propolis had the highest phenolic content (5.109%), whereas the smooth propolis had the highest flavonoid content (16.38%). The propolis was then evaluated by an antifungal activity test using the well diffusion method: at propolis concentrations of 5% and 7%, the inhibition zone diameters of the smooth and rough propolis were the same (10 mm), while the mix propolis performed better, with inhibition zones of 12 mm and 13 mm. In this study, the phenolic content plays a major role in the antifungal activity.
Fast global image smoothing based on weighted least squares.
Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N
2014-12-01
This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves results of a quality comparable to the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
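The core 1D subsystem of the method, a weighted-least-squares smoothing problem solved with a linear-time tridiagonal solver, can be sketched as follows. The weights, the λ value, and the toy step signal are assumptions; the separable multi-pass d-dimensional scheme and guidance-image weighting of the paper are not reproduced.

```python
# Sketch: 1D edge-preserving weighted-least-squares smoothing,
# solving (I + λ·L_w) u = f with a banded (tridiagonal) solver.
import numpy as np
from scipy.linalg import solve_banded

def smooth_1d(f, w, lam=20.0):
    """f: 1D signal; w[i]: edge-preserving weight between samples i and i+1."""
    n = f.size
    upper = np.zeros(n); diag = np.ones(n); lower = np.zeros(n)
    upper[1:] = -lam * w            # superdiagonal (shifted layout for solve_banded)
    lower[:-1] = -lam * w           # subdiagonal
    diag[:-1] += lam * w            # add w[i] coupling to sample i
    diag[1:] += lam * w             # add w[i-1] coupling to sample i
    ab = np.vstack([upper, diag, lower])
    return solve_banded((1, 1), ab, f)

rng = np.random.default_rng(6)
signal = np.concatenate([np.zeros(50), np.ones(50)]) + rng.normal(0, 0.1, 100)
weights = np.exp(-np.abs(np.diff(signal)) / 0.1)   # small weight across the step edge
print(smooth_1d(signal, weights)[45:55].round(2))  # noise smoothed, edge preserved
```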
Cid, Jaime A; von Davier, Alina A
2015-05-01
Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
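A hedged sketch of the continuization step discussed above: a discrete score distribution is smoothed by mixing a kernel of bandwidth h at every score point, with either a Gaussian or an Epanechnikov kernel. The data are synthetic and this is not the full kernel-equating machinery of the article.

import numpy as np

def continuize(scores, probs, grid, h=1.0, kernel="gaussian"):
    # Kernel-smoothed density f(x) = sum_j p_j K_h(x - x_j) evaluated on a grid.
    u = (grid[:, None] - scores[None, :]) / h
    if kernel == "gaussian":
        k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    elif kernel == "epanechnikov":
        k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)
    else:
        raise ValueError(kernel)
    return (k @ probs) / h

# Example: a skewed discrete score distribution on 0..40 (synthetic).
scores = np.arange(41)
probs = np.random.dirichlet(np.linspace(5.0, 0.5, 41))
grid = np.linspace(-2, 42, 500)
f_gauss = continuize(scores, probs, grid, h=0.8, kernel="gaussian")
f_epan = continuize(scores, probs, grid, h=0.8, kernel="epanechnikov")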
Interpreting SBUV Smoothing Errors: an Example Using the Quasi-biennial Oscillation
NASA Technical Reports Server (NTRS)
Kramarova, N. A.; Bhartia, Pawan K.; Frith, S. M.; McPeters, R. D.; Stolarski, R. S.
2013-01-01
The Solar Backscattered Ultraviolet (SBUV) observing system consists of a series of instruments that have been measuring both total ozone and the ozone profile since 1970. SBUV measures the profile in the upper stratosphere with a resolution that is adequate to resolve most of the important features of that region. In the lower stratosphere the limited vertical resolution of the SBUV system means that there are components of the profile variability that SBUV cannot measure. The smoothing error, as defined in the optimal estimation retrieval method, describes the components of the profile variability that the SBUV observing system cannot measure. In this paper we provide a simple visual interpretation of the SBUV smoothing error by comparing SBUV ozone anomalies in the lower tropical stratosphere associated with the quasi-biennial oscillation (QBO) to anomalies obtained from the Aura Microwave Limb Sounder (MLS). We describe a methodology for estimating the SBUV smoothing error for monthly zonal mean (mzm) profiles. We construct covariance matrices that describe the statistics of the inter-annual ozone variability using a 6-year record of Aura MLS and ozonesonde data. We find that the smoothing error is of the order of 1 percent between 10 and 1 hPa, increasing up to 15-20 percent in the troposphere and up to 5 percent in the mesosphere. The smoothing error for total ozone columns is small, mostly less than 0.5 percent. We demonstrate that by merging the partial ozone columns from several layers in the lower stratosphere/troposphere into one thick layer, we can minimize the smoothing error. We recommend using the following layer combinations to reduce the smoothing error to about 1 percent: surface to 25 hPa (16 hPa) outside (inside) of the narrow equatorial zone 20°S-20°N.
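A hedged sketch of the standard optimal-estimation expression behind the smoothing error discussed above, S_s = (A - I) S_a (A - I)^T, where A is the retrieval's averaging-kernel matrix and S_a describes the natural (e.g. inter-annual) ozone variability. The averaging kernel and covariance below are toy constructions, not the paper's MLS/ozonesonde-based matrices.

import numpy as np

def smoothing_error_covariance(A, S_a):
    # Smoothing error covariance S_s = (A - I) S_a (A - I)^T.
    K = A - np.eye(A.shape[0])
    return K @ S_a @ K.T

# Toy example: a coarse-resolution averaging kernel (Gaussian row blur) and
# a correlated natural-variability covariance on 21 layers.
n_layers = 21
idx = np.arange(n_layers)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
S_a = 4.0 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 3.0)

S_s = smoothing_error_covariance(A, S_a)
per_layer_error = np.sqrt(np.diag(S_s))   # smoothing error per layer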
Al-Qudah, M.; Alkahtani, R.; Akbarali, H.I.; Murthy, K.S.; Grider, J.R.
2015-01-01
Background Brain-derived neurotrophic factor (BDNF) is a neurotrophin present in the intestine where it participates in survival and growth of enteric neurons, augmentation of enteric circuits, and stimulation of intestinal peristalsis and propulsion. Previous studies largely focused on the role of neural and mucosal BDNF. The expression and release of BDNF from intestinal smooth muscle and its interaction with enteric neuropeptides have not been studied in the gut. Methods The expression and secretion of BDNF from smooth muscle cultured from rabbit longitudinal intestinal muscle in response to substance P and pituitary adenylate cyclase activating peptide (PACAP) were measured by western blot and ELISA. BDNF mRNA was measured by RT-PCR. Key Results The expression of BDNF protein and mRNA was greater in smooth muscle cells from the longitudinal muscle layer than from the circular muscle layer. PACAP and substance P increased the expression of BDNF protein and mRNA in cultured longitudinal smooth muscle cells. PACAP and substance P also stimulated the secretion of BDNF from cultured longitudinal smooth muscle cells. Chelation of intracellular calcium with BAPTA prevented the substance P-induced increase in BDNF mRNA and protein expression as well as the substance P-induced secretion of BDNF. Conclusions & Inferences Neuropeptides known to be present in enteric neurons innervating the longitudinal layer increase the expression of BDNF mRNA and protein in smooth muscle cells and stimulate the release of BDNF. Considering the ability of BDNF to enhance smooth muscle contraction, this autocrine loop may partially explain the characteristic hypercontractility of longitudinal muscle in inflammatory bowel disease. PMID:26088546
A smoothing algorithm using cubic spline functions
NASA Technical Reports Server (NTRS)
Smith, R. E., Jr.; Price, J. M.; Howser, L. M.
1974-01-01
Two algorithms are presented for smoothing arbitrary sets of data. They are the explicit variable algorithm and the parametric variable algorithm. The former would be used where large gradients are not encountered because of the smaller amount of calculation required. The latter would be used if the data being smoothed were double valued or experienced large gradients. Both algorithms use a least-squares technique to obtain a cubic spline fit to the data. The advantage of the spline fit is that the first and second derivatives are continuous. This method is best used in an interactive graphics environment so that the junction values for the spline curve can be manipulated to improve the fit.
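A minimal illustration of a least-squares cubic smoothing spline in the spirit of the explicit-variable algorithm described above. This uses SciPy's spline rather than the report's own code, and the data are synthetic; the smoothing factor s plays the role of the user-adjustable fit control.

import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.1 * np.random.randn(x.size)

spline = UnivariateSpline(x, y, k=3, s=2.0)   # k=3 -> cubic, s controls smoothness
y_smooth = spline(x)
dy = spline.derivative(1)(x)                  # continuous first derivative
d2y = spline.derivative(2)(x)                 # continuous second derivative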
eulerAPE: Drawing Area-Proportional 3-Venn Diagrams Using Ellipses
Micallef, Luana; Rodgers, Peter
2014-01-01
Venn diagrams with three curves are used extensively in various medical and scientific disciplines to visualize relationships between data sets and facilitate data analysis. The area of the regions formed by the overlapping curves is often directly proportional to the cardinality of the depicted set relation or any other related quantitative data. Drawing these diagrams manually is difficult and current automatic drawing methods do not always produce appropriate diagrams. Most methods depict the data sets as circles, as they perceptually pop out as complete distinct objects due to their smoothness and regularity. However, circles cannot draw accurate diagrams for most 3-set data and so the generated diagrams often have misleading region areas. Other methods use polygons to draw accurate diagrams. However, polygons are non-smooth and non-symmetric, so the curves are not easily distinguishable and the diagrams are difficult to comprehend. Ellipses are more flexible than circles and are similarly smooth, but none of the current automatic drawing methods use ellipses. We present eulerAPE as the first method and software that uses ellipses for automatically drawing accurate area-proportional Venn diagrams for 3-set data. We describe the drawing method adopted by eulerAPE and we discuss our evaluation of the effectiveness of eulerAPE and ellipses for drawing random 3-set data. We compare eulerAPE and various other methods that are currently available and we discuss differences between their generated diagrams in terms of accuracy and ease of understanding for real world data. PMID:25032825
Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut
2014-05-01
Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001) despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Optimization-based scatter estimation using primary modulation for computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yi; Ma, Jingchen; Zhao, Jun, E-mail: junzhao
Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. Simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of the CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy in scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.
A new smoothing function to introduce long-range electrostatic effects in QM/MM calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Dong; Department of Chemistry, University of Wisconsin, Madison, Wisconsin 53706; Duke, Robert E.
2015-07-28
A new method to account for long range electrostatic contributions is proposed and implemented for quantum mechanics/molecular mechanics long range electrostatic correction (QM/MM-LREC) calculations. This method involves the use of the minimum image convention under periodic boundary conditions and a new smoothing function for energies and forces at the cutoff boundary for the Coulomb interactions. Compared to conventional QM/MM calculations without long-range electrostatic corrections, the new method effectively includes effects on the MM environment in the primary image from its replicas in the neighborhood. QM/MM-LREC offers three useful features including the avoidance of calculations in reciprocal space (k-space), with the concomitant avoidance of having to reproduce (analytically or approximately) the QM charge density in k-space, and the straightforward availability of analytical Hessians. The new method is tested and compared with results from smooth particle mesh Ewald (PME) for three systems including a box of neat water, a double proton transfer reaction, and the geometry optimization of the critical point structures for the rate limiting step of the DNA dealkylase AlkB. As with other smoothing or shifting functions, relatively large cutoffs are necessary to achieve comparable accuracy with PME. For the double-proton transfer reaction, the use of a 22 Å cutoff shows a close reaction energy profile and geometries of stationary structures with QM/MM-LREC compared to conventional QM/MM with no truncation. Geometry optimization of stationary structures for the hydrogen abstraction step by AlkB shows some differences between QM/MM-LREC and the conventional QM/MM. These differences underscore the necessity of the inclusion of the long-range electrostatic contribution.
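The abstract does not give the functional form of the new smoothing function, so the sketch below shows only a generic switching function of the kind commonly used to damp pairwise Coulomb interactions smoothly to zero at a cutoff; it is explicitly not the QM/MM-LREC function itself, and the onset/cutoff radii are illustrative.

import numpy as np

def switching(r, r_on, r_cut):
    # Cubic smoothstep: S = 1 for r <= r_on, S = 0 for r >= r_cut, with a
    # continuous first derivative at both ends (NOT the paper's function).
    x = np.clip((r_cut - r) / (r_cut - r_on), 0.0, 1.0)
    return x * x * (3.0 - 2.0 * x)

def switched_coulomb(r, q1, q2, r_on=18.0, r_cut=22.0):
    # Coulomb energy (arbitrary units) damped so that energy and force
    # vanish smoothly at the cutoff boundary.
    return q1 * q2 / r * switching(r, r_on, r_cut)

r = np.linspace(1.0, 25.0, 500)
energy = switched_coulomb(r, 1.0, -1.0)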
Smith predictor-based multiple periodic disturbance compensation for long dead-time processes
NASA Astrophysics Data System (ADS)
Tan, Fang; Li, Han-Xiong; Shen, Ping
2018-05-01
Many disturbance rejection methods have been proposed for processes with dead-time, while these existing methods may not work well under multiple periodic disturbances. In this paper, a multiple periodic disturbance rejection is proposed under the Smith predictor configuration for processes with long dead-time. One feedback loop is added to compensate periodic disturbance while retaining the advantage of the Smith predictor. With information of the disturbance spectrum, the added feedback loop can remove multiple periodic disturbances effectively. The robust stability can be easily maintained through the rigorous analysis. Finally, simulation examples demonstrate the effectiveness and robustness of the proposed method for processes with long dead-time.
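A hedged, discrete-time sketch of the classic Smith predictor configuration that the paper builds on, for a first-order plant with long dead time and a periodic output disturbance. The added periodic-disturbance compensation loop proposed in the paper is not implemented here, and all plant and controller parameters are illustrative assumptions.

import numpy as np

K, tau, L = 1.0, 10.0, 20.0          # plant gain, time constant, dead time (assumed)
Ts = 0.1
a, b = np.exp(-Ts / tau), K * (1.0 - np.exp(-Ts / tau))
d = int(round(L / Ts))               # dead time in samples
Kp, Ki = 2.0, 0.2                    # PI tuned on the delay-free model (assumed)

N = 6000
r = np.ones(N)                                               # unit set-point
dist = 0.1 * np.sin(2 * np.pi * 0.02 * Ts * np.arange(N))    # periodic output disturbance

y = np.zeros(N)      # true plant output
u = np.zeros(N)      # control signal
ym = np.zeros(N)     # delay-free internal model output
integ = 0.0
for k in range(N - 1):
    y_meas = y[k] + dist[k]
    ym_delayed = ym[k - d] if k >= d else 0.0
    # Smith-predictor feedback: measurement plus (delay-free model - delayed model).
    e = r[k] - (y_meas + ym[k] - ym_delayed)
    integ += Ki * Ts * e
    u[k] = Kp * e + integ
    u_delayed = u[k - d] if k >= d else 0.0
    y[k + 1] = a * y[k] + b * u_delayed      # true plant with input dead time
    ym[k + 1] = a * ym[k] + b * u[k]         # internal model without dead time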
Critical evaluation of five methods for quantifying chewing lice (Insecta: Phthiraptera).
Clayton, D H; Drown, D M
2001-12-01
Five methods for estimating the abundance of chewing lice (Insecta: Phthiraptera) were tested. To evaluate the methods, feral pigeons (Columba livia) and 2 species of ischnoceran lice were used. The fraction of lice removed by each method was compared, and least squares linear regression was used to determine how well each method predicted total abundance. Total abundance was assessed in most cases using KOH dissolution. The 2 methods involving dead birds (body washing and post-mortem-ruffling) provided better results than 3 methods involving live birds (dust-ruffling, fumigation chambers, and visual examination). Body washing removed the largest fraction of lice (>82%) and was an extremely accurate predictor of total abundance (r2 = 0.99). Post-mortem-ruffling was also an accurate predictor of total abundance (r2 > or = 0.88), even though it removed a smaller proportion of lice (<70%) than body washing. Dust-ruffling and fumigation chambers removed even fewer lice, but were still reasonably accurate predictors of total abundance, except in the case of data sets restricted to birds with relatively few lice. Visual examination, the only method not requiring that lice be removed from the host, was an accurate predictor of louse abundance, except in the case of wing lice on lightly parasitized birds.
Spatial analysis of county-based gonorrhoea incidence in mainland China, from 2004 to 2009.
Yin, Fei; Feng, Zijian; Li, Xiaosong
2012-07-01
Gonorrhoea is one of the most common sexually transmissible infections in mainland China. Effective spatial monitoring of gonorrhoea incidence is important for successful implementation of control and prevention programs. The county-level gonorrhoea incidence rates for all of mainland China were monitored by examining spatial patterns. County-level data on gonorrhoea cases between 2004 and 2009 were obtained from the China Information System for Disease Control and Prevention. Bayesian smoothing and exploratory spatial data analysis (ESDA) methods were used to characterise the spatial distribution pattern of gonorrhoea cases. During the 6-year study period, the average annual gonorrhoea incidence was 12.41 cases per 100,000 people. Using empirical Bayes smoothed rates, the local Moran test identified one significant single-centre cluster and two significant multi-centre clusters of high gonorrhoea risk (all P-values <0.01). Bayesian smoothing and ESDA methods can assist public health officials in using gonorrhoea surveillance data to identify high-risk areas. Allocating more resources to such areas could effectively reduce gonorrhoea incidence.
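A hedged sketch of global empirical Bayes rate smoothing (the method-of-moments estimator often attributed to Marshall, 1991), which shrinks unstable small-population county rates toward the overall mean. The county counts below are made up, and the paper's specific smoothing and local Moran's I computations are not reproduced.

import numpy as np

def eb_smooth_rates(cases, population):
    cases = np.asarray(cases, dtype=float)
    pop = np.asarray(population, dtype=float)
    raw = cases / pop
    m = cases.sum() / pop.sum()                       # overall mean rate
    s2 = np.sum(pop * (raw - m) ** 2) / pop.sum()     # weighted variance of raw rates
    between = max(s2 - m / pop.mean(), 0.0)           # between-area variance (truncated at 0)
    w = between / (between + m / pop)                 # shrinkage weights
    return w * raw + (1.0 - w) * m                    # smoothed rates

# Example: rates per 100,000 for five hypothetical counties.
cases = [3, 40, 120, 0, 7]
pop = [12_000, 90_000, 450_000, 8_000, 30_000]
smoothed_per_100k = 1e5 * eb_smooth_rates(cases, pop)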
Scattering apodizer for laser beams
Summers, Mark A.; Hagen, Wilhelm F.; Boyd, Robert D.
1985-01-01
A method is disclosed for apodizing a laser beam to smooth out the production of diffraction peaks due to optical discontinuities in the path of the laser beam, such method comprising introduction of a pattern of scattering elements for reducing the peak intensity in the region of such optical discontinuities, such pattern having smoothly tapering boundaries in which the distribution density of the scattering elements is tapered gradually to produce small gradients in the distribution density, such pattern of scattering elements being effective to reduce and smooth out the diffraction effects which would otherwise be produced. The apodizer pattern may be produced by selectively blasting a surface of a transparent member with fine abrasive particles to produce a multitude of minute pits. In one embodiment, a scattering apodizer pattern is employed to overcome diffraction patterns in a multiple element crystal array for harmonic conversion of a laser beam. The interstices and the supporting grid between the crystal elements are obscured by the gradually tapered apodizer pattern of scattering elements.
MULTISCALE ADAPTIVE SMOOTHING MODELS FOR THE HEMODYNAMIC RESPONSE FUNCTION IN FMRI*
Wang, Jiaping; Zhu, Hongtu; Fan, Jianqing; Giovanello, Kelly; Lin, Weili
2012-01-01
In event-related functional magnetic resonance imaging (fMRI) data analysis, there is extensive interest in accurately and robustly estimating the hemodynamic response function (HRF) and its associated statistics (e.g., the magnitude and duration of the activation). Most methods to date are developed in the time domain and they have utilized almost exclusively the temporal information of fMRI data without accounting for the spatial information. The aim of this paper is to develop a multiscale adaptive smoothing model (MASM) in the frequency domain by integrating the spatial and temporal information to adaptively and accurately estimate HRFs pertaining to each stimulus sequence across all voxels in a three-dimensional (3D) volume. We use two sets of simulation studies and a real data set to examine the finite sample performance of MASM in estimating HRFs. Our real and simulated data analyses confirm that MASM outperforms several other state-of-the-art methods, such as the smooth finite impulse response (sFIR) model. PMID:24533041
Neighbour lists for smoothed particle hydrodynamics on GPUs
NASA Astrophysics Data System (ADS)
Winkler, Daniel; Rezavand, Massoud; Rauch, Wolfgang
2018-04-01
The efficient iteration of neighbouring particles is a performance critical aspect of any high performance smoothed particle hydrodynamics (SPH) solver. SPH solvers that implement a constant smoothing length generally divide the simulation domain into a uniform grid to reduce the computational complexity of the neighbour search. Based on this method, particle neighbours are either stored per grid cell or for each individual particle, denoted as Verlet list. While the latter approach has significantly higher memory requirements, it has the potential for a significant computational speedup. A theoretical comparison is performed to estimate the potential improvements of the method based on unknown hardware dependent factors. Subsequently, the computational performance of both approaches is empirically evaluated on graphics processing units. It is shown that the speedup differs significantly for different hardware, dimensionality and floating point precision. The Verlet list algorithm is implemented as an alternative to the cell linked list approach in the open-source SPH solver DualSPHysics and provided as a standalone software package.
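A plain NumPy sketch (2D, CPU) of the two neighbour-search strategies compared above: particles are binned into a uniform grid with cell size equal to the smoothing length h (cell linked list), and a per-particle Verlet list is then built by scanning the 3x3 neighbouring cells. This is only an illustration, not the DualSPHysics GPU implementation.

import numpy as np
from collections import defaultdict

def verlet_lists(pos, h):
    # Bin particles into grid cells of size h (cell linked list).
    cells = defaultdict(list)
    keys = np.floor(pos / h).astype(int)
    for i, (cx, cy) in enumerate(keys):
        cells[(cx, cy)].append(i)
    # Build a Verlet list per particle by scanning the 3x3 neighbouring cells.
    neigh = [[] for _ in range(len(pos))]
    for i, (cx, cy) in enumerate(keys):
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), []):
                    if j != i and np.sum((pos[i] - pos[j]) ** 2) < h * h:
                        neigh[i].append(j)
    return neigh

# Example: 1000 random particles in a unit box, smoothing length 0.05.
pos = np.random.rand(1000, 2)
neighbours = verlet_lists(pos, h=0.05)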
Verification of micro-scale photogrammetry for smooth three-dimensional object measurement
NASA Astrophysics Data System (ADS)
Sims-Waterhouse, Danny; Piano, Samanta; Leach, Richard
2017-05-01
By using sub-millimetre laser speckle pattern projection we show that photogrammetry systems are able to measure smooth three-dimensional objects with surface height deviations less than 1 μm. The projection of laser speckle patterns allows correspondences on the surface of smooth spheres to be found, and as a result, verification artefacts with low surface height deviations were measured. A combination of VDI/VDE and ISO standards were also utilised to provide a complete verification method, and determine the quality parameters for the system under test. Using the proposed method applied to a photogrammetry system, a 5 mm radius sphere was measured with an expanded uncertainty of 8.5 μm for sizing errors, and 16.6 μm for form errors with a 95 % confidence interval. Sphere spacing lengths between 6 mm and 10 mm were also measured by the photogrammetry system, and were found to have expanded uncertainties of around 20 μm with a 95 % confidence interval.
Predictor laws for pictorial flight displays
NASA Technical Reports Server (NTRS)
Grunwald, A. J.
1985-01-01
Two predictor laws are formulated and analyzed: (1) a circular path law based on constant accelerations perpendicular to the path and (2) a predictor law based on state transition matrix computations. It is shown that for both methods the predictor provides the essential lead zeros for the path-following task. However, in contrast to the circular path law, the state transition matrix law furnishes the system with additional zeros that entirely cancel out the higher-frequency poles of the vehicle dynamics. On the other hand, the circular path law yields a zero steady-state error in following a curved trajectory with a constant radius. A combined predictor law is suggested that utilizes the advantages of both methods. A simple analysis shows that the optimal prediction time mainly depends on the level of precision required in the path-following task, and guidelines for determining the optimal prediction time are given.
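A hedged sketch of the state-transition-matrix idea described above: the displayed predictor symbol is obtained by propagating the current (linearized) vehicle state ahead by the prediction time T, x(t+T) = exp(A T) x(t). The A matrix below is a toy lateral-dynamics example rather than the paper's vehicle model, and constant-input terms are omitted for brevity.

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0, 0.0],
              [0.0, -0.8, 9.81],
              [0.0, 0.0, -2.0]])     # toy states: lateral position, lateral velocity, bank angle
T = 4.0                               # prediction time (s), assumed
Phi = expm(A * T)                     # state transition matrix over T

x_now = np.array([0.0, 2.0, 0.05])    # current state estimate
x_pred = Phi @ x_now                  # predicted state shown on the display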
NASA Astrophysics Data System (ADS)
Shestopalov, D. I.; McFadden, L. A.; Golubeva, L. F.
2007-04-01
An optimization method of smoothing noisy spectra was developed to investigate faint absorption bands in the visual spectral region of reflectance spectra of asteroids and the compositional information derived from their analysis. The smoothing algorithm is called "optimal" because the algorithm determines the best running box size to separate weak absorption bands from the noise. The method was tested for its sensitivity to identifying false features in the smoothed spectrum, and its ability to correctly recover real absorption bands was tested with artificial spectra simulating asteroid reflectance spectra. After validating the method we optimally smoothed 22 vestoid spectra from SMASS1 [Xu, Sh., Binzel, R.P., Burbine, T.H., Bus, S.J., 1995. Icarus 115, 1-35]. We show that the resulting bands are not telluric features. Interpretation of the absorption bands in the asteroid spectra was based on the spectral properties of both terrestrial and meteorite pyroxenes. The bands located near 480, 505, 530, and 550 nm were assigned to spin-forbidden crystal field bands of ferrous iron, whereas the bands near 570, 600, and 650 nm are attributed to crystal field bands of trivalent chromium and/or ferric iron in low-calcium pyroxenes on the asteroids' surface. While not measured by microprobe analysis, Fe3+ site occupancy can be measured with Mössbauer spectroscopy, and is seen in trace amounts in pyroxenes. We believe that trace amounts of Fe3+ on vestoid surfaces may be due to oxidation from impacts by icy bodies. If that is the case, they should be ubiquitous in the asteroid belt wherever pyroxene absorptions are found. The pyroxene composition of four asteroids in our set is determined from the band positions of the absorptions at 505 and 1000 nm, implying that there can be orthopyroxenes over the full range of ferruginosity on the vestoid surfaces. At present we cannot unambiguously interpret the faint absorption bands that are seen in the spectra of 4005 Dyagilev, 4038 Kristina, 4147 Lennon, and 5143 Heracles. Probably there are other spectrally active materials along with pyroxenes on the surfaces of these asteroids.
Characterizing Accuracy and Precision of Glucose Sensors and Meters
2014-01-01
There is need for a method to describe precision and accuracy of glucose measurement as a smooth continuous function of glucose level rather than as a step function for a few discrete ranges of glucose. We propose and illustrate a method to generate a “Glucose Precision Profile” showing absolute relative deviation (ARD) and /or %CV versus glucose level to better characterize measurement errors at any glucose level. We examine the relationship between glucose measured by test and comparator methods using linear regression. We examine bias by plotting deviation = (test – comparator method) versus glucose level. We compute the deviation, absolute deviation (AD), ARD, and standard deviation (SD) for each data pair. We utilize curve smoothing procedures to minimize the effects of random sampling variability to facilitate identification and display of the underlying relationships between ARD or %CV and glucose level. AD, ARD, SD, and %CV display smooth continuous relationships versus glucose level. Estimates of MARD and %CV are subject to relatively large errors in the hypoglycemic range due in part to a markedly nonlinear relationship with glucose level and in part to the limited number of observations in the hypoglycemic range. The curvilinear relationships of ARD and %CV versus glucose level are helpful when characterizing and comparing the precision and accuracy of glucose sensors and meters. PMID:25037194
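A hedged sketch of a "precision profile": the absolute relative deviation (ARD) of each test/comparator pair is computed and then smoothed as a continuous function of the comparator glucose level with LOWESS. The data below are synthetic, and the authors' specific smoothing procedure is not reproduced.

import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

comparator = np.random.uniform(40, 400, 2000)                  # mg/dL (synthetic)
test = comparator * (1 + 0.06 * np.random.randn(2000)) + 2.0   # simulated meter readings

deviation = test - comparator
ard = 100.0 * np.abs(deviation) / comparator                   # percent ARD per pair
profile = lowess(ard, comparator, frac=0.3)                    # columns: glucose level, smoothed ARD
mard = ard.mean()                                              # overall MARD, for reference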
Liang, Steven Y.
2018-01-01
Microstructure images of metallic materials play a significant role in industrial applications. To address the image degradation problem of metallic materials, a novel image restoration technique based on K-means singular value decomposition (KSVD) and a smoothing penalty sparse representation (SPSR) algorithm is proposed in this work; microstructure images of aluminum alloy 7075 (AA7075) are used as examples. To begin with, to reflect the detailed structural characteristics of the damaged image, the KSVD dictionary is introduced to substitute the traditional sparse transform basis (TSTB) for sparse representation. Then, because image restoration modeling leads to a highly underdetermined equation, and traditional sparse reconstruction methods may cause instability and obvious artifacts in the reconstructed images, especially for images with many smooth regions and strong noise, the SPSR algorithm (here, q = 0.5) is designed to reconstruct the damaged image. The results of simulation and two practical cases demonstrate that the proposed method has superior performance compared with some state-of-the-art methods in terms of restoration performance factors and visual quality. Meanwhile, the grain size parameters and grain boundaries of the microstructure images are discussed before and after restoration by the proposed method. PMID:29677163
Arima model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making a prediction. The comparison is focused on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data on The Price of Crude Palm Oil (RM/tonne), Exchange Rates of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and The Price of SMR 20 Rubber Type (cents/kg), with three different time series, are used in the comparison process. Then, the forecasting accuracy of each model is measured by examining the prediction errors produced, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the time series for Exchange Rates. On the contrary, the Exponential Smoothing Method can produce a better forecast for the Exchange Rates, which have a narrow range from one point to another in their time series, while it cannot produce a better prediction for a longer forecasting period.
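A hedged sketch of the comparison described above on a synthetic series: an ARIMA model and simple exponential smoothing are fitted on a training window, a hold-out period is forecast, and both are scored with MSE, MAPE and MAD. The order (1, 1, 1) and all data are illustrative assumptions, not the study's commodity-price series.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

y = 100 + np.cumsum(np.random.randn(120))        # synthetic series of length 120
train, test = y[:108], y[108:]
h = len(test)

arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(h)
ses_fc = SimpleExpSmoothing(train).fit().forecast(h)

def scores(actual, forecast):
    # Error measures used in the comparison: MSE, MAPE (percent), MAD.
    err = actual - forecast
    return {"MSE": np.mean(err ** 2),
            "MAPE": 100.0 * np.mean(np.abs(err / actual)),
            "MAD": np.mean(np.abs(err))}

print("ARIMA:", scores(test, arima_fc))
print("SES  :", scores(test, ses_fc))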
NASA Astrophysics Data System (ADS)
Cai, Yong; Cui, Xiangyang; Li, Guangyao; Liu, Wenyang
2018-04-01
The edge-smooth finite element method (ES-FEM) can improve the computational accuracy of triangular shell elements and the mesh partition efficiency of complex models. In this paper, an approach is developed to perform explicit finite element simulations of contact-impact problems with a graphical processing unit (GPU) using a special edge-smooth triangular shell element based on ES-FEM. Of critical importance for this problem is achieving finer-grained parallelism to enable efficient data loading and to minimize communication between the device and host. Four kinds of parallel strategies are then developed to efficiently solve these ES-FEM based shell element formulas, and various optimization methods are adopted to ensure aligned memory access. Special focus is dedicated to developing an approach for the parallel construction of edge systems. A parallel hierarchy-territory contact-searching algorithm (HITA) and a parallel penalty function calculation method are embedded in this parallel explicit algorithm. Finally, the program flow is well designed, and a GPU-based simulation system is developed, using Nvidia's CUDA. Several numerical examples are presented to illustrate the high quality of the results obtained with the proposed methods. In addition, the GPU-based parallel computation is shown to significantly reduce the computing time.
Edge-augmented Fourier partial sums with applications to Magnetic Resonance Imaging (MRI)
NASA Astrophysics Data System (ADS)
Larriva-Latt, Jade; Morrison, Angela; Radgowski, Alison; Tobin, Joseph; Iwen, Mark; Viswanathan, Aditya
2017-08-01
Certain applications such as Magnetic Resonance Imaging (MRI) require the reconstruction of functions from Fourier spectral data. When the underlying functions are piecewise-smooth, standard Fourier approximation methods suffer from the Gibbs phenomenon - with associated oscillatory artifacts in the vicinity of edges and an overall reduced order of convergence in the approximation. This paper proposes an edge-augmented Fourier reconstruction procedure which uses only the first few Fourier coefficients of an underlying piecewise-smooth function to accurately estimate jump information and then incorporate it into a Fourier partial sum approximation. We provide both theoretical and empirical results showing the improved accuracy of the proposed method, as well as comparisons demonstrating superior performance over existing state-of-the-art sparse optimization-based methods.
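A hedged illustration of the problem this paper addresses: the truncated Fourier series of a piecewise-smooth function oscillates near a jump (Gibbs phenomenon). Only the plain partial sum is computed here; the edge-augmented reconstruction itself is not reproduced, and the test function is an assumed unit square wave.

import numpy as np

N = 32                                       # number of retained Fourier modes
x = np.linspace(-np.pi, np.pi, 2048, endpoint=False)
f = np.where(x < 0, -1.0, 1.0)               # piecewise-smooth test function (a jump)

# Discrete Fourier coefficients from the samples, then the partial sum |k| <= N.
c = np.fft.fft(f) / f.size
k = np.fft.fftfreq(f.size, d=1.0 / f.size)   # integer wave numbers
keep = np.abs(k) <= N
partial_sum = np.real(np.sum(c[keep][None, :] *
                             np.exp(1j * k[keep][None, :] * (x[:, None] + np.pi)),
                             axis=1))
overshoot = partial_sum.max() - 1.0          # about 0.18, i.e. ~9% of the jump (Gibbs overshoot)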
Level-set-based reconstruction algorithm for EIT lung images: first clinical results.
Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy
2012-05-01
We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.
Boosting structured additive quantile regression for longitudinal childhood obesity data.
Fenske, Nora; Fahrmeir, Ludwig; Hothorn, Torsten; Rzehak, Peter; Höhle, Michael
2013-07-25
Childhood obesity and the investigation of its risk factors has become an important public health issue. Our work is based on and motivated by a German longitudinal study including 2,226 children with up to ten measurements on their body mass index (BMI) and risk factors from birth to the age of 10 years. We introduce boosting of structured additive quantile regression as a novel distribution-free approach for longitudinal quantile regression. The quantile-specific predictors of our model include conventional linear population effects, smooth nonlinear functional effects, varying-coefficient terms, and individual-specific effects, such as intercepts and slopes. Estimation is based on boosting, a computer intensive inference method for highly complex models. We propose a component-wise functional gradient descent boosting algorithm that allows for penalized estimation of the large variety of different effects, particularly leading to individual-specific effects shrunken toward zero. This concept allows us to flexibly estimate the nonlinear age curves of upper quantiles of the BMI distribution, both on population and on individual-specific level, adjusted for further risk factors and to detect age-varying effects of categorical risk factors. Our model approach can be regarded as the quantile regression analog of Gaussian additive mixed models (or structured additive mean regression models), and we compare both model classes with respect to our obesity data.
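A hedged sketch of the distribution-free quantile idea above: gradient boosting with the quantile (pinball) loss estimates a conditional upper quantile directly. This uses scikit-learn trees on synthetic data, not the authors' component-wise functional gradient boosting for structured additive predictors, and the age/BMI values are made up.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
age = rng.uniform(0, 10, 2000)                                        # child age in years (synthetic)
bmi = 14 + 0.6 * age + (1 + 0.3 * age) * rng.gamma(2.0, 0.5, 2000)    # skewed responses (synthetic)

# Boosted estimate of the conditional 90th percentile of BMI given age.
model_q90 = GradientBoostingRegressor(loss="quantile", alpha=0.9,
                                      n_estimators=300, max_depth=2,
                                      learning_rate=0.05)
model_q90.fit(age.reshape(-1, 1), bmi)
upper_curve = model_q90.predict(np.linspace(0, 10, 101).reshape(-1, 1))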
Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.
2012-01-01
We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold. We used 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics. However, combined event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to assess which threshold had the “best” accuracy. The 16°C threshold had slightly fewer misclassifications; however, the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
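A hedged sketch of the event-metric idea described above: runs of consecutive hourly temperatures exceeding a threshold are located and summarized by their frequency, duration, magnitude and area (degree-hours above threshold). The exact definitions in the paper may differ in detail, and the hourly series below is synthetic.

import numpy as np

def event_metrics(temp, threshold):
    above = temp > threshold
    # Locate starts and (exclusive) ends of runs of exceedances.
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    events = list(zip(starts, ends))
    return {
        "frequency": len(events),
        "duration": [int(e - s) for s, e in events],                        # hours per event
        "magnitude": [float(temp[s:e].max() - threshold) for s, e in events],
        "area": [float(np.sum(temp[s:e] - threshold)) for s, e in events],  # degree-hours
    }

# Example: one synthetic summer of hourly water temperatures, 20 degree C threshold.
hours = np.arange(24 * 92)
temp = 17 + 4 * np.sin(2 * np.pi * hours / 24) + np.random.randn(hours.size)
metrics_20C = event_metrics(temp, threshold=20.0)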
Robust Surface Reconstruction via Laplace-Beltrami Eigen-Projection and Boundary Deformation
Shi, Yonggang; Lai, Rongjie; Morra, Jonathan H.; Dinov, Ivo; Thompson, Paul M.; Toga, Arthur W.
2010-01-01
In medical shape analysis, a critical problem is reconstructing a smooth surface of correct topology from a binary mask that typically has spurious features due to segmentation artifacts. The challenge is the robust removal of these outliers without affecting the accuracy of other parts of the boundary. In this paper, we propose a novel approach for this problem based on the Laplace-Beltrami (LB) eigen-projection and properly designed boundary deformations. Using the metric distortion during the LB eigen-projection, our method automatically detects the location of outliers and feeds this information to a well-composed and topology-preserving deformation. By iterating between these two steps of outlier detection and boundary deformation, we can robustly filter out the outliers without moving the smooth part of the boundary. The final surface is the eigen-projection of the filtered mask boundary that has the correct topology, desired accuracy and smoothness. In our experiments, we illustrate the robustness of our method on different input masks of the same structure, and compare with the popular SPHARM tool and the topology preserving level set method to show that our method can reconstruct accurate surface representations without introducing artificial oscillations. We also successfully validate our method on a large data set of more than 900 hippocampal masks and demonstrate that the reconstructed surfaces retain volume information accurately. PMID:20624704
Efficient data assimilation algorithm for bathymetry application
NASA Astrophysics Data System (ADS)
Ghorbanidehno, H.; Lee, J. H.; Farthing, M.; Hesser, T.; Kitanidis, P. K.; Darve, E. F.
2017-12-01
Information on the evolving state of the nearshore zone bathymetry is crucial to shoreline management, recreational safety, and naval operations. The high cost and complex logistics of using ship-based surveys for bathymetry estimation have encouraged the use of remote sensing techniques. Data assimilation methods combine the remote sensing data and nearshore hydrodynamic models to estimate the unknown bathymetry and the corresponding uncertainties. In particular, several recent efforts have combined Kalman filter-based techniques such as ensemble-based Kalman filters with indirect video-based observations to address the bathymetry inversion problem. However, these methods often suffer from ensemble collapse and uncertainty underestimation. Here, the Compressed State Kalman Filter (CSKF) method is used to estimate the bathymetry based on observed wave celerity. In order to demonstrate the accuracy and robustness of the CSKF method, we consider twin tests with synthetic observations of wave celerity, while the bathymetry profiles are chosen based on surveys taken by the U.S. Army Corps of Engineers Field Research Facility (FRF) in Duck, NC. The first test case is a bathymetry estimation problem for a spatially smooth and temporally constant bathymetry profile. The second test case is a bathymetry estimation problem for a bathymetry evolving in time from a smooth to a non-smooth profile. For both problems, we compare the results of CSKF with those obtained by the local ensemble transform Kalman filter (LETKF), which is a popular ensemble-based Kalman filter method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samet Y. Kadioglu
2011-12-01
We present a computational gas dynamics method based on the Spectral Deferred Corrections (SDC) time integration technique and the Piecewise Parabolic Method (PPM) finite volume method. The PPM framework is used to define edge-averaged quantities, which are then used to evaluate numerical flux functions. The SDC technique is used to integrate the solution in time. This kind of approach was first taken by Anita et al. in [17]. However, [17] is problematic when applied to certain shock problems. Here we propose significant improvements to [17]. The method is fourth order (both in space and time) for smooth flows, and provides highly resolved discontinuous solutions. We tested the method by solving a variety of problems. Results indicate that the fourth order of accuracy in both space and time has been achieved when the flow is smooth. Results also demonstrate the shock capturing ability of the method.
Application of Holt exponential smoothing and ARIMA method for data population in West Java
NASA Astrophysics Data System (ADS)
Supriatna, A.; Susanti, D.; Hertini, E.
2017-01-01
One time series method that is often used to predict data containing a trend is the Holt method. The Holt method applies different smoothing parameters to the original data, with the aim of smoothing the trend value. In addition to Holt, the ARIMA method can be used on a wide variety of data, including data with a trend pattern. The actual population data from 1998-2015 contain a trend, so they can be modeled with the Holt and ARIMA methods to obtain predicted values for several periods. The best method is selected on the basis of the smallest MAPE and MAE errors. The result using the Holt method is 47,205,749 people in 2016, 47,535,324 in 2017, and 48,041,672 in 2018, with a MAPE of 0.469744 and an MAE of 189,731. The result using the ARIMA method is 46,964,682 people in 2016, 47,342,189 in 2017, and 47,899,696 in 2018, with a MAPE of 0.4380 and an MAE of 176,626.
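A hedged sketch of Holt's two-parameter (level plus trend) exponential smoothing: the level and trend are updated recursively and the h-step-ahead forecast is level + h * trend. The smoothing constants and the short population series below are illustrative assumptions; the study tunes its own parameters and uses the actual 1998-2015 data.

import numpy as np

def holt_forecast(y, alpha=0.3, beta=0.1, horizon=3):
    # Holt recursions: level_t = alpha*y_t + (1-alpha)*(level + trend),
    #                  trend_t = beta*(level_t - level_{t-1}) + (1-beta)*trend.
    level, trend = y[0], y[1] - y[0]
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return np.array([level + h * trend for h in range(1, horizon + 1)])

# Example: a synthetic upward-trending population series (millions).
population = np.array([39.2, 39.9, 40.6, 41.5, 42.3, 43.0, 43.9, 44.6,
                       45.3, 46.0, 46.6])
print(holt_forecast(population, alpha=0.5, beta=0.3, horizon=3))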
A Predictor-Corrector Approach for the Numerical Solution of Fractional Differential Equations
NASA Technical Reports Server (NTRS)
Diethelm, Kai; Ford, Neville J.; Freed, Alan D.; Gray, Hugh R. (Technical Monitor)
2002-01-01
We discuss an Adams-type predictor-corrector method for the numerical solution of fractional differential equations. The method may be used both for linear and for nonlinear problems, and it may be extended to multi-term equations (involving more than one differential operator) too.
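A hedged sketch of the fractional Adams-Bashforth-Moulton predictor-corrector for a Caputo fractional ODE D^alpha y = f(t, y) with 0 < alpha <= 1, using the weight formulas usually associated with this scheme. The step size and the test problem are illustrative; consult the paper for the multi-term extension and the error analysis.

import numpy as np
from math import gamma

def fractional_abm(f, alpha, y0, t_end, n):
    h = t_end / n
    t = np.linspace(0.0, t_end, n + 1)
    y = np.zeros(n + 1)
    fk = np.zeros(n + 1)
    y[0] = y0
    fk[0] = f(t[0], y[0])
    for k in range(n):                       # compute y[k+1]
        j = np.arange(k + 1)
        # Predictor (fractional Adams-Bashforth) weights.
        b = (h ** alpha / alpha) * ((k + 1 - j) ** alpha - (k - j) ** alpha)
        y_pred = y0 + np.sum(b * fk[:k + 1]) / gamma(alpha)
        # Corrector (fractional Adams-Moulton) weights.
        a = np.empty(k + 1)
        a[0] = k ** (alpha + 1) - (k - alpha) * (k + 1) ** alpha
        jj = j[1:]
        a[1:] = ((k - jj + 2) ** (alpha + 1) + (k - jj) ** (alpha + 1)
                 - 2.0 * (k - jj + 1) ** (alpha + 1))
        y[k + 1] = y0 + (h ** alpha / gamma(alpha + 2)) * (
            f(t[k + 1], y_pred) + np.sum(a * fk[:k + 1]))
        fk[k + 1] = f(t[k + 1], y[k + 1])
    return t, y

# Example: D^0.8 y = -y with y(0) = 1 (a Mittag-Leffler-type decay).
t, y = fractional_abm(lambda t, y: -y, alpha=0.8, y0=1.0, t_end=5.0, n=200)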
An improved local radial point interpolation method for transient heat conduction analysis
NASA Astrophysics Data System (ADS)
Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang
2013-06-01
The smoothing thin plate spline (STPS) interpolation using the penalty function method according to optimization theory is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the applicability and accuracy of the present approach compared with traditional thin plate spline (TPS) radial basis functions.
A Runge-Kutta discontinuous finite element method for high speed flows
NASA Technical Reports Server (NTRS)
Bey, Kim S.; Oden, J. T.
1991-01-01
A Runge-Kutta discontinuous finite element method is developed for hyperbolic systems of conservation laws in two space variables. The discontinuous Galerkin spatial approximation to the conservation laws results in a system of ordinary differential equations which are marched in time using Runge-Kutta methods. Numerical results for the two-dimensional Burgers equation show that the method is (p+1)-order accurate in time and space, where p is the degree of the polynomial approximation of the solution within an element, and is capable of capturing shocks over a single element without oscillations. Results for this problem also show that the accuracy of the solution in smooth regions is unaffected by the local projection and that the accuracy in smooth regions increases as p increases. Numerical results for the Euler equations show that the method captures shocks without oscillations and with higher resolution than a first-order scheme.
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2018-02-01
We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
Fu, Hai-Yan; Guo, Jun-Wei; Yu, Yong-Jie; Li, He-Dong; Cui, Hua-Peng; Liu, Ping-Ping; Wang, Bing; Wang, Sheng; Lu, Peng
2016-06-24
Peak detection is a critical step in chromatographic data analysis. In the present work, we developed a multi-scale Gaussian smoothing-based strategy for accurate peak extraction. The strategy consisted of three stages: background drift correction, peak detection, and peak filtration. Background drift correction was implemented using a moving window strategy. The new peak detection method is a variant of the approach used by the well-known MassSpecWavelet, i.e., chromatographic peaks are found at local maximum values under various smoothing window scales. Therefore, peaks can be detected through the ridge lines of maximum values under these window scales, and signals that increase or decrease monotonically around the peak position can be treated as part of the peak. Instrumental noise was estimated after peak elimination, and a peak filtration strategy was performed to remove peaks with signal-to-noise ratios smaller than 3. The performance of our method was evaluated using two complex datasets. These datasets include essential oil samples for quality control obtained from gas chromatography and tobacco plant samples for metabolic profiling analysis obtained from gas chromatography coupled with mass spectrometry. The results confirm the validity of the developed method. Copyright © 2016 Elsevier B.V. All rights reserved.
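A hedged sketch of the multi-scale idea described above: the chromatogram is smoothed with Gaussian windows of increasing width, local maxima that persist (within a small positional tolerance) across several scales are kept, and candidates are filtered by a simple signal-to-noise estimate. Ridge linking, baseline-drift correction and the exact thresholds of the published method are omitted, and adjacent indices belonging to the same peak are not merged.

import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelmax

def multiscale_peaks(signal, scales=(2, 4, 8, 16), min_scales=3, snr_min=3.0):
    hits = np.zeros(signal.size)
    for s in scales:
        smoothed = gaussian_filter1d(signal, sigma=s)
        for m in argrelmax(smoothed)[0]:
            lo, hi = max(m - 3, 0), min(m + 4, signal.size)
            hits[lo:hi] += 1                      # allow small positional drift across scales
    candidates = np.flatnonzero(hits >= min_scales)
    noise = np.std(signal - gaussian_filter1d(signal, sigma=2))
    return [int(i) for i in candidates if signal[i] / max(noise, 1e-12) >= snr_min]

# Example: two Gaussian peaks on a noisy baseline.
x = np.arange(2000)
chrom = (5 * np.exp(-0.5 * ((x - 600) / 12) ** 2)
         + 3 * np.exp(-0.5 * ((x - 1400) / 20) ** 2)
         + 0.3 * np.random.randn(x.size))
peak_indices = multiscale_peaks(chrom)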
Enhancement of flow measurements using fluid-dynamic constraints
NASA Astrophysics Data System (ADS)
Egger, H.; Seitz, T.; Tropea, C.
2017-09-01
Novel experimental modalities acquire spatially resolved velocity measurements for steady state and transient flows which are of interest for engineering and biological applications. One of the drawbacks of such high resolution velocity data is their susceptibility to measurement errors. In this paper, we propose a novel filtering strategy that allows enhancement of the noisy measurements to obtain reconstruction of smooth divergence free velocity and corresponding pressure fields which together approximately comply to a prescribed flow model. The main step in our approach consists of the appropriate use of the velocity measurements in the design of a linearized flow model which can be shown to be well-posed and consistent with the true velocity and pressure fields up to measurement and modeling errors. The reconstruction procedure is then formulated as an optimal control problem for this linearized flow model. The resulting filter has analyzable smoothing and approximation properties. We briefly discuss the discretization of the approach by finite element methods and comment on the efficient solution by iterative methods. The capability of the proposed filter to significantly reduce data noise is demonstrated by numerical tests including the application to experimental data. In addition, we compare with other methods like smoothing and solenoidal filtering.
Fatigue Life Prediction Based on Crack Closure and Equivalent Initial Flaw Size
Wang, Qiang; Zhang, Wei; Jiang, Shan
2015-01-01
Failure analysis and fatigue life prediction are necessary and critical for engineering structural materials. In this paper, a general methodology is proposed to predict the fatigue life of smooth and circular-hole specimens, in which the crack closure model and the equivalent initial flaw size (EIFS) concept are employed. Different effects of crack closure on the small crack growth region and the long crack growth region are considered in the proposed method. The EIFS is determined by the fatigue limit and the fatigue threshold stress intensity factor ΔKth. The fatigue limit is directly obtained from experimental data, and ΔKth is calculated by using a back-extrapolation method. Experimental data for smooth and circular-hole specimens in three different alloys (Al2024-T3, Al7075-T6 and Ti-6Al-4V) under multiple stress ratios are used to validate the method. In the validation section, a semi-circular surface crack and a quarter-circular corner crack are assumed to be the initial crack shapes for the smooth and circular-hole specimens, respectively. A good agreement is observed between model predictions and experimental data. A detailed analysis and discussion are performed on the proposed model. Some conclusions and future work are given. PMID:28793625
Review of smoothing methods for enhancement of noisy data from heavy-duty LHD mining machines
NASA Astrophysics Data System (ADS)
Wodecki, Jacek; Michalak, Anna; Stefaniak, Paweł
2018-01-01
Appropriate analysis of data measured on heavy-duty mining machines is essential for process monitoring, management and optimization. Some particular classes of machines, for example LHD (load-haul-dump) machines, hauling trucks, drilling/bolting machines etc., are characterized by cyclicity of operations. In those cases, identification of cycles and their segments, or in other words simply data segmentation, is key to evaluating their performance, which may be very useful from the management point of view, for example by introducing optimization to the process. However, in many cases such raw signals are contaminated with various artifacts and are in general expected to be very noisy, which makes the segmentation task very difficult or even impossible. To deal with that problem, there is a need for efficient smoothing methods that allow informative trends in the signals to be retained while disregarding noise and other undesired non-deterministic components. In this paper the authors present a review of various approaches to diagnostic data smoothing. The described methods can be used in a fast and efficient way, effectively cleaning the signals while preserving the informative deterministic behaviour that is crucial to precise segmentation and other approaches to industrial data analysis.
1996-09-16
The approaches are:
• Adaptive filtering
• Single exponential smoothing (Brown, 1963)
• The Box-Jenkins methodology (ARIMA modeling) (Box and Jenkins, 1976)
• Linear exponential smoothing: Holt's two-parameter approach (Holt et al., 1960)
• Winters' three-parameter method (Winters, 1960)
However, ARIMA modeling has crucial disadvantages; the most important point in ARIMA modeling is model identification. As shown in…
Topological analysis of the motion of an ellipsoid on a smooth plane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivochkin, M Yu
2008-06-30
The problem of the motion of a dynamically and geometrically symmetric heavy ellipsoid on a smooth horizontal plane is investigated. The problem is integrable and can be considered a generalization of the problem of motion of a heavy rigid body with fixed point in the Lagrangian case. The Smale bifurcation diagrams are constructed. Surgeries of tori are investigated using methods developed by Fomenko and his students. Bibliography: 9 titles.
A three-level support method for smooth switching of the micro-grid operation model
NASA Astrophysics Data System (ADS)
Zong, Yuanyang; Gong, Dongliang; Zhang, Jianzhou; Liu, Bin; Wang, Yun
2018-01-01
Smooth switching of a micro-grid between the grid-connected and off-grid operation modes is one of the key technologies for ensuring that it runs flexibly and efficiently. The basic control strategy and the switching principle of the micro-grid are analyzed in this paper. The reasons for the voltage and frequency fluctuations during the switching process are analyzed from the viewpoints of power balance and control strategy, and the operation mode switching strategy is improved accordingly. From the three aspects of the controller's current inner loop reference signal tracking, voltage outer loop control strategy optimization, and micro-grid energy balance management, a three-level strategy for smooth switching of the micro-grid operation mode is proposed. Finally, simulation proves that the proposed control strategy makes the switching process smooth and stable and effectively mitigates the voltage and frequency fluctuations.
Smooth plains on Mercury. A comparison with Vesta.
NASA Astrophysics Data System (ADS)
Zambon, F.; Capaccioni, F.; Carli, C.; De Sanctis, M. C.; Filacchione, G.; Giacomini, L.
Mercury, the closest planet to the Sun, has been visited by the MESSENGER spacecraft \citet{solomon2007}. After three years in orbit around Mercury, global coverage of the surface was obtained, revealing that ∼27% of Mercury's surface is covered by smooth plains \citet{denevi2013}. A large part of Mercury's smooth plains (SP) appears to be of volcanic origin. Different compositions have been observed: most of the SP have a magnesian alkali-basalt-like composition, while some have been interpreted as ultramafic. A further 2% of smooth plains have been identified as Odin-type plains and represent the knobby and hummocky plains surrounding the Caloris basin \citet{denevi2013}. Classification methods \citet{adams2006} applied to color image data from the MESSENGER wide-angle camera (MDIS-WAC) \citet{MDIS}, together with a spectral analysis of the spectrometer data (MASCS-VIRS) \citet{MASCS}, are useful to highlight the differences in composition of the smooth plains. A comparison between Mercury's SP and those of other solar system bodies, such as Vesta \citet{desanctis2012}, proves useful for obtaining information on the origin and evolution of these bodies.
Alani, Behrang; Zare, Mohammad; Noureddini, Mahdi
2015-01-01
The smooth muscle contractions of the tracheobronchial airways are mediated through the balance of adrenergic, cholinergic and peptidergic nervous mechanisms. This research was designed to determine the bronchodilatory and β-adrenergic effects of methanolic and aqueous extracts of Althaea root on the isolated tracheobronchial smooth muscle of the rat. In this experimental study, 116 tracheobronchial sections (5 mm) from 58 healthy male Sprague-Dawley rats were dissected and divided into 23 groups. The effect of the methanolic and aqueous extracts of Althaea root was assayed at different concentrations (0.2, 0.6, 2.6, 6.6, 14.6 μg/ml) and compared with epinephrine (5 μM), in the presence and absence of propranolol (1 μM), under 1 g tension using the isometric method. Tracheobronchial smooth muscle contractions were induced with potassium chloride (KCl, 60 mM) and recorded in an organ bath containing Krebs-Henseleit solution. Epinephrine (5 μM) alone and the root methanolic and aqueous extracts (0.6-14.6 μg/ml) reduced the KCl (60 mM)-induced tracheobronchial smooth muscle contractions in a dose-dependent manner. Propranolol inhibited the antispasmodic effect of epinephrine on tracheobronchial smooth muscle contractions, but could not reduce the antispasmodic effect of the root extracts. The methanolic and aqueous extracts of Althaea root inhibited the tracheobronchial smooth muscle contractions of rats in a dose-dependent manner, but β-adrenergic receptors do not appear to be engaged in this process. Understanding the mechanism of this process can be useful in the treatment of pulmonary obstructive diseases such as asthma.
Backfitting in Smoothing Spline Anova, with Application to Historical Global Temperature Data
NASA Astrophysics Data System (ADS)
Luo, Zhen
In attempting to estimate the temperature history of the earth using surface observations, various biases can arise. An important source of bias is the incompleteness of sampling over both time and space. A few methods have been proposed to deal with this problem; although they can correct some biases resulting from incomplete sampling, they ignore other significant biases. In this dissertation, a smoothing spline ANOVA approach, which is a multivariate function estimation method, is proposed to deal simultaneously with various biases resulting from incomplete sampling. A further advantage of this method is that various components of the estimated temperature history can be obtained with a limited amount of stored information. The method can also be used for detecting erroneous observations in the database. The method is illustrated through an example of modeling winter surface air temperature as a function of year and location. Extensions to more complicated models are discussed. The linear system associated with the smoothing spline ANOVA estimates is too large to be solved by full matrix decomposition methods. A computational procedure combining the backfitting (Gauss-Seidel) algorithm and the iterative imputation algorithm is proposed. This procedure takes advantage of the tensor product structure in the data to make the computation feasible in an environment of limited memory. Various related issues are discussed, e.g., the computation of confidence intervals and techniques to speed up the convergence of the backfitting algorithm, such as collapsing and successive over-relaxation.
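The backfitting idea referenced above can be sketched generically: the code below fits a simple additive model by cycling Gauss-Seidel-style over components, using a crude running-mean smoother as a stand-in for the smoothing spline ANOVA components of the dissertation. Function names, window sizes, and the two-component setup are illustrative assumptions:

import numpy as np

def running_mean_smoother(x, r, window=31):
    """Smooth residuals r against covariate x with a running mean (assumes len(x) > window)."""
    order = np.argsort(x)
    kernel = np.ones(window) / window
    smoothed_sorted = np.convolve(r[order], kernel, mode="same")
    out = np.empty_like(r)
    out[order] = smoothed_sorted
    return out

def backfit(y, xs, n_iter=20):
    """Backfitting loop for y ~ mean + f1(x1) + f2(x2) + ..., xs is a list of covariate arrays."""
    fs = [np.zeros_like(y) for _ in xs]
    mean_y = y.mean()
    for _ in range(n_iter):
        for j, xj in enumerate(xs):
            partial_residual = y - mean_y - sum(fs[k] for k in range(len(xs)) if k != j)
            fs[j] = running_mean_smoother(xj, partial_residual)
            fs[j] -= fs[j].mean()  # center each component for identifiability
    return mean_y, fs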
Predictors of Self-Regulated Learning in Malaysian Smart Schools
ERIC Educational Resources Information Center
Yen, Ng Lee; Bakar, Kamariah Abu; Roslan, Samsilah; Luan, Wong Su; Abd Rahman, Petri Zabariah Mega
2005-01-01
This study sought to uncover the predictors of self-regulated learning in Malaysian smart schools. The sample consisted of 409 students, from six randomly chosen smart schools. A quantitative correlational research design was employed and the data were collected through survey method. Six factors were examined in relation to the predictors of…
ERIC Educational Resources Information Center
Blader, Joseph C.
2004-01-01
Objective: To investigate predictors of readmission to inpatient psychiatric treatment for children aged 5 to 12 discharged from acute-care hospitalization. Method: One hundred nine children were followed for 1 year after discharge from inpatient care. Time to rehospitalization was the outcome of interest. Predictors of readmission, examined via…
Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis
ERIC Educational Resources Information Center
Luo, Wen; Azen, Razia
2013-01-01
Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…
Song, Yong Sub; Kim, Ji-Hoon; Na, Dong Gyu; Min, Hye Sook; Won, Jae-Kyung; Yun, Tae Jin; Choi, Seung Hong; Sohn, Chul-Ho
2016-08-01
We evaluated the gray-scale ultrasonographic characteristics that differentiate nodular hyperplasia (NH) from neoplastic follicular-patterned lesions (NFPLs) of the thyroid gland. Ultrasonographic features of 750 patients with 832 thyroid nodules (NH, n = 361; NFPLs: follicular adenoma, n = 123; follicular carcinoma, n = 159; and follicular variant papillary carcinoma, n = 189) were analyzed. Except for echogenicity, over two-thirds of the cases of NH and NFPLs shared the ultrasonographic characteristics of solid internal content, a well-defined smooth margin and round-to-ovoid shape. Independent predictors of NH were non-solid internal content (sensitivity 27.1%, specificity 90.2%), isoechogenicity (sensitivity 69.5%, specificity 63.5%) and an ill-defined margin (sensitivity 18.8%, specificity 94.5%). Independent predictors of NFPLs were hypoechogenicity (sensitivity 60.5%, specificity 70.4%), marked hypoechogenicity (sensitivity 2.8%, specificity 99.4%) and taller-than-wide shape (sensitivity 6.6%, specificity 98.1%). Although NH and NFPLs commonly share ultrasonographic characteristics, non-solid internal content and an ill-defined margin are specific to NH, whereas marked hypoechogenicity and taller-than-wide shape are specific to NFPLs. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Rigby, Robert A; Stasinopoulos, D Mikis
2004-10-15
The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by assuming that a power transformation Y^ν has a shifted and scaled (truncated) standard power exponential distribution with parameter τ. The distribution has four parameters and is denoted BCPE(μ, σ, ν, τ). The parameters μ, σ, ν and τ may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood with respect to μ, σ, ν and τ, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation generalizes the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution, and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.
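For orientation, the classical LMS construction that the LMSP method generalizes computes centiles from fitted μ(t), σ(t), ν(t) curves via the Box-Cox/normal relationship sketched below; LMSP additionally replaces the normal quantile with a power exponential quantile governed by τ. A minimal sketch (the example parameter values in the comment are hypothetical):

import numpy as np
from scipy.stats import norm

def lms_centile(alpha, mu, sigma, nu):
    """Return the 100*alpha centile given LMS parameters (mu = median, sigma = CV, nu = skewness)."""
    z = norm.ppf(alpha)
    if abs(nu) > 1e-8:
        return mu * (1.0 + nu * sigma * z) ** (1.0 / nu)
    return mu * np.exp(sigma * z)  # limiting case nu -> 0

# e.g. a 97th centile at a given age with hypothetical fitted values:
# lms_centile(0.97, mu=21.5, sigma=0.12, nu=-1.3)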
Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows
NASA Astrophysics Data System (ADS)
Chen, Z.; Shu, C.; Tan, D.
2018-05-01
An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires lower virtual memories, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, the flexibility, and the accuracy of the present method.
Solution of axisymmetric and two-dimensional inviscid flow over blunt bodies by the method of lines
NASA Technical Reports Server (NTRS)
Hamilton, H. H., II
1978-01-01
Comparisons with experimental data and the results of other computational methods demonstrated that very accurate solutions can be obtained by using relatively few lines with the method of lines approach. This method is semidiscrete and has relatively low core storage requirements compared with fully discrete methods, since very little data were stored across the shock layer. This feature is very attractive for three-dimensional problems because it enables computer storage requirements to be reduced by approximately an order of magnitude. In the present study it was found that nine lines was a practical upper limit for two-dimensional and axisymmetric problems. This condition limits application of the method to smooth body geometries where relatively few lines are adequate to describe changes in the flow variables around the body. Extension of the method to three dimensions is conceptually straightforward; however, three-dimensional applications would also be limited to smooth body geometries, although not necessarily to a total of nine lines.
NASA Astrophysics Data System (ADS)
Kim, Do-Kyung; Lee, Gyu-Jeong; Lee, Jae-Hyun; Kim, Min-Hoi; Bae, Jin-Hyuk
2018-05-01
We suggest a viable surface control method to improve the electrical properties of organic nonvolatile memory transistors. For viable surface control, the surface of the ferroelectric insulator in the memory field-effect transistors was modified using a smooth-contact-curing process. For the modification of the ferroelectric polymer, during the curing of the ferroelectric insulators, the smooth surface of a soft elastomer contacts intimately with the ferroelectric surface. This smooth-contact-curing process reduced the surface roughness of the ferroelectric insulator without degrading its ferroelectric properties. The reduced roughness of the ferroelectric insulator increases the mobility of the organic field-effect transistor by approximately eight times, which results in a high memory on–off ratio and a low-voltage reading operation.
Rayatpisheh, Shahrzad; Heath, Daniel E; Shakouri, Amir; Rujitanaroj, Pim-On; Chew, Sing Yian; Chan-Park, Mary B
2014-03-01
Herein we combine cell sheet technology and electrospun scaffolding to rapidly generate circumferentially aligned tubular constructs of human aortic smooth muscle cells with contractile gene expression, for use as tissue-engineered blood vessel media. Smooth muscle cells cultured on micropatterned, N-isopropylacrylamide-grafted (pNIPAm) polydimethylsiloxane (PDMS), a small portion of which was covered by aligned electrospun scaffolding, formed a single sheet of unidirectionally aligned cells. Upon cooling to room temperature, the scaffold, its adherent cells, and the remaining cell sheet detached and were collected on a mandrel to generate tubular constructs with circumferentially aligned smooth muscle cells that possess contractile gene expression and a single layer of electrospun scaffold as an analogue of a small-diameter blood vessel's internal elastic lamina (IEL). This method improves cell sheet handling, results in rapid circumferential alignment of smooth muscle cells that immediately express contractile genes, and introduces an analogue of the small-diameter blood vessel IEL. Copyright © 2013 Elsevier Ltd. All rights reserved.
Smoothed Particle Inference Analysis of SNR RCW 103
NASA Astrophysics Data System (ADS)
Frank, Kari A.; Burrows, David N.; Dwarkadas, Vikram
2016-04-01
We present preliminary results of applying a novel analysis method, Smoothed Particle Inference (SPI), to an XMM-Newton observation of SNR RCW 103. SPI is a Bayesian modeling process that fits a population of gas blobs ("smoothed particles") such that their superposed emission reproduces the observed spatial and spectral distribution of photons. Emission-weighted distributions of plasma properties, such as abundances and temperatures, are then extracted from the properties of the individual blobs. This technique has important advantages over analysis techniques which implicitly assume that remnants are two-dimensional objects in which each line of sight encompasses a single plasma. By contrast, SPI allows superposition of as many blobs of plasma as are needed to match the spectrum observed in each direction, without the need to bin the data spatially. This RCW 103 analysis is part of a pilot study for the larger SPIES (Smoothed Particle Inference Exploration of SNRs) project, in which SPI will be applied to a sample of 12 bright SNRs.
Nelumbo nucifera leaves extracts inhibit mouse airway smooth muscle contraction.
Yang, Xiao; Xue, Lu; Zhao, Qingyang; Cai, Congli; Liu, Qing-Hua; Shen, Jinhua
2017-03-20
Alkaloids extracted from lotus leaves (AELL) can relax vascular smooth muscle. However, whether AELL has a similar relaxant effect on airway smooth muscle (ASM) remains unknown. This study aimed to explore the relaxant property of AELL on ASM and the underlying mechanism. Alkaloids were extracted from dried lotus leaves using a high-temperature rotary evaporation extraction method. The effects of AELL on mouse ASM tension were studied using force measurement and patch-clamp techniques. It was found that AELL inhibited the high-K+ or acetylcholine chloride (ACh)-induced precontraction of mouse tracheal rings by 64.8 ± 2.9% and 48.8 ± 4.7%, respectively. The inhibition was statistically significant and dose-dependent. Furthermore, AELL-induced smooth muscle relaxation was partially mediated by blocking voltage-dependent Ca2+ channels (VDCC) and non-selective cation channels (NSCC). AELL, which plays a relaxant role in ASM, might be a new complementary treatment for abnormal contractions of the trachea and asthma.
Nanopatterning of optical surfaces during low-energy ion beam sputtering
NASA Astrophysics Data System (ADS)
Liao, Wenlin; Dai, Yifan; Xie, Xuhui
2014-06-01
Ion beam figuring (IBF) provides a highly deterministic method for high-precision optical surface fabrication, but ion-induced microscopic morphology evolution can occur on the surfaces. Consequently, the fabrication specification for surface smoothness must be seriously considered during the IBF process. In this work, low-energy ion nanopatterning of our frequently used optical material surfaces is investigated to discuss the manufacturability of an ultrasmooth surface. The research results indicate that ion beam sputtering (IBS) can directly smooth some amorphous or amorphizable material surfaces, such as fused silica, Si, and ULE, under appropriate processing conditions. However, for IBS of a Zerodur surface, preferential sputtering together with curvature-dependent sputtering overcomes the ion-induced smoothing mechanisms, leading to the formation of granular nanopatterns and coarsening of the surface. Furthermore, the material property difference at microscopic scales and the continuous incorporation of impurities affect the ion beam smoothing of optical surfaces. Overall, IBS can be used as a promising technique for ultrasmooth surface fabrication, although its success strongly depends on the processing conditions and material characteristics.
Hierarchical Adaptive Regression Kernels for Regression with Functional Predictors.
Woodard, Dawn B; Crainiceanu, Ciprian; Ruppert, David
2013-01-01
We propose a new method for regression using a parsimonious and scientifically interpretable representation of functional predictors. Our approach is designed for data that exhibit features such as spikes, dips, and plateaus whose frequency, location, size, and shape varies stochastically across subjects. We propose Bayesian inference of the joint functional and exposure models, and give a method for efficient computation. We contrast our approach with existing state-of-the-art methods for regression with functional predictors, and show that our method is more effective and efficient for data that include features occurring at varying locations. We apply our methodology to a large and complex dataset from the Sleep Heart Health Study, to quantify the association between sleep characteristics and health outcomes. Software and technical appendices are provided in online supplemental materials.
Continuum Level Density in Complex Scaling Method
NASA Astrophysics Data System (ADS)
Suzuki, R.; Myo, T.; Katō, K.
2005-11-01
A new calculational method of continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique.
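For context, the continuum level density is commonly written as the difference between the traces of the full and free Green's functions; the expression below is generic textbook notation rather than the paper's own formula:

\Delta\rho(E) \;=\; -\frac{1}{\pi}\,\mathrm{Im}\,\mathrm{Tr}\!\left[\frac{1}{E - H + i\varepsilon} \;-\; \frac{1}{E - H_{0} + i\varepsilon}\right].

In the CSM this trace is evaluated from the eigenvalues of the complex-scaled Hamiltonians, which is why the discretized continuum can be used directly without a smoothing technique.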
Kinematics, structural mechanics, and design of origami structures with smooth folds
NASA Astrophysics Data System (ADS)
Peraza Hernandez, Edwin Alexander
Origami provides novel approaches to the fabrication, assembly, and functionality of engineering structures in various fields such as aerospace, robotics, etc. With the increase in complexity of the geometry and materials for origami structures that provide engineering utility, computational models and design methods for such structures have become essential. Currently available models and design methods for origami structures are generally limited to the idealization of the folds as creases of zeroth-order geometric continuity. Such an idealization is not proper for origami structures having non-negligible thickness or maximum curvature at the folds restricted by material limitations. Thus, for general structures, creased folds of merely zeroth-order geometric continuity are not appropriate representations of structural response and a new approach is needed. The first contribution of this dissertation is a model for the kinematics of origami structures having realistic folds of non-zero surface area and exhibiting higher-order geometric continuity, here termed smooth folds. The geometry of the smooth folds and the constraints on their associated kinematic variables are presented. A numerical implementation of the model allowing for kinematic simulation of structures having arbitrary fold patterns is also described. Examples illustrating the capability of the model to capture realistic structural folding response are provided. Subsequently, a method for solving the origami design problem of determining the geometry of a single planar sheet and its pattern of smooth folds that morphs into a given three-dimensional goal shape, discretized as a polygonal mesh, is presented. The design parameterization of the planar sheet and the constraints that allow for a valid pattern of smooth folds and approximation of the goal shape in a known folded configuration are presented. Various testing examples considering goal shapes of diverse geometries are provided. Afterwards, a model for the structural mechanics of origami continuum bodies with smooth folds is presented. Such a model entails the integration of the presented kinematic model and existing plate theories in order to obtain a structural representation for folds having non-zero thickness and comprised of arbitrary materials. The model is validated against finite element analysis. The last contribution addresses the design and analysis of active material-based self-folding structures that morph via simultaneous folding towards a given three-dimensional goal shape starting from a planar configuration. Implementation examples including shape memory alloy (SMA)-based self-folding structures are provided.
Ramírez-Vélez, Robinson; Moreno-Jiménez, Javier; Correa-Bautista, Jorge Enrique; Martínez-Torres, Javier; González-Ruiz, Katherine; González-Jiménez, Emilio; Schmidt-RioValle, Jacqueline; Lobelo, Felipe; Garcia-Hermoso, Antonio
2017-07-11
Waist circumference (WC) and waist-to-height ratio (WHtR) are often used as indices predictive of central obesity. The aims of this study were: 1) to obtain smoothed centile charts and LMS tables for WC and WHtR among Colombian children and adolescents; 2) to evaluate the utility of these parameters as predictors of overweight and obesity. A cross-sectional study was conducted of a sample population of 7954 healthy Colombian schoolchildren [3460 boys and 4494 girls, mean age 12.8 (±2.3) years]. Weight, height, body mass index (BMI), WC and WHtR were measured, and percentiles were calculated using the LMS method (Box-Cox, median and coefficient of variation). Appropriate cut-off points of WC and WHtR for overweight and obesity, according to International Obesity Task Force definitions, were selected using receiver operating characteristic (ROC) analysis. The discriminating power of WC and WHtR is expressed as area under the curve (AUC). Reference values for WC and WHtR are presented. Mean WC increased and WHtR decreased with age for both genders. A moderate positive correlation was observed between WC and BMI (r = 0.756, P < 0.01) and between WHtR and BMI (r = 0.604, P < 0.01). ROC analysis revealed strong discrimination power in the identification of overweight and obesity for both measures in our sample population. Overall, WHtR was a slightly better predictor of overweight/obesity (AUC 95% CI 0.868-0.916) than WC (AUC 95% CI 0.862-0.904). This paper presents the first sex and age-specific WC and WHtR percentiles for Colombian children and adolescents aged 9.0-17.9 years. The LMS tables obtained, based on Colombian reference data, can be used as quantitative tools for the study of obesity and its comorbidities.
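A hedged sketch of the ROC step described above, choosing a WHtR cut-off by Youden's J (the study may have used a different criterion) and reporting AUC; the variable names are illustrative, not the study's data:

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def best_cutoff(binary_outcome, marker):
    """Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1, and return the AUC."""
    fpr, tpr, thresholds = roc_curve(binary_outcome, marker)
    j = tpr - fpr
    best = np.argmax(j)
    return thresholds[best], tpr[best], 1 - fpr[best], roc_auc_score(binary_outcome, marker)

# cutoff, sensitivity, specificity, auc = best_cutoff(is_overweight, whtr)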
Liu, Yang; Paciorek, Christopher J.; Koutrakis, Petros
2009-01-01
Background: Studies of chronic health effects due to exposures to particulate matter with aerodynamic diameters ≤ 2.5 μm (PM2.5) are often limited by sparse measurements. Satellite aerosol remote sensing data may be used to extend PM2.5 ground networks to cover a much larger area. Objectives: In this study we examined the benefits of using aerosol optical depth (AOD) retrieved by the Geostationary Operational Environmental Satellite (GOES), in conjunction with land use and meteorologic information, to estimate ground-level PM2.5 concentrations. Methods: We developed a two-stage generalized additive model (GAM) for U.S. Environmental Protection Agency PM2.5 concentrations in a domain centered in Massachusetts. The AOD model represents conditions when AOD retrieval is successful; the non-AOD model represents conditions when AOD is missing in the domain. Results: The AOD model has higher predictive power, judged by adjusted R2 (0.79), than the non-AOD model (0.48). The PM2.5 concentrations predicted by the AOD model are, on average, 0.8–0.9 μg/m3 higher than the non-AOD model predictions, with a smoother spatial distribution, higher concentrations in rural areas, and the highest concentrations in areas other than major urban centers. Although AOD is a highly significant predictor of PM2.5, meteorologic parameters are major contributors to the better performance of the AOD model. Conclusions: GOES aerosol/smoke product (GASP) AOD is able to summarize a set of weather and land use conditions that stratify PM2.5 concentrations into two different spatial patterns. Even if land use regression models do not include AOD as a predictor variable, two separate models should be fitted to account for the different PM2.5 spatial patterns related to AOD availability. PMID:19590678
Strand, Matthew; Sillau, Stefan; Grunwald, Gary K; Rabinovitch, Nathan
2014-02-10
Regression calibration provides a way to obtain unbiased estimators of fixed effects in regression models when one or more predictors are measured with error. Recent development of measurement error methods has focused on models that include interaction terms between measured-with-error predictors, and separately, methods for estimation in models that account for correlated data. In this work, we derive explicit and novel forms of regression calibration estimators and associated asymptotic variances for longitudinal models that include interaction terms, when data from instrumental and unbiased surrogate variables are available but not the actual predictors of interest. The longitudinal data are fit using linear mixed models that contain random intercepts and account for serial correlation and unequally spaced observations. The motivating application involves a longitudinal study of exposure to two pollutants (predictors) - outdoor fine particulate matter and cigarette smoke - and their association in interactive form with levels of a biomarker of inflammation, leukotriene E4 (LTE4, outcome) in asthmatic children. Because the exposure concentrations could not be directly observed, we used measurements from a fixed outdoor monitor and urinary cotinine concentrations as instrumental variables, and we used concentrations of fine ambient particulate matter and cigarette smoke measured with error by personal monitors as unbiased surrogate variables. We applied the derived regression calibration methods to estimate coefficients of the unobserved predictors and their interaction, allowing for direct comparison of toxicity of the different pollutants. We used simulations to verify accuracy of inferential methods based on asymptotic theory. Copyright © 2013 John Wiley & Sons, Ltd.
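A greatly simplified regression-calibration sketch for a single predictor with no interactions or random effects (unlike the longitudinal mixed models derived in the paper): stage 1 regresses the unbiased surrogate W on the instrument T, and stage 2 plugs the calibrated exposure into the outcome model. All variable names are illustrative:

import statsmodels.api as sm

def regression_calibration(y, W, T):
    """Two-stage regression calibration with an instrument T and unbiased surrogate W."""
    # Stage 1: E[X | T] is estimated by regressing W on T, since E[W | T] = E[X | T].
    design_T = sm.add_constant(T)
    stage1 = sm.OLS(W, design_T).fit()
    x_calibrated = stage1.predict(design_T)
    # Stage 2: outcome model with the calibrated exposure in place of the unobserved true X.
    stage2 = sm.OLS(y, sm.add_constant(x_calibrated)).fit()
    return stage2.params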
Improving transmembrane protein consensus topology prediction using inter-helical interaction.
Wang, Han; Zhang, Chao; Shi, Xiaohu; Zhang, Li; Zhou, You
2012-11-01
Alpha-helical transmembrane proteins (αTMPs) represent roughly 30% of all open reading frames (ORFs) in a typical genome and are involved in many critical biological processes. Due to their special physicochemical properties, it is hard to crystallize them and obtain high-resolution structures experimentally; thus, sequence-based topology prediction is highly desirable for the study of transmembrane proteins (TMPs), both in structure prediction and in function prediction. Various model-based topology prediction methods have been developed, but the accuracy of these individual predictors remains poor due to the limitations of the methods or the features they use. Consensus topology prediction methods have therefore become practical for high-accuracy applications by combining the strengths of the individual predictors. Here, based on the observation that inter-helical interactions are commonly found within transmembrane helices (TMHs) and strongly indicate their existence, we present a novel consensus topology prediction method for αTMPs, CNTOP, which incorporates four top leading individual topology predictors and further improves the prediction accuracy by using predicted inter-helical interactions. The method achieved 87% prediction accuracy on a benchmark dataset and 78% accuracy on a non-redundant dataset composed of polytopic αTMPs. Our method achieves higher topology accuracy than any other individual or consensus predictor; at the same time, the TMHs are more accurately predicted in their lengths and locations, with both the false positives (FPs) and the false negatives (FNs) decreasing dramatically. CNTOP is available at: http://ccst.jlu.edu.cn/JCSB/cntop/CNTOP.html. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Jiang, Jiamin; Younis, Rami M.
2017-10-01
In the presence of counter-current flow, nonlinear convergence problems may arise in implicit time-stepping when the popular phase-potential upwinding (PPU) scheme is used. The PPU numerical flux is non-differentiable across the co-current/counter-current flow regimes. This may lead to cycles or divergence in the Newton iterations. Recently proposed methods address improved smoothness of the numerical flux. The objective of this work is to devise and analyze an alternative numerical flux scheme called C1-PPU that, in addition to improving smoothness with respect to saturations and phase potentials, also improves the level of scalar nonlinearity and accuracy. C1-PPU involves a novel use of the flux limiter concept from the context of high-resolution methods, and allows a smooth variation between the co-current/counter-current flow regimes. The scheme is general and applies to fully coupled flow and transport formulations with an arbitrary number of phases. We analyze the consistency property of the C1-PPU scheme, and derive saturation and pressure estimates, which are used to prove the solution existence. Several numerical examples for two- and three-phase flows in heterogeneous and multi-dimensional reservoirs are presented. The proposed scheme is compared to the conventional PPU and the recently proposed Hybrid Upwinding schemes. We investigate three properties of these numerical fluxes: smoothness, nonlinearity, and accuracy. The results indicate that in addition to smoothness, nonlinearity may also be critical for convergence behavior and thus needs to be considered in the design of an efficient numerical flux scheme. Moreover, the numerical examples show that the C1-PPU scheme exhibits superior convergence properties for large time steps compared to the other alternatives.
Bayesian multi-scale smoothing of photon-limited images with applications to astronomy and medicine
NASA Astrophysics Data System (ADS)
White, John
Multi-scale models for smoothing Poisson signals or images have gained much attention over the past decade. A new Bayesian model is developed using the concept of the Chinese restaurant process to find structures in two-dimensional images when performing image reconstruction or smoothing. This new model performs very well when compared with other leading methodologies for the same problem; it is developed and evaluated theoretically and empirically throughout Chapter 2. The newly developed Bayesian model is extended to three-dimensional images in Chapter 3. The third dimension can represent numerous different quantities, such as different energy spectra, another spatial index, or possibly a temporal dimension. Empirically, this method shows promise in reducing error, as demonstrated through simulation studies. A further development removes background noise in the image; this removal can further reduce the error and is done using a modeling adjustment and post-processing techniques. These details are given in Chapter 4. Applications to real-world problems are given throughout. Photon-based images are common in astronomy because detectors collect different types of energy such as X-rays. Applications to real astronomical images are given, consisting of X-ray images from the Chandra X-ray Observatory satellite. Diagnostic medicine uses many types of imaging, such as magnetic resonance imaging and computed tomography, that can also benefit from smoothing techniques such as the one developed here. Reducing the amount of radiation a patient receives will make images noisier, but this can be mitigated through the use of image smoothing techniques. Both types of images represent potential real-world uses for these methods.
Reimold, Matthias; Slifstein, Mark; Heinz, Andreas; Mueller-Schauenburg, Wolfgang; Bares, Roland
2006-06-01
Voxelwise statistical analysis has become popular in explorative functional brain mapping with fMRI or PET. Usually, results are presented as voxelwise levels of significance (t-maps), and for clusters that survive correction for multiple testing the coordinates of the maximum t-value are reported. Before calculating a voxelwise statistical test, spatial smoothing is required to achieve reasonable statistical power. Little attention is given to the fact that smoothing has a nonlinear effect on the voxel variances and thus on the local characteristics of a t-map, which becomes most evident after smoothing over different types of tissue. We investigated the related artifacts, for example, white matter peaks whose positions depend on the relative variance (variance over contrast) of the surrounding regions, and suggest improving spatial precision with 'masked contrast images': color codes are attributed to the voxelwise contrast, and significant clusters (e.g., detected with statistical parametric mapping, SPM) are enlarged by including contiguous pixels with a contrast above the mean contrast of the original cluster, provided they satisfy P < 0.05. The potential benefit is demonstrated with simulations and data from a [11C]carfentanil PET study. We conclude that spatial smoothing may lead to critical, sometimes counterintuitive artifacts in t-maps, especially in subcortical brain regions. If significant clusters are detected, for example, with SPM, the suggested method is one way to improve spatial precision and may give the investigator a more direct sense of the underlying data. Its simplicity and the fact that no further assumptions are needed make it a useful complement to standard methods of statistical mapping.
NASA Astrophysics Data System (ADS)
Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone
2016-10-01
The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
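A minimal scaled gradient projection sketch for a smooth objective over a box constraint, assuming a fixed diagonal scaling matrix and a simple Armijo backtracking rule along the projection arc rather than the adaptive scaling and steplength rules analyzed in the paper:

import numpy as np

def project_box(x, lower, upper):
    return np.clip(x, lower, upper)

def sgp(f, grad_f, x0, lower, upper, n_iter=200, step0=1.0, scale=None):
    """Scaled projected-gradient sketch: y = P(x - step * D * grad f(x)), with Armijo backtracking."""
    x = x0.copy()
    d = np.ones_like(x) if scale is None else scale   # stand-in for the diagonal scaling matrix D_k
    for _ in range(n_iter):
        g = grad_f(x)
        step = step0
        y = project_box(x - step * d * g, lower, upper)
        # backtrack until the sufficient-decrease condition along the projection arc holds
        while f(y) > f(x) + 1e-4 * g @ (y - x) and step > 1e-12:
            step *= 0.5
            y = project_box(x - step * d * g, lower, upper)
        x = y
    return x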
NASA Astrophysics Data System (ADS)
Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo
2016-11-01
We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
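A toy sketch of the string-averaging incremental subgradient idea for minimizing a sum of convex, possibly non-smooth terms; the strings, step-size rule, and function handles are illustrative assumptions, not the paper's algorithmic details:

import numpy as np

def string_averaging_ism(subgrad_fns, x0, strings, n_iter=100, step=1e-2):
    """Minimize sum_i f_i(x); subgrad_fns[i](x) returns a subgradient of f_i at x."""
    x = x0.copy()
    for k in range(n_iter):
        alpha = step / (k + 1)            # diminishing step size
        endpoints = []
        for string in strings:             # each string could be processed in parallel
            z = x.copy()
            for i in string:               # incremental subgradient pass along the string
                z = z - alpha * subgrad_fns[i](z)
            endpoints.append(z)
        x = np.mean(endpoints, axis=0)     # average the string end-points to form the next iterate
    return x

# Example strings over six component functions: strings = [[0, 1, 2], [3, 4, 5]]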
Global solutions to the equation of thermoelasticity with fading memory
NASA Astrophysics Data System (ADS)
Okada, Mari; Kawashima, Shuichi
2017-07-01
We consider the initial-history value problem for the one-dimensional equation of thermoelasticity with fading memory. It is proved that if the data are smooth and small, then a unique smooth solution exists globally in time and converges to the constant equilibrium state as time goes to infinity. Our proof is based on a technical energy method which makes use of the strict convexity of the entropy function and the properties of strongly positive definite kernels.
A Smoothing Technique for the Multifractal Analysis of a Medium Voltage Feeders Electric Current
NASA Astrophysics Data System (ADS)
de Santis, Enrico; Sadeghian, Alireza; Rizzi, Antonello
2017-12-01
The current paper presents a data-driven detrending technique that smooths complex sinusoidal trends from a real-world electric load time series before applying Multifractal Detrended Fluctuation Analysis (MFDFA). The algorithm, which we call Smoothed Sort and Cut Fourier Detrending (SSC-FD), is based on a suitable smoothing of high-power periodicities operating directly in the Fourier spectrum through a polynomial fitting of the DFT. The main aim is to disambiguate the characteristic slowly varying periodicities, which can impair the MFDFA analysis, from the residual signal, in order to study its correlation properties. The algorithm's performance is evaluated on a simple benchmark test consisting of a persistent series with known Hurst exponent and ten superimposed sinusoidal harmonics. Moreover, the behavior of the algorithm parameters is assessed by computing the MFDFA on the well-known sunspot data, whose correlation characteristics are reported in the literature. In both cases, the SSC-FD method eliminates the apparent crossover induced by the synthetic and natural periodicities. Results are compared with some existing detrending methods within the MFDFA paradigm. Finally, a study of the multifractal characteristics of the electric load time series detrended by the SSC-FD algorithm is provided, showing strongly persistent behavior and an appreciable amplitude of the multifractal spectrum, which allows us to conclude that the series at hand has multifractal characteristics.
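A crude stand-in for the idea, not the actual SSC-FD algorithm: the sketch below simply zeroes the highest-power Fourier components carrying the slow sinusoidal trend and keeps the residual for subsequent fluctuation analysis, whereas SSC-FD instead smooths them with a polynomial fit of the DFT. The number of removed harmonics is an arbitrary choice:

import numpy as np

def remove_dominant_harmonics(x, n_remove=10):
    """Return a zero-mean residual with the n_remove strongest harmonics suppressed."""
    spectrum = np.fft.rfft(x - x.mean())
    power = np.abs(spectrum) ** 2
    dominant = np.argsort(power)[-n_remove:]   # indices of the strongest harmonics
    spectrum[dominant] = 0.0                   # SSC-FD smooths these rather than zeroing them
    return np.fft.irfft(spectrum, n=len(x))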
ERIC Educational Resources Information Center
Garcia, Abbe Marrs; Sapyta, Jeffrey J.; Moore, Phoebe S.; Freeman, Jennifer B.; Franklin, Martin E.; March, John S.; Foa, Edna B.
2010-01-01
Objective: To identify predictors and moderators of outcome in the first Pediatric OCD Treatment Study (POTS I) among youth (N = 112) randomly assigned to sertraline, cognitive behavioral therapy (CBT), both sertraline and CBT (COMB), or a pill placebo. Method: Potential baseline predictors and moderators were identified by literature review. The…
ERIC Educational Resources Information Center
Vivo, Juana-Maria; Franco, Manuel
2008-01-01
This article attempts to present a novel application of a method of measuring accuracy for academic success predictors that could be used as a standard. This procedure is known as the receiver operating characteristic (ROC) curve, which comes from statistical decision techniques. The statistical prediction techniques provide predictor models and…
2014-10-30
Air Force Weather Agency (AFWA) WRF 15-km atmospheric model forecast data and low-level turbulence. Archives of historical model data forecast predictors... Relationships between WRF model predictors and PIREPS were developed using the new data mining methodology. The new methodology was inspired... convection. Predictors of turbulence were collected from the AFWA WRF 15-km model, and corresponding PIREPS (the predictand) were collected between 2013
Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won
2011-01-01
To begin a zero accident campaign for industry, the first thing is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical change of the business environment after beginning the zero accident campaign through quantitative time series analysis methods. These methods include sum of squared errors (SSE), regression analysis method (RAM), exponential smoothing method (ESM), double exponential smoothing method (DESM), auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). The program is developed to estimate the accident rate, zero accident time and achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop a zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
Bayer Demosaicking with Polynomial Interpolation.
Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil
2016-08-30
Demosaicking is a digital image process that reconstructs full-color digital images from the incomplete color samples produced by an image sensor. It is an unavoidable step for many devices incorporating a camera sensor (e.g., mobile phones, tablets, etc.). In this paper, we introduce a new polynomial interpolation-based demosaicking (PID) algorithm. Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted-sum strategy. Our new predictors are generated on the basis of polynomial interpolation and can be used as a sound alternative to other predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance the image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB E, and FSIM) and visual performance.
Ellerbe, Caitlyn; Lawson, Andrew B.; Alia, Kassandra A.; Meyers, Duncan C.; Coulon, Sandra M.; Lawman, Hannah G.
2013-01-01
Background: This study examined the effects of imputation modeling of spatial proximity and social factors on walking in African American adults. Purpose: Models were compared that examined relationships between household proximity to a walking trail and social factors in determining walking status. Methods: Participants (N = 133; 66% female; mean age = 55 yrs) were recruited to a police-supported walking and social marketing intervention. Bayesian modeling was used to identify predictors of walking at 12 months. Results: Sensitivity analyses using different imputation approaches and spatial contextual effects were compared. All the imputation methods showed that social life and income were significant predictors of walking; however, the complete-data approach was the best model, indicating that age (OR = 1.04, 95% CI: 1.00, 1.08), social life (OR = 0.83, 95% CI: 0.69, 0.98) and income > $10,000 (OR = 0.10, 95% CI: 0.01, 0.97) were all predictors of walking. Conclusions: The complete-data approach provided the best model of predictors of walking in African Americans. PMID:23481250
Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A
2010-07-01
Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.
Method for producing highly reflective metal surfaces
Arnold, J.B.; Steger, P.J.; Wright, R.R.
1982-03-04
The invention is a novel method for producing mirror surfaces which are extremely smooth and which have high optical reflectivity. The method includes depositing, by electrolysis, an amorphous layer of nickel on an article and then diamond-machining the resulting nickel surface to increase its smoothness and reflectivity. The machined nickel surface then is passivated with respect to the formation of bonds with electrodeposited nickel. Nickel then is electrodeposited on the passivated surface to form a layer of electroplated nickel whose inside surface is a replica of the passivated surface. The electroplated nickel layer then is separated from the passivated surface. The mandrel then may be re-passivated and provided with a layer of electrodeposited nickel, which is then recovered from the mandrel providing a second replica. The mandrel can be so re-used to provide many such replicas. As compared with producing each mirror-finished article by plating and diamond-machining, the new method is faster and less expensive.
Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.
Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan
2011-11-01
When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
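The variability issue can be illustrated with the non-oracle Lasso as a stand-in for SCAD and the Adaptive Lasso: repeated 10-fold cross-validation on weak-signal, high-dimensional synthetic data gives a noticeably variable count of selected variables. All sample sizes and signal strengths below are arbitrary choices, not values from the paper:

import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, n_true = 200, 500, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:n_true] = 0.15                         # deliberately small signals, as in SNP-style studies
y = X @ beta + rng.standard_normal(n)

counts = []
for rep in range(20):
    folds = KFold(n_splits=10, shuffle=True, random_state=rep)   # only the fold split changes
    model = LassoCV(cv=folds).fit(X, y)
    counts.append(int(np.sum(model.coef_ != 0)))
print(min(counts), max(counts))              # the spread across repeats illustrates the instability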
Cosmological parameter estimation using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Prasad, J.; Souradeep, T.
2014-03-01
Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and these make the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
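A bare-bones PSO sketch for minimizing a negative log-likelihood over box bounds; the inertia and acceleration constants are generic textbook values and the objective handle is a placeholder, not the paper's CMB likelihood:

import numpy as np

def pso(neg_log_like, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """bounds is an array of shape (n_params, 2) giving lower/upper limits for each parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([neg_log_like(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)                                   # keep particles inside the bounds
        vals = np.array([neg_log_like(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest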
NASA Astrophysics Data System (ADS)
Capecelatro, Jesse
2018-03-01
It has long been suggested that a purely Lagrangian solution to global-scale atmospheric/oceanic flows could potentially outperform traditional Eulerian schemes, yet a demonstration of a scalable and practical framework remains elusive. Motivated by recent progress in particle-based methods applied to convection-dominated flows, this work presents a fully Lagrangian method for solving the inviscid shallow water equations on a rotating sphere in a smoothed particle hydrodynamics framework. To avoid singularities at the poles, the governing equations are solved in Cartesian coordinates, augmented with a Lagrange multiplier to ensure that fluid particles are constrained to the surface of the sphere. An underlying grid in spherical coordinates is used to facilitate efficient neighbor detection and parallelization. The method is applied to a suite of canonical test cases, and conservation, accuracy, and parallel performance are assessed.
Method for producing highly reflective metal surfaces
Arnold, Jones B.; Steger, Philip J.; Wright, Ralph R.
1983-01-01
The invention is a novel method for producing mirror surfaces which are extremely smooth and which have high optical reflectivity. The method includes electrolessly depositing an amorphous layer of nickel on an article and then diamond-machining the resulting nickel surface to increase its smoothness and reflectivity. The machined nickel surface then is passivated with respect to the formation of bonds with electrodeposited nickel. Nickel then is electrodeposited on the passivated surface to form a layer of electroplated nickel whose inside surface is a replica of the passivated surface. The electroplated nickel layer then is separated from the passivated surface. The mandrel then may be re-passivated and provided with a layer of electrodeposited nickel, which is then recovered from the mandrel providing a second replica. The mandrel can be so re-used to provide many such replicas. As compared with producing each mirror-finished article by plating and diamond-machining, the new method is faster and less expensive.
Retaining both discrete and smooth features in 1D and 2D NMR relaxation and diffusion experiments
NASA Astrophysics Data System (ADS)
Reci, A.; Sederman, A. J.; Gladden, L. F.
2017-11-01
A new method of regularization of 1D and 2D NMR relaxation and diffusion experiments is proposed and a robust algorithm for its implementation is introduced. The new form of regularization, termed the Modified Total Generalized Variation (MTGV) regularization, offers a compromise between distinguishing discrete and smooth features in the reconstructed distributions. The method is compared to the conventional method of Tikhonov regularization and the recently proposed method of L1 regularization, when applied to simulated data of 1D spin-lattice relaxation, T1, 1D spin-spin relaxation, T2, and 2D T1-T2 NMR experiments. A range of simulated distributions composed of two lognormally distributed peaks were studied. The distributions differed with regard to the variance of the peaks, which were designed to investigate a range of distributions containing only discrete, only smooth or both features in the same distribution. Three different signal-to-noise ratios were studied: 2000, 200 and 20. A new metric is proposed to compare the distributions reconstructed from the different regularization methods with the true distributions. The metric is designed to penalise reconstructed distributions which show artefact peaks. Based on this metric, MTGV regularization performs better than Tikhonov and L1 regularization in all cases except when the distribution is known to only comprise of discrete peaks, in which case L1 regularization is slightly more accurate than MTGV regularization.
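As a baseline for the comparison described above (MTGV itself is not sketched here), non-negative Tikhonov inversion of a 1D T2 relaxation kernel can be written as a single NNLS problem on an augmented system; the kernel form, T2 grid, and regularization weight below are illustrative assumptions:

import numpy as np
from scipy.optimize import nnls

def tikhonov_t2_inversion(t, decay, T2_grid, lam=1.0):
    """Solve min ||K f - decay||^2 + lam ||f||^2 subject to f >= 0, with K[i, j] = exp(-t_i / T2_j)."""
    K = np.exp(-np.outer(t, 1.0 / T2_grid))
    # Stacking K with sqrt(lam) * I turns the penalized problem into one non-negative least squares solve.
    A = np.vstack([K, np.sqrt(lam) * np.eye(len(T2_grid))])
    b = np.concatenate([decay, np.zeros(len(T2_grid))])
    f, _ = nnls(A, b)
    return f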
The Initiation of Smooth Pursuit is Delayed in Anisometropic Amblyopia
Raashid, Rana Arham; Liu, Ivy Ziqian; Blakeman, Alan; Goltz, Herbert C.; Wong, Agnes M. F.
2016-01-01
Purpose Several behavioral studies have shown that the reaction times of visually guided movements are slower in people with amblyopia, particularly during amblyopic eye viewing. Here, we tested the hypothesis that the initiation of smooth pursuit eye movements, which are responsible for accurately keeping moving objects on the fovea, is delayed in people with anisometropic amblyopia. Methods Eleven participants with anisometropic amblyopia and 14 visually normal observers were asked to track a step-ramp target moving at ±15°/s horizontally as quickly and as accurately as possible. The experiment was conducted under three viewing conditions: amblyopic/nondominant eye, binocular, and fellow/dominant eye viewing. Outcome measures were smooth pursuit latency, open-loop gain, steady state gain, and catch-up saccade frequency. Results Participants with anisometropic amblyopia initiated smooth pursuit significantly slower during amblyopic eye viewing (206 ± 20 ms) than visually normal observers viewing with their nondominant eye (183 ± 17 ms, P = 0.002). However, mean pursuit latency in the anisometropic amblyopia group during binocular and monocular fellow eye viewing was comparable to the visually normal group. Mean open-loop gain, steady state gain, and catch-up saccade frequency were similar between the two groups, but participants with anisometropic amblyopia exhibited more variable steady state gain (P = 0.045). Conclusions This study provides evidence of temporally delayed smooth pursuit initiation in anisometropic amblyopia. After initiation, the smooth pursuit velocity profile in anisometropic amblyopia participants is similar to visually normal controls. This finding differs from what has been observed previously in participants with strabismic amblyopia who exhibit reduced smooth pursuit velocity gains with more catch-up saccades. PMID:27070109
Adam, Ryan J.; Hisert, Katherine B.; Dodd, Jonathan D.; Grogan, Brenda; Launspach, Janice L.; Barnes, Janel K.; Gallagher, Charles G.; Sieren, Jered P.; Gross, Thomas J.; Fischer, Anthony J.; Cavanaugh, Joseph E.; Hoffman, Eric A.; Singh, Pradeep K.; Welsh, Michael J.; McKone, Edward F.; Stoltz, David A.
2016-01-01
BACKGROUND. Airflow obstruction is common in cystic fibrosis (CF), yet the underlying pathogenesis remains incompletely understood. People with CF often exhibit airway hyperresponsiveness, CF transmembrane conductance regulator (CFTR) is present in airway smooth muscle (ASM), and ASM from newborn CF pigs has increased contractile tone, suggesting that loss of CFTR causes a primary defect in ASM function. We hypothesized that restoring CFTR activity would decrease smooth muscle tone in people with CF. METHODS. To increase or potentiate CFTR function, we administered ivacaftor to 12 adults with CF with the G551D-CFTR mutation; ivacaftor stimulates G551D-CFTR function. We studied people before and immediately after initiation of ivacaftor (48 hours) to minimize secondary consequences of CFTR restoration. We tested smooth muscle function by investigating spirometry, airway distensibility, and vascular tone. RESULTS. Ivacaftor rapidly restored CFTR function, indicated by reduced sweat chloride concentration. Airflow obstruction and air trapping also improved. Airway distensibility increased in airways less than 4.5 mm but not in larger-sized airways. To assess smooth muscle function in a tissue outside the lung, we measured vascular pulse wave velocity (PWV) and augmentation index, which both decreased following CFTR potentiation. Finally, change in distensibility of <4.5-mm airways correlated with changes in PWV. CONCLUSIONS. Acute CFTR potentiation provided a unique opportunity to investigate CFTR-dependent mechanisms of CF pathogenesis. The rapid effects of ivacaftor on airway distensibility and vascular tone suggest that CFTR dysfunction may directly cause increased smooth muscle tone in people with CF and that ivacaftor may relax smooth muscle. FUNDING. This work was funded in part from an unrestricted grant from the Vertex Investigator-Initiated Studies Program. PMID:27158673
ERIC Educational Resources Information Center
Fisher, Evelyn L.
2017-01-01
Purpose: The purpose of this study was to explore the literature on predictors of outcomes among late talkers using systematic review and meta-analysis methods. We sought to answer the question: What factors predict preschool-age expressive-language outcomes among late-talking toddlers? Method: We entered carefully selected search terms into the…
Chaplin, Nathan L.; Nieves-Cintrón, Madeline; Fresquez, Adriana M.; Navedo, Manuel F.; Amberg, Gregory C.
2015-01-01
Rationale Mitochondria are key integrators of convergent intracellular signaling pathways. Two important second messengers modulated by mitochondria are calcium and reactive oxygen species. To date, coherent mechanisms describing mitochondrial integration of calcium and oxidative signaling in arterial smooth muscle are incomplete. Objective To address and add clarity to this issue, we tested the hypothesis that mitochondria regulate subplasmalemmal calcium and hydrogen peroxide microdomain signaling in cerebral arterial smooth muscle. Methods and Results Using an image-based approach, we investigated the impact of mitochondrial regulation of L-type calcium channels on subcellular calcium and ROS signaling microdomains in isolated arterial smooth muscle cells. Our single cell observations were then related experimentally to intact arterial segments and to living animals. We found that subplasmalemmal mitochondrial amplification of hydrogen peroxide microdomain signaling stimulates L-type calcium channels and that this mechanism strongly impacts the functional capacity of the vasoconstrictor angiotensin II. Importantly, we also found that disrupting this mitochondrial amplification mechanism in vivo normalized arterial function and attenuated the hypertensive response to systemic endothelial dysfunction. Conclusions From these observations we conclude that mitochondrial amplification of subplasmalemmal calcium and hydrogen peroxide microdomain signaling is a fundamental mechanism regulating arterial smooth muscle function. As the principal components involved are fairly ubiquitous and positioning of mitochondria near the plasma membrane is not restricted to arterial smooth muscle, this mechanism could occur in many cell types and contribute to pathological elevations of intracellular calcium and increased oxidative stress associated with many diseases. PMID:26390880
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jassal, K; Sarkar, B; Ganesh, T
Purpose: The study investigates the effect of the fluence smoothing parameter on VMAT plans for ten head-neck cancer patients using Monaco 5.00.04. Methods: VMAT plans were created using the Monaco 5.00.04 planning system for 10 head-neck patients. Four plans were generated for each patient using the available smoothing parameters, i.e., high, medium, low and off. The number of monitor units required to deliver 1 cGy was defined as the modulation degree and was taken as a measure of plan complexity. The routinely used plan quality parameters Conformity Index (CI) and Homogeneity Index (HI) were used in the study. As a protocol, our center practices "medium" smoothing for clinical implementation. Plans with medium smoothing were chosen as reference plans due to the clinical acceptance and dosimetric verifications made on these plans. Plans were generated by varying the smoothing parameter and re-optimization was done. The PTV was evaluated for D98%, D95%, D50%, D1% and prescription isodose volume (PIV). For the critical organs (spine and parotids), the parameters recorded were D1cc and Dmean, respectively. Results: The cohort had a median prescription of 6000 cGy (range 4500-6600 cGy). The modulation degree was observed to increase by up to 6% from the reference to the most complex plan. High smoothing had about an 11% increase in segments, which marginally (0.5 to 1%) increased the homogeneity index while the conformity index remained constant. For the spine, the maximum D1cc (4639.8 cGy) was observed with medium smoothing; this plan was clinically accepted and dosimetrically verified. Similarly, for the parotids, the Dmean was 2011.9 cGy and 1817.05 cGy. Conclusion: The sensitivity of plan quality to the smoothing options (high, medium, low and off) available in Monaco 5.00.04 resulted in minimal differences in terms of target coverage, conformity index and homogeneity index. Similarly, changing the smoothing did not provide any added advantage in sparing of critical organs.
Evaluation of earthquake potential in China
NASA Astrophysics Data System (ADS)
Rong, Yufang
I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a kind of Gutenberg-Richter magnitude distribution with modifications at higher magnitude. The assumed magnitude distribution depends on three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimations, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model fits the earthquake data better than the GSHAP model. By smoothing geodetic strain rate, another potential model was constructed and tested. I derived the upper magnitude limit from the special catalog, and assumed local "a-values" proportional to geodetic strain rates. "Prospective" tests show that the geodetic strain rate model is quite compatible with earthquakes. By assuming the smoothed seismicity model as a null hypothesis, I tested every other model against it. Test results indicate that the smoothed seismicity model performs best.
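To make the smoothed-seismicity idea concrete, the sketch below spreads each past epicentre over a grid with a kernel that decays as a negative power of distance and is truncated a few hundred kilometres out, as described above. The kernel parameters, distance model, and magnitude weighting are illustrative assumptions, not the calibration used in the study.

# Minimal sketch: a simplified smoothed-seismicity rate density on a lon/lat grid.
import numpy as np

def smoothed_rate(grid_lon, grid_lat, eq_lon, eq_lat, eq_mag,
                  d0_km=10.0, power=1.5, r_max_km=300.0):
    """Relative rate density from past events, using flat-earth distances."""
    km_per_deg = 111.0
    rate = np.zeros((grid_lat.size, grid_lon.size))
    for lon, lat, mag in zip(eq_lon, eq_lat, eq_mag):
        dx = (grid_lon[None, :] - lon) * km_per_deg * np.cos(np.radians(lat))
        dy = (grid_lat[:, None] - lat) * km_per_deg
        r = np.sqrt(dx**2 + dy**2)
        kernel = (1.0 + r / d0_km) ** (-power)     # power-law decay with distance
        kernel[r > r_max_km] = 0.0                 # truncate a few hundred km out
        rate += 10.0 ** (0.5 * mag) * kernel       # heavier weight for larger events (assumption)
    return rate / rate.sum()

# Tiny synthetic example with two events
lon = np.linspace(100.0, 110.0, 50)
lat = np.linspace(30.0, 40.0, 50)
density = smoothed_rate(lon, lat, eq_lon=[104.0, 106.5], eq_lat=[35.0, 31.5], eq_mag=[6.1, 5.4])
print(density.shape, density.max())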
Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein
2016-01-01
Two new soft computing models, namely genetic programming (GP) and genetic artificial algorithm (GAA) neural network (a combination of modified genetic algorithm and artificial neural network methods), were developed in order to predict the percentage of shear force in a rectangular channel with non-homogeneous roughness. The ability of these methods to estimate the percentage of shear force was investigated. Moreover, the independent parameters' effectiveness in predicting the percentage of shear force was determined using sensitivity analysis. According to the results, the GP model demonstrated superior performance to the GAA model. A comparison was also made between the GP program determined as the best model and five equations obtained in prior research. The GP model, with the lowest error values (root mean square error (RMSE) of 0.0515), performed best compared with the other equations presented for rough and smooth channels as well as smooth ducts. The equation proposed for rectangular channels with rough boundaries (RMSE of 0.0642) outperformed the prior equations for smooth boundaries.
Pace, Danielle F.; Aylward, Stephen R.; Niethammer, Marc
2014-01-01
We propose a deformable image registration algorithm that uses anisotropic smoothing for regularization to find correspondences between images of sliding organs. In particular, we apply the method for respiratory motion estimation in longitudinal thoracic and abdominal computed tomography scans. The algorithm uses locally adaptive diffusion tensors to determine the direction and magnitude with which to smooth the components of the displacement field that are normal and tangential to an expected sliding boundary. Validation was performed using synthetic, phantom, and 14 clinical datasets, including the publicly available DIR-Lab dataset. We show that motion discontinuities caused by sliding can be effectively recovered, unlike conventional regularizations that enforce globally smooth motion. In the clinical datasets, target registration error showed improved accuracy for lung landmarks compared to the diffusive regularization. We also present a generalization of our algorithm to other sliding geometries, including sliding tubes (e.g., needles sliding through tissue, or contrast agent flowing through a vessel). Potential clinical applications of this method include longitudinal change detection and radiotherapy for lung or abdominal tumours, especially those near the chest or abdominal wall. PMID:23899632
Pace, Danielle F; Aylward, Stephen R; Niethammer, Marc
2013-11-01
We propose a deformable image registration algorithm that uses anisotropic smoothing for regularization to find correspondences between images of sliding organs. In particular, we apply the method for respiratory motion estimation in longitudinal thoracic and abdominal computed tomography scans. The algorithm uses locally adaptive diffusion tensors to determine the direction and magnitude with which to smooth the components of the displacement field that are normal and tangential to an expected sliding boundary. Validation was performed using synthetic, phantom, and 14 clinical datasets, including the publicly available DIR-Lab dataset. We show that motion discontinuities caused by sliding can be effectively recovered, unlike conventional regularizations that enforce globally smooth motion. In the clinical datasets, target registration error showed improved accuracy for lung landmarks compared to the diffusive regularization. We also present a generalization of our algorithm to other sliding geometries, including sliding tubes (e.g., needles sliding through tissue, or contrast agent flowing through a vessel). Potential clinical applications of this method include longitudinal change detection and radiotherapy for lung or abdominal tumours, especially those near the chest or abdominal wall.
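As a rough illustration of the direction-dependent regularisation described in the two records above, the sketch below smooths a toy 2D displacement field isotropically away from a sliding interface but only tangentially within a thin band around it, so the sliding discontinuity across the interface survives. The horizontal boundary, band width, and plain Gaussian filtering are simplifying assumptions; the papers' locally adaptive diffusion tensors and full registration framework are not reproduced.

# Minimal sketch: direction-dependent smoothing of a 2D displacement field.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

def anisotropic_smooth(ux, uy, boundary_row, band=5, sigma=2.0):
    """Smooth (ux, uy) isotropically away from the boundary, but only along the
    boundary (tangential direction, axis=1) inside a thin band around it."""
    ux_s = gaussian_filter(ux, sigma)
    uy_s = gaussian_filter(uy, sigma)
    lo, hi = boundary_row - band, boundary_row + band
    ux_s[lo:hi, :] = gaussian_filter1d(ux[lo:hi, :], sigma, axis=1)
    uy_s[lo:hi, :] = gaussian_filter1d(uy[lo:hi, :], sigma, axis=1)
    return ux_s, uy_s

# Toy field: two regions sliding in opposite x-directions across row 32
ux = np.where(np.arange(64)[:, None] < 32, 1.0, -1.0) * np.ones((64, 64))
uy = np.zeros((64, 64))
ux_s, uy_s = anisotropic_smooth(ux, uy, boundary_row=32)
print(ux_s[30, 30], ux_s[34, 30])   # opposite signs are preserved across the boundary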
Park, Jong Hyuk; Nagpal, Prashant; McPeak, Kevin M; Lindquist, Nathan C; Oh, Sang-Hyun; Norris, David J
2013-10-09
The template-stripping method can yield smooth patterned films without surface contamination. However, the process is typically limited to coinage metals such as silver and gold because other materials cannot be readily stripped from silicon templates due to strong adhesion. Herein, we report a more general template-stripping method that is applicable to a larger variety of materials, including refractory metals, semiconductors, and oxides. To address the adhesion issue, we introduce a thin gold layer between the template and the deposited materials. After peeling off the combined film from the template, the gold layer can be selectively removed via wet etching to reveal a smooth patterned structure of the desired material. Further, we demonstrate template-stripped multilayer structures that have potential applications for photovoltaics and solar absorbers. An entire patterned device, which can include a transparent conductor, semiconductor absorber, and back contact, can be fabricated. Since our approach can also produce many copies of the patterned structure with high fidelity by reusing the template, a low-cost and high-throughput process in micro- and nanofabrication is provided that is useful for electronics, plasmonics, and nanophotonics.
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik
1996-01-01
For a space mission to be successful it is vitally important to have a good control strategy. For example, with the Space Shuttle it is necessary to guarantee the success and smoothness of docking, the smoothness and fuel efficiency of trajectory control, etc. For an automated planetary mission it is important to control the spacecraft's trajectory, and after that, to control the planetary rover so that it would be operable for the longest possible period of time. In many complicated control situations, traditional methods of control theory are difficult or even impossible to apply. In general, in uncertain situations, where no routine methods are directly applicable, we must rely on the creativity and skill of the human operators. In order to simulate these experts, an intelligent control methodology must be developed. The research objectives of this project were: to analyze existing control techniques; to find out which of these techniques is the best with respect to the basic optimality criteria (stability, smoothness, robustness); and, if for some problems, none of the existing techniques is satisfactory, to design new, better intelligent control techniques.
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
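To make the maximum entropy side of this comparison concrete, the sketch below computes a grid-based maximum entropy density constrained to match the first two sample moments by minimising the convex dual objective. The grid, constraints, and optimiser are illustrative choices; the Bayesian field theory estimator and the software released with the paper are not reproduced here.

# Minimal sketch: 1D maximum entropy density matching the first two sample moments.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
samples = rng.normal(2.0, 1.5, size=500)
x = np.linspace(samples.min() - 3, samples.max() + 3, 400)
dx = x[1] - x[0]
moments = np.vstack([x, x**2])                      # constraint functions on the grid
targets = np.array([samples.mean(), (samples**2).mean()])

def dual_objective(lam):
    # Exponential-family form p(x) ~ exp(lam1*x + lam2*x^2); minimising
    # logZ(lam) - lam.targets makes the model moments match the sample moments.
    logp = lam @ moments
    m = logp.max()
    logZ = m + np.log(np.sum(np.exp(logp - m)) * dx)
    return logZ - lam @ targets

res = minimize(dual_objective, x0=np.array([0.0, -0.1]), method="Nelder-Mead")
p = np.exp(res.x @ moments)
p /= np.sum(p) * dx
mean_est = np.sum(x * p) * dx
var_est = np.sum(x**2 * p) * dx - mean_est**2
print("maxent mean, variance:", mean_est, var_est)   # should be close to the sample moments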
NASA Astrophysics Data System (ADS)
Eghtesad, Adnan; Knezevic, Marko
2018-07-01
A corrective smooth particle method (CSPM) within smooth particle hydrodynamics (SPH) is used to study the deformation of an aircraft structure under high-velocity water-ditching impact load. The CSPM-SPH method features a new approach for the prediction of two-way fluid-structure interaction coupling. Results indicate that the implementation is well suited for modeling the deformation of structures under high-velocity impact into water as evident from the predicted stress and strain localizations in the aircraft structure as well as the integrity of the impacted interfaces, which show no artificial particle penetrations. To reduce the simulation time, a heterogeneous particle size distribution over a complex three-dimensional geometry is used. The variable particle size is achieved from a finite element mesh with variable element size and, as a result, variable nodal (i.e., SPH particle) spacing. To further accelerate the simulations, the SPH code is ported to a graphics processing unit using the OpenACC standard. The implementation and simulation results are described and discussed in this paper.
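For readers unfamiliar with the particle formalism underlying CSPM-SPH, the sketch below shows the two most basic ingredients: a cubic-spline smoothing kernel and the density summation over neighbouring particles. Particle counts, masses, and the smoothing length are made-up values; the corrective terms, fluid-structure coupling, variable particle sizing, and GPU acceleration discussed above are not shown.

# Minimal sketch: SPH density summation with a standard 3D cubic-spline kernel.
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 3D cubic-spline smoothing kernel W(r, h) with support radius 2h."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """rho_i = sum_j m_j W(|r_i - r_j|, h), summed over all particles."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

# Small block of water-like particles (boundary deficiency lowers estimates near the faces)
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 0.1, size=(200, 3))
m = np.full(200, 1000.0 * (0.1**3) / 200)      # total mass of a 0.1 m cube of water
rho = sph_density(pos, m, h=0.02)
print("mean density estimate [kg/m^3]:", rho.mean())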
Unification of field theory and maximum entropy methods for learning probability densities.
Kinney, Justin B
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
EIT Imaging Regularization Based on Spectral Graph Wavelets.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut
2017-09-01
The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
NASA Astrophysics Data System (ADS)
Eghtesad, Adnan; Knezevic, Marko
2017-12-01
A corrective smooth particle method (CSPM) within smooth particle hydrodynamics (SPH) is used to study the deformation of an aircraft structure under high-velocity water-ditching impact load. The CSPM-SPH method features a new approach for the prediction of two-way fluid-structure interaction coupling. Results indicate that the implementation is well suited for modeling the deformation of structures under high-velocity impact into water as evident from the predicted stress and strain localizations in the aircraft structure as well as the integrity of the impacted interfaces, which show no artificial particle penetrations. To reduce the simulation time, a heterogeneous particle size distribution over a complex three-dimensional geometry is used. The variable particle size is achieved from a finite element mesh with variable element size and, as a result, variable nodal (i.e., SPH particle) spacing. To further accelerate the simulations, the SPH code is ported to a graphics processing unit using the OpenACC standard. The implementation and simulation results are described and discussed in this paper.
Banno, Masaki; Komiyama, Yusuke; Cao, Wei; Oku, Yuya; Ueki, Kokoro; Sumikoshi, Kazuya; Nakamura, Shugo; Terada, Tohru; Shimizu, Kentaro
2017-02-01
Several methods have been proposed for protein-sugar binding site prediction using machine learning algorithms. However, they are not effective at learning the various properties of binding site residues caused by the various interactions between proteins and sugars. In this study, we classified sugars into acidic and nonacidic sugars and showed that their binding sites have different amino acid occurrence frequencies. By using this result, we developed sugar-binding residue predictors dedicated to the two classes of sugars: an acidic sugar binding predictor and a nonacidic sugar binding predictor. We also developed a combination predictor which combines the results of the two predictors. We showed that when a sugar is known to be an acidic sugar, the acidic sugar binding predictor achieves the best performance, and showed that when a sugar is known to be a nonacidic sugar or is not known to be either of the two classes, the combination predictor achieves the best performance. Our method uses only amino acid sequences for prediction. A support vector machine was used as the machine learning algorithm and the position-specific scoring matrix created by the position-specific iterative basic local alignment search tool was used as the feature vector. We evaluated the performance of the predictors using five-fold cross-validation. We have launched our system as an open source freeware tool on the GitHub repository (https://doi.org/10.5281/zenodo.61513). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Learning Spatially-Smooth Mappings in Non-Rigid Structure from Motion
Hamsici, Onur C.; Gotardo, Paulo F.U.; Martinez, Aleix M.
2013-01-01
Non-rigid structure from motion (NRSFM) is a classical underconstrained problem in computer vision. A common approach to make NRSFM more tractable is to constrain 3D shape deformation to be smooth over time. This constraint has been used to compress the deformation model and reduce the number of unknowns that are estimated. However, temporal smoothness cannot be enforced when the data lacks temporal ordering and its benefits are less evident when objects undergo abrupt deformations. This paper proposes a new NRSFM method that addresses these problems by considering deformations as spatial variations in shape space and then enforcing spatial, rather than temporal, smoothness. This is done by modeling each 3D shape coefficient as a function of its input 2D shape. This mapping is learned in the feature space of a rotation invariant kernel, where spatial smoothness is intrinsically defined by the mapping function. As a result, our model represents shape variations compactly using custom-built coefficient bases learned from the input data, rather than a pre-specified set such as the Discrete Cosine Transform. The resulting kernel-based mapping is a by-product of the NRSFM solution and leads to another fundamental advantage of our approach: for a newly observed 2D shape, its 3D shape is recovered by simply evaluating the learned function. PMID:23946937
Learning Spatially-Smooth Mappings in Non-Rigid Structure from Motion.
Hamsici, Onur C; Gotardo, Paulo F U; Martinez, Aleix M
2012-01-01
Non-rigid structure from motion (NRSFM) is a classical underconstrained problem in computer vision. A common approach to make NRSFM more tractable is to constrain 3D shape deformation to be smooth over time. This constraint has been used to compress the deformation model and reduce the number of unknowns that are estimated. However, temporal smoothness cannot be enforced when the data lacks temporal ordering and its benefits are less evident when objects undergo abrupt deformations. This paper proposes a new NRSFM method that addresses these problems by considering deformations as spatial variations in shape space and then enforcing spatial, rather than temporal, smoothness. This is done by modeling each 3D shape coefficient as a function of its input 2D shape. This mapping is learned in the feature space of a rotation invariant kernel, where spatial smoothness is intrinsically defined by the mapping function. As a result, our model represents shape variations compactly using custom-built coefficient bases learned from the input data, rather than a pre-specified set such as the Discrete Cosine Transform. The resulting kernel-based mapping is a by-product of the NRSFM solution and leads to another fundamental advantage of our approach: for a newly observed 2D shape, its 3D shape is recovered by simply evaluating the learned function.
Use of a genetic algorithm for the analysis of eye movements from the linear vestibulo-ocular reflex
NASA Technical Reports Server (NTRS)
Shelhamer, M.
2001-01-01
It is common in vestibular and oculomotor testing to use a single-frequency (sine) or combination of frequencies [sum-of-sines (SOS)] stimulus for head or target motion. The resulting eye movements typically contain a smooth tracking component, which follows the stimulus, in which are interspersed rapid eye movements (saccades or fast phases). The parameters of the smooth tracking--the amplitude and phase of each component frequency--are of interest; many methods have been devised that attempt to identify and remove the fast eye movements from the smooth. We describe a new approach to this problem, tailored to both single-frequency and sum-of-sines stimulation of the human linear vestibulo-ocular reflex. An approximate derivative is used to identify fast movements, which are then omitted from further analysis. The remaining points form a series of smooth tracking segments. A genetic algorithm is used to fit these segments together to form a smooth (but disconnected) wave form, by iteratively removing biases due to the missing fast phases. A genetic algorithm is an iterative optimization procedure; it provides a basis for extending this approach to more complex stimulus-response situations. In the SOS case, the genetic algorithm estimates the amplitude and phase values of the component frequencies as well as removing biases.
Lu, Dan; Zhang, Guannan; Webster, Clayton G.; ...
2016-12-30
In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a very large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost through the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as the fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
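The central trick described above, replacing the discontinuous indicator in the CDF estimator with a smooth surrogate so that level differences have smaller variance, can be illustrated on a toy two-level problem. The fine and coarse "models", the sigmoid form, and the bandwidth delta below are assumptions for illustration; the paper's a posteriori calibration of the smoothing function is not reproduced.

# Minimal sketch: a smoothed indicator for (multilevel) Monte Carlo CDF estimation.
import numpy as np

def smooth_indicator(q, s, delta):
    """Smooth surrogate for 1{q <= s}; recovers the step function as delta -> 0."""
    return 1.0 / (1.0 + np.exp(-(s - q) / delta))

rng = np.random.default_rng(0)
n = 20_000
q_fine = rng.lognormal(mean=0.0, sigma=0.5, size=n)        # fine-level QoI samples
q_coarse = q_fine + rng.normal(0.0, 0.05, size=n)          # correlated coarse-level samples

s, delta = 1.5, 0.05
for f, name in [(lambda q: (q <= s).astype(float), "indicator"),
                (lambda q: smooth_indicator(q, s, delta), "smoothed")]:
    level_diff = f(q_fine) - f(q_coarse)
    print(f"{name:9s} level-difference variance: {level_diff.var():.2e}")
# The smoothed level differences typically have smaller variance, which is what lets the
# multilevel estimator allocate fewer samples to the expensive fine level, at the cost of a
# smoothing bias controlled by delta.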
Maximum Path Information and Fokker Planck Equation
NASA Astrophysics Data System (ADS)
Li, Wei; Wang, Q. A.; LeMehaute, A.
2008-04-01
We present a rigorous method to derive the nonlinear Fokker-Planck (FP) equation of anomalous diffusion directly from a generalization of the principle of least action of Maupertuis proposed by Wang [Chaos, Solitons & Fractals 23 (2005) 1253] for smooth or quasi-smooth irregular dynamics evolving in a Markovian process. The FP equation obtained may take two different but equivalent forms. It was also found that the diffusion constant may depend on both q (the index of Tsallis entropy [J. Stat. Phys. 52 (1988) 479]) and the time t.
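For orientation only, a frequently quoted form of a nonlinear Fokker-Planck equation in the Tsallis framework is given below; it is a reference point for the reader and is not necessarily the exact equation derived in this work:

\partial_t P(x,t) = -\partial_x \left[ F(x)\, P(x,t) \right] + D\, \partial_x^2 \left[ P(x,t) \right]^{2-q},

which reduces to the ordinary linear Fokker-Planck equation when q = 1, with F the drift force and D a diffusion constant that, as noted above, may itself depend on q and the time t.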
Fabrication and Characterization of Polyvinylidene Fluoride Microfilms for Microfluidic Applications
NASA Astrophysics Data System (ADS)
Rao, Yammani Venkat Subba; Raghavan, Aravinda Narayanan; Viswanathan, Meenakshi
2016-10-01
The ability to create patterns of piezo-responsive material on a smooth substrate is an important route to developing efficient microfluidic mixers. This paper reports the fabrication of polyvinylidene fluoride (PVDF) microfilms using spin-coating on a smooth glass surface. The crystalline phases, surface morphology, and microstructural properties of the PVDF films have been investigated. We found that films with an average thickness of 10 μm had an average roughness of 0.13 μm. These PVDF films are useful in microfluidic mixer applications.
APC: A New Code for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.
2014-01-01
A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.
2013-02-06
high order and smoothness. Consequently, the use of IGA for collocation suggests itself, since spline functions such as NURBS or T-splines can be...for the development of higher-order accurate time integration schemes due to the convergence of the high modes in the eigenspectrum [46] as well as...flows [19, 20, 49–52]. Due to their maximum smoothness, B-splines exhibit a high resolution power, which allows the representation of a broad range
High accurate interpolation of NURBS tool path for CNC machine tools
NASA Astrophysics Data System (ADS)
Liu, Qiang; Liu, Huan; Yuan, Songmei
2016-09-01
Feedrate fluctuation caused by approximation errors of interpolation methods has great effects on machining quality in NURBS interpolation, but few methods can efficiently eliminate or reduce it to a satisfactory level without sacrificing the computing efficiency at present. In order to solve this problem, a highly accurate interpolation method for NURBS tool paths is proposed. The proposed method can efficiently reduce the feedrate fluctuation by forming a quartic equation with respect to the curve parameter increment, which can be efficiently solved by analytic methods in real time. Theoretically, the proposed method can totally eliminate the feedrate fluctuation for any 2nd degree NURBS curves and can interpolate 3rd degree NURBS curves with minimal feedrate fluctuation. Moreover, a smooth feedrate planning algorithm is proposed to generate smooth tool motion, considering multiple constraints and scheduling errors via an efficient planning strategy. Experiments are conducted to verify the feasibility and applicability of the proposed method. This research presents a novel NURBS interpolation method with not only high accuracy but also satisfactory computational efficiency.
Meshfree Modeling of Munitions Penetration in Soils
2017-04-01
[Figure-list and acronym residue from the report front matter. Recoverable captions: "Nodal smoothing domain for the modified stabilized nonconforming nodal integration" (Figure 2); "Discretization for the..." (Figure 17, truncated). List of acronyms: DEM, discrete element methods; FEM, finite element methods; MSNNI, modified stabilized nonconforming nodal integration; RK (expansion truncated in source).]
NASA Astrophysics Data System (ADS)
Li, Tianfang; Wang, Jing; Wen, Junhai; Li, Xiang; Lu, Hongbing; Hsieh, Jiang; Liang, Zhengrong
2004-05-01
To treat the noise in low-dose x-ray CT projection data more accurately, analysis of the noise properties of the data and development of a corresponding efficient noise treatment method are two major problems to be addressed. In order to obtain an accurate and realistic model to describe the x-ray CT system, we acquired thousands of repeated measurements on different phantoms at several fixed scan angles with a GE high-speed multi-slice spiral CT scanner. The collected data were calibrated and log-transformed by the sophisticated system software, which converts the detected photon energy into sinogram data that satisfies the Radon transform. From the analysis of these experimental data, a nonlinear relation between the mean and variance for each datum of the sinogram was obtained. In this paper, we integrated this nonlinear relation into a penalized likelihood statistical framework for SNR (signal-to-noise ratio)-adaptive smoothing of noise in the sinogram. After the proposed preprocessing, the sinograms were reconstructed with the unapodized FBP (filtered backprojection) method. The resulting images were evaluated quantitatively, in terms of noise uniformity and the noise-resolution tradeoff, in comparison with other noise smoothing methods such as the Hanning filter and the Butterworth filter at different cutoff frequencies. Significant improvements in the noise-resolution tradeoff and noise properties were demonstrated.
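A minimal sketch of this kind of variance-weighted penalized smoothing is given below for a single sinogram row: each datum is weighted by the inverse of a variance predicted from its value, and a roughness penalty is added. The exponential mean-variance relation, the penalty weight beta, and the 1D setting are assumptions for illustration; the paper's measured relation and full penalized-likelihood treatment are not reproduced.

# Minimal sketch: penalized weighted least-squares smoothing of one sinogram row.
import numpy as np

def pwls_smooth_row(y, beta=2000.0):
    """Solve (W + beta * D^T D) x = W y, with W = diag(1/var) and D a first-difference operator."""
    n = y.size
    var = 1e-4 + 1e-3 * np.exp(0.5 * y)                 # assumed mean-variance relation (illustrative)
    W = np.diag(1.0 / var)
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n, k=0)   # first differences
    A = W + beta * D.T @ D
    return np.linalg.solve(A, W @ y)

rng = np.random.default_rng(0)
clean = 4.0 * np.sin(np.linspace(0, np.pi, 256))
noisy = clean + rng.normal(0.0, np.sqrt(1e-4 + 1e-3 * np.exp(0.5 * clean)))
smoothed = pwls_smooth_row(noisy)
print("rmse before/after:",
      np.sqrt(np.mean((noisy - clean)**2)), np.sqrt(np.mean((smoothed - clean)**2)))
# The gain depends on beta and on how well the assumed variance model matches the data.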
2013-01-01
Background There is a rising public and political demand for prospective cancer cluster monitoring. However, there is little empirical evidence on the performance of established cluster detection tests under conditions of small and heterogeneous sample sizes and varying spatial scales, as is the case for most existing population-based cancer registries. Therefore, this simulation study aims to evaluate different cluster detection methods, implemented in the open source environment R, in their ability to identify clusters of lung cancer using real-life data from an epidemiological cancer registry in Germany. Methods Risk surfaces were constructed with two different spatial cluster types, representing a relative risk of RR = 2.0 or of RR = 4.0, in relation to the overall background incidence of lung cancer, separately for men and women. Lung cancer cases were sampled from this risk surface as geocodes using an inhomogeneous Poisson process. The realisations of the cancer cases were analysed within small spatial (census tracts, N = 1983) and within aggregated large spatial scales (communities, N = 78). Subsequently, they were submitted to the cluster detection methods. The test accuracy for cluster location was determined in terms of detection rates (DR), false-positive (FP) rates and positive predictive values. The Bayesian smoothing models were evaluated using ROC curves. Results With a moderate risk increase (RR = 2.0), local cluster tests showed better DR (for both spatial aggregation scales > 0.90) and lower FP rates (both < 0.05) than the Bayesian smoothing methods. When the cluster RR was raised four-fold, the local cluster tests showed better DR with lower FPs only for the small spatial scale. At the large spatial scale, the Bayesian smoothing methods, especially those implementing a spatial neighbourhood, showed a substantially lower FP rate than the cluster tests. However, the risk increases at this scale were mostly diluted by data aggregation. Conclusion High resolution spatial scales seem more appropriate as a data basis for cancer cluster testing and monitoring than the commonly used aggregated scales. We suggest the development of a two-stage approach that combines methods with high detection rates as a first-line screening with methods of higher predictive ability at the second stage. PMID:24314148
Finan, Samantha J.; Swierzbiolek, Brooke; Priest, Naomi; Warren, Narelle
2018-01-01
Background Child mental health problems are now recognised as a key public health concern. Parenting programs have been developed as one solution to reduce children’s risk of developing mental health problems. However, their potential for widespread dissemination is hindered by low parental engagement, which includes intent to enrol, enrolment, and attendance. To increase parental engagement in preventive parenting programs, we need a better understanding of the predictors of engagement, and the strategies that can be used to enhance engagement. Method Employing a PRISMA method, we conducted a systematic review of the predictors of parent engagement and engagement enhancement strategies in preventive parenting programs. Key inclusion criteria were: (1) the intervention is directed primarily at the parent, (2) parent age >18 years, and the article is (3) written in English and (4) published between 2004 and 2016. Stouffer’s method of combining p-values was used to determine whether associations between variables were reliable. Results Twenty-three articles reported a variety of predictors of parental engagement and engagement enhancement strategies. Only one of eleven predictors (child mental health symptoms) demonstrated a reliable association with enrolment (Stouffer’s p < .01). Discussion There was a lack of consistent evidence for predictors of parental engagement. Nonetheless, preliminary evidence suggests that engagement enhancement strategies modelled on theories, such as the Health Belief Model and Theory of Planned Behaviour, may increase parents’ engagement. Systematic review registration PROSPERO CRD42014013664. PMID:29719737
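Stouffer's method, referenced above for judging whether associations were reliable across studies, combines one-sided p-values through their normal quantiles. The sketch below is a generic implementation with invented p-values, not the review's actual data.

# Minimal sketch: Stouffer's method for combining independent one-sided p-values.
import numpy as np
from scipy.stats import norm

def stouffer(p_values, weights=None):
    """z_i = Phi^{-1}(1 - p_i); pooled z = sum(w*z) / sqrt(sum(w^2)); return (z, combined p)."""
    p = np.asarray(p_values, dtype=float)
    w = np.ones_like(p) if weights is None else np.asarray(weights, dtype=float)
    z = norm.isf(p)                               # inverse survival function, Phi^{-1}(1 - p)
    z_combined = np.sum(w * z) / np.sqrt(np.sum(w**2))
    return z_combined, norm.sf(z_combined)

z, p_comb = stouffer([0.04, 0.20, 0.01, 0.30])    # made-up study p-values
print(f"combined z = {z:.2f}, combined p = {p_comb:.4f}")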
An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data.
Fout, N; Ma, Kwan-Liu
2012-12-01
In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
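The switched-prediction idea described above can be illustrated with a toy scheme: for each block of samples, a small set of linear predictors is tried and the one with the lowest residual energy is selected, after which only the predictor index and the residuals would need to be encoded. The predictor set, block size, and signal below are assumptions for illustration; the APE and ACE codecs themselves are not reproduced.

# Minimal sketch: a switched linear-predictor front end for 1D data.
import numpy as np

PREDICTORS = {
    0: lambda x, i: 0.0,                         # constant-zero predictor
    1: lambda x, i: x[i - 1],                    # previous-sample predictor
    2: lambda x, i: 2.0 * x[i - 1] - x[i - 2],   # linear extrapolation
}

def switched_predict(x, block=64):
    """Return (chosen predictor id, residuals) for each block of the 1D signal x."""
    out = []
    for start in range(2, len(x), block):
        idx = np.arange(start, min(start + block, len(x)))
        best = None
        for pid, f in PREDICTORS.items():
            res = np.array([x[i] - f(x, i) for i in idx])
            energy = float(np.sum(res**2))
            if best is None or energy < best[2]:
                best = (pid, res, energy)
        out.append((best[0], best[1]))
    return out

signal = np.cumsum(np.random.default_rng(0).normal(size=1000)).astype(np.float32)
blocks = switched_predict(signal)
print("predictor chosen per block:", [pid for pid, _ in blocks][:8])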
Young, Robin L; Weinberg, Janice; Vieira, Verónica; Ozonoff, Al; Webster, Thomas F
2010-07-19
A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log odds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding that of the spatial scan statistic.
2010-01-01
Background A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. Results This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log odds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. Conclusions The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding that of the spatial scan statistic. PMID:20642827
Røislien, Jo; Lossius, Hans Morten; Kristiansen, Thomas
2015-01-01
Background Trauma is a leading global cause of death. Trauma mortality rates are higher in rural areas, constituting a challenge for quality and equality in trauma care. The aim of the study was to explore population density and transport time to hospital care as possible predictors of geographical differences in mortality rates, and to what extent choice of statistical method might affect the analytical results and accompanying clinical conclusions. Methods Using data from the Norwegian Cause of Death registry, deaths from external causes 1998–2007 were analysed. Norway consists of 434 municipalities, and municipality population density and travel time to hospital care were entered as predictors of municipality mortality rates in univariate and multiple regression models of increasing model complexity. We fitted linear regression models with continuous and categorised predictors, as well as piecewise linear and generalised additive models (GAMs). Models were compared using Akaike's information criterion (AIC). Results Population density was an independent predictor of trauma mortality rates, while the contribution of transport time to hospital care was highly dependent on choice of statistical model. A multiple GAM or piecewise linear model was superior, and similar, in terms of AIC. However, while transport time was statistically significant in multiple models with piecewise linear or categorised predictors, it was not in GAM or standard linear regression. Conclusions Population density is an independent predictor of trauma mortality rates. The added explanatory value of transport time to hospital care is marginal and model-dependent, highlighting the importance of exploring several statistical models when studying complex associations in observational data. PMID:25972600
Predicting the mortality in geriatric patients with dengue fever
Huang, Hung-Sheng; Hsu, Chien-Chin; Ye, Je-Chiuan; Su, Shih-Bin; Huang, Chien-Cheng; Lin, Hung-Jung
2017-01-01
Geriatric patients have high mortality for dengue fever (DF); however, there is no adequate method to predict mortality in geriatric patients. Therefore, we conducted this study to develop a tool in an attempt to address this issue. We conducted a retrospective case–control study in a tertiary medical center during the DF outbreak in Taiwan in 2015. All the geriatric patients (aged ≥65 years) who visited the study hospital between September 1, 2015, and December 31, 2015, were recruited into this study. Variables included demographic data, vital signs, symptoms and signs, comorbidities, living status, laboratory data, and 30-day mortality. We investigated independent mortality predictors by univariate analysis and multivariate logistic regression analysis and then combined these predictors to predict the mortality. A total of 627 geriatric DF patients were recruited, with a mortality rate of 4.3% (27 deaths and 600 survivors). The following 4 independent mortality predictors were identified: severe coma [Glasgow Coma Scale: ≤8; adjusted odds ratio (AOR): 11.36; 95% confidence interval (CI): 1.89–68.19], bedridden (AOR: 10.46; 95% CI: 1.58–69.16), severe hepatitis (aspartate aminotransferase >1000 U/L; AOR: 96.08; 95% CI: 14.11–654.40), and renal failure (serum creatinine >2 mg/dL; AOR: 6.03; 95% CI: 1.50–24.24). When we combined the predictors, we found that the sensitivity, specificity, positive predictive value, and negative predictive value for patients with 1 or more predictors were 70.37%, 88.17%, 21.11%, and 98.51%, respectively. For patients with 2 or more predictors, the respective values were 33.33%, 99.44%, 57.14%, and 98.51%. We developed a new method to help decision making. Among geriatric patients with none of the predictors, the survival rate was 98.51%, and among those with 2 or more predictors, the mortality rate was 57.14%. This method is simple and useful, especially in an outbreak. PMID:28906367
Predicting the mortality in geriatric patients with dengue fever.
Huang, Hung-Sheng; Hsu, Chien-Chin; Ye, Je-Chiuan; Su, Shih-Bin; Huang, Chien-Cheng; Lin, Hung-Jung
2017-09-01
Geriatric patients have high mortality for dengue fever (DF); however, there is no adequate method to predict mortality in geriatric patients. Therefore, we conducted this study to develop a tool in an attempt to address this issue. We conducted a retrospective case-control study in a tertiary medical center during the DF outbreak in Taiwan in 2015. All the geriatric patients (aged ≥65 years) who visited the study hospital between September 1, 2015, and December 31, 2015, were recruited into this study. Variables included demographic data, vital signs, symptoms and signs, comorbidities, living status, laboratory data, and 30-day mortality. We investigated independent mortality predictors by univariate analysis and multivariate logistic regression analysis and then combined these predictors to predict the mortality. A total of 627 geriatric DF patients were recruited, with a mortality rate of 4.3% (27 deaths and 600 survivors). The following 4 independent mortality predictors were identified: severe coma [Glasgow Coma Scale: ≤8; adjusted odds ratio (AOR): 11.36; 95% confidence interval (CI): 1.89-68.19], bedridden (AOR: 10.46; 95% CI: 1.58-69.16), severe hepatitis (aspartate aminotransferase >1000 U/L; AOR: 96.08; 95% CI: 14.11-654.40), and renal failure (serum creatinine >2 mg/dL; AOR: 6.03; 95% CI: 1.50-24.24). When we combined the predictors, we found that the sensitivity, specificity, positive predictive value, and negative predictive value for patients with 1 or more predictors were 70.37%, 88.17%, 21.11%, and 98.51%, respectively. For patients with 2 or more predictors, the respective values were 33.33%, 99.44%, 57.14%, and 98.51%. We developed a new method to help decision making. Among geriatric patients with none of the predictors, the survival rate was 98.51%, and among those with 2 or more predictors, the mortality rate was 57.14%. This method is simple and useful, especially in an outbreak.
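Because the proposed tool amounts to counting how many of the four reported predictors are present, it can be written down directly; the thresholds below follow the abstract, while the function and argument names are invented for illustration.

# Minimal sketch: counting the four reported mortality predictors at the bedside.
def dengue_risk_count(gcs, bedridden, ast_u_per_l, creatinine_mg_dl):
    predictors = [
        gcs <= 8,                  # severe coma
        bedridden,                 # bedridden status
        ast_u_per_l > 1000,        # severe hepatitis
        creatinine_mg_dl > 2.0,    # renal failure
    ]
    return sum(predictors)

# Per the abstract: 0 predictors -> ~98.5% survival; >=2 predictors -> ~57% mortality.
print(dengue_risk_count(gcs=15, bedridden=False, ast_u_per_l=85, creatinine_mg_dl=1.1))   # 0
print(dengue_risk_count(gcs=7, bedridden=True, ast_u_per_l=1500, creatinine_mg_dl=2.4))   # 4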
NASA Astrophysics Data System (ADS)
He, Liping; Lu, Gang; Chen, Dachuan; Li, Wenjun; Lu, Chunsheng
2017-07-01
This paper investigates the three-dimensional (3D) injection molding flow of short fiber-reinforced polymer composites using a smoothed particle hydrodynamics (SPH) simulation method. The polymer melt was modeled as a power law fluid and the fibers were considered as rigid cylindrical bodies. The filling details and fiber orientation in the injection-molding process were studied. The results indicated that the SPH method could effectively predict the order of filling, fiber accumulation, and heterogeneous distribution of fibers. The SPH simulation also showed that fibers were mainly aligned to the flow direction in the skin layer and inclined to the flow direction in the core layer. Additionally, the fiber-orientation state in the simulation was quantitatively analyzed and found to be consistent with the results calculated by conventional tensor methods.
Efficient, adaptive estimation of two-dimensional firing rate surfaces via Gaussian process methods.
Rad, Kamiar Rahnama; Paninski, Liam
2010-01-01
Estimating two-dimensional firing rate maps is a common problem, arising in a number of contexts: the estimation of place fields in hippocampus, the analysis of temporally nonstationary tuning curves in sensory and motor areas, the estimation of firing rates following spike-triggered covariance analyses, etc. Here we introduce methods based on Gaussian process nonparametric Bayesian techniques for estimating these two-dimensional rate maps. These techniques offer a number of advantages: the estimates may be computed efficiently, come equipped with natural errorbars, adapt their smoothness automatically to the local density and informativeness of the observed data, and permit direct fitting of the model hyperparameters (e.g., the prior smoothness of the rate map) via maximum marginal likelihood. We illustrate the method's flexibility and performance on a variety of simulated and real data.
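As a simplified stand-in for the approach described above, the sketch below fits a Gaussian process to binned spike counts at visited 2D positions using scikit-learn, with the smoothness (length scale) chosen by marginal likelihood and predictive standard deviations serving as error bars. The simulated place field and binning are made up, and a generic Gaussian likelihood is used rather than the adaptive point-process machinery developed in the paper.

# Minimal sketch: a 2D firing-rate map via Gaussian process regression on binned counts.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 1.0, size=(400, 2))                                 # visited positions
true_rate = 5.0 + 20.0 * np.exp(-np.sum((xy - 0.6)**2, axis=1) / 0.02)   # a synthetic place field [Hz]
counts = rng.poisson(true_rate * 0.1)                                     # spike counts in 100 ms bins

# Marginal-likelihood optimisation picks the length scale (smoothness) automatically.
kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(xy, counts)

grid = np.stack(np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30)), axis=-1).reshape(-1, 2)
rate_map, rate_sd = gp.predict(grid, return_std=True)                     # estimate plus errorbars
print("peak of estimated map near:", grid[np.argmax(rate_map)])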
Trajectory control of an articulated robot with a parallel drive arm based on splines under tension
NASA Astrophysics Data System (ADS)
Yi, Seung-Jong
Today's industrial robots controlled by mini/micro computers are basically simple positioning devices. The positioning accuracy depends on the mathematical description of the robot configuration to place the end-effector at the desired position and orientation within the workspace, and on following the specified path, which requires a trajectory planner. In addition, the consideration of joint velocity, acceleration, and jerk trajectories is essential for trajectory planning of industrial robots to obtain smooth operation. The newly designed 6 DOF articulated robot with a parallel drive arm mechanism, which permits the joint actuators to be placed in the same horizontal line to reduce the arm inertia and to increase load capacity and stiffness, is selected. First, the forward kinematic and inverse kinematic problems are examined. The forward kinematic equations are successfully derived based on Denavit-Hartenberg notation with independent joint angle constraints. The inverse kinematic problems are solved using the arm-wrist partitioned approach with independent joint angle constraints. Three types of curve fitting methods used in trajectory planning, i.e., polynomial functions of a given degree, cubic spline functions, and cubic spline functions under tension, are compared to select the best possible method to satisfy both smooth joint trajectories and positioning accuracy for a robot trajectory planner. Cubic spline functions under tension were selected for the new trajectory planner. This method is implemented for a 6 DOF articulated robot with a parallel drive arm mechanism to improve the smoothness of the joint trajectories and the positioning accuracy of the manipulator. Also, this approach is compared with existing trajectory planners, 4-3-4 polynomials and cubic spline functions, via circular arc motion simulations. The new trajectory planner using cubic spline functions under tension is implemented in the microprocessor-based robot controller and motors to produce combined arc and straight-line motion. The simulation and experiment show interesting results by demonstrating smooth motion in both acceleration and jerk and significant improvements in positioning accuracy in trajectory planning.
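The kind of smoothness check described above, evaluating velocity, acceleration, and jerk along an interpolated joint trajectory, is easy to sketch with an ordinary cubic spline. SciPy does not provide a spline-under-tension interpolant, so the example below uses CubicSpline with clamped end conditions and made-up via points; it illustrates the evaluation procedure rather than the tensioned splines selected in the work above.

# Minimal sketch: evaluating smoothness of a spline joint trajectory.
import numpy as np
from scipy.interpolate import CubicSpline

knot_times = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # via-point times [s] (made-up)
joint_angles = np.array([0.0, 30.0, 45.0, 40.0, 90.0])    # one joint, degrees (made-up)

spline = CubicSpline(knot_times, joint_angles, bc_type="clamped")   # zero end velocities
t = np.linspace(0.0, 4.0, 401)
pos, vel, acc, jerk = spline(t), spline(t, 1), spline(t, 2), spline(t, 3)

print("max |velocity|     :", np.abs(vel).max())
print("max |acceleration| :", np.abs(acc).max())
print("max |jerk|         :", np.abs(jerk).max())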
Dictionary-based fiber orientation estimation with improved spatial consistency.
Ye, Chuyang; Prince, Jerry L
2018-02-01
Diffusion magnetic resonance imaging (dMRI) has enabled in vivo investigation of white matter tracts. Fiber orientation (FO) estimation is a key step in tract reconstruction and has been a popular research topic in dMRI analysis. In particular, the sparsity assumption has been used in conjunction with a dictionary-based framework to achieve reliable FO estimation with a reduced number of gradient directions. Because image noise can have a deleterious effect on the accuracy of FO estimation, previous works have incorporated spatial consistency of FOs in the dictionary-based framework to improve the estimation. However, because FOs are only indirectly determined from the mixture fractions of dictionary atoms and not modeled as variables in the objective function, these methods do not incorporate FO smoothness directly, and their ability to produce smooth FOs could be limited. In this work, we propose an improvement to Fiber Orientation Reconstruction using Neighborhood Information (FORNI), which we call FORNI+; this method estimates FOs in a dictionary-based framework where FO smoothness is better enforced than in FORNI alone. We describe an objective function that explicitly models the actual FOs and the mixture fractions of dictionary atoms. Specifically, it consists of data fidelity between the observed signals and the signals represented by the dictionary, pairwise FO dissimilarity that encourages FO smoothness, and weighted ℓ1-norm terms that ensure the consistency between the actual FOs and the FO configuration suggested by the dictionary representation. The FOs and mixture fractions are then jointly estimated by minimizing the objective function using an iterative alternating optimization strategy. FORNI+ was evaluated on a simulation phantom, a physical phantom, and real brain dMRI data. In particular, in the real brain dMRI experiment, we have qualitatively and quantitatively evaluated the reproducibility of the proposed method. Results demonstrate that FORNI+ produces FOs with better quality compared with competing methods. Copyright © 2017 Elsevier B.V. All rights reserved.
ProQ3: Improved model quality assessments using Rosetta energy terms
Uziela, Karolis; Shu, Nanjiang; Wallner, Björn; Elofsson, Arne
2016-01-01
Quality assessment of protein models using no other information than the structure of the model itself has been shown to be useful for structure prediction. Here, we introduce two novel methods, ProQRosFA and ProQRosCen, inspired by the state-of-the-art method ProQ2, but using a completely different description of a protein model. ProQ2 uses contacts and other features calculated from a model, while the new predictors are based on Rosetta energies: ProQRosFA uses the full-atom energy function that takes into account all atoms, while ProQRosCen uses the coarse-grained centroid energy function. The two new predictors also include residue conservation and terms corresponding to the agreement of a model with predicted secondary structure and surface area, as in ProQ2. We show that the performance of these predictors is on par with ProQ2 and significantly better than all other model quality assessment programs. Furthermore, we show that by combining the input features from all three predictors, the resulting predictor ProQ3 performs better than any of the individual methods. ProQ3, ProQRosFA and ProQRosCen are freely available both as a webserver and stand-alone programs at http://proq3.bioinfo.se/. PMID:27698390
A single-stage flux-corrected transport algorithm for high-order finite-volume methods
Chaplin, Christopher; Colella, Phillip
2017-05-08
We present a new limiter method for solving the advection equation using a high-order, finite-volume discretization. The limiter is based on the flux-corrected transport algorithm. Here, we modify the classical algorithm by introducing a new computation for solution bounds at smooth extrema, as well as improving the preconstraint on the high-order fluxes. We compute the high-order fluxes via a method-of-lines approach with fourth-order Runge-Kutta as the time integrator. For computing low-order fluxes, we select the corner-transport upwind method due to its improved stability over donor-cell upwind. Several spatial differencing schemes are investigated for the high-order flux computation, including centered-difference and upwind schemes. We show that the upwind schemes perform well on account of the dissipation of high-wavenumber components. The new limiter method retains high-order accuracy for smooth solutions and accurately captures fronts in discontinuous solutions. Further, we need only apply the limiter once per complete time step.
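The sketch below illustrates the method-of-lines structure described above for 1D linear advection: fourth-order Runge-Kutta in time, with interface fluxes formed by blending a low-order upwind flux and a high-order centered flux. The fixed blend factor stands in for the paper's flux-corrected-transport limiter and corner-transport upwind scheme, which are not reproduced here.

```python
import numpy as np

def rhs(u, a, dx, theta):
    """Semi-discrete RHS for u_t + a u_x = 0 on a periodic grid.

    Interface fluxes blend a low-order upwind flux with a high-order
    centered flux: F = F_low + theta * (F_high - F_low).  A fixed blend
    factor is used here instead of a true extremum-preserving limiter.
    """
    u_left = u                # value on the left of interface i+1/2
    u_right = np.roll(u, -1)  # value on the right of interface i+1/2
    f_low = a * u_left                      # upwind flux (a > 0 assumed)
    f_high = 0.5 * a * (u_left + u_right)   # centered flux
    f = f_low + theta * (f_high - f_low)
    return -(f - np.roll(f, 1)) / dx

def rk4_step(u, dt, *args):
    """Classical fourth-order Runge-Kutta step for the semi-discrete system."""
    k1 = rhs(u, *args)
    k2 = rhs(u + 0.5 * dt * k1, *args)
    k3 = rhs(u + 0.5 * dt * k2, *args)
    k4 = rhs(u + dt * k3, *args)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# advect a smooth bump roughly once around a periodic domain
n, a = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-200.0 * (x - 0.5) ** 2)
dt = 0.4 * dx / a
for _ in range(int(1.0 / dt)):
    u = rk4_step(u, dt, a, dx, 0.5)
```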
One shot methods for optimal control of distributed parameter systems 1: Finite dimensional control
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1991-01-01
The efficient numerical treatment of optimal control problems governed by elliptic partial differential equations (PDEs) and systems of elliptic PDEs, where the control is finite dimensional, is discussed. Distributed control as well as boundary control cases are discussed. The main characteristic of the new methods is that they are designed to solve the full optimization problem directly, rather than accelerating a descent method by an efficient multigrid solver for the equations involved. The methods use the adjoint state in order to achieve an efficient smoother and a robust coarsening strategy. The main idea is the treatment of the control variables on appropriate scales, i.e., control variables that correspond to smooth functions are solved for on coarse grids, depending on the smoothness of these functions. Solution of the control problems is achieved at the cost of solving the constraint equations about two to three times (by a multigrid solver). Numerical examples demonstrate the effectiveness of the proposed method in distributed control, pointwise control, and boundary control problems.
Fuzzy neural network technique for system state forecasting.
Li, Dezhi; Wang, Wilson; Ismail, Fathy
2013-10-01
In many system state forecasting applications, the prediction is performed based on multiple datasets, each corresponding to a distinct system condition. The traditional methods dealing with multiple datasets (e.g., vector autoregressive moving average models and neural networks) have some shortcomings, such as limited modeling capability and opaque reasoning operations. To tackle these problems, a novel fuzzy neural network (FNN) is proposed in this paper to effectively extract information from multiple datasets, so as to improve forecasting accuracy. The proposed predictor consists of both autoregressive (AR) nodes and nonlinear nodes; AR models/nodes are used to capture the linear correlation of the datasets, and the nonlinear correlation of the datasets is modeled with nonlinear neuron nodes. A novel particle swarm technique [i.e., the Laplace particle swarm (LPS) method] is proposed to facilitate parameter estimation of the predictor and improve modeling accuracy. The effectiveness of the developed FNN predictor and the associated LPS method is verified by a series of tests related to Mackey-Glass data forecast, exchange rate data prediction, and gear system prognosis. Test results show that the developed FNN predictor and the LPS method can capture the dynamics of multiple datasets effectively and track system characteristics accurately.
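To make the parameter-estimation step concrete, here is a minimal standard global-best particle swarm optimizer applied to a toy AR(2) fitting problem. It is not the Laplace particle swarm variant proposed in the paper, and the inertia and acceleration constants are generic choices.

```python
import numpy as np

def pso(loss, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimizer (standard PSO,
    not the Laplace variant described above)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()
    pbest_val = np.array([loss(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([loss(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# toy usage: fit AR(2) coefficients to a synthetic series by one-step error
rng = np.random.default_rng(1)
y = np.sin(0.3 * np.arange(300)) + 0.05 * rng.standard_normal(300)

def ar2_loss(c):
    pred = c[0] * y[1:-1] + c[1] * y[:-2]
    return float(np.mean((y[2:] - pred) ** 2))

coeffs, err = pso(ar2_loss, dim=2)
```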
Face-based smoothed finite element method for real-time simulation of soft tissue
NASA Astrophysics Data System (ADS)
Mendizabal, Andrea; Bessard Duparc, Rémi; Bui, Huu Phuoc; Paulus, Christoph J.; Peterlik, Igor; Cotin, Stéphane
2017-03-01
In soft tissue surgery, a tumor and other anatomical structures are usually located using the preoperative CT or MR images. However, due to the deformation of the concerned tissues, this information suffers from inaccuracy when employed directly during the surgery. In order to account for these deformations in the planning process, the use of a bio-mechanical model of the tissues is needed. Such models are often designed using the finite element method (FEM), which is, however, computationally expensive, in particular when a high accuracy of the simulation is required. In our work, we propose to use a smoothed finite element method (S-FEM) in the context of modeling of the soft tissue deformation. This numerical technique has been introduced recently to overcome the overly stiff behavior of the standard FEM and to improve the solution accuracy and the convergence rate in solid mechanics problems. In this paper, a face-based smoothed finite element method (FS-FEM) using 4-node tetrahedral elements is presented. We show that in some cases, the method allows for reducing the number of degrees of freedom, while preserving the accuracy of the discretization. The method is evaluated on a simulation of a cantilever beam loaded at the free end and on a simulation of a 3D cube under traction and compression forces. Further, it is applied to the simulation of the brain shift and of the kidney's deformation. The results demonstrate that the method outperforms the standard FEM in a bending scenario and that has similar accuracy as the standard FEM in the simulations of the brain-shift and of the kidney's deformation.
Immersed smoothed finite element method for fluid-structure interaction simulation of aortic valves
NASA Astrophysics Data System (ADS)
Yao, Jianyao; Liu, G. R.; Narmoneva, Daria A.; Hinton, Robert B.; Zhang, Zhi-Qian
2012-12-01
This paper presents a novel numerical method for simulating the fluid-structure interaction (FSI) problems when blood flows over aortic valves. The method uses the immersed boundary/element method and the smoothed finite element method and hence it is termed as IS-FEM. The IS-FEM is a partitioned approach and does not need a body-fitted mesh for FSI simulations. It consists of three main modules: the fluid solver, the solid solver and the FSI force solver. In this work, the blood is modeled as incompressible viscous flow and solved using the characteristic-based-split scheme with FEM for spacial discretization. The leaflets of the aortic valve are modeled as Mooney-Rivlin hyperelastic materials and solved using smoothed finite element method (or S-FEM). The FSI force is calculated on the Lagrangian fictitious fluid mesh that is identical to the moving solid mesh. The octree search and neighbor-to-neighbor schemes are used to detect efficiently the FSI pairs of fluid and solid cells. As an example, a 3D idealized model of aortic valve is modeled, and the opening process of the valve is simulated using the proposed IS-FEM. Numerical results indicate that the IS-FEM can serve as an efficient tool in the study of aortic valve dynamics to reveal the details of stresses in the aortic valves, the flow velocities in the blood, and the shear forces on the interfaces. This tool can also be applied to animal models studying disease processes and may ultimately translate to a new adaptive methods working with magnetic resonance images, leading to improvements on diagnostic and prognostic paradigms, as well as surgical planning, in the care of patients.
Integrated Reconfigurable Intelligent Systems (IRIS) for Complex Naval Systems
2010-02-21
RKF45] and Adams Variable Step-Size Predictor-Corrector methods). While such algorithms naturally are usually used to numerically solve differential...verified by yet another function call. Due to their nature, such methods are referred to as predictor-corrector methods. While computationally expensive...CONTRACT NUMBER N00014-09-C-0394 5b. GRANT NUMBER N/A 5c. PROGRAM ELEMENT NUMBER N/A 6. Author(s) Dr. Dimitri N. Mavris Dr. Yongchang Li 5d
DOE Office of Scientific and Technical Information (OSTI.GOV)
Josephson, Matthew P.; Sikkink, Laura A.; Penheiter, Alan R.
2011-12-16
Highlights: Cardiac myosin regulatory light chain (MYL2) is phosphorylated at S15. Smooth muscle myosin light chain kinase (smMLCK) is a ubiquitous kinase. It is widely believed that MYL2 is a poor substrate for smMLCK. In fact, smMLCK efficiently and rapidly phosphorylates S15 in MYL2. Phosphorylation kinetics were measured by a novel fluorescence method without radioactivity. -- Abstract: Specific phosphorylation of the human ventricular cardiac myosin regulatory light chain (MYL2) modifies the protein at S15. This modification affects MYL2 secondary structure and modulates the Ca2+ sensitivity of contraction in cardiac tissue. Smooth muscle myosin light chain kinase (smMLCK) is a ubiquitous kinase prevalent in uterus and present in other contracting tissues including cardiac muscle. The recombinant 130 kDa (short) smMLCK phosphorylated S15 in MYL2 in vitro. Specific modification of S15 was verified using the direct detection of the phospho group on S15 with mass spectrometry. SmMLCK also specifically phosphorylated myosin regulatory light chain S15 in porcine ventricular myosin and chicken gizzard smooth muscle myosin (S20 in smooth muscle) but failed to phosphorylate the myosin regulatory light chain in rabbit skeletal myosin. Phosphorylation kinetics, measured using a novel fluorescence method eliminating the use of radioactive isotopes, indicate similar Michaelis-Menten V_max and K_M for regulatory light chain S15 phosphorylation rates in MYL2, porcine ventricular myosin, and chicken gizzard myosin. These data demonstrate that smMLCK is a specific and efficient kinase for the in vitro phosphorylation of MYL2, cardiac, and smooth muscle myosin. Whether smMLCK plays a role in cardiac muscle regulation or response to a disease-causing stimulus is unclear, but it should be considered a potentially significant kinase in cardiac tissue on the basis of its specificity, kinetics, and tissue expression.
Optimising predictor domains for spatially coherent precipitation downscaling
NASA Astrophysics Data System (ADS)
Radanovics, S.; Vidal, J.-P.; Sauquet, E.; Ben Daoud, A.; Bontron, G.
2013-10-01
Statistical downscaling is widely used to overcome the scale gap between predictors from numerical weather prediction models or global circulation models and predictands like local precipitation, required for example for medium-term operational forecasts or climate change impact studies. The predictors are considered over a given spatial domain which is rarely optimised with respect to the target predictand location. In this study, an extended version of the growing rectangular domain algorithm is proposed to provide an ensemble of near-optimum predictor domains for a statistical downscaling method. This algorithm is applied to find five-member ensembles of near-optimum geopotential predictor domains for an analogue downscaling method for 608 individual target zones covering France. Results first show that very similar downscaling performances based on the continuous ranked probability score (CRPS) can be achieved by different predictor domains for any specific target zone, demonstrating the need for considering alternative domains in this context of high equifinality. A second result is the large diversity of optimised predictor domains over the country that questions the commonly made hypothesis of a common predictor domain for large areas. The domain centres are mainly distributed following the geographical location of the target location, but there are apparent differences between the windward and the lee side of mountain ridges. Moreover, domains for target zones located in southeastern France are centred more east and south than the ones for target locations on the same longitude. The size of the optimised domains tends to be larger in the southeastern part of the country, while domains with a very small meridional extent can be found in an east-west band around 47° N. Sensitivity experiments finally show that results are rather insensitive to the starting point of the optimisation algorithm except for zones located in the transition area north of this east-west band. Results also appear generally robust with respect to the archive length considered for the analogue method, except for zones with high interannual variability like in the Cévennes area. This study paves the way for defining regions with homogeneous geopotential predictor domains for precipitation downscaling over France, and therefore de facto ensuring the spatial coherence required for hydrological applications.
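The score used to compare candidate predictor domains can be illustrated with the common ensemble estimator of the CRPS; the sketch below is generic, and the forecast values are synthetic rather than taken from the study.

```python
import numpy as np

def crps_ensemble(members, obs):
    """CRPS of an ensemble forecast against a scalar observation, using the
    standard estimator CRPS = mean|X - y| - 0.5 * mean|X_i - X_j|."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# toy usage: a 25-member precipitation forecast (mm) vs. an observed 3.2 mm
forecast = np.random.default_rng(0).gamma(shape=2.0, scale=2.0, size=25)
print(crps_ensemble(forecast, 3.2))
```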
NASA Astrophysics Data System (ADS)
Sauter, T.
2013-12-01
Despite the extensive research on downscaling methods, there is still little consensus about the choice of useful atmospheric predictor variables. Besides the general choice of a proper statistical downscaling model, the selection of an informative predictor set is crucial for the accuracy and stability of the resulting downscaled time series. These requirements must be fulfilled by both the atmospheric variables and the predictor domains in terms of geographical location and spatial extent, to which in general not much attention is paid. However, only a limited number of studies have examined the predictive capability of the predictor domain size or shape, and the question of to what extent the variability of neighboring grid points influences local-scale events. In this study we emphasized the spatial relationships between observed daily precipitation and a selected number of atmospheric variables for the European Arctic. Several nonlinear regression models are used to link the large-scale predictors obtained from reanalysed Weather Research and Forecast model runs to the local-scale observed precipitation. Inferences on the sources of uncertainty are then drawn from variance-based sensitivity measures, which also permit capturing interaction effects between individual predictors. The information is further used to develop more parsimonious downscaling models with only small decreases in accuracy. Individual predictors (without interactions) account for almost 2/3 of the total output variance, while the remaining fraction is solely due to interactions. Neglecting predictor interactions in the screening process will lead to some loss of information. Hence, linear screening methods are insufficient, as they account neither for interactions nor for non-additivity as given by many nonlinear prediction algorithms.
On the Convergence Analysis of the Optimized Gradient Method.
Kim, Donghwan; Fessler, Jeffrey A
2017-01-01
This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
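For reference, here is a sketch of the OGM primary-sequence iteration as it is commonly written in the literature, applied to a toy quadratic. It omits the modified momentum coefficient that the published algorithm applies on the final iteration, so treat it as an approximation rather than the exact method analyzed above.

```python
import numpy as np

def ogm(grad, x0, L, n_iter):
    """Sketch of the optimized gradient method (OGM) primary sequence for
    minimizing a smooth convex f with an L-Lipschitz gradient.  Simplified:
    the special momentum coefficient used on the very last iteration of the
    published algorithm is omitted here."""
    x = y = np.asarray(x0, dtype=float)
    theta = 1.0
    for _ in range(n_iter):
        y_next = x - grad(x) / L                              # gradient step
        theta_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta**2))
        x = (y_next
             + (theta - 1.0) / theta_next * (y_next - y)      # Nesterov-type momentum
             + theta / theta_next * (y_next - x))             # extra OGM momentum term
        y, theta = y_next, theta_next
    return x

# toy usage: minimize the convex quadratic 0.5 * x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
L = np.linalg.eigvalsh(A).max()                 # Lipschitz constant of the gradient
x_star = ogm(lambda x: A @ x - b, np.zeros(2), L, 100)
```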
NASA Astrophysics Data System (ADS)
Huang, Rui; Jin, Chunhua; Mei, Ming; Yin, Jingxue
2018-01-01
This paper deals with the existence and stability of traveling wave solutions for a degenerate reaction-diffusion equation with time delay. The degeneracy of spatial diffusion together with the effect of time delay causes essential difficulties for the existence of the traveling waves and their stability. In order to treat this case, we first show the existence of smooth- and sharp-type traveling wave solutions in the case of c ≥ c^* for the degenerate reaction-diffusion equation without delay, where c^* > 0 is the critical wave speed of smooth traveling waves. Then, as a small perturbation, we obtain the existence of the smooth non-critical traveling waves for the degenerate diffusion equation with small time delay τ > 0. Furthermore, we prove the global existence and uniqueness of the C^{α,β}-solution to the time-delayed degenerate reaction-diffusion equation via compactness analysis. Finally, by the weighted energy method, we prove that the smooth non-critical traveling wave is globally stable in the weighted L^1-space. The exponential convergence rate is also derived.
Yin, Anlin; Bowlin, Gary L.; Luo, Rifang; Zhang, Xingdong; Wang, Yunbing; Mo, Xiumei
2016-01-01
The construction of a smooth muscle layer for blood vessels through the electrospinning method plays a key role in vascular tissue engineering. However, smooth muscle cell (SMC) penetration into the electrospun graft to form a smooth muscle layer is limited due to the dense packing of fibers and the lack of inducing factors. In this paper, a silk fibroin/poly(L-lactide-ε-caprolactone) (SF/PLLA-CL) vascular graft loaded with platelet-rich growth factor (PRGF) was fabricated by electrospinning. The in vitro results showed that SMCs cultured in the graft grew fast, and the incorporation of PRGF could induce deeper SMC infiltration compared with the SF/PLLA-CL graft alone. Mechanical property measurements showed that the PRGF-incorporated graft had proper tensile stress, suture retention strength, burst pressure and compliance which could match the demands of a native blood vessel. The successful fabrication of a PRGF-incorporated SF/PLLA-CL graft that induces fast SMC growth and strong penetration into the graft has important applications for tissue-engineered blood vessels. PMID:27482466
Optical induction of muscle contraction at the tissue scale through intrinsic cellular amplifiers.
Yoon, Jonghee; Choi, Myunghwan; Ku, Taeyun; Choi, Won Jong; Choi, Chulhee
2014-08-01
The smooth muscle cell is the principal component responsible for involuntary control of visceral organs, including vascular tonicity, secretion, and sphincter regulation. It is known that the neurotransmitters released from nerve endings increase the intracellular Ca(2+) level in smooth muscle cells followed by muscle contraction. We herein report that femtosecond laser pulses focused on the diffraction-limited volume can induce intracellular Ca(2+) increases in the irradiated smooth muscle cell without neurotransmitters, and locally increased intracellular Ca(2+) levels are amplified by calcium-induced calcium-releasing mechanisms through the ryanodine receptor, a Ca(2+) channel of the endoplasmic reticulum. The laser-induced Ca(2+) increases propagate to adjacent cells through gap junctions. Thus, ultrashort-pulsed lasers can induce smooth muscle contraction by controlling Ca(2+), even with optical stimulation of the diffraction-limited volume. This optical method, which leads to reversible and reproducible muscle contraction, can be used in research into muscle dynamics, neuromuscular disease treatment, and nanorobot control. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Moderation analysis with missing data in the predictors.
Zhang, Qian; Wang, Lijuan
2017-12-01
The most widely used statistical model for conducting moderation analysis is the moderated multiple regression (MMR) model. In MMR modeling, missing data could pose a challenge, mainly because the interaction term is a product of two or more variables and thus is a nonlinear function of the involved variables. In this study, we consider a simple MMR model, where the effect of the focal predictor X on the outcome Y is moderated by a moderator U. The primary interest is to find ways of estimating and testing the moderation effect with the existence of missing data in X. We mainly focus on cases when X is missing completely at random (MCAR) and missing at random (MAR). Three methods are compared: (a) Normal-distribution-based maximum likelihood estimation (NML); (b) Normal-distribution-based multiple imputation (NMI); and (c) Bayesian estimation (BE). Via simulations, we found that NML and NMI could lead to biased estimates of moderation effects under MAR missingness mechanism. The BE method outperformed NMI and NML for MMR modeling with missing data in the focal predictor, missingness depending on the moderator and/or auxiliary variables, and correctly specified distributions for the focal predictor. In addition, more robust BE methods are needed in terms of the distribution mis-specification problem of the focal predictor. An empirical example was used to illustrate the applications of the methods with a simple sensitivity analysis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
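A minimal moderated-regression fit, assuming pandas and statsmodels are available: the interaction term X:U carries the moderation effect. The sketch uses synthetic data with MCAR missingness and a listwise-deletion OLS fit, i.e., the simple baseline rather than the Bayesian estimator the study recommends.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with a moderation effect and X missing completely at random.
rng = np.random.default_rng(0)
n = 500
U = rng.normal(size=n)
X = rng.normal(size=n)
Y = 0.5 + 0.4 * X + 0.3 * U + 0.25 * X * U + rng.normal(scale=1.0, size=n)
X[rng.random(n) < 0.2] = np.nan        # 20% MCAR missingness in the focal predictor

df = pd.DataFrame({"Y": Y, "X": X, "U": U})
# Complete-case moderated multiple regression: Y ~ X + U + X:U.
# (A listwise-deletion baseline only; not the Bayesian approach favored above.)
fit = smf.ols("Y ~ X * U", data=df.dropna()).fit()
print(fit.params)   # the X:U coefficient estimates the moderation effect
```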
Kargacin, G J; Cooke, P H; Abramson, S B; Fay, F S
1989-04-01
To study the organization of the contractile apparatus in smooth muscle and its behavior during shortening, the movement of dense bodies in contracting saponin skinned, isolated cells was analyzed from digital images collected at fixed time intervals. These cells were optically lucent so that punctate structures, identified immunocytochemically as dense bodies, were visible in them with the phase contrast microscope. Methods were adapted and developed to track the bodies and to study their relative motion. Analysis of their tracks or trajectories indicated that the bodies did not move passively as cells shortened and that nearby bodies often had similar patterns of motion. Analysis of the relative motion of the bodies indicated that some bodies were structurally linked to one another or constrained so that the distance between them remained relatively constant during contraction. Such bodies tended to fall into laterally oriented, semirigid groups found at approximately 6-microns intervals along the cell axis. Other dense bodies moved rapidly toward one another axially during contraction. Such bodies were often members of separate semirigid groups. This suggests that the semirigid groups of dense bodies in smooth muscle cells may provide a framework for the attachment of the contractile structures to the cytoskeleton and the cell surface and indicates that smooth muscle may be more well-ordered than previously thought. The methods described here for the analysis of the motion of intracellular structures should be directly applicable to the study of motion in other cell types.
Fried, Itzhak; Koch, Christof
2014-01-01
Peristimulus time histograms are a widespread form of visualizing neuronal responses. Kernel convolution methods transform these histograms into a smooth, continuous probability density function. This provides an improved estimate of a neuron's actual response envelope. We here develop a classifier, called the h-coefficient, to determine whether time-locked fluctuations in the firing rate of a neuron should be classified as a response or as random noise. Unlike previous approaches, the h-coefficient takes advantage of the more precise response envelope estimation provided by the kernel convolution method. The h-coefficient quantizes the smoothed response envelope and calculates the probability of a response of a given shape to occur by chance. We tested the efficacy of the h-coefficient in a large data set of Monte Carlo simulated smoothed peristimulus time histograms with varying response amplitudes, response durations, trial numbers, and baseline firing rates. Across all these conditions, the h-coefficient significantly outperformed more classical classifiers, with a mean false alarm rate of 0.004 and a mean hit rate of 0.494. We also tested the h-coefficient's performance in a set of neuronal responses recorded in humans. The algorithm behind the h-coefficient provides various opportunities for further adaptation and the flexibility to target specific parameters in a given data set. Our findings confirm that the h-coefficient can provide a conservative and powerful tool for the analysis of peristimulus time histograms with great potential for future development. PMID:25475352
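The kernel-convolution step that the h-coefficient builds on can be sketched as follows: bin spikes across trials into a PSTH and smooth it with a Gaussian kernel using SciPy. The spike trains, bin width, and kernel width here are illustrative, and the h-coefficient classification itself is not implemented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smoothed_psth(spike_times, trials, t_start, t_stop, bin_ms=1.0, sigma_ms=10.0):
    """Kernel-smoothed peristimulus time histogram: bin spikes across trials,
    convolve with a Gaussian kernel, and return a firing-rate estimate in Hz."""
    bin_s = bin_ms / 1000.0
    edges = np.arange(t_start, t_stop + bin_s, bin_s)
    counts, _ = np.histogram(np.concatenate(spike_times), bins=edges)
    rate = counts / (trials * bin_s)                          # raw PSTH in spikes/s
    return gaussian_filter1d(rate, sigma=sigma_ms / bin_ms), edges[:-1]

# toy usage: 20 trials of uniformly scattered spikes over 0-500 ms
rng = np.random.default_rng(0)
spikes = [np.sort(rng.uniform(0.0, 0.5, 12)) for _ in range(20)]
rate, t = smoothed_psth(spikes, trials=20, t_start=0.0, t_stop=0.5)
```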
Inventory control of raw material using silver meal heuristic method in PR. Trubus Alami Malang
NASA Astrophysics Data System (ADS)
Ikasari, D. M.; Lestari, E. R.; Prastya, E.
2018-03-01
The purpose of this study was to compare the total inventory cost calculated using the method applied by PR. Trubus Alami with that of the Silver Meal Heuristic (SMH) method. The study started by forecasting the cigarette demand from July 2016 to June 2017 (48 weeks) using the additive decomposition forecasting method. Additive decomposition was used because it has the lowest Mean Absolute Deviation (MAD) and Mean Squared Deviation (MSD) compared to other methods such as multiplicative decomposition, moving average, single exponential smoothing, and double exponential smoothing. The forecasting results were then converted into raw material needs and further processed with the SMH method to obtain the inventory cost. As expected, the results show that the order frequency using the SMH method was smaller than that of the method applied by Trubus Alami. This affected the total inventory cost. The results suggest that using the SMH method gave a 29.41% lower inventory cost, a difference of IDR 21,290,622. The findings therefore indicate that PR. Trubus Alami should apply the SMH method if the company wants to reduce its total inventory cost.
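A sketch of the Silver-Meal heuristic itself, which chooses each order horizon by extending it while the average cost per period keeps falling; the demands, setup cost, and holding cost below are invented for illustration and are not the figures from PR. Trubus Alami.

```python
def silver_meal(demand, setup_cost, holding_cost):
    """Silver-Meal heuristic lot sizing.

    demand       : list of per-period requirements
    setup_cost   : fixed cost per order (K)
    holding_cost : cost to carry one unit for one period (h)
    Returns a list of (order_period, order_quantity).
    """
    orders, t, n = [], 0, len(demand)
    while t < n:
        best_T, total = 1, setup_cost
        prev_avg = setup_cost              # average cost per period for T = 1
        for T in range(2, n - t + 1):
            total += holding_cost * (T - 1) * demand[t + T - 1]
            avg = total / T
            if avg > prev_avg:             # stop extending when average cost rises
                break
            best_T, prev_avg = T, avg
        orders.append((t, sum(demand[t:t + best_T])))
        t += best_T
    return orders

# toy usage: 8 weekly requirements, K = 100, h = 2 per unit per week
print(silver_meal([40, 60, 30, 20, 80, 10, 50, 70], setup_cost=100, holding_cost=2))
```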
Spectrum of Lyapunov exponents of non-smooth dynamical systems of integrate-and-fire type.
Zhou, Douglas; Sun, Yi; Rangan, Aaditya V; Cai, David
2010-04-01
We discuss how to characterize long-time dynamics of non-smooth dynamical systems, such as integrate-and-fire (I&F) like neuronal network, using Lyapunov exponents and present a stable numerical method for the accurate evaluation of the spectrum of Lyapunov exponents for this large class of dynamics. These dynamics contain (i) jump conditions as in the firing-reset dynamics and (ii) degeneracy such as in the refractory period in which voltage-like variables of the network collapse to a single constant value. Using the networks of linear I&F neurons, exponential I&F neurons, and I&F neurons with adaptive threshold, we illustrate our method and discuss the rich dynamics of these networks.
NASA Astrophysics Data System (ADS)
Wang, H. P.; Guan, Y. C.; Zheng, H. Y.
2017-12-01
Rough surface features induced by laser irradiation have been a challenge for the fabrication of micro/nano scale features. In this work, we propose a hybrid ultrasonic vibration polishing method to improve the surface quality of microcraters produced by femtosecond laser irradiation on cemented carbide. The laser-induced rough surfaces are significantly smoothed after ultrasonic vibration polishing due to the strong collision effect of diamond particles on the surfaces. 3D morphology, SEM, and AFM analyses have been conducted to characterize surface morphology and topography. Results indicate that a minimal surface roughness of Ra 7.60 nm has been achieved on the polished surfaces. The fabrication of microcraters with smooth surfaces is applicable to molding processes for mass production of micro-optical components.
NASA Astrophysics Data System (ADS)
Pérez-Huerta, J. S.; Ariza-Flores, D.; Castro-García, R.; Mochán, W. L.; Ortiz, G. P.; Agarwal, V.
2018-04-01
We report the reflectivity of one-dimensional finite and semi-infinite photonic crystals, computed through coupling to Bloch modes (BM) and through a transfer matrix method (TMM), and their comparison to the experimental spectral line shapes of porous silicon (PS) multilayer structures. Both methods reproduce a forbidden photonic bandgap (PBG), but slowly converging oscillations are observed in the TMM as the number of layers increases to infinity, while smooth, converged behavior is obtained with BM. The experimental reflectivity spectra are in good agreement with the TMM results for multilayer structures with a small number of periods. However, for structures with a large number of periods, the measured spectral line shapes exhibit better agreement with the smooth behavior predicted by BM.
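The transfer matrix calculation referred to above can be sketched at normal incidence with the standard characteristic-matrix formulation; the quarter-wave stack below uses generic refractive indices rather than the porous-silicon parameters of the experiment.

```python
import numpy as np

def multilayer_reflectivity(n_layers, d_layers, n_in, n_sub, wavelengths):
    """Normal-incidence reflectivity of a 1D multilayer via the standard
    characteristic (transfer) matrix method.  n_layers, d_layers are the
    refractive indices and thicknesses of the stack (incident side first);
    n_in and n_sub are the incident and substrate indices."""
    R = []
    for lam in wavelengths:
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2.0 * np.pi * n * d / lam        # layer phase thickness
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])
        r = (n_in * B - C) / (n_in * B + C)          # amplitude reflection coefficient
        R.append(abs(r) ** 2)
    return np.array(R)

# toy usage: a 10-period quarter-wave stack designed for 800 nm
lam0, nH, nL = 800.0, 2.0, 1.4
stack_n = [nH, nL] * 10
stack_d = [lam0 / (4 * nH), lam0 / (4 * nL)] * 10
lams = np.linspace(500.0, 1100.0, 301)
R = multilayer_reflectivity(stack_n, stack_d, n_in=1.0, n_sub=1.5, wavelengths=lams)
```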
NASA Astrophysics Data System (ADS)
Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.
2017-08-01
The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step-ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one-step-ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a non-ensemble covariance compression scheme that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step-ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one-step-ahead smoothing and non-ensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
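For orientation, the basic linear Kalman filter recursion that filters of this family extend is sketched below; the sCSKF's one-step-ahead smoothing and covariance compression are not shown, and the toy model matrices are placeholders.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a textbook linear Kalman filter.
    (Illustrative only; the sCSKF adds one-step-ahead smoothing and
    covariance compression on top of this basic recursion.)"""
    # Predict state and covariance forward with the model F and noise Q.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with observation z through the measurement operator H.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy usage: track a 1D state observed directly with noise
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[0.25]])
x, P = np.array([0.0]), np.array([[1.0]])
for z in [0.9, 1.1, 1.0, 1.2]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```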
Parmar, Nina; Ahmadi, Raheleh
2015-01-01
Muscle degeneration is a prevalent disease, particularly in aging societies where it has a huge impact on quality of life and incurs colossal health costs. Suitable donor sources of smooth muscle cells are limited and minimally invasive therapeutic approaches are sought that will augment muscle volume by delivering cells to damaged or degenerated areas of muscle. For the first time, we report the use of highly porous microcarriers produced using thermally induced phase separation (TIPS) to expand and differentiate adipose-derived mesenchymal stem cells (AdMSCs) into smooth muscle-like cells in a format that requires minimal manipulation before clinical delivery. AdMSCs readily attached to the surface of TIPS microcarriers and proliferated while maintained in suspension culture for 12 days. Switching the incubation medium to a differentiation medium containing 2 ng/mL transforming growth factor beta-1 resulted in a significant increase in both the mRNA and protein expression of cell contractile apparatus components caldesmon, calponin, and myosin heavy chains, indicative of a smooth muscle cell-like phenotype. Growth of smooth muscle cells on the surface of the microcarriers caused no change to the integrity of the polymer microspheres making them suitable for a cell-delivery vehicle. Our results indicate that TIPS microspheres provide an ideal substrate for the expansion and differentiation of AdMSCs into smooth muscle-like cells as well as a microcarrier delivery vehicle for the attached cells ready for therapeutic applications. PMID:25205072
Yuzuriha, Shunsuke; Matsuo, Kiyoshi; Ban, Ryokuya; Yano, Shiharu; Moriizumi, Tetsuji
2012-01-01
Background: We previously reported that the supratarsal Mueller's muscle is innervated by both sympathetic efferent fibers and trigeminal proprioceptive afferent fibers, which function as mechanoreceptors-inducing reflexive contractions of both the levator and frontalis muscles. Controversy still persists regarding the role of the mechanoreceptors in Mueller's muscle; therefore, we clinically and histologically investigated Mueller's muscle. Methods: We evaluated the role of phenylephrine administration into the upper fornix in contraction of Mueller's smooth muscle fibers and how intraoperative stretching of Mueller's muscle alters the degree of eyelid retraction in 20 patients with aponeurotic blepharoptosis. In addition, we stained Mueller's muscle in 7 cadavers with antibodies against α-smooth muscle actin, S100, tyrosine hydroxylase, c-kit, and connexin 43. Results: Maximal eyelid retraction occurred approximately 3.8 minutes after administration of phenylephrine and prolonged eyelid retraction for at least 20 minutes after administration. Intraoperative stretching of Mueller's muscle increased eyelid retraction due to its reflexive contraction. The tyrosine hydroxylase antibody sparsely stained postganglionic sympathetic nerve fibers, whereas the S100 and c-kit antibodies densely stained the interstitial cells of Cajal (ICCs) among Mueller's smooth muscle fibers. A connexin 43 antibody failed to stain Mueller's muscle. Conclusions: A contractile network of ICCs may mediate neurotransmission within Mueller's multiunit smooth muscle fibers that are sparsely innervated by postganglionic sympathetic fibers. Interstitial cells of Cajal may also serve as mechanoreceptors that reflexively contract Mueller's smooth muscle fibers, forming intimate associations with intramuscular trigeminal proprioceptive fibers to induce reflexive contraction of the levator and frontalis muscles. PMID:22359687
Spradley, Jackson P; Pampush, James D; Morse, Paul E; Kay, Richard F
2017-05-01
Dirichlet normal energy (DNE) is a metric of surface topography that has been used to evaluate the relationship between the surface complexity of primate cheek teeth and dietary categories. This study examines the effects of different 3D mesh retriangulation protocols on DNE. We examine how different protocols influence the DNE of a simple geometric shape, a hemisphere, to gain a more thorough understanding than can be achieved by investigating a complex biological surface such as a tooth crown. We calculate DNE on 3D surface meshes of hemispheres and on primate molars subjected to various retriangulation protocols, including smoothing algorithms, smoothing amounts, target face counts, and criteria for boundary face exclusion. Software used includes R, MorphoTester, Avizo, and MeshLab. DNE was calculated using the R package "molaR." In all cases, smoothing as performed in Avizo sharply decreases DNE initially, after which DNE becomes stable. Using a broader boundary exclusion criterion or performing additional smoothing (using "mesh fairing" methods) further decreases DNE. Increasing the mesh face count also results in increased DNE on tooth surfaces. Different retriangulation protocols yield different DNE values for the same surfaces, and should not be combined in meta-analyses. Increasing face count will capture surface microfeatures, but at the expense of computational speed. More aggressive smoothing is more likely to alter the essential geometry of the surface. A protocol is proposed that limits potential artifacts created during surface production while preserving pertinent features on the occlusal surface. © 2017 Wiley Periodicals, Inc.
Solutions to inverse plume in a crosswind problem using a predictor - corrector method
NASA Astrophysics Data System (ADS)
Vanderveer, Joseph; Jaluria, Yogesh
2013-11-01
An investigation of minimalist solutions to the inverse convection problem of a plume in a crosswind led to the development of a predictor-corrector method. The inverse problem is to predict the strength and location of the plume with respect to a select few downstream sampling points. This is accomplished with the help of two numerical simulations of the domain at differing source strengths, allowing the generation of two inverse interpolation functions. These functions in turn are utilized by the predictor step to acquire the plume strength. Finally, the same interpolation functions, with corrections from the plume strength, are used to solve for the plume location. Through optimization of the relative location of the sampling points, the minimum number of samples for accurate predictions is reduced to two for the plume strength and three for the plume location. After the optimization, the predictor-corrector method demonstrates global uniqueness of the inverse solution for all test cases. The solution error is less than 1% for both plume strength and plume location. The basic approach could be extended to other inverse convection transport problems, particularly those encountered in environmental flows.
NASA Astrophysics Data System (ADS)
Sandalski, Stou
Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named
The Highly Adaptive Lasso Estimator
Benkeser, David; van der Laan, Mark
2017-01-01
Estimation of a regression function is a common goal of statistical learning. We propose a novel nonparametric regression estimator that, in contrast to many existing methods, does not rely on local smoothness assumptions nor is it constructed using local smoothing techniques. Instead, our estimator respects global smoothness constraints by virtue of falling in a class of right-hand continuous functions with left-hand limits that have variation norm bounded by a constant. Using empirical process theory, we establish a fast minimal rate of convergence of our proposed estimator and illustrate how such an estimator can be constructed using standard software. In simulations, we show that the finite-sample performance of our estimator is competitive with other popular machine learning techniques across a variety of data generating mechanisms. We also illustrate competitive performance in real data examples using several publicly available data sets. PMID:29094111
Post-Dryout Heat Transfer to a Refrigerant Flowing in Horizontal Evaporator Tubes
NASA Astrophysics Data System (ADS)
Mori, Hideo; Yoshida, Suguru; Kakimoto, Yasushi; Ohishi, Katsumi; Fukuda, Kenichi
Studies of the post-dryout heat transfer were made based on the experimental data for HFC-134a flowing in horizontal smooth and spirally grooved (micro-fin) tubes, and the characteristics of the post-dryout heat transfer were clarified. The heat transfer coefficient at medium and high mass flow rates in the smooth tube was lower than the single-phase heat transfer coefficient of the superheated vapor flow, of which the mass flow rate was given on the assumption that the flow was in thermodynamic equilibrium. A prediction method for the post-dryout heat transfer coefficient was developed to reproduce the measurements satisfactorily for the smooth tube. The post-dryout heat transfer in the micro-fin tube can be regarded approximately as superheated vapor single-phase heat transfer.
Smooth and vertical facet formation for AlGaN-based deep-UV laser diodes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogart, Katherine Huderle Andersen; Shul, Randy John; Stevens, Jeffrey
2008-10-01
Using a two-step method of plasma and wet chemical etching, we demonstrate smooth, vertical facets for use in AlxGa1-xN-based deep-ultraviolet laser-diode heterostructures where x = 0 to 0.5. Optimization of plasma-etching conditions included increasing both temperature and radiofrequency (RF) power to achieve a facet angle of 5 deg from vertical. Subsequent etching in AZ400K developer was investigated to reduce the facet surface roughness and improve facet verticality. The resulting combined processes produced improved facet sidewalls with an average angle of 0.7 deg from vertical and less than 2-nm root-mean-square (RMS) roughness, yielding an estimated reflectivity greater than 95% of that of a perfectly smooth and vertical facet.
NASA Astrophysics Data System (ADS)
Chaljub, Emmanuel; Maufroy, Emeline; Moczo, Peter; Kristek, Jozef; Hollender, Fabrice; Bard, Pierre-Yves; Priolo, Enrico; Klin, Peter; de Martin, Florent; Zhang, Zhenguo; Zhang, Wei; Chen, Xiaofei
2015-04-01
Differences between 3-D numerical predictions of earthquake ground motion in the Mygdonian basin near Thessaloniki, Greece, led us to define four canonical stringent models derived from the complex realistic 3-D model of the Mygdonian basin. Sediments atop an elastic bedrock are modelled in the 1D-sharp and 1D-smooth models using three homogeneous layers and smooth velocity distribution, respectively. The 2D-sharp and 2D-smooth models are extensions of the 1-D models to an asymmetric sedimentary valley. In all cases, 3-D wavefields include strongly dispersive surface waves in the sediments. We compared simulations by the Fourier pseudo-spectral method (FPSM), the Legendre spectral-element method (SEM) and two formulations of the finite-difference method (FDM-S and FDM-C) up to 4 Hz. The accuracy of individual solutions and level of agreement between solutions vary with type of seismic waves and depend on the smoothness of the velocity model. The level of accuracy is high for the body waves in all solutions. However, it strongly depends on the discrete representation of the material interfaces (at which material parameters change discontinuously) for the surface waves in the sharp models. An improper discrete representation of the interfaces can cause inaccurate numerical modelling of surface waves. For all the numerical methods considered, except SEM with mesh of elements following the interfaces, a proper implementation of interfaces requires definition of an effective medium consistent with the interface boundary conditions. An orthorhombic effective medium is shown to significantly improve accuracy and preserve the computational efficiency of modelling. The conclusions drawn from the analysis of the results of the canonical cases greatly help to explain differences between numerical predictions of ground motion in realistic models of the Mygdonian basin. We recommend that any numerical method and code that is intended for numerical prediction of earthquake ground motion should be verified through stringent models that would make it possible to test the most important aspects of accuracy.
Dynamic equilibration of airway smooth muscle contraction during physiological loading.
Latourelle, Jeanne; Fabry, Ben; Fredberg, Jeffrey J
2002-02-01
Airway smooth muscle contraction is the central event in acute airway narrowing in asthma. Most studies of isolated muscle have focused on statically equilibrated contractile states that arise from isometric or isotonic contractions. It has recently been established, however, that muscle length is determined by a dynamically equilibrated state of the muscle in which small tidal stretches associated with the ongoing action of breathing act to perturb the binding of myosin to actin. To further investigate this phenomenon, we describe in this report an experimental method for subjecting isolated muscle to a dynamic microenvironment designed to closely approximate that experienced in vivo. Unlike previous methods that used either time-varying length control, force control, or time-invariant auxotonic loads, this method uses transpulmonary pressure as the controlled variable, with both muscle force and muscle length free to adjust as they would in vivo. The method was implemented by using a servo-controlled lever arm to load activated airway smooth muscle strips with transpulmonary pressure fluctuations of increasing amplitude, simulating the action of breathing. The results are not consistent with classical ideas of airway narrowing, which rest on the assumption of a statically equilibrated contractile state; they are consistent, however, with the theory of perturbed equilibria of myosin binding. This experimental method will allow for quantitative experimental evaluation of factors that were previously outside of experimental control, including sensitivity of muscle length to changes of tidal volume, changes of lung volume, shape of the load characteristic, loss of parenchymal support and inflammatory thickening of airway wall compartments.
Pareto Tracer: a predictor-corrector method for multi-objective optimization problems
NASA Astrophysics Data System (ADS)
Martín, Adanay; Schütze, Oliver
2018-03-01
This article proposes a novel predictor-corrector (PC) method for the numerical treatment of multi-objective optimization problems (MOPs). The algorithm, Pareto Tracer (PT), is capable of performing a continuation along the set of (local) solutions of a given MOP with k objectives, and can cope with equality and box constraints. Additionally, the first steps towards a method that manages general inequality constraints are also introduced. The properties of PT are first discussed theoretically and later numerically on several examples.
2012-09-03
practice to solve these initial value problems. Additionally, the predictor/corrector methods are combined with adaptive stepsize and adaptive ...for implementing a numerical path tracking algorithm is to decide which predictor/corrector method to employ, how large to take the step ∆t, and what...the endgame algorithm. Output: A steady state solution. Set ǫ = 1; while ǫ >= ǫ_end, do: set the stepsize ∆ǫ by using the adaptive stepsize control algorithm
Profile Optimization Method for Robust Airfoil Shape Optimization in Viscous Flow
NASA Technical Reports Server (NTRS)
Li, Wu
2003-01-01
Simulation results obtained by using FUN2D for robust airfoil shape optimization in transonic viscous flow are included to show the potential of the profile optimization method for generating fairly smooth optimal airfoils with no off-design performance degradation.
NASA Technical Reports Server (NTRS)
Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.
2013-01-01
Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters adjusted for each season allow quantifying shifts in the timing of seasonal phenology and inter-annual variations in magnitude as compared to the average climatology. CACAO was assessed first over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Then, performances were analyzed over actual satellite LAI products derived from the AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods, the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT, revealed that CACAO achieved better performances for smoothing AVHRR time series characterized by a high level of noise and frequent missing observations. The resulting smoothed time series captures the vegetation dynamics well and shows no gaps, as compared to the 50-60% of data still missing after AG or SG reconstructions. Results of simulation experiments as well as confrontation with actual AVHRR time series indicate that the proposed CACAO method is more robust to noise and missing data than the AG and SG methods for phenology extraction.
The investigation of serum vaspin level in atherosclerotic coronary artery disease.
Kobat, Mehmet Ali; Celik, Ahmet; Balin, Mehmet; Altas, Yakup; Baydas, Adil; Bulut, Musa; Aydin, Suleyman; Dagli, Necati; Yavuzkir, Mustafa Ferzeyn; Ilhan, Selcuk
2012-04-01
It has been speculated that adipocytokines originating from fatty tissue may play a role in the pathogenesis of atherosclerosis. These adipocytokines may alter vascular homeostasis by affecting endothelial cells, arterial smooth muscle cells, and macrophages. Vaspin is a newly described member of the adipocytokine family. We aimed to investigate whether the plasma vaspin level has any predictive value in coronary artery disease (CAD). Forty patients who had at least one vessel with ≥70% stenosis demonstrated angiographically and 40 subjects with normal coronary anatomy were included in the study. Vaspin levels were measured by ELISA from serum obtained by centrifugation of blood and stored at -20 °C. The height, weight, and body mass index of patients were measured. Biochemical parameters including total cholesterol, low density lipoprotein, high density lipoprotein, creatinine, sodium, potassium, hemoglobin, uric acid, and fasting glucose were also measured. Biochemical marker levels were similar in both groups. Serum vaspin levels were significantly lower in CAD patients than in the control group (256 ± 219 pg/ml vs. 472 ± 564 pg/ml, respectively; P < 0.02). Besides this, the serum vaspin level was lower in control subjects with high systolic blood pressure. Serum vaspin levels were found to be significantly lower in patients with CAD than in age-matched subjects with normal coronary anatomy. Vaspin may be used as a predictor of CAD. Keywords: coronary artery disease; vaspin; adipokine.
Lowe, James
2018-01-01
High reactivity and the absence of harmful residues make ozone an effective disinfectant for farm hygiene and biosecurity. Our objectives were therefore to (1) characterize the killing capacity of aqueous and gaseous ozone at different operational conditions against dairy cattle manure-based pathogens (MBP) contaminating different surfaces (plastic, metal, nylon, rubber, and wood); and (2) determine the effect of microbial load on the killing capacity of aqueous ozone. In a crossover design, 14 strips of each material were randomly assigned into 3 groups: treatment (n = 6), positive control (n = 6), and negative control (n = 2). The strips were soaked in dairy cattle manure with an inoculum level of 10⁷-10⁸ for 60 minutes. The treatment strips were exposed to aqueous ozone at 2, 4, and 9 ppm and gaseous ozone at 1 and 9 ppm for 2, 4, and 8 minutes of exposure. 3M™ Petrifilm™ rapid aerobic count plates and a plate reader were used for bacterial culture. On smooth surfaces, plastic and metal, aqueous ozone at 4 ppm reduced MBP to a safe level (≥5-log10) within 2 minutes (6.1 and 5.1-log10, respectively). However, gaseous ozone at 9 ppm for 4 minutes inactivated only 3.3-log10 of MBP. Aqueous ozone at 9 ppm was sufficient to reduce MBP to a safe level, 6.0 and 5.4-log10, on nylon and rubber surfaces within 2 and 8 minutes, respectively. On complex surfaces (wood), both aqueous and gaseous ozone at up to 9 ppm were unable to reduce MBP to a safe level (3.6 and 0.8-log10, respectively). The bacterial load was a strong predictor of the reduction in MBP (P<0.0001, R² = 0.72). We conclude that aqueous ozone at 4 and 9 ppm for 2 minutes may provide an efficient method to reduce MBP to a safe level on smooth and moderately rough surfaces, respectively. However, ozone alone may not be an adequate means of controlling MBP on complex surfaces. PMID:29758045
Method of invitation and geographical proximity as predictors of NHS Health Check uptake.
Gidlow, Christopher; Ellis, Naomi; Randall, Jason; Cowap, Lisa; Smith, Graham; Iqbal, Zafar; Kumar, Jagdish
2015-06-01
Uptake of NHS Health Checks remains below the national target. Better understanding of predictors of uptake can inform targeting and delivery. We explored invitation method and geographical proximity as predictors of uptake in deprived urban communities. This observational cohort study used data from all 4855 individuals invited for an NHS Health Check (September 2010-February 2014) at five general practices in Stoke-on-Trent, UK. Attendance/non-attendance was the binary outcome variable. Predictor variables included the method of invitation, general practice, demographics, deprivation and distance to Health Check location. Mean attendance (61.6%) was above the city and national average, but varied by practice (47.5-83.3%; P < 0.001). Telephone/verbal invitations were associated with higher uptake than postal invitations (OR = 2.87, 95% CI = 2.26-3.64), yet significant practice-level variation remained. Distance to Health Check was not associated with attendance. Increasing age (OR = 1.04, 95% CI = 1.03-1.04), female gender (OR = 1.48, 95% CI = 1.30-1.68) and living in the least deprived areas (OR = 1.59, 95% CI = 1.23-2.05) were all independent positive predictors of attendance. Using verbal or telephone invitations should be considered to improve Health Check uptake. Other differences in recruitment and delivery that might explain remaining practice-level variation in uptake warrant further exploration. Geographical proximity may not be an important predictor of uptake in urban populations. © The Author 2014. Published by Oxford University Press on behalf of Faculty of Public Health.
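For readers who want to reproduce this kind of analysis, the sketch below fits a logistic regression on entirely synthetic data and reads the exponentiated coefficients as odds ratios, in the spirit of the uptake model above; the variable names, effect sizes, and data are invented for illustration and are not the study's dataset or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 4855
df = pd.DataFrame({
    "telephone_invite": rng.integers(0, 2, n),   # 1 = telephone/verbal, 0 = postal
    "age": rng.integers(40, 75, n),
    "female": rng.integers(0, 2, n),
})
# Simulate attendance with effects roughly on the scale reported above.
logit_p = -3.0 + 1.05 * df.telephone_invite + 0.04 * df.age + 0.39 * df.female
df["attended"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["telephone_invite", "age", "female"]])
fit = sm.Logit(df["attended"], X).fit(disp=0)
print(np.exp(fit.params).round(2))   # exponentiated coefficients read as odds ratios
```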
NASA Astrophysics Data System (ADS)
Greaves, Heather E.
Climate change is disproportionately affecting high northern latitudes, and the extreme temperatures, remoteness, and sheer size of the Arctic tundra biome have always posed challenges that make application of remote sensing technology especially appropriate. Advances in high-resolution remote sensing continually improve our ability to measure characteristics of tundra vegetation communities, which have been difficult to characterize previously due to their low stature and their distribution in complex, heterogeneous patches across large landscapes. In this work, I apply terrestrial lidar, airborne lidar, and high-resolution airborne multispectral imagery to estimate tundra vegetation characteristics for a research area near Toolik Lake, Alaska. Initially, I explored methods for estimating shrub biomass from terrestrial lidar point clouds, finding that a canopy-volume based algorithm performed best. Although shrub biomass estimates derived from airborne lidar data were less accurate than those from terrestrial lidar data, algorithm parameters used to derive biomass estimates were similar for both datasets. Additionally, I found that airborne lidar-based shrub biomass estimates were just as accurate whether calibrated against terrestrial lidar data or harvested shrub biomass--suggesting that terrestrial lidar potentially could replace destructive biomass harvest. Along with smoothed Normalized Differenced Vegetation Index (NDVI) derived from airborne imagery, airborne lidar-derived canopy volume was an important predictor in a Random Forest model trained to estimate shrub biomass across the 12.5 km2 covered by our lidar and imagery data. The resulting 0.80 m resolution shrub biomass maps should provide important benchmarks for change detection in the Toolik area, especially as deciduous shrubs continue to expand in tundra regions. Finally, I applied 33 lidar- and imagery-derived predictor layers in a validated Random Forest modeling approach to map vegetation community distribution at 20 cm resolution across the data collection area, creating maps that will enable validation of coarser maps, as well as study of fine-scale ecological processes in the area. These projects have pushed the limits of what can be accomplished for vegetation mapping using airborne remote sensing in a challenging but important region; it is my hope that the methods explored here will illuminate potential paths forward as landscapes and technologies inevitably continue to change.
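A minimal sketch of the Random Forest regression step described above is given below, assuming just two of the many lidar- and imagery-derived predictors (canopy volume and smoothed NDVI) and entirely synthetic plot data; the real model used many more predictor layers and field-calibrated biomass.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n = 400
canopy_volume = rng.gamma(2.0, 0.5, n)                               # lidar-derived, m^3 (synthetic)
ndvi = np.clip(0.3 + 0.1 * canopy_volume + rng.normal(0, 0.05, n), 0.0, 1.0)
biomass = 120 * canopy_volume + 200 * ndvi + rng.normal(0, 30, n)    # g per plot (synthetic)

X = np.column_stack([canopy_volume, ndvi])
rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0).fit(X, biomass)
print("out-of-bag R^2:", round(rf.oob_score_, 2))
print("importances [canopy volume, NDVI]:", np.round(rf.feature_importances_, 2))
```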
Q-Method Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Zanetti, Renato; Ainscough, Thomas; Christian, John; Spanos, Pol D.
2012-01-01
A new algorithm is proposed that smoothly integrates non-linear estimation of the attitude quaternion using Davenport's q-method and estimation of non-attitude states through an extended Kalman filter. The new method is compared to a similar existing algorithm, showing its similarities and differences. The validity of the proposed approach is confirmed through numerical simulations.
Slow-rotation dynamic SPECT with a temporal second derivative constraint.
Humphries, T; Celler, A; Trummer, M
2011-08-01
Dynamic tracer behavior in the human body arises as a result of continuous physiological processes. Hence, the change in tracer concentration within a region of interest (ROI) should follow a smooth curve. The authors propose a modification to an existing slow-rotation dynamic SPECT reconstruction algorithm (dSPECT) with the goal of improving the smoothness of time activity curves (TACs) and other properties of the reconstructed image. The new method, denoted d2EM, imposes a constraint on the second derivative (concavity) of the TAC in every voxel of the reconstructed image, allowing it to change sign at most once. Further constraints are enforced to prevent other nonphysical behaviors from arising. The new method is compared with dSPECT using digital phantom simulations and experimental dynamic 99mTc -DTPA renal SPECT data, to assess any improvement in image quality. In both phantom simulations and healthy volunteer experiments, the d2EM method provides smoother TACs than dSPECT, with more consistent shapes in regions with dynamic behavior. Magnitudes of TACs within an ROI still vary noticeably in both dSPECT and d2EM images, but also in images produced using an OSEM approach that reconstructs each time frame individually, based on much more complete projection data. TACs produced by averaging over a region are similar using either method, even for small ROIs. Results for experimental renal data show expected behavior in images produced by both methods, with d2EM providing somewhat smoother mean TACs and more consistent TAC shapes. The d2EM method is successful in improving the smoothness of time activity curves obtained from the reconstruction, as well as improving consistency of TAC shapes within ROIs.
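The concavity constraint is easy to state in discrete form; the toy check below is not the authors' d2EM reconstruction, only an illustration of the constraint itself: the second difference of a voxel time-activity curve may change sign at most once.

```python
import numpy as np

def concavity_sign_changes(tac):
    """Count sign changes in the discrete second derivative of a time-activity curve."""
    d2 = np.diff(np.asarray(tac, dtype=float), n=2)
    signs = np.sign(d2[np.abs(d2) > 1e-12])      # ignore numerically flat segments
    return int(np.sum(signs[1:] != signs[:-1]))

def satisfies_d2_constraint(tac):
    """d2EM-style feasibility: concavity may change sign at most once."""
    return concavity_sign_changes(tac) <= 1

t = np.linspace(0, 20, 40)
smooth_tac = 5 * t * np.exp(-0.3 * t)            # typical uptake-then-washout shape
noisy_tac = smooth_tac + np.random.default_rng(1).normal(0, 0.3, t.size)
print(satisfies_d2_constraint(smooth_tac))       # True: concave, then convex
print(satisfies_d2_constraint(noisy_tac))        # typically False: noise flips concavity
```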
NASA Astrophysics Data System (ADS)
M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.
2014-06-01
The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under meteorological conditions in Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. In general, the drying-rate plots require more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) was found to be effective for moisture-time curves. The idea of the method is to approximate the data by a CS regression having first and second derivatives. Analytical differentiation of the spline regression permits the determination of the instantaneous drying rate. The method of minimization of the functional of average risk was used successfully to solve the problem, and allows the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, which were fitted to the raw data and evaluated using the coefficient of determination (R²) and root mean square error (RMSE). The results showed that the Two Term model best describes the drying behavior. In addition, the drying rate smoothed using the CS proved to be an effective estimator for moisture-time curves as well as for missing moisture content data of the seaweed Kappaphycus striatum variety Durian in the solar dryer under the conditions tested.
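A hedged sketch of the cubic-spline step this abstract describes: fit a smoothing spline to moisture-content measurements and differentiate it analytically to obtain the instantaneous drying rate. All data values below are synthetic placeholders, not the experimental measurements.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

t_hours = np.linspace(0, 96, 25)                              # 4 days of drying
moisture = 8.2 + (93.4 - 8.2) * np.exp(-t_hours / 30.0)       # % wet basis (synthetic)
moisture += np.random.default_rng(2).normal(0.0, 1.0, t_hours.size)

spline = UnivariateSpline(t_hours, moisture, k=3, s=25.0)     # cubic smoothing spline
drying_rate = spline.derivative()                             # analytical first derivative

print("moisture at 48 h    ~ %.1f %%" % spline(48.0))
print("drying rate at 48 h ~ %.3f %% per hour" % drying_rate(48.0))
```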
[Using sequential indicator simulation method to define risk areas of soil heavy metals in farmland].
Yang, Hao; Song, Ying Qiang; Hu, Yue Ming; Chen, Fei Xiang; Zhang, Rui
2018-05-01
Heavy metals in soil have serious impacts on safety, the ecological environment and human health because of their toxicity and accumulation. It is necessary to efficiently identify risk areas of heavy metals in farmland soil, which is of great significance for environmental protection, pollution warning and farmland risk control. We collected 204 samples and analyzed the contents of seven heavy metals (Cu, Zn, Pb, Cd, Cr, As, Hg) in Zengcheng District of Guangzhou, China. To overcome problems with the data, including abnormal values, skewed distributions and the smoothing effect of traditional kriging methods, we used the sequential indicator simulation method (SISIM) to map the spatial distribution of the heavy metals, and combined it with the Hakanson index method to identify potential ecological risk areas of heavy metals in farmland. The results showed that: (1) With similar spatial prediction accuracy for soil heavy metals, the SISIM reproduced local detail better than ordinary kriging in small-scale areas. Compared with indicator kriging, the SISIM had a lower error rate (4.9%-17.1%) in the uncertainty evaluation of heavy-metal risk identification. The SISIM showed less smoothing effect and was better suited to simulating the spatial uncertainty of soil heavy metals and to risk identification. (2) There was no pollution in Zengcheng's farmland. Moderate potential ecological risk was found in the southern part of the study area due to enterprise production, human activities, and river sediments. This study combined sequential indicator simulation with the Hakanson risk index method and effectively overcame the information loss around outliers and the smoothing effect of traditional kriging methods. It provides a new way to identify soil heavy metal risk areas of farmland under uneven sampling.
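The Hakanson potential ecological risk index combined with the simulation output is a simple weighted sum; the sketch below computes it for one sample, assuming the standard toxic-response factors and purely illustrative background (reference) concentrations rather than the values used in the study.

```python
# Standard Hakanson toxic-response factors (assumed; check against the reference used).
TOXIC_RESPONSE = {"Cu": 5, "Zn": 1, "Pb": 5, "Cd": 30, "Cr": 2, "As": 10, "Hg": 40}

def hakanson_ri(concentration, background):
    """Potential ecological risk index: RI = sum_i T_i * C_i / C_background_i."""
    return sum(TOXIC_RESPONSE[m] * concentration[m] / background[m] for m in concentration)

# Illustrative background (reference) values and one sample, in mg/kg.
background = {"Cu": 17, "Zn": 47, "Pb": 36, "Cd": 0.06, "Cr": 51, "As": 9, "Hg": 0.08}
sample     = {"Cu": 25, "Zn": 80, "Pb": 50, "Cd": 0.15, "Cr": 60, "As": 11, "Hg": 0.12}

ri = hakanson_ri(sample, background)
# Commonly used thresholds: RI < 150 low, 150-300 moderate, 300-600 considerable risk.
level = "low" if ri < 150 else "moderate" if ri < 300 else "considerable or higher"
print(f"RI = {ri:.0f} -> {level} potential ecological risk")
```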
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Wenxiao; Bao, Jie; Tartakovsky, Alexandre M.
2014-02-15
A Robin boundary condition for the Navier-Stokes equations is used to model slip conditions at fluid-solid boundaries. A novel Continuous Boundary Force (CBF) method is proposed for solving the Navier-Stokes equations subject to the Robin boundary condition. In the CBF method, the Robin boundary condition at the boundary is replaced by a homogeneous Neumann boundary condition, and a volumetric force term is added to the momentum conservation equation. The Smoothed Particle Hydrodynamics (SPH) method is used to solve the resulting Navier-Stokes equations. We present solutions for two-dimensional and three-dimensional flows in domains bounded by flat and curved boundaries subject to various forms of the Robin boundary condition. The numerical accuracy and convergence are examined through comparison of the SPH-CBF results with the solutions of finite difference or finite element methods. Taking the no-slip boundary condition as a special case of the slip boundary condition, we demonstrate that the SPH-CBF method accurately describes both no-slip and slip conditions.
YFa and analogs: Investigation of opioid receptors in smooth muscle contraction
Kumar, Krishan; Goyal, Ritika; Mudgal, Annu; Mohan, Anita; Pasha, Santosh
2011-01-01
AIM: To study the pharmacological profile and inhibition of smooth muscle contraction by YFa and its analogs in conjunction with their receptor selectivity. METHODS: The effects of YFa and its analogs (D-Ala2) YFa, Y (D-Ala2) GFMKKKFMRF amide and Des-Phe-YGGFMKKKFMR amide on guinea pig ileum (GPI) and mouse vas deferens (MVD) motility were studied using an isolated tissue organ bath system, with morphine and DynA (1-13) serving as controls. Acetylcholine was used for muscle stimulation. The observations were validated by specific antagonist pretreatment experiments using naloxonazine, naltrindole and norbinaltorphimine (norBNI). RESULTS: YFa did not demonstrate significant inhibition of GPI muscle contraction as compared with morphine (15% vs 62%, P = 0.0002), but showed moderate inhibition of MVD muscle contraction, indicating the role of κ opioid receptors in the contraction. Moderate inhibition of GPI muscles by (Des-Phe) YFa revealed the role of anti-opiate receptors in the smooth muscle contraction. (D-Ala-2) YFa showed significant inhibition of smooth muscle contraction, indicating the involvement of mainly δ receptors in MVD contraction. These results were supported by specific antagonist pretreatment assays. CONCLUSION: YFa revealed side-effect-free analgesic properties with regard to arrest of gastrointestinal transit. The study provides evidence for the involvement of κ and anti-opioid receptors in smooth muscle contraction. PMID:22110284
Cervical cancer survival prediction using hybrid of SMOTE, CART and smooth support vector machine
NASA Astrophysics Data System (ADS)
Purnami, S. W.; Khasanah, P. M.; Sumartini, S. H.; Chosuvivatwong, V.; Sriplung, H.
2016-04-01
According to the WHO, every two minutes one patient dies from cervical cancer. The high mortality rate is due to a lack of awareness among women of the importance of early detection. Several factors are thought to influence the survival of cervical cancer patients, including age, anemia status, stage, type of treatment, complications and secondary disease. This study aims to classify and predict cervical cancer survival based on those factors. Several classification methods were used: classification and regression tree (CART), smooth support vector machine (SSVM), and three-order spline SSVM (TSSVM). Since the cervical cancer data are imbalanced, the synthetic minority oversampling technique (SMOTE) is used for handling the imbalanced dataset. Performances of these methods are evaluated using accuracy, sensitivity and specificity. Results of this study show that balancing the data with SMOTE as a preprocessing step improves classification performance. The SMOTE-SSVM method provided better results than SMOTE-TSSVM and SMOTE-CART.
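An illustrative pipeline in the spirit of the study is sketched below: SMOTE oversampling of the training set followed by a classifier evaluated with accuracy, sensitivity and specificity. A standard RBF support vector classifier stands in for the smooth SVM (SSVM), and the data are synthetic placeholders, not the clinical records.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic imbalanced "survival" data standing in for the clinical predictors.
X, y = make_classification(n_samples=600, n_features=6, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample the minority class
clf = SVC(kernel="rbf", gamma="scale").fit(X_bal, y_bal)        # stand-in for SSVM

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"accuracy={(tp + tn) / (tp + tn + fp + fn):.2f}",
      f"sensitivity={tp / (tp + fn):.2f}", f"specificity={tn / (tn + fp):.2f}")
```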
Stochastic modeling and simulation of reaction-diffusion system with Hill function dynamics.
Chen, Minghan; Li, Fei; Wang, Shuo; Cao, Young
2017-03-14
Stochastic simulation of reaction-diffusion systems presents great challenges for spatiotemporal biological modeling and simulation. One widely used framework for stochastic simulation of reaction-diffusion systems is the reaction-diffusion master equation (RDME). Previous studies have discovered that for the RDME, as the discretization size approaches zero, the reaction time for bimolecular reactions in high-dimensional domains tends to infinity. In this paper, we demonstrate that in a 1D domain, highly nonlinear reaction dynamics given by a Hill function may also change dramatically when the discretization size falls below a critical value. Moreover, we discuss methods to avoid this problem: smoothing over space, fixed-length smoothing over space, and a hybrid method. Our analysis reveals that the switch-like Hill dynamics reduces to a linear function of discretization size when the discretization size is small enough. The three proposed methods can correctly (to a given precision) simulate Hill function dynamics in the microscopic RDME system.
Single image super-resolution via an iterative reproducing kernel Hilbert space method.
Deng, Liang-Jian; Guo, Weihong; Huang, Ting-Zhu
2016-11-01
Image super-resolution, a process to enhance image resolution, has important applications in satellite imaging, high definition television, medical imaging, etc. Many existing approaches use multiple low-resolution images to recover one high-resolution image. In this paper, we present an iterative scheme to solve single image super-resolution problems. It recovers a high quality high-resolution image from a single low-resolution image without using a training data set. We solve the problem from the perspective of image intensity function estimation and assume the image contains smooth and edge components. We model the smooth components of an image using a thin-plate reproducing kernel Hilbert space (RKHS) and the edges using approximated Heaviside functions. The proposed method is applied to image patches, aiming to reduce computation and storage. Visual and quantitative comparisons with some competitive approaches show the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Sun, Jian-Qiao
2016-09-01
The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The solutions of the transition probability density functions over a small fraction of the period are constructed by the STGA scheme in order to construct the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from being Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs with the decrease of the smoothness parameter, which corresponds to the deterministic pitchfork bifurcation.
Calculation of smooth potential energy surfaces using local electron correlation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mata, Ricardo A.; Werner, Hans-Joachim
2006-11-14
The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl⁻ with alkylchlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.
Inverse metal-assisted chemical etching produces smooth high aspect ratio InP nanostructures.
Kim, Seung Hyun; Mohseni, Parsian K; Song, Yi; Ishihara, Tatsumi; Li, Xiuling
2015-01-14
Creating high aspect ratio (AR) nanostructures by top-down fabrication without surface damage remains challenging for III-V semiconductors. Here, we demonstrate uniform, array-based InP nanostructures with lateral dimensions as small as sub-20 nm and AR > 35 using inverse metal-assisted chemical etching (I-MacEtch) in hydrogen peroxide (H2O2) and sulfuric acid (H2SO4), a purely solution-based yet anisotropic etching method. The mechanism of I-MacEtch, in contrast to regular MacEtch, is explored through surface characterization. Unique to I-MacEtch, the sidewall etching profile is remarkably smooth, independent of metal pattern edge roughness. The capability of this simple method to create various InP nanostructures, including high AR fins, can potentially enable the aggressive scaling of InP based transistors and optoelectronic devices with better performance and at lower cost than conventional etching methods.
Airway mechanics and methods used to visualize smooth muscle dynamics in vitro.
Cooper, P R; McParland, B E; Mitchell, H W; Noble, P B; Politi, A Z; Ressmeyer, A R; West, A R
2009-10-01
Contraction of airway smooth muscle (ASM) is regulated by the physiological, structural and mechanical environment in the lung. We review two in vitro techniques, lung slices and airway segment preparations, that enable in situ ASM contraction and airway narrowing to be visualized. Lung slices and airway segment approaches bridge a gap between cell culture and isolated ASM, and whole animal studies. Imaging techniques enable key upstream events involved in airway narrowing, such as ASM cell signalling and structural and mechanical events impinging on ASM, to be investigated.
Brock, Timothy M; Sidaginamale, Raghavendra; Rushton, Steven; Nargol, Antoni V F; Bowsher, John G; Savisaar, Christina; Joyce, Tom J; Deehan, David J; Lord, James K; Langton, David J
2015-12-01
Taper wear at the head-neck junction is a possible cause of early failure in large head metal-on-metal (LH-MoM) hip replacements. We hypothesized that: (i) taper wear may be more pronounced in certain product designs; and (ii) an increased abductor moment arm may be protective. The tapers of 104 explanted LH-MoM hip replacements revised for adverse reaction to metal debris (ARMD) from a single manufacturer were analyzed for linear and volumetric wear using a co-ordinate measuring machine. The mated stem was a shorter 12/14, threaded trunnion (n=72) or a longer, smooth 11/13 trunnion (n=32). The abductor moment arm was calculated from pre-revision radiographs. Independent predictors of linear and volumetric wear included taper angle, stem type, and the horizontal moment arm. Tapers mated with the threaded 12/14 trunnion had significantly higher rates of volumetric wear (0.402 mm3/yr vs. 0.123 mm3/yr [t=-2.145, p=0.035]). There was a trend to larger abductor moment arms being protective (p=0.055). Design variation appears to play an important role in taper-trunnion junction failure. We recommend that surgeons bear these findings in mind when considering the use of a short, threaded trunnion with a cobalt-chromium head. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
SG1120-1202: Mass-quenching as Tracked by UV Emission in the Group Environment at z=0.37
NASA Astrophysics Data System (ADS)
Monroe, Jonathan T.; Tran, Kim-Vy H.; Gonzalez, Anthony H.
2017-02-01
We use the Hubble Space Telescope to obtain WFC3/F390W imaging of the supergroup SG1120-1202 at z=0.37, mapping the UV emission of 138 spectroscopically confirmed members. We measure total (F390W-F814W) colors and visually classify the UV morphology of individual galaxies as “clumpy” or “smooth.” Approximately 30% of the members have pockets of UV emission (clumpy) and we identify for the first time in the group environment galaxies with UV morphologies similar to the “jellyfish” galaxies observed in massive clusters. We stack the clumpy UV members and measure a shallow internal color gradient, which indicates that unobscured star formation is occurring throughout these galaxies. We also stack the four galaxy groups and measure a strong trend of decreasing UV emission with decreasing projected group distance (R_proj). We find that the strong correlation between decreasing UV emission and increasing stellar mass can fully account for the observed trend in (F390W-F814W) versus R_proj, i.e., mass-quenching is the dominant mechanism for extinguishing UV emission in group galaxies. Our extensive multi-wavelength analysis of SG1120-1202 indicates that stellar mass is the primary predictor of UV emission, but that the increasing fraction of massive (red/smooth) galaxies at R_proj ≲ 2 R_200 and the existence of jellyfish candidates is due to the group environment.
Optimising predictor domains for spatially coherent precipitation downscaling
NASA Astrophysics Data System (ADS)
Radanovics, S.; Vidal, J.-P.; Sauquet, E.; Ben Daoud, A.; Bontron, G.
2012-04-01
Relationships between local precipitation (predictands) and large-scale circulation (predictors) are used for statistical downscaling purposes in various contexts, from medium-term forecasting to climate change impact studies. For hydrological purposes like flood forecasting, the downscaled precipitation fields furthermore have to be spatially coherent over possibly large basins. This first requires knowing which predictor domain can be associated with the precipitation over each part of the studied basin. This study addresses the issue by identifying the optimum predictor domains over the whole of France for a specific downscaling method based on an analogue approach and developed by Ben Daoud et al. (2011). The downscaling method used here is based on analogies in different variables: temperature, relative humidity, vertical velocity and geopotentials. The optimum predictor domain has been found to consist of the nearest grid cell for all variables except geopotentials (Ben Daoud et al., 2011). Moreover, geopotential domains have been found to be sensitive to the target location by Obled et al. (2002), and the present study thus focuses on optimizing the domains of this specific predictor over France. The predictor domains for geopotential at 500 hPa and 1000 hPa are optimised for 608 climatologically homogeneous zones in France, using the ERA-40 reanalysis data for the large-scale predictors and local precipitation from the Safran near-surface atmospheric reanalysis (Vidal et al., 2010). The similarity of geopotential fields is measured by the Teweles and Wobus shape criterion. The predictive skill of different predictor domains for the different regions is tested with the Continuous Ranked Probability Score (CRPS) for the 25 best analogue days found with the statistical downscaling method. Rectangular predictor domains of different sizes, shapes and locations are tested, and the one that leads to the smallest CRPS for the zone in question is retained. The resulting optimised domains are analysed to define regions where neighbouring zones have equal or similar predictor domains and to identify which French river basins contain zones associated with different predictor domains, i.e. are exposed to different meteorological influences. This analysis will be used (1) to extend the statistical downscaling method of Ben Daoud et al. (2011) to the whole of France and (2) to develop it further in order to achieve spatially coherent forecasts while preserving predictive skill at the local scale. Ben Daoud, A., Sauquet, E., Lang, M., Bontron, G., and Obled, C. (2011). Precipitation forecasting through an analog sorting technique: a comparative study. Advances in Geosciences, 29:103-107. doi: 10.5194/adgeo-29-103-2011. Obled, C., Bontron, G., and Garçon, R. (2002). Quantitative precipitation forecasts: a statistical adaptation of model outputs through an analogues sorting approach. Atmospheric Research, 63(3-4):303-324. doi: 10.1016/S0169-8095(02)00038-8. Vidal, J.-P., Martin, E., Franchistéguy, L., Baillon, M., and Soubeyroux, J.-M. (2010). A 50-year high-resolution atmospheric reanalysis over France with the Safran system. International Journal of Climatology, 30:1627-1644. doi: 10.1002/joc.2003.
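To make the shape criterion concrete, here is a small sketch of the Teweles-Wobus score in the form commonly used for analog selection, applied to synthetic 500 hPa geopotential fields; the domain size, archive length and all values are placeholders, and the real workflow also involves the other predictors and the CRPS evaluation described above.

```python
import numpy as np

def teweles_wobus(target, candidate):
    """S1 shape score between two 2-D geopotential fields (lower = more similar)."""
    num, den = 0.0, 0.0
    for axis in (0, 1):                           # finite-difference gradients along both axes
        gt = np.diff(target, axis=axis)
        gc = np.diff(candidate, axis=axis)
        num += np.abs(gt - gc).sum()
        den += np.maximum(np.abs(gt), np.abs(gc)).sum()
    return 100.0 * num / den

rng = np.random.default_rng(3)
z500_target = rng.normal(5500.0, 50.0, (7, 9))    # target-day 500 hPa field on one domain
archive = rng.normal(5500.0, 50.0, (1000, 7, 9))  # candidate analog days
scores = np.array([teweles_wobus(z500_target, day) for day in archive])
best_25 = np.argsort(scores)[:25]                 # the 25 best analog days fed to the CRPS
print("best analog S1 scores:", np.round(np.sort(scores)[:5], 1))
```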
Survey and Method for Determination of Trajectory Predictor Requirements
NASA Technical Reports Server (NTRS)
Rentas, Tamika L.; Green, Steven M.; Cate, Karen Tung
2009-01-01
A survey of air-traffic-management researchers, representing a broad range of automation applications, was conducted to document trajectory-predictor requirements for future decision-support systems. Results indicated that the researchers were unable to articulate a basic set of trajectory-prediction requirements for their automation concepts. Survey responses showed the need to establish a process to help developers determine the trajectory-predictor-performance requirements for their concepts. Two methods for determining trajectory-predictor requirements are introduced. A fast-time simulation method is discussed that captures the sensitivity of a concept to the performance of its trajectory-prediction capability. A characterization method is proposed to provide quicker, yet less precise results, based on analysis and simulation to characterize the trajectory-prediction errors associated with key modeling options for a specific concept. Concept developers can then identify the relative sizes of errors associated with key modeling options, and qualitatively determine which options lead to significant errors. The characterization method is demonstrated for a case study involving future airport surface traffic management automation. Of the top four sources of error, results indicated that the error associated with accelerations to and from turn speeds was unacceptable, the error associated with the turn path model was acceptable, and the error associated with taxi-speed estimation was of concern and needed a higher fidelity concept simulation to obtain a more precise result.
Spencer, Nick J; Hibberd, Timothy J; Travis, Lee; Wiklendt, Lukasz; Costa, Marcello; Hu, Hongzhen; Brookes, Simon J; Wattchow, David A; Dinning, Phil G; Keating, Damien J; Sorensen, Julian
2018-05-28
The enteric nervous system (ENS) contains millions of neurons essential for organization of motor behaviour of the intestine. It is well established that the large intestine requires ENS activity to drive propulsive motor behaviours. However, the firing pattern of the ENS underlying propagating neurogenic contractions of the large intestine remains unknown. To identify this, we used high resolution neuronal imaging with electrophysiology from neighbouring smooth muscle. Myoelectric activity underlying propagating neurogenic contractions along the murine large intestine (referred to as colonic migrating motor complexes, CMMCs) consisted of prolonged bursts of rhythmic depolarizations at a frequency of ∼2 Hz. Temporal coordination of this activity in the smooth muscle over large spatial fields (∼7 mm, longitudinally) was dependent on the ENS. During quiescent periods between neurogenic contractions, recordings from large populations of enteric neurons, in mice of either sex, revealed ongoing activity. The onset of neurogenic contractions was characterized by the emergence of temporally synchronized activity across large populations of excitatory and inhibitory neurons. This neuronal firing pattern was rhythmic and temporally synchronized across large numbers of ganglia at ∼2 Hz. ENS activation preceded smooth muscle depolarization, indicating that rhythmic depolarizations in smooth muscle were controlled by firing of enteric neurons. The cyclical emergence of temporally coordinated firing of large populations of enteric neurons represents a unique neural motor pattern outside the central nervous system. This is the first direct observation of rhythmic firing in the ENS underlying rhythmic electrical depolarizations in smooth muscle. The pattern of neuronal activity we identified underlies the generation of CMMCs. SIGNIFICANCE STATEMENT How the enteric nervous system (ENS) generates neurogenic contractions of smooth muscle in the gastrointestinal (GI) tract has been a long-standing mystery in vertebrates. It is well known that myogenic pacemaker cells exist in the GI tract (called interstitial cells of Cajal, ICC) that generate rhythmic myogenic contractions. However, the mechanisms underlying the generation of rhythmic neurogenic contractions of smooth muscle in the GI tract remain unknown. We developed a high resolution neuronal imaging method with electrophysiology to address this issue. This technique revealed a novel pattern of rhythmic coordinated neuronal firing in the ENS that had never been identified. Rhythmic neuronal firing in the ENS was found to generate rhythmic neurogenic depolarizations in smooth muscle that underlie contraction of the GI tract. Copyright © 2018 the authors.
MO-DE-207A-11: Sparse-View CT Reconstruction Via a Novel Non-Local Means Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Z; Qi, H; Wu, S
2016-06-15
Purpose: Sparse-view computed tomography (CT) reconstruction is an effective strategy to reduce the radiation dose delivered to patients. Due to the insufficiency of measurements, traditional non-local means (NLM) based reconstruction methods often lead to over-smoothness at image edges. To address this problem, an adaptive NLM reconstruction method based on rotational invariance (RIANLM) is proposed. Methods: The method consists of four steps: 1) initializing parameters; 2) algebraic reconstruction technique (ART) reconstruction using raw projection data; 3) positivity constraint on the image reconstructed by ART; 4) updating the reconstructed image by RIANLM filtering. In RIANLM, a novel rotationally invariant similarity metric is proposed and used to calculate the distance between two patches. In this way, any patch with a structure similar to the reference patch but a different orientation receives a relatively large weight, avoiding an over-smoothed image. Moreover, the parameter h in RIANLM, which controls the decay of the weights, is adaptive to avoid over-smoothness, whereas in NLM it is not adaptive during the reconstruction process. The proposed method is named ART-RIANLM and validated on the Shepp-Logan phantom and on clinical projection data. Results: In our experiments, the searching neighborhood size is set to 15 by 15 and the similarity window to 3 by 3. For the simulated case with a 256 by 256 Shepp-Logan phantom, ART-RIANLM produces a reconstructed image with higher SNR (35.38 dB vs. 24.00 dB) and lower MAE (0.0006 vs. 0.0023) than ART-NLM. Visual inspection demonstrated that the proposed method could suppress artifacts and noise more effectively and preserve image edges better. Similar results were found for the clinical data case. Conclusion: A novel ART-RIANLM method for sparse-view CT reconstruction is presented with superior image quality. Compared to the conventional ART-NLM method, the SNR from ART-RIANLM increases by 47% and the MAE decreases by 74%.
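A hedged sketch of a rotation-invariant patch distance for NLM weighting is given below: a patch that is similar in structure but differently oriented still receives a large weight. It only illustrates the idea with right-angle rotations and is not the paper's exact metric or its adaptive choice of h.

```python
import numpy as np

def rotation_invariant_distance(p, q):
    """Smallest squared L2 distance between patch p and the four right-angle rotations of q."""
    return min(np.sum((p - np.rot90(q, k)) ** 2) for k in range(4))

def nlm_weight(p, q, h):
    """Non-local means weight; h controls how quickly the weight decays with distance."""
    return np.exp(-rotation_invariant_distance(p, q) / (h ** 2))

ref = np.zeros((3, 3)); ref[:, 1] = 1.0          # vertical edge patch
rotated = np.rot90(ref)                          # same edge, different orientation
random_patch = np.random.default_rng(4).normal(0.0, 0.1, (3, 3))

print("weight vs rotated edge :", round(nlm_weight(ref, rotated, h=0.5), 3))      # ~1.0
print("weight vs random patch :", round(nlm_weight(ref, random_patch, h=0.5), 3)) # ~0.0
```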
Fabrication of low cost soft tissue prostheses with the desktop 3D printer
NASA Astrophysics Data System (ADS)
He, Yong; Xue, Guang-Huai; Fu, Jian-Zhong
2014-11-01
Soft tissue prostheses such as artificial ears, eyes and noses are widely used in maxillofacial rehabilitation. In this report we demonstrate how to fabricate a soft prosthesis mold with a low cost desktop 3D printer. The fabrication method is referred to as Scanning Printing Polishing Casting (SPPC). First the anatomy is scanned with a 3D scanner, then a tissue casting mold is designed on the computer and printed with a desktop 3D printer. Subsequently, a chemical polishing method is used to polish the casting mold, removing the staircase effect and yielding a smooth surface. Finally, medical grade silicone is cast into the mold. After the silicone is cured, the fine soft prosthesis can be removed from the mold. Using the SPPC method, soft prostheses with smooth surfaces and complicated structures can be fabricated at low cost. The total cost of fabricating an ear prosthesis is about $30, which is much lower than with current soft prosthesis fabrication methods.
Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context
Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan
2012-01-01
When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720
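A small empirical sketch of this point is given below: with weak, sparse signals, the number of variables selected by penalized regression tuned with 10-fold cross-validation varies considerably across repeated random CV splits. LassoCV stands in for SCAD or the Adaptive Lasso here, and the data are simulated rather than SNP data.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, n_true = 200, 300, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:n_true] = 0.15                               # sparse but weak signals
y = X @ beta + rng.normal(size=n)

selected_counts = []
for seed in range(20):                             # 20 independent runs of 10-fold CV
    folds = KFold(n_splits=10, shuffle=True, random_state=seed)
    model = LassoCV(cv=folds).fit(X, y)
    selected_counts.append(int(np.sum(model.coef_ != 0)))

print("selected-variable counts over 20 CV runs:", selected_counts)
```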
Stemshorn, B; Nielsen, K; Samagh, B
1981-01-01
Two methods are described for the partial purification of a high molecular weight, heat-resistant component (CO1) of sonicates of smooth and rough Brucella abortus which is precipitated by sera of some infected cattle. Method 1, a combination of gel filtration chromatography and polyacrylamide gel electrophoresis, was used to prepare CO1 from sonicates of a smooth field strain of B. abortus. Method 2, a combination of gel filtration chromatography and heat treatment, was used to obtain CO1 from sonicates of rough B. abortus strain 45/20. Rabbit antisera produced against CO1 prepared by either method contained only CO1 precipitins but were negative in standard agglutination and complement fixation tests conducted with whole cell antigens. Evidence is presented that CO1 is identical to Brucella antigen A2, and it is proposed that in future the designation A2 be employed. PMID:6791797
Method of adiabatic modes in studying problems of smoothly irregular open waveguide structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sevastianov, L. A., E-mail: sevast@sci.pfu.edu.ru; Egorov, A. A.; Sevastyanov, A. L.
2013-02-15
Basic steps in developing an original method of adiabatic modes that makes it possible to solve the direct and inverse problems of simulating and designing three-dimensional multilayered smoothly irregular open waveguide structures are described. A new element in the method is that an approximate solution of Maxwell's equations is made to obey 'inclined' boundary conditions at the interfaces between the media being considered. These boundary conditions take into account the obliqueness of planes tangent to nonplanar boundaries between the media and lead to new equations for coupled vector quasiwaveguide hybrid adiabatic modes. Solutions of these equations describe the phenomenon of 'entanglement' of two linear polarizations of an irregular multilayered waveguide, the appearance of a new mode in an entangled state, and the effect of rotation of the polarization plane of quasiwaveguide modes. The efficiency of the method is demonstrated by considering the example of numerically simulating a thin-film generalized waveguide Lueneburg lens.
An Improved Time-Frequency Analysis Method in Interference Detection for GNSS Receivers
Sun, Kewen; Jin, Tian; Yang, Dongkai
2015-01-01
In this paper, an improved joint time-frequency (TF) analysis method based on a reassigned smoothed pseudo Wigner-Ville distribution (RSPWVD) is proposed for interference detection in Global Navigation Satellite System (GNSS) receivers. In the RSPWVD, a two-dimensional low-pass smoothing function is introduced to eliminate the cross-terms present in the quadratic TF distribution, and at the same time the reassignment method is adopted to improve the TF concentration of the auto-terms of the signal components. The proposed interference detection method is evaluated in experiments on GPS L1 signals in disturbing scenarios and compared to state-of-the-art interference detection approaches. The analysis results show that the proposed technique effectively overcomes the cross-term problem while preserving good TF localization properties, and that it enhances the interference detection performance of GNSS receivers, particularly in jamming environments. PMID:25905704
NASA Astrophysics Data System (ADS)
Martin, Bradley; Fornberg, Bengt
2017-04-01
In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.
Sun, Jun; Zhou, Xin; Wu, Xiaohong; Zhang, Xiaodong; Li, Qinglin
2016-02-26
Fast identification of the moisture content in tobacco plant leaves plays a key role in the tobacco cultivation industry and benefits the management of tobacco plants on the farm. In order to identify the moisture content of tobacco plant leaves in a fast and nondestructive way, a method involving Mahalanobis distance coupled with Monte Carlo cross validation (MD-MCCV) was proposed in this study to eliminate outlier samples. Hyperspectral data of 200 tobacco plant leaf samples spanning 20 moisture gradients were obtained using a FieldSpec® 3 spectrometer. Savitzky-Golay smoothing (SG), roughness penalty smoothing (RPS), kernel smoothing (KS) and median smoothing (MS) were used to preprocess the raw spectra. In addition, Mahalanobis distance (MD), Monte Carlo cross validation (MCCV) and Mahalanobis distance coupled with Monte Carlo cross validation (MD-MCCV) were applied to select outlier samples from the raw spectra and the four smoothed spectra. The successive projections algorithm (SPA) was used to extract the most influential wavelengths. Multiple Linear Regression (MLR) was applied to build prediction models based on the preprocessed spectral features at the characteristic wavelengths. The results showed that the four best prediction models were MD-MCCV-SG (Rp² = 0.8401, RMSEP = 0.1355), MD-MCCV-RPS (Rp² = 0.8030, RMSEP = 0.1274), MD-MCCV-KS (Rp² = 0.8117, RMSEP = 0.1433), and MD-MCCV-MS (Rp² = 0.9132, RMSEP = 0.1162). The MD-MCCV algorithm performed best among the MD algorithm, the MCCV algorithm and no sample pretreatment in eliminating outlier samples from the 20 moisture gradients of tobacco plant leaves, and MD-MCCV can be used to eliminate outlier samples in spectral preprocessing. Copyright © 2016 Elsevier Inc. All rights reserved.
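A rough sketch of combining Mahalanobis distance with Monte Carlo cross validation to flag outlier spectra before modeling is given below, in the spirit of the MD-MCCV step above; the flagging thresholds, the PCA compression and the linear model are illustrative choices, not the authors' exact settings, and the "spectra" are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

def md_mccv_outliers(X, y, n_splits=100, md_q=0.95, err_q=0.95):
    """Flag samples that have both a large Mahalanobis distance and large MCCV errors."""
    scores = PCA(n_components=5).fit_transform(X)            # compress the spectra first
    centred = scores - scores.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))
    md = np.sqrt(np.einsum("ij,jk,ik->i", centred, cov_inv, centred))

    err_sum, err_cnt = np.zeros(len(y)), np.zeros(len(y))
    for seed in range(n_splits):                             # Monte Carlo cross validation
        tr, te = train_test_split(np.arange(len(y)), test_size=0.3, random_state=seed)
        model = LinearRegression().fit(scores[tr], y[tr])
        err_sum[te] += (model.predict(scores[te]) - y[te]) ** 2
        err_cnt[te] += 1
    mean_err = err_sum / np.maximum(err_cnt, 1)

    flagged = (md > np.quantile(md, md_q)) & (mean_err > np.quantile(mean_err, err_q))
    return np.where(flagged)[0]

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 150))                              # synthetic "spectra"
y = X[:, :5].sum(axis=1) + rng.normal(0.0, 0.1, 200)         # synthetic moisture values
X[0] += 5.0; y[0] += 10.0                                    # plant one gross outlier
print("flagged outlier samples:", md_mccv_outliers(X, y))
```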