Sample records for optimal smoothing parameter

  1. Adaptive quantization-parameter clip scheme for smooth quality in H.264/AVC.

    PubMed

    Hu, Sudeng; Wang, Hanli; Kwong, Sam

    2012-04-01

    In this paper, we investigate the issues of quality smoothness and bit-rate smoothness during rate control (RC) in H.264/AVC. An adaptive quantization-parameter (Q(p)) clip scheme is proposed to optimize the quality smoothness while keeping the bit-rate fluctuation at an acceptable level. First, the frame complexity variation is studied by defining a complexity ratio between two nearby frames. Second, the range of the generated bits is analyzed to prevent the encoder buffer from overflowing and underflowing. Third, based on the safe range of the generated bits, an optimal Q(p) clip range is developed to reduce the quality fluctuation. Experimental results demonstrate that the proposed Q(p) clip scheme can achieve excellent performance in quality smoothness and buffer regulation.
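    A minimal sketch of the clipping idea follows. It is not the authors' scheme: every constant, the complexity-ratio widening rule, and the buffer thresholds are hypothetical placeholders, used only to illustrate how a QP clip range can adapt to frame complexity and buffer occupancy.

```python
def adaptive_qp_clip(qp_rc, qp_prev, complexity_ratio, buffer_fullness):
    """Clip the rate-control QP around the previous frame's QP.

    qp_rc            : QP suggested by the rate controller for this frame
    qp_prev          : QP used for the previous frame
    complexity_ratio : current-frame complexity / previous-frame complexity
    buffer_fullness  : encoder buffer occupancy in [0, 1]
    """
    # Allow a wider QP swing when frame complexity changes sharply, and a
    # narrower one between similar frames (quality smoothness). Hypothetical.
    clip_range = 2 + int(round(abs(complexity_ratio - 1.0) * 4))
    # Tighten the clip when the buffer nears overflow or underflow.
    if buffer_fullness > 0.9 or buffer_fullness < 0.1:
        clip_range = max(1, clip_range - 1)
    lo, hi = qp_prev - clip_range, qp_prev + clip_range
    return max(lo, min(hi, qp_rc))
```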

  2. Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction

    PubMed Central

    Lancaster, Jenessa; Lorenz, Romy; Leech, Rob; Cole, James H.

    2018-01-01

    Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm3, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm3, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm3 and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian optimization framework to the new dataset, out-performing the parameters optimized for the initial training dataset. Our study outlines the proof-of-principle that neuroimaging models for brain-age prediction can use Bayesian optimization to derive case-specific pre-processing parameters. Our results suggest that different pre-processing parameters are selected when optimization is conducted in specific contexts. This potentially motivates use of optimization techniques at many different points during the experimental process, which may improve statistical sensitivity and reduce opportunities for experimenter-led bias. PMID:29483870
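    The optimization loop itself can be sketched as below: a Gaussian-process surrogate with an expected-improvement acquisition proposes the next (voxel size, smoothing kernel) pair to evaluate. The score() objective is a placeholder for the real resample-smooth-SVM cross-validation pipeline, and all ranges and counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def score(voxel_mm, fwhm_mm):
    # Placeholder objective: substitute the real pipeline here
    # (resample -> smooth -> train SVM -> cross-validated accuracy).
    return -(voxel_mm - 4.0) ** 2 - (fwhm_mm - 3.5) ** 2

rng = np.random.default_rng(0)
# Candidate grid over plausible (assumed) parameter ranges.
cand = np.array([(v, f) for v in np.linspace(1.0, 12.0, 45)
                 for f in np.linspace(0.0, 8.0, 33)])

# A few random initial evaluations to seed the surrogate.
X = cand[rng.choice(len(cand), size=5, replace=False)]
y = np.array([score(*x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                              normalize_y=True)
for _ in range(20):
    gp.fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    best = y.max()
    # Expected improvement (maximization form).
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, score(*x_next))

print("best parameters:", X[np.argmax(y)], "score:", y.max())
```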

  3. Optimal HRF and smoothing parameters for fMRI time series within an autoregressive modeling framework.

    PubMed

    Galka, Andreas; Siniatchkin, Michael; Stephani, Ulrich; Groening, Kristina; Wolff, Stephan; Bosch-Bayard, Jorge; Ozaki, Tohru

    2010-12-01

    The analysis of time series obtained by functional magnetic resonance imaging (fMRI) may be approached by fitting predictive parametric models, such as nearest-neighbor autoregressive models with exogenous input (NNARX). As a part of the modeling procedure, it is possible to apply instantaneous linear transformations to the data. Spatial smoothing, a common preprocessing step, may be interpreted as such a transformation. The autoregressive parameters may be constrained, such that they provide a response behavior that corresponds to the canonical haemodynamic response function (HRF). We present an algorithm for estimating the parameters of the linear transformations and of the HRF within a rigorous maximum-likelihood framework. Using this approach, an optimal amount of spatial smoothing and an optimal HRF can be estimated simultaneously for a given fMRI data set. An example from a motor-task experiment is discussed. It is found that, for this data set, weak, but non-zero, spatial smoothing is optimal. Furthermore, it is demonstrated that activated regions can be estimated within the maximum-likelihood framework.

  4. Smoothing-based compressed state Kalman filter for joint state-parameter estimation: Applications in reservoir characterization and CO2 storage monitoring

    NASA Astrophysics Data System (ADS)

    Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.

    2017-08-01

    The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step-ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one-step-ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step-ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that, for the same computational cost, combining one-step-ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.

  5. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and these make the problem of parameter estimation challenging. It is a common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
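    A minimal particle swarm optimizer is sketched below; the toy objective stands in for a negative log-likelihood over six parameters, and the inertia/acceleration constants are common textbook defaults rather than the paper's settings.

```python
import numpy as np

def pso(objective, lo, hi, n_particles=40, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize objective(x) over the box [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)]                      # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# Toy 6-parameter objective standing in for -log(likelihood).
g, fval = pso(lambda p: np.sum((p - 0.3) ** 2), lo=np.zeros(6), hi=np.ones(6))
```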

  6. Optimization of reactive-ion etching (RIE) parameters for fabrication of tantalum pentoxide (Ta2O5) waveguide using Taguchi method

    NASA Astrophysics Data System (ADS)

    Muttalib, M. Firdaus A.; Chen, Ruiqi Y.; Pearce, S. J.; Charlton, Martin D. B.

    2017-11-01

    In this paper, we demonstrate the optimization of reactive-ion etching (RIE) parameters for the fabrication of a tantalum pentoxide (Ta2O5) waveguide with a chromium (Cr) hard mask in a commercial OIPT Plasmalab 80 RIE etcher. A design of experiment (DOE) using the Taguchi method was implemented to find the optimum RF power, CHF3:Ar gas mixture ratio, and chamber pressure for a high etch rate, good selectivity, and a smooth waveguide sidewall. It was found that the optimized etch conditions obtained in this work were RF power = 200 W, gas ratio = 80%, and chamber pressure = 30 mTorr, with an etch rate of 21.6 nm/min, a Ta2O5/Cr selectivity ratio of 28, and a smooth waveguide sidewall.

  7. Optimizing chirped laser pulse parameters for electron acceleration in vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhyani, Mina; Jahangiri, Fazel; Niknam, Ali Reza

    2015-11-14

    Electron dynamics in the field of a chirped linearly polarized laser pulse is investigated. Variations of electron energy gain versus chirp parameter, time duration, and initial phase of laser pulse are studied. Based on maximizing laser pulse asymmetry, a numerical optimization procedure is presented, which leads to the elimination of rapid fluctuations of gain versus the chirp parameter. Instead, a smooth variation is observed that considerably reduces the accuracy required for experimentally adjusting the chirp parameter.

  8. Improving smoothing efficiency of rigid conformal polishing tool using time-dependent smoothing evaluation model

    NASA Astrophysics Data System (ADS)

    Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng

    2017-06-01

    A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency is poorer than expected, and existing smoothing models cannot explicitly specify the methods to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.

  9. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    PubMed

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  10. The quasi-optimality criterion in the linear functional strategy

    NASA Astrophysics Data System (ADS)

    Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey

    2018-07-01

    The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications that incorporate the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules, taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and a stochastic setup and verify that for mildly ill-posed problems and Gaussian noise these conditions are satisfied almost surely, whereas in the severely ill-posed case, in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
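    The basic quasi-optimality rule is easy to state in code: over a geometric grid of regularization parameters, keep the one that minimizes the change between consecutive regularized solutions. This sketch covers only plain Tikhonov regularization, not the paper's linear-functional refinements.

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularized least squares: min ||Ax - y||^2 + alpha ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def quasi_optimality(A, y, alphas):
    """alphas: decreasing geometric grid, e.g. alpha0 * q**k with 0 < q < 1."""
    xs = [tikhonov(A, y, a) for a in alphas]
    diffs = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    k_star = int(np.argmin(diffs))          # quasi-optimality index
    return alphas[k_star], xs[k_star]

# Example grid: alpha_k = q**k with q = 0.5.
alphas = 0.5 ** np.arange(25)
```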

  11. Global optimization for motion estimation with applications to ultrasound videos of carotid artery plaques

    NASA Astrophysics Data System (ADS)

    Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.

    2010-03-01

    Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound (US) videos, where speckle noise levels can be significant. Motion estimation using optical flow models requires the modification of several parameters to satisfy the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth to use for validating the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications like motion analysis of US videos of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.

  12. An optimized Nash nonlinear grey Bernoulli model based on particle swarm optimization and its application in prediction for the incidence of Hepatitis B in Xinjiang, China.

    PubMed

    Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian

    2014-06-01

    In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.

  13. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

    We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. The adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite-element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.

  14. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    NASA Astrophysics Data System (ADS)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
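    The cumulative DVH itself is straightforward once the dose grid and an organ mask exist; a minimal sketch under those assumptions:

```python
import numpy as np

def cumulative_dvh(dose, mask, n_bins=200):
    """Cumulative DVH: fraction of organ volume receiving at least dose d."""
    d = dose[mask]                           # doses of voxels inside the organ
    thresholds = np.linspace(0.0, d.max(), n_bins)
    volume_fraction = np.array([(d >= t).mean() for t in thresholds])
    return thresholds, volume_fraction

# dose: 3D float array, e.g. from a QSPECT activity image convolved with a
# voxel S kernel; mask: boolean array selecting the organ (e.g. kidney) voxels.
```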

  15. Likelihood testing of seismicity-based rate forecasts of induced earthquakes in Oklahoma and Kansas

    USGS Publications Warehouse

    Moschetti, Morgan P.; Hoover, Susan M.; Mueller, Charles

    2016-01-01

    Likelihood testing of induced earthquakes in Oklahoma and Kansas has identified the parameters that optimize the forecasting ability of smoothed seismicity models and quantified the recent temporal stability of the spatial seismicity patterns. Use of the most recent 1-year period of earthquake data and use of 10–20-km smoothing distances produced the greatest likelihood. The likelihood that the locations of January–June 2015 earthquakes were consistent with optimized forecasts decayed with increasing elapsed time between the catalogs used for model development and testing. Likelihood tests with two additional sets of earthquakes from 2014 exhibit a strong sensitivity of the rate of decay to the smoothing distance. Marked reductions in likelihood are caused by the nonstationarity of the induced earthquake locations. Our results indicate a multiple-fold benefit from smoothed seismicity models in developing short-term earthquake rate forecasts for induced earthquakes in Oklahoma and Kansas, relative to the use of seismic source zones.
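    A sketch of the underlying recipe, with illustrative grid spacing and smoothing distances: smooth a gridded epicenter-count map with a Gaussian kernel, rescale to the observed event total, and score it against a test catalog with a Poisson log-likelihood.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gammaln

def rate_forecast(counts, smoothing_km, cell_km):
    """Gaussian-smooth a 2D epicenter-count grid into an expected-rate map."""
    rates = gaussian_filter(counts.astype(float), sigma=smoothing_km / cell_km)
    return rates * counts.sum() / rates.sum()    # preserve total event count

def poisson_loglik(rates, test_counts):
    # log L = sum_i [ n_i log(mu_i) - mu_i - log(n_i!) ]
    mu = np.maximum(rates, 1e-12)
    return np.sum(test_counts * np.log(mu) - mu - gammaln(test_counts + 1.0))

# Scan candidate smoothing distances (e.g. 5-50 km) against a later test
# catalog and keep the likelihood-maximizing distance.
```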

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jassal, K; Sarkar, B; Ganesh, T

    Purpose: The study investigates the effect of the fluence smoothing parameter on VMAT plans for ten head-neck cancer patients using Monaco 5.00.04. Methods: VMAT plans were created using the Monaco 5.00.04 planning system for 10 head-neck patients. Four plans were generated for each patient using the available smoothing parameters, i.e. high, medium, low and off. The number of monitor units required to deliver 1 cGy was defined as the modulation degree and was taken as a measure of plan complexity. The routinely used plan quality parameters conformity index (CI) and homogeneity index (HI) were used in the study. As a protocol, our center practices “medium” smoothing for clinical implementation. Plans with medium smoothing were taken as reference plans due to their clinical acceptance and the dosimetric verifications made on them. Plans were generated by varying the smoothing parameter and re-optimizing. The PTV was evaluated for D98%, D95%, D50%, D1% and prescription isodose volume (PIV). For the critical organs, spine and parotids, the parameters recorded were D1cc and Dmean, respectively. Results: The cohort had a median prescription of 6000 cGy (range 4500-6600 cGy). The modulation degree was observed to increase by up to 6% from the reference to the most complex plan. High smoothing had about an 11% increase in segments, which marginally (0.5 to 1%) increased the homogeneity index while the conformity index remained constant. For the spine, the maximum D1cc was observed with medium smoothing as 4639.8 cGy; this plan was clinically accepted and dosimetrically verified. Similarly, for the parotids, the Dmean was 2011.9 cGy and 1817.05 cGy. Conclusion: Varying the smoothing options (high, medium, low and off) available in Monaco 5.00.04 resulted in minimal differences in target coverage, conformity index and homogeneity index. Similarly, changing the smoothing did not result in any enhanced advantage in sparing of critical organs.

  17. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    PubMed

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  18. Smoothness of In vivo Spectral Baseline Determined by Mean Squared Error

    PubMed Central

    Zhang, Yan; Shen, Jun

    2013-01-01

    Purpose A nonparametric smooth line is usually added to the spectral model to account for background signals in in vivo magnetic resonance spectroscopy (MRS). The assumed smoothness of the baseline significantly influences quantitative spectral fitting. In this paper, a method is proposed to minimize baseline influences on estimated spectral parameters. Methods The non-parametric baseline function with a given smoothness was treated as a function of the spectral parameters. Its uncertainty was measured by the root-mean-squared error (RMSE). The proposed method was demonstrated with a simulated spectrum and in vivo spectra of both short echo time (TE) and averaged echo times. The estimated in vivo baselines were compared with the metabolite-nulled spectra and the LCModel-estimated baselines. The accuracies of the estimated baseline and metabolite concentrations were further verified by cross-validation. Results An optimal smoothness condition was found that led to the minimal baseline RMSE. In this condition, the best fit was balanced against minimal baseline influences on metabolite concentration estimates. Conclusion The baseline RMSE can be used to indicate estimated baseline uncertainties and serve as the criterion for determining the baseline smoothness of in vivo MRS. PMID:24259436

  19. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harold

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip limiting parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
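    A sketch of that enhancement chain, using a coarse grid search over the three parameters as a stand-in for the paper's interior-point optimization; the filter scale and all parameter grids are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import exposure, measure

def enhance(img, gauss_weight, clip_limit, block_size):
    img = img.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)
    # High-pass step: subtract a weighted Gaussian-smoothed copy.
    hp = np.clip(img - gauss_weight * gaussian_filter(img, sigma=8), 0, None)
    hp /= hp.max() + 1e-12
    # CLAHE step.
    return exposure.equalize_adapthist(hp, kernel_size=block_size,
                                       clip_limit=clip_limit)

def best_enhancement(img):
    # Pick the parameter triple maximizing the entropy of the result.
    params = [(gw, cl, bs) for gw in (0.5, 0.7, 0.9)
              for cl in (0.01, 0.02, 0.05) for bs in (32, 64, 128)]
    best = max(params,
               key=lambda p: measure.shannon_entropy(enhance(img, *p)))
    return enhance(img, *best), best
```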

  20. Modelling and optimization of a wellhead gas flowmeter using concentric pipes

    NASA Astrophysics Data System (ADS)

    Nec, Yana; Huculak, Greg

    2017-09-01

    A novel configuration of a landfill wellhead was analysed to measure the flow rate of gas extracted from sanitary landfills. The device provides access points for pressure measurement integral to flow rate computation similarly to orifice and Venturi meters, and has the advantage of eliminating the problem of water condensation often impairing the accuracy thereof. It is proved that the proposed configuration entails comparable computational complexity and negligible sensitivity to geometric parameters. Calibration for the new device was attained using a custom optimization procedure, operating on a quadri-dimensional parameter surface evincing discontinuity and non-smoothness.

  21. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and controls the narrowing of parameter variance, which results in filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data recursively into the model and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from inputs, outputs and parameters. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partitioned eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm in evaluating and developing ecosystem models and in improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
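    The kernel-smoothing step for the parameter ensemble can be sketched as below. Liu-West-style shrinkage is assumed here as a stand-in for the paper's kernel smoothing algorithm; delta plays the role of the tunable smoothing factor mentioned above.

```python
import numpy as np

def kernel_smooth_params(theta, delta=0.98, rng=None):
    """theta: (n_ensemble, n_params) array of parameter samples, n_params >= 2."""
    rng = rng or np.random.default_rng()
    a = (3.0 * delta - 1.0) / (2.0 * delta)    # Liu-West shrinkage factor
    mean = theta.mean(axis=0)
    cov = np.cov(theta, rowvar=False)
    h2 = 1.0 - a ** 2                          # kernel variance scale
    center = a * theta + (1.0 - a) * mean      # shrink toward ensemble mean
    noise = rng.multivariate_normal(np.zeros(theta.shape[1]), h2 * cov,
                                    size=theta.shape[0])
    return center + noise                      # ensemble variance ~ preserved
```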

  22. Using the LMS method to calculate z-scores for the Fenton preterm infant growth chart.

    PubMed

    Fenton, T R; Sauve, R S

    2007-12-01

    The use of exact percentiles and z-scores permits optimal assessment of infants' growth. In addition, z-scores allow the precise description of size outside of the 3rd and 97th percentiles of a growth reference. To calculate percentiles and z-scores, health professionals require the LMS parameters (Lambda for the skew, Mu for the median, and Sigma for the generalized coefficient of variation; Cole, 1990). The objective of this study was to calculate the LMS parameters for the Fenton preterm growth chart (2003). Secondary data analysis of the Fenton preterm growth chart data was performed. The Cole methods were used to produce the LMS parameters and to smooth the L parameter. New percentiles were generated from the smoothed LMS parameters, which were then compared with the original growth chart percentiles. The maximum differences between the original percentile curves and the percentile curves generated from the LMS parameters were: for weight, a difference of 66 g (2.9%) at 32 weeks along the 90th percentile; for head circumference, differences of 0.3 cm (0.6-1.0%); and for length, a difference of 0.5 cm (1.6%) at 22 weeks on the 97th percentile. The percentile curves generated from the smoothed LMS parameters for the Fenton growth chart are similar to the original curves. These LMS parameters for the Fenton preterm growth chart facilitate the calculation of z-scores, which will permit the more precise assessment of growth of infants who are born preterm.
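    The LMS transformation itself (Cole, 1990) is compact: given the L, M and S values at the infant's gestational age, a measurement converts to a z-score as follows. The example values are hypothetical, not taken from the Fenton chart.

```python
import math

def lms_zscore(x, L, M, S):
    """z-score of measurement x given LMS values at the same age (Cole, 1990)."""
    if abs(L) > 1e-12:
        return ((x / M) ** L - 1.0) / (L * S)
    return math.log(x / M) / S              # limiting case L -> 0

# Hypothetical values, not from the Fenton chart:
# lms_zscore(x=1800.0, L=1.0, M=1700.0, S=0.2)  ->  about +0.29
```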

  23. [Vis-NIR spectroscopic pattern recognition combined with SG smoothing applied to breed screening of transgenic sugarcane].

    PubMed

    Liu, Gui-Song; Guo, Hao-Song; Pan, Tao; Wang, Ji-Hua; Cao, Gan

    2014-10-01

    Based on Savitzky-Golay (SG) smoothing screening, principal component analysis (PCA) combined with supervised linear discriminant analysis (LDA) and, separately, unsupervised hierarchical clustering analysis (HCA) were used for non-destructive visible and near-infrared (Vis-NIR) detection for breed screening of transgenic sugarcane. A random and stability-dependent framework of calibration, prediction, and validation was proposed. A total of 456 samples of sugarcane leaves at the elongating stage were collected from the field, composed of 306 transgenic (positive) samples containing the Bt and Bar genes and 150 non-transgenic (negative) samples. A total of 156 samples (50 negative and 106 positive) were randomly selected as the validation set; the remaining samples (100 negative and 200 positive, 300 samples in total) were used as the modeling set, and the modeling set was then subdivided into calibration (50 negative and 100 positive, 150 samples in total) and prediction sets (50 negative and 100 positive, 150 samples in total) 50 times. The number of SG smoothing points was expanded, while some higher-derivative modes were removed because of their small absolute values, and a total of 264 smoothing modes were used for screening. The pairwise combinations of the first three principal components were used, and the optimal combination of principal components was selected according to the model effect. Based on all divisions of calibration and prediction sets and all SG smoothing modes, the SG-PCA-LDA and SG-PCA-HCA models were established, and the model parameters were optimized based on the average prediction effect over all divisions to ensure modeling stability. Finally, model validation was performed on the validation set. With SG smoothing, the modeling accuracy and stability of PCA-LDA and PCA-HCA were significantly improved. For the optimal SG-PCA-LDA model, the recognition rates of positive and negative validation samples were 94.3% and 96.0%; for the optimal SG-PCA-HCA model, they were 92.5% and 98.0%, respectively. Vis-NIR spectroscopic pattern recognition combined with SG smoothing could be used for accurate recognition of transgenic sugarcane leaves, and provides a convenient screening method for transgenic sugarcane breeding.
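    A sketch of the SG-PCA-LDA chain with standard scientific-Python tools; the window length, polynomial order, and component count are stand-ins for the values screened in the paper.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def sg_pca_lda(X_train, y_train, X_test, window=11, polyorder=2, n_pc=3):
    """X_*: (n_samples, n_wavelengths) Vis-NIR spectra; y_train: 0/1 labels."""
    Xs_train = savgol_filter(X_train, window, polyorder, axis=1)  # SG smoothing
    Xs_test = savgol_filter(X_test, window, polyorder, axis=1)
    pca = PCA(n_components=n_pc).fit(Xs_train)                    # compression
    lda = LinearDiscriminantAnalysis().fit(pca.transform(Xs_train), y_train)
    return lda.predict(pca.transform(Xs_test))                    # discriminant
```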

  24. Exploring functional data analysis and wavelet principal component analysis on ecstasy (MDMA) wastewater data.

    PubMed

    Salvatore, Stefania; Bramness, Jørgen G; Røislien, Jo

    2016-07-12

    Wastewater-based epidemiology (WBE) is a novel approach in drug-use epidemiology that aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA), which is more flexible temporally. We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted with FPCA using both Fourier and B-spline basis functions with three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6% of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using a Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall, the results suggest FPCA with Fourier basis functions and a common-optimal smoothing parameter as the most accurate approach when analysing WBE data.

  25. Guaranteed convergence of the Hough transform

    NASA Astrophysics Data System (ADS)

    Soffer, Menashe; Kiryati, Nahum

    1995-01-01

    The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into a problem of finding the global maximum of a two-dimensional function above a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function above a bounded domain cannot be found by a finite number of function evaluations. Only if sufficient a priori knowledge about the smoothness of the objective function is available can convergence to the global maximum be guaranteed. The extraction of a priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine the parameter space quantization should be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper, an attempt is made to identify a priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application-dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, convergence guarantees are probabilistic.

  26. Optimized theory for simple and molecular fluids.

    PubMed

    Marucho, M; Montgomery Pettitt, B

    2007-03-28

    An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between Percus-Yevick and hypernetted-chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in this proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential. This is shown to be an efficient tool to estimate, from first principles, the numerical value of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation functions in excellent agreement with simulation data.

  27. An explicit scheme for ohmic dissipation with smoothed particle magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Yusuke; Iwasaki, Kazunari; Inutsuka, Shu-ichiro

    2013-09-01

    In this paper, we present an explicit scheme for Ohmic dissipation with smoothed particle magnetohydrodynamics (SPMHD). We propose an SPH discretization of Ohmic dissipation and solve the Ohmic dissipation part of the induction equation with the super-time-stepping method (STS), which allows us to take a longer time step than the Courant-Friedrichs-Lewy stability condition. Our scheme is second-order accurate in space and first-order accurate in time. Our numerical experiments show that the optimal choice of the STS parameters for Ohmic dissipation in SPMHD is ν_sts ≈ 0.01 and N_sts ≈ 5.

  28. One shot methods for optimal control of distributed parameter systems 1: Finite dimensional control

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1991-01-01

    The efficient numerical treatment of optimal control problems governed by elliptic partial differential equations (PDEs) and systems of elliptic PDEs, where the control is finite dimensional, is discussed. Distributed control as well as boundary control cases are discussed. The main characteristic of the new methods is that they are designed to solve the full optimization problem directly, rather than accelerating a descent method by an efficient multigrid solver for the equations involved. The methods use the adjoint state in order to achieve an efficient smoother and a robust coarsening strategy. The main idea is the treatment of the control variables on appropriate scales, i.e., control variables that correspond to smooth functions are solved for on coarse grids, depending on the smoothness of these functions. Solution of the control problems is achieved at the cost of solving the constraint equations about two to three times (by a multigrid solver). Numerical examples demonstrate the effectiveness of the proposed method in distributed control, pointwise control, and boundary control problems.

  29. Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space

    DTIC Science & Technology

    2015-05-01

    ARL-TR-7294, May 2015, US Army Research Laboratory: Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space, by Berend Christopher...

  30. Exponential smoothing weighted correlations

    NASA Astrophysics Data System (ADS)

    Pozzi, F.; Di Matteo, T.; Aste, T.

    2012-06-01

    In many practical applications, correlation matrices might be affected by the "curse of dimensionality" and by excessive sensitivity to outliers and remote observations. These shortcomings can cause problems of statistical robustness, especially accentuated when a system of dynamic correlations over a running window is concerned. These drawbacks can be partially mitigated by assigning a structure of weights to observational events. In this paper, we discuss Pearson's ρ and Kendall's τ correlation matrices, weighted with an exponential smoothing, computed on moving windows using a data set of daily returns for 300 NYSE highly capitalized companies in the period between 2001 and 2003. Criteria for jointly determining optimal weights together with the optimal length of the running window are proposed. We find that the exponential smoothing can provide more robust and reliable dynamic measures, and we show that a careful choice of the parameters can reduce the autocorrelation of dynamic correlations whilst keeping the significance and robustness of the measure. Weighted correlations are found to be smoother and to recover faster from market turbulence than their unweighted counterparts, helping also to discriminate more effectively genuine from spurious correlations.
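    A sketch of the weighted-correlation computation: exponentially decaying weights over a running window, plugged into the weighted-moment form of Pearson's ρ. The decay constant theta is a free parameter to be chosen jointly with the window length, as the abstract discusses.

```python
import numpy as np

def exp_weights(window, theta):
    ages = np.arange(window)[::-1]          # age 0 = most recent observation
    w = np.exp(-ages / theta)
    return w / w.sum()

def weighted_corr(x, y, w):
    mx, my = np.sum(w * x), np.sum(w * y)   # weighted means
    cov = np.sum(w * (x - mx) * (y - my))
    vx = np.sum(w * (x - mx) ** 2)
    vy = np.sum(w * (y - my) ** 2)
    return cov / np.sqrt(vx * vy)

# For a T x N matrix of daily returns, apply weighted_corr to every pair of
# columns inside each running window to build the dynamic correlation matrix.
```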

  31. A Particle Smoother with Sequential Importance Resampling for soil hydraulic parameter estimation: A lysimeter experiment

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry

    2013-04-01

    An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single-time-step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization, with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques, with estimation of parameter sets evolving from one time step to another. The aims are (i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and (ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real-world application, the experiment is conducted in a lysimeter environment.
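    For reference, the resampling step at the core of SIR is short; a systematic (low-variance) resampler is sketched below. The smoother variant described above applies such weights over a window of time steps rather than a single one.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Indices of resampled particles (low-variance/systematic resampling)."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one stratified draw
    cumsum = np.cumsum(weights / np.sum(weights))
    return np.searchsorted(cumsum, positions)

rng = np.random.default_rng(1)
w = np.array([0.1, 0.6, 0.2, 0.1])
idx = systematic_resample(w, rng)   # heavy particles appear more often
```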

  32. Smoothing optimization of supporting quadratic surfaces with Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Zhang, Hang; Lu, Jiandong; Liu, Rui; Ma, Peifu

    2018-03-01

    A new optimization method to obtain a smooth freeform optical surface from an initial surface generated by the supporting quadratic method (SQM) is proposed. To smooth the initial surface, a 9-vertex system from the neighboring quadratic surfaces and the Zernike polynomials are employed to establish a linear equation system. A locally optimized surface for the 9-vertex system can be built by solving the equations. Finally, a continuous smooth optimized surface is constructed by stitching the above algorithm over the whole initial surface. The spot corresponding to the optimized surface is no longer a set of discrete pixels but a continuous distribution.

  33. Magnetorheological elastic super-smooth finishing for high-efficiency manufacturing of ultraviolet laser resistant optics

    NASA Astrophysics Data System (ADS)

    Shi, Feng; Shu, Yong; Dai, Yifan; Peng, Xiaoqiang; Li, Shengyi

    2013-07-01

    Based on the elastic-plastic deformation theory, the status between abrasives and workpiece in the magnetorheological finishing (MRF) process and the feasibility of elastic polishing are analyzed. The relationships among the material removal mechanism, particle force, removal efficiency, and surface topography are revealed through a set of experiments. Chemically dominant elastic super-smooth polishing can be achieved by changing the components of the magnetorheological (MR) fluid and optimizing the polishing parameters. The MR elastic super-smooth finishing technology can be applied to polishing high-power laser-irradiated components with high efficiency, high accuracy, low damage, and a high laser-induced damage threshold (LIDT). A 430×430×10 mm fused silica (FS) optic window is polished and the surface error is improved from 538.241 nm [peak to valley (PV)] and 96.376 nm (rms) to 76.372 nm (PV) and 8.295 nm (rms) after 51.6 h rough polishing, 42.6 h fine polishing, and 54.6 h super-smooth polishing. A 50×50×10 mm sample is polished with exactly the same parameters. The roughness is improved from 1.793 nm [roughness average (Ra)] to 0.167 nm (Ra) and the LIDT is improved from 9.77 to 19.2 J/cm2 after MRF elastic polishing.

  34. 350 nm Broadband Supercontinuum Generation Using Dispersion Engineered Near Zero Ultraflat Square-Lattice PCF around 1.55 μm and Fabrication Tolerance Analysis

    PubMed Central

    Roy Chaudhuri, Partha

    2014-01-01

    In this work, a new design of an ultraflat-dispersion PCF based on square-lattice geometry with all-uniform air holes for broadband smooth SCG around the C-band of wavelengths has been presented. The air holes of the inner ring were infiltrated with liquids of certain refractive indices. Numerical investigations establish a near-zero ultraflattened dispersion of 0 ± 0.78 ps/nm/km in a wavelength range of 1496 nm to 2174 nm (678 nm bandwidth), covering most of the communications bands, with the first zero-dispersion wavelength around 1.54 μm. With the optimized ultraflattened fiber, we have achieved a broadband SC spectrum with an FWHM of 350 nm centered at 1550 nm using less than a meter of the fiber and a picosecond pulse laser. We have also analyzed the sensitivity of the optimized dispersion design to small variations from the optimum values of the geometrical structural parameters. Our investigations establish that for a negative change of the PCF parameters, the profile retains the smooth and flat SCG spectrum; however, for a positive change, the smooth and flat spectrum is lost. The new design of the fiber will be capable of covering a diverse range of applications: DWDM sources, spectroscopy, metrology, optical coherence tomography, and optical sensing. PMID:27355018

  35. A General Multidisciplinary Turbomachinery Design Optimization system Applied to a Transonic Fan

    NASA Astrophysics Data System (ADS)

    Nemnem, Ahmed Mohamed Farid

    The blade geometry design process is integral to the development and advancement of compressors and turbines in gas generators or aeroengines. A new airfoil section design capability has been added to an open-source parametric 3D blade design tool. Curvature of the meanline is controlled using B-splines to create the airfoils. The curvature is analytically integrated to derive the angles, and the meanline is obtained by integrating the angles. A smooth thickness distribution is then added to the airfoil to guarantee a smooth shape while maintaining a prescribed thickness distribution. A leading edge B-spline definition has also been implemented to achieve customized airfoil leading edges that guarantee smoothness with parametric eccentricity and droop. An automated turbomachinery design and optimization system has been created. An existing splittered transonic fan is used as a test and reference case. This design is more general than a conventional one, giving access to alternative design methodologies. The whole mechanical and aerodynamic design loops are automated for the optimization process. The flow path and the geometrical properties of the rotor are initially created using the axi-symmetric design and analysis code (T-AXI). The main and splitter blades are parametrically designed with the created geometry builder (3DBGB) using the newly added features (curvature technique). The solid model creation of the rotor sector with periodic boundaries, combining the main blade and splitter, is done using MATLAB code directly connected to SolidWorks, including the hub, fillets and tip clearance. A mechanical optimization is performed with DAKOTA (developed by DOE) to reduce the mass of the blades while keeping maximum stress as a constraint with a safety factor. A genetic algorithm followed by a numerical-gradient strategy is used in the mechanical optimization. The splittered transonic fan blade mass is reduced by 2.6% while constraining the maximum stress below 50% of the material yield strength, using 2D section thickness and chord multipliers. Once the initial design was mechanically optimized, a CFD optimization was performed to maximize efficiency and/or stall margin. The CFD grid generator (AUTOGRID) reads 3DBGB output and accounts for hub fillets and tip gaps. Single- and multi-objective genetic algorithm (SOGA, MOGA) optimizations have been used with the CFD analysis system. In SOGA optimization, efficiency was increased by 3.525% from 78.364% to 81.889% while only changing 4 design parameters. For MOGA optimization with a higher weighting on efficiency than on stall margin, the efficiency was increased by 2.651% from 78.364% to 81.015%, while the static pressure recovery factor was increased from 0.37407 to 0.4812286, which consequently increases the stall margin. The design process starts with a hot-shape design; once the optimization ends, a hot-to-cold transformation is applied, which smoothly subtracts the mechanical deflections from the hot shape. This transformation ensures an accurate tip clearance. The optimization modules can be customized by the user as one full optimization or multiple small ones. This allows the designer to remain in the design loop, which helps in making the right choice of parameters for the optimization and the final feasible design.

  16. Accuracy of the weighted essentially non-oscillatory conservative finite difference schemes

    NASA Astrophysics Data System (ADS)

    Don, Wai-Sun; Borges, Rafael

    2013-10-01

    In the reconstruction step of (2r-1) order weighted essentially non-oscillatory conservative finite difference schemes (WENO) for solving hyperbolic conservation laws, nonlinear weights αk and ωk, such as the WENO-JS weights by Jiang et al. and the WENO-Z weights by Borges et al., are designed to recover the formal (2r-1) order (optimal order) of the upwinded central finite difference scheme when the solution is sufficiently smooth. The smoothness of the solution is determined by the lower order local smoothness indicators βk in each substencil. These nonlinear weight formulations share two important free parameters: the power p, which controls the amount of numerical dissipation, and the sensitivity ε, which is added to βk to avoid division by zero in the denominator of αk. However, ε also affects the order of accuracy of WENO schemes, especially in the presence of critical points. It was recently shown that, for any design order (2r-1), ε should be of Ω(Δx²) (Ω(Δxᵐ) means that ε ⩾ CΔxᵐ for some C independent of Δx, as Δx → 0) for the WENO-JS scheme to achieve the optimal order, regardless of critical points. In this paper, we derive an alternative proof of the sufficient condition using special properties of βk. Moreover, it was unknown whether the WENO-Z scheme should obey the same condition on ε. Here, using the same special properties of βk, we prove that the optimal order of the WENO-Z scheme can in fact be guaranteed under a much weaker condition ε = Ω(Δxᵐ), where m(r,p) ⩾ 2 is the optimal sensitivity order, regardless of critical points. Both theoretical results are confirmed numerically on smooth functions with critical points of arbitrary order. This is a highly desirable feature, as illustrated with the Lax problem and the Mach 3 shock-density wave interaction of the one-dimensional Euler equations, for a smaller ε allows better essentially non-oscillatory shock capturing, as it does not dominate the size of βk. We also show that numerical oscillations can be further attenuated by increasing the power parameter 2 ⩽ p ⩽ r-1, at the cost of increased numerical dissipation. Compact formulas of βk for WENO schemes are also presented.
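
    As a concrete illustration of these weight formulations (a sketch, not the authors' code), the fifth-order (r = 3) WENO-JS and WENO-Z weights can be computed from the smoothness indicators βk as follows; the power p and sensitivity eps are the two free parameters discussed above, and the 5-point stencil layout is the standard one.

        import numpy as np

        def weno5_weights(f, p=2, eps=1e-12, variant="Z"):
            """Nonlinear weights omega_k of fifth-order WENO on the stencil
            f = [f_{i-2}, f_{i-1}, f_i, f_{i+1}, f_{i+2}] (sketch)."""
            # Jiang-Shu lower-order smoothness indicators beta_k per substencil
            b0 = 13/12*(f[0] - 2*f[1] + f[2])**2 + 0.25*(f[0] - 4*f[1] + 3*f[2])**2
            b1 = 13/12*(f[1] - 2*f[2] + f[3])**2 + 0.25*(f[1] - f[3])**2
            b2 = 13/12*(f[2] - 2*f[3] + f[4])**2 + 0.25*(3*f[2] - 4*f[3] + f[4])**2
            beta = np.array([b0, b1, b2])
            d = np.array([0.1, 0.6, 0.3])            # optimal (linear) weights
            if variant == "JS":
                alpha = d / (eps + beta)**p          # WENO-JS: d_k / (eps + beta_k)^p
            else:
                tau5 = abs(b0 - b2)                  # WENO-Z global smoothness indicator
                alpha = d * (1.0 + (tau5 / (beta + eps))**p)
            return alpha / alpha.sum()               # omega_k, summing to one

    On smooth data the returned weights approach the optimal values d, recovering the fifth-order upwinded central scheme; near a discontinuity the weight of the offending substencil collapses toward zero.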

  17. Determining the Optimal Values of Exponential Smoothing Constants--Does Solver Really Work?

    ERIC Educational Resources Information Center

    Ravinder, Handanhal V.

    2013-01-01

    A key issue in exponential smoothing is the choice of the values of the smoothing constants used. One approach that is becoming increasingly popular in introductory management science and operations management textbooks is the use of Solver, an Excel-based non-linear optimizer, to identify values of the smoothing constants that minimize a measure…
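
    The same exercise can be reproduced outside Excel. As a hedged sketch, a generic bounded optimizer can stand in for Solver, minimizing the sum of squared one-step-ahead forecast errors over the smoothing constant α (the toy series and function names are illustrative):

        import numpy as np
        from scipy.optimize import minimize_scalar

        def sse(alpha, y):
            """Sum of squared one-step-ahead errors of simple exponential smoothing."""
            level, err = y[0], 0.0
            for obs in y[1:]:
                err += (obs - level)**2                  # forecast = current level
                level = alpha*obs + (1 - alpha)*level    # smoothing update
            return err

        y = np.array([28., 27., 33., 25., 34., 33., 35., 30., 33., 35.])  # toy series
        best = minimize_scalar(sse, bounds=(0.0, 1.0), args=(y,), method="bounded")
        print("optimal alpha:", round(best.x, 4), "SSE:", round(best.fun, 2))

    Because the error surface can be flat or multimodal, a bounded search of this kind tends to be more reliable than an unconstrained Newton-type step, which is precisely the kind of concern the article raises about Solver.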

  18. Sensitivity analysis of an optimization-based trajectory planner for autonomous vehicles in urban environments

    NASA Astrophysics Data System (ADS)

    Hardy, Jason; Campbell, Mark; Miller, Isaac; Schimpf, Brian

    2008-10-01

    The local path planner implemented on Cornell's 2007 DARPA Urban Challenge entry vehicle, Skynet, utilizes a novel mixture of discrete and continuous path planning steps to facilitate safe, smooth, and human-like driving behavior. The planner first solves for a feasible path through the local obstacle map using a grid-based search algorithm. The resulting path is then refined using a cost-based nonlinear optimization routine with both hard and soft constraints. The behavior of this optimization is influenced by tunable weighting parameters which govern the relative cost contributions assigned to different path characteristics. This paper studies the sensitivity of the vehicle's performance to these path planner weighting parameters using a data-driven simulation based on logged data from the National Qualifying Event. The performance of the path planner in both the National Qualifying Event and the Urban Challenge is also presented and analyzed.

  19. Controlling Morphological Parameters of Anodized Titania Nanotubes for Optimized Solar Energy Applications

    PubMed Central

    Haring, Andrew; Morris, Amanda; Hu, Michael

    2012-01-01

    Anodized TiO2 nanotubes have received much attention for their use in solar energy applications, including water oxidation cells and hybrid solar cells [dye-sensitized solar cells (DSSCs) and bulk heterojunction solar cells (BHJs)]. High surface area allows for increased dye adsorption and photon absorption. Titania nanotubes grown by anodization of titanium in fluoride-containing electrolytes are aligned perpendicular to the substrate surface, reducing the electron diffusion path to the external circuit in solar cells. The nanotube morphology can be optimized for the various applications by adjusting the anodization parameters, but the optimum crystallinity of the nanotube arrays remains to be realized. In addition to morphology and crystallinity, the method of device fabrication significantly affects photon and electron dynamics and the energy conversion efficiency. This paper provides the state-of-the-art knowledge to achieve experimental tailoring of morphological parameters including nanotube diameter, length, wall thickness, array surface smoothness, and annealing of nanotube arrays.

  20. Robust Smoothing: Smoothing Parameter Selection and Applications to Fluorescence Spectroscopy

    PubMed Central

    Lee, Jong Soo; Cox, Dennis D.

    2009-01-01

    Fluorescence spectroscopy has emerged in recent years as an effective way to detect cervical cancer. Investigation of the data preprocessing stage uncovered a need for robust smoothing to extract the signal from the noise. Various robust smoothing methods for estimating fluorescence emission spectra are compared, and data-driven methods for the selection of the smoothing parameter are suggested. The methods currently implemented in R for smoothing parameter selection proved unsatisfactory, and a computationally efficient procedure that approximates robust leave-one-out cross validation is presented. PMID:20729976

  1. Identification of optimal mask size parameter for noise filtering in 99mTc-methylene diphosphonate bone scintigraphy images.

    PubMed

    Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-11-01

    99mTc-methylene diphosphonate (99mTc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise filtering method based on the local statistics of the image produces better results than a linear filter; however, the mask size has a significant effect on image quality. In this study, we identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected, and the mask sizes that produced images with significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set comprising the input image and its corresponding three processed images with the 3, 5, and 7-pixel masks) was assessed by two nuclear medicine physicians, who selected one good smooth image from each set. Image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to test for statistically significant differences between images processed with the 5- and 7-pixel masks at a 5% cut-off. A statistically significant difference was found (P=0.00528). The identified optimal mask size to produce a good smooth image was 7 pixels: the best mask size for the John-Sen Lee filter was found to be 7×7 pixels, which yielded 99mTc-MDP bone scan images with the highest acceptable smoothness.
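
    The local-statistics filter in question is commonly known as the Lee filter. As a hedged sketch (not the authors' implementation), its adaptive smoothing with a square mask can be written as below, where mask_size plays the role of the 3-15 pixel masks compared in the study:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lee_filter(img, mask_size=7, noise_var=None):
            """Local-statistics (Lee) adaptive filter over a square mask (sketch)."""
            img = img.astype(float)
            mean = uniform_filter(img, mask_size)           # local mean
            sq_mean = uniform_filter(img**2, mask_size)
            var = np.maximum(sq_mean - mean**2, 0.0)        # local variance
            if noise_var is None:
                noise_var = np.mean(var)                    # crude global noise estimate
            # Gain -> 0 in flat (noise-dominated) regions, -> 1 near edges and detail
            k = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
            return mean + k * (img - mean)

    Larger masks average over more pixels in flat regions (stronger smoothing) at the risk of losing small clinical detail, which is the trade-off the mask-size comparison above quantifies.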

  2. Mutually beneficial relationship in optimization between search-space smoothing and stochastic search

    NASA Astrophysics Data System (ADS)

    Hasegawa, Manabu; Hiramatsu, Kotaro

    2013-10-01

    The effectiveness of the Metropolis algorithm (MA) (constant-temperature simulated annealing) in optimization by the method of search-space smoothing (SSS) (potential smoothing) is studied on two types of random traveling salesman problems. The optimization mechanism of this hybrid approach (MASSS) is investigated by analyzing the exploration dynamics observed in the rugged landscape of the cost function (energy surface). The results show that the MA can be successfully utilized as a local search algorithm in the SSS approach. It is also clarified that the optimization characteristics of these two constituent methods are improved in a mutually beneficial manner in the MASSS run. Specifically, the relaxation dynamics generated by employing the MA work effectively even in a smoothed landscape, and fuller advantage is taken of the guiding function underlying the SSS idea; this mechanism operates in an adaptive manner in the de-smoothing process, and therefore the MASSS method maintains its optimization function over a wider temperature range than the MA.
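
    For concreteness, a hedged sketch of the two ingredients for the TSP case: a Gu-Huang-style smoothing transform of a normalized distance matrix, which is one standard reading of "potential smoothing" (the paper's exact transform may differ), and a constant-temperature Metropolis 2-opt step run on the smoothed landscape.

        import numpy as np

        def smooth_distances(D, alpha):
            """Search-space smoothing of a TSP distance matrix normalized to [0, 1].
            alpha = 1 recovers the original landscape; larger alpha flattens it."""
            dbar = D.mean()
            return np.where(D >= dbar, dbar + np.abs(D - dbar)**alpha,
                                       dbar - np.abs(dbar - D)**alpha)

        def metropolis_2opt(tour, D, T, rng):
            """One constant-temperature Metropolis step with a 2-opt move."""
            tour_len = lambda t: D[t, np.roll(t, -1)].sum()
            i, j = sorted(rng.choice(len(tour), 2, replace=False))
            cand = np.concatenate([tour[:i], tour[i:j+1][::-1], tour[j+1:]])
            dE = tour_len(cand) - tour_len(tour)
            return cand if dE <= 0 or rng.random() < np.exp(-dE / T) else tour

    De-smoothing then amounts to running the Metropolis sampler on smooth_distances(D, alpha) while alpha is gradually reduced toward 1.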

  3. Mitigating Short-Term Variations of Photovoltaic Generation Using Energy Storage with VOLTTRON

    NASA Astrophysics Data System (ADS)

    Morrissey, Kevin

    A smart-building communications system performs smoothing on photovoltaic (PV) power generation using a battery energy storage system (BESS). The system runs using VOLTTRON(TM), a multi-agent python-based software platform dedicated to power systems. The VOLTTRON(TM) system designed for this project runs synergistically with the larger University of Washington VOLTTRON(TM) environment, which is designed to operate UW device communications and databases as well as to perform real-time operations for research. One such research algorithm that operates simultaneously with this PV Smoothing System is an energy cost optimization system which optimizes net demand and associated cost throughout a day using the BESS. The PV Smoothing System features an active low-pass filter with an adaptable time constant, as well as adjustable limitations on the output power and accumulated battery energy of the BESS contribution. The system was analyzed using 26 days of PV generation at 1-second resolution. PV smoothing was studied with unconstrained BESS contribution as well as under a broad range of BESS constraints analogous to variable-sized storage. It was determined that a large inverter output power was more important for PV smoothing than a large battery energy capacity. Two methods of selecting the time constant in real time, static and adaptive, are studied for their impact on system performance. It was found that both systems provide a high level of PV smoothing performance, within 8% of the ideal case where the best time constant is known ahead of time. The system was run in real time using VOLTTRON(TM) with BESS limitations of 5 kW/6.5 kWh and an adaptive update period of 7 days. The system behaved as expected given the BESS parameters and time constant selection methods, providing smoothing on the PV generation and updating the time constant periodically using the adaptive time constant selection method.
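
    A hedged sketch of the core mechanism (not the project's VOLTTRON(TM) agent code): a discrete first-order low-pass filter with time constant tau, with the BESS supplying the difference between raw and smoothed PV subject to the inverter power and energy limits mentioned above; all parameter values are illustrative.

        import numpy as np

        def smooth_pv(pv, dt=1.0, tau=300.0, p_max=5.0, e_kwh=6.5):
            """Low-pass PV smoothing with a power/energy-limited BESS (sketch).
            pv: kW at dt-second resolution; tau: filter time constant in seconds."""
            a = dt / (tau + dt)                       # discrete first-order filter gain
            e_max = e_kwh * 3600.0                    # battery capacity in kW*s
            soc = 0.5 * e_max                         # start half full
            y, out = pv[0], np.empty(len(pv))
            for i, p in enumerate(pv):
                y += a * (p - y)                      # filtered (target) power
                b = np.clip(y - p, -p_max, p_max)     # BESS power, positive = discharge
                b = min(b, soc / dt)                  # cannot discharge past empty
                b = max(b, (soc - e_max) / dt)        # cannot charge past full
                soc -= b * dt
                out[i] = p + b                        # smoothed power seen by the grid
            return out

    With an unconstrained BESS, out converges to the pure low-pass response; the adaptive variant studied here amounts to periodically re-selecting tau from recent data.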

  4. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

    The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However these methods are greedy, concerning the number of function evaluations required. Thus, the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as searched variables, the system geometry is modeled in terms of truncated series of orthogonal space-funcitons, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique reduces considerably the number of searched variables, and has a potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.

  5. Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis

    NASA Astrophysics Data System (ADS)

    Sakata, Ayaka; Xu, Yingying

    2018-03-01

    We analyse a linear regression problem with a nonconvex regularization called smoothly clipped absolute deviation (SCAD) under an overcomplete Gaussian basis for Gaussian random data. We propose an approximate message passing (AMP) algorithm for the nonconvex regularization, namely SCAD-AMP, and analytically show that the stability condition corresponds to the de Almeida-Thouless condition in the spin glass literature. Through asymptotic analysis, we show the correspondence between the density evolution of SCAD-AMP and the replica symmetric (RS) solution. Numerical experiments confirm that, for a sufficiently large system size, SCAD-AMP achieves the optimal performance predicted by the replica method. Through replica analysis, a phase transition between the replica symmetric and replica symmetry breaking (RSB) regions is found in the parameter space of SCAD. The appearance of an RS region for a nonconvex penalty is a significant advantage, as it indicates a region where the landscape of the optimization problem is smooth. Furthermore, we analytically show that the statistical representation performance of the SCAD penalty is better than that of the ℓ1 penalty.
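
    For orientation, the SCAD penalty itself, in the standard Fan-Li form with the usual requirement a > 2 (a sketch for reference; the paper's replica analysis goes well beyond evaluating the penalty):

        import numpy as np

        def scad_penalty(x, lam, a=3.7):
            """Smoothly clipped absolute deviation (SCAD) penalty, elementwise."""
            ax = np.abs(x)
            p1 = lam * ax                                        # |x| <= lam: l1-like
            p2 = (2*a*lam*ax - ax**2 - lam**2) / (2*(a - 1))     # transition region
            p3 = lam**2 * (a + 1) / 2                            # |x| > a*lam: constant
            return np.where(ax <= lam, p1, np.where(ax <= a*lam, p2, p3))

    The penalty is l1-like near the origin but levels off to a constant for large coefficients, which is what makes it nonconvex and nearly unbiased for strong signals.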

  6. Right Ventricular Enlargement and Renal Function Are Associated With Smooth Introduction of Adaptive Servo-Ventilation Therapy in Chronic Heart Failure Patients.

    PubMed

    Iwasaku, Toshihiro; Okuhara, Yoshitaka; Eguchi, Akiyo; Ando, Tomotaka; Naito, Yoshiro; Masuyama, Tohru; Hirotani, Shinichi

    2017-04-06

    Although adaptive servo-ventilation (ASV) therapy has beneficial effects on chronic heart failure (CHF), a relatively large number of CHF patients cannot undergo ASV therapy due to general discomfort from the mask and/or positive airway pressure. The present study aimed to clarify the baseline patient characteristics associated with the smooth introduction of ASV treatment in stable CHF inpatients. Thirty-two consecutive heart failure (HF) inpatients were enrolled (left ventricular ejection fraction (LVEF) < 45%, estimated glomerular filtration rate (eGFR) > 10 mL/minute/1.73 m², and apnea-hypopnea index < 30/hour). After the patients were clinically stabilized on optimal therapy, they underwent portable polysomnography and echocardiography, and then received ASV therapy. The patients were divided into two groups: a smooth introduction group (n = 18) and a non-smooth introduction group (n = 14). Smooth introduction of ASV treatment was defined as ASV usage for 4 hours or more on the first night. Univariate analysis showed that the smooth introduction group differed significantly from the non-smooth introduction group in age, hemoglobin level, eGFR, HF origin, LVEF, right ventricular (RV) diastolic dimension (RVDd), RV dp/dt, and RV fractional shortening. Multivariate analyses revealed that RVDd, eGFR, and LVEF were independently associated with smooth introduction. In addition, RVDd and eGFR appeared to be better diagnostic parameters for longer usage of ASV therapy according to receiver operating characteristic curve analysis. RV enlargement, eGFR, and LVEF are associated with the smooth introduction of ASV therapy in CHF inpatients.

  7. Simulated Annealing in the Variable Landscape

    NASA Astrophysics Data System (ADS)

    Hasegawa, Manabu; Kim, Chang Ju

    An experimental analysis is conducted to test whether the appropriate introduction of the smoothness-temperature schedule enhances the optimizing ability of the MASSS method, the combination of the Metropolis algorithm (MA) and the search-space smoothing (SSS) method. The test is performed on two types of random traveling salesman problems. The results show that the optimization performance of the MA is substantially improved by a single smoothing alone and slightly more by a single smoothing with cooling and by a de-smoothing process with heating. The performance is compared to that of the parallel tempering method and a clear advantage of the idea of smoothing is observed depending on the problem.

  8. Optimization of motion control laws for tether crawler or elevator systems

    NASA Technical Reports Server (NTRS)

    Swenson, Frank R.; Von Tiesenhausen, Georg

    1988-01-01

    Based on the proposal of a motion control law by Lorenzini (1987), a method is developed for optimizing motion control laws for tether crawler or elevator systems in terms of the performance measures of travel time, the smoothness of acceleration and deceleration, and the maximum values of velocity and acceleration. The Lorenzini motion control law, based on powers of the hyperbolic tangent function, is modified by the addition of a constant-velocity section, and this modified function is then optimized by parameter selections to minimize the peak acceleration value for a selected travel time or to minimize travel time for the selected peak values of velocity and acceleration. It is shown that the addition of a constant-velocity segment permits further optimization of the motion control law performance.
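
    A hedged illustration of the modified law (the exponent, ramp time, and plateau below are illustrative, and the exact Lorenzini formulation is not reproduced): a velocity profile built from powers of the hyperbolic tangent that ramps up smoothly, holds a constant-velocity plateau, and decelerates symmetrically.

        import numpy as np

        def velocity_profile(t, v_max=1.0, t1=10.0, t2=50.0, n=3):
            """Smooth accelerate / constant-velocity / decelerate motion law (sketch).
            t1: end of the acceleration ramp; t2 (> t1): start of deceleration.
            tanh(3) ~ 0.995, so the junctions match v_max to within a few percent."""
            t = np.asarray(t, dtype=float)
            ramp_up = v_max * np.tanh(3*t/t1)**n
            ramp_dn = v_max * np.tanh(3*np.maximum(t1 + t2 - t, 0)/t1)**n
            return np.where(t < t1, ramp_up, np.where(t < t2, v_max, ramp_dn))

    Peak acceleration falls as the ramp time t1 grows, while total travel time is roughly t1 + t2; trading these off for selected peak velocity and acceleration is the optimization described above.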

  9. Multi-step optimization strategy for fuel-optimal orbital transfer of low-thrust spacecraft

    NASA Astrophysics Data System (ADS)

    Rasotto, M.; Armellin, R.; Di Lizia, P.

    2016-03-01

    An effective method for the design of fuel-optimal transfers in two- and three-body dynamics is presented. The optimal control problem is formulated using calculus of variation and primer vector theory. This leads to a multi-point boundary value problem (MPBVP), characterized by complex inner constraints and a discontinuous thrust profile. The first issue is addressed by embedding the MPBVP in a parametric optimization problem, thus allowing a simplification of the set of transversality constraints. The second problem is solved by representing the discontinuous control function by a smooth function depending on a continuation parameter. The resulting trajectory optimization method can deal with different intermediate conditions, and no a priori knowledge of the control structure is required. Test cases in both the two- and three-body dynamics show the capability of the method in solving complex trajectory design problems.

  10. Lightweight filter architecture for energy efficient mobile vehicle localization based on a distributed acoustic sensor network.

    PubMed

    Kim, Keonwook

    2013-08-23

    The generic properties of an acoustic signal provide numerous benefits for localization by applying energy-based methods over a deployed wireless sensor network (WSN). However, the signal generated by a stationary target utilizes a significant amount of bandwidth and power in the system without providing further position information. For vehicle localization, this paper proposes a novel proximity velocity vector estimator (PVVE) node architecture in order to capture the energy from a moving vehicle and reject the signal from motionless automobiles around the WSN node. A cascade structure between analog envelope detector and digital exponential smoothing filter presents the velocity vector-sensitive output with low analog circuit and digital computation complexity. The optimal parameters in the exponential smoothing filter are obtained by analytical and mathematical methods for maximum variation over the vehicle speed. For stationary targets, the derived simulation based on the acoustic field parameters demonstrates that the system significantly reduces the communication requirements with low complexity and can be expected to extend the operation time considerably.

  11. Tuning support vector machines for minimax and Neyman-Pearson classification.

    PubMed

    Davenport, Mark A; Baraniuk, Richard G; Scott, Clayton D

    2010-10-01

    This paper studies the training of support vector machine (SVM) classifiers with respect to the minimax and Neyman-Pearson criteria. In principle, these criteria can be optimized in a straightforward way using a cost-sensitive SVM. In practice, however, because these criteria require especially accurate error estimation, standard techniques for tuning SVM parameters, such as cross-validation, can lead to poor classifier performance. To address this issue, we first prove that the usual cost-sensitive SVM, here called the 2C-SVM, is equivalent to another formulation called the 2nu-SVM. We then exploit a characterization of the 2nu-SVM parameter space to develop a simple yet powerful approach to error estimation based on smoothing. In an extensive experimental study, we demonstrate that smoothing significantly improves the accuracy of cross-validation error estimates, leading to dramatic performance gains. Furthermore, we propose coordinate descent strategies that offer significant gains in computational efficiency, with little to no loss in performance.

  12. Spectral analysis and markov switching model of Indonesia business cycle

    NASA Astrophysics Data System (ADS)

    Fajar, Muhammad; Darwis, Sutawanir; Darmawan, Gumgum

    2017-03-01

    This study investigates the Indonesian business cycle, encompassing the determination of the smoothing parameter (λ) of the Hodrick-Prescott filter. The cycle component of the filter output was then analyzed using a spectral method to characterize it, and a Markov switching regime model was built to forecast the probabilities of the recession and expansion regimes. The data used in the study are real GDP (1983Q1 - 2016Q2). The results of the study are: a) the Hodrick-Prescott filter on Indonesian real GDP is optimal when the value of the smoothing parameter is 988.474; b) the Indonesian business cycle has an amplitude varying between ±0.0071 and ±0.01024, with a duration of between 4 and 22 quarters; c) the business cycle can be modelled by an MSIV-AR(2) model, although the regime periodization generated by this model does not exactly match the actual regime periodization; and d) based on the MSIV-AR(2) model, the long-run probability of the expansion regime is 0.4858 and that of the recession regime is 0.5142.
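
    As a hedged sketch of the tool in question: the Hodrick-Prescott trend has the closed form τ = (I + λ DᵀD)⁻¹ y, where D is the second-difference operator, and the cycle is y − τ. Setting lam = 988.474 reproduces the choice reported above (the input series itself is whatever quarterly log GDP one supplies).

        import numpy as np

        def hp_filter(y, lam=988.474):
            """Hodrick-Prescott filter: returns (trend, cycle).
            Minimizes sum (y - tau)^2 + lam * sum (second difference of tau)^2."""
            y = np.asarray(y, dtype=float)
            n = len(y)
            D = np.diff(np.eye(n), n=2, axis=0)     # (n-2) x n second-difference matrix
            trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
            return trend, y - trend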

  13. Periodic orbits of hybrid systems and parameter estimation via AD.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guckenheimer, John.; Phipps, Eric Todd; Casey, Richard

    Rhythmic, periodic processes are ubiquitous in biological systems; for example, the heart beat, walking, circadian rhythms and the menstrual cycle. Modeling these processes with high fidelity as periodic orbits of dynamical systems is challenging because: (1) (most) nonlinear differential equations can only be solved numerically; (2) accurate computation requires solving boundary value problems; (3) many problems and solutions are only piecewise smooth; (4) many problems require solving differential-algebraic equations; (5) sensitivity information for parameter dependence of solutions requires solving variational equations; and (6) truncation errors in numerical integration degrade performance of optimization methods for parameter estimation. In addition, mathematical models of biological processes frequently contain many poorly-known parameters, and the problems associated with this impede the construction of detailed, high-fidelity models. Modelers are often faced with the difficult problem of using simulations of a nonlinear model, with complex dynamics and many parameters, to match experimental data. Improved computational tools for exploring parameter space and fitting models to data are clearly needed. This paper describes techniques for computing periodic orbits in systems of hybrid differential-algebraic equations and parameter estimation methods for fitting these orbits to data. These techniques make extensive use of automatic differentiation to accurately and efficiently evaluate derivatives for time integration, parameter sensitivities, root finding and optimization. The boundary value problem representing a periodic orbit in a hybrid system of differential-algebraic equations is discretized via multiple shooting using a high-degree Taylor series integration method [GM00, Phi03]. Numerical solutions to the shooting equations are then estimated by a Newton process, yielding an approximate periodic orbit. A metric is defined for computing the distance between two given periodic orbits, which is then minimized using a trust-region minimization algorithm [DS83] to find optimal fits of the model to a reference orbit [Cas04]. There are two different yet related goals that motivate the algorithmic choices listed above. The first is to provide a simple yet powerful framework for studying periodic motions in mechanical systems. Formulating mechanically correct equations of motion for systems of interconnected rigid bodies, while straightforward, is a time-consuming, error-prone process. Much of this difficulty stems from computing the acceleration of each rigid body in an inertial reference frame. The acceleration is computed most easily in a redundant set of coordinates giving the spatial positions of each body, since the acceleration is just the second derivative of these positions. Rather than providing explicit formulas for these derivatives, automatic differentiation can be employed to compute these quantities efficiently during the course of a simulation. The feasibility of these ideas was investigated by applying these techniques to the problem of locating stable walking motions for a disc-foot passive walking machine [CGMR01, Gar99, McG91]. The second goal of this work was to investigate the application of smooth optimization methods to periodic orbit parameter estimation problems in neural oscillations. Others [BB93, FUS93, VB99] have favored non-continuous optimization methods such as genetic algorithms, stochastic search methods, simulated annealing and brute-force random searches because of their perceived suitability to the landscape of typical objective functions in parameter space, particularly for multi-compartmental neural models. Here we argue that a carefully formulated optimization problem is amenable to Newton-like methods and has a sufficiently smooth landscape in parameter space that these methods can be an efficient and effective alternative. The plan of this paper is as follows. In Section 1 we provide a definition of hybrid systems that is the basis for modeling systems with discontinuities or discrete transitions. Sections 2, 3, and 4 briefly describe the Taylor series integration, periodic orbit tracking, and parameter estimation algorithms. For full treatments of these algorithms, we refer the reader to [Phi03, Cas04, CPG04]. The software implementation of these algorithms is briefly described in Section 5, with particular emphasis on the automatic differentiation software ADMC++. Finally, these algorithms are applied to the bipedal walking and Hodgkin-Huxley based neural oscillation problems discussed above in Section 6.
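
    A minimal sketch of the boundary value problem at the heart of this machinery, under strong simplifications (single shooting, a smooth ODE, SciPy integration instead of the paper's Taylor-series integration and automatic differentiation): find (x0, T) with φ_T(x0) − x0 = 0 together with a phase condition.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import fsolve

        def vdp(t, x, mu=1.0):
            """Van der Pol oscillator, a standard test system with a periodic orbit."""
            return [x[1], mu*(1 - x[0]**2)*x[1] - x[0]]

        def residual(z):
            """z = (x0, y0, T): periodicity defect plus the phase condition y0 = 0."""
            x0, T = z[:2], z[2]
            xT = solve_ivp(vdp, (0.0, T), x0, rtol=1e-10, atol=1e-12).y[:, -1]
            return [xT[0] - x0[0], xT[1] - x0[1], x0[1]]

        z = fsolve(residual, [2.0, 0.0, 6.5])      # guess near the known limit cycle
        print("period T =", z[2])                  # ~6.66 for mu = 1

    Multiple shooting replaces the single integration by several shorter segments with matching conditions, which is essential for the stiff, piecewise-smooth hybrid systems the paper targets.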

  14. A new parametric method to smooth time-series data of metabolites in metabolic networks.

    PubMed

    Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide

    2016-12-01

    Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Use of a genetic algorithm for the analysis of eye movements from the linear vestibulo-ocular reflex

    NASA Technical Reports Server (NTRS)

    Shelhamer, M.

    2001-01-01

    It is common in vestibular and oculomotor testing to use a single-frequency (sine) or combination of frequencies [sum-of-sines (SOS)] stimulus for head or target motion. The resulting eye movements typically contain a smooth tracking component, which follows the stimulus, in which are interspersed rapid eye movements (saccades or fast phases). The parameters of the smooth tracking--the amplitude and phase of each component frequency--are of interest; many methods have been devised that attempt to identify and remove the fast eye movements from the smooth. We describe a new approach to this problem, tailored to both single-frequency and sum-of-sines stimulation of the human linear vestibulo-ocular reflex. An approximate derivative is used to identify fast movements, which are then omitted from further analysis. The remaining points form a series of smooth tracking segments. A genetic algorithm is used to fit these segments together to form a smooth (but disconnected) wave form, by iteratively removing biases due to the missing fast phases. A genetic algorithm is an iterative optimization procedure; it provides a basis for extending this approach to more complex stimulus-response situations. In the SOS case, the genetic algorithm estimates the amplitude and phase values of the component frequencies as well as removing biases.

  16. Rate-independent dissipation in phase-field modelling of displacive transformations

    NASA Astrophysics Data System (ADS)

    Tůma, K.; Stupkiewicz, S.; Petryk, H.

    2018-05-01

    In this paper, rate-independent dissipation is introduced into the phase-field framework for modelling of displacive transformations, such as martensitic phase transformation and twinning. The finite-strain phase-field model developed recently by the present authors is here extended beyond the limitations of purely viscous dissipation. The variational formulation, in which the evolution problem is formulated as a constrained minimization problem for a global rate-potential, is enhanced by including a mixed-type dissipation potential that combines viscous and rate-independent contributions. Effective computational treatment of the resulting incremental problem of non-smooth optimization is developed by employing the augmented Lagrangian method. It is demonstrated that a single Lagrange multiplier field suffices to handle the dissipation potential vertex and simultaneously to enforce physical constraints on the order parameter. In this way, the initially non-smooth problem of evolution is converted into a smooth stationarity problem. The model is implemented in a finite-element code and applied to solve two- and three-dimensional boundary value problems representative for shape memory alloys.

  17. Effect of smoothing on robust chaos.

    PubMed

    Deshpande, Amogh; Chen, Qingfei; Wang, Yan; Lai, Ying-Cheng; Do, Younghae

    2010-08-01

    In piecewise-smooth dynamical systems, situations can arise where the asymptotic attractors of the system in an open parameter interval are all chaotic (e.g., no periodic windows). This is the phenomenon of robust chaos. Previous works have established that robust chaos can occur through the mechanism of border-collision bifurcation, where border is the phase-space region where discontinuities in the derivatives of the dynamical equations occur. We investigate the effect of smoothing on robust chaos and find that periodic windows can arise when a small amount of smoothness is present. We introduce a parameter of smoothing and find that the measure of the periodic windows in the parameter space scales linearly with the parameter, regardless of the details of the smoothing function. Numerical support and a heuristic theory are provided to establish the scaling relation. Experimental evidence of periodic windows in a supposedly piecewise linear dynamical system, which has been implemented as an electronic circuit, is also provided.

  18. On the constrained minimization of smooth Kurdyka—Łojasiewicz functions with the scaled gradient projection method

    NASA Astrophysics Data System (ADS)

    Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone

    2016-10-01

    The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
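
    A hedged sketch of one SGP iteration for simple box constraints (the scaling, steplength, and relaxation choices below are illustrative; the paper analyzes general scaling matrices and steplength rules):

        import numpy as np

        def sgp_step(x, grad, lo, hi, alpha=0.1, scale=None, lam=1.0):
            """x+ = x + lam * (P(x - alpha * D * grad(x)) - x), with diagonal scaling D
            and projection P onto the box [lo, hi]."""
            d = scale if scale is not None else np.ones_like(x)
            y = np.clip(x - alpha * d * grad(x), lo, hi)   # scaled step, then project
            return x + lam * (y - x)                       # move along feasible direction

        # Illustrative use: minimize ||x - c||^2 over the box [0, 1]^3
        c = np.array([1.5, -0.3, 0.4])
        x = np.zeros(3)
        for _ in range(200):
            x = sgp_step(x, lambda v: 2*(v - c), 0.0, 1.0)
        print(x)                                           # -> approx. [1, 0, 0.4]

    The convergence result described above says that, for objectives with the Kurdyka-Łojasiewicz property and Lipschitz gradient, such iterates converge to a single limit point rather than merely having stationary accumulation points.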

  19. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.

  20. Smooth function approximation using neural networks.

    PubMed

    Ferrari, Silvia; Stengel, Robert F

    2005-01-01

    An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated to the network adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.

  1. A Novel Controller Design for the Next Generation Space Electrostatic Accelerometer Based on Disturbance Observation and Rejection.

    PubMed

    Li, Hongyin; Bai, Yanzheng; Hu, Ming; Luo, Yingxin; Zhou, Zebing

    2016-12-23

    The state-of-the-art accelerometer technology has been widely applied in space missions. The performance of the next-generation accelerometer in future geodesic satellites is pushed to 8 × 10⁻¹³ m/s²/Hz^(1/2), which is close to the hardware fundamental limit. According to the instrument noise budget, the geodesic test mass must be kept in the center of the accelerometer within the bounds of 56 pm/Hz^(1/2) by the feedback controller. The unprecedented control requirements and the necessity of integrating calibration functions call for a new type of control scheme with more flexibility and robustness. A novel digital controller design for the next-generation electrostatic accelerometers, based on disturbance observation and rejection with the well-studied Embedded Model Control (EMC) methodology, is presented. The parameters are optimized automatically using a non-smooth optimization toolbox, with a weighted H-infinity norm as the target. The precise frequency-domain performance requirement of the accelerometer is well met during the batch auto-tuning, and a series of controllers for multiple working modes is generated. Simulation results show that the novel controller obtains not only better disturbance rejection performance than traditional Proportional Integral Derivative (PID) controllers, but also new instrument functions, including an easier tuning procedure, separation of measurement and control bandwidths, and smooth control-parameter switching.

  2. A Novel Controller Design for the Next Generation Space Electrostatic Accelerometer Based on Disturbance Observation and Rejection

    PubMed Central

    Li, Hongyin; Bai, Yanzheng; Hu, Ming; Luo, Yingxin; Zhou, Zebing

    2016-01-01

    The state-of-the-art accelerometer technology has been widely applied in space missions. The performance of the next-generation accelerometer in future geodesic satellites is pushed to 8 × 10⁻¹³ m/s²/Hz^(1/2), which is close to the hardware fundamental limit. According to the instrument noise budget, the geodesic test mass must be kept in the center of the accelerometer within the bounds of 56 pm/Hz^(1/2) by the feedback controller. The unprecedented control requirements and the necessity of integrating calibration functions call for a new type of control scheme with more flexibility and robustness. A novel digital controller design for the next-generation electrostatic accelerometers, based on disturbance observation and rejection with the well-studied Embedded Model Control (EMC) methodology, is presented. The parameters are optimized automatically using a non-smooth optimization toolbox, with a weighted H-infinity norm as the target. The precise frequency-domain performance requirement of the accelerometer is well met during the batch auto-tuning, and a series of controllers for multiple working modes is generated. Simulation results show that the novel controller obtains not only better disturbance rejection performance than traditional Proportional Integral Derivative (PID) controllers, but also new instrument functions, including an easier tuning procedure, separation of measurement and control bandwidths, and smooth control-parameter switching. PMID:28025534

  3. Adjoint-based Sensitivity of Jet Noise to Near-nozzle Forcing

    NASA Astrophysics Data System (ADS)

    Chung, Seung Whan; Vishnampet, Ramanathan; Bodony, Daniel; Freund, Jonathan

    2017-11-01

    Past efforts have used optimal control theory, based on the numerical solution of the adjoint flow equations, to perturb turbulent jets in order to reduce their radiated sound. These efforts have been successful in that sound is reduced, with concomitant changes to the large-scale turbulence structures in the flow. However, they have also been inconclusive, in that the ultimate level of reduction seemed to depend upon the accuracy of the adjoint-based gradient rather than a physical limitation of the flow. The chaotic dynamics of the turbulence can degrade the smoothness of the cost functional in the control-parameter space, which is necessary for gradient-based optimization. We introduce a route to overcoming this challenge, in part by leveraging the regularity and accuracy of a dual-consistent, discrete-exact adjoint formulation. We confirm its properties and use it to study the sensitivity and controllability of the acoustic radiation from a simulation of a M = 1.3 turbulent jet whose statistics match data. The smoothness of the cost functional over time is quantified by a minimum optimization step size beyond which the gradient cannot have a certain degree of accuracy. Based on this, we achieve a moderate level of sound reduction in the first few optimization steps. This material is based [in part] upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002374.

  4. Synchrony in Joint Action Is Directed by Each Participant’s Motor Control System

    PubMed Central

    Noy, Lior; Weiser, Netta; Friedman, Jason

    2017-01-01

    In this work, we ask how the probability of achieving synchrony in joint action is affected by the choice of motion parameters of each individual. We use the mirror game paradigm to study how changes in leader’s motion parameters, specifically frequency and peak velocity, affect the probability of entering the state of co-confidence (CC) motion: a dyadic state of synchronized, smooth and co-predictive motions. In order to systematically study this question, we used a one-person version of the mirror game, where the participant mirrored piece-wise rhythmic movements produced by a computer on a graphics tablet. We systematically varied the frequency and peak velocity of the movements to determine how these parameters affect the likelihood of synchronized joint action. To assess synchrony in the mirror game we used the previously developed marker of co-confident (CC) motions: smooth, jitter-less and synchronized motions indicative of co-predicative control. We found that when mirroring movements with low frequencies (i.e., long duration movements), the participants never showed CC, and as the frequency of the stimuli increased, the probability of observing CC also increased. This finding is discussed in the framework of motor control studies showing an upper limit on the duration of smooth motion. We confirmed the relationship between motion parameters and the probability to perform CC with three sets of data of open-ended two-player mirror games. These findings demonstrate that when performing movements together, there are optimal movement frequencies to use in order to maximize the possibility of entering a state of synchronized joint action. It also shows that the ability to perform synchronized joint action is constrained by the properties of our motor control systems. PMID:28443047

  5. A Robust Kalman Framework with Resampling and Optimal Smoothing

    PubMed Central

    Kautz, Thomas; Eskofier, Bjoern M.

    2015-01-01

    The Kalman filter (KF) is an extremely powerful and versatile tool for signal processing that has been applied extensively in various fields. We introduce a novel Kalman-based analysis procedure that encompasses robustness towards outliers, Kalman smoothing and real-time conversion from non-uniformly sampled inputs to a constant output rate. These features have been mostly treated independently, so that not all of their benefits could be exploited at the same time. Here, we present a coherent analysis procedure that combines the aforementioned features and their benefits. To facilitate utilization of the proposed methodology and to ensure optimal performance, we also introduce a procedure to calculate all necessary parameters. Thereby, we substantially expand the versatility of one of the most widely-used filtering approaches, taking full advantage of its most prevalent extensions. The applicability and superior performance of the proposed methods are demonstrated using simulated and real data. The possible areas of applications for the presented analysis procedure range from movement analysis over medical imaging, brain-computer interfaces to robot navigation or meteorological studies. PMID:25734647
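
    A minimal sketch of two of the named ingredients, a scalar Kalman filter followed by Rauch-Tung-Striebel (fixed-interval, "optimal") smoothing; the random-walk model and noise variances are illustrative, and the paper's outlier resampling and non-uniform-rate handling are omitted here:

        import numpy as np

        def kalman_rts(y, q=1e-3, r=1e-1):
            """Random-walk Kalman filter + RTS smoother.
            q: process noise variance, r: measurement noise variance."""
            n = len(y)
            xf, pf = np.zeros(n), np.zeros(n)       # filtered means and variances
            x, p = y[0], 1.0
            for k in range(n):
                p = p + q                           # predict: x_k = x_{k-1} + w_k
                K = p / (p + r)                     # Kalman gain
                x = x + K * (y[k] - x)              # measurement update
                p = (1 - K) * p
                xf[k], pf[k] = x, p
            xs = xf.copy()
            for k in range(n - 2, -1, -1):          # backward (smoothing) pass
                C = pf[k] / (pf[k] + q)             # smoother gain
                xs[k] = xf[k] + C * (xs[k + 1] - xf[k])
            return xs

    The backward pass uses future as well as past measurements, which is what distinguishes the smoother's estimates from the purely causal filtered ones.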

  6. Aerodynamic performance of conventional and advanced design labyrinth seals with solid-smooth abradable, and honeycomb lands. [gas turbine engines

    NASA Technical Reports Server (NTRS)

    Stocker, H. L.; Cox, D. M.; Holle, G. F.

    1977-01-01

    Labyrinth air seal static and dynamic performance was evaluated using solid, abradable, and honeycomb lands with standard and advanced seal designs. The effects on leakage of land surface roughness, abradable land porosity, rub grooves in abradable lands, and honeycomb land cell size and depth were studied using a standard labyrinth seal. The effects of rotation on the optimum seal knife pitch were also investigated. Selected geometric and aerodynamic parameters for an advanced seal design were evaluated to derive an optimized performance configuration. The rotational energy requirements were also measured to determine the inherent friction and pumping energy absorbed by the various seal knife and land configurations tested in order to properly assess the net seal system performance level. Results indicate that: (1) seal leakage can be significantly affected with honeycomb or abradable lands; (2) rotational energy absorption does not vary significantly with the use of a solid-smooth, an abradable, or a honeycomb land; and (3) optimization of an advanced lab seal design produced a configuration that had leakage 25% below a conventional stepped seal.

  7. Numerical simulation and optimization of casting process for complex pump

    NASA Astrophysics Data System (ADS)

    Liu, Xueqin; Dong, Anping; Wang, Donghong; Lu, Yanling; Zhu, Guoliang

    2017-09-01

    The complex pump body casting has a large, complicated structure and uniform wall thickness, which easily gives rise to casting defects. After analysis of the material and structural characteristics of the high-pressure pump, the numerical simulation software ProCAST was used to simulate the initial top-gating process. The filling process was smooth overall, with no under-filling (misrun) phenomenon. However, circular shrinkage defects appeared at the bottom of the casting during solidification. The casting parameters were then optimized, with cold iron (a chill) added at the bottom. The shrinkage weight was reduced from 0.00167 g to 0.0005 g, and the porosity volume was reduced from 1.39 cm³ to 0.41 cm³. The optimized scheme was both simulated and verified by actual experiment. The defect has been significantly improved.

  8. A new smoothing modified three-term conjugate gradient method for the ℓ1-norm minimization problem.

    PubMed

    Du, Shouqiang; Chen, Miao

    2018-01-01

    We consider a kind of nonsmooth optimization problem with ℓ1-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The method is based on the Polak-Ribière-Polyak conjugate gradient method. Because the Polak-Ribière-Polyak method has good numerical properties, the proposed method possesses the sufficient descent property without any line search, and it is also proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.
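
    A hedged sketch of the smoothing step alone (the paper's specific three-term conjugate gradient update is not reproduced): each |x_i| is replaced by the smooth surrogate sqrt(x_i² + μ²), after which any gradient-type descent method applies; plain gradient descent is used below purely for brevity.

        import numpy as np

        def grad_smoothed(x, A, b, rho=0.1, mu=1e-3):
            """Gradient of f(x) = 0.5*||Ax - b||^2 + rho * sum sqrt(x_i^2 + mu^2)."""
            return A.T @ (A @ x - b) + rho * x / np.sqrt(x**2 + mu**2)

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))              # underdetermined system
        x_true = np.zeros(100); x_true[:5] = 1.0        # sparse ground truth
        b = A @ x_true
        x = np.zeros(100)
        for _ in range(5000):
            x -= 1e-3 * grad_smoothed(x, A, b)          # descent on smoothed objective

    As μ → 0 the surrogate approaches the ℓ1 norm, which is the sense in which the smoothing approximation turns the nonsmooth problem into a general unconstrained smooth one.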

  9. Development of dihydropyridone indazole amides as selective Rho-kinase inhibitors.

    PubMed

    Goodman, Krista B; Cui, Haifeng; Dowdell, Sarah E; Gaitanopoulos, Dimitri E; Ivy, Robert L; Sehon, Clark A; Stavenger, Robert A; Wang, Gren Z; Viet, Andrew Q; Xu, Weiwei; Ye, Guosen; Semus, Simon F; Evans, Christopher; Fries, Harvey E; Jolivette, Larry J; Kirkpatrick, Robert B; Dul, Edward; Khandekar, Sanjay S; Yi, Tracey; Jung, David K; Wright, Lois L; Smith, Gary K; Behm, David J; Bentley, Ross; Doe, Christopher P; Hu, Erding; Lee, Dennis

    2007-01-11

    Rho kinase (ROCK1) mediates vascular smooth muscle contraction and is a potential target for the treatment of hypertension and related disorders. Indazole amide 3 was identified as a potent and selective ROCK1 inhibitor but possessed poor oral bioavailability. Optimization of this lead resulted in the discovery of a series of dihydropyridones, exemplified by 13, with improved pharmacokinetic parameters relative to the initial lead. Indazole substitution played a critical role in decreasing clearance and improving oral bioavailability.

  10. Optimized dielectric properties of SrTiO3:Nb /SrTiO3 (001) films for high field effect charge densities

    NASA Astrophysics Data System (ADS)

    Cai, Xiuyu; Frisbie, C. Daniel; Leighton, C.

    2006-12-01

    The authors report the growth, structural and electrical characterizations of SrTiO3 films deposited on conductive SrTiO3:Nb (001) substrates by high pressure reactive rf magnetron sputtering. Optimized deposition parameters yield smooth epitaxial layers of high crystalline perfection with a room temperature dielectric constant ˜200 (for a thickness of 1150Å). The breakdown fields in SrTiO3:Nb /SrTiO3/Ag capacitors are consistent with induced charge densities >1×1014cm-2 for both holes and electrons, making these films ideal for high charge density field effect devices.

  11. Lightweight Filter Architecture for Energy Efficient Mobile Vehicle Localization Based on a Distributed Acoustic Sensor Network

    PubMed Central

    Kim, Keonwook

    2013-01-01

    The generic properties of an acoustic signal provide numerous benefits for localization by applying energy-based methods over a deployed wireless sensor network (WSN). However, the signal generated by a stationary target utilizes a significant amount of bandwidth and power in the system without providing further position information. For vehicle localization, this paper proposes a novel proximity velocity vector estimator (PVVE) node architecture in order to capture the energy from a moving vehicle and reject the signal from motionless automobiles around the WSN node. A cascade structure between analog envelope detector and digital exponential smoothing filter presents the velocity vector-sensitive output with low analog circuit and digital computation complexity. The optimal parameters in the exponential smoothing filter are obtained by analytical and mathematical methods for maximum variation over the vehicle speed. For stationary targets, the derived simulation based on the acoustic field parameters demonstrates that the system significantly reduces the communication requirements with low complexity and can be expected to extend the operation time considerably. PMID:23979482

  12. Optimized path planning for soft tissue resection via laser vaporization

    NASA Astrophysics Data System (ADS)

    Ross, Weston; Cornwell, Neil; Tucker, Matthew; Mann, Brian; Codd, Patrick

    2018-02-01

    Robotic and robotic-assisted surgeries are becoming more prevalent with the promise of improving surgical outcomes through increased precision, reduced operating times, and minimally invasive procedures. The handheld laser scalpel in neurosurgery has been shown to provide a more gentle approach to tissue manipulation on or near critical structures over classical tooling, though difficulties of control have prevented large scale adoption of the tool. This paper presents a novel approach to generating a cutting path for the volumetric resection of tissue using a computer-guided laser scalpel. A soft tissue ablation simulator is developed and used in conjunction with an optimization routine to select parameters which maximize the total resection of target tissue while minimizing the damage to surrounding tissue. The simulator predicts the ablative properties of tissue from an interrogation cut for tuning and simulates the removal of a tumorous tissue embedded on the surface of healthy tissue using a laser scalpel. We demonstrate the ability to control depth and smoothness of cut using genetic algorithms to optimize the ablation parameters and cutting path. The laser power level, cutting rate and spacing between cuts are optimized over multiple surface cuts to achieve the desired resection volumes.

  13. Fabrication of low-loss ridge waveguides in z-cut lithium niobate by combination of ion implantation and UV picosecond laser micromachining

    NASA Astrophysics Data System (ADS)

    Stolze, M.; Herrmann, T.; L'huillier, J. A.

    2016-03-01

    Ridge waveguides in ferroelectric materials like LiNbO3 have attracted great interest for highly efficient integrated optical devices, for instance electro-optic modulators, frequency converters and ring resonators. The main challenges are the realization of a high index barrier towards the substrate and the processing of smooth ridges for minimized scattering losses. For fabricating ridges, a variety of techniques, such as chemical and wet etching as well as optical-grade dicing, have been investigated in detail. Among them, laser micromachining offers a versatile and flexible processing technology, but up to now only a limited sidewall roughness has been achieved by this technique. Here we report on laser micromachining of smooth ridges for low-loss optical waveguides in LiNbO3. The ridges, with a top width of 7 µm, were fabricated in z-cut LiNbO3 by a combination of UV picosecond micromachining and thermal annealing. The laser processing parameters show a strong influence on the achievable sidewall roughness of the ridges and were systematically investigated and optimized. The surface quality is further improved by an optimized thermal post-processing. The roughness of the ridges was analysed with confocal microscopy, and the scattering losses were measured at an optical characterization wavelength of 632.8 nm using the end-fire coupling method. In these investigations the index barrier was formed by multi-energy, low-dose oxygen ion implantation at a depth of 2.7 μm. With optimized laser processing parameters and thermal post-processing, a scattering loss as low as 0.1 dB/cm has been demonstrated.

  14. Optimal Control and Smoothing Techniques for Computing Minimum Fuel Orbital Transfers and Rendezvous

    NASA Astrophysics Data System (ADS)

    Epenoy, R.; Bertrand, R.

    We investigate in this paper the computation of minimum-fuel orbital transfers and rendezvous. Each problem is seen as an optimal control problem and is solved by means of shooting methods [1]. This approach corresponds to the use of Pontryagin's Maximum Principle (PMP) [2-4] and leads to the solution of a Two-Point Boundary Value Problem (TPBVP). It is well known that the latter is very difficult to solve when the performance index is fuel consumption, because in this case the optimal control law has a particular discontinuous structure called "bang-bang". We will show how to modify the performance index by a term depending on a small parameter in order to yield regular controls. A continuation method on this parameter will then lead us to the solution of the original problem. Convergence theorems will be given. Finally, numerical examples will illustrate the interest of our method. We will consider two particular problems: the GTO (Geostationary Transfer Orbit) to GEO (Geostationary Equatorial Orbit) transfer and the LEO (Low Earth Orbit) rendezvous.
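
    As a hedged sketch of the kind of perturbation meant here (the paper's exact regularizing term and convergence theorems are not reproduced), with the normalized thrust magnitude u(t) ∈ [0, 1] the fuel cost can be replaced by, for example, a barrier-regularized index

        J_\varepsilon = \int_{t_0}^{t_f} \Big[ u(t) - \varepsilon \log\big( u(t)\,(1 - u(t)) \big) \Big] \, dt , \qquad 0 < u(t) < 1 ,

    whose minimizers are smooth for every ε > 0; a continuation ε → 0 then drives the regular control toward the discontinuous bang-bang minimizer of the original fuel cost J_0 = ∫ u dt.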

  15. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.

  16. Autoimmune control of lesion growth in CNS with minimal damage

    NASA Astrophysics Data System (ADS)

    Mathankumar, R.; Mohan, T. R. Krishna

    2013-07-01

    Lesions in the central nervous system (CNS) and their growth lead to debilitating diseases like Multiple Sclerosis (MS), Alzheimer's, etc. We developed a model earlier [1, 2] which shows how lesion growth can be arrested through a beneficial autoimmune mechanism. We compared some of the dynamical patterns in the model with different facets of MS. The success of the approach depends on a set of control parameters, and their phase space was shown to have a smooth manifold separating the uncontrolled lesion growth region from the controlled one. Here we show that an optimal set of parameter values exists in the model which minimizes system damage while simultaneously achieving control of lesion growth.

  17. Robust estimation for ordinary differential equation models.

    PubMed

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
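
    A toy sketch of the nested two-level scheme may clarify the mechanics. In the Python sketch below the trajectory is represented by its values at grid nodes (coefficients of a hat-function basis), the inner level fits those values with a Huber data loss plus an ODE-fidelity penalty, and the outer level tunes the ODE parameter; the model x' = -theta*x, the tuning constants, and the use of numerical rather than implicit-function-theorem gradients are all simplifications of the paper's method.

```python
# Toy sketch of the two-level ("parameter cascading") scheme, assuming a
# hat-function basis (the trajectory is represented by its values at the
# grid nodes) and a Huber loss for robustness. The true model is
# x' = -theta * x with theta = 0.7; lam and the Huber width are invented
# tuning constants, and gradients here are numerical rather than analytic.
import numpy as np
from scipy.optimize import minimize, minimize_scalar

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 41)
y = np.exp(-0.7 * t) + 0.03 * rng.standard_normal(t.size)
y[::10] += 0.5                            # inject a few outliers

def huber(r, d=0.05):
    a = np.abs(r)
    return np.where(a <= d, 0.5 * r**2, d * (a - 0.5 * d)).sum()

def inner(theta, lam=10.0):
    """Fit nodal values c: robust data fit + ODE penalty ||c' + theta*c||^2."""
    def obj(c):
        dc = np.gradient(c, t)            # derivative of the fitted curve
        return huber(y - c) + lam * np.sum((dc + theta * c) ** 2)
    return minimize(obj, y, method="L-BFGS-B").x

def outer(theta):
    return huber(y - inner(theta))        # data fidelity of the inner solution

best = minimize_scalar(outer, bounds=(0.1, 2.0), method="bounded")
print("estimated theta:", round(best.x, 3))   # true value is 0.7
```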

  18. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    NASA Astrophysics Data System (ADS)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation and performance. It is customary to consider the ℓ1 penalty to enforce sparsity in such scenarios. Sparsity enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimension. For efficiency, they rely on tuning a parameter trading data fitting versus sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature: Scaled-Lasso, Square-root Lasso and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification we coin the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver leading to a computational cost no more expensive than the one for the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm combined with safe screening rules, eliminating irrelevant features early to achieve speed.
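
    The estimator itself can be illustrated with a simple alternating-minimization loop (the paper's actual contribution is a much faster coordinate-descent solver with safe screening rules). Assuming the usual concomitant objective with the noise estimate floored at a small constant sigma0 — the "smoothing" — a sketch in Python:

```python
# Alternating-minimization sketch of the smoothed concomitant estimator:
#   min over (beta, sigma >= sigma0) of
#   ||y - X beta||^2 / (2 n sigma) + sigma / 2 + lam * ||beta||_1
# For fixed sigma, the beta-step is a Lasso with penalty lam * sigma; for
# fixed beta, the sigma-step is closed form, floored at sigma0.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 200
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:5] = 1.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)

lam, sigma0 = 0.1, 1e-3
sigma = y.std()
for _ in range(20):
    beta = Lasso(alpha=lam * sigma, fit_intercept=False).fit(X, y).coef_
    sigma = max(np.linalg.norm(y - X @ beta) / np.sqrt(n), sigma0)

print("estimated noise level:", round(sigma, 3))   # true level is 0.5
print("support found:", np.flatnonzero(beta)[:10])
```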

  19. [Parameters optimization and cleaning efficiency evaluation of attrition scrubbing remediation of Pb-contaminated soil].

    PubMed

    Yang, Wen; Huang, Jin-lou; Peng, Hui-qing; Li, Si-tuo

    2013-09-01

    Attrition scrubbing was used to remediate lead-contaminated site soil, with the main purpose of removing fine particles and lead contaminants from the surface of sand. The optimal parameters of attrition scrubbing were determined by an orthogonal experiment, and three soil samples with different lead concentrations were subjected to attrition scrubbing experiments. The results showed that the optimal scrubbing parameters were: a solid ratio of 70% dry matter, a temperature of 25 degrees C, an attrition time of 30 min, and an attrition speed of 1200 r x min(-1). Before attrition scrubbing, screening and analysis of the soil showed that in all three soil samples lead was mainly enriched on sand and fine particles, and the distribution of lead was highly correlated with the organic matter. After attrition scrubbing, the washing efficiencies of the three lead-contaminated sandy soils were 67.61%, 31.71% and 41.01%, respectively, which indicates that attrition scrubbing can remove part of the fine soil and lead contaminants from the surface of sand, accomplishing the purpose of pollutant enrichment. Scanning electron microscopy (SEM) analysis showed that the sand surface became smooth after attrition scrubbing. These results show that attrition scrubbing has a good washing effect for the remediation of lead-contaminated sandy soil.

  20. Poster - 52: Smoothing constraints in Modulated Photon Radiotherapy (XMRT) fluence map optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGeachy, Philip; Villarreal-Barajas, Jose Eduardo

    Purpose: Modulated Photon Radiotherapy (XMRT), which simultaneously optimizes photon beamlet energy (6 and 18 MV) and fluence, has recently shown dosimetric improvement in comparison to conventional IMRT. That said, the degree of smoothness of the resulting fluence maps (FMs) has yet to be investigated and could impact the deliverability of XMRT. This study investigates FM smoothness and imposes smoothing constraints in the fluence map optimization. Methods: Smoothing constraints were modeled in the XMRT algorithm with the sum of positive gradient (SPG) technique. XMRT solutions, with and without SPG constraints, were generated for a clinical prostate scan using standard dosimetric prescriptions, constraints, and a seven coplanar beam arrangement. The smoothness, with and without SPG constraints, was assessed by examining the absolute and relative maximum SPG scores for each fluence map. Dose volume histograms were utilized when evaluating the impact on the dose distribution. Results: Imposing SPG constraints reduced the absolute and relative maximum SPG values by factors of up to 5 and 2, respectively, when compared with their non-SPG-constrained counterparts. This leads to a more seamless conversion of FMs to their respective MLC sequences. The improved smoothness resulted in an increase in organ-at-risk (OAR) dose; however, the increase is not clinically significant. Conclusions: For a clinical prostate case, there was a noticeable improvement in the smoothness of the XMRT FMs when SPG constraints were applied, with a minor increase in dose to OARs. This increase in OAR dose is not clinically meaningful.
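
    For readers unfamiliar with the SPG metric, a small sketch follows: for each leaf-travel row of a fluence map, sum the positive first differences, and take the worst row as the map's score. The exact conventions (row direction, normalization of the "relative" score) are assumptions here, not taken from the abstract.

```python
# Sketch of the sum-of-positive-gradients (SPG) smoothness metric for
# fluence maps: bumpier rows accumulate more positive gradient.
import numpy as np

def max_spg(fluence):
    """Absolute and relative maximum SPG over the rows of a fluence map."""
    pos_grad = np.clip(np.diff(fluence, axis=1), 0.0, None)
    spg_rows = pos_grad.sum(axis=1)                # one SPG score per row
    rel = spg_rows / np.maximum(fluence.max(axis=1), 1e-12)
    return spg_rows.max(), rel.max()

bumpy  = np.array([[0., 5., 0., 5., 0., 5.]])
smooth = np.array([[0., 2., 4., 5., 3., 1.]])
print(max_spg(bumpy), max_spg(smooth))   # the bumpy row scores much higher
```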

  1. Nonlinear dynamical modes of climate variability: from curves to manifolds

    NASA Astrophysics Data System (ADS)

    Gavrilov, Andrey; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander

    2016-04-01

    The necessity of efficient dimensionality reduction methods capturing dynamical properties of a system from observed data is evident. A recent study shows that nonlinear dynamical mode (NDM) expansion is able to solve this problem and provide adequate phase variables in climate data analysis [1]. A single NDM is a logical extension of a linear spatio-temporal structure (like an empirical orthogonal function pattern): it is constructed as a nonlinear transformation of a hidden scalar time series to the space of observed variables, i.e., a projection of the observed dataset onto a nonlinear curve. Both the hidden time series and the parameters of the curve are learned simultaneously using a Bayesian approach. The only prior information about the hidden signal is the assumption of its smoothness. The optimal nonlinearity degree and smoothness are found using the Bayesian evidence technique. In this work we extend the approach further and look for vector hidden signals instead of scalar ones, with the same smoothness restriction. As a result we resolve multidimensional manifolds instead of sums of curves. The dimension of the hidden manifold is also optimized using Bayesian evidence. The efficiency of the extension is demonstrated on model examples. Results of application to climate data are demonstrated and discussed. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510

  2. A graphical approach to optimizing variable-kernel smoothing parameters for improved deformable registration of CT and cone beam CT images

    NASA Astrophysics Data System (ADS)

    Hart, Vern; Burrow, Damon; Li, X. Allen

    2017-08-01

    A systematic method is presented for determining optimal parameters in variable-kernel deformable image registration of cone beam CT and CT images, in order to improve accuracy and convergence for potential use in online adaptive radiotherapy. Assessed conditions included the noise constant (symmetric force demons), the kernel reduction rate, the kernel reduction percentage, and the kernel adjustment criteria. Four such parameters were tested in conjunction with reductions of 5, 10, 15, 20, 30, and 40%. Noise constants ranged from 1.0 to 1.9 for pelvic images in ten prostate cancer patients. A total of 516 tests were performed and assessed using the structural similarity index. Registration accuracy was plotted as a function of iteration number and a least-squares regression line was calculated, which implied an average improvement of 0.0236% per iteration. This baseline was used to determine whether a given set of parameters under- or over-performed. The most accurate parameters within this range were applied to contoured images. The mean Dice similarity coefficient was calculated for bladder, prostate, and rectum, with mean values of 98.26%, 97.58%, and 96.73%, respectively, corresponding to improvements of 2.3%, 9.8%, and 1.2% over previously reported values for the same organ contours. This graphical approach to registration analysis could aid in determining optimal parameters for demons-based algorithms. It also establishes expectation values for convergence rates and could serve as an indicator of non-physical warping, which often occurred in cases deviating more than 0.6% from the regression line.

  3. [Preparation and quality control of pyridostigmine bromide orally disintegrating tablet].

    PubMed

    Zhang, Li; Tan, Qun-you; Cheng, Xun-guan; Wang, Hong; Hu, Ni-ni; Zhang, Jing-qing

    2012-05-01

    To prepare orally disintegrating tablets containing pyridostigmine bromide and optimize formulations. Solid dispersion was prepared using solvent evaporation-deposition method. The formulation was optimized by central composite design-response surface methodology (RSM plus CCD) with disintegration time as a reference parameter. The orally disintegrating tablets showed integrity and were smooth with desirable taste and feel in mouth. The disintegration time was less than 30 s. The cumulative drug dissolution was around 8.5% (around 2.5 mg which was less than bitterness threshold of pyridostigmine bromide of 3 mg) within 5 min in water while the cumulative drug dissolution was higher than 95% within 2 min in 0.1 N HCl. The orally disintegrating tablets are reasonable in formulation, feasible in technology and patient-friendly.

  4. Image smoothing and enhancement via min/max curvature flow

    NASA Astrophysics Data System (ADS)

    Malladi, Ravikanth; Sethian, James A.

    1996-03-01

    We present a class of PDE-based algorithms suitable for a wide range of image processing applications. The techniques are applicable to both salt-and-pepper gray-scale noise and full-image continuous noise present in black and white images, gray-scale images, texture images and color images. At their core, the techniques rely on a level set formulation of evolving curves and surfaces and on viscosity solutions for the profile evolution. Essentially, the method consists of moving the isointensity contours in an image under curvature-dependent speed laws to achieve enhancement. Compared to existing techniques, our approach has several distinct advantages. First, it contains only one enhancement parameter, which in most cases is automatically chosen. Second, the scheme automatically stops smoothing at some optimal point; continued application of the scheme produces no further change. Third, the method is one of the fastest possible schemes based on a curvature-controlled approach.
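
    The core update is easy to state: each isointensity contour moves with speed proportional to its curvature, i.e. I_t = κ|∇I|. The Python sketch below implements this basic curvature motion with finite differences; the full min/max scheme additionally switches between min(κ, 0) and max(κ, 0) based on a local average intensity, which produces the automatic stopping behavior and is omitted here for brevity.

```python
# Bare-bones curvature-motion smoothing: I_t = kappa * |grad I|, applied to
# a noisy synthetic image. Finite differences via np.gradient; eps avoids
# division by zero in flat regions. Without the min/max switch this
# version smooths indefinitely.
import numpy as np

def curvature_step(I, dt=0.1, eps=1e-8):
    Iy, Ix = np.gradient(I)              # axis 0 = rows (y), axis 1 = cols (x)
    Ixy, Ixx = np.gradient(Ix)
    Iyy, _ = np.gradient(Iy)
    num = Ixx * Iy**2 - 2.0 * Ix * Iy * Ixy + Iyy * Ix**2   # kappa*|grad I|^3
    return I + dt * num / (Ix**2 + Iy**2 + eps)

rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0   # bright square
img += 0.3 * rng.standard_normal(img.shape)         # heavy noise
for _ in range(50):
    img = curvature_step(img)            # contours shrink by their curvature
```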

  5. Optimization of shrink process with X-Y CD bias on hole pattern

    NASA Astrophysics Data System (ADS)

    Koike, Kyohei; Hara, Arisa; Natori, Sakurako; Yamauchi, Shohei; Yamato, Masatoshi; Oyama, Kenichi; Yaegashi, Hidetami

    2017-03-01

    Gridded design rules [1] are a major approach for configuring logic circuits with 193-nm immersion lithography. In the scaling of grid patterning, 10 nm-order line-and-space patterns can be made using multiple patterning techniques such as self-aligned multiple patterning (SAMP) and litho-etch-litho-etch (LELE) [2][3][4]. On the other hand, the line-cut process suffers from error sources such as pattern defects, placement error, roughness and X-Y CD bias as the scale decreases. We attempted to cure hole-pattern roughness using additional processes such as line smoothing [5]. Each smoothing process showed a different effect; as a result, the CDx shrink amount is smaller than that of CDy without one of the additional processes. In this paper, we report a comparison of pattern controllability between EUV and 193-nm immersion lithography and discuss the optimal method for handling CD bias on hole patterns.

  6. Effect of tetramethylammonium hydroxide/isopropyl alcohol wet etching on geometry and surface roughness of silicon nanowires fabricated by AFM lithography

    PubMed Central

    Yusoh, Siti Noorhaniah

    2016-01-01

    Summary: The optimization of etchant parameters in wet etching plays an important role in the fabrication of semiconductor devices. Wet etching with tetramethylammonium hydroxide (TMAH)/isopropyl alcohol (IPA) of silicon nanowires fabricated by AFM lithography is studied herein. TMAH (25 wt %) with different IPA concentrations (0, 10, 20, and 30 vol %) and etching durations (30, 40, and 50 s) were investigated. The relationships between etching depth and width, etching rate and surface roughness of the silicon nanowires were characterized in detail using atomic force microscopy (AFM). The obtained results indicate that increased IPA concentration in TMAH produced greater silicon nanowire width with a smooth surface. It was also observed that a longer etching time causes more of the unmasked silicon layer to be removed. Importantly, wet etching with the parameters optimized in this study can be applied to the design of devices with excellent performance for many applications. PMID:27826521

  7. Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms

    PubMed Central

    Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun

    2011-01-01

    This paper investigates the tuning of Proportional-Integral-Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures, such as good static-dynamic performance specifications and smooth control action. A model of the nonlinear thermodynamic laws between the numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment or rate in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good control performance in step responses, such as small overshoot, fast settling time, and low rise time and steady-state error. Besides, the scheme can be applied to tuning systems with different properties, such as strong interactions among variables, nonlinearities and conflicting performance criteria. The results indicate that multi-objective optimization is a quite effective and promising tuning approach for complex greenhouse production. PMID:22163927
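
    A compact stand-in for this tuning loop is sketched below: a first-order toy plant replaces the nonlinear greenhouse model, the cost is the ITSE of the unit-step response plus a small penalty on control increments (mirroring the smooth-control objective), and scipy's differential evolution stands in for the authors' EA. All constants are illustrative.

```python
# EA-style PID tuning on a toy first-order plant: minimize ITSE plus a
# control-increment penalty over (Kp, Ki, Kd). Not the paper's greenhouse
# model; just the shape of the optimization.
import numpy as np
from scipy.optimize import differential_evolution

def itse_cost(gains, T=10.0, dt=0.01, tau=1.0):
    kp, ki, kd = gains
    y = 0.0; integ = 0.0; e_prev = 1.0; u_prev = 0.0
    cost = 0.0
    for k in range(int(T / dt)):
        e = 1.0 - y                        # unit-step setpoint
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        y += dt * (-y + u) / tau           # first-order plant: tau*y' = -y + u
        if not np.isfinite(y) or abs(y) > 1e6:
            return 1e9                     # unstable gains get a huge cost
        cost += (k * dt) * e * e * dt      # ITSE term
        cost += 1e-4 * (u - u_prev) ** 2   # smooth-control penalty
        e_prev, u_prev = e, u
    return cost

res = differential_evolution(itse_cost, bounds=[(0, 20), (0, 10), (0, 1)],
                             seed=0, maxiter=30, tol=1e-6)
print("tuned (Kp, Ki, Kd):", np.round(res.x, 2))
```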

  8. Research on Novel High-Power Microwave/Millimeter Wave Sources and Applications

    DTIC Science & Technology

    2010-08-28

    density with acceptable operating temperature and lifetime. The MIG is optimized with the EGUN code for a cathode voltage Vb of 100 kV and a beam...emission suppression. Figure 2 is an EGUN drawing of the MIG configuration/dimensions and electron trajectories. The design is flexible. [Table I, predicted and measured MIG parameters, EGUN prediction (smooth cathode) vs. measurement: voltage 100.0 kV vs. 100.0 kV; current 8.0 A vs. 8.0 A; 1.40 vs. 1.40; vz/vz0 spread 3.5% vs. 4.6%.]

  9. Cyclosporine A loaded solid lipid nanoparticles: optimization of formulation, process variable and characterization.

    PubMed

    Varia, Jigisha K; Dodiya, Shamsunder S; Sawant, Krutika K

    2008-01-01

    Solid lipid nanoparticles (SLNs) loaded with Cyclosporine A, using glyceryl monostearate (GMS) and glyceryl palmitostearate (GPS) as lipid matrices, were prepared by melt-homogenization using a high-pressure homogenizer. Various process parameters, such as homogenization pressure and homogenization cycles, and formulation parameters, such as the drug:lipid, emulsifier:lipid and emulsifier:co-emulsifier ratios, were optimized using particle size and entrapment efficiency as the dependent variables. The mean particle sizes of the optimized batches of the GMS SLN and GPS SLN were found to be 131 nm and 158 nm, and their entrapment efficiencies were 83 +/- 3.08% and 97 +/- 2.59%, respectively. To improve the handling, processing and stability of the prepared SLNs, the SLN dispersions were spray dried and the effect on size and reconstitution parameters was evaluated. Spray drying did not significantly alter the size of the SLNs and they exhibited good redispersibility. Solid state studies such as infrared spectroscopy and differential scanning calorimetry indicated the absence of any chemical interaction between Cyclosporine A and the lipids. Scanning electron microscopy of the optimized formulations showed spherical shape with a smooth and non-porous surface. In vitro release studies revealed that GMS-based SLNs released the drug faster (41.12% in 20 hours) than GPS SLNs (7.958% in 20 hours). Release of Cyclosporine A from GMS SLN followed the Higuchi equation better than first order, while release from GPS SLN followed first order better than the Higuchi model.

  10. An optimal general type-2 fuzzy controller for Urban Traffic Network.

    PubMed

    Khooban, Mohammad Hassan; Vafamand, Navid; Liaghat, Alireza; Dragicevic, Tomislav

    2017-01-01

    The urban traffic network model is illustrated by state charts and object diagrams. However, these have limitations in showing the behavioral perspective of the traffic information flow. Consequently, a state space model is used to calculate the half-value waiting time of vehicles. In this study, a combination of general type-2 fuzzy logic sets and the Modified Backtracking Search Algorithm (MBSA) is used to control the traffic signal scheduling and phase succession so as to guarantee a smooth flow of traffic with the least wait times and average queue length. The parameters of the input and output membership functions are optimized simultaneously by the novel heuristic MBSA. A comparison is made between the achieved results and those of optimal and conventional type-1 fuzzy logic controllers. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Simple fabrication of closed-packed IR microlens arrays on silicon by femtosecond laser wet etching

    NASA Astrophysics Data System (ADS)

    Meng, Xiangwei; Chen, Feng; Yang, Qing; Bian, Hao; Du, Guangqing; Hou, Xun

    2015-10-01

    We demonstrate a simple route to fabricate closed-packed infrared (IR) silicon microlens arrays (MLAs) based on femtosecond laser irradiation assisted by wet etching method. The fabricated MLAs show high fill factor, smooth surface and good uniformity. They can be used as optical devices for IR applications. The exposure and etching parameters are optimized to obtain reproducible microlens with hexagonal and rectangular arrangements. The surface roughness of the concave MLAs is only 56 nm. This presented method is a maskless process and can flexibly change the size, shape and the fill factor of the MLAs by controlling the experimental parameters. The concave MLAs on silicon can work in IR region and can be used for IR sensors and imaging applications.

  12. Comparing transformation methods for DNA microarray data

    PubMed Central

    Thygesen, Helene H; Zwinderman, Aeilko H

    2004-01-01

    Background When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification the signal intensities are usually transformed and normalized in several steps in order to improve comparability and signal/noise ratio. These steps may include subtraction of an estimated background signal, subtracting the reference signal, smoothing (to account for nonlinear measurement effects), and more. Different authors use different approaches, and it is generally not clear to users which method they should prefer. Results We used the ratio between biological variance and measurement variance (which is an F-like statistic) as a quality measure for transformation methods, and we demonstrate a method for maximizing that variance ratio on real data. We explore a number of transformation issues, including Box-Cox transformation, baseline shift, partial subtraction of the log-reference signal and smoothing. It appears that the optimal choice of parameters for the transformation methods depends on the data. Further, the behavior of the variance ratio, under the null hypothesis of zero biological variance, appears to depend on the choice of parameters. Conclusions The use of replicates in microarray experiments is important. Adjustment for the null-hypothesis behavior of the variance ratio is critical to the selection of transformation method. PMID:15202953
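
    The selection criterion lends itself to a short sketch: for each candidate transformation (here, a grid of Box-Cox parameters applied to simulated replicated intensities), compute the ratio of biological variance (spread of per-gene means) to measurement variance (spread within replicates) and keep the transform that maximizes it. The data model below is a stand-in, not the paper's.

```python
# Toy version of the variance-ratio criterion for choosing a transformation.
import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(0)
genes, reps = 200, 4
mu = rng.lognormal(mean=4.0, sigma=1.0, size=(genes, 1))
intensities = mu * rng.lognormal(sigma=0.3, size=(genes, reps))  # mult. noise

def variance_ratio(lmbda):
    z = boxcox(intensities.ravel(), lmbda).reshape(genes, reps)
    biological = z.mean(axis=1).var(ddof=1)        # between-gene variance
    measurement = z.var(axis=1, ddof=1).mean()     # within-gene variance
    return biological / measurement

grid = np.linspace(-0.5, 1.0, 16)
best = max(grid, key=variance_ratio)
print("best Box-Cox lambda:", round(float(best), 2))  # near 0 (log) for
                                                      # multiplicative noise
```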

  14. Optimization of conditions for thermal smoothing GaAs surfaces

    NASA Astrophysics Data System (ADS)

    Akhundov, I. O.; Kazantsev, D. M.; Kozhuhov, A. S.; Alperovich, V. L.

    2018-03-01

    GaAs thermal smoothing by annealing in conditions which are close to equilibrium between the surface and vapors of As and Ga was earlier proved to be effective for the step-terraced surface formation on epi-ready substrates with a small root-mean-square roughness (Rq ≤ 0.15 nm). In the present study, this technique is further developed in order to reduce the annealing duration and to smooth GaAs samples with a larger initial roughness. To this end, we proposed a two-stage anneal with the first high-temperature stage aimed at smoothing "coarse" relief features and the second stage focused on "fine" smoothing at a lower temperature. The optimal temperatures and durations of two-stage annealing are found by Monte Carlo simulations and adjusted after experimentation. It is proved that the temperature and duration of the first high-temperature stage are restricted by the surface roughening, which occurs due to deviations from equilibrium conditions.

  15. [Determination of calcium and magnesium in tobacco by near-infrared spectroscopy and least squares-support vector machine].

    PubMed

    Tian, Kuang-da; Qiu, Kai-xian; Li, Zu-hong; Lü, Ya-qiong; Zhang, Qiu-ju; Xiong, Yan-mei; Min, Shun-geng

    2014-12-01

    The purpose of the present paper is to determine calcium and magnesium in tobacco using NIR combined with least squares-support vector machine (LS-SVM). Five hundred ground and dried tobacco samples from Qujing city, Yunnan province, China, were measured with a MATRIX-I spectrometer (Bruker Optics, Bremen, Germany). At the beginning of data processing, outlier samples were eliminated for stability of the model. The remaining 487 samples were divided into several calibration and validation sets according to a hybrid modeling strategy. Monte-Carlo cross validation was used to choose the best spectral preprocessing method from multiplicative scatter correction (MSC), standard normal variate transformation (SNV), S-G smoothing, 1st derivative, etc., and their combinations. To optimize the parameters of the LS-SVM model, multilayer grid search and 10-fold cross validation were applied. The final LS-SVM models with the optimized parameters were trained on the calibration set and assessed on 287 validation samples picked by the Kennard-Stone method. For the quantitative model of calcium in tobacco, Savitzky-Golay FIR smoothing with frame size 21 showed the best performance. The regularization parameter λ of the LS-SVM was e16.11, while the bandwidth of the RBF kernel σ2 was e8.42. The determination coefficient for calibration (Rc(2)) was 0.9755 and the determination coefficient for prediction (Rp(2)) was 0.9422, better than the performance of the PLS model (Rc(2)=0.9593, Rp(2)=0.9344). For the quantitative analysis of magnesium, SNV made the regression model more precise than the other preprocessing methods. The optimized λ was e15.25 and σ2 was e6.32. Rc(2) and Rp(2) were 0.9961 and 0.9301, respectively, better than the PLS model (Rc(2)=0.9716, Rp(2)=0.8924). After modeling, the whole process of NIR scanning and data analysis for one sample took only tens of seconds. The overall results show that NIR spectroscopy combined with LS-SVM can be efficiently utilized for rapid and accurate analysis of calcium and magnesium in tobacco.

  16. Optimization of ecosystem model parameters with different temporal variabilities using tower flux data and an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    He, L.; Chen, J. M.; Liu, J.; Mo, G.; Zhen, T.; Chen, B.; Wang, R.; Arain, M.

    2013-12-01

    Terrestrial ecosystem models have been widely used to simulate carbon, water and energy fluxes and climate-ecosystem interactions. In these models, some vegetation and soil parameters are determined from limited studies in the literature without consideration of their seasonal variations. Data assimilation (DA) provides an effective way to optimize these parameters at different time scales. In this study, an ensemble Kalman filter (EnKF) is developed and applied to optimize two key parameters of an ecosystem model, namely the Boreal Ecosystem Productivity Simulator (BEPS): (1) the maximum photosynthetic carboxylation rate (Vcmax) at 25 °C, and (2) the soil water stress factor (fw) in the stomatal conductance formulation. These parameters are optimized by assimilating observations of gross primary productivity (GPP) and latent heat (LE) fluxes measured in a 74-year-old pine forest, which is part of the Turkey Point Flux Station's age-sequence sites. Vcmax is related to leaf nitrogen concentration and varies slowly over the season and from year to year. In contrast, fw varies rapidly in response to soil moisture dynamics in the root zone. Earlier studies suggested that DA of vegetation parameters at daily time steps leads to Vcmax values that are unrealistic. To overcome this problem, we developed a three-step scheme to optimize Vcmax and fw. First, the EnKF is applied daily to obtain precursor estimates of Vcmax and fw. Then Vcmax is optimized at different time scales, assuming fw is unchanged from the first step. The best temporal period, or window size, is then determined by analyzing the magnitude of the minimized cost function and the coefficient of determination (R2) and root-mean-square deviation (RMSE) of GPP and LE between simulation and observation. Finally, the daily fw value is optimized for rain-free days corresponding to the Vcmax curve from the best window size. The optimized fw is then used to model its relationship with soil moisture. We found that the optimized fw is best correlated, linearly, with soil water content at 5 to 10 cm depth. We also found that both the temporal scale (window size) and the a priori uncertainty of Vcmax (given as its standard deviation) are important in determining the seasonal trajectory of Vcmax. During the leaf expansion stage, an appropriate window size leads to a reasonable estimate of Vcmax. In the summer, the fluctuation of optimized Vcmax is mainly caused by the uncertainties in Vcmax, not the window size. Our study suggests that a smooth Vcmax curve optimized with an optimal time window size is close to reality, even though the RMSE of GPP at this window is not the minimum. It also suggests that, for accurate optimization of Vcmax, it is necessary to set appropriate levels of uncertainty of Vcmax in the spring and summer, because the rate of change of leaf nitrogen concentration differs over the season. Parameter optimizations for more sites and multiple years are in progress.
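
    The per-day EnKF parameter update at the heart of the scheme can be sketched compactly: an ensemble of Vcmax values is pushed through a (here, toy linear) GPP model and each member is corrected toward a perturbed observation using ensemble covariances. The real study uses the BEPS model and the three-step multi-scale scheme described above; all numbers below are illustrative.

```python
# Minimal stochastic-EnKF update of a scalar parameter (Vcmax) against
# flux-tower GPP observations. The GPP model is a toy stand-in for BEPS.
import numpy as np

rng = np.random.default_rng(0)

def gpp_model(vcmax):
    return 0.12 * vcmax                  # toy stand-in for BEPS GPP(Vcmax)

ens = rng.normal(60.0, 8.0, size=100)    # prior Vcmax ensemble (umol m-2 s-1)
obs, obs_err = 9.0, 0.5                  # observed GPP and its std. dev.

for _ in range(5):                       # assimilate a few daily observations
    hx = gpp_model(ens)                  # ensemble of predicted observations
    cov_xy = np.cov(ens, hx)[0, 1]       # parameter/prediction covariance
    var_y = hx.var(ddof=1) + obs_err**2
    gain = cov_xy / var_y                # Kalman gain (scalar case)
    perturbed = obs + obs_err * rng.standard_normal(ens.size)
    ens = ens + gain * (perturbed - hx)  # stochastic EnKF update

print("posterior Vcmax mean:", round(ens.mean(), 1))  # pulled toward obs/0.12
```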

  17. Optimization of morphological parameters for mitigation pits on rear KDP surface: experiments and numerical modeling.

    PubMed

    Yang, Hao; Cheng, Jian; Chen, Mingjun; Wang, Jian; Liu, Zhichao; An, Chenhui; Zheng, Yi; Hu, Kehui; Liu, Qi

    2017-07-24

    In high power laser systems, precision micro-machining is an effective method to mitigate laser-induced surface damage growth on potassium dihydrogen phosphate (KDP) crystal. Repaired surfaces with smooth spherical and Gaussian contours can alleviate the light field modulation caused by damage sites. To obtain the optimal repair structure parameters, finite element method (FEM) models were established to simulate the light intensification caused by mitigation pits on the rear KDP surface. The light intensity modulation of these repair profiles was compared by changing the structure parameters. The results indicate that the modulation is mainly caused by the mutual interference between the reflected and incident light on the rear surface. Owing to total reflection, the light intensity enhancement factors (LIEFs) of the spherical and Gaussian mitigation pits increase sharply when the width-depth ratios are near 5.28 and 3.88, respectively. To achieve the optimal mitigation effect, width-depth ratios greater than 5.3 and 4.3 should be applied to the spherical and Gaussian repaired contours, respectively. In particular, for width-depth ratios greater than 5.3, the spherical repaired contour is preferred, as it achieves lower light intensification. Laser damage testing shows that when the width-depth ratios are larger than 5.3, the spherical repaired contour presents higher laser damage resistance than the Gaussian repaired contour, which agrees well with the simulation results.

  18. Development and evaluation of paclitaxel nanoparticles using a quality-by-design approach.

    PubMed

    Yerlikaya, Firat; Ozgen, Aysegul; Vural, Imran; Guven, Olgun; Karaagaoglu, Ergun; Khan, Mansoor A; Capan, Yilmaz

    2013-10-01

    The aims of this study were to develop and characterize paclitaxel nanoparticles, to identify and control critical sources of variability in the process, and to understand the impact of formulation and process parameters on the critical quality attributes (CQAs) using a quality-by-design (QbD) approach. A risk assessment study was performed with various formulation and process parameters to determine their impact on the CQAs of the nanoparticles, which were determined to be average particle size, zeta potential, and encapsulation efficiency. Potential risk factors were identified using an Ishikawa diagram and screened by a Plackett-Burman design, and finally the nanoparticles were optimized using a Box-Behnken design. The optimized formulation was further characterized by Fourier transform infrared spectroscopy, X-ray diffractometry, differential scanning calorimetry, scanning electron microscopy, atomic force microscopy, and gas chromatography. It was observed that paclitaxel transformed from the crystalline to the amorphous state while being completely encapsulated within the nanoparticles. The nanoparticles were spherical, smooth, and homogeneous, with no dichloromethane residue. An in vitro cytotoxicity test showed that the developed nanoparticles are more efficient than free paclitaxel in terms of antitumor activity (by more than 25%). In conclusion, this study demonstrated that understanding formulation and process parameters with the philosophy of QbD is useful for the optimization of complex drug delivery systems. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.

  19. Connectivity-based fixel enhancement: Whole-brain statistical analysis of diffusion MRI measures in the presence of crossing fibres

    PubMed Central

    Raffelt, David A.; Smith, Robert E.; Ridgway, Gerard R.; Tournier, J-Donald; Vaughan, David N.; Rose, Stephen; Henderson, Robert; Connelly, Alan

    2015-01-01

    In brain regions containing crossing fibre bundles, voxel-average diffusion MRI measures such as fractional anisotropy (FA) are difficult to interpret, and lack within-voxel single fibre population specificity. Recent work has focused on the development of more interpretable quantitative measures that can be associated with a specific fibre population within a voxel containing crossing fibres (herein we use fixel to refer to a specific fibre population within a single voxel). Unfortunately, traditional 3D methods for smoothing and cluster-based statistical inference cannot be used for voxel-based analysis of these measures, since the local neighbourhood for smoothing and cluster formation can be ambiguous when adjacent voxels may have different numbers of fixels, or ill-defined when they belong to different tracts. Here we introduce a novel statistical method to perform whole-brain fixel-based analysis called connectivity-based fixel enhancement (CFE). CFE uses probabilistic tractography to identify structurally connected fixels that are likely to share underlying anatomy and pathology. Probabilistic connectivity information is then used for tract-specific smoothing (prior to the statistical analysis) and enhancement of the statistical map (using a threshold-free cluster enhancement-like approach). To investigate the characteristics of the CFE method, we assessed sensitivity and specificity using a large number of combinations of CFE enhancement parameters and smoothing extents, using simulated pathology generated with a range of test-statistic signal-to-noise ratios in five different white matter regions (chosen to cover a broad range of fibre bundle features). The results suggest that CFE input parameters are relatively insensitive to the characteristics of the simulated pathology. We therefore recommend a single set of CFE parameters that should give near optimal results in future studies where the group effect is unknown. We then demonstrate the proposed method by comparing apparent fibre density between motor neurone disease (MND) patients with control subjects. The MND results illustrate the benefit of fixel-specific statistical inference in white matter regions that contain crossing fibres. PMID:26004503

  20. Modeling the dispersion effects of contractile fibers in smooth muscles

    NASA Astrophysics Data System (ADS)

    Murtada, Sae-Il; Kroon, Martin; Holzapfel, Gerhard A.

    2010-12-01

    Micro-structurally based models for smooth muscle contraction are crucial for a better understanding of pathological conditions such as atherosclerosis, incontinence and asthma. It is meaningful that models consider the underlying mechanical structure and the biochemical activation. Hence, a simple mechanochemical model is proposed that includes the dispersion of the orientation of smooth muscle myofilaments and that is capable to capture available experimental data on smooth muscle contraction. This allows a refined study of the effects of myofilament dispersion on the smooth muscle contraction. A classical biochemical model is used to describe the cross-bridge interactions with the thin filament in smooth muscles in which calcium-dependent myosin phosphorylation is the only regulatory mechanism. A novel mechanical model considers the dispersion of the contractile fiber orientations in smooth muscle cells by means of a strain-energy function in terms of one dispersion parameter. All model parameters have a biophysical meaning and may be estimated through comparisons with experimental data. The contraction of the middle layer of a carotid artery is studied numerically. Using a tube the relationships between the internal pressure and the stretches are investigated as functions of the dispersion parameter, which implies a strong influence of the orientation of smooth muscle myofilaments on the contraction response. It is straightforward to implement this model in a finite element code to better analyze more complex boundary-value problems.

  1. Measuring optical properties of a blood vessel model using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Levitz, David; Hinds, Monica T.; Tran, Noi; Vartanian, Keri; Hanson, Stephen R.; Jacques, Steven L.

    2006-02-01

    In this paper we develop the concept of a tissue-engineered optical phantom that uses engineered tissue as a phantom for calibration and optimization of biomedical optics instrumentation. With this method, the effects of biological processes on measured signals can be studied in a well controlled manner. To demonstrate this concept, we attempted to investigate how the cellular remodeling of a collagen matrix affected the optical properties extracted from optical coherence tomography (OCT) images of the samples. Tissue-engineered optical phantoms of the vascular system were created by seeding smooth muscle cells in a collagen matrix. Four different optical properties were evaluated by fitting the OCT signal to 2 different models: the sample reflectivity ρ and attenuation parameter μ were extracted from the single scattering model, and the scattering coefficient μ s and root-mean-square scattering angle θ rms were extracted from the extended Huygens-Fresnel model. We found that while contraction of the smooth muscle cells was clearly evident macroscopically, on the microscopic scale very few cells were actually embedded in the collagen. Consequently, no significant difference between the cellular and acellular samples in either set of measured optical properties was observed. We believe that further optimization of our tissue-engineering methods is needed in order to make the histology and biochemistry of the cellular samples sufficiently different from the acellular samples on the microscopic level. Once these methods are optimized, we can better verify whether the optical properties of the cellular and acellular collagen samples differ.
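
    Of the two fits, the single-scattering one is simple enough to sketch: within that model the depth-resolved OCT signal decays as R(z) = ρ exp(-2μz), so ρ and μ follow from an ordinary nonlinear least-squares fit. The synthetic A-scan below stands in for real data; the extended Huygens-Fresnel fit is considerably more involved.

```python
# Fitting the single-scattering OCT model R(z) = rho * exp(-2 * mu * z)
# to a synthetic, noisy A-scan with nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def single_scatter(z, rho, mu):
    return rho * np.exp(-2.0 * mu * z)   # two-way attenuation over depth z

rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 200)                          # depth (mm)
signal = single_scatter(z, 0.8, 2.5) * (1 + 0.05 * rng.standard_normal(z.size))

(rho_hat, mu_hat), _ = curve_fit(single_scatter, z, signal, p0=(1.0, 1.0))
print(f"reflectivity rho = {rho_hat:.2f}, attenuation mu = {mu_hat:.2f} /mm")
```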

  2. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    PubMed

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model. The learning-based R-D model is proposed as a way to overcome the legacy "chicken-and-egg" dilemma in video coding. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method can achieve much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limits of the FixedQP method.

  3. Empirical Bayes methods for smoothing data and for simultaneous estimation of many parameters.

    PubMed Central

    Yanagimoto, T; Kashiwagi, N

    1990-01-01

    A recent successful development is found in a series of innovative, new statistical methods for smoothing data that are based on the empirical Bayes method. This paper emphasizes their practical usefulness in medical sciences and their theoretically close relationship with the problem of simultaneous estimation of parameters, depending on strata. The paper also presents two examples of analyzing epidemiological data obtained in Japan using the smoothing methods to illustrate their favorable performance. PMID:2148512

  4. Highly controllable ICP etching of GaAs based materials for grating fabrication

    NASA Astrophysics Data System (ADS)

    Weibin, Qiu; Jiaxian, Wang

    2012-02-01

    Highly controllable ICP etching of GaAs based materials with SiCl4/Ar plasma is investigated. A slow etching rate of 13 nm/min was achieved with RF1 = 10 W, RF2 = 20 W and a high ratio of Ar to SiCl4 flow. First order gratings with 25 nm depth and 140 nm period were fabricated with the optimal parameters. AFM analysis indicated that the RMS roughness over a 10 × 10 μm2 area was 0.3 nm, which is smooth enough to regrow high quality materials for devices.

  5. Minimum variance optimal rate allocation for multiplexed H.264/AVC bitstreams.

    PubMed

    Tagliasacchi, Marco; Valenzise, Giuseppe; Tubaro, Stefano

    2008-07-01

    Consider the problem of transmitting multiple video streams under a constant bandwidth constraint. The available bit budget needs to be distributed across the sequences in order to meet some optimality criterion. For example, one might want to minimize the average distortion or, alternatively, minimize the distortion variance, in order to keep almost constant quality among the encoded sequences. By working in the rho-domain, we propose a low-delay rate allocation scheme that, at each time instant, provides a closed-form solution for either of the aforementioned problems. We show that minimizing the distortion variance instead of the average distortion leads, for each of the multiplexed sequences, to a coding penalty of less than 0.5 dB in terms of average PSNR. In addition, our analysis provides an explicit relationship between model parameters and this loss. In order to smooth the distortion along time as well, we accommodate a shared encoder buffer to compensate for rate fluctuations. Although the proposed scheme is general, and can be adopted for any video and image coding standard, we provide experimental evidence by transcoding bitstreams encoded using the state-of-the-art H.264/AVC standard. The results of our simulations reveal that it is possible to achieve distortion smoothing both in time and across the sequences, without sacrificing coding efficiency.

  6. A robust, efficient equidistribution 2D grid generation method

    NASA Astrophysics Data System (ADS)

    Chacon, Luis; Delzanno, Gian Luca; Finn, John; Chung, Jeojin; Lapenta, Giovanni

    2007-11-01

    We present a new cell-area equidistribution method for two-dimensional grid adaptation [1]. The method is able to satisfy the equidistribution constraint to arbitrary precision while optimizing desired grid properties (such as isotropy and smoothness). The method is based on the minimization of the grid smoothness integral, constrained to producing a given positive-definite cell volume distribution. The procedure gives rise to a single, non-linear scalar equation with no free parameters. We solve this equation numerically with the Newton-Krylov technique. The ellipticity property of the linearized scalar equation allows multigrid preconditioning techniques to be used effectively. We demonstrate that a solution exists and is unique. Therefore, once the solution is found, the adapted grid cannot be folded, due to the positivity of the constraint on the cell volumes. We present several challenging tests to show that our new method produces optimal grids in which the constraint is satisfied numerically to arbitrary precision. We also compare the new method to the deformation method [2] and show that our new method produces better quality grids. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, in preparation. [2] G. Liao and D. Anderson, A new approach to grid generation, Appl. Anal. 44, 285-297 (1992).
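
    The underlying idea is easiest to see in one dimension, where equidistribution means choosing nodes so that each cell carries the same integral of a weight (monitor) function; the paper's contribution is the much harder constrained two-dimensional cell-area analogue. A one-dimensional Python sketch:

```python
# Classic 1-D equidistribution: place nodes so every cell holds the same
# integral of the monitor function w(x), by inverting its cumulative
# (trapezoidal) integral.
import numpy as np

def equidistribute(w, x, n_cells):
    """Return n_cells+1 nodes equidistributing the weight w sampled on x."""
    W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, W[-1], n_cells + 1)
    return np.interp(targets, W, x)      # invert the cumulative integral

x = np.linspace(0.0, 1.0, 1001)
w = 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)   # boundary-layer-like bump
nodes = equidistribute(w, x, 20)
print(np.round(nodes, 3))   # nodes cluster near x = 0.5 where w is large
```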

  7. Self-Organizing Hierarchical Particle Swarm Optimization with Time-Varying Acceleration Coefficients for Economic Dispatch with Valve Point Effects and Multifuel Options

    NASA Astrophysics Data System (ADS)

    Polprasert, Jirawadee; Ongsakul, Weerakorn; Dieu, Vo Ngoc

    2011-06-01

    This paper proposes a self-organizing hierarchical particle swarm optimization (SPSO) with time-varying acceleration coefficients (TVAC) for solving the economic dispatch (ED) problem with non-smooth cost functions, including multiple fuel options (MFO) and valve-point loading effects (VPLE). The proposed SPSO with TVAC is a new optimizer with good performance for solving ED problems. It handles premature convergence by re-initializing the velocity whenever particles stagnate in the search space. TVAC is included to properly control both local and global exploration of the swarm during the optimization process. The proposed method is tested on different ED problems with non-smooth cost functions and the obtained results are compared to those from many other methods in the literature. The results reveal that the proposed SPSO with TVAC is effective in finding higher quality solutions for non-smooth ED problems than many other methods.
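
    A compact sketch of PSO with TVAC on a non-smooth toy objective follows; the cognitive weight c1 is ramped down while the social weight c2 is ramped up over the run, and a particle's velocity is crudely re-initialized when it stagnates, loosely imitating the SPSO rule. The test function and all constants are invented for illustration.

```python
# PSO with time-varying acceleration coefficients on a non-smooth,
# multimodal toy objective (sum of |x| plus a "valve-point"-like ripple).
# As in SPSO, there is no inertia term and stalled velocities are re-seeded.
import numpy as np

rng = np.random.default_rng(0)
dim, n, iters = 5, 30, 200

def cost(x):
    return np.sum(np.abs(x) + 0.3 * np.abs(np.sin(5.0 * x)), axis=-1)

x = rng.uniform(-10, 10, (n, dim)); v = np.zeros((n, dim))
pbest, pcost = x.copy(), cost(x)
g = pbest[np.argmin(pcost)]

for t in range(iters):
    c1 = 2.5 - 2.0 * t / iters     # cognitive weight: 2.5 -> 0.5
    c2 = 0.5 + 2.0 * t / iters     # social weight:    0.5 -> 2.5
    r1, r2 = rng.random((2, n, dim))
    v = c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    stalled = np.abs(v) < 1e-6     # crude stagnation check
    v[stalled] = rng.uniform(-1, 1, stalled.sum())  # re-initialize velocity
    x = np.clip(x + v, -10, 10)
    c = cost(x)
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
    g = pbest[np.argmin(pcost)]

print("best cost:", round(pcost.min(), 4))
```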

  8. Smoothing Forecasting Methods for Academic Library Circulations: An Evaluation and Recommendation.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.; Forys, John W., Jr.

    1986-01-01

    Circulation time-series data from 50 midwest academic libraries were used to test 110 variants of 8 smoothing forecasting methods. Data and methodologies and illustrations of two recommended methods--the single exponential smoothing method and Brown's one-parameter linear exponential smoothing method--are given. Eight references are cited. (EJS)
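
    The two recommended methods fit in a few lines each; below is a Python sketch of simple exponential smoothing and Brown's one-parameter linear (double) exponential smoothing on a made-up circulation series. In practice the smoothing constant would be tuned per library.

```python
# One-step-ahead forecasts from the two methods the study recommends.
import numpy as np

def ses(y, alpha):
    """Simple exponential smoothing."""
    f = np.empty(len(y)); f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def brown_linear(y, alpha):
    """Brown's one-parameter linear (double) exponential smoothing."""
    s1 = s2 = y[0]
    f = np.empty(len(y)); f[0] = y[0]
    for t in range(1, len(y)):
        s1 = alpha * y[t - 1] + (1 - alpha) * s1   # first smoothing pass
        s2 = alpha * s1 + (1 - alpha) * s2         # second smoothing pass
        level = 2 * s1 - s2
        trend = alpha / (1 - alpha) * (s1 - s2)
        f[t] = level + trend                       # one-step-ahead forecast
    return f

circ = np.array([1200, 1260, 1310, 1390, 1420, 1500, 1555, 1610.0])
print(ses(circ, 0.4)[-1], brown_linear(circ, 0.4)[-1])
```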

  9. Investigating the relation between the geometric properties of river basins and the filtering parameters for regional land hydrology applications using GRACE models

    NASA Astrophysics Data System (ADS)

    Piretzidis, Dimitrios; Sideris, Michael G.

    2016-04-01

    This study investigates the possibilities of local hydrology signal extraction using GRACE data and conventional filtering techniques. The impact of the basin shape has also been studied in order to derive empirical rules for tuning the GRACE filter parameters. GRACE CSR Release 05 monthly solutions were used from April 2002 to August 2015 (161 monthly solutions in total). SLR data were also used to replace the GRACE C2,0 coefficient, and a de-correlation filter with optimal parameters for CSR Release 05 data was applied to attenuate the correlation errors of monthly mass differences. For basins located at higher latitudes, the effect of Glacial Isostatic Adjustment (GIA) was taken into account using the ICE-6G model. The study focuses on three geometric properties, i.e., the area, the convexity and the width in the longitudinal direction, of 100 basins with global distribution. Two experiments have been performed. The first one deals with the determination of the Gaussian smoothing radius that minimizes the gaussianity of GRACE equivalent water height (EWH) over the selected basins. The EWH kurtosis was selected as a metric of gaussianity. The second experiment focuses on the derivation of the Gaussian smoothing radius that minimizes the RMS difference between GRACE data and a hydrology model. The GLDAS 1.0 Noah hydrology model was chosen, which shows good agreement with GRACE data according to previous studies. Early results show that there is an apparent relation between the geometric attributes of the basins examined and the Gaussian radius derived from the two experiments. The kurtosis analysis experiment tends to underestimate the optimal Gaussian radius, which is close to 200-300 km in many cases. Empirical rules for the selection of the Gaussian radius have been also developed for sub-regional scale basins.

  10. Study on fabrication technology of silicon-based silica array waveguide grating

    NASA Astrophysics Data System (ADS)

    Sun, Yanjun; Dong, Lianhe; Leng, Yanbing

    2009-05-01

    The arrayed waveguide grating (AWG) is an important planar optical element in dense wavelength division multiplexing/demultiplexing systems, with many virtues: large channel count, low loss, low crosstalk, small size and high reliability. This article describes an AWG fabrication process based on integrated circuit (IC) techniques, using a sixteen-channel silicon-based silica arrayed waveguide grating, with emphasis on the doping and deposition of the waveguide core film and on the process principles and relevant parameter conditions of photolithography and ion etching. Experimental results indicate that film deposition by CVD depends on the electrode structure, the radio-frequency electrode energy, the gas composition, pressure and flow rate, and the substrate temperature. During deposition of the waveguide film by PE-CVD, the silicon source does not react at first; when the temperature is lowered it reacts, making it easy to control the film thickness and deposition time, with a film thickness uniformity of about 4% after optimizing the deposition parameters and conditions. A high etching rate, well-controlled profile and smooth sidewalls were obtained by using a photoresist/Cr multilayer mask and optimizing the etching process.

  11. Profile Optimization Method for Robust Airfoil Shape Optimization in Viscous Flow

    NASA Technical Reports Server (NTRS)

    Li, Wu

    2003-01-01

    Simulation results obtained by using FUN2D for robust airfoil shape optimization in transonic viscous flow are included to show the potential of the profile optimization method for generating fairly smooth optimal airfoils with no off-design performance degradation.

  12. Generalized Scalar-on-Image Regression Models via Total Variation.

    PubMed

    Wang, Xiao; Zhu, Hongtu

    2017-01-01

    The use of imaging markers to predict clinical outcomes can have a great impact on public health. The aim of this paper is to develop a class of generalized scalar-on-image regression models via total variation (GSIRM-TV), in the sense of generalized linear models, for a scalar response and an imaging predictor in the presence of scalar covariates. A key novelty of GSIRM-TV is the assumption that the slope function (or image) of GSIRM-TV belongs to the space of bounded total variation, in order to explicitly account for the piecewise smooth nature of most imaging data. We develop an efficient penalized total variation optimization to estimate the unknown slope function and other parameters. We also establish nonasymptotic error bounds on the excess risk. These bounds are explicitly specified in terms of sample size, image size, and image smoothness. Our simulations demonstrate a superior performance of GSIRM-TV against many existing approaches. We apply GSIRM-TV to the analysis of hippocampus data obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset.
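
    A subgradient-descent sketch can illustrate the objective (the paper develops a more efficient penalized TV optimizer together with theory): images enter as flattened rows of the design matrix, and the coefficient image is penalized by its total variation, the sum of absolute neighbor differences.

```python
# Subgradient-descent sketch of scalar-on-image regression with a total
# variation penalty on the coefficient image. Problem sizes, lam and the
# step size are illustrative, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)
side, n = 12, 300
beta_true = np.zeros((side, side)); beta_true[3:8, 3:8] = 1.0  # piecewise image
X = rng.standard_normal((n, side * side))
y = X @ beta_true.ravel() + 0.1 * rng.standard_normal(n)

def tv_subgrad(b):
    g = np.zeros_like(b)
    dx = np.sign(np.diff(b, axis=1)); dy = np.sign(np.diff(b, axis=0))
    g[:, :-1] -= dx; g[:, 1:] += dx     # from sum |b[i, j+1] - b[i, j]|
    g[:-1, :] -= dy; g[1:, :] += dy     # from sum |b[i+1, j] - b[i, j]|
    return g

beta, lam, lr = np.zeros((side, side)), 0.1, 0.05
for _ in range(2000):
    resid = X @ beta.ravel() - y
    grad = (X.T @ resid / n).reshape(side, side) + lam * tv_subgrad(beta)
    beta -= lr * grad

print("recovered block mean:", round(float(beta[3:8, 3:8].mean()), 2))
```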

  13. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  14. High rate dry etching of (BiSb)2Te3 film by CH4/H2-based plasma

    NASA Astrophysics Data System (ADS)

    Song, Junqiang; Shi, Xun; Chen, Lidong

    2014-10-01

    Etching characteristics of p-type (BiSb)2Te3 films were studied with CH4/H2/Ar gas mixture using an inductively coupled plasma (ICP)-reactive ion etching (RIE) system. The effects of gas mixing ratio, working pressure and gas flow rate on the etch rate and the surface morphology were investigated. The vertical etched profile with the etch rate of 600 nm/min was achieved at the optimized processing parameters. X-ray photoelectron spectroscopy (XPS) analysis revealed the non-uniform etching of (BiSb)2Te3 films due to disparate volatility of the etching products. Micro-masking effects caused by polymer deposition and Bi-rich residues resulted in roughly etched surfaces. Smooth surfaces can be obtained by optimizing the CH4/H2/Ar mixing ratio.

  15. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

    Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen based on the approximation and stability conditions, rather than on the optimization of iteration convergence to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.
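
    For background, the classical Chebyshev iteration that such explicit schemes exploit can be sketched for a generic SPD system with spectrum contained in [lmin, lmax] (illustrative only; this is not the authors' parabolic time-stepping scheme):

      import numpy as np

      def chebyshev_iteration(A, b, lmin, lmax, x0=None, iters=50):
          # Chebyshev iteration for A x = b, SPD A with eigenvalues in
          # [lmin, lmax]; needs no inner products, only matrix-vector
          # products, which makes it attractive as an explicit scheme.
          x = np.zeros_like(b) if x0 is None else x0.copy()
          theta = 0.5 * (lmax + lmin)    # center of the spectrum
          delta = 0.5 * (lmax - lmin)    # half-width of the spectrum
          sigma = theta / delta
          rho = 1.0 / sigma
          r = b - A @ x
          d = r / theta
          for _ in range(iters):
              x = x + d
              r = r - A @ d
              rho_new = 1.0 / (2.0 * sigma - rho)
              d = rho_new * rho * d + (2.0 * rho_new / delta) * r
              rho = rho_new
          return x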

  16. Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation

    NASA Astrophysics Data System (ADS)

    Dwivedi, Shekhar

    2009-02-01

    Low-pass filters affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which in turn yields better quantification, correct diagnosis, and accurate interpretation by the physician. This study aims at evaluating low-pass filters for SPECT reconstruction algorithms. The criterion for evaluating the filters is the estimation of the SPECT-reconstructed cardiac azimuth and elevation angles. The low-pass filters studied are Butterworth, Gaussian, Hamming, Hanning, and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization), and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients, each with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning, and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves similarly for all the datasets using all the algorithms, whereas with OSEM for a cutoff < 0.4 it fails to generate the cardiac orientation due to oversmoothing, and it gives an unstable response with FBP and MLEM. This study of the effect of low-pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into the optimal selection of filter parameters.
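
    As a minimal illustration of the kind of filter under evaluation, a 2D frequency-domain Butterworth low-pass filter can be written as follows (SPECT packages apply such filters to projections or volumes, and cutoff conventions differ between systems, so treat the units below as assumptions):

      import numpy as np

      def butterworth_lowpass(image, cutoff, order):
          # Butterworth low-pass in the frequency domain; cutoff is in
          # cycles/pixel (0 < cutoff <= 0.5), order controls the roll-off.
          fy = np.fft.fftfreq(image.shape[0])[:, None]
          fx = np.fft.fftfreq(image.shape[1])[None, :]
          f = np.sqrt(fx**2 + fy**2)
          H = 1.0 / (1.0 + (f / cutoff) ** (2 * order))
          return np.fft.ifft2(np.fft.fft2(image) * H).real

    Lowering the cutoff strengthens the smoothing, which is consistent with the oversmoothing the study reports for Butterworth cutoffs below 0.4 with OSEM.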

  17. Bifurcation theory for finitely smooth planar autonomous differential systems

    NASA Astrophysics Data System (ADS)

    Han, Maoan; Sheng, Lijuan; Zhang, Xiang

    2018-03-01

    In this paper we establish a bifurcation theory of limit cycles for planar C^k smooth autonomous differential systems, with k ∈ N. The key point is to study the smoothness of bifurcation functions, which are a basic and important tool in the study of Hopf bifurcation at a fine focus or a center and of Poincaré bifurcation in a period annulus. We especially study the smoothness of the first-order Melnikov function in degenerate Hopf bifurcation at an elementary center. The smoothness problem was solved for analytic and C^∞ differential systems, but it had not been tackled for finitely smooth differential systems. Here, we present the optimal regularity of these bifurcation functions and their asymptotic expressions in the finitely smooth case.

  18. PV output smoothing using a battery and natural gas engine-generator.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Dean; Ellis, Abraham; Denda, Atsushi

    2013-02-01

    In some situations involving weak grids or high penetration scenarios, the variability of photovoltaic systems can affect the local electrical grid. In order to mitigate destabilizing effects of power fluctuations, an energy storage device or other controllable generation or load can be used. This paper describes the development of a controller for coordinated operation of a small gas engine-generator set (genset) and a battery for smoothing PV plant output. There are a number of benefits derived from using a traditional generation resource in combination with the battery; the variability of the photovoltaic system can be reduced to a specific level with a smaller battery and Power Conditioning System (PCS), and the lifetime of the battery can be extended. The controller was designed specifically for a PV/energy storage project (Prosperity) and a gas engine-generator (Mesa Del Sol) currently operating on the same feeder in Albuquerque, New Mexico. A number of smoothing simulations of the Prosperity PV were conducted using power data collected from the site. By adjusting the control parameters, tradeoffs between battery use and ramp rates could be tuned. A cost function was created to optimize the control in order to balance, in this example, the need to have low ramp rates with reducing battery size and operation. Simulations were performed for cases with only a genset or battery, and with and without coordinated control between the genset and battery, e.g., without the communication link between sites or during a communication failure. The degree of smoothing without coordinated control did not change significantly because the battery dominated the smoothing response. It is anticipated that this work will be followed by a field demonstration in the near future.

  19. Directional bilateral filters for smoothing fluorescence microscopy images

    NASA Astrophysics Data System (ADS)

    Venkatesh, Manasij; Mohan, Kavya; Seelamantula, Chandra Sekhar

    2015-08-01

    Images obtained through fluorescence microscopy at low numerical aperture (NA) are noisy and have poor resolution. Images of specimens such as F-actin filaments obtained using confocal or widefield fluorescence microscopes contain directional information, and it is important that an image smoothing or filtering technique preserve the directionality. F-actin filaments are widely studied in pathology because abnormalities in actin dynamics play a key role in the diagnosis of cancer, cardiac diseases, vascular diseases, myofibrillar myopathies, neurological disorders, etc. We develop the directional bilateral filter as a means of filtering out the noise in the image without significantly altering the directionality of the F-actin filaments. The bilateral filter is anisotropic to start with, but we add an additional degree of anisotropy by employing an oriented domain kernel for smoothing. The orientation is locally adapted using a structure tensor, and the parameters of the bilateral filter are optimized within the framework of statistical risk minimization. We show that the directional bilateral filter has better denoising performance than the traditional Gaussian bilateral filter and other denoising techniques such as SURE-LET, non-local means, and guided image filtering at various noise levels in terms of peak signal-to-noise ratio (PSNR). We also show quantitative improvements in low NA images of F-actin filaments.
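
    A minimal sketch of a directional bilateral filter with one fixed, global orientation (in the paper the orientation is adapted per pixel from the structure tensor and the bandwidths are chosen by statistical risk minimization, so all parameter values and names below are simplifying assumptions):

      import numpy as np

      def directional_bilateral(img, theta, sigma_u=3.0, sigma_v=1.0,
                                sigma_r=0.1, half=7):
          # Oriented anisotropic Gaussian domain kernel: sigma_u acts along
          # the direction theta, sigma_v across it; sigma_r is the range
          # (intensity) bandwidth of the bilateral filter.
          ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
          u = xs * np.cos(theta) + ys * np.sin(theta)
          v = -xs * np.sin(theta) + ys * np.cos(theta)
          domain = np.exp(-0.5 * (u**2 / sigma_u**2 + v**2 / sigma_v**2))
          pad = np.pad(img, half, mode='reflect')
          out = np.zeros_like(img)
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  patch = pad[i:i + 2 * half + 1, j:j + 2 * half + 1]
                  rng = np.exp(-0.5 * (patch - img[i, j])**2 / sigma_r**2)
                  w = domain * rng
                  out[i, j] = np.sum(w * patch) / np.sum(w)
          return out

    Setting sigma_u much larger than sigma_v smooths along a filament while preserving its edges, which is the behavior the directional filter is designed to achieve.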

  20. An impact analysis of forecasting methods and forecasting parameters on bullwhip effect

    NASA Astrophysics Data System (ADS)

    Silitonga, R. Y. H.; Jelly, N.

    2018-04-01

    The bullwhip effect is an increase in the variance of demand fluctuations from the downstream to the upstream end of a supply chain. Forecasting methods and forecasting parameters have been recognized as factors that affect the bullwhip phenomenon. To study these factors, simulations can be developed; previous studies have simulated the bullwhip effect via mathematical equation modelling, information control modelling, computer programs, and more. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show the change in the bullwhip effect ratio caused by differences in forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving-average period, the smoothing parameter, the signalling factor, and the safety stock factor. The simulations showed that decreasing the moving-average period, increasing the smoothing parameter, and increasing the signalling factor can create a bigger bullwhip effect ratio, whereas the safety stock factor had no impact on the bullwhip effect.
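
    A minimal simulation in this spirit, assuming an order-up-to inventory policy with exponential-smoothing forecasts (the policy details and constants are illustrative assumptions, not the internals of Bullwhip Explorer):

      import numpy as np

      def bullwhip_ratio(alpha=0.3, lead_time=2, z=1.65, n=20000, seed=0):
          # Var(orders) / Var(demand): values above 1 indicate a bullwhip.
          rng = np.random.default_rng(seed)
          demand = rng.normal(100.0, 10.0, n)
          forecast, sigma_hat = demand[0], 10.0
          prev_S = lead_time * forecast + z * sigma_hat
          orders = []
          for d in demand:
              forecast = alpha * d + (1 - alpha) * forecast
              S = lead_time * forecast + z * sigma_hat  # order-up-to level
              orders.append(max(0.0, d + S - prev_S))
              prev_S = S
          return np.var(orders) / np.var(demand)

    Comparing bullwhip_ratio(alpha=0.1) with bullwhip_ratio(alpha=0.5) reproduces the qualitative finding that a larger smoothing parameter yields a larger bullwhip effect ratio.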

  1. Geometry characteristics modeling and process optimization in coaxial laser inside wire cladding

    NASA Astrophysics Data System (ADS)

    Shi, Jianjun; Zhu, Ping; Fu, Geyan; Shi, Shihong

    2018-05-01

    The coaxial laser inside wire cladding method is very promising, as it has a very high efficiency and a consistent interaction between the laser and the wire. In this paper, the energy and mass conservation laws and a regression algorithm are used together to establish mathematical models of the relationship between the layer geometry characteristics (width, height, and cross-section area) and the process parameters (laser power, scanning velocity, and wire feeding speed). Over the selected parameter ranges, the values predicted by the models are compared with the experimentally measured results; minor errors exist, but both reflect the same trends. From the models, it is seen that the width of the cladding layer is proportional to both the laser power and the wire feeding speed, while it first increases and then decreases with increasing scanning velocity. The height of the cladding layer is proportional to the scanning velocity and feeding speed and inversely proportional to the laser power. The cross-section area increases with increasing feeding speed and decreasing scanning velocity. Using the mathematical models, the geometry characteristics of the cladding layer can be predicted from known process parameters; conversely, the process parameters can be calculated from target geometry characteristics. The models are also suitable for the multi-layer forming process. Using the optimized process parameters calculated from the models, a 45 mm-high thin-wall part was formed with smooth side surfaces.
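
    The paper's exact regression form is not reproduced here; as a hedged sketch, a common choice for such process models is a power law fitted by ordinary least squares in log space (all names are illustrative):

      import numpy as np

      def fit_power_law(P, v, f, w):
          # Fit w = c * P**a * v**b * f**d for layer width w from laser
          # power P, scanning velocity v, and wire feeding speed f.
          X = np.column_stack([np.ones_like(P), np.log(P),
                               np.log(v), np.log(f)])
          coef, *_ = np.linalg.lstsq(X, np.log(w), rcond=None)
          log_c, a, b, d = coef
          return np.exp(log_c), a, b, d

    Note that a pure power law is monotone in each variable, so capturing the reported rise-then-fall of width with scanning velocity would require extra terms (e.g. a quadratic in log v).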

  2. Optimal swimming of a sheet.

    PubMed

    Montenegro-Johnson, Thomas D; Lauga, Eric

    2014-06-01

    Propulsion at microscopic scales is often achieved through propagating traveling waves along hairlike organelles called flagella. Taylor's two-dimensional swimming sheet model is frequently used to provide insight into problems of flagellar propulsion. We derive numerically the large-amplitude wave form of the two-dimensional swimming sheet that yields optimum hydrodynamic efficiency: the ratio of the squared swimming speed to the rate-of-working of the sheet against the fluid. Using the boundary element method, we show that the optimal wave form is a front-back symmetric regularized cusp that is 25% more efficient than the optimal sine wave. This optimal two-dimensional shape is smooth, qualitatively different from the kinked form of Lighthill's optimal three-dimensional flagellum, not predicted by small-amplitude theory, and different from the smooth circular-arc-like shape of active elastic filaments.

  3. Optimal Synthesis of Compliant Mechanisms using Subdivision and Commercial FEA (DETC2004-57497)

    NASA Technical Reports Server (NTRS)

    Hull, Patrick V.; Canfield, Stephen

    2004-01-01

    The field of distributed-compliance mechanisms has seen significant work in developing suitable topology optimization tools for their design. These optimal design tools have grown out of the techniques of structural optimization. This paper will build on the previous work in topology optimization and compliant mechanism design by proposing an alternative design space parameterization through control points and adding another step to the process, that of subdivision. The control points allow a specific design to be represented as a solid model during the optimization process. The process of subdivision creates an additional number of control points that help smooth the surface (for example, a C^2 continuous surface, depending on the method of subdivision chosen), creating a manufacturable design free of some traditional numerical instabilities. Note that these additional control points do not add to the number of design parameters. This alternative parameterization and description as a solid model effectively and completely separates the design variables from the analysis variables during the optimization procedure. The motivation behind this work is to create an automated design tool from task definition to functional prototype created on a CNC or rapid-prototype machine. This paper will describe the proposed compliant mechanism design process and will demonstrate the procedure on several examples common in the literature.

  4. Optimization of equivalent uniform dose using the L-curve criterion.

    PubMed

    Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R

    2007-10-07

    Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning.
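
    The L-curve criterion itself can be sketched for generic Tikhonov regularization (the IMRT objective above is EUD-based rather than quadratic, so this only illustrates how the criterion selects the regularization parameter):

      import numpy as np

      def l_curve_lambda(A, b, lambdas):
          # For each lam, solve min ||A x - b||^2 + lam ||x||^2 and record
          # the log residual norm and log solution norm; the L-curve corner
          # (maximum curvature in the log-log plane) balances data fit
          # against smoothness of the solution.
          r, s = [], []
          for lam in lambdas:
              x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
              r.append(np.log(np.linalg.norm(A @ x - b)))
              s.append(np.log(np.linalg.norm(x)))
          r, s = np.array(r), np.array(s)
          dr, ds = np.gradient(r), np.gradient(s)
          ddr, dds = np.gradient(dr), np.gradient(ds)
          kappa = (dr * dds - ds * ddr) / (dr**2 + ds**2) ** 1.5
          return lambdas[np.argmax(kappa)]

    A logarithmically spaced grid, e.g. np.logspace(-6, 2, 50), is the usual choice for lambdas.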

  5. A note on the regularity of solutions of infinite dimensional Riccati equations

    NASA Technical Reports Server (NTRS)

    Burns, John A.; King, Belinda B.

    1994-01-01

    This note is concerned with the regularity of solutions of algebraic Riccati equations arising from infinite dimensional LQR and LQG control problems. We show that distributed parameter systems described by certain parabolic partial differential equations often have a special structure that smoothes solutions of the corresponding Riccati equation. This analysis is motivated by the need to find specific representations for Riccati operators that can be used in the development of computational schemes for problems where the input and output operators are not Hilbert-Schmidt. This situation occurs in many boundary control problems and in certain distributed control problems associated with optimal sensor/actuator placement.

  6. Diffraction-geometry refinement in the DIALS framework

    DOE PAGES

    Waterman, David G.; Winter, Graeme; Gildea, Richard J.; ...

    2016-03-30

    Rapid data collection and modern computing resources provide the opportunity to revisit the task of optimizing the model of diffraction geometry prior to integration. A comprehensive description is given of new software that builds upon established methods by performing a single global refinement procedure, utilizing a smoothly varying model of the crystal lattice where appropriate. This global refinement technique extends to multiple data sets, providing useful constraints to handle the problem of correlated parameters, particularly for small wedges of data. Examples of advanced uses of the software are given and the design is explained in detail, with particular emphasis on the flexibility and extensibility it entails.

  7. On splice site prediction using weight array models: a comparison of smoothing techniques

    NASA Astrophysics Data System (ADS)

    Taher, Leila; Meinicke, Peter; Morgenstern, Burkhard

    2007-11-01

    In most eukaryotic genes, protein-coding exons are separated by non-coding introns which are removed from the primary transcript by a process called "splicing". The positions where introns are cut and exons are spliced together are called "splice sites". Thus, computational prediction of splice sites is crucial for gene finding in eukaryotes. Weight array models are a powerful probabilistic approach to splice site detection. Parameters for these models are usually derived from m-tuple frequencies in trusted training data and subsequently smoothed to avoid zero probabilities. In this study we compare three different ways of parameter estimation for m-tuple frequencies, namely (a) non-smoothed probability estimation, (b) standard pseudo counts and (c) a Gaussian smoothing procedure that we recently developed.
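
    Option (b), standard pseudo counts, amounts to additive smoothing of the m-tuple counts (the uniform pseudo-count alpha below is a generic choice, not the paper's setting):

      import numpy as np

      def pseudocount_probs(counts, alpha=1.0):
          # Every tuple receives alpha extra observations, so no estimated
          # probability is exactly zero even for tuples absent from the
          # trusted training data.
          counts = np.asarray(counts, dtype=float)
          return (counts + alpha) / (counts.sum() + alpha * counts.size)

      # e.g. pseudocount_probs([12, 0, 3, 5])
      #  -> array([0.5417, 0.0417, 0.1667, 0.25  ])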

  8. A long-term earthquake rate model for the central and eastern United States from smoothed seismicity

    USGS Publications Warehouse

    Moschetti, Morgan P.

    2015-01-01

    I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
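
    A minimal sketch of adaptive kernel smoothing in this spirit, where each event's Gaussian bandwidth is its distance to the k-th nearest neighboring event, floored at d_min to stabilize low-seismicity regions (the USGS model's normalization, catalog weighting, and completeness corrections are omitted):

      import numpy as np

      def adaptive_smoothed_rates(eq_xy, grid_xy, k=2, d_min=5.0):
          # eq_xy: (n, 2) epicenters; grid_xy: (m, 2) cell centers (km).
          d = np.linalg.norm(eq_xy[:, None, :] - eq_xy[None, :, :], axis=2)
          d.sort(axis=1)                      # d[:, 0] is the self-distance 0
          h = np.maximum(d[:, k], d_min)      # adaptive bandwidth per event
          g = np.linalg.norm(grid_xy[:, None, :] - eq_xy[None, :, :], axis=2)
          kern = np.exp(-0.5 * (g / h) ** 2) / (2 * np.pi * h**2)
          return kern.sum(axis=1)             # relative rate per grid cell

    Bandwidths shrink in dense clusters and grow where events are sparse, which is what concentrates the forecast near clusters of high seismic activity.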

  9. Optimal pollution trading without pollution reductions

    EPA Science Inventory

    Many kinds of water pollution occur in pulses, e.g., agricultural and urban runoff. Ecosystems, such as wetlands, can serve to regulate these pulses and smooth pollution distributions over time. This smoothing reduces total environmental damages when “instantaneous” damages are m...

  10. A deterministic global optimization using smooth diagonal auxiliary functions

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.

    2015-04-01

    In many practical decision-making problems, the functions involved in the optimization process are black-box, with unknown analytical representations, and hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version, its convergence conditions are studied, and numerical experiments executed on eight hundred test functions are presented.

  11. Quantitative characterization of material surface — Application to Ni + Mo electrolytic composite coatings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kubisztal, J., E-mail: julian.kubisztal@us.edu.pl

    A new approach to the numerical analysis of maps of material surfaces has been proposed and discussed in detail. It was concluded that the roughness factor RF and the root-mean-square roughness S_q show a saturation effect with increasing size of the analysed maps, which allows determining the optimal map dimension representative of the examined material. A quantitative method for determining the predominant direction of the surface texture, based on the power spectral density function, is also proposed and discussed. The elaborated method was applied in the surface analysis of Ni + Mo composite coatings. It was shown that co-deposition of molybdenum particles in the nickel matrix leads to an increase in surface roughness. In addition, a decrease in the size of the Mo particles embedded in the Ni matrix causes an increase in both the surface roughness and the surface texture. It was also stated that the relation between the roughness factor and the double-layer capacitance C_dl of the studied coatings is linear and allows determining the double-layer capacitance of the smooth nickel electrode. - Highlights: •Optimization of the procedure for the scanning of the material surface •Quantitative determination of the surface roughness and texture intensity •Proposition of the parameter describing the privileged direction of the surface texture •Determination of the double-layer capacitance of the smooth electrode.
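
    For reference, the root-mean-square roughness of a measured height map is directly computable (a generic definition of S_q; the roughness factor RF is an electrochemically determined area ratio and is not reproduced here):

      import numpy as np

      def rms_roughness(z):
          # S_q of a 2D height map z, measured about the mean plane.
          dz = z - z.mean()
          return np.sqrt(np.mean(dz**2))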

  12. Staggered Mesh Ewald: An extension of the Smooth Particle-Mesh Ewald method adding great versatility

    PubMed Central

    Cerutti, David S.; Duke, Robert E.; Darden, Thomas A.; Lybrand, Terry P.

    2009-01-01

    We draw on an old technique for improving the accuracy of mesh-based field calculations to extend the popular Smooth Particle Mesh Ewald (SPME) algorithm as the Staggered Mesh Ewald (StME) algorithm. StME improves the accuracy of computed forces by up to 1.2 orders of magnitude and also reduces the drift in system momentum inherent in the SPME method by averaging the results of two separate reciprocal space calculations. StME can use charge mesh spacings roughly 1.5× larger than SPME to obtain comparable levels of accuracy; the one mesh in an SPME calculation can therefore be replaced with two separate meshes, each less than one third of the original size. Coarsening the charge mesh can be balanced with reductions in the direct space cutoff to optimize performance: the efficiency of StME rivals or exceeds that of SPME calculations with similarly optimized parameters. StME may also offer advantages for parallel molecular dynamics simulations because it permits the use of coarser meshes without requiring higher orders of charge interpolation and also because the two reciprocal space calculations can be run independently if that is most suitable for the machine architecture. We are planning other improvements to the standard SPME algorithm, and anticipate that StME will work synergistically with all of them to dramatically improve the efficiency and parallel scaling of molecular simulations. PMID:20174456

  13. Design optimization of a smooth headlamp reflector to SAE/DOT beam-shape requirements

    NASA Astrophysics Data System (ADS)

    Shatz, Narkis E.; Bortz, John C.; Dassanayake, Mahendra S.

    1999-10-01

    The optical design of Ford Motor Company's 1992 Mercury Grand Marquis headlamp utilized a Sylvania 9007 filament source, a paraboloidal reflector, and an array of cylindrical lenses (flutes). It has been of interest to Ford to determine the practicality of closely reproducing the on-road beam pattern performance of this headlamp with an alternate optical arrangement whereby control of the beam is achieved solely by means of the geometry of the reflector surface, subject to a requirement of smooth-surface continuity, with the outer lens replaced by a clear plastic cover having no beam-forming function. To this end the far-field intensity distribution produced by the 9007 bulb was measured at the low-beam setting. These measurements were then used to develop a light-source model for use in ray tracing simulations of candidate reflector geometries. An objective function was developed to compare candidate beam patterns with the desired beam pattern. Functional forms for the 3D reflector geometry were developed with free parameters to be subsequently optimized. A solution was sought meeting the detailed US SAE/DOT constraints for minimum and maximum permissible levels of illumination in the different portions of the beam pattern. Simulated road scenes were generated by Ford Motor Company to compare the illumination properties of the new design with those of the original Grand Marquis headlamp.

  14. Additive Manufacturing of Syntactic Foams: Part 2: Specimen Printing and Mechanical Property Characterization

    NASA Astrophysics Data System (ADS)

    Singh, Ashish Kumar; Saltonstall, Brooks; Patil, Balu; Hoffmann, Niklas; Doddamani, Mrityunjay; Gupta, Nikhil

    2018-03-01

    High-density polyethylene (HDPE) and its fly ash cenosphere-filled syntactic foam filaments have been recently developed. These filaments are used for three-dimensional (3D) printing using a commercial printer. The developed syntactic foam filament (HDPE40) contains 40 wt.% cenospheres in the HDPE matrix. Printing parameters for HDPE and HDPE40 were optimized for use in widely available commercial printers, and specimens were 3D printed for tensile testing at a strain rate of 10⁻³ s⁻¹. Process optimization resulted in smooth operation of the 3D printer without nozzle clogging or cenosphere fracture during the printing process. Characterization results revealed that the tensile modulus values of 3D-printed HDPE and HDPE40 specimens were higher than those of injection-molded specimens, while the tensile strength was comparable, but the fracture strain and density were lower.

  15. Development and design of flexible Fowler flaps for an adaptive wing

    NASA Astrophysics Data System (ADS)

    Monner, Hans P.; Hanselka, Holger; Breitbach, Elmar J.

    1998-06-01

    Civil transport airplanes fly with fixed-geometry wings optimized for only one design point, described by altitude, Mach number, and airplane weight. These parameters vary continuously during flight, which means the wing geometry is seldom optimal. According to aerodynamic investigations, a chordwise variation of the wing camber leads to improvements in operational flexibility, buffet boundaries, and performance, resulting in reduced fuel consumption. A spanwise differential camber variation makes it possible to control spanwise lift distributions, reducing wing-root bending moments. This paper describes the design of flexible Fowler flaps for an adaptive wing to be used in civil transport aircraft, allowing both chordwise and spanwise differential camber variation during flight. Since both the lower and upper skins are flexed by active ribs, the camber variation is achieved with a smooth contour and without any additional gaps.

  16. Ocean data assimilation using optimal interpolation with a quasi-geostrophic model

    NASA Technical Reports Server (NTRS)

    Rienecker, Michele M.; Miller, Robert N.

    1991-01-01

    A quasi-geostrophic (QG) stream function is analyzed by optimal interpolation (OI) over a 59-day period in a 150-km-square domain off northern California. Hydrographic observations acquired over five surveys were assimilated into a QG open boundary ocean model. Assimilation experiments were conducted separately for individual surveys to investigate the sensitivity of the OI analyses to parameters defining the decorrelation scale of an assumed error covariance function. The analyses were intercompared through dynamical hindcasts between surveys. The best hindcast was obtained using the smooth analyses produced with assumed error decorrelation scales identical to those of the observed stream function. The rms difference between the hindcast stream function and the final analysis was only 23 percent of the observation standard deviation. The two sets of OI analyses were temporally smoother than the fields from statistical objective analysis and in good agreement with the only independent data available for comparison.

  17. Optimal Pollution Trading without Pollution Reductions : A Note

    EPA Science Inventory

    Many kinds of water pollution occur in pulses, e.g., agricultural and urban runoff. Ecosystems, such as wetlands, can serve to regulate these pulses and smooth pollution distributions over time. This smoothing reduces total environmental damages when “instantaneous” damages are m...

  18. Training-based descreening.

    PubMed

    Siddiqui, Hasib; Bouman, Charles A

    2007-03-01

    Conventional halftoning methods employed in electrophotographic printers tend to produce Moiré artifacts when used for printing images scanned from printed material, such as books and magazines. We present a novel approach for descreening color scanned documents aimed at providing an efficient solution to the Moiré problem in practical imaging devices, including copiers and multifunction printers. The algorithm works by combining two nonlinear image-processing techniques: resolution synthesis-based denoising (RSD) and modified smallest univalue segment assimilating nucleus (SUSAN) filtering. The RSD predictor is based on a stochastic image model whose parameters are optimized beforehand in a separate training procedure. Using the optimized parameters, RSD classifies the local window around the current pixel in the scanned image and applies filters optimized for the selected classes. The output of the RSD predictor is treated as a first-order estimate of the descreened image. The modified SUSAN filter uses the output of RSD for performing an edge-preserving smoothing on the raw scanned data and produces the final output of the descreening algorithm. Our method does not require any knowledge of the screening method, such as the screen frequency or dither matrix coefficients, that produced the printed original. The proposed scheme not only suppresses the Moiré artifacts, but, in addition, can be trained with intrinsic sharpening for deblurring scanned documents. Finally, once optimized for a periodic clustered-dot halftoning method, the same algorithm can be used to inverse halftone scanned images containing stochastic error diffusion halftone noise.

  19. Dynamic Bayesian wavelet transform: New methodology for extraction of repetitive transients

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tsui, Kwok-Leung

    2017-05-01

    Building on recent research, the dynamic Bayesian wavelet transform is proposed in this short communication as a new methodology for extracting repetitive transients, so as to reveal fault signatures hidden in rotating machines. The main idea of the dynamic Bayesian wavelet transform is to iteratively estimate posterior parameters of the wavelet transform via artificial observations and dynamic Bayesian inference. First, a prior wavelet parameter distribution can be established by one of many fast detection algorithms, such as the fast kurtogram, the improved kurtogram, the enhanced kurtogram, the sparsogram, the infogram, the continuous wavelet transform, the discrete wavelet transform, wavelet packets, multiwavelets, the empirical wavelet transform, empirical mode decomposition, local mean decomposition, etc. Second, artificial observations can be constructed based on one of many metrics able to quantify repetitive transients, such as kurtosis, the sparsity measurement, entropy, approximate entropy, the smoothness index, a synthesized criterion, etc. Finally, given the artificial observations, the prior wavelet parameter distribution can be posteriorly updated over iterations using dynamic Bayesian inference. More importantly, the proposed methodology can be extended to establish the optimal parameters required by many other signal processing methods for the extraction of repetitive transients.

  20. The construction of support vector machine classifier using the firefly algorithm.

    PubMed

    Chao, Chih-Feng; Horng, Ming-Huwi

    2015-01-01

    The setting of parameters in support vector machines (SVMs) is very important with regard to their accuracy and efficiency. In this paper, we employ the firefly algorithm to train all parameters of the SVM simultaneously, including the penalty parameter, the smoothness parameter, and the Lagrangian multiplier. The proposed method is called the firefly-based SVM (firefly-SVM). This tool does not consider feature selection, because an SVM combined with feature selection is not suitable for multiclass classification, especially for the one-against-all multiclass SVM. In the experiments, binary and multiclass classifications are explored. For binary classification, ten benchmark data sets from the University of California, Irvine (UCI) machine learning repository are used; additionally, the firefly-SVM is applied to the multiclass diagnosis of ultrasonic supraspinatus images. The classification performance of firefly-SVM is also compared with the original LIBSVM method combined with grid search and with the particle swarm optimization based SVM (PSO-SVM). The experimental results advocate the use of firefly-SVM for pattern classification tasks requiring maximum accuracy.

  1. The Construction of Support Vector Machine Classifier Using the Firefly Algorithm

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi

    2015-01-01

    The setting of parameters in support vector machines (SVMs) is very important with regard to their accuracy and efficiency. In this paper, we employ the firefly algorithm to train all parameters of the SVM simultaneously, including the penalty parameter, the smoothness parameter, and the Lagrangian multiplier. The proposed method is called the firefly-based SVM (firefly-SVM). This tool does not consider feature selection, because an SVM combined with feature selection is not suitable for multiclass classification, especially for the one-against-all multiclass SVM. In the experiments, binary and multiclass classifications are explored. For binary classification, ten benchmark data sets from the University of California, Irvine (UCI) machine learning repository are used; additionally, the firefly-SVM is applied to the multiclass diagnosis of ultrasonic supraspinatus images. The classification performance of firefly-SVM is also compared with the original LIBSVM method combined with grid search and with the particle swarm optimization based SVM (PSO-SVM). The experimental results advocate the use of firefly-SVM for pattern classification tasks requiring maximum accuracy. PMID:25802511

  2. Investigation of the Specht density estimator

    NASA Technical Reports Server (NTRS)

    Speed, F. M.; Rydl, L. M.

    1971-01-01

    The feasibility of using the Specht density estimator function on the IBM 360/44 computer is investigated. Factors such as storage, speed, amount of calculations, size of the smoothing parameter and sample size have an effect on the results. The reliability of the Specht estimator for normal and uniform distributions and the effects of the smoothing parameter and sample size are investigated.
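
    The Specht estimator is a Parzen-window density estimate; with a Gaussian kernel it can be sketched as follows (a generic formulation, with sigma the smoothing parameter whose effect the study investigates):

      import numpy as np

      def parzen_density(x, samples, sigma):
          # Average of Gaussian kernels centered on the training samples,
          # evaluated at the query points x.
          x = np.atleast_1d(x)[:, None]
          k = np.exp(-0.5 * ((x - samples[None, :]) / sigma) ** 2)
          return k.mean(axis=1) / (sigma * np.sqrt(2 * np.pi))

    Small sigma yields spiky estimates that follow the sample, while large sigma oversmooths; storage and computation scale with the sample size, which is the practical concern the study examines.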

  3. Holt-Winters Forecasting: A Study of Practical Applications for Healthcare Managers

    DTIC Science & Technology

    2006-05-25

    Front-matter excerpts recovered from the report: Table 1 lists the Holt-Winters smoothing parameters and mean absolute percentage errors for pseudoephedrine prescriptions; Table 2 concerns the associated confidence intervals; Figure 1 is a line plot of the pseudoephedrine prescription forecast produced with the fitted smoothing parameters. The first data series represents monthly prescriptions of pseudoephedrine, a drug commonly prescribed to relieve nasal congestion.
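
    The method the report applies, additive Holt-Winters (triple exponential) smoothing, can be sketched as follows (generic initialization over the first two seasons; the report's fitted parameter values are not reproduced):

      import numpy as np

      def holt_winters_additive(y, m, alpha, beta, gamma, horizon=12):
          # Level, trend, and m seasonal states updated by the smoothing
          # parameters alpha, beta, gamma; requires len(y) >= 2 * m.
          level = np.mean(y[:m])
          trend = (np.mean(y[m:2 * m]) - np.mean(y[:m])) / m
          season = list(y[:m] - level)
          for t in range(m, len(y)):
              s = season[-m]
              new_level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
              trend = beta * (new_level - level) + (1 - beta) * trend
              season.append(gamma * (y[t] - new_level) + (1 - gamma) * s)
              level = new_level
          return [level + (h + 1) * trend + season[-m + (h % m)]
                  for h in range(horizon)]

    For monthly prescription data, m = 12, and the three parameters would be chosen to minimize an error measure such as the mean absolute percentage error.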

  4. An improved multi-paths optimization method for video stabilization

    NASA Astrophysics Data System (ADS)

    Qin, Tao; Zhong, Sheng

    2018-03-01

    For video stabilization, the difference between the original camera motion path and the optimized one is proportional to the cropping ratio and warping ratio. A good optimized path should preserve the moving tendency of the original one, while the cropping ratio and warping ratio of each frame are kept in a proper range. In this paper we use an improved warping-based motion representation model and propose a Gaussian-based multi-path optimization method to obtain a smooth path and a stabilized video. The proposed video stabilization method consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform of adjacent frames according to the warping-based motion representation model. It works well on some challenging videos where most previous 2D or 3D methods fail for lack of long feature trajectories. The multi-path optimization method deals well with parallax: we calculate the space-time correlation of adjacent grid cells, and a Gaussian kernel is then used to weight the motion of adjacent grid cells. The multiple paths are then smoothed while minimizing the cropping ratio and the distortion. We test our method on a large variety of consumer videos that exhibit casual jitter and parallax, and achieve good results.
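
    The Gaussian-weighted smoothing at the core of such path optimization can be sketched for a single 1D motion path (the paper additionally weights by the space-time correlation of adjacent grid cells and constrains cropping and distortion, all of which this sketch omits):

      import numpy as np

      def smooth_camera_path(path, sigma=15.0, radius=30):
          # Gaussian-weighted moving average of a per-frame motion path,
          # e.g. cumulative x-translation; the stabilizing correction for
          # frame t is smoothed[t] - path[t], applied as a warp.
          t = np.arange(-radius, radius + 1)
          w = np.exp(-0.5 * (t / sigma) ** 2)
          pad = np.pad(path, radius, mode='edge')
          return np.array([np.dot(w, pad[i:i + 2 * radius + 1]) / w.sum()
                           for i in range(len(path))])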

  5. Design and optimization of Artificial Neural Networks for the modelling of superconducting magnets operation in tokamak fusion reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Froio, A.; Bonifetto, R.; Carli, S.

    In superconducting tokamaks, the cryoplant provides the helium needed to cool different clients, among which by far the most important one is the superconducting magnet system. The evaluation of the transient heat load from the magnets to the cryoplant is fundamental for the design of the latter, and the assessment of suitable strategies to smooth the heat load pulses, induced by the intrinsically pulsed plasma scenarios characteristic of today's tokamaks, is crucial for both suitable sizing and stable operation of the cryoplant. For that evaluation, accurate but expensive system-level models, as implemented in e.g. the validated state-of-the-art 4C code, were developed in the past, including both the magnets and the respective external cryogenic cooling circuits. Here we show how these models can be successfully substituted with cheaper ones, where the magnets are described by suitably trained Artificial Neural Networks (ANNs) for the evaluation of the heat load to the cryoplant. First, two simplified thermal-hydraulic models for an ITER Toroidal Field (TF) magnet and for the ITER Central Solenoid (CS) are developed, based on ANNs, and a detailed analysis of the chosen networks' topology and parameters is presented and discussed. The ANNs are then inserted into the 4C model of the ITER TF and CS cooling circuits, which also includes active controls to achieve a smoothing of the variation of the heat load to the cryoplant. The training of the ANNs is achieved using the results of full 4C simulations (including detailed models of the magnets) for conventional sigmoid-like waveforms of the drivers, and the predictive capabilities of the ANN-based models in the case of actual ITER operating scenarios are demonstrated by comparison with the results of full 4C runs, both with and without active smoothing, in terms of both accuracy and computational time. Exploiting the low computational effort requested by the ANN-based models, a demonstrative optimization study has been finally carried out, with the aim of choosing among different smoothing strategies for the standard ITER plasma operation.

  6. Fractional Klein-Gordon equation composed of Jumarie fractional derivative and its interpretation by a smoothness parameter

    NASA Astrophysics Data System (ADS)

    Ghosh, Uttam; Banerjee, Joydip; Sarkar, Susmita; Das, Shantanu

    2018-06-01

    The Klein-Gordon equation is one of the basic steps towards relativistic quantum mechanics. In this paper, we formulate a fractional Klein-Gordon equation via the Jumarie fractional derivative and find two types of solutions: the zero-mass solution satisfies the photon criteria, and the non-zero-mass solution satisfies the general theory of relativity. Further, we develop a rest-mass condition that leads us to the concept of a hidden wave. The classical Klein-Gordon equation fails to describe a chargeless system as well as a single-particle system; using the fractional Klein-Gordon equation, we can overcome this problem. The fractional Klein-Gordon equation also leads to a smoothness parameter, which is a measure of the bumpiness of space. Here, using this smoothness parameter, we define and interpret the various cases.

  7. Smoothing of geoelectrical resistivity profiles in order to build a 3D model: A case study from an outcropping limestone block

    NASA Astrophysics Data System (ADS)

    Tóth, Krisztina; Kovács, Gábor

    2014-05-01

    Geoelectrical imaging is one of the most common survey methods in shallow geophysics; to obtain information about the subsurface, an electric current is injected into the ground. During a summer field camp organized by the Department of Geophysics and Space Sciences, Eötvös Loránd University, we carried out resistivity surveys to obtain more accurate information about the lithology of the Dorog basin, located in the Transdanubian Range, central Hungary. This study focused on the limestone block outcropping next to the village of Leányvár in the Dorog basin. The main aim of the research was to delineate the subsurface continuation of the limestone outcrop. Cable problems occurred during the field survey, so the measured dataset was very noisy and had to be smoothed through appropriate editing steps. The goal was to produce an optimized model representing the true subsurface structure. To obtain better results from the noisy dataset we adjusted several parameters following the program documentation. Because of the cable problems, we removed bad data points both visually and statistically. Owing to the noise, we increased the value of the so-called damping factor, a variable parameter in the equation used by the inversion routine that is responsible for smoothing the data. Limiting the range of model resistivity values, based on our knowledge of the geological environment, was also necessary to avoid physically unrealistic results. The purpose of these modifications was to obtain smoothed and more interpretable geoelectric profiles. The geological background, combined with the interpretation of the profiles, gave us the approximate location of the block. In the final step of the research we created a 3D model with the proper location and the smoothed resistivity data included. This study was supported by the Hungarian Scientific Research Fund (OTKA NK83400) and was realized in the frames of the TÁMOP 4.2.4.A/2-11-1-2012-0001 high priority "National Excellence Program - Elaborating and Operating an Inland Student and Researcher Personal Support System" convergence program project's scholarship support.

  8. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    NASA Astrophysics Data System (ADS)

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

    Evolutionary strategies allow automatic calibration of more complex models than traditional gradient-based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but which, when combined, have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but that, when selected near a smooth local minimum, can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their synergetic effect is greater than the sum of their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades, at worst, to an ordinary evolutionary strategy if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. A preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. References: Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.), Towards a New Evolutionary Computation: Advances in Estimation of Distribution Algorithms, pp. 75-102. Springer. Kern, S., Hansen, N. and Koumoutsakos, P. (2006). Local Meta-Models for Optimization Using Evolution Strategies. In Ninth International Conference on Parallel Problem Solving from Nature (PPSN IX), Proceedings, pp. 939-948. Berlin: Springer. Tahk, M., Woo, H. and Park, M. (2007). A hybrid optimization of evolutionary and gradient search. Engineering Optimization, 39, 87-104.

  9. Fast global image smoothing based on weighted least squares.

    PubMed

    Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N

    2014-12-01

    This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves results of a quality comparable to the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
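
    The 1D building block, an edge-preserving weighted-least-squares smoothing solved in linear time by the Thomas algorithm, can be sketched as follows (the exponential weight function and constants are assumptions; the method alternates such 1D solves over the rows and columns of a 2D image):

      import numpy as np

      def wls_smooth_1d(f, guide, lam=30.0, sigma=0.05):
          # Solve (I + lam * A) u = f, where A is a weighted 1D Laplacian
          # whose weights shrink across edges of the guide signal, so
          # smoothing is suppressed at discontinuities.
          n = len(f)
          w = np.exp(-np.abs(np.diff(guide)) / sigma)  # n-1 link weights
          lower = np.concatenate([[0.0], -lam * w])    # sub-diagonal
          upper = np.concatenate([-lam * w, [0.0]])    # super-diagonal
          diag = (1.0 + np.concatenate([[0.0], lam * w])
                      + np.concatenate([lam * w, [0.0]]))
          # Thomas algorithm: forward elimination, then back substitution.
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = upper[0] / diag[0], f[0] / diag[0]
          for i in range(1, n):
              den = diag[i] - lower[i] * cp[i - 1]
              cp[i] = upper[i] / den
              dp[i] = (f[i] - lower[i] * dp[i - 1]) / den
          u = np.empty(n)
          u[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              u[i] = dp[i] - cp[i] * u[i + 1]
          return u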

  10. Performance evaluation of the zero-multipole summation method in modern molecular dynamics software.

    PubMed

    Sakuraba, Shun; Fukuda, Ikuo

    2018-05-04

    The zero-multipole summation method (ZMM) is a cutoff-based method for calculating electrostatic interactions in molecular dynamics simulations, utilizing an electrostatic neutralization principle as a physical basis. Since the accuracy of the ZMM has been shown to be sufficient in previous studies, it is highly desirable to clarify its practical performance. In this paper, the performance of the ZMM is compared with that of the smooth particle mesh Ewald method (SPME), where both methods are implemented in the molecular dynamics software package GROMACS. Extensive performance comparisons against a highly optimized, parameter-tuned SPME implementation are performed for various-sized water systems and two protein-water systems. We analyze in detail the dependence of the performance on the potential parameters and the number of CPU cores. Even though the ZMM uses a larger cutoff distance than the SPME does, the performance of the ZMM is comparable to or better than that of the SPME. This is because the ZMM does not require a time-consuming electrostatic convolution and because the ZMM gains short neighbor-list distances due to the smooth damping feature of the pairwise potential function near the cutoff length. We found, in particular, that the ZMM with quadrupole or octupole cancellation and no damping factor is an excellent candidate for the fast calculation of electrostatic interactions. © 2018 Wiley Periodicals, Inc.

  11. Real-Time Noise Reduction for Mossbauer Spectroscopy through Online Implementation of a Modified Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abrecht, David G.; Schwantes, Jon M.; Kukkadapu, Ravi K.

    2015-02-01

    Spectrum-processing software that incorporates a Gaussian smoothing kernel within the statistics of first-order Kalman filtration has been developed to provide cross-channel spectral noise reduction for increased real-time signal-to-noise ratios in Mossbauer spectroscopy. The filter was optimized for the breadth of the Gaussian using the Mossbauer spectrum of natural iron foil, and comparisons between the peak broadening, signal-to-noise ratios, and shifts in the calculated hyperfine parameters are presented. The results of optimization give a maximum improvement in the signal-to-noise ratio of 51.1% over the unfiltered spectrum at a Gaussian breadth of 27 channels, or 2.5% of the total spectrum width. The full-width half-maximum of the spectrum peaks showed an increase of 19.6% at this optimum point, indicating a relatively weak increase in the peak broadening relative to the signal enhancement and leading to an overall increase in the observable signal. Calculations of the hyperfine parameters showed that no statistically significant deviations were introduced by the application of the filter, confirming the utility of this filter for spectroscopy applications.

  12. A three-level support method for smooth switching of the micro-grid operation model

    NASA Astrophysics Data System (ADS)

    Zong, Yuanyang; Gong, Dongliang; Zhang, Jianzhou; Liu, Bin; Wang, Yun

    2018-01-01

    Smooth switching of the micro-grid between the grid-connected and off-grid operation modes is one of the key technologies for ensuring that it runs flexibly and efficiently. The basic control strategy and the switching principle of the micro-grid are analyzed in this paper. The causes of the voltage and frequency fluctuations during the switching process are analyzed from the viewpoints of power balance and control strategy, and the operation-mode switching strategy is improved accordingly. A three-level strategy for smooth switching of the micro-grid operation mode is proposed, addressing three aspects: tracking of the controller's current inner-loop reference signal, optimization of the voltage outer-loop control strategy, and micro-grid energy balance management. Finally, simulations demonstrate that the proposed control strategy makes the switching process smooth and stable and effectively reduces the voltage and frequency fluctuations.

  13. Growth and structure of Bi0.5(Na0.7K0.2Li0.1)0.5TiO3 thin films prepared by pulsed laser deposition technique

    NASA Astrophysics Data System (ADS)

    Lu, Lei; Xiao, Dingquan; Lin, Dunmin; Zhang, Yongbin; Zhu, Jianguo

    2009-02-01

    Bi0.5(Na0.7K0.2Li0.1)0.5TiO3 (BNKLT) thin films were prepared on Pt/Ti/SiO2/Si substrates by the pulsed laser deposition (PLD) technique. The prepared films were examined using X-ray diffraction (XRD), scanning electron microscopy (SEM), and atomic force microscopy (AFM). The effects of the processing parameters, such as oxygen pressure, substrate temperature, and laser power, on the crystal structure, surface morphology, roughness, and deposition rates of the thin films were investigated. It was found that a substrate temperature of 600 °C and an oxygen pressure of 30 Pa are the optimal processing parameters for the growth of textured films, and all the prepared thin films have a granular structure, homogeneous grain size, and smooth surfaces.

  14. Dense motion estimation using regularization constraints on local parametric models.

    PubMed

    Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein

    2004-11-01

    This paper presents a method for dense optical flow estimation in which the motion field within patches resulting from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation that introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as the regularization mechanism. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude and with motion discontinuities, and produces accurate piecewise-smooth motion fields.

  15. On a multigrid method for the coupled Stokes and porous media flow problem

    NASA Astrophysics Data System (ADS)

    Luo, P.; Rodrigo, C.; Gaspar, F. J.; Oosterlee, C. W.

    2017-07-01

    The multigrid solution of coupled porous media and Stokes flow problems is considered. The Darcy equation as the saturated porous medium model is coupled to the Stokes equations by means of appropriate interface conditions. We focus on an efficient multigrid solution technique for the coupled problem, which is discretized by finite volumes on staggered grids, giving rise to a saddle point linear system. Special treatment is required regarding the discretization at the interface. An Uzawa smoother is employed in multigrid, which is a decoupled procedure based on symmetric Gauss-Seidel smoothing for velocity components and a simple Richardson iteration for the pressure field. Since a relaxation parameter is part of a Richardson iteration, Local Fourier Analysis (LFA) is applied to determine the optimal parameters. Highly satisfactory multigrid convergence is reported, and, moreover, the algorithm performs well for small values of the hydraulic conductivity and fluid viscosity, that are relevant for applications.

  16. Automatic x-ray image contrast enhancement based on parameter auto-optimization.

    PubMed

    Qiu, Jianfeng; Harold Li, H; Zhang, Tiezhi; Ma, Fangfang; Yang, Deshan

    2017-11-01

    Insufficient image contrast associated with radiation therapy daily setup x-ray images could negatively affect accurate patient treatment setup. We developed a method to perform automatic and user-independent contrast enhancement on 2D kilovoltage (kV) and megavoltage (MV) x-ray images. The goal was to provide tissue contrast optimized for each treatment site in order to support accurate patient daily treatment setup and the subsequent offline review. The proposed method processes the 2D x-ray images with an optimized image processing filter chain, which consists of a noise reduction filter and a high-pass filter followed by a contrast limited adaptive histogram equalization (CLAHE) filter. The most important innovation is to optimize the image processing parameters automatically so as to determine the required image contrast settings per disease site and imaging modality. Three major parameters controlling the image processing chain, i.e., the Gaussian smoothing weighting factor for the high-pass filter, and the block size and the clip limit for the CLAHE filter, were determined automatically using an interior-point constrained optimization algorithm. Fifty-two kV and MV x-ray images were included in this study. The results were manually evaluated and ranked with scores from 1 (worst, unacceptable) to 5 (significantly better than adequate and visually praiseworthy) by physicians and physicists. The average scores for the images processed by the proposed method, CLAHE alone, and the best window-level adjustment were 3.92, 2.83, and 2.27, respectively, and the percentages of processed images receiving a score of 5 were 48%, 29%, and 18%, respectively. The proposed method is able to outperform the standard image contrast adjustment procedures that are currently used in commercial clinical systems. When implemented in clinical systems as an automatic image processing filter, it could allow quicker and potentially more accurate treatment setup and facilitate the subsequent offline review and verification. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
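
    A compact sketch of such a chain using scipy and scikit-image is shown below: Gaussian denoising, an unsharp-mask style high-pass boost, then CLAHE. The function name, parameter defaults, and the fixed sigmas are illustrative assumptions; in the paper the three free parameters are tuned by a constrained optimizer rather than fixed by hand.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import exposure

def enhance(img, hp_weight=0.7, block_size=64, clip_limit=0.01):
    """Illustrative chain: denoise -> high-pass boost -> CLAHE.
    hp_weight, block_size and clip_limit play the roles of the three
    parameters optimized in the paper; names and defaults are assumptions."""
    img = img.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)         # normalize to [0, 1]
    denoised = gaussian_filter(img, sigma=1.0)              # noise reduction
    lowpass = gaussian_filter(denoised, sigma=8.0)
    boosted = denoised + hp_weight * (denoised - lowpass)   # high-pass boost
    boosted = np.clip(boosted, 0.0, 1.0)
    return exposure.equalize_adapthist(boosted, kernel_size=block_size,
                                       clip_limit=clip_limit)
```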

  17. Kernel spectral clustering with memory effect

    NASA Astrophysics Data System (ADS)

    Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.

    2013-05-01

    Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, metabolic networks etc. In this framework, performing community detection and analyzing the cluster evolution are critical tasks. Here we propose a new model for this purpose, where the smoothness of the clustering results over time can be considered as a valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well and to be consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is a crucial issue for achieving good performance. We successfully test the model on four toy problems and on a real-world network. We also compare our model with Evolutionary Spectral Clustering, a state-of-the-art algorithm for community detection in evolving networks, illustrating that kernel spectral clustering with memory effect can achieve better or equal performance.

  18. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
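
    Since the iteration itself is compact, a sketch of the optimized gradient method is given below, following the published Kim-Fessler form with the modified momentum factor on the final step; the quadratic test problem and the iteration count are illustrative assumptions.

```python
import numpy as np

def ogm(grad, L, x0, n_iter):
    """Optimized gradient method (sketch) for smooth convex f with
    L-Lipschitz gradient; `grad` is the gradient oracle."""
    x = y = np.asarray(x0, dtype=float)
    theta = 1.0
    for k in range(n_iter):
        y_next = x - grad(x) / L                          # gradient step
        if k < n_iter - 1:
            theta_next = 0.5 * (1 + np.sqrt(1 + 4 * theta ** 2))
        else:                                             # final-step factor
            theta_next = 0.5 * (1 + np.sqrt(1 + 8 * theta ** 2))
        x = (y_next
             + (theta - 1) / theta_next * (y_next - y)    # Nesterov-type momentum
             + theta / theta_next * (y_next - x))         # extra OGM correction
        y, theta = y_next, theta_next
    return x

# Toy example: f(x) = 0.5 * ||A x - b||^2 with L = lambda_max(A^T A)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
L = np.linalg.eigvalsh(A.T @ A)[-1]
x_hat = ogm(lambda x: A.T @ (A @ x - b), L, np.zeros(2), 50)
```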

  19. On the Convergence Analysis of the Optimized Gradient Method

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2016-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov’s fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707

  20. A Smoothed Eclipse Model for Solar Electric Propulsion Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Aziz, Jonathan D.; Scheeres, Daniel J.; Parker, Jeffrey S.; Englander, Jacob A.

    2017-01-01

    Solar electric propulsion (SEP) is the dominant design option for employing low-thrust propulsion on a space mission. Spacecraft solar arrays power the SEP system but are subject to blackout periods during solar eclipse conditions. Discontinuity in power available to the spacecraft must be accounted for in trajectory optimization, but gradient-based methods require a differentiable power model. This work presents a power model that smooths the eclipse transition from total eclipse to total sunlight with a logistic function. Example trajectories are computed with differential dynamic programming, a second-order gradient-based method.
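
    The smoothing device itself is a one-liner; the sketch below replaces the discontinuous on/off power model with its logistic counterpart. The shadow variable gamma (negative in eclipse, positive in sunlight) and the sharpness constant k are assumptions about how the transition is parametrized, made for illustration.

```python
import numpy as np

def available_power(gamma, p_max=10.0, k=50.0):
    """Logistic replacement for the discontinuous model p_max * (gamma > 0).

    gamma: shadow variable, negative in eclipse and positive in sunlight
           (an illustrative parametrization).
    k:     sharpness of the transition; larger k approaches the step.
    The logistic form is smooth in gamma, so gradient-based trajectory
    optimizers can differentiate through the eclipse transition.
    """
    return p_max / (1.0 + np.exp(-k * gamma))

gamma = np.linspace(-0.2, 0.2, 5)
print(available_power(gamma))   # rises smoothly through gamma = 0
```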

  1. Optimization of cathodic arc deposition and pulsed plasma melting techniques for growing smooth superconducting Pb photoemissive films for SRF injectors

    NASA Astrophysics Data System (ADS)

    Nietubyć, Robert; Lorkiewicz, Jerzy; Sekutowicz, Jacek; Smedley, John; Kosińska, Anna

    2018-05-01

    Superconducting photoinjectors have the potential to be the optimal solution for moderate- and high-current cw-operated free electron lasers. For this application, a superconducting lead (Pb) cathode has been proposed to simplify the cathode integration into a 1.3 GHz, TESLA-type, 1.6-cell, purely superconducting gun cavity. In the proposed design, a lead film several micrometres thick is deposited onto a niobium plug attached to the cavity back wall. Traditional lead deposition techniques usually produce very non-uniform emission surfaces and often result in poor adhesion of the layer. A pulsed plasma melting procedure that reduces the non-uniformity of the lead photocathodes is presented. In order to determine the optimal parameters for this procedure, heat transfer from the plasma to the film was first modelled to evaluate the melting-front penetration range and the liquid-state duration. The obtained results were verified by surface inspection of witness samples. The optimal procedure was used to prepare a photocathode plug, which was then tested in an electron gun. The quantum efficiency and the cavity quality factor have been found to satisfy the requirements for an injector of the European XFEL facility.

  2. Inter and intra-modal deformable registration: continuous deformations meet efficient optimal linear programming.

    PubMed

    Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir

    2007-01-01

    In this paper we propose a novel non-rigid volume registration method based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced in the objective function by selecting a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from those of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is used to recover the optimal registration parameters. Therefore, the method is gradient-free, can encode various similarity metrics (through simple changes in the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the strong potential of our approach.

  3. Penalized spline estimation for functional coefficient regression models.

    PubMed

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan

    2010-04-01

    The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application.
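
    As a small illustration of the smoothing-parameter selection step, the sketch below fits a P-spline by penalized least squares and picks λ by generalized cross-validation, one of the criteria compared in the paper. The knot layout, penalty order, λ grid, and the deliberately naive dense linear algebra are all assumptions made for the example.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_gcv(x, y, n_knots=20, degree=3, lambdas=np.logspace(-4, 4, 41)):
    """P-spline fit with GCV choice of the penalty weight lambda."""
    # Clamped knot vector with equally spaced interior knots (assumption)
    t = np.r_[[x.min()] * degree, np.linspace(x.min(), x.max(), n_knots),
              [x.max()] * degree]
    B = BSpline.design_matrix(x, t, degree).toarray()
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)   # 2nd-order difference penalty
    best = None
    for lam in lambdas:
        S = np.linalg.inv(B.T @ B + lam * D.T @ D)
        fit = B @ (S @ B.T @ y)
        edf = np.trace(B @ S @ B.T)                # effective degrees of freedom
        gcv = len(y) * np.sum((y - fit) ** 2) / (len(y) - edf) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, fit)
    return best                                    # (GCV score, lambda, fit)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(200)
score, lam, fit = pspline_gcv(x, y)
```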

  4. A theoretical measure technique for determining 3D symmetric nearly optimal shapes with a given center of mass

    NASA Astrophysics Data System (ADS)

    Alimorad D., H.; Fakharzadeh J., A.

    2017-07-01

    In this paper, a new approach is proposed for designing nearly optimal three-dimensional symmetric shapes with a desired physical center of mass. The main goal is to find a shape whose image in the (r, θ)-plane is a region divided into a fixed part and a variable part. The nearly optimal shape is characterized in two stages. First, for each given domain, the nearly optimal surface is determined by converting the problem into a measure-theoretical one, replacing it with an equivalent infinite-dimensional linear programming problem and applying approximation schemes; then, a suitable function that gives the optimal value of the objective function for any admissible given domain is defined. In the second stage, by applying a standard optimization method, the global minimizer surface and its related domain are obtained, whose smoothness is ensured by applying outlier detection and smooth fitting methods. Finally, numerical examples are presented and the results are compared to show the advantages of the proposed approach.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eaton, Craig; Brahlek, Matthew; Engel-Herbert, Roman, E-mail: rue2@psu.edu

    The authors report the growth of stoichiometric SrVO3 thin films on (LaAlO3)0.3(Sr2AlTaO6)0.7 (001) substrates using hybrid molecular beam epitaxy. This growth approach employs a conventional effusion cell to supply elemental A-site Sr and the metalorganic precursor vanadium oxytriisopropoxide (VTIP) to supply vanadium. Oxygen is supplied in its molecular form through a gas inlet. An optimal VTIP:Sr flux ratio has been identified using reflection high-energy electron diffraction, x-ray diffraction, atomic force microscopy, and scanning transmission electron microscopy, demonstrating stoichiometric SrVO3 films with atomically flat surface morphology. Away from the optimal VTIP:Sr flux ratio, characteristic changes in the crystalline structure and surface morphology of the films were found, enabling identification of the type of nonstoichiometry. For optimal VTIP:Sr flux ratios, high-quality SrVO3 thin films were obtained with the smallest deviation of the lattice parameter from the ideal value and with atomically smooth surfaces, indicative of the good cation stoichiometry achieved by this growth technique.

  6. Electrodeposition of organic-inorganic tri-halide perovskites solar cell

    NASA Astrophysics Data System (ADS)

    Charles, U. A.; Ibrahim, M. A.; Teridi, M. A. M.

    2018-02-01

    Perovskite (CH3NH3PbI3) semiconductor materials are promising high-performance light absorbers for solar cell applications. However, the power conversion efficiency of a perovskite solar cell is severely affected by the surface quality of the deposited thin film. Spin coating is a low-cost and widely used deposition technique for perovskite solar cells. Notably, films deposited by spin coating can develop surface hydroxide and suffer from uncontrolled precipitation and inter-diffusion reactions. Alternatively, the vapor deposition (VD) method produces uniform thin films but requires precise control of complex thermodynamic parameters, which makes the technique unsuitable for large-scale production. Most deposition techniques for perovskites require tedious surface optimization to improve the surface quality of the deposits. Optimization of the perovskite surface is necessary to significantly improve the device structure and electrical output. In this review, electrodeposition of perovskite solar cells is presented as a scalable and reproducible technique for fabricating uniform and smooth thin-film surfaces that circumvents the need for a high-vacuum environment. Electrodeposition is achieved at low temperatures and supports precise control and optimization of deposits for efficient charge transfer.

  7. A Bayesian inversion for slip distribution of 1 Apr 2007 Mw8.1 Solomon Islands Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, T.; Luo, H.

    2013-12-01

    On 1 Apr 2007 the megathrust Mw 8.1 Solomon Islands earthquake occurred in the southwest Pacific along the New Britain subduction zone. 102 vertical displacement measurements over the southeastern end of the rupture zone from two field surveys after this event provide a unique constraint for slip distribution inversion. In conventional inversion methods (such as bounded-variable least squares), the smoothing parameter that determines the relative weight placed on fitting the data versus smoothing the slip distribution is often subjectively selected at the bend of the trade-off curve. Here a fully probabilistic inversion method [Fukuda, 2008] is applied to estimate the distributed slip and the smoothing parameter objectively. The joint posterior probability density function of the distributed slip and the smoothing parameter is formulated under a Bayesian framework and sampled with a Markov chain Monte Carlo method. We estimate the spatial distribution of dip slip associated with the 1 Apr 2007 Solomon Islands earthquake with this method. Early results show a shallower dip angle than previous studies and highly variable dip slip both along-strike and down-dip.
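
    The appeal of the fully probabilistic formulation is that the smoothing parameter is sampled jointly with the slip rather than read off a trade-off curve. The sketch below reproduces that idea on a toy linear inverse problem with a random-walk Metropolis sampler; the matrices G and Lap, the noise level, and the proposal scales are placeholder assumptions, not the elastic Green's functions or survey data of this study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_patch = 30, 10
G = rng.standard_normal((n_obs, n_patch))          # toy Green's functions
d = G @ np.linspace(0.0, 2.0, n_patch) + 0.1 * rng.standard_normal(n_obs)
Lap = np.diff(np.eye(n_patch), n=2, axis=0)        # slip-roughness operator
sigma2 = 0.01                                      # assumed data variance

def log_post(s, log_alpha):
    """Joint log-posterior of slip s and smoothing parameter alpha;
    the alpha^(rank/2) prior normalization keeps alpha identifiable."""
    alpha = np.exp(log_alpha)
    misfit = -0.5 * np.sum((d - G @ s) ** 2) / sigma2
    prior = (-0.5 * alpha * np.sum((Lap @ s) ** 2)
             + 0.5 * (n_patch - 2) * log_alpha)
    return misfit + prior

s, la = np.zeros(n_patch), 0.0
lp = log_post(s, la)
for _ in range(20000):                             # random-walk Metropolis
    s_new = s + 0.05 * rng.standard_normal(n_patch)
    la_new = la + 0.1 * rng.standard_normal()
    lp_new = log_post(s_new, la_new)
    if np.log(rng.random()) < lp_new - lp:         # accept/reject step
        s, la, lp = s_new, la_new, lp_new
```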

  8. Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2010-08-01

    In this paper, diabetic retinopathy is chosen as a sample target image type to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected, and classification parameters are optimized for minimum false-positive detection in the original and enlarged retinal images. The error analysis demonstrates the advantages as well as the shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.
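
    Pixel duplication itself is a one-line operation: each pixel is replicated into an f x f block, so the image is enlarged without inventing new intensity values the way interpolation does. A minimal sketch, with the factor f = 2 as an illustrative choice:

```python
import numpy as np

def duplicate_pixels(img, f=2):
    """Enlarge an image by integer factor f via pixel duplication:
    every pixel becomes an f x f block of identical values."""
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)

img = np.arange(9).reshape(3, 3)
print(duplicate_pixels(img).shape)   # (6, 6)
```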

  9. Lateral variation in pavement smoothness

    DOT National Transportation Integrated Search

    2002-12-01

    Current performance-based contracting specifications employ International Roughness Index (IRI) to measure the smoothness of a pavement as perceived by the motorist. This parameter is measured in the outer or right-hand traffic lane and requires an u...

  10. Design of a model observer to evaluate calcification detectability in breast tomosynthesis and application to smoothing prior optimization.

    PubMed

    Michielsen, Koen; Nuyts, Johan; Cockmartin, Lesley; Marshall, Nicholas; Bosmans, Hilde

    2016-12-01

    In this work, the authors design and validate a model observer that can detect groups of microcalcifications in a four-alternative forced-choice experiment and use it to optimize a smoothing prior for detectability of microcalcifications. A channelized Hotelling observer (CHO) with eight Laguerre-Gauss channels was designed to detect groups of five microcalcifications in a background of acrylic spheres by adding the CHO log-likelihood ratios calculated at the expected locations of the five calcifications. This model observer is then applied to optimize the detectability of the microcalcifications as a function of the smoothing prior. The authors examine the quadratic and total variation (TV) priors, and a combination of both. A selection of these reconstructions was then evaluated by human observers to validate the correct working of the model observer. The authors found a clear maximum for the detectability of microcalcification when using the total variation prior with weight β_TV = 35. Detectability only varied over a small range for the quadratic and combined quadratic-TV priors when the weight β_Q of the quadratic prior was changed by two orders of magnitude. Spearman correlation with human observers was good except for the highest value of β for the quadratic and TV priors. Excluding those, the authors found ρ = 0.93 when comparing detection fractions, and ρ = 0.86 for the fitted detection threshold diameter. The authors successfully designed a model observer that was able to predict human performance over a large range of settings of the smoothing prior, except for the highest values of β which were outside the useful range for good image quality. Since detectability only depends weakly on the strength of the combined prior, it is not possible to pick an optimal smoothness based only on this criterion. On the other hand, such choice can now be made based on other criteria without worrying about calcification detectability.

  11. Inverse determination of the penalty parameter in penalized weighted least-squares algorithm for noise reduction of low-dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jing; Guan, Huaiqun; Solberg, Timothy

    2011-07-15

    Purpose: A statistical projection restoration algorithm based on the penalized weighted least-squares (PWLS) criterion can substantially improve the image quality of low-dose CBCT images. The performance of PWLS is largely dependent on the choice of the penalty parameter. Previously, the penalty parameter was chosen empirically by trial and error. In this work, the authors developed an inverse technique to calculate the penalty parameter in PWLS for noise suppression of low-dose CBCT in image guided radiotherapy (IGRT). Methods: In IGRT, a daily CBCT is acquired for the same patient during a treatment course. In this work, the authors acquired the CBCT with a high-mAs protocol for the first session and then a lower-mAs protocol for the subsequent sessions. The high-mAs projections served as the goal (ideal) toward which the low-mAs projections were to be smoothed by minimizing the PWLS objective function. The penalty parameter was determined through an inverse calculation of the derivative of the objective function incorporating both the high- and low-mAs projections. The parameter obtained can then be used in PWLS to smooth the noise in low-dose projections. CBCT projections for a CatPhan 600 and an anthropomorphic head phantom, as well as for a brain patient, were used to evaluate the performance of the proposed technique. Results: The penalty parameter in PWLS was obtained for each CBCT projection using the proposed strategy. The noise in the low-dose CBCT images reconstructed from the smoothed projections was greatly suppressed. Image quality in PWLS-processed low-dose CBCT was comparable to that of the corresponding high-dose CBCT. Conclusions: A technique was proposed to estimate the penalty parameter for the PWLS algorithm. It provides an objective and efficient way to obtain the penalty parameter for image restoration algorithms that require predefined smoothing parameters.

  12. Optimal Divergence-Free Hatch Filter for GNSS Single-Frequency Measurement.

    PubMed

    Park, Byungwoon; Lim, Cheolsoon; Yun, Youngsun; Kim, Euiho; Kee, Changdon

    2017-02-24

    The Hatch filter is a code-smoothing technique that uses the variation of the carrier phase. It can effectively reduce the noise of a pseudo-range with a very simple filter construction, but it occasionally causes an ionosphere-induced error for low-lying satellites. Herein, we propose an optimal single-frequency (SF) divergence-free Hatch filter that uses a satellite-based augmentation system (SBAS) message to reduce the ionospheric divergence and applies the optimal smoothing constant for its smoothing window width. According to the data-processing results, the overall performance of the proposed filter is comparable to that of the dual-frequency (DF) divergence-free Hatch filter. Moreover, it can reduce the horizontal error from 57 cm to 37 cm and improve the vertical accuracy of the conventional Hatch filter by 25%. Considering that SF receivers dominate the global navigation satellite system (GNSS) market and that most of these receivers include the SBAS function, the filter suggested in this paper is of great value in that it can make the differential GPS (DGPS) performance of low-cost SF receivers comparable to that of DF receivers.
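
    For orientation, the classical Hatch recursion underlying all of these variants is sketched below: the code measurement is blended with the phase-propagated previous estimate using a window of width M. The divergence-free refinement in the paper additionally corrects the phase term for ionospheric divergence (using SBAS information) and optimizes the window width; both refinements are omitted here, so the function is an illustrative baseline only.

```python
import numpy as np

def hatch_filter(code, phase, window=100):
    """Classical Hatch filter: carrier-smoothed code pseudo-range.
    code, phase: pseudo-range and carrier-phase series in meters.
    window: smoothing window width M (the quantity the paper optimizes)."""
    code = np.asarray(code, dtype=float)
    phase = np.asarray(phase, dtype=float)
    smoothed = np.empty_like(code)
    smoothed[0] = code[0]
    for k in range(1, len(code)):
        M = min(k + 1, window)
        predicted = smoothed[k - 1] + (phase[k] - phase[k - 1])  # phase-propagated
        smoothed[k] = code[k] / M + (M - 1) / M * predicted
    return smoothed
```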

  13. Optimal Divergence-Free Hatch Filter for GNSS Single-Frequency Measurement

    PubMed Central

    Park, Byungwoon; Lim, Cheolsoon; Yun, Youngsun; Kim, Euiho; Kee, Changdon

    2017-01-01

    The Hatch filter is a code-smoothing technique that uses the variation of the carrier phase. It can effectively reduce the noise of a pseudo-range with a very simple filter construction, but it occasionally causes an ionosphere-induced error for low-lying satellites. Herein, we propose an optimal single-frequency (SF) divergence-free Hatch filter that uses a satellite-based augmentation system (SBAS) message to reduce the ionospheric divergence and applies the optimal smoothing constant for its smoothing window width. According to the data-processing results, the overall performance of the proposed filter is comparable to that of the dual-frequency (DF) divergence-free Hatch filter. Moreover, it can reduce the horizontal error from 57 cm to 37 cm and improve the vertical accuracy of the conventional Hatch filter by 25%. Considering that SF receivers dominate the global navigation satellite system (GNSS) market and that most of these receivers include the SBAS function, the filter suggested in this paper is of great value in that it can make the differential GPS (DGPS) performance of low-cost SF receivers comparable to that of DF receivers. PMID:28245584

  14. A Scheme to Smooth Aggregated Traffic from Sensors with Periodic Reports

    PubMed Central

    Oh, Sungmin; Jang, Ju Wook

    2017-01-01

    The possibility of smoothing aggregated traffic from sensors with varying reporting periods and frame sizes to be carried on an access link is investigated. A straightforward optimization would take O(p^n) time, whereas our heuristic scheme takes O(np) time, where n and p denote the number of sensors and the size of the periods, respectively. Our heuristic scheme performs local optimization sensor by sensor, from the smallest period to the largest. This is based on the observation that sensors with larger periods have more choices of offsets to avoid traffic peaks than sensors with smaller periods. A MATLAB simulation shows that our scheme outperforms the known scheme by M. Grenier et al. in a similar situation (aggregating periodic traffic in a controller area network) for almost all possible permutations. The performance of our scheme is very close to that of the straightforward optimization, which compares all possible permutations. We expect that our scheme will greatly help in smoothing the traffic from an ever-increasing number of IoT sensors to the gateway, reducing the burden on the access link to the Internet. PMID:28273831
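
    The greedy pass is easy to picture in code: after sorting sensors by period, each one in turn receives the offset that keeps the running peak of the aggregated load lowest. The sketch below is an illustrative reconstruction of that idea over one hyperperiod with unit time slots and additive frame sizes; the published scheme's exact cost and tie-breaking details may differ.

```python
import numpy as np

def schedule_offsets(periods, frame_sizes):
    """Greedy offset assignment, smallest period first: each sensor takes
    the offset that minimizes the current peak of the aggregated traffic
    (an illustrative reconstruction of the heuristic's structure)."""
    hyper = int(np.lcm.reduce(periods))            # common hyperperiod
    load = np.zeros(hyper)
    offsets = {}
    for i in np.argsort(periods):                  # smallest to largest period
        p, f = int(periods[i]), frame_sizes[i]
        best_off, best_peak = 0, np.inf
        for off in range(p):                       # try each offset in the period
            trial = load.copy()
            trial[np.arange(off, hyper, p)] += f
            if trial.max() < best_peak:
                best_off, best_peak = off, trial.max()
        offsets[i] = best_off
        load[np.arange(best_off, hyper, p)] += f
    return offsets, load.max()

offsets, peak = schedule_offsets(np.array([2, 4, 4, 8]), np.array([1, 1, 2, 1]))
print(offsets, peak)
```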

  15. Optimal control of the gear shifting process for shift smoothness in dual-clutch transmissions

    NASA Astrophysics Data System (ADS)

    Li, Guoqiang; Görges, Daniel

    2018-03-01

    The control of the transmission system in vehicles is important for driving comfort. In order to design a controller for smooth shifting and comfortable driving, a dynamic model of a dual-clutch transmission is presented in this paper. A finite-time linear quadratic regulator is proposed for the optimal control of the two friction clutches in the torque phase of the upshift process. An integral linear quadratic regulator is introduced to regulate the relative speed difference between the engine and the slipping clutch by optimizing the input torque during the inertia phase. The control objective focuses on smoothing the upshift process so as to improve driving comfort. Considering the sensors available in vehicles for feedback control, an observer design is presented to estimate the unmeasurable variables. Simulation results show that the jerk can be reduced in both the torque phase and the inertia phase, indicating good shift performance. Furthermore, compared with conventional controllers for the upshift process, the proposed control method can reduce shift jerk and improve shift quality.
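
    The finite-time LQR used for the torque phase boils down to a backward Riccati recursion that yields a time-varying feedback gain. The sketch below shows that recursion for a generic discrete-time linear system; the double-integrator example and all weighting matrices are placeholder assumptions, not the paper's clutch model.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Qf, N):
    """Backward Riccati recursion for the finite-horizon discrete LQR.
    Returns the time-varying gains K_0 .. K_{N-1} for u_k = -K_k x_k."""
    P = Qf
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain
        P = Q + K.T @ R @ K + (A - B @ K).T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]            # reorder to k = 0 .. N-1

# Toy double-integrator stand-in for the clutch dynamics (assumption)
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
gains = finite_horizon_lqr(A, B, np.eye(2), np.eye(1), 10.0 * np.eye(2), 200)
```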

  16. A Scheme to Smooth Aggregated Traffic from Sensors with Periodic Reports.

    PubMed

    Oh, Sungmin; Jang, Ju Wook

    2017-03-03

    The possibility of smoothing aggregated traffic from sensors with varying reporting periods and frame sizes to be carried on an access link is investigated. A straightforward optimization would take O(p^n) time, whereas our heuristic scheme takes O(np) time, where n and p denote the number of sensors and the size of the periods, respectively. Our heuristic scheme performs local optimization sensor by sensor, from the smallest period to the largest. This is based on the observation that sensors with larger periods have more choices of offsets to avoid traffic peaks than sensors with smaller periods. A MATLAB simulation shows that our scheme outperforms the known scheme by M. Grenier et al. in a similar situation (aggregating periodic traffic in a controller area network) for almost all possible permutations. The performance of our scheme is very close to that of the straightforward optimization, which compares all possible permutations. We expect that our scheme will greatly help in smoothing the traffic from an ever-increasing number of IoT sensors to the gateway, reducing the burden on the access link to the Internet.

  17. Improvements on a non-invasive, parameter-free approach to inverse form finding

    NASA Astrophysics Data System (ADS)

    Landkammer, P.; Caspari, M.; Steinmann, P.

    2017-08-01

    Our objective is to determine the optimal undeformed workpiece geometry (material configuration) within forming processes when the prescribed deformed geometry (spatial configuration) is given. For solving the resulting shape optimization problem—also denoted as inverse form finding—we use a novel parameter-free approach, which relocates in each iteration the material nodal positions as design variables. The spatial nodal positions computed by an elasto-plastic finite element (FE) forming simulation are compared with their prescribed values. The objective function expresses a least-squares summation of the differences between the computed and the prescribed nodal positions. Here, a recently developed shape optimization approach (Landkammer and Steinmann in Comput Mech 57(2):169-191, 2016) is investigated with a view to enhance its stability and efficiency. Motivated by nonlinear optimization theory a detailed justification of the algorithm is given. Furthermore, a classification according to shape changing design, fixed and controlled nodal coordinates is introduced. Two examples with large elasto-plastic strains demonstrate that using a superconvergent patch recovery technique instead of a least-squares (L2)-smoothing improves the efficiency. Updating the interior discretization nodes by solving a fictitious elastic problem also reduces the number of required FE iterations and avoids severe mesh distortions. Furthermore, the impact of the inclusion of the second deformation gradient in the Hessian of the Quasi-Newton approach is analyzed. Inverse form finding is a crucial issue in metal forming applications. As a special feature, the approach is designed to be coupled in a non-invasive fashion to arbitrary FE software.

  18. Improvements on a non-invasive, parameter-free approach to inverse form finding

    NASA Astrophysics Data System (ADS)

    Landkammer, P.; Caspari, M.; Steinmann, P.

    2018-04-01

    Our objective is to determine the optimal undeformed workpiece geometry (material configuration) within forming processes when the prescribed deformed geometry (spatial configuration) is given. For solving the resulting shape optimization problem—also denoted as inverse form finding—we use a novel parameter-free approach, which relocates in each iteration the material nodal positions as design variables. The spatial nodal positions computed by an elasto-plastic finite element (FE) forming simulation are compared with their prescribed values. The objective function expresses a least-squares summation of the differences between the computed and the prescribed nodal positions. Here, a recently developed shape optimization approach (Landkammer and Steinmann in Comput Mech 57(2):169-191, 2016) is investigated with a view to enhance its stability and efficiency. Motivated by nonlinear optimization theory a detailed justification of the algorithm is given. Furthermore, a classification according to shape changing design, fixed and controlled nodal coordinates is introduced. Two examples with large elasto-plastic strains demonstrate that using a superconvergent patch recovery technique instead of a least-squares (L2)-smoothing improves the efficiency. Updating the interior discretization nodes by solving a fictitious elastic problem also reduces the number of required FE iterations and avoids severe mesh distortions. Furthermore, the impact of the inclusion of the second deformation gradient in the Hessian of the Quasi-Newton approach is analyzed. Inverse form finding is a crucial issue in metal forming applications. As a special feature, the approach is designed to be coupled in a non-invasive fashion to arbitrary FE software.

  19. Proceedings of the Third International Workshop on Multistrategy Learning, May 23-25 Harpers Ferry, WV.

    DTIC Science & Technology

    1996-09-16

    approaches are: adaptive filtering; single exponential smoothing (Brown, 1963); the Box-Jenkins methodology (ARIMA modeling) (Box and Jenkins, 1976); linear exponential smoothing, i.e., Holt's two-parameter approach (Holt et al., 1960); and Winters' three-parameter method (Winters, 1960). However, there are two crucial disadvantages: the most important point in ARIMA modeling is model identification. As shown in

  20. Mesh Denoising based on Normal Voting Tensor and Binary Optimization.

    PubMed

    Yadav, Sunil Kumar; Reitebuch, Ulrich; Polthier, Konrad

    2017-08-17

    This paper presents a two-stage mesh denoising algorithm. Unlike other traditional averaging approaches, our approach uses an element-based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. On top of that, we provide a stochastic analysis on the different kinds of noise based on the average edge length. The quantitative results demonstrate that the performance of our method is better compared to state-of-the-art smoothing approaches.

  1. Utility of a novel error-stepping method to improve gradient-based parameter identification by increasing the smoothness of the local objective surface: a case-study of pulmonary mechanics.

    PubMed

    Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut

    2014-05-01

    Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001) despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.

  3. A theoretical and experimental study on the pulsed laser dressing of bronze-bonded diamond grinding wheels

    NASA Astrophysics Data System (ADS)

    Deng, H.; Chen, G. Y.; Zhou, C.; Zhou, X. C.; He, J.; Zhang, Y.

    2014-09-01

    A series of theoretical analyses and experimental investigations were performed to examine a pulsed fiber-laser tangential profiling and radial sharpening technique for bronze-bonded diamond grinding wheels. The mechanisms for the pulsed laser tangential profiling and radial sharpening of grinding wheels were theoretically analyzed, and the four key processing parameters that determine the quality, accuracy, and efficiency of pulsed laser dressing, namely, the laser power density, laser spot overlap ratio, laser scanning track line overlap ratio, and number of laser scanning cycles, were proposed. Further, by utilizing cylindrical bronze wheels (without diamond grains) and bronze-bonded diamond grinding wheels as the experimental subjects, the effects of these four processing parameters on the removal efficiency and the surface smoothness of the bond material after pulsed laser ablation, as well as the effects on the contour accuracy of the grinding wheels, the protrusion height of the diamond grains, the sharpness of the grain cutting edges, and the graphitization degree of the diamond grains after pulsed laser dressing, were explored. The optimal values of the four key processing parameters were identified.

  4. Sequential Least-Squares Using Orthogonal Transformations. [spacecraft communication/spacecraft tracking-data smoothing

    NASA Technical Reports Server (NTRS)

    Bierman, G. J.

    1975-01-01

    Square root information estimation, starting from its beginnings in least-squares parameter estimation, is considered. Special attention is devoted to discussions of sensitivity and perturbation matrices, computed solutions and their formal statistics, consider-parameters and consider-covariances, and the effects of a priori statistics. The constant-parameter model is extended to include time-varying parameters and process noise, and the error analysis capabilities are generalized. Efficient and elegant smoothing results are obtained as easy consequences of the filter formulation. The value of the techniques is demonstrated by the navigation results that were obtained for the Mariner Venus-Mercury (Mariner 10) multiple-planetary space probe and for the Viking Mars space mission.

  5. Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem

    DOE PAGES

    Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...

    2016-12-12

    In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently used as the initial value for the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address the difficulties associated with the non-smooth response and are shown to significantly decrease the computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
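
    The two-stage structure is easy to demonstrate with off-the-shelf pieces: run a derivative-free global search with an early stopping budget, then hand its best point to a local method that tolerates non-smoothness. The sketch below stands in scipy's dual_annealing and Nelder-Mead for the paper's global stage and implicit filtering, and the objective is a toy non-smooth surrogate, not the radiation response model.

```python
import numpy as np
from scipy.optimize import dual_annealing, minimize

def objective(z):
    """Toy non-smooth, multi-modal surrogate (assumption for the sketch):
    absolute values mimic the piecewise-differentiable likelihood."""
    x, y, s = z
    return abs(x - 1.2) + abs(y + 0.7) + (s - 3.0) ** 2 + np.sin(5 * x) ** 2

bounds = [(-5, 5), (-5, 5), (0, 10)]
# Stage 1: early-stopped global search (small iteration budget)
coarse = dual_annealing(objective, bounds, maxiter=50, seed=1,
                        no_local_search=True)
# Stage 2: local finish from the pseudo-optimum; Nelder-Mead tolerates
# the non-smoothness (the paper uses implicit filtering here)
fine = minimize(objective, coarse.x, method="Nelder-Mead")
print(coarse.fun, fine.fun)
```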

  6. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  7. fMRat: an extension of SPM for a fully automatic analysis of rodent brain functional magnetic resonance series.

    PubMed

    Chavarrías, Cristina; García-Vázquez, Verónica; Alemán-Gómez, Yasser; Montesinos, Paula; Pascau, Javier; Desco, Manuel

    2016-05-01

    The purpose of this study was to develop a multi-platform automatic software tool for full processing of fMRI rodent studies. Existing tools require the usage of several different plug-ins, significant user interaction and/or programming skills. Based on a user-friendly interface, the tool provides statistical parametric brain maps (t and Z) and percentage of signal change for user-provided regions of interest. The tool is coded in MATLAB (MathWorks®) and implemented as a plug-in for SPM (Statistical Parametric Mapping, the Wellcome Trust Centre for Neuroimaging). The automatic pipeline loads default parameters that are appropriate for preclinical studies and processes multiple subjects in batch mode (from images in either Nifti or raw Bruker format). In advanced mode, all processing steps can be selected or deselected and executed independently. Processing parameters and workflow were optimized for rat studies and assessed using 460 male-rat fMRI series on which we tested five smoothing kernel sizes and three different hemodynamic models. A smoothing kernel of FWHM = 1.2 mm (four times the voxel size) yielded the highest t values at the primary somatosensory cortex, and a boxcar response function provided the lowest residual variance after fitting. fMRat offers the features of a thorough SPM-based analysis combined with the functionality of several SPM extensions in a single automatic pipeline with a user-friendly interface. The code and sample images can be downloaded from https://github.com/HGGM-LIM/fmrat .

  8. MAIN software for density averaging, model building, structure refinement and validation

    PubMed Central

    Turk, Dušan

    2013-01-01

    MAIN is software that has been designed to interactively perform the complex tasks of macromolecular crystal structure determination and validation. Using MAIN, it is possible to perform density modification, manual and semi-automated or automated model building and rebuilding, real- and reciprocal-space structure optimization and refinement, map calculations and various types of molecular structure validation. The prompt availability of various analytical tools and the immediate visualization of molecular and map objects allow a user to efficiently progress towards the completed refined structure. The extraordinary depth perception of molecular objects in three dimensions that is provided by MAIN is achieved by the clarity and contrast of colours and the smooth rotation of the displayed objects. MAIN allows simultaneous work on several molecular models and various crystal forms. The strength of MAIN lies in its manipulation of averaged density maps and molecular models when noncrystallographic symmetry (NCS) is present. Using MAIN, it is possible to optimize NCS parameters and envelopes and to refine the structure in single or multiple crystal forms. PMID:23897458

  9. Scheduling Non-Preemptible Jobs to Minimize Peak Demand

    DOE PAGES

    Yaw, Sean; Mumey, Brendan

    2017-10-28

    Our paper examines an important problem in smart grid energy scheduling: peaks in power demand are proportionally more expensive to generate and provision for. The issue is exacerbated in local microgrids that do not benefit from the aggregate smoothing experienced by large grids. Demand-side scheduling can reduce these peaks by taking advantage of the fact that there is often flexibility in job start times. We focus attention on the case where the jobs are non-preemptible, meaning that once started, they run to completion. The associated optimization problem is called the peak demand minimization problem and has previously been shown to be NP-hard. Our results include an optimal fixed-parameter tractable algorithm, a polynomial-time approximation algorithm, and an effective heuristic that can also be used in an online setting of the problem. Simulation results show that these methods can reduce peak demand by up to 50% versus on-demand scheduling for household power jobs.

  10. Scheduling Non-Preemptible Jobs to Minimize Peak Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yaw, Sean; Mumey, Brendan

    Our paper examines an important problem in smart grid energy scheduling: peaks in power demand are proportionally more expensive to generate and provision for. The issue is exacerbated in local microgrids that do not benefit from the aggregate smoothing experienced by large grids. Demand-side scheduling can reduce these peaks by taking advantage of the fact that there is often flexibility in job start times. We focus attention on the case where the jobs are non-preemptible, meaning that once started, they run to completion. The associated optimization problem is called the peak demand minimization problem and has previously been shown to be NP-hard. Our results include an optimal fixed-parameter tractable algorithm, a polynomial-time approximation algorithm, and an effective heuristic that can also be used in an online setting of the problem. Simulation results show that these methods can reduce peak demand by up to 50% versus on-demand scheduling for household power jobs.

  11. Optimization of hetero-epitaxial growth for the threading dislocation density reduction of germanium epilayers

    NASA Astrophysics Data System (ADS)

    Chong, Haining; Wang, Zhewei; Chen, Chaonan; Xu, Zemin; Wu, Ke; Wu, Lan; Xu, Bo; Ye, Hui

    2018-04-01

    In order to suppress dislocation generation, we developed a "three-step growth" method for the heteroepitaxy of low-dislocation-density germanium (Ge) layers on silicon by MBE. The method is composed of three successive growth steps: a low-temperature (LT) seed layer, an LT-HT intermediate layer, and a high-temperature (HT) epilayer. The threading dislocation density (TDD) of the epitaxial Ge layers is measured to be as low as 1.4 × 10^6 cm^-2 after optimizing the growth parameters. Raman spectra showed that the internal strain of the heteroepitaxial Ge layers is tensile and homogeneous. During the growth of the LT-HT intermediate layer, the TDD can be reduced by lowering the temperature ramping rate, while high-rate deposition maintains a smooth surface morphology in the Ge epilayer. A mechanism based on thermodynamics is used to explain the dependence of the TDD and surface morphology on the temperature ramping rate and the deposition rate. Furthermore, we demonstrate that the Ge layer obtained can provide an excellent platform for III-V materials integrated on Si.

  12. Robust continuous clustering

    PubMed Central

    Shah, Sohil Atul

    2017-01-01

    Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank. PMID:28851838

  13. A comparative look at sunspot cycles

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.

    1984-01-01

    On the basis of cycles 8 through 20, spanning about 143 years, observations of sunspot number, smoothed sunspot number, and their temporal properties were used to compute means, standard deviations, ranges, and frequency of occurrence histograms for a number of sunspot cycle parameters. The resultant schematic sunspot cycle was contrasted with the mean sunspot cycle, obtained by averaging smoothed sunspot number as a function of time, tying all cycles (8 through 20) to their minimum occurrence date. A relatively good approximation of the time variation of smoothed sunspot number for a given cycle is possible if sunspot cycles are regarded in terms of being either HIGH- or LOW-R(MAX) cycles or LONG- or SHORT-PERIOD cycles, especially the latter. Linear regression analyses were performed comparing late cycle parameters with early cycle parameters and solar cycle number. The early occurring cycle parameters can be used to estimate later occurring cycle parameters with relatively good success, based on cycle 21 as an example. The sunspot cycle record clearly shows that the trend for both R(MIN) and R(MAX) was toward decreasing value between cycles 8 through 14 and toward increasing value between cycles 14 through 20. Linear regression equations were also obtained for several measures of solar activity.

  14. Quality Tetrahedral Mesh Smoothing via Boundary-Optimized Delaunay Triangulation

    PubMed Central

    Gao, Zhanheng; Yu, Zeyun; Holst, Michael

    2012-01-01

    Despite its great success in improving the quality of a tetrahedral mesh, the original optimal Delaunay triangulation (ODT) is designed to move only inner vertices and thus cannot handle input meshes containing “bad” triangles on boundaries. In the current work, we present an integrated approach called boundary-optimized Delaunay triangulation (B-ODT) to smooth (improve) a tetrahedral mesh. In our method, both inner and boundary vertices are repositioned by analytically minimizing the error between a paraboloid function and its piecewise linear interpolation over the neighborhood of each vertex. In addition to the guaranteed volume-preserving property, the proposed algorithm can be readily adapted to preserve sharp features in the original mesh. A number of experiments are included to demonstrate the performance of our method. PMID:23144522

  15. Aerodynamic parameters of across-wind self-limiting vibration for square sections after lock-in in smooth flow

    NASA Astrophysics Data System (ADS)

    Wu, Jong-Cheng; Chang, Feng-Jung

    2011-08-01

    The paper aims to identify the across-wind aerodynamic parameters of two-dimensional square section structures after the lock-in stage from the response measurements of wind tunnel tests under smooth wind flow conditions. Firstly, a conceivable self-limiting model was selected from the existing literature, and a revisit of the analytical solution shows that the aerodynamic parameters (linear and nonlinear aerodynamic dampings Y1 and ɛ, and aerodynamic stiffness Y2) are not only functions of the section shape and reduced wind velocity but also dependent on both the mass ratio (mr) and the structural damping ratio (ξ) independently, rather than on the Scruton number as a whole. Secondly, the growth-to-resonance (GTR) method was adopted for identifying the aerodynamic parameters of four different square section models (DN1, DN2, DN3 and DN4) by varying the density from 226 to 409 kg/m^3. To improve the accuracy of the results, numerical optimization of the curve-fitting between the experimental and analytical responses in the time domain was performed to finalize the results. The experimental results of the across-wind self-limiting steady-state amplitudes after the lock-in stage versus the reduced wind velocity show that, except for the DN1 case, whose tail part slightly decreases, indicating that a pure vortex-induced lock-in persists, the DN2, DN3 and DN4 cases increase monotonically with the reduced wind velocity, which shows an asymptotic combination with the galloping behavior. Due to such a combination effect, all three aerodynamic parameters decrease as the reduced wind velocity increases and asymptotically approach a constant at the high branch. In the DN1 case, the parameters Y1 and Y2 decrease as the reduced wind velocity increases while the parameter ɛ slightly reverses in the tail part. The 3-dimensional surface plot of the Y1, ɛ and Y2 curves further shows that, excluding the DN1 case, the parameters in the DN2, DN3 and DN4 cases almost follow a symmetric concave-up distribution versus the density under the same reduced wind velocity. This indicates that the aerodynamic parameters in the DN3 case are the minima along the density distribution.

  16. Corner smoothing of 2D milling toolpath using b-spline curve by optimizing the contour error and the feedrate

    NASA Astrophysics Data System (ADS)

    Özcan, Abdullah; Rivière-Lorphèvre, Edouard; Ducobu, François

    2018-05-01

    In part manufacturing, an efficient process should minimize the cycle time needed to reach the prescribed quality of the part. To optimize it, the machining time needs to be as low as possible while the quality meets the requirements. For a 2D milling toolpath defined by sharp corners, the programmed feedrate differs from the reachable feedrate due to the kinematic limits of the motor drives. This phenomenon leads to a loss of productivity. Smoothing the toolpath significantly reduces the machining time, but the dimensional accuracy must not be neglected. Therefore, a way to address the problem of optimizing a toolpath in part manufacturing is to take into account both the manufacturing time and the part quality. On one hand, maximizing the feedrate minimizes the manufacturing time; on the other hand, the maximum contour error must be kept under a threshold to meet the quality requirements. This paper presents a method to optimize sharp-corner smoothing using b-spline curves by adjusting the control points defining the curve. The objective function used in the optimization process is based on the contour error and the difference between the programmed feedrate and an estimate of the reachable feedrate, where the estimate is based on geometrical information. Some simulation results are presented in the paper and the machining times are compared in each case.
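
    As a minimal picture of the corner-smoothing step, the sketch below replaces a sharp corner with a cubic Bezier segment (a special case of a cubic b-spline span) whose control points lie along the two incident toolpath segments; moving them farther from the corner smooths the path, allowing a higher feedrate, at the cost of a larger contour error. The control-point placement rule and the distance d are illustrative assumptions, not the paper's optimized layout.

```python
import numpy as np

def smooth_corner(p0, p1, p2, d):
    """Blend the sharp corner p1 (between segments p0-p1 and p1-p2) with a
    cubic Bezier segment. The curve enters and leaves a distance d from the
    corner; the inner control points at d/3 are an illustrative placement
    rule, not the paper's optimized control-point layout."""
    u0 = (p0 - p1) / np.linalg.norm(p0 - p1)
    u2 = (p2 - p1) / np.linalg.norm(p2 - p1)
    c = np.array([p1 + d * u0, p1 + d * u0 / 3.0,
                  p1 + d * u2 / 3.0, p1 + d * u2])
    t = np.linspace(0.0, 1.0, 101)[:, None]
    curve = ((1 - t) ** 3 * c[0] + 3 * (1 - t) ** 2 * t * c[1]
             + 3 * (1 - t) * t ** 2 * c[2] + t ** 3 * c[3])
    # Contour-error proxy: closest approach of the curve to the corner point
    contour_error = np.min(np.linalg.norm(curve - p1, axis=1))
    return curve, contour_error

p0, p1, p2 = np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([10.0, 10.0])
curve, err = smooth_corner(p0, p1, p2, d=2.0)
print(err)   # grows with d: smoother corner, larger contour error
```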

  17. Global Mass Flux Solutions from GRACE: A Comparison of Parameter Estimation Strategies - Mass Concentrations Versus Stokes Coefficients

    NASA Technical Reports Server (NTRS)

    Rowlands, D. D.; Luthcke, S. B.; McCarthy J. J.; Klosko, S. M.; Chinn, D. S.; Lemoine, F. G.; Boy, J.-P.; Sabaka, T. J.

    2010-01-01

    The differences between mass concentration (mascon) parameters and standard Stokes coefficient parameters in the recovery of gravity information from Gravity Recovery and Climate Experiment (GRACE) intersatellite K-band range rate data are investigated. First, mascons are decomposed into their Stokes coefficient representations to gauge the range of solutions available using each of the two types of parameters. Next, a direct comparison is made between two time series of unconstrained gravity solutions, one based on a set of global equal-area mascon parameters (equivalent to 4° × 4° at the equator) and the other based on standard Stokes coefficients, with both time series using the same fundamental processing of the GRACE tracking data. It is shown that in unconstrained solutions, the type of gravity parameter being estimated does not qualitatively affect the estimated gravity field. It is also shown that many of the differences in mass flux derivations from GRACE gravity solutions arise from the type of smoothing being used, and that the type of smoothing that can be embedded in mascon solutions has distinct advantages over post-solution smoothing. Finally, a 1 year time series based on global 2° equal-area mascons estimated every 10 days is presented.

  18. Optimization methods applied to hybrid vehicle design

    NASA Technical Reports Server (NTRS)

    Donoghue, J. F.; Burghart, J. H.

    1983-01-01

    The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one-year period. Fourth, care must be taken in designing the cost and constraint expressions used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
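
    The fourth conclusion (smooth cost and constraint expressions) is easy to illustrate. The sketch below minimizes a made-up smooth life-cycle-cost surrogate over the three design parameters, with an assumed smooth peak-power constraint; every coefficient is illustrative and does not come from the report:

```python
import numpy as np
from scipy.optimize import minimize

P_demand = 60.0   # assumed peak power demand, kW

def life_cycle_cost(x):
    w_batt, p_engine, split = x
    acquisition = 50*w_batt + 120*p_engine            # smooth surrogate terms
    petroleum = 800*split / (1 + 0.01*w_batt)         # differentiable in all variables
    return acquisition + petroleum

# power constraint kept smooth: battery plus engine must cover the peak demand
cons = [{'type': 'ineq', 'fun': lambda x: 0.2*x[0] + x[1] - P_demand}]
res = minimize(life_cycle_cost, x0=[100.0, 50.0, 0.5],
               bounds=[(10, 500), (10, 100), (0, 1)], constraints=cons)
print(res.x, res.fun)
```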

  19. Production of Conductive PEDOT-Coated PVA-GO Composite Nanofibers

    NASA Astrophysics Data System (ADS)

    Zubair, Nur Afifah; Rahman, Norizah Abdul; Lim, Hong Ngee; Sulaiman, Yusran

    2017-02-01

    Electrically conductive nanofiber is well known as an excellent nanostructured material for its outstanding performance. In this work, poly(3,4-ethylenedioxythiophene) (PEDOT)-coated polyvinyl alcohol-graphene oxide (PVA-GO) conducting nanofibers were fabricated via a combined method using electrospinning and electropolymerization techniques. During electrospinning, the concentration of the PVA-GO solution and the applied voltage were systematically varied in order to determine the optimal electrospinning conditions. The optimized parameters were a GO concentration of 0.1 mg/mL and an electrospinning voltage of 15 kV, which gave a smooth nanofibrous morphology and a smaller diameter distribution. The electrospun PVA-GO nanofiber mats were further modified by coating with the conjugated polymer PEDOT using the electropolymerization technique, a facile approach for coating the nanofibers. SEM images of the obtained nanofibers indicated that cauliflower-like structures of PEDOT were successfully grown on the surface of the electrospun nanofibers during the potentiostatic mode of the electropolymerization process. The conductivity of the PEDOT coating depends strongly on the electropolymerization parameters, and suitable settings result in good conductivity of the PEDOT-coated nanofibers. The optimum electropolymerization of PEDOT was achieved at a potential of 1.2 V for 5 min. The electrochemical measurements demonstrated that the fabricated PVA-GO/PEDOT composite nanofiber could enhance the current response and reduce the charge-transfer resistance of the nanofiber.

  20. Fabrication of graded index single crystal in glass

    PubMed Central

    Veenhuizen, Keith; McAnany, Sean; Nolan, Daniel; Aitken, Bruce; Dierolf, Volkmar; Jain, Himanshu

    2017-01-01

    Lithium niobate crystals were grown in 3D through localized heating by femtosecond laser irradiation deep inside 35Li2O-35Nb2O5-30SiO2 glass. Laser scanning speed and power density were systematically varied to control the crystal growth process and determine the optimal conditions for the formation of single-crystal lines. EBSD measurements showed that, in principle, single crystals can be grown to unlimited lengths using optimal parameters. We successfully tuned the parameters to a growth mode in which nucleation and growth occur upon heating, ahead of the scanning laser focus. This growth mode eliminates the problem, reported in previous works, of non-uniform polycrystallinity caused by a separate growth mode in which crystallization occurs during cooling, behind the scanning laser focus. To our knowledge, this is the first report of such a growth mode using a fs laser. The crystal cross sections possessed a symmetric, smooth lattice misorientation with respect to the c-axis orientation at the center of the crystal. Calculations indicate that the observed misorientation leads to a decrease in the refractive index of the crystal line from the center moving outwards, opening the possibility of producing within glass a graded refractive index single crystal (GRISC) optically active waveguide. PMID:28287174

  1. Formulating Spatially Varying Performance in the Statistical Fusion Framework

    PubMed Central

    Landman, Bennett A.

    2012-01-01

    To date, label fusion methods have primarily relied either on global (e.g. STAPLE, globally weighted vote) or voxelwise (e.g. locally weighted vote) performance models. Optimality of the statistical fusion framework hinges upon the validity of the stochastic model of how a rater errs (i.e., the labeling process model). Hitherto, approaches have tended to focus on the extremes of potential models. Herein, we propose an extension to the STAPLE approach to seamlessly account for spatially varying performance by extending the performance level parameters to account for a smooth, voxelwise performance level field that is unique to each rater. This approach, Spatial STAPLE, provides significant improvements over state-of-the-art label fusion algorithms in both simulated and empirical data sets. PMID:22438513
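
    A crude illustration of the idea of a spatially varying performance field; this is not the actual Spatial STAPLE EM algorithm, only a stand-in in which each rater's local agreement with the current consensus is smoothed into a voxelwise weight map before the labels are re-fused:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
truth = (rng.random((64, 64)) > 0.5).astype(float)
# three synthetic raters with different (spatially uniform, here) accuracies
raters = [np.where(rng.random(truth.shape) < p, truth, 1 - truth) for p in (0.9, 0.7, 0.6)]

consensus = np.mean(raters, axis=0) > 0.5
for _ in range(5):  # alternate between performance fields and fused labels
    weights = [gaussian_filter((r == consensus).astype(float), sigma=8) for r in raters]
    consensus = sum(w * r for w, r in zip(weights, raters)) / sum(weights) > 0.5
print("fused accuracy:", (consensus == truth).mean())
```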

  2. Application of a stochastic inverse to the geophysical inverse problem

    NASA Technical Reports Server (NTRS)

    Jordan, T. H.; Minster, J. B.

    1972-01-01

    The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
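
    The stochastic inverse itself is compact enough to state directly. Assuming a discretized linear forward operator G, prior model covariance C, and noise covariance N (all synthetic below), the estimate is m̂ = C Gᵀ (G C Gᵀ + N)⁻¹ d:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(10, 50))          # 10 gross-earth data, 50 model cells
m_true = rng.normal(size=50)
N = 0.01 * np.eye(10)                  # data noise covariance
C = np.eye(50)                         # prior model covariance
d = G @ m_true + rng.multivariate_normal(np.zeros(10), N)

# stochastic inverse: m_hat = C G^T (G C G^T + N)^{-1} d
m_hat = C @ G.T @ np.linalg.solve(G @ C @ G.T + N, d)
print("data misfit:", np.linalg.norm(G @ m_hat - d))
```

    As N tends to zero this estimate tends to the generalized-inverse (Penrose-Moore) solution, consistent with the relation discussed in the abstract.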

  3. Control of Networked Traffic Flow Distribution - A Stochastic Distribution System Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hong; Aziz, H M Abdul; Young, Stan

    Networked traffic flow is a common scenario for urban transportation, where the distribution of vehicle queues either at controlled intersections or on highway segments reflects the smoothness of the traffic flow in the network. At signalized intersections, the traffic queues are governed by the traffic signal control settings, and effective traffic light control would both smooth the traffic flow and minimize fuel consumption. Funded by the Energy Efficient Mobility Systems (EEMS) program of the Vehicle Technologies Office of the US Department of Energy, we performed a preliminary investigation of the modelling and control framework in the context of an urban network of signalized intersections. Specifically, we developed recursive input-output traffic queueing models. The queue formation can be modeled as a stochastic process where the number of vehicles entering each intersection is a random number. Further, we proposed a preliminary B-Spline stochastic model for a one-way single-lane corridor traffic system based on the theory of stochastic distribution control. It has been shown that the developed stochastic model provides the optimal probability density function (PDF) of the traffic queueing length as a dynamic function of the traffic signal setting parameters. Based upon this stochastic distribution model, we have proposed a preliminary closed-loop framework for stochastic distribution control of the traffic queueing system, which makes the traffic queueing-length PDF follow a target PDF that potentially realizes a smooth traffic flow distribution in the corridor of interest.

  4. A geometric projection method for designing three-dimensional open lattices with inverse homogenization

    DOE PAGES

    Watts, Seth; Tortorelli, Daniel A.

    2017-04-13

    Topology optimization is a methodology for assigning material or void to each point in a design domain in a way that extremizes some objective function, such as the compliance of a structure under given loads, subject to various imposed constraints, such as an upper bound on the mass of the structure. Geometry projection is a means to parameterize the topology optimization problem, by describing the design in a way that is independent of the mesh used for analysis of the design's performance; it results in many fewer design parameters, necessarily resolves the ill-posed nature of the topology optimization problem, and provides sharp descriptions of the material interfaces. We extend previous geometric projection work to 3 dimensions and design unit cells for lattice materials using inverse homogenization. We perform a sensitivity analysis of the geometric projection and show it has smooth derivatives, making it suitable for use with gradient-based optimization algorithms. The technique is demonstrated by designing unit cells comprised of a single constituent material plus void space to obtain light, stiff materials with cubic and isotropic material symmetry. Here, we also design a single-constituent isotropic material with negative Poisson's ratio and a light, stiff material comprised of 2 constituent solids plus void space.
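
    A toy 2D analogue of geometry projection (the paper treats 3D lattices): a bar defined by its endpoints and a radius is mapped to a smooth pseudo-density field through a smoothed Heaviside of the signed distance, so the field is differentiable with respect to the bar's geometric parameters; the smoothing width h is an assumed value:

```python
import numpy as np

def bar_density(X, Y, a, b, r, h=0.05):
    ax, ay = a; bx, by = b
    px, py = X - ax, Y - ay
    vx, vy = bx - ax, by - ay
    t = np.clip((px*vx + py*vy) / (vx*vx + vy*vy), 0.0, 1.0)  # closest point on segment
    dist = np.hypot(px - t*vx, py - t*vy)
    phi = (r - dist) / h                                      # scaled signed distance
    # cubic smoothed Heaviside, C^1 at the band edges, clipped to [0, 1]
    return np.clip(0.5 + 0.75*phi - 0.25*phi**3, 0.0, 1.0)

X, Y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
rho = bar_density(X, Y, a=(0.2, 0.2), b=(0.8, 0.7), r=0.08)
print(rho.min(), rho.max())
```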

  5. Optimization of cathodic arc deposition and pulsed plasma melting techniques for growing smooth superconducting Pb photoemissive films for SRF injectors

    DOE PAGES

    Nietubyc, Robert; Lorkiewicz, Jerzy; Sekutowicz, Jacek; ...

    2018-02-14

    Superconducting photoinjectors have the potential to be the optimal solution for moderate- and high-current cw operating free electron lasers. For this application, a superconducting lead (Pb) cathode has been proposed to simplify the cathode integration into a 1.3 GHz, TESLA-type, 1.6-cell, purely superconducting gun cavity. In the proposed design, a lead film several micrometres thick is deposited onto a niobium plug attached to the cavity back wall. Traditional lead deposition techniques usually produce very non-uniform emission surfaces and often result in poor adhesion of the layer. A pulsed plasma melting procedure reducing the non-uniformity of the lead photocathodes is presented. In order to determine the optimal parameters for this procedure, heat transfer from the plasma to the film was first modelled to evaluate the melting front penetration range and the liquid-state duration. The obtained results were verified by surface inspection of witness samples. The optimal procedure was used to prepare a photocathode plug, which was then tested in an electron gun. In conclusion, the quantum efficiency and the cavity quality factor have been found to satisfy the requirements for an injector of the European XFEL facility.

  6. Global Earthquake Activity Rate models based on version 2 of the Global Strain Rate Map

    NASA Astrophysics Data System (ADS)

    Bird, P.; Kreemer, C.; Kagan, Y. Y.; Jackson, D. D.

    2013-12-01

    Global Earthquake Activity Rate (GEAR) models have usually been based on either relative tectonic motion (fault slip rates and/or distributed strain rates), or on smoothing of seismic catalogs. However, a hybrid approach appears to perform better than either parent, at least in some retrospective tests. First, we construct a Tectonic ('T') forecast of shallow (≤ 70 km) seismicity based on global plate-boundary strain rates from version 2 of the Global Strain Rate Map. Our approach is the SHIFT (Seismic Hazard Inferred From Tectonics) method described by Bird et al. [2010, SRL], in which the character of the strain rate tensor (thrusting and/or strike-slip and/or normal) is used to select the most comparable type of plate boundary for calibration of the coupled seismogenic lithosphere thickness and corner magnitude. One difference is that activity of offshore plate boundaries is spatially smoothed using empirical half-widths [Bird & Kagan, 2004, BSSA] before conversion to seismicity. Another is that the velocity-dependence of coupling in subduction and continental-convergent boundaries [Bird et al., 2009, BSSA] is incorporated. Another forecast component is the smoothed-seismicity ('S') forecast model of [Kagan & Jackson, 1994, JGR; Kagan & Jackson, 2010, GJI], which was based on optimized smoothing of the shallow part of the GCMT catalog, years 1977-2004. Both forecasts were prepared for threshold magnitude 5.767. Then, we create hybrid forecasts by one of 3 methods: (a) taking the greater of S or T; (b) simple weighted-average of S and T; or (c) log of the forecast rate is a weighted average of the logs of S and T. In methods (b) and (c) there is one free parameter, which is the fractional contribution from S. All hybrid forecasts are normalized to the same global rate. Pseudo-prospective tests for 2005-2012 (using versions of S and T calibrated on years 1977-2004) show that many hybrid models outperform both parents (S and T), and that the optimal weight on S is in the neighborhood of 5/8. This is true whether forecast performance is scored by Kagan's [2009, GJI] I1 information score, or by the S-test of Zechar & Jordan [2010, BSSA]. These hybrids also score well (0.97) in the ASS-test of Zechar & Jordan [2008, GJI] with respect to prior relative intensity.
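
    The three hybrid combinations are simple to state. A sketch on stand-in rate grids, with each hybrid renormalized to a common global rate as the abstract describes (the 5/8 weight is the reported optimum; the grids here are random placeholders):

```python
import numpy as np

def hybrid(S, T, w=5/8, method="log"):
    if method == "max":
        H = np.maximum(S, T)                     # method (a): greater of S or T
    elif method == "avg":
        H = w*S + (1 - w)*T                      # method (b): weighted average
    else:
        H = S**w * T**(1 - w)                    # method (c): log-linear blend
    return H * S.sum() / H.sum()                 # normalize to the common global rate

rng = np.random.default_rng(2)
S, T = rng.random((90, 180)), rng.random((90, 180))
for m in ("max", "avg", "log"):
    print(m, hybrid(S, T, method=m).sum())
```

    The log-linear blend (c) is equivalent to taking log H = w log S + (1 − w) log T before renormalization.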

  7. A rapid quantification method for the screening indicator for β-thalassemia with near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Chen, Jiemei; Peng, Lijun; Han, Yun; Yao, Lijun; Zhang, Jing; Pan, Tao

    2018-03-01

    Near-infrared (NIR) spectroscopy combined with chemometrics was applied to rapidly analyse haemoglobin A2 (HbA2) for β-thalassemia screening in human haemolysate samples. The relative content indicator HbA2 was indirectly quantified by simultaneous analysis of two absolute content indicators (Hb and Hb·HbA2). According to the comprehensive prediction effect over multiple partitionings of the calibration and prediction sets, the parameters were optimized to achieve modelling stability, and the preferred models were validated using samples not involved in modelling. Savitzky-Golay smoothing was first used for spectral pretreatment. Absorbance optimization partial least squares (AO-PLS) was used to appropriately eliminate high-absorption wavebands. Equidistant combination PLS (EC-PLS) was further used to optimize the wavelength models. The selected optimal models were I = 856 nm, N = 16, G = 1 and F = 6 for Hb, and I = 988 nm, N = 12, G = 2 and F = 5 for Hb·HbA2. Through independent validation, the root-mean-square errors and correlation coefficients for prediction (RMSEP, R_P) were 3.50 g L⁻¹ and 0.977 for Hb, and 0.38 g L⁻¹ and 0.917 for Hb·HbA2, respectively. The predicted values of relative percentage HbA2 were further calculated, and the corresponding RMSEP and R_P were 0.31% and 0.965, respectively. The sensitivity and specificity for β-thalassemia both reached 100%. Therefore, the prediction of HbA2 achieved accuracy high enough to distinguish β-thalassemia. Local optimal models for single parameters and optimal equivalent model sets were proposed, providing more models to match possible constraints in practical applications. An NIR analysis method for the screening indicator of β-thalassemia was thus successfully established. The proposed method is rapid, simple and promising for thalassemia screening in a large population.
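
    A sketch of the pre-processing and modelling chain on synthetic spectra: Savitzky-Golay smoothing followed by PLS regression restricted to a wavelength window. The window start I, number of wavelengths N, and factor count F mirror the abstract's reported Hb settings; the gap parameter G of EC-PLS is omitted for brevity, and the data are stand-ins:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
wavelengths = np.arange(700, 1100, 2)            # synthetic NIR grid, nm
X = rng.normal(size=(60, wavelengths.size))      # haemolysate spectra (stand-in)
y = X[:, 80] * 3.0 + rng.normal(scale=0.1, size=60)   # stand-in Hb values

X_s = savgol_filter(X, window_length=11, polyorder=2, axis=1)   # S-G pretreatment
window = (wavelengths >= 856) & (wavelengths < 856 + 16*2)      # I = 856 nm, N = 16
model = PLSRegression(n_components=6).fit(X_s[:, window], y)    # F = 6 factors
print("R^2 on training data:", model.score(X_s[:, window], y))
```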

  8. Smooth centile curves for skew and kurtotic data modelled using the Box-Cox power exponential distribution.

    PubMed

    Rigby, Robert A; Stasinopoulos, D Mikis

    2004-10-15

    The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^ν having a shifted and scaled (truncated) standard power exponential distribution with parameter τ. The distribution has four parameters and is denoted BCPE(μ, σ, ν, τ). The parameters μ, σ, ν and τ may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood with respect to μ, σ, ν and τ, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation generalizes the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution, and is named here the LMSP method of centile estimation. The LMSP method is applied to modelling the body mass index of Dutch males against age. Copyright © 2004 John Wiley & Sons, Ltd.
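
    A sketch of BCPE centile computation under stated simplifications: scipy's gennorm is the power exponential family, the truncation mentioned in the abstract is ignored, the quantile is standardized to unit variance, and the parameter values in the example are purely illustrative:

```python
import numpy as np
from scipy.stats import gennorm

def bcpe_centile(alpha, mu, sigma, nu, tau):
    # unit-variance power exponential quantile (truncation ignored in this sketch)
    z = gennorm.ppf(alpha, tau) / np.sqrt(gennorm.var(tau))
    if abs(nu) < 1e-8:
        return mu * np.exp(sigma * z)        # limit of the Box-Cox transform as nu -> 0
    return mu * (1 + sigma * nu * z) ** (1 / nu)

# e.g. a 97th centile for illustrative BMI-like parameter values
print(bcpe_centile(0.97, mu=21.0, sigma=0.12, nu=-1.3, tau=1.8))
```

    Setting τ = 2 recovers the normal case, i.e. the original LMS method.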

  9. Advanced optic fabrication using ultrafast laser radiation

    NASA Astrophysics Data System (ADS)

    Taylor, Lauren L.; Qiao, Jun; Qiao, Jie

    2016-03-01

    Advanced fabrication and finishing techniques are desired for freeform optics and integrated photonics. Methods including grinding, polishing and magnetorheological finishing used for final figuring and polishing of such optics are time consuming, expensive, and may be unsuitable for complex surface features while common photonics fabrication techniques often limit devices to planar geometries. Laser processing has been investigated as an alternative method for optic forming, surface polishing, structure writing, and welding, as direct tuning of laser parameters and flexible beam delivery are advantageous for complex freeform or photonics elements and material-specific processing. Continuous wave and pulsed laser radiation down to the nanosecond regime have been implemented to achieve nanoscale surface finishes through localized material melting, but the temporal extent of the laser-material interaction often results in the formation of a sub-surface heat affected zone. The temporal brevity of ultrafast laser radiation can allow for the direct vaporization of rough surface asperities with minimal melting, offering the potential for smooth, final surface quality with negligible heat affected material. High intensities achieved in focused ultrafast laser radiation can easily induce phase changes in the bulk of materials for processing applications. We have experimentally tested the effectiveness of ultrafast laser radiation as an alternative laser source for surface processing of monocrystalline silicon. Simulation of material heating associated with ultrafast laser-material interaction has been performed and used to investigate optimized processing parameters including repetition rate. The parameter optimization process and results of experimental processing will be presented.

  10. Automatic selection of optimal Savitzky-Golay filter parameters for Coronary Wave Intensity Analysis.

    PubMed

    Rivolo, Simone; Nagel, Eike; Smith, Nicolas P; Lee, Jack

    2014-01-01

    Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. The ability of cWIA to establish a mechanistic link between coronary haemodynamic measurements and the underlying pathophysiology has been widely demonstrated, and the prognostic value of a cWIA-derived metric has recently been proved. However, the clinical application of cWIA has been hindered by a strong dependence on the practitioner, mainly ascribable to the sensitivity of the cWIA-derived indices to the pre-processing parameters. Specifically, as recently demonstrated, the cWIA-derived metrics are strongly sensitive to the Savitzky-Golay (S-G) filter typically used to smooth the acquired traces. This is mainly due to the inability of the S-G filter to deal with the different timescale features present in the measured waveforms. Therefore, we propose to apply an adaptive S-G algorithm that automatically selects the optimal filter parameters pointwise. The accuracy of the newly proposed algorithm is assessed against a cWIA gold standard, provided by a newly developed in-silico cWIA modelling framework, when physiological noise is added to the simulated traces. The adaptive S-G algorithm, when used to automatically select the polynomial degree of the S-G filter, provides satisfactory results, with ≤ 10% error for all the metrics at all levels of noise tested. The newly proposed method therefore makes cWIA fully automatic and independent of the practitioner, opening the possibility of multi-centre trials.
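
    A sketch of the pointwise-adaptive idea, using local polynomial fits (the operation underlying S-G filtering). This is a plausible stand-in for the paper's selection rule, not its exact algorithm: at each sample, candidate polynomial degrees are fitted on half of a local window and scored on the other half, and the best degree supplies the smoothed value:

```python
import numpy as np

def adaptive_savgol(x, window=21, degrees=(2, 3, 4, 5)):
    half = window // 2
    xp = np.pad(x, half, mode='reflect')
    out = np.empty_like(x)
    t = np.arange(window)
    for i in range(len(x)):
        seg = xp[i:i + window]
        best, best_err = 0.0, np.inf
        for d in degrees:
            coef = np.polyfit(t[::2], seg[::2], d)                # fit on even samples
            err = np.mean((np.polyval(coef, t[1::2]) - seg[1::2])**2)  # test on odd
            if err < best_err:
                best_err, best = err, np.polyval(coef, half)      # value at window centre
        out[i] = best
    return out

u = np.linspace(0, 1, 400)
noisy = np.sin(12*u) + 0.4*np.maximum(0, u - 0.5) \
        + np.random.default_rng(4).normal(0, 0.05, 400)
smoothed = adaptive_savgol(noisy)
```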

  11. A computational framework for simultaneous estimation of muscle and joint contact forces and body motion using optimization and surrogate modeling.

    PubMed

    Eskinazi, Ilan; Fregly, Benjamin J

    2018-04-01

    Concurrent estimation of muscle activations, joint contact forces, and joint kinematics by means of gradient-based optimization of musculoskeletal models is hindered by computationally expensive and non-smooth joint contact and muscle wrapping algorithms. We present a framework that simultaneously speeds up computation and removes sources of non-smoothness from muscle force optimizations using a combination of parallelization and surrogate modeling, with special emphasis on a novel method for modeling joint contact as a surrogate model of a static analysis. The approach allows one to efficiently introduce elastic joint contact models within static and dynamic optimizations of human motion. We demonstrate the approach by performing two optimizations, one static and one dynamic, using a pelvis-leg musculoskeletal model undergoing a gait cycle. We observed convergence on the order of seconds for a static optimization time frame and on the order of minutes for an entire dynamic optimization. The presented framework may facilitate model-based efforts to predict how planned surgical or rehabilitation interventions will affect post-treatment joint and muscle function. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.

  12. Knowledge-based system for detailed blade design of turbines

    NASA Astrophysics Data System (ADS)

    Goel, Sanjay; Lamson, Scott

    1994-03-01

    A design optimization methodology that couples optimization techniques to CFD analysis for the design of airfoils is presented. This technique optimizes the 2D airfoil sections of a blade by minimizing the deviation of the actual Mach number distribution on the blade surface from a smooth fit of that distribution. The airfoil is not reverse-engineered by specifying a precise desired Mach number distribution; only general desired characteristics of the distribution are specified for the design. Since the Mach number distribution is very complex and cannot be conveniently represented by a single polynomial, it is partitioned into segments, each of which is characterized by a different-order polynomial. The sum of the deviations of all the segments is minimized during optimization. To make intelligent changes to the airfoil geometry, the geometry needs to be associated with features observed in the Mach number distribution; associating the geometry parameters with independent features of the distribution is a fairly complex task. Also, for different optimization techniques to work efficiently, the airfoil geometry needs to be parameterized into independent parameters with enough degrees of freedom for adequate geometry manipulation. A high-pressure, low-reaction steam turbine blade section was optimized using this methodology. The Mach number distribution was partitioned into pressure and suction surfaces, and the suction surface distribution was further subdivided into leading edge, mid-section and trailing edge segments. Two different airfoil representation schemes were used for defining the design variables of the optimization problem. The optimization was performed using a combination of heuristic search and numerical optimization. The optimization results for the two schemes are discussed in the paper, and the results are also compared to a manual design improvement study conducted independently by an experienced airfoil designer. The turbine blade optimization system (TBOS) is developed using the described methodology of coupling knowledge engineering with multiple search techniques for blade shape optimization. TBOS removes a major bottleneck in the design cycle by performing multiple design optimizations in parallel, and improves design quality at the same time. TBOS not only improves the design but also the designers' quality of work, by taking away the mundane repetitive task of design iterations and leaving them more time for innovative design.
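
    A sketch of the objective described, with assumed segment breaks and polynomial orders: the surface Mach distribution is partitioned, each segment is fitted with its own polynomial, and the summed squared deviation is the quantity the optimizer would drive down by reshaping the airfoil:

```python
import numpy as np

def smoothness_deviation(s, mach, breaks=(0.0, 0.15, 0.7, 1.0), orders=(3, 2, 3)):
    # breaks partition the arc length into leading edge / mid / trailing edge segments
    total = 0.0
    for (a, b), k in zip(zip(breaks[:-1], breaks[1:]), orders):
        m = (s >= a) & (s <= b)
        fit = np.polyval(np.polyfit(s[m], mach[m], k), s[m])   # smooth fit per segment
        total += np.sum((mach[m] - fit)**2)                    # deviation from the fit
    return total

s = np.linspace(0, 1, 200)                     # arc length along the suction surface
mach = 0.8 + 0.3*np.sin(3*s) + 0.02*np.random.default_rng(5).normal(size=200)
print(smoothness_deviation(s, mach))
```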

  13. On Asymptotic Behaviour and W^{2,p} Regularity of Potentials in Optimal Transportation

    NASA Astrophysics Data System (ADS)

    Liu, Jiakun; Trudinger, Neil S.; Wang, Xu-Jia

    2015-03-01

    In this paper we study local properties of cost and potential functions in optimal transportation. We prove that in a proper normalization process, the cost function is uniformly smooth and converges locally smoothly to a quadratic cost x · y, while the potential function converges to a quadratic function. As applications we obtain the interior W^{2,p} estimates and sharp C^{1,α} estimates for the potentials, which satisfy a Monge-Ampère type equation. The W^{2,p} estimate was previously proved by Caffarelli for the quadratic transport cost and the associated standard Monge-Ampère equation.

  14. Thermal contact conductance as a method of rectification in bulk materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayer, Robert A.

    2016-08-01

    A thermal rectifier that utilizes thermal expansion to directionally control the interfacial conductance between two contacting surfaces is presented. The device consists of two thermal reservoirs contacting a beam with one rough and one smooth end. When the temperature of the reservoir in contact with the smooth surface is raised, a similar temperature rise will occur in the beam, causing it to expand, thus increasing the contact pressure at the rough interface and reducing the interfacial contact resistance. However, if the temperature of the reservoir in contact with the rough interface is raised, the large contact resistance will prevent a similar temperature rise in the beam. As a result, the contact pressure will be marginally affected and the contact resistance will not change appreciably. Owing to the decreased contact resistance of the first scenario compared to the second, thermal rectification occurs. A parametric analysis is used to determine optimal device parameters, including surface roughness, contact pressure, and device length. Modeling predicts that rectification factors greater than 2 are possible at thermal biases as small as 3 K. Lastly, thin surface coatings are discussed as a method to control the temperature bias at which maximum rectification occurs.

  15. Nonsmooth, nonconvex regularizers applied to linear electromagnetic inverse problems

    NASA Astrophysics Data System (ADS)

    Hidalgo-Silva, H.; Gomez-Trevino, E.

    2017-12-01

    Tikhonov's regularization method is the standard technique applied to obtain models of the subsurface conductivity distribution from electric or electromagnetic measurements by solving U_T(m) = |F(m) − d|² + λP(m). The second term corresponds to the stabilizing functional, with P(m) = |∇m|² the usual choice, and λ is the regularization parameter. Because of this roughness penalizer, the model developed by Tikhonov's algorithm tends to smear discontinuities, a feature that may be undesirable. An important requirement for the regularizer is to allow the recovery of edges while smoothing the homogeneous parts. As is well known, Total Variation (TV) is now the standard approach to meet this requirement. Recently, Wang et al. proved convergence of the alternating direction method of multipliers for nonconvex, nonsmooth optimization. In this work we present a study of several algorithms for model recovery from geosounding data based on infimal convolution, as well as on hybrid, TV and second-order TV, and nonsmooth, nonconvex regularizers, observing their performance on synthetic and real data. The algorithms are based on Bregman iteration and the Split Bregman method, and the geosounding method is the low-induction-number magnetic dipole method. Nonsmooth regularizers are handled using the Legendre-Fenchel transform.
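
    To contrast the two penalties on a 1D linear toy problem: the closed-form Tikhonov solution smears a conductivity step, while a total-variation penalty preserves it. The TV term is Huber-smoothed and minimized with L-BFGS purely for brevity; the paper's solvers are Bregman iteration and Split Bregman on the true nonsmooth functionals, and the kernel and noise level below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n = 80
m_true = np.where(np.arange(n) < n//2, 1.0, 3.0)        # conductivity step
G = np.exp(-np.abs(np.subtract.outer(np.linspace(0, 1, 40),
                                     np.linspace(0, 1, n))) / 0.2)  # smoothing kernel
d = G @ m_true + rng.normal(0, 0.05, 40)

L = np.diff(np.eye(n), axis=0)                          # first-difference operator
lam = 5.0
m_tik = np.linalg.solve(G.T @ G + lam * L.T @ L, G.T @ d)   # Tikhonov closed form

def tv_obj(m, eps=1e-3):
    r = G @ m - d
    g = L @ m
    return r @ r + lam * np.sum(np.sqrt(g*g + eps))     # Huberized |grad m|_1

m_tv = minimize(tv_obj, m_tik, method='L-BFGS-B').x     # edge-preserving solution
```

    Comparing m_tik and m_tv against m_true shows the characteristic smearing versus edge recovery the abstract describes.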

  16. Quantitative surface topography assessment of directly compressed and roller compacted tablet cores using photometric stereo image analysis.

    PubMed

    Allesø, Morten; Holm, Per; Carstensen, Jens Michael; Holm, René

    2016-05-25

    Surface topography, in the context of surface smoothness/roughness, was investigated by the use of an image analysis technique, MultiRay™, related to photometric stereo, on different tablet batches manufactured either by direct compression or roller compaction. In the present study, oblique illumination of the tablet (darkfield) was considered and the area of cracks and pores in the surface was used as a measure of tablet surface topography; the higher a value, the rougher the surface. The investigations demonstrated a high precision of the proposed technique, which was able to rapidly (within milliseconds) and quantitatively measure the obtained surface topography of the produced tablets. Compaction history, in the form of applied roll force and tablet punch pressure, was also reflected in the measured smoothness of the tablet surfaces. Generally it was found that a higher degree of plastic deformation of the microcrystalline cellulose resulted in a smoother tablet surface. This altogether demonstrated that the technique provides the pharmaceutical developer with a reliable, quantitative response parameter for visual appearance of solid dosage forms, which may be used for process and ultimately product optimization. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Smooth-pursuit eye-movement-related neuronal activity in macaque nucleus reticularis tegmenti pontis.

    PubMed

    Suzuki, David A; Yamada, Tetsuto; Yee, Robert D

    2003-04-01

    Neuronal responses that were observed during smooth-pursuit eye movements were recorded from cells in rostral portions of the nucleus reticularis tegmenti pontis (rNRTP). The responses were categorized as smooth-pursuit eye velocity (78%) or eye acceleration (22%). A separate population of rNRTP cells encoded static eye position. The sensitivity to pursuit eye velocity averaged 0.81 spikes/s per deg/s, whereas the average sensitivity to pursuit eye acceleration was 0.20 spikes/s per deg/s². Of the eye-velocity cells with horizontal preferences for pursuit responses, 56% were optimally responsive to contraversive smooth-pursuit eye movements and 44% preferred ipsiversive pursuit. For cells with vertical pursuit preferences, 61% preferred upward pursuit and 39% preferred downward pursuit. The direction selectivity was broad, with 50% of the maximal response amplitude observed for directions of smooth pursuit up to ±85° away from the optimal direction. The activities of some rNRTP cells were linearly related to eye position, with an average sensitivity of 2.1 spikes/s per deg. In some cells, the magnitude of the response during smooth-pursuit eye movements was affected by the position of the eyes even though these cells did not encode eye position. On average, pursuit centered to one side of screen center elicited a response that was 73% of the response amplitude obtained with tracking centered at screen center. For pursuit centered on the opposite side, the average response was 127% of the response obtained at screen center. The results provide a neuronal rationale for the slow, pursuit-like eye movements evoked with rNRTP microstimulation and for the deficits in smooth-pursuit eye movements observed with ibotenic acid injection into rNRTP. More globally, the results support the notion of a frontal and supplementary eye field-rNRTP-cerebellum pathway involved in controlling smooth-pursuit eye movements.

  18. Image segmentation on adaptive edge-preserving smoothing

    NASA Astrophysics Data System (ADS)

    He, Kun; Wang, Dan; Zheng, Xiuqing

    2016-09-01

    Nowadays, typical active contour models are widely applied in image segmentation. However, they perform badly on real images with inhomogeneous subregions. To overcome this drawback, this paper proposes an edge-preserving smoothing image segmentation algorithm. First, the paper analyzes the edge-preserving smoothing conditions for image segmentation and constructs an edge-preserving smoothing model inspired by total variation. The proposed model has the ability to smooth inhomogeneous subregions while preserving edges. Then, a clustering algorithm, which reasonably trades off edge preservation against subregion smoothing according to local information, is employed to learn the edge-preserving parameter adaptively. Finally, according to the confidence level of the segmentation subregions, the paper constructs a smoothing convergence condition to avoid oversmoothing. Experiments indicate that the proposed algorithm has superior performance in precision, recall, and F-measure compared with other segmentation algorithms, and that it is insensitive to noise and inhomogeneous regions.

  1. Investigation of noise in gear transmissions by the method of mathematical smoothing of experiments

    NASA Technical Reports Server (NTRS)

    Sheftel, B. T.; Lipskiy, G. K.; Ananov, P. P.; Chernenko, I. K.

    1973-01-01

    A rotatable central component smoothing method is used to analyze rotating gear noise spectra. A matrix is formulated in which the randomized rows correspond to the various tests and the columns to the factor values. Canonical analysis of the obtained regression equation permits the calculation of the optimal speed and load at a previously assigned noise level.

  2. THE EFFECTS OF SPATIAL SMOOTHING ON SOLAR MAGNETIC HELICITY PARAMETERS AND THE HEMISPHERIC HELICITY SIGN RULE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ocker, Stella Koch; Petrie, Gordon, E-mail: socker@oberlin.edu, E-mail: gpetrie@nso.edu

    The hemispheric preference for negative/positive helicity to occur in the northern/southern solar hemisphere provides clues to the causes of twisted, flaring magnetic fields. Previous studies on the hemisphere rule may have been affected by seeing from atmospheric turbulence. Using Hinode/SOT-SP data spanning 2006–2013, we studied the effects of two spatial smoothing tests that imitate atmospheric seeing: noise reduction by ignoring pixel values weaker than the estimated noise threshold, and Gaussian spatial smoothing. We studied in detail the effects of atmospheric seeing on the helicity distributions across various field strengths for active regions (ARs) NOAA 11158 and NOAA 11243, in addition to studying the average helicities of 179 ARs with and without smoothing. We found that, rather than changing trends in the helicity distributions, spatial smoothing modified existing trends by reducing random noise and by regressing outliers toward the mean, or removing them altogether. Furthermore, the average helicity parameter values of the 179 ARs did not conform to the hemisphere rule: independent of smoothing, the weak-vertical-field values tended to be negative in both hemispheres, and the strong-vertical-field values tended to be positive, especially in the south. We conclude that spatial smoothing does not significantly affect the overall statistics for space-based data, and thus seeing from atmospheric turbulence seems not to have significantly affected previous studies' ground-based results on the hemisphere rule.
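
    A sketch of the two smoothing tests on a stand-in helicity-parameter map; the threshold and blur width are assumptions, not the paper's values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(7)
h = rng.normal(0, 1, (256, 256))           # stand-in helicity parameter map
h[100:150, 100:150] += 4.0                 # a coherent patch of one helicity sign

noise_sigma = 1.0
h_thresh = np.where(np.abs(h) > 3*noise_sigma, h, 0.0)   # test 1: noise masking
h_smooth = gaussian_filter(h, sigma=2.0)                  # test 2: seeing-like blur

for name, m in [("raw", h), ("threshold", h_thresh), ("smoothed", h_smooth)]:
    print(name, "mean helicity:", m.mean())
```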

  3. Study the Effect of SiO2 Based Flux on Dilution in Submerged Arc Welding

    NASA Astrophysics Data System (ADS)

    Kumar, Aditya; Maheshwari, Sachin

    2017-08-01

    This paper highlights a method for the prediction of dilution in submerged arc welding (SAW). The most important features of weld bead geometry are governed by the weld dilution, which controls the chemical and mechanical properties. The submerged arc welding process is widely used owing to its easy control of process variables, good penetration, high weld quality, and smooth finish. Suitable weld quality can be achieved with different compositions of the flux in the weld. In the present study the SiO2-Al2O3-CaO flux system was used, with NiO, MnO and MgO mixed into the SiO2-based flux in various proportions. The paper investigates the relationship between the process parameters, such as voltage and the percentage of flux constituents, and dilution with the help of Taguchi's method. The experiments were designed according to a Taguchi L9 orthogonal array, varying the voltage at two different levels in addition to the alloying elements. The optimal conditions were then verified by confirmatory experiments.

  4. Optimizing Performance Parameters of Chemically-Derived Graphene/p-Si Heterojunction Solar Cell.

    PubMed

    Batra, Kamal; Nayak, Sasmita; Behura, Sanjay K; Jani, Omkar

    2015-07-01

    Chemically-derived graphene has been synthesized by the modified Hummers method and reduced using sodium borohydride. To explore its potential for photovoltaic applications, graphene/p-silicon (Si) heterojunction devices were fabricated using a simple and cost-effective technique called spin coating. SEM analysis shows the formation of graphene oxide (GO) flakes, which become smooth after reduction. The absence of oxygen-containing functional groups, as observed in FT-IR spectra, reveals the reduction of GO, i.e., reduced graphene oxide (rGO). This was further confirmed by Raman analysis, which shows a slight reduction in G-band intensity with respect to the D-band. Hall effect measurement confirmed the n-type nature of rGO. Therefore, an effort has been made to simulate the rGO/p-Si heterojunction device using one-dimensional solar cell capacitance software, considering the experimentally derived parameters. A detailed analysis of the effects of Si thickness, graphene thickness and temperature on the performance of the device is presented.

  5. The effects of DRIE operational parameters on vertically aligned micropillar arrays

    NASA Astrophysics Data System (ADS)

    Miller, Kane; Li, Mingxiao; Walsh, Kevin M.; Fu, Xiao-An

    2013-03-01

    Vertically aligned silicon micropillar arrays have been created by deep reactive ion etching (DRIE) and used for a number of microfabricated devices, including microfluidic devices, micropreconcentrators and photovoltaic cells. This paper delineates an experimental design performed on the Bosch DRIE process for micropillar arrays. The arrays are fabricated with direct-write optical lithography without a photomask, and the effects of the DRIE process parameters, including etch cycle time, passivation cycle time, platen power and coil power, on profile angle, scallop depth and scallop peak-to-peak distance are studied by statistical design of experiments. Scanning electron microscope images are used for measuring the resultant profile angles and characterizing the scalloping effect on the pillar sidewalls. The experimental results indicate the effects of the determining factors, etch cycle time, passivation cycle time and platen power, on the micropillar profile angles and scallop depths. An optimized DRIE process recipe for creating nearly 90° profiles with smooth (invisible scalloping) sidewalls has been obtained as a result of the statistical design of experiments.

  6. Development, optimization, and in vitro characterization of dasatinib-loaded PEG functionalized chitosan capped gold nanoparticles using Box-Behnken experimental design.

    PubMed

    Adena, Sandeep Kumar Reddy; Upadhyay, Mansi; Vardhan, Harsh; Mishra, Brahmeshwar

    2018-03-01

    The purpose of this research study was to develop, optimize, and characterize dasatinib-loaded polyethylene glycol (PEG)-stabilized chitosan-capped gold nanoparticles (DSB-PEG-Ch-GNPs). Gold(III) chloride hydrate was reduced with chitosan, and the resulting nanoparticles were coated with thiol-terminated PEG and loaded with dasatinib (DSB). A Plackett-Burman design (PBD) followed by a Box-Behnken experimental design (BBD) was employed to optimize the process parameters. Polynomial equations, contour plots, and 3D response surface plots were generated to relate the factors and responses. The optimized DSB-PEG-Ch-GNPs were characterized by FTIR, XRD, HR-SEM, EDX, TEM, SAED, AFM, DLS, and ZP. The optimized DSB-PEG-Ch-GNPs showed a particle size (PS) of 24.39 ± 1.82 nm, an apparent drug content (ADC) of 72.06 ± 0.86%, and a zeta potential (ZP) of -13.91 ± 1.21 mV. The observed responses and the predicted values of the optimized process were found to be close. Shape and surface morphology studies showed that the resulting DSB-PEG-Ch-GNPs were spherical and smooth. Stability and in vitro drug release studies confirmed that the optimized formulation was stable under different storage conditions, exhibited sustained release of up to 76% of the drug in 48 h, and followed the Korsmeyer-Peppas release kinetic model. A process for preparing gold nanoparticles using chitosan, anchoring PEG to the particle surface, and entrapping dasatinib in the chitosan-PEG surface corona was thus optimized.

  7. Arbitrary Shape Deformation in CFD Design

    NASA Technical Reports Server (NTRS)

    Landon, Mark; Perry, Ernest

    2014-01-01

    Sculptor(R) is a commercially available software tool, based on Arbitrary Shape Design (ASD), which allows the user to perform shape optimization for computational fluid dynamics (CFD) design. The developed software tool provides important advances in the state of the art of automatic CFD shape deformation and optimization software. CFD is an analysis tool that is used by engineering designers to help gain a greater understanding of the fluid flow phenomena involved in the components being designed. The next step in the engineering design process is then to modify the design to improve the components' performance. This step has traditionally been performed manually via trial and error. Two major problems that have, in the past, hindered the development of automated CFD shape optimization are (1) inadequate shape parameterization algorithms, and (2) inadequate algorithms for CFD grid modification. The ASD that has been developed as part of the Sculptor(R) software tool is a major advancement in solving these two issues. First, the ASD allows the CFD designer to freely create his own shape parameters, thereby eliminating the restriction of only being able to use the CAD model parameters. Then, the software performs a smooth volumetric deformation, which eliminates the extremely costly process of having to remesh the grid for every shape change (which is how this process had previously been achieved). Sculptor(R) can be used to optimize shapes for the aerodynamic and structural design of spacecraft, aircraft, watercraft, ducts, and other objects that affect and are affected by flows of fluids and heat. Sculptor(R) makes it possible to perform, in real time, a design change that would manually take hours or days if remeshing were needed.

  8. Inferring neural activity from BOLD signals through nonlinear optimization.

    PubMed

    Vakorin, Vasily A; Krakovska, Olga O; Borowsky, Ron; Sarty, Gordon E

    2007-11-01

    The blood oxygen level-dependent (BOLD) fMRI signal does not measure neuronal activity directly. This fact is a key concern for interpreting functional imaging data based on BOLD. Mathematical models describing the path from neural activity to the BOLD response allow us to numerically solve the inverse problem of estimating the timing and amplitude of the neuronal activity underlying the BOLD signal. In fact, these models can be viewed as an advanced substitute for the impulse response function. In this work, the issue of estimating the dynamics of neuronal activity from the observed BOLD signal is considered within the framework of optimization problems. The model is based on the extended "balloon" model and describes the conversion of neuronal signals into the BOLD response through the transitional dynamics of the blood flow-inducing signal, cerebral blood flow, cerebral blood volume and deoxyhemoglobin concentration. Global optimization techniques are applied to find a control input (the neuronal activity and/or the biophysical parameters in the model) that causes the system to follow an admissible solution to minimize discrepancy between model and experimental data. As an alternative to a local linearization (LL) filtering scheme, the optimization method escapes the linearization of the transition system and provides a possibility to search for the global optimum, avoiding spurious local minima. We have found that the dynamics of the neural signals and the physiological variables as well as the biophysical parameters can be robustly reconstructed from the BOLD responses. Furthermore, it is shown that spiking off/on dynamics of the neural activity is the natural mathematical solution of the model. Incorporating, in addition, the expansion of the neural input by smooth basis functions, representing a low-pass filtering, allows us to model local field potential (LFP) solutions instead of spiking solutions.

  9. Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Tangpatiphan, Kritsana; Yokoyama, Akihiko

    This paper presents an Improved Evolutionary Programming (IEP) method for solving the Optimal Power Flow (OPF) problem, which is considered a non-linear, non-smooth, multimodal optimization problem in power system operation. The total generator fuel cost is regarded as the objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm that makes use of various crossover techniques normally applied in Real-Coded Genetic Algorithms (RCGA). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost function, namely a quadratic cost curve, a piecewise quadratic cost curve, and a quadratic cost curve superimposed by a sine component. These three cost curves represent the generator fuel cost function of a simplified model and the more accurate models of a combined-cycle generating unit and of a thermal unit with the valve-point loading effect, respectively. The OPF solutions obtained by the proposed method and by Pure Evolutionary Programming (PEP) are observed and compared. The simulation results indicate that IEP requires less computing time than PEP and gives better solutions in some cases. Moreover, the influences of important IEP parameters on the OPF solution are described in detail.

  10. Exploration of faint absorption bands in the reflectance spectra of the asteroids by method of optimal smoothing: Vestoids

    NASA Astrophysics Data System (ADS)

    Shestopalov, D. I.; McFadden, L. A.; Golubeva, L. F.

    2007-04-01

    An optimization method for smoothing noisy spectra was developed to investigate faint absorption bands in the visual region of asteroid reflectance spectra and the compositional information derived from their analysis. The smoothing algorithm is called "optimal" because it determines the best running box size to separate weak absorption bands from the noise. The method was tested for its sensitivity to identifying false features in the smoothed spectrum, and its correctness in forecasting real absorption bands was tested with artificial spectra simulating asteroid reflectance spectra. After validating the method, we optimally smoothed 22 vestoid spectra from SMASS1 [Xu, Sh., Binzel, R.P., Burbine, T.H., Bus, S.J., 1995. Icarus 115, 1-35]. We show that the resulting bands are not telluric features. Interpretation of the absorption bands in the asteroid spectra was based on the spectral properties of both terrestrial and meteorite pyroxenes. The bands located near 480, 505, 530, and 550 nm are assigned to spin-forbidden crystal field bands of ferrous iron, whereas the bands near 570, 600, and 650 nm are attributed to crystal field bands of trivalent chromium and/or ferric iron in low-calcium pyroxenes on the asteroids' surface. While not measured by microprobe analysis, Fe³⁺ site occupancy can be measured with Mössbauer spectroscopy and is seen in trace amounts in pyroxenes. We believe that trace amounts of Fe³⁺ on vestoid surfaces may be due to oxidation from impacts by icy bodies. If that is the case, they should be ubiquitous in the asteroid belt wherever pyroxene absorptions are found. The pyroxene composition of four asteroids of our set is determined from the positions of the absorption bands at 505 and 1000 nm, implying that orthopyroxenes over the whole range of ferruginosity can exist on the vestoid surfaces. For the present we cannot unambiguously interpret the faint absorption bands seen in the spectra of 4005 Dyagilev, 4038 Kristina, 4147 Lennon, and 5143 Heracles. Probably there are other spectrally active materials along with pyroxenes on the surfaces of these asteroids.
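
    A sketch of an "optimal" running-box smoother in this spirit: the box size is grown for as long as the removed component still looks like white noise, which is one plausible reading of the selection criterion (the authors' exact rule may differ, and the spectrum below is synthetic):

```python
import numpy as np

def optimal_box_smooth(y, boxes=range(3, 41, 2)):
    chosen = boxes[0]
    best = np.convolve(y, np.ones(chosen)/chosen, mode='same')
    for w in boxes:
        smooth = np.convolve(y, np.ones(w)/w, mode='same')
        resid = y - smooth
        # lag-1 autocorrelation near zero: residual still consistent with white noise
        r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
        if abs(r1) < 2/np.sqrt(y.size):
            chosen, best = w, smooth
    return chosen, best

x = np.linspace(400, 700, 300)                       # wavelength, nm
spec = 1 - 0.05*np.exp(-0.5*((x - 505)/8)**2) \
       + np.random.default_rng(8).normal(0, 0.01, 300)   # faint 505 nm band plus noise
w_opt, spec_smooth = optimal_box_smooth(spec)
print("chosen box size:", w_opt)
```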

  11. Smooth halos in the cosmic web

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaite, José, E-mail: jose.gaite@upm.es

    Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economics and with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in cold dark matter N-body simulations. These entropic measures provide us with a quantitative description of the transition from the small scales, portrayed as a distribution of halos, to the larger scales, portrayed as a cosmic web, and therefore allow us to assign definite sizes to halos. However, these 'smoothness sizes' have no direct relation to the virial radii. Finally, we discuss the influence of N-body discreteness parameters on smoothness.

  12. Experimental study of ERT monitoring ability to measure solute dispersion.

    PubMed

    Lekmine, Grégory; Pessel, Marc; Auradou, Harold

    2012-01-01

    This paper reports experimental measurements performed to test the ability of electrical resistivity tomography (ERT) imaging to provide quantitative information about transport parameters in porous media such as the dispersivity α, the mixing front velocity u, and the retardation factor R(f) associated with the sorption or trapping of the tracers in the pore structure. The flow experiments are performed in a homogeneous porous column placed between two vertical sets of electrodes. Ionic and dyed tracers are injected from the bottom of the porous medium over its full width. Under such conditions, the mixing front is homogeneous in the transverse direction and shows an S-shaped variation in the flow direction. The transport parameters are inferred from the variation of the concentration curves and are compared with data obtained from video analysis of the dyed tracer front. The variations of the transport parameters obtained from a Gauss-Newton inversion applied to smoothness-constrained least squares are studied in detail. While u and R(f) show relatively little dependence on the inversion procedure, α is strongly dependent on the choice of the inversion parameters. Comparison with the video observations allows for the optimization of the parameters; these parameters are found to be robust with respect to changes in the flow condition and conductivity contrast. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.

  13. MIND Demons: Symmetric Diffeomorphic Deformable Registration of MR and CT for Image-Guided Spine Surgery.

    PubMed

    Reaungamornrat, Sureerat; De Silva, Tharindu; Uneri, Ali; Vogt, Sebastian; Kleinszig, Gerhard; Khanna, Akhil J; Wolinsky, Jean-Paul; Prince, Jerry L; Siewerdsen, Jeffrey H

    2016-11-01

    Intraoperative localization of target anatomy and critical structures defined in preoperative MR/CT images can be achieved through the use of multimodality deformable registration. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality-independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. The method, called MIND Demons, finds a deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the integrated velocity fields, a modality-insensitive similarity function suitable to multimodality images, and smoothness on the diffeomorphisms themselves. Direct optimization without relying on the exponential map and stationary velocity field approximation used in conventional diffeomorphic Demons is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, normalized MI (NMI) Demons, and MIND with a diffusion-based registration method (MIND-elastic). The method yielded sub-voxel invertibility (0.008 mm) and nonzero-positive Jacobian determinants. It also showed improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.7 mm compared to 11.3, 3.1, 5.6, and 2.4 mm for MI FFD, LMI FFD, NMI Demons, and MIND-elastic methods, respectively. Validation in clinical studies demonstrated realistic deformations with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine.

  14. MIND Demons: Symmetric Diffeomorphic Deformable Registration of MR and CT for Image-Guided Spine Surgery

    PubMed Central

    Reaungamornrat, Sureerat; De Silva, Tharindu; Uneri, Ali; Vogt, Sebastian; Kleinszig, Gerhard; Khanna, Akhil J; Wolinsky, Jean-Paul; Prince, Jerry L.

    2016-01-01

    Intraoperative localization of target anatomy and critical structures defined in preoperative MR/CT images can be achieved through the use of multimodality deformable registration. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality-independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. The method, called MIND Demons, finds a deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the integrated velocity fields, a modality-insensitive similarity function suitable to multimodality images, and smoothness on the diffeomorphisms themselves. Direct optimization without relying on the exponential map and stationary velocity field approximation used in conventional diffeomorphic Demons is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, normalized MI (NMI) Demons, and MIND with a diffusion-based registration method (MIND-elastic). The method yielded sub-voxel invertibility (0.008 mm) and nonzero-positive Jacobian determinants. It also showed improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.7 mm compared to 11.3, 3.1, 5.6, and 2.4 mm for MI FFD, LMI FFD, NMI Demons, and MIND-elastic methods, respectively. Validation in clinical studies demonstrated realistic deformations with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. PMID:27295656

  15. Global Patch Matching

    NASA Astrophysics Data System (ADS)

    Huang, X.; Hu, K.; Ling, X.; Zhang, Y.; Lu, Z.; Zhou, G.

    2017-09-01

    This paper introduces a novel global patch matching method that focuses on removing fronto-parallel bias and obtaining continuous smooth surfaces, under the assumption that the scenes covered by the stereo pairs are piecewise continuous. First, the simple linear iterative clustering (SLIC) method is used to segment the base image into a series of patches. Then, a global energy function consisting of a data term and a smoothness term is built on the patches. The data term is the second-order Taylor expansion of the correlation coefficients, and the smoothness term is built by combining connectivity constraints and coplanarity constraints. We rewrite the global energy function as a quadratic matrix function and use least-squares methods to obtain the optimal solution. Experiments on the Adirondack and Motorcycle stereo pairs of the Middlebury benchmark show that the proposed method removes fronto-parallel bias effectively and produces continuous smooth surfaces.
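
    Because the energy is quadratic, the minimization reduces to a regularized linear least-squares solve. A minimal sketch under assumed names (A, b for the linearized data term; L for the stacked smoothness constraints; lam for their relative weight), not the paper's exact matrices:

        import numpy as np

        # Minimize E(x) = ||A x - b||^2 + lam * ||L x||^2 via the normal
        # equations; x stacks the per-patch plane parameters.
        def solve_quadratic_energy(A, b, L, lam):
            lhs = A.T @ A + lam * (L.T @ L)
            rhs = A.T @ b
            return np.linalg.solve(lhs, rhs)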

  16. Experimental tests of a superposition hypothesis to explain the relationship between the vestibuloocular reflex and smooth pursuit during horizontal combined eye-head tracking in humans

    NASA Technical Reports Server (NTRS)

    Huebner, W. P.; Leigh, R. J.; Seidman, S. H.; Thomas, C. W.; Billian, C.; DiScenna, A. O.; Dell'Osso, L. F.

    1992-01-01

    1. We used a modeling approach to test the hypothesis that, in humans, the smooth pursuit (SP) system provides the primary signal for cancelling the vestibuloocular reflex (VOR) during combined eye-head tracking (CEHT) of a target moving smoothly in the horizontal plane. Separate models for SP and the VOR were developed. The optimal values of parameters of the two models were calculated using measured responses of four subjects to trials of SP and the visually enhanced VOR. After optimal parameter values were specified, each model generated waveforms that accurately reflected the subjects' responses to SP and vestibular stimuli. The models were then combined into a CEHT model wherein the final eye movement command signal was generated as the linear summation of the signals from the SP and VOR pathways. 2. The SP-VOR superposition hypothesis was tested using two types of CEHT stimuli, both of which involved passive rotation of subjects in a vestibular chair. The first stimulus consisted of a "chair brake" or sudden stop of the subject's head during CEHT; the visual target continued to move. The second stimulus consisted of a sudden change from the visually enhanced VOR to CEHT ("delayed target onset" paradigm); as the vestibular chair rotated past the angular position of the stationary visual stimulus, the latter started to move in synchrony with the chair. Data collected during experiments that employed these stimuli were compared quantitatively with predictions made by the CEHT model. 3. During CEHT, when the chair was suddenly and unexpectedly stopped, the eye promptly began to move in the orbit to track the moving target. Initially, gaze velocity did not completely match target velocity, however; this finally occurred approximately 100 ms after the brake onset. The model did predict the prompt onset of eye-in-orbit motion after the brake, but it did not predict that gaze velocity would initially be only approximately 70% of target velocity. One possible explanation for this discrepancy is that VOR gain can be dynamically modulated and, during sustained CEHT, it may assume a lower value. Consequently, during CEHT, a smaller-amplitude SP signal would be needed to cancel the lower-gain VOR. This reduction of the SP signal could account for the attenuated tracking response observed immediately after the brake. We found evidence for the dynamic modulation of VOR gain by noting differences in responses to the onset and offset of head rotation in trials of the visually enhanced VOR.(ABSTRACT TRUNCATED AT 400 WORDS).

  17. A Low Cross-Polarization Smooth-Walled Horn with Improved Bandwidth

    NASA Technical Reports Server (NTRS)

    Zeng, Lingzhen; Bennette, Charles L.; Chuss, David T.; Wollack, Edward J.

    2009-01-01

    Corrugated feed horns offer excellent beam symmetry, main beam efficiency, and cross-polar response over wide bandwidths, but can be challenging to fabricate. An easier-to-manufacture smooth-walled feed that approximates these properties over a finite bandwidth is explored. The design, optimization, and measurement of a monotonically profiled, smooth-walled scalar feedhorn with a diffraction-limited approximately 14° FWHM beam are presented. The feed was demonstrated to have low cross polarization (<-30 dB) across the frequency range 33-45 GHz (30% fractional bandwidth). A power reflection below -28 dB was measured across the band.

  18. Smoothing of climate time series revisited

    NASA Astrophysics Data System (ADS)

    Mann, Michael E.

    2008-08-01

    We present an easily implemented method for smoothing climate time series, generalizing upon an approach previously described by Mann (2004). The method adaptively weights the three lowest-order time series boundary constraints to optimize the fit with the raw time series. We apply the method to the instrumental global mean temperature series from 1850-2007 and to various surrogate global mean temperature series from 1850-2100 derived from the CMIP3 multimodel intercomparison project. These applications demonstrate that the adaptive method systematically outperforms certain widely used default smoothing methods, and is more likely to yield accurate assessments of long-term warming trends.
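
    A minimal sketch of the idea, with two stated simplifications: the three boundary constraints are implemented as padding rules (pad with the boundary mean, reflect, or reflect-and-invert, corresponding loosely to minimum norm, minimum slope, and minimum roughness), and the sketch selects the single best constraint by mean-squared misfit rather than adaptively weighting all three as the paper does. The cutoff and filter order are illustrative.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def lowpass(x, wn=0.2, order=4):
            b, a = butter(order, wn)      # wn is a fraction of Nyquist
            return filtfilt(b, a, x)

        def pad_series(x, mode, npad):
            if mode == "mean":            # ~ minimum norm
                left = np.full(npad, x[:npad].mean())
                right = np.full(npad, x[-npad:].mean())
            elif mode == "reflect":       # ~ minimum slope
                left, right = x[npad:0:-1], x[-2:-npad - 2:-1]
            else:                         # "flip": ~ minimum roughness
                left = 2 * x[0] - x[npad:0:-1]
                right = 2 * x[-1] - x[-2:-npad - 2:-1]
            return np.concatenate([left, x, right])

        def boundary_constrained_smooth(x, npad=10):
            # Keep the boundary constraint whose smooth minimizes the misfit.
            fits = {}
            for mode in ("mean", "reflect", "flip"):
                xs = lowpass(pad_series(x, mode, npad))[npad:-npad]
                fits[np.mean((xs - x) ** 2)] = xs
            return fits[min(fits)]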

  19. Accurate interlaminar stress recovery from finite element analysis

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Riggs, H. Ronald

    1994-01-01

    The accuracy and robustness of a two-dimensional smoothing methodology are examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing strains and first strain gradients of superior accuracy. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of the equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply supported rectangular plate under a doubly sinusoidal load, which has an exact analytic solution that serves as a measure of the goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.

  20. Optimal smoothing length scale for actuator line models of wind turbine blades based on Gaussian body force distribution: Wind energy, actuator line model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez-Tossas, L. A.; Churchfield, M. J.; Meneveau, C.

    The actuator line model (ALM) is a commonly used method to represent lifting surfaces such as wind turbine blades within large-eddy simulations (LES). In the ALM, the lift and drag forces are replaced by an imposed body force that is typically smoothed over several grid points using a Gaussian kernel with some prescribed smoothing width ε. To date, the choice of ε has most often been based on numerical considerations related to the grid spacing used in LES. However, especially for finely resolved LES with grid spacings on the order of or smaller than the chord length of the blade, the best choice of ε is not known. In this work, a theoretical approach is followed to determine the most suitable value of ε, based on an analytical solution to the linearized inviscid flow response to a Gaussian force. We find that the optimal smoothing width ε_opt is on the order of 14%-25% of the chord length of the blade, and the center of force is located at about 13%-26% downstream of the leading edge of the blade for the cases considered. These optimal values do not depend on angle of attack and depend only weakly on the type of lifting surface. It is then shown that an even more realistic velocity field can be induced by a 2-D elliptical Gaussian lift-force kernel. Some results are also provided regarding drag force representation.
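
    For context, the standard isotropic ALM kernel is a normalized 3-D Gaussian, and the paper's finding amounts to setting its width from the chord rather than from the grid spacing. A minimal sketch with illustrative names (the elliptical kernel variant is not shown):

        import numpy as np

        # Normalized 3-D Gaussian regularization kernel; integrates to 1 over R^3.
        def gaussian_kernel(r, eps):
            return np.exp(-(r / eps) ** 2) / (eps ** 3 * np.pi ** 1.5)

        # Spread a blade-element force F (3-vector) at point p over grid points
        # X (N x 3 array), with eps taken as ~20% of chord per the result above
        # rather than as a multiple of the grid spacing.
        def spread_force(F, p, X, chord, frac=0.2):
            eps = frac * chord
            r = np.linalg.norm(X - p, axis=1)
            w = gaussian_kernel(r, eps)
            return np.outer(w, F)        # body-force density at each grid point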

  1. An algorithm for surface smoothing with rational splines

    NASA Technical Reports Server (NTRS)

    Schiess, James R.

    1987-01-01

    Discussed is an algorithm for smoothing surfaces with spline functions containing tension parameters. The bivariate spline functions used are tensor products of univariate rational-spline functions. A distinct tension parameter corresponds to each rectangular strip defined by a pair of consecutive spline knots along either axis. Equations are derived for writing the bivariate rational spline in terms of functions and derivatives at the knots. Estimates of these values are obtained via weighted least squares subject to continuity constraints at the knots. The algorithm is illustrated on a set of terrain elevation data.

  2. The relationship between mean anomaly block sizes and spherical harmonic representations. [of earth gravity

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.

    1977-01-01

    The frequently used rule specifying the relationship between a mean gravity anomaly in a block whose side length is theta degrees and a spherical harmonic representation of these data to degree l-bar is examined in light of the smoothing parameter used by Pellinen (1966). It is found that if the smoothing parameter is not considered, mean anomalies computed from potential coefficients can be in error by about 30% of the rms anomaly value. It is suggested that the above-mentioned rule should be considered only a crude approximation.
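
    For orientation, a commonly quoted form of the Pellinen smoothing factor, which attenuates degree-n potential coefficients when averaging over a spherical cap of radius ψ₀ (the cap being the usual equal-area stand-in for a θ×θ block; this formula is supplied here for reference and is not quoted from the record above):

        \beta_n = \frac{P_{n-1}(\cos\psi_0) - P_{n+1}(\cos\psi_0)}{(2n+1)\,(1-\cos\psi_0)}

    where the P_n are Legendre polynomials; the smoothed coefficients are the full-resolution coefficients multiplied by β_n.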

  3. Trajectory fitting in function space with application to analytic modeling of surfaces

    NASA Technical Reports Server (NTRS)

    Barger, Raymond L.

    1992-01-01

    A theory for representing a parameter-dependent function as a function trajectory is described. Additionally, a theory for determining a piecewise analytic fit to the trajectory is described. An example is given that illustrates the application of the theory to generating a smooth surface through a discrete set of input cross-section shapes. A simple procedure for smoothing in the parameter direction is discussed, and a computed example is given. Application of the theory to aerodynamic surface modeling is demonstrated by applying it to a blended wing-fuselage surface.

  4. Expert system for generating initial layouts of zoom systems with multiple moving lens groups

    NASA Astrophysics Data System (ADS)

    Cheng, Xuemin; Wang, Yongtian; Hao, Qun; Sasián, José M.

    2005-01-01

    An expert system is developed for the automatic generation of initial layouts for the design of zoom systems with multiple moving lens groups. The Gaussian parameters of the zoom system are optimized using the damped-least-squares method to achieve smooth zoom cam curves, with the f-number of each lens group in the zoom system constrained to a rational value. Then each lens group is selected automatically from a database according to its range of f-number, field of view, and magnification ratio as it is used in the zoom system. The lens group database is established from the results of analyzing thousands of zoom lens patents. Design examples are given, which show that the scheme is a practical approach to generate starting points for zoom lens design.
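
    The damped-least-squares step named above is the classical Levenberg-Marquardt-style update. A minimal sketch under assumed names (J the Jacobian of the merit-function residuals r with respect to the Gaussian parameters, damping the damping factor), not the expert system's actual optimizer:

        import numpy as np

        # One damped-least-squares step: solve (J^T J + damping * I) dx = -J^T r.
        def dls_step(J, r, damping):
            n = J.shape[1]
            return np.linalg.solve(J.T @ J + damping * np.eye(n), -J.T @ r)

        # Typical outer loop: x += dls_step(J(x), r(x), damping); decrease the
        # damping when the merit function improves, increase it otherwise.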

  5. Development of RF sputtered chromium oxide coating for wear application

    NASA Technical Reports Server (NTRS)

    Bhushan, B.

    1979-01-01

    The radio frequency sputtering technique was used to deposit a hard, refractory chromium oxide coating on an Inconel X-750 foil 0.1 mm thick. Optimized sputtering parameters for a smooth and adherent coating were found to be as follows: target-to-substrate spacing, 41.3 mm; argon pressure, 5-10 mTorr; total power to the sputtering module, 400 W (voltage at the target, 1600 V); and a water-cooled substrate. The coating on the annealed foil was more adherent than that on the heat-treated foil. Substrate biasing during the sputter deposition of Cr2O3 adversely affected adherence by removing naturally occurring interfacial oxide layers. The deposited coatings were amorphous and oxygen deficient. Since amorphous materials are extremely hard, this structure was considered desirable.

  6. Cosmological Parameter Estimation Using the Genus Amplitude—Application to Mock Galaxy Catalogs

    NASA Astrophysics Data System (ADS)

    Appleby, Stephen; Park, Changbom; Hong, Sungwook E.; Kim, Juhan

    2018-01-01

    We study the topology of the matter density field in two-dimensional slices and consider how we can use the amplitude A of the genus for cosmological parameter estimation. Using the latest Horizon Run 4 simulation data, we calculate the genus of the smoothed density field constructed from light cone mock galaxy catalogs. Information can be extracted from the amplitude of the genus by considering both its redshift evolution and magnitude. The constancy of the genus amplitude with redshift can be used as a standard population, from which we derive constraints on the equation of state of dark energy w_de: by measuring A at z ∼ 0.1 and z ∼ 1, we can place an order Δw_de ∼ O(15%) constraint on w_de. By comparing A to its Gaussian expectation value, we can potentially derive an additional stringent constraint on the matter density, ΔΩ_mat ∼ 0.01. We discuss the primary sources of contamination associated with the two measurements: redshift space distortion (RSD) and shot noise. With accurate knowledge of galaxy bias, we can successfully remove the effect of RSD, and the combined effect of shot noise and nonlinear gravitational evolution is suppressed by smoothing over suitably large scales R_G ≥ 15 Mpc/h. Without knowledge of the bias, we discuss how joint measurements of the two- and three-dimensional genus can be used to constrain the growth factor β = f/b. The method can be applied optimally to redshift slices of a galaxy distribution generated using the drop-off technique.

  7. Electron-beam lithography with character projection exposure for throughput enhancement with line-edge quality optimization

    NASA Astrophysics Data System (ADS)

    Ikeno, Rimon; Maruyama, Satoshi; Mita, Yoshio; Ikeda, Makoto; Asada, Kunihiro

    2016-03-01

    Among various electron-beam lithography (EBL) techniques, the variable-shaped beam (VSB) and character projection (CP) methods have attracted many EBL users for their high throughput, but they are considered better suited to small-featured VLSI fabrication with regularly arranged layouts such as standard-cell logic and memory arrays. On the other hand, non-VLSI applications such as photonics, MEMS, and MOEMS have not fully utilized the benefits of the CP method, due to their wide variety of layout patterns. In addition, the stepwise edge shapes produced by the VSB method often cause intolerable edge roughness, degrading device characteristics relative to the performance intended with smooth edges. We proposed an overall EBL methodology, applicable to a wide variety of EBL applications, that utilizes the VSB and CP methods. Its key idea is a layout data conversion algorithm that decomposes curved or oblique edges of arbitrary layout patterns into CP shots. We expect a significant reduction in EB shot count for CP-bordered exposure data compared with the corresponding VSB-alone conversion result. Several CP conversion parameters are used to optimize EB exposure throughput, edge quality, and resultant device characteristics. We demonstrated our methodology using the leading-edge VSB/CP EBL tool, ADVANTEST F7000S-VD02, with a high-resolution hydrogen silsesquioxane (HSQ) resist. Through experiments on curved and oblique edge lithography under various data conversion conditions, we determined the correspondence between the conversion parameters and the resultant edge roughness. These results will serve as fundamental data for further enhancement of our EBL strategy for optimized EB exposure.

  8. Software Performs Complex Design Analysis

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Designers use computational fluid dynamics (CFD) to gain greater understanding of the fluid flow phenomena involved in components being designed. They also use finite element analysis (FEA) as a tool to help gain greater understanding of the structural response of components to loads, stresses and strains, and the prediction of failure modes. Automated CFD and FEA engineering design has centered on shape optimization, which has been hindered by two major problems: 1) inadequate shape parameterization algorithms, and 2) inadequate algorithms for CFD and FEA grid modification. Working with software engineers at Stennis Space Center, a NASA commercial partner, Optimal Solutions Software LLC, was able to utilize its arbitrary shape deformation (ASD) capability, a major advancement in solving these two problems, to optimize the shapes of complex pipe components that transport highly sensitive fluids. The ASD technology solves the problem of inadequate shape parameterization algorithms by allowing CFD designers to freely create their own shape parameters, thereby eliminating the restriction of only being able to use the computer-aided design (CAD) parameters. The problem of inadequate algorithms for CFD grid modification is solved because the new software performs a smooth volumetric deformation, eliminating the extremely costly process of remeshing the grid for every desired shape change. The program can perform a design change in a markedly reduced amount of time compared with the traditional process, in which the designer returns to the CAD model to reshape and then remesh the shapes, something that has been known to take hours, days, even weeks or months, depending upon the size of the model.

  9. Neutron Tomography of a Fuel Cell: Statistical Learning Implementation of a Penalized Likelihood Method

    NASA Astrophysics Data System (ADS)

    Coakley, Kevin J.; Vecchia, Dominic F.; Hussey, Daniel S.; Jacobson, David L.

    2013-10-01

    At the NIST Neutron Imaging Facility, we collect neutron projection data for both the dry and wet states of a Proton-Exchange-Membrane (PEM) fuel cell. Transmitted thermal neutrons captured in a scintillator doped with lithium-6 produce scintillation light that is detected by an amorphous silicon detector. Based on joint analysis of the dry and wet state projection data, we reconstruct a residual neutron attenuation image with a Penalized Likelihood method with an edge-preserving Huber penalty function that has two parameters that control how well jumps in the reconstruction are preserved and how well noisy fluctuations are smoothed out. The choice of these parameters greatly influences the resulting reconstruction. We present a data-driven method that objectively selects these parameters, and study its performance for both simulated and experimental data. Before reconstruction, we transform the projection data so that the variance-to-mean ratio is approximately one. For both simulated and measured projection data, the Penalized Likelihood method reconstruction is visually sharper than a reconstruction yielded by a standard Filtered Back Projection method. In an idealized simulation experiment, we demonstrate that the cross validation procedure selects regularization parameters that yield a reconstruction that is nearly optimal according to a root-mean-square prediction error criterion.
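
    The edge-preserving Huber penalty mentioned above has a standard closed form: quadratic below a threshold, so noise is smoothed, and linear above it, so jumps are preserved. A minimal sketch (variable names illustrative; the NIST implementation details are not reproduced here):

        import numpy as np

        # Huber penalty: rho(t) = t^2 / 2                 for |t| <= delta
        #                rho(t) = delta * (|t| - delta/2)  otherwise
        def huber(t, delta):
            t = np.asarray(t, dtype=float)
            quad = 0.5 * t ** 2
            lin = delta * (np.abs(t) - 0.5 * delta)
            return np.where(np.abs(t) <= delta, quad, lin)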

  10. Time-Dependent Computed Tomographic Perfusion Thresholds for Patients With Acute Ischemic Stroke.

    PubMed

    d'Esterre, Christopher D; Boesen, Mari E; Ahn, Seong Hwan; Pordeli, Pooneh; Najm, Mohamed; Minhas, Priyanka; Davari, Paniz; Fainardi, Enrico; Rubiera, Marta; Khaw, Alexander V; Zini, Andrea; Frayne, Richard; Hill, Michael D; Demchuk, Andrew M; Sajobi, Tolulope T; Forkert, Nils D; Goyal, Mayank; Lee, Ting Y; Menon, Bijoy K

    2015-12-01

    Among patients with acute ischemic stroke, we determine computed tomographic perfusion (CTP) thresholds associated with follow-up infarction at different stroke onset-to-CTP and CTP-to-reperfusion times. Acute ischemic stroke patients with occlusion on computed tomographic angiography were acutely imaged with CTP. Noncontrast computed tomography and magnetic resonance diffusion-weighted imaging between 24 and 48 hours were used to delineate follow-up infarction. Reperfusion was assessed on conventional angiogram or 4-hour repeat computed tomographic angiography. Tmax, cerebral blood flow, and cerebral blood volume derived from delay-insensitive CTP postprocessing were analyzed using receiver operating characteristic curves to derive optimal thresholds for combined patient data (pooled analysis) and individual patients (patient-level analysis) based on time from stroke onset-to-CTP and CTP-to-reperfusion. One-way ANOVA and locally weighted scatterplot smoothing regression were used to test whether the derived optimal CTP thresholds differed by time. One hundred and thirty-two patients were included. Tmax thresholds of >16.2 and >15.8 s and absolute cerebral blood flow thresholds of <8.9 and <7.4 mL·min(-1)·100 g(-1) were associated with infarct if reperfused <90 min from CTP with onset <180 min. The discriminative ability of cerebral blood volume was modest. No statistically significant relationship was noted between stroke onset-to-CTP time and the optimal CTP thresholds for all parameters based on discrete or continuous time analysis (P>0.05). A statistically significant relationship existed between CTP-to-reperfusion time and the optimal thresholds for cerebral blood flow (P<0.001; r=0.59 and 0.77 for gray and white matter, respectively) and Tmax (P<0.001; r=-0.68 and -0.60 for gray and white matter, respectively) parameters. Optimal CTP thresholds associated with follow-up infarction depend on time from imaging to reperfusion. © 2015 American Heart Association, Inc.

  11. The sagittarius tidal stream and the shape of the galactic stellar halo

    NASA Astrophysics Data System (ADS)

    Newby, Matthew T.

    The stellar halo that surrounds our Galaxy contains clues to understanding galaxy formation, cosmology, stellar evolution, and the nature of dark matter. Gravitationally disrupted dwarf galaxies form tidal streams, which roughly trace orbits through the Galactic halo. The Sagittarius (Sgr) dwarf tidal debris is the most dominant of these streams, and its properties place important constraints on the distribution of mass (including dark matter) in the Galaxy. Stars not associated with substructures form the "smooth" component of the stellar halo, the origin of which is still under investigation. Characterizing halo substructures such as the Sgr stream and the smooth halo provides valuable information on the formation history and evolution of our Galaxy, and places constraints on cosmological models. This thesis is primarily concerned with characterizing the 3-dimensional stellar densities of the Sgr tidal debris system and the smooth stellar halo, using data from the Sloan Digital Sky Survey (SDSS). F turnoff stars are used to infer distances, as they are relatively bright, numerous, and distributed about a single intrinsic brightness (magnitude). The inherent spread in brightness of these stars is overcome through the use of the recently developed technique of statistical photometric parallax, in which the bulk properties of a stellar population are used to create a probability distribution for a given star's distance. This was used to build a spatial density model for the smooth stellar halo and tidal streams. The free parameters in this model are then fit to SDSS data with a maximum likelihood technique, and the parameters are optimized by advanced computational methods. Several computing platforms are used in this study, including the RPI SUR Blue Gene and the MilkyWay@home volunteer computing project. Fits to the Sgr stream in 18 SDSS data stripes were performed, and a continuous density profile was found for the major Sgr stream. The stellar halo is found to be strongly oblate (flattening parameter q = 0.53). A catalog of stars consistent with this density profile is produced as a template for matching future disruption models. The results of this analysis favor a description of the Sgr debris system that includes more than one dwarf galaxy progenitor, with the major streams above and below the Galactic disk being separate substructures. Preliminary results for the minor tidal stream characterizations are presented and discussed. Additionally, a more robust characterization of halo turnoff star brightnesses is performed, and it is found that increasing color errors with distance result in a previously unaccounted-for incompleteness in star counts as the SDSS magnitude limit is approached. These corrections are currently being implemented on MilkyWay@home.

  12. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains, including finance, engineering, economics, and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows different polynomial regression models to be activated smoothly or abruptly. The model parameters are estimated by the maximum likelihood method, performed by a dedicated Expectation-Maximization (EM) algorithm. The M-step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real-world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
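
    A minimal sketch of the forward (generative) side of such a model, under stated assumptions: K polynomial regimes mixed by time-varying logistic gates, where a steep gate slope gives abrupt switching and a shallow one gives smooth transitions. Parameter estimation would be done by the dedicated EM algorithm described above, which is not reproduced here; names and shapes are illustrative.

        import numpy as np

        def softmax(Z):
            Z = Z - Z.max(axis=1, keepdims=True)
            E = np.exp(Z)
            return E / E.sum(axis=1, keepdims=True)

        def rhlp_predict(t, betas, gates):
            # t: (N,) time points; betas: (K, d+1) polynomial coefficients per
            # regime; gates: (K, 2) logistic parameters [w0, w1] per regime.
            V = np.vander(t, betas.shape[1], increasing=True)  # (N, d+1) basis
            regimes = V @ betas.T                              # (N, K) regime fits
            logits = gates[:, 0][None, :] + np.outer(t, gates[:, 1])
            pi = softmax(logits)                               # gating weights
            return (pi * regimes).sum(axis=1)                  # mixture mean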

  13. Surface effect investigation on multipactor in microwave components using the EM-PIC method

    NASA Astrophysics Data System (ADS)

    Li, Yun; Ye, Ming; He, Yong-Ning; Cui, Wan-Zhao; Wang, Dan

    2017-11-01

    Multipactor poses a great risk to microwave components in space, and its accurate, controllable suppression is still lacking. To evaluate the effect of the secondary electron emission (SEE) of arbitrary surface states on multipactor, metal samples fabricated with ideal smoothness, random roughness, and micro-structures on the surface are investigated through SEE experiments and multipactor simulations. An accurate quantitative relationship between the SEE parameters and the multipactor discharge threshold in practical components has been established through Electromagnetic Particle-In-Cell (EM-PIC) simulation. Simulation results for microwave components, including an impedance transformer and a coaxial filter, exhibit an intuitive correlation between the critical SEE parameters, varied due to different surface states, and the multipactor thresholds. It is demonstrated that it is the surface micro-structures with certain depth and morphology, rather than the random surface reliefs, that determine the average yield of secondaries. Both the random surface reliefs and the micro-structures have a scattering effect on SEE, and the yield tends to be similar across different elevation angles of incident electrons. These findings hold great potential for optimizing and improving suppression technology without exhaustive tuning of the technological parameters.

  14. Effect of pulsed laser parameters on the corrosion limitation for electric connector coatings

    NASA Astrophysics Data System (ADS)

    Georges, C.; Semmar, N.; Boulmer-Leborgne, C.

    2006-12-01

    Materials used in electrical contact applications usually consist of multilayered compounds (e.g., a copper alloy electroplated with a nickel layer and finally with a gold layer). After the electro-deposition, micro-channels and pores within the gold layer allow undesirable corrosion of the underlying protective layers. In order to modify the gold-coating microstructure, a laser surface treatment was applied. By suppressing porosity and smoothing the surface, the laser treatment sealed the original open structure while preserving the low roughness needed for good electrical contact. Corrosion tests were carried out in humid synthetic air containing three polluting gases. SEM characterization of cross-sections was performed to estimate the gold melting depth and to observe the modifications of the gold structure obtained after laser treatment. The effects of the laser treatment were studied according to different surface parameters (roughness of the substrate and thickness of the gold layer) and different laser parameters (laser wavelength, laser fluence, pulse duration, and number of pulses). A thermokinetic model was used to understand the heating and melting mechanism of the multilayered coating and to optimize the process in terms of laser wavelength, energy, and interaction time.

  15. On Algorithms for Nonlinear Minimax and Min-Max-Min Problems and Their Efficiency

    DTIC Science & Technology

    2011-03-01

    ...balance the accuracy of the approximation with problem ill-conditioning. The simplest smoothing algorithm creates an accurate smooth approximating... Applications cited include sizing in electronic circuit boards (Chen & Fan, 1998), obstacle avoidance for robots (Kirjner-Neto & Polak, 1998), and optimal design centering.

  16. Modified Kneser-Ney Smoothing of n-Gram Models

    NASA Technical Reports Server (NTRS)

    James, Frankie

    2000-01-01

    This report examines a series of tests performed on variations of the modified Kneser-Ney smoothing model outlined in a study by Chen and Goodman. We explore several different ways of choosing and setting the discounting parameters, as well as the exclusion of singleton contexts at various levels of the model.
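
    For orientation, the discounting parameters in modified Kneser-Ney are usually initialized from counts of counts, following Chen and Goodman's estimates. A minimal sketch (n1..n4 are the numbers of n-grams occurring exactly 1..4 times at a given model level, all assumed nonzero):

        # Chen & Goodman's estimates for the three discounts used by
        # modified Kneser-Ney smoothing.
        def mkn_discounts(n1, n2, n3, n4):
            y = n1 / (n1 + 2 * n2)
            d1 = 1 - 2 * y * (n2 / n1)    # discount for count == 1
            d2 = 2 - 3 * y * (n3 / n2)    # discount for count == 2
            d3p = 3 - 4 * y * (n4 / n3)   # discount for count >= 3
            return d1, d2, d3p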

  17. Biomimetics of human movement: functional or aesthetic?

    PubMed

    Harris, Christopher M

    2009-09-01

    How should robotic or prosthetic arms be programmed to move? Copying human smooth movements is popular in synthetic systems, but what does this really achieve? We cannot address these biomimetic issues without a deep understanding of why natural movements are so stereotyped. In this article, we distinguish between 'functional' and 'aesthetic' biomimetics. Functional biomimetics requires insight into the problem that nature has solved and recognition that a similar problem exists in the synthetic system. In aesthetic biomimetics, nature is copied for its own sake and no insight is needed. We examine the popular minimum jerk (MJ) model that has often been used to generate smooth human-like point-to-point movements in synthetic arms. The MJ model was originally justified as maximizing 'smoothness'; however, it is also the limiting optimal trajectory for a wide range of cost functions for brief movements, including the minimum variance (MV) model, where smoothness is a by-product of optimizing the speed-accuracy trade-off imposed by proportional noise (PN: signal-dependent noise with the standard deviation proportional to mean). PN is unlikely to be dominant in synthetic systems, and the control objectives of natural movements (speed and accuracy) would not be optimized in synthetic systems by human-like movements. Thus, employing MJ or MV controllers in robotic arms is just aesthetic biomimetics. For prosthetic arms, the goal is aesthetic by definition, but it is still crucial to recognize that MV trajectories and PN are deeply embedded in the human motor system. Thus, PN arises at the neural level, as a recruitment strategy of motor units and probably optimizes motor neuron noise. Human reaching is under continuous adaptive control. For prosthetic devices that do not have this natural architecture, natural plasticity would drive the system towards unnatural movements. We propose that a truly neuromorphic system with parallel force generators (muscle fibres) and noisy drivers (motor neurons) would permit plasticity to adapt the control of a prosthetic limb towards human-like movement.
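
    The MJ trajectory discussed above has a simple closed form for point-to-point movements: the unique fifth-order polynomial with zero velocity and acceleration at both endpoints. A minimal sketch:

        import numpy as np

        # Minimum-jerk position profile x(t) from x0 to xf over duration T:
        # x = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5), with s = t / T.
        def minimum_jerk(x0, xf, T, n=100):
            t = np.linspace(0.0, T, n)
            s = t / T
            return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)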

  18. Influence of implantation on the electrochemical properties of smooth and porous TiN coatings for stimulation electrodes

    NASA Astrophysics Data System (ADS)

    Meijs, S.; Sørensen, C.; Sørensen, S.; Rechendorff, K.; Fjorback, M.; Rijkhoff, N. J. M.

    2016-04-01

    Objective. To determine whether changes in electrochemical properties of porous titanium nitride (TiN) electrodes as a function of time after implantation are different from those of smooth TiN electrodes. Approach. Eight smooth and eight porous TiN-coated electrodes were implanted in eight rats. Before implantation, voltage transients, cyclic voltammograms and impedance spectra were recorded in phosphate buffered saline (PBS). After implantation, these measurements were done weekly to investigate how smooth and porous electrodes were affected by implantation. Main results. The electrode capacitance of the porous TiN electrodes decreased more than the capacitance of the smooth electrodes due to acute implantation under fast measurement conditions (such as stimulation pulses). This indicates that protein adhesion presents a greater diffusion limitation for counter-ions for the porous than for the smooth electrodes. The changes in electrochemical properties during the implanted period were similar for smooth and porous TiN electrodes, indicating that cell adhesion poses a similar diffusion limitation for smooth and porous electrodes. Significance. This knowledge can be used to optimize the porous structure of the TiN film, so that the effect of protein adhesion on the electrochemical properties is diminished. Alternatively, an additional coating could be applied on the porous TiN that would prevent or minimize protein adhesion.

  19. Econometrics of inventory holding and shortage costs: the case of refined gasoline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krane, S.D.

    1985-01-01

    This thesis estimates a model of a firm's optimal inventory and production behavior in order to investigate the link between the role of inventories in the business cycle and the microeconomic incentives for holding stocks of finished goods. The goal is to estimate a set of structural cost function parameters that can be used to infer the optimal cyclical response of inventories and production to shocks in demand. To avoid problems associated with the use of value-based aggregate inventory data, an industry-level physical-unit data set for refined motor gasoline is examined. The Euler equations for a refiner's multiperiod decision problem are estimated using restrictions imposed by the rational expectations hypothesis. The model also embodies the fact that, in most periods, the level of shortages will be zero, and even when positive, the shortages are not directly observable in the data set. These two concerns lead us to use a generalized method of moments estimation technique on a functional form that resembles the formulation of a Tobit problem. The estimation results are disappointing; the model and data yield coefficient estimates incongruous with the cost function interpretations of the structural parameters. There is only superficial evidence that production smoothing is significant and that marginal inventory shortage costs increase at a faster rate than do marginal holding costs.

  20. Adaptive road crack detection system by pavement classification.

    PubMed

    Gavilán, Miguel; Balcones, David; Marcos, Oscar; Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Aliseda, Pedro; Yarza, Pedro; Amírola, Alejandro

    2011-01-01

    This paper presents a road distress detection system involving the phases needed to properly deal with fully automatic road distress assessment. A vehicle equipped with line scan cameras, laser illumination, and acquisition HW-SW is used to store the digital images that will be further processed to identify road cracks. Pre-processing is first carried out to both smooth the texture and enhance the linear features. Non-crack feature detection is then applied to mask areas of the images with joints, sealed cracks, and white painting, which usually generate false positive cracks. A seed-based approach is proposed to deal with road crack detection, combining Multiple Directional Non-Minimum Suppression (MDNMS) with a symmetry check. Seeds are linked by computing the paths with the lowest cost that meet the symmetry restrictions. The whole detection process involves the use of several parameters, and a correct setting becomes essential to get optimal results without manual intervention. A fully automatic approach based on a linear SVM-based classifier ensemble, able to distinguish between up to 10 different types of pavement that appear in Spanish roads, is proposed. The optimal feature vector includes different texture-based features, and the parameters are then tuned depending on the output provided by the classifier. Regarding non-crack feature detection, results show that the introduction of this module reduces the impact of false positives due to non-crack features by up to a factor of 2. In addition, the observed performance of the crack detection system is significantly boosted by adapting the parameters to the type of pavement.

  1. Adaptive Road Crack Detection System by Pavement Classification

    PubMed Central

    Gavilán, Miguel; Balcones, David; Marcos, Oscar; Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Aliseda, Pedro; Yarza, Pedro; Amírola, Alejandro

    2011-01-01

    This paper presents a road distress detection system involving the phases needed to properly deal with fully automatic road distress assessment. A vehicle equipped with line scan cameras, laser illumination, and acquisition HW-SW is used to store the digital images that will be further processed to identify road cracks. Pre-processing is first carried out to both smooth the texture and enhance the linear features. Non-crack feature detection is then applied to mask areas of the images with joints, sealed cracks, and white painting, which usually generate false positive cracks. A seed-based approach is proposed to deal with road crack detection, combining Multiple Directional Non-Minimum Suppression (MDNMS) with a symmetry check. Seeds are linked by computing the paths with the lowest cost that meet the symmetry restrictions. The whole detection process involves the use of several parameters, and a correct setting becomes essential to get optimal results without manual intervention. A fully automatic approach based on a linear SVM-based classifier ensemble, able to distinguish between up to 10 different types of pavement that appear in Spanish roads, is proposed. The optimal feature vector includes different texture-based features, and the parameters are then tuned depending on the output provided by the classifier. Regarding non-crack feature detection, results show that the introduction of this module reduces the impact of false positives due to non-crack features by up to a factor of 2. In addition, the observed performance of the crack detection system is significantly boosted by adapting the parameters to the type of pavement. PMID:22163717

  2. Fast three-dimensional inner volume excitations using parallel transmission and optimized k-space trajectories.

    PubMed

    Davids, Mathias; Schad, Lothar R; Wald, Lawrence L; Guérin, Bastien

    2016-10-01

    To design short parallel transmission (pTx) pulses for excitation of arbitrary three-dimensional (3D) magnetization patterns. We propose a joint optimization of the pTx radiofrequency (RF) and gradient waveforms for excitation of arbitrary 3D magnetization patterns. Our optimization of the gradient waveforms is based on the parameterization of k-space trajectories (3D shells, stack-of-spirals, and cross) using a small number of shape parameters that are well-suited for optimization. The resulting trajectories are smooth and sample k-space efficiently with few turns while using the gradient system at maximum performance. Within each iteration of the k-space trajectory optimization, we solve a small tip angle least-squares RF pulse design problem. Our RF pulse optimization framework was evaluated both in Bloch simulations and experiments on a 7T scanner with eight transmit channels. Using an optimized 3D cross (shells) trajectory, we were able to excite a cube shape (brain shape) with 3.4% (6.2%) normalized root-mean-square error in less than 5 ms using eight pTx channels and a clinical gradient system (Gmax = 40 mT/m, Smax = 150 T/m/s). This compared with 4.7% (41.2%) error for the unoptimized 3D cross (shells) trajectory. Incorporation of B0 robustness in the pulse design significantly altered the k-space trajectory solutions. Our joint gradient and RF optimization approach yields excellent excitation of 3D cube and brain shapes in less than 5 ms, which can be used for reduced field of view imaging and fat suppression in spectroscopy by excitation of the brain only. Magn Reson Med 76:1170-1182, 2016. © 2015 Wiley Periodicals, Inc.
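
    Within each trajectory iteration the abstract describes a small-tip-angle least-squares RF design. A minimal sketch of that inner solve under assumed names (A the stacked multi-channel small-tip system matrix for the current trajectory, m the target magnetization pattern, lam a Tikhonov weight), not the authors' actual solver:

        import numpy as np

        # Regularized small-tip-angle pulse design: minimize
        # ||A b - m||^2 + lam * ||b||^2 over the complex RF samples b.
        def design_rf(A, m, lam=1e-3):
            n = A.shape[1]
            lhs = A.conj().T @ A + lam * np.eye(n)
            return np.linalg.solve(lhs, A.conj().T @ m)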

  3. A Non-linear Geodetic Data Inversion Using ABIC for Slip Distribution on a Fault With an Unknown dip Angle

    NASA Astrophysics Data System (ADS)

    Fukahata, Y.; Wright, T. J.

    2006-12-01

    We developed a method of geodetic data inversion for slip distribution on a fault with an unknown dip angle. When the fault geometry is unknown, the problem of geodetic data inversion is non-linear. A common strategy for obtaining slip distribution is to first determine the fault geometry by minimizing the squared misfit under the assumption of a uniform slip on a rectangular fault, and then apply the usual linear inversion technique to estimate a slip distribution on the determined fault. It is not guaranteed, however, that the fault determined under the assumption of a uniform slip gives the best fault geometry for a spatially variable slip distribution. In addition, in obtaining a uniform slip fault model, we have to simultaneously determine the values of the nine mutually dependent parameters, which is a highly non-linear, complicated process. Although the inverse problem is non-linear for cases with unknown fault geometries, the non-linearity of the problems is actually weak when we can assume the fault surface to be flat. In particular, when a clear fault trace is observed on the Earth's surface after an earthquake, we can precisely estimate the strike and the location of the fault. In this case only the dip angle has large ambiguity. In geodetic data inversion we usually need to introduce smoothness constraints in order to compromise between the reciprocal requirements for model resolution and estimation errors in a natural way. Strictly speaking, the inverse problem with smoothness constraints is also non-linear, even if the fault geometry is known. This non-linearity has been resolved by introducing Akaike's Bayesian Information Criterion (ABIC), with which the optimal value of the relative weight of observed data to smoothness constraints is objectively determined. In this study, by using ABIC also to determine the optimal dip angle, we resolved the non-linearity of the inverse problem. We applied the method to the InSAR data of the 1995 Dinar, Turkey earthquake and obtained a much shallower dip angle than previously reported.

  4. Detection of the toughest: Pedestrian injury risk as a smooth function of age.

    PubMed

    Niebuhr, Tobias; Junge, Mirko

    2017-07-04

    Though it is common to refer to age-specific groups (e.g., children, adults, elderly), smooth trends conditional on age are largely ignored in the literature. The present study examines the pedestrian injury risk in full-frontal pedestrian-to-passenger-car accidents and incorporates age, in addition to collision speed and injury severity, as a plug-in parameter. Recent work introduced a model for pedestrian injury risk functions using explicit formulae with easily interpretable model parameters. This model is expanded by pedestrian age as another model parameter. Using the German In-Depth Accident Study (GIDAS) to obtain age-specific risk proportions, the model parameters are fitted to the raw data and then smoothed by broken-line regression. The approach supplies explicit probabilities for pedestrian injury risk conditional on pedestrian age, collision speed, and the injury severity under investigation. All results are consistent with each other in the sense that risks for more severe injuries are less probable than those for less severe injuries. As a side product, the approach indicates specific ages at which the risk behavior fundamentally changes. These threshold values can be interpreted as the most robust ages for pedestrians. The obtained age-wise risk functions can be aggregated and adapted to any population. The presented approach is formulated in such general terms that it can be directly used for other data sets or additional parameters, for example, the pedestrian's sex. Thus far, no other study using age as a plug-in parameter can be found.

  5. Fog Collection on Polyethylene Terephthalate (PET) Fibers: Influence of Cross Section and Surface Structure.

    PubMed

    Azad, M A K; Krause, Tobias; Danter, Leon; Baars, Albert; Koch, Kerstin; Barthlott, Wilhelm

    2017-06-06

    Fog-collecting meshes show great potential for securing a sustainable freshwater supply in certain arid regions. In most cases, the meshes are made of smooth hydrophilic fibers. Based on the study of plant surfaces, we analyzed fog collection using various polyethylene terephthalate (PET) fibers with different cross sections and surface structures, with the aim of developing optimized biomimetic fog collectors. Water droplet movement and the onset of dripping from fiber samples were compared. Fibers with round, oval, and rectangular cross sections with rounded edges showed higher fog-collection performance than those with other cross sections. However, other parameters, for example, width, surface structure, and wettability, also influenced the performance. The directional delivery of the collected fog droplets by wavy or v-shaped microgrooves on the surface of the fibers enhances the formation of a water film and their fog collection. A numerical simulation of the water droplet spreading behavior strongly supports these findings. Therefore, our study suggests the use of fibers with a round cross section, a microgrooved surface, and an optimized width for efficient fog collection.

  6. Seeing the unseen: Complete volcano deformation fields by recursive filtering of satellite radar interferograms

    NASA Astrophysics Data System (ADS)

    Gonzalez, Pablo J.

    2017-04-01

    Automatic interferometric processing of satellite radar data has emerged as a solution to the increasing amount of acquired SAR data. Automatic SAR and InSAR processing ranges from focusing raw echoes to the computation of displacement time series using large stacks of co-registered radar images. However, this type of interferometric processing approach demands the prescribed or adaptive selection of multiple processing parameters. One of the interferometric processing steps that most strongly influences the final results (displacement maps) is the interferometric phase filtering. There are a large number of phase filtering methods, but the so-called Goldstein filtering method is the most popular [Goldstein and Werner, 1998; Baran et al., 2003]. The Goldstein filter needs essentially two parameters: the size of the filter window and a parameter that sets the filter smoothing intensity. The modified Goldstein method removes the need to select the smoothing parameter, basing it on the local interferometric coherence level, but still requires specifying the dimension of the filtering window. Optimal filtered phase quality usually requires careful selection of those parameters. There is therefore a strong need to develop automatic filtering methods that suit automatic processing while maximizing filtered phase quality. In this paper, I present a recursive adaptive phase filtering algorithm for accurate estimation of differential interferometric ground deformation and local coherence measurements. The proposed filter is based upon the modified Goldstein filter [Baran et al., 2003]. This filtering method improves the quality of the interferograms by performing a recursive iteration using variable (cascade) kernel sizes, and improves the coherence estimation by locally defringing the interferometric phase. The method has been tested using simulations and real cases relevant to the characteristics of the Sentinel-1 mission. I present real examples from C-band interferograms showing strong and weak deformation gradients, with moderate baselines (~100-200 m) and variable temporal baselines of 70 and 190 days, over variably vegetated volcanoes (Mt. Etna, Hawaii, and Nyiragongo-Nyamulagira). The differential phase of these examples shows intense localized volcano deformation and also vast areas of small differential phase variation. The proposed method outperforms the classical Goldstein and modified Goldstein filters by preserving subtle phase variations where the deformation fringe rate is high, and by effectively suppressing phase noise in regions of smooth phase variation. Finally, this method also has the additional advantage of not requiring input parameters, except for the maximum filtering kernel size. References: Baran, I., Stewart, M.P., Kampes, B.M., Perski, Z., Lilly, P. (2003) A modification to the Goldstein radar interferogram filter. IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 9, doi:10.1109/TGRS.2003.817212. Goldstein, R.M., Werner, C.L. (1998) Radar interferogram filtering for geophysical applications. Geophysical Research Letters, vol. 25, no. 21, 4035-4038, doi:10.1029/1998GL900033.
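
    For orientation, the core of the Goldstein filter that the recursive method builds on operates patchwise in the spectral domain: it boosts the dominant fringe spectrum by raising the (smoothed) spectral magnitude to a power alpha, with alpha = 0 leaving the patch unchanged. A minimal sketch for a single patch of complex interferogram values (patch overlap and the adaptive, coherence-based choice of alpha are omitted):

        import numpy as np

        def box3(S):
            # 3x3 moving average with wraparound, matching the FFT's periodicity.
            out = np.zeros_like(S)
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    out += np.roll(np.roll(S, di, axis=0), dj, axis=1)
            return out / 9.0

        def goldstein_patch(z, alpha):
            Z = np.fft.fft2(z)
            S = box3(np.abs(Z))            # smoothed spectral magnitude
            S = S / (S.max() + 1e-12)      # normalized spectral response
            return np.fft.ifft2(Z * S ** alpha)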

  7. An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization

    DTIC Science & Technology

    2012-08-17

    Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not ...), significantly outperforming several state-of-the-art solvers on most tested problems. The resulting MATLAB solver, called TVAL3, has been posted online [23].
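
    For readers unfamiliar with the underlying scheme, here is a minimal Python sketch of the classic augmented Lagrangian loop for an equality-constrained problem. It uses a smooth toy objective rather than the non-smooth total-variation functional TVAL3 targets, and the inner solver (SciPy's L-BFGS-B) is an assumption for illustration.

        import numpy as np
        from scipy.optimize import minimize

        def augmented_lagrangian(f, grad_f, A, b, x0, mu=10.0, iters=20):
            """Minimize f(x) subject to Ax = b via multiplier updates."""
            x, lam = x0.copy(), np.zeros(A.shape[0])
            for _ in range(iters):
                def L(x):
                    r = A @ x - b
                    return f(x) + lam @ r + 0.5 * mu * (r @ r)
                def gL(x):
                    r = A @ x - b
                    return grad_f(x) + A.T @ (lam + mu * r)
                x = minimize(L, x, jac=gL, method="L-BFGS-B").x  # inner subproblem
                lam = lam + mu * (A @ x - b)                     # multiplier update
            return x

        # toy problem: min ||x||^2 subject to sum(x) = 1  ->  x_i = 1/4
        A, b = np.ones((1, 4)), np.array([1.0])
        x = augmented_lagrangian(lambda v: v @ v, lambda v: 2 * v, A, b, np.zeros(4))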

  8. Real option valuation of a decremental regulation service provided by electricity storage.

    PubMed

    Szabó, Dávid Zoltán; Martyr, Randall

    2017-08-13

    This paper is a quantitative study of a reserve contract for real-time balancing of a power system. Under this contract, the owner of a storage device, such as a battery, helps smooth fluctuations in electricity demand and supply by using the device to increase electricity consumption. The battery owner must be able to provide immediate physical cover, and should therefore have sufficient storage available in the battery before entering the contract. Accordingly, the following problem can be formulated for the battery owner: determine the optimal time to enter the contract and, if necessary, the optimal time to discharge electricity before entering the contract. This problem is formulated as one of optimal stopping, and is solved explicitly in terms of the model parameters and instantaneous values of the power system imbalance. The optimal operational strategies thus obtained ensure that the battery owner has positive expected economic profit from the contract. Furthermore, they provide explicit conditions under which the optimal discharge time is consistent with the overall objective of power system balancing. This paper also carries out a preliminary investigation of the 'lifetime value' aggregated from an infinite sequence of these balancing reserve contracts. This lifetime value, which can be viewed as a single project valuation of the battery, is shown to be positive and bounded. Therefore, in the long run such reserve contracts can be beneficial to commercial operators of electricity storage, while reducing some of the financial and operational risks in power system balancing. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).

  9. Maximum Principle in the Optimal Design of Plates with Stratified Thickness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roubicek, Tomas

    2005-03-15

    An optimal design problem for a plate governed by a linear, elliptic equation with bounded thickness varying only in a single prescribed direction and with unilateral isoperimetrical-type constraints is considered. Using Murat-Tartar's homogenization theory for stratified plates and Young-measure relaxation theory, smoothness of the extended cost and constraint functionals is proved, and then the maximum principle necessary for an optimal relaxed design is derived.

  10. Optimization-based scatter estimation using primary modulation for computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Ma, Jingchen; Zhao, Jun

    Purpose: Scatter reduces the image quality in computed tomography (CT), but scatter correction remains a challenge. A previously proposed primary modulation method simultaneously obtains the primary and scatter in a single scan. However, separating the scatter and primary in primary modulation is challenging because it is an underdetermined problem. In this study, an optimization-based scatter estimation (OSE) algorithm is proposed to estimate and correct scatter. Methods: In the concept of primary modulation, the primary is modulated, but the scatter remains smooth by inserting a modulator between the x-ray source and the object. In the proposed algorithm, an objective function is designed for separating the scatter and primary. Prior knowledge is incorporated in the optimization-based framework to improve the accuracy of the estimation: (1) the primary is always positive; (2) the primary is locally smooth and the scatter is smooth; (3) the location of penumbra can be determined; and (4) the scatter-contaminated data provide knowledge about which part is smooth. Results: The simulation study shows that the edge-preserving weighting in OSE improves the estimation accuracy near the object boundary. Simulation study also demonstrates that OSE outperforms the two existing primary modulation algorithms for most regions of interest in terms of the CT number accuracy and noise. The proposed method was tested on a clinical cone beam CT, demonstrating that OSE corrects the scatter even when the modulator is not accurately registered. Conclusions: The proposed OSE algorithm improves the robustness and accuracy in scatter estimation and correction. This method is promising for scatter correction of various kinds of x-ray imaging modalities, such as x-ray radiography, cone beam CT, and the fourth-generation CT.
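
    A 1-D toy version of the separation idea can make the priors concrete: below, the measurement is modelled as a modulated primary plus a smooth scatter term, and both are recovered from one stacked penalized least-squares problem. The signal model, the difference-operator penalties and the weights lam_p and lam_s are illustrative assumptions, not the authors' objective function.

        import numpy as np

        def separate_primary_scatter(y, mod, lam_p=1.0, lam_s=50.0):
            """Toy 1-D separation: y = mod*p + s, with p locally smooth and s very
            smooth, recovered from one stacked penalized least-squares problem."""
            n = len(y)
            I = np.eye(n)
            D1 = np.diff(I, axis=0)        # first-difference operator, (n-1, n)
            D2 = np.diff(I, n=2, axis=0)   # second-difference operator, (n-2, n)
            A = np.vstack([
                np.hstack([np.diag(mod), I]),                   # data fidelity
                np.hstack([lam_p * D1, np.zeros((n - 1, n))]),  # primary roughness
                np.hstack([np.zeros((n - 2, n)), lam_s * D2]),  # scatter roughness
            ])
            rhs = np.concatenate([y, np.zeros(2 * n - 3)])
            z, *_ = np.linalg.lstsq(A, rhs, rcond=None)
            return z[:n], z[n:]            # estimated primary, estimated scatter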

  11. Accurate step-hold tracking of smoothly varying periodic and aperiodic probability.

    PubMed

    Ricci, Matthew; Gallistel, Randy

    2017-07-01

    Subjects observing many samples from a Bernoulli distribution are able to perceive an estimate of the generating parameter. A question of fundamental importance is how the current percept (what we think the probability now is) depends on the sequence of observed samples. Answers to this question are strongly constrained by the manner in which the current percept changes in response to changes in the hidden parameter. Subjects do not update their percept trial-by-trial when the hidden probability undergoes unpredictable and unsignaled step changes; instead, they update it only intermittently in a step-hold pattern. It could be that the step-hold pattern is not essential to the perception of probability and is only an artifact of step changes in the hidden parameter. However, we now report that the step-hold pattern obtains even when the parameter varies slowly and smoothly. It obtains even when the smooth variation is periodic (sinusoidal) and perceived as such. We elaborate on a previously published theory that accounts for: (i) the quantitative properties of the step-hold update pattern; (ii) subjects' quick and accurate reporting of changes; (iii) subjects' second thoughts about previously reported changes; (iv) subjects' detection of higher-order structure in patterns of change. We also call attention to the challenges these results pose for trial-by-trial updating theories.

  12. Improving the resolution for Lamb wave testing via a smoothed Capon algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Xuwei; Zeng, Liang; Lin, Jing; Hua, Jiadong

    2018-04-01

    Lamb wave testing is promising for damage detection and evaluation in large-area structures. The dispersion of Lamb waves is often unavoidable, restricting testing resolution and making the signal hard to interpret. A smoothed Capon algorithm is proposed in this paper to estimate the accurate path length of each wave packet. In the algorithm, frequency-domain whitening is first used to obtain the transfer function in the bandwidth of the excitation pulse. Subsequently, wavenumber-domain smoothing is employed to reduce the correlation between wave packets. Finally, the path lengths are determined by a distance-domain search based on the Capon algorithm. Simulations are used to optimize the number of smoothing iterations. Experiments are performed on an aluminum plate containing two simulated defects. The results demonstrate that spatial resolution is improved significantly by the proposed algorithm.
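
    A minimal Python sketch of a Capon (minimum-variance) distance-domain search with subband smoothing is given below. It assumes a non-dispersive toy medium with constant wave speed c, whereas the paper first whitens the spectrum and works in the wavenumber domain to handle Lamb-wave dispersion.

        import numpy as np

        def capon_range(H, freqs, dists, c=5000.0, L=32, eps=1e-3):
            """Distance-domain Capon search on complex transfer-function samples H
            taken on a uniform frequency grid. Overlapping subbands ("smoothing")
            decorrelate the wave packets; peaks of P indicate path lengths."""
            n = len(H)
            snaps = np.stack([H[i:i + L] for i in range(n - L + 1)], axis=1)
            R = snaps @ snaps.conj().T / snaps.shape[1]
            R += eps * np.real(np.trace(R)) / L * np.eye(L)   # diagonal loading
            Rinv = np.linalg.inv(R)
            f0 = freqs[:L]                                    # subband frequencies
            P = np.empty(len(dists))
            for i, d in enumerate(dists):
                a = np.exp(-2j * np.pi * f0 * d / c) / np.sqrt(L)
                P[i] = 1.0 / np.real(a.conj() @ Rinv @ a)
            return P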

  13. Intensity non-uniformity correction using N3 on 3-T scanners with multichannel phased array coils

    PubMed Central

    Boyes, Richard G.; Gunter, Jeff L.; Frost, Chris; Janke, Andrew L.; Yeatman, Thomas; Hill, Derek L.G.; Bernstein, Matt A.; Thompson, Paul M.; Weiner, Michael W.; Schuff, Norbert; Alexander, Gene E.; Killiany, Ronald J.; DeCarli, Charles; Jack, Clifford R.; Fox, Nick C.

    2008-01-01

    Measures of structural brain change based on longitudinal MR imaging are increasingly important but can be degraded by intensity non-uniformity. This non-uniformity can be more pronounced at higher field strengths, or when using multichannel receiver coils. We assessed the ability of the non-parametric non-uniform intensity normalization (N3) technique to correct non-uniformity in 72 volumetric brain MR scans from the preparatory phase of the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Normal elderly subjects (n = 18) were scanned on different 3-T scanners with a multichannel phased array receiver coil at baseline, using magnetization prepared rapid gradient echo (MP-RAGE) and spoiled gradient echo (SPGR) pulse sequences, and again 2 weeks later. When applying N3, we used five brain masks of varying accuracy and four spline smoothing distances (d = 50, 100, 150 and 200 mm) to ascertain which combination of parameters optimally reduces the non-uniformity. We used the normalized white matter intensity variance (standard deviation/mean) to ascertain quantitatively the correction for a single scan; we used the variance of the normalized difference image to assess quantitatively the consistency of the correction over time from registered scan pairs. Our results showed statistically significant (p < 0.01) improvement in uniformity for individual scans and reduction in the normalized difference image variance when using masks that identified distinct brain tissue classes, and when using smaller spline smoothing distances (e.g., 50-100 mm) for both MP-RAGE and SPGR pulse sequences. These optimized settings may assist future large-scale studies where 3-T scanners and phased array receiver coils are used, such as ADNI, so that intensity non-uniformity does not influence the power of MR imaging to detect disease progression and the factors that influence it. PMID:18063391
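
    The two quality metrics quoted in this abstract are simple to state in code. The sketch below assumes NumPy arrays and binary masks; the exact normalization of the difference image is an assumption.

        import numpy as np

        def wm_cv(image, wm_mask):
            """Normalized white-matter intensity variance (std/mean) for one scan."""
            vals = image[wm_mask]
            return vals.std() / vals.mean()

        def diff_image_variance(scan1, scan2, brain_mask):
            """Variance of the normalized difference image of a registered scan pair,
            used to judge the temporal consistency of the correction."""
            d = (scan1 - scan2) / (0.5 * (scan1 + scan2))
            return d[brain_mask].var()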

  14. LROC assessment of non-linear filtering methods in Ga-67 SPECT imaging

    NASA Astrophysics Data System (ADS)

    De Clercq, Stijn; Staelens, Steven; De Beenhouwer, Jan; D'Asseler, Yves; Lemahieu, Ignace

    2006-03-01

    In emission tomography, iterative reconstruction is usually followed by a linear smoothing filter to make such images more appropriate for visual inspection and diagnosis by a physician. This results in a global blurring of the images, smoothing across edges and possibly discarding valuable image information for detection tasks. The purpose of this study is to investigate what possible advantages a non-linear, edge-preserving postfilter could have for lesion detection in Ga-67 SPECT imaging. Image quality can be defined based on the task that has to be performed on the image. This study used LROC observer studies based on a dataset created by CPU-intensive GATE Monte Carlo simulations of a voxelized digital phantom. The filters considered in this study were a linear Gaussian filter, a bilateral filter, the Perona-Malik anisotropic diffusion filter and the Catte filtering scheme. The 3D MCAT software phantom was used to simulate the distribution of Ga-67 citrate in the abdomen. Tumor-present cases had a 1-cm diameter tumor randomly placed near the edges of the anatomical boundaries of the kidneys, bone, liver and spleen. Our data set was generated out of a single noisy background simulation using the bootstrap method, to significantly reduce the simulation time and to allow for a larger observer data set. Lesions were simulated separately and added to the background afterwards. These were then reconstructed with an iterative approach, using a sufficiently large number of MLEM iterations to establish convergence. The output of a numerical observer was used in a simplex optimization method to estimate an optimal set of parameters for each postfilter. No significant improvement was found for using edge-preserving filtering techniques over standard linear Gaussian filtering.
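
    Of the filters compared, the Perona-Malik scheme is the easiest to sketch. The version below uses the classic exponential conductance and, for brevity, periodic borders (an assumption; production code would replicate edge pixels).

        import numpy as np

        def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
            """Perona-Malik anisotropic diffusion: smooth within regions while
            preserving edges via the conductance g = exp(-(|grad I|/kappa)^2)."""
            u = img.astype(float).copy()
            for _ in range(n_iter):
                dN = np.roll(u, 1, axis=0) - u    # neighbour differences
                dS = np.roll(u, -1, axis=0) - u   # (periodic borders for brevity)
                dE = np.roll(u, -1, axis=1) - u
                dW = np.roll(u, 1, axis=1) - u
                u += dt * (np.exp(-(dN / kappa) ** 2) * dN +
                           np.exp(-(dS / kappa) ** 2) * dS +
                           np.exp(-(dE / kappa) ** 2) * dE +
                           np.exp(-(dW / kappa) ** 2) * dW)
            return u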

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, S; Fan, Q; Lei, Y

    Purpose: In-Water-Output-Ratio (IWOR) plays a significant role in linac-based radiotherapy treatment planning, linking MUs to delivered radiation dose. For an open rectangular field, IWOR depends on both its width and length, and changes rapidly when one of them becomes small. In this study, a universal functional form is proposed to fit the open-field IWOR tables in the Varian TrueBeam representative dataset for all photon energies. Methods: A novel Generalized Mean formula is first used to estimate the Equivalent Square (ES) of a rectangular field. The formula's weighting factor and power index are determined by collapsing all data points as much as possible onto a single curve in the IWOR vs. ES plot. The result is then fitted with a novel universal function IWOR = 1 + b*log(ES/10cm)/(ES/10cm)^c via a least-squares procedure to determine the optimal values of parameters b and c. The maximum relative residual error in IWOR over the entire two-dimensional measurement table, with field sizes between 3 cm and 40 cm, is used to evaluate the quality of fit for the function. Results: The two-step fitting strategy works very well in determining the optimal parameter values for the open-field IWOR of each photon energy in the Varian dataset. A relative residual error ≤0.71% is achieved for all photon energies (including Flattening-Filter-Free modes) with field sizes between 3 cm and 40 cm. The optimal parameter values change smoothly with regular photon beam quality. Conclusion: The universal functional form fits the Varian TrueBeam open-field IWOR measurement tables accurately, with small relative residual errors for all photon energies. It can therefore be an excellent choice for representing IWOR in absolute dose and MU calculations. The functional form can also be used as a QA/commissioning tool to verify measured data quality and consistency by checking the IWOR data behavior against the function for new photon energies with arbitrary beam quality.
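
    The two-step fit described in the Methods can be sketched as follows. The generalized-mean weighting t and power index p, and the (ES, IWOR) sample arrays, are illustrative placeholders only, and the natural logarithm is assumed in the universal function.

        import numpy as np
        from scipy.optimize import curve_fit

        def equiv_square(w, l, t=0.5, p=1.0):
            """Generalized-mean equivalent square of a w x l field; the weighting t
            and power index p are illustrative stand-ins for the fitted values."""
            return (t * w**p + (1 - t) * l**p) ** (1.0 / p)

        def iwor_model(es, b, c):
            """Universal form IWOR = 1 + b*log(ES/10cm)/(ES/10cm)^c, ES in cm."""
            x = es / 10.0
            return 1.0 + b * np.log(x) / x**c

        es = np.array([3.0, 5.0, 10.0, 20.0, 30.0, 40.0])           # illustrative
        iwor = np.array([0.90, 0.95, 1.00, 1.04, 1.06, 1.07])       # values only
        (b, c), _ = curve_fit(iwor_model, es, iwor, p0=(0.1, 1.0))  # least squares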

  16. Visualization and curve-parameter estimation strategies for efficient exploration of phenotype microarray kinetics.

    PubMed

    Vaas, Lea A I; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter

    2012-01-01

    The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed '-omics' techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely available R package for the analysis of PM data.
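
    The authors work in R, but the spline-based parameter extraction translates directly. The sketch below is a minimal Python analogue; the smoothing level s and the lag/mu/A/AUC definitions (tangent-intercept lag, maximum slope, curve maximum, area under the curve) are conventional growth-curve choices assumed here rather than taken from the paper.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        def pm_curve_params(t, y, s=None):
            """Smoothing-spline fit of one respiration curve (t strictly increasing)
            with extraction of growth-curve-like parameters."""
            spl = UnivariateSpline(t, y, s=s)          # s controls smoothness
            tt = np.linspace(t.min(), t.max(), 500)
            slope = spl.derivative()(tt)
            mu = slope.max()                           # steepest rise
            t_mu = tt[slope.argmax()]
            lag = t_mu - (spl(t_mu) - y.min()) / mu    # tangent intercept at baseline
            return dict(lag=float(lag), mu=float(mu),
                        A=float(spl(tt).max()),
                        auc=float(spl.integral(t.min(), t.max())))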

  17. Visualization and Curve-Parameter Estimation Strategies for Efficient Exploration of Phenotype Microarray Kinetics

    PubMed Central

    Vaas, Lea A. I.; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter

    2012-01-01

    Background The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed ‘-omics’ techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. Methodology The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. Conclusions We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely available R package for the analysis of PM data. PMID:22536335

  18. Revealing facts behind spray dried solid dispersion technology used for solubility enhancement

    PubMed Central

    Patel, Bhavesh B.; Patel, Jayvadan K.; Chakraborty, Subhashis; Shukla, Dali

    2013-01-01

    Poor solubility and bioavailability of an existing or newly synthesized drug always pose a challenge in the development of an efficient pharmaceutical formulation. Numerous technologies can be used to improve solubility; among them, amorphous solid dispersion based spray-drying technology can be used successfully to develop products from lab scale to commercial scale with a wide range of powder characteristics. The current review deals with the importance of spray-drying technology in drug delivery, primarily for solubility and bioavailability enhancement. The role of additives, selection of polymer, effects of process and formulation parameters, scale-up optimization, and IVIVC are covered to engage the reader's interest in the technology. Design of experiments (DoE) to optimize the spray-drying process is also covered in the review. Considerably more research is required to evaluate spray drying as a technology for screening the right polymer for solid dispersion, especially to overcome issues related to drug re-crystallization and to achieve a stable product both in vitro and in vivo. In line with recent FDA recommendations, the need of the hour is also to adopt a Quality by Design approach in the manufacturing process to carefully optimize spray-drying technology for its smooth transfer from lab scale to commercial scale. PMID:27134535

  19. Colliders Come of Age in Europe: PETRA and LEP

    NASA Astrophysics Data System (ADS)

    Hofmann, Albert

    2003-04-01

    Based on the success of early electron-positron rings, a new generation of facilities was constructed, optimized in cost and performance. In Europe, PETRA was built at DESY with many innovations: a smooth vacuum chamber with small impedance, efficient multi-cell RF cavities, an optics giving an emittance optimized for luminosity, few bunches in head-on collision, a mini-beta scheme, and accurate energy calibration based on depolarization resonances. From 1978 to 1986 PETRA provided high luminosity at over 22 GeV beam energy for particle physics experiments. The next ring, LEP at CERN, was optimized for two beam-energy ranges, 46 and 93-105 GeV, for Z0 and W production and particle searches. This resulted in a large circumference of 27 km and low-field bending magnets with widely spaced laminations filled with concrete. The RF voltage was produced by Cu cavities coupled to low-loss storage cavities at the lower energy, and by a superconducting RF system exceeding 3.6 GV at the higher energy. Superconducting low-beta insertions helped to obtain a high luminosity, which reached integrated values of over 2000 1/nb per day at high energy. Very important for LEP was a precise energy calibration using depolarization resonances and careful control of all relevant parameters. LEP operated with four experiments from 1989 to 2000.

  20. A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.

    PubMed

    Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan

    2017-06-22

    Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated on the sampling data, which results in a large computational burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals that processes coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimal value of the cost function is searched iteratively by an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics and computational burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations in both pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop combining the proposed and conventional methods is designed to achieve optimal performance in both weak and strong signal circumstances.
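
    Since Gaussian noise turns the MLE into a nonlinear least-squares problem, the LM search can be sketched compactly. The signal model below (a single complex tone in the coherent-integration outputs) and the use of SciPy's least_squares in 'lm' mode are assumptions consistent with, but not taken from, the paper.

        import numpy as np
        from scipy.optimize import least_squares

        def mle_carrier(y, t, x0=(1.0, 0.0, 0.0)):
            """ML estimate of amplitude A, phase phi and Doppler f from complex
            coherent-integration outputs y_k ~ A*exp(j*(phi + 2*pi*f*t_k)) + noise.
            Under Gaussian noise the MLE is a nonlinear least-squares problem,
            solved here by a Levenberg-Marquardt iteration."""
            def resid(p):
                A, phi, f = p
                r = y - A * np.exp(1j * (phi + 2 * np.pi * f * t))
                return np.concatenate([r.real, r.imag])
            return least_squares(resid, x0, method="lm").x   # (A, phi, f)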

  1. Revealing facts behind spray dried solid dispersion technology used for solubility enhancement.

    PubMed

    Patel, Bhavesh B; Patel, Jayvadan K; Chakraborty, Subhashis; Shukla, Dali

    2015-09-01

    Poor solubility and bioavailability of an existing or newly synthesized drug always pose challenge in the development of efficient pharmaceutical formulation. Numerous technologies can be used to improve the solubility and among them amorphous solid dispersion based spray drying technology can be successfully useful for development of product from lab scale to commercial scale with a wide range of powder characteristics. Current review deals with the importance of spray drying technology in drug delivery, basically for solubility and bioavailability enhancement. Role of additives, selection of polymer, effect of process and formulation parameters, scale up optimization, and IVIVC have been covered to gain the interest of readers about the technology. Design of experiment (DoE) to optimize the spray drying process has been covered in the review. A lot more research work is required to evaluate spray drying as a technology for screening the right polymer for solid dispersion, especially to overcome the issue related to drug re-crystallization and to achieve a stable product both in vitro and in vivo. Based on the recent FDA recommendation, the need of the hour is also to adopt Quality by Design approach in the manufacturing process to carefully optimize the spray drying technology for its smooth transfer from lab scale to commercial scale.

  2. Dynamic analysis of concentrated solar supercritical CO2-based power generation closed-loop cycle

    DOE PAGES

    Osorio, Julian D.; Hovsapian, Rob; Ordonez, Juan C.

    2016-01-01

    Here, the dynamic behavior of a concentrated solar power (CSP) supercritical CO2 cycle is studied under different seasonal conditions. The system analyzed is composed of a central receiver, hot and cold thermal energy storage units, a heat exchanger, a recuperator, and multi-stage compression-expansion subsystems with intercoolers and reheaters between compressors and turbines, respectively. Energy models for each component of the system are developed in order to optimize operating and design parameters, such as mass flow rate, intermediate pressures and the effective area of the recuperator, that lead to maximum efficiency. Our results show that the parametric optimization leads the system to a process efficiency of about 21% and a maximum power output close to 1.5 MW. The thermal energy storage allows the system to operate for several hours after sunset; this operating time is increased from approximately 220 to 480 minutes after optimization. The hot and cold thermal energy storage also lessens temperature fluctuations by providing smooth changes of temperature at the turbine and compressor inlets. Our results indicate that concentrated solar systems using supercritical CO2 could be a viable alternative for satisfying energy needs in desert areas with scarce water and fossil fuel resources.

  3. Data preparation for functional data analysis of PM10 in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Shaadan, Norshahida; Jemain, Abdul Aziz; Deni, Sayang Mohd

    2014-07-01

    The use of curves or functional data in the study analysis is increasingly gaining momentum in the various fields of research. The statistical method to analyze such data is known as functional data analysis (FDA). The first step in FDA is to convert the observed data points, which are repeatedly recorded over a period of time or space, into either a rough (raw) or smooth curve. In the case of the smooth curve, basis function expansion is one of the methods used for the data conversion. The data can be converted into a smooth curve either by using the regression smoothing or the roughness penalty smoothing approach. In the regression smoothing approach, the degree of the curve's smoothness depends strongly on the number k of basis functions; meanwhile, for the roughness penalty approach, the smoothness depends on a roughness coefficient given by the parameter λ. Based on previous studies, researchers often used the rather time-consuming trial-and-error or cross-validation method to estimate the appropriate number of basis functions. Thus, this paper proposes a statistical procedure to construct functional data or curves for hourly and daily recorded data. The Bayesian Information Criterion is used to determine the number of basis functions, while the Generalized Cross Validation criterion is used to identify the parameter λ. The proposed procedure is then applied to a ten-year (2001-2010) period of PM10 data from 30 air quality monitoring stations located in Peninsular Malaysia. It was found that the number of basis functions required for the construction of the PM10 daily curve in Peninsular Malaysia was in the interval between 14 and 20, with an average value of 17; the first percentile is 15 and the third percentile is 19. Meanwhile, the initial value of the roughness coefficient was in the interval between 10^-5 and 10^-7, and the mode was 10^-6. An example of the functional descriptive analysis is also shown.
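
    A minimal Python sketch of BIC-based selection of the number of basis functions is shown below. Representing the basis count through the interior knots of a least-squares spline, the quantile knot placement, and the BIC form n*log(RSS/n) + p*log(n) are all implementation assumptions.

        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline

        def select_n_basis(t, y, candidates=range(14, 21), degree=3):
            """Choose the number of cubic B-spline basis functions by BIC;
            n_basis = (#interior knots) + degree + 1 for a least-squares spline.
            t must be strictly increasing."""
            n, best = len(y), None
            for n_basis in candidates:
                n_knots = n_basis - degree - 1
                knots = np.quantile(t, np.linspace(0, 1, n_knots + 2)[1:-1])
                spl = LSQUnivariateSpline(t, y, knots, k=degree)
                rss = float(np.sum((y - spl(t)) ** 2))
                bic = n * np.log(rss / n) + n_basis * np.log(n)
                if best is None or bic < best[0]:
                    best = (bic, n_basis, spl)
            return best[1], best[2]    # chosen basis count and fitted curve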

  4. Oral bioavailability enhancement of raloxifene by developing microemulsion using D-optimal mixture design: optimization and in-vivo pharmacokinetic study.

    PubMed

    Shah, Nirmal; Seth, Avinashkumar; Balaraman, R; Sailor, Girish; Javia, Ankur; Gohil, Dipti

    2018-04-01

    The objective of this work was to utilize the potential of microemulsion for the improvement of the oral bioavailability of raloxifene hydrochloride, a BCS class-II drug with 2% bioavailability. Drug-loaded microemulsion was prepared by the water titration method using Capmul MCM C8, Tween 20, and polyethylene glycol 400 as oil, surfactant, and co-surfactant respectively. The pseudo-ternary phase diagram was constructed between oil and the surfactant mixture to obtain appropriate components and their concentration ranges that result in a large microemulsion existence area. A D-optimal mixture design was utilized as a statistical tool for optimization of the microemulsion, considering oil, S_mix, and water as independent variables with percentage transmittance and globule size as dependent variables. The optimized formulation showed 100 ± 0.1% transmittance and a 17.85 ± 2.78 nm globule size, in close agreement with the values of the dependent variables predicted by the Design-Expert software. The optimized microemulsion showed a pronounced enhancement in release rate compared to plain drug suspension, following a diffusion-controlled release mechanism by the Higuchi model. The formulation showed a zeta potential of -5.88 ± 1.14 mV, which imparts good stability to the drug-loaded microemulsion dispersion. A surface morphology study with transmission electron microscopy showed discrete spherical nano-sized globules with smooth surfaces. An in-vivo pharmacokinetic study of the optimized microemulsion formulation in Wistar rats showed a 4.29-fold enhancement in bioavailability. A stability study showed adequate results for the various parameters checked over six months. These results reveal the potential of microemulsion for significant improvement in the oral bioavailability of poorly soluble raloxifene hydrochloride.

  5. Comparison of the magnitude and phase of the reflection coefficient from a smooth water/sand interface with elastic and poroelastic models

    NASA Astrophysics Data System (ADS)

    Isakson, Marcia; Camin, H. John; Canepa, Gaetano

    2005-04-01

    The reflection coefficient from a sand/water interface is an important parameter in modeling the acoustics of littoral environments. Many models have been advanced to describe the influence of the sediment parameters and interface roughness parameters on the reflection coefficient. In this study, the magnitude and phase of the reflection coefficient from 30 to 160 kHz is measured in a bistatic experiment on a smoothed water/sand interface at grazing angles from 5 to 75 degrees. The measured complex reflection coefficient is compared with the fluid model, the elastic model and poro-elastic models. Effects of rough surface scattering are investigated using the Bottom Response from Inhomogeneities and Surface using Small Slope Approximation (BoRIS-SSA). Spherical wave effects are modeled using plane wave decomposition. Models are considered for their ability to predict the measured results using realistic parameters. [Work supported by ONR, Ocean Acoustics.]

  6. Research of beam smoothing technologies using CPP, SSD, and PS

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Su, Jingqin; Hu, Dongxia; Li, Ping; Yuan, Haoyu; Zhou, Wei; Yuan, Qiang; Wang, Yuancheng; Tian, Xiaocheng; Xu, Dangpeng; Dong, Jun; Zhu, Qihua

    2015-02-01

    Precise physical experiments place strict requirements on target illumination uniformity in Inertial Confinement Fusion. To obtain a smoother focal spot and suppress transverse SBS in large-aperture optics, Multi-FM smoothing by spectral dispersion (SSD) was studied in combination with a continuous phase plate (CPP) and polarization smoothing (PS). New ways of PS are being developed to improve laser irradiation uniformity and to address LPI problems in indirect-drive laser fusion. The near-field and far-field properties of beams using polarization smoothing were studied and compared, including a birefringent wedge and a polarization control array. As more parameters can be manipulated in a combined beam smoothing scheme, quad beam smoothing was also studied. Simulation results indicate that, by adjusting the dispersion directions of one-dimensional (1-D) SSD beams in a quad, two-dimensional SSD can be obtained. Experiments have been done on the SG-III laser facility using CPP and Multi-FM SSD. The research provides some theoretical and experimental basis for the application of CPP, SSD and PS on high-power laser facilities.

  7. Implication of adaptive smoothness constraint and Helmert variance component estimation in seismic slip inversion

    NASA Astrophysics Data System (ADS)

    Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi

    2017-10-01

    When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicated that the ASC was superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
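
    The baseline this paper improves on, a Laplacian-smoothness-regularized least-squares inversion, can be sketched in a few lines. The stacked-system formulation below is standard Tikhonov practice and does not include the adaptive constraint or the Helmert-based choice of the regularization parameter alpha.

        import numpy as np

        def laplacian_1d(n):
            """Second-difference smoothing matrix (the classic LSC)."""
            return np.diff(np.eye(n), n=2, axis=0)

        def regularized_slip(G, d, L, alpha):
            """Minimize ||G m - d||^2 + alpha^2 ||L m||^2 by stacking the
            smoothness equations under the data equations."""
            A = np.vstack([G, alpha * L])
            b = np.concatenate([d, np.zeros(L.shape[0])])
            m, *_ = np.linalg.lstsq(A, b, rcond=None)
            return m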

  8. Boosted Multivariate Trees for Longitudinal Data

    PubMed Central

    Pande, Amol; Li, Liang; Rajeswaran, Jeevanantham; Ehrlinger, John; Kogalur, Udaya B.; Blackstone, Eugene H.; Ishwaran, Hemant

    2017-01-01

    Machine learning methods provide a powerful approach for analyzing longitudinal data in which repeated measurements are observed for a subject over time. We boost multivariate trees to fit a novel flexible semi-nonparametric marginal model for longitudinal data. In this model, features are assumed to be nonparametric, while feature-time interactions are modeled semi-nonparametrically utilizing P-splines with an estimated smoothing parameter. In order to avoid overfitting, we describe a relatively simple in-sample cross-validation method which can be used to estimate the optimal boosting iteration and which has the surprising added benefit of stabilizing certain parameter estimates. Our new multivariate tree boosting method is shown to be highly flexible, robust to covariance misspecification and unbalanced designs, and resistant to overfitting in high dimensions. Feature selection can be used to identify important features and feature-time interactions. An application to longitudinal data of forced 1-second lung expiratory volume (FEV1) for lung transplant patients identifies an important feature-time interaction and illustrates the ease with which our method can find complex relationships in longitudinal data. PMID:29249866

  9. Multi-parameters monitoring during traditional Chinese medicine concentration process with near infrared spectroscopy and chemometrics

    NASA Astrophysics Data System (ADS)

    Liu, Ronghua; Sun, Qiaofeng; Hu, Tian; Li, Lian; Nie, Lei; Wang, Jiayue; Zhou, Wanhui; Zang, Hengchang

    2018-03-01

    As a powerful process analytical technology (PAT) tool, near infrared (NIR) spectroscopy has been widely used in real-time monitoring. In this study, NIR spectroscopy was applied to monitor multiple parameters of the traditional Chinese medicine (TCM) Shenzhiling oral liquid during the concentration process to guarantee the quality of products. Five lab-scale batches were employed to construct quantitative models for five chemical ingredients and one physical change (sample density) during the concentration process. Paeoniflorin, albiflorin, liquiritin and sample density were modeled by partial least squares regression (PLSR), while the contents of glycyrrhizic acid and cinnamic acid were modeled by support vector machine regression (SVMR). Standard normal variate (SNV) and/or Savitzky-Golay (SG) smoothing with derivative methods were adopted for spectral pretreatment. Variable selection methods, including correlation coefficient (CC), competitive adaptive reweighted sampling (CARS) and interval partial least squares regression (iPLS), were used to optimize the models. The results indicated that NIR spectroscopy is an effective tool for monitoring the concentration process of Shenzhiling oral liquid.
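
    As a sketch of this modelling pipeline, the snippet below combines SNV pretreatment with a PLS regression from scikit-learn. The number of components is a placeholder, the data are synthetic stand-ins, and the SVM-regression and variable-selection steps of the paper are omitted.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def snv(spectra):
            """Standard normal variate: centre and scale each spectrum (row)."""
            mu = spectra.mean(axis=1, keepdims=True)
            sd = spectra.std(axis=1, keepdims=True)
            return (spectra - mu) / sd

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 200))   # synthetic placeholder NIR spectra
        y = rng.normal(size=40)          # synthetic placeholder reference values
        pls = PLSRegression(n_components=5).fit(snv(X), y)
        y_hat = pls.predict(snv(X))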

  10. Optimising rigid motion compensation for small animal brain PET imaging

    NASA Astrophysics Data System (ADS)

    Spangler-Bickell, Matthew G.; Zhou, Lin; Kyme, Andre Z.; De Laat, Bart; Fulton, Roger R.; Nuyts, Johan

    2016-10-01

    Motion compensation (MC) in PET brain imaging of awake small animals is attracting increased attention in preclinical studies since it avoids the confounding effects of anaesthesia and enables behavioural tests during the scan. A popular MC technique is to use multiple external cameras to track the motion of the animal’s head, which is assumed to be represented by the motion of a marker attached to its forehead. In this study we have explored several methods to improve the experimental setup and the reconstruction procedures of this method: optimising the camera-marker separation; improving the temporal synchronisation between the motion tracker measurements and the list-mode stream; post-acquisition smoothing and interpolation of the motion data; and list-mode reconstruction with appropriately selected subsets. These techniques have been tested and verified on measurements of a moving resolution phantom and brain scans of an awake rat. The proposed techniques improved the reconstructed spatial resolution of the phantom by 27% and of the rat brain by 14%. We suggest a set of optimal parameter values to use for awake animal PET studies and discuss the relative significance of each parameter choice.

  11. VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA

    PubMed Central

    Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu

    2009-01-01

    We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation penalty (SCAD) and adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. Particularly, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial is presented to illustrate the proposed methodology. PMID:20336190

  12. JIGSAW: Joint Inhomogeneity estimation via Global Segment Assembly for Water-fat separation.

    PubMed

    Lu, Wenmiao; Lu, Yi

    2011-07-01

    Water-fat separation in magnetic resonance imaging (MRI) is of great clinical importance, and the key to uniform water-fat separation lies in field map estimation. This work deals with three-point field map estimation, in which water and fat are modelled as two single-peak spectral lines, and field inhomogeneities shift the spectrum by an unknown amount. Due to the simplified spectrum modelling, there exists inherent ambiguity in forming field maps from multiple locally feasible field map values at each pixel. To resolve such ambiguity, spatial smoothness of field maps has been incorporated as a constraint of an optimization problem. However, there are two issues: the optimization problem is computationally intractable and even when it is solved exactly, it does not always separate water and fat images. Hence, robust field map estimation remains challenging in many clinically important imaging scenarios. This paper proposes a novel field map estimation technique called JIGSAW. It extends a loopy belief propagation (BP) algorithm to obtain an approximate solution to the optimization problem. The solution produces locally smooth segments and avoids error propagation associated with greedy methods. The locally smooth segments are then assembled into a globally consistent field map by exploiting the periodicity of the feasible field map values. In vivo results demonstrate that JIGSAW outperforms existing techniques and produces correct water-fat separation in challenging imaging scenarios.

  13. Penalized nonparametric scalar-on-function regression via principal coordinates

    PubMed Central

    Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu

    2016-01-01

    A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
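
    Principal coordinate ridge regression reduces, at its core, to classical multidimensional scaling followed by a ridge fit. The sketch below assumes a precomputed distance matrix D (e.g., from dynamic time warping) and a fixed ridge penalty, whereas the paper tunes the penalty through generalized additive modelling machinery.

        import numpy as np
        from sklearn.linear_model import Ridge

        def principal_coordinates(D, k):
            """Classical MDS: embed items with pairwise distances D into k dims."""
            n = D.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n
            B = -0.5 * J @ (D ** 2) @ J        # double-centred squared distances
            w, V = np.linalg.eigh(B)
            idx = np.argsort(w)[::-1][:k]      # leading eigenvalues
            return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

        rng = np.random.default_rng(0)
        X = rng.normal(size=(30, 3))                           # toy items
        D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # toy distances
        coords = principal_coordinates(D, k=2)
        model = Ridge(alpha=1.0).fit(coords, X[:, 0])          # toy response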

  14. Towards microscale electrohydrodynamic three-dimensional printing

    NASA Astrophysics Data System (ADS)

    He, Jiankang; Xu, Fangyuan; Cao, Yi; Liu, Yaxiong; Li, Dichen

    2016-02-01

    It is challenging for the existing three-dimensional (3D) printing techniques to fabricate high-resolution 3D microstructures with low costs and high efficiency. In this work we present a solvent-based electrohydrodynamic 3D printing technique that allows fabrication of microscale structures like single walls, crossed walls, lattice and concentric circles. Process parameters were optimized to deposit tiny 3D patterns with a wall width smaller than 10 μm and a high aspect ratio of about 60. Tight bonding among neighbour layers could be achieved with a smooth lateral surface. In comparison with the existing microscale 3D printing techniques, the presented method is low-cost, highly efficient and applicable to multiple polymers. It is envisioned that this simple microscale 3D printing strategy might provide an alternative and innovative way for application in MEMS, biosensor and flexible electronics.

  15. Smooth Sensor Motion Planning for Robotic Cyber Physical Social Sensing (CPSS)

    PubMed Central

    Tang, Hong; Li, Liangzhi; Xiao, Nanfeng

    2017-01-01

    Although many researchers have begun to study the area of Cyber Physical Social Sensing (CPSS), few are focused on robotic sensors. We successfully utilize robots in CPSS, and propose a sensor trajectory planning method in this paper. Trajectory planning is a fundamental problem in mobile robotics. However, traditional methods are not suited to robotic sensors because of their low efficiency, instability, and the non-smooth paths they generate. This paper adopts an optimizing function to generate several intermediate points and regresses these discrete points to a quintic polynomial which can output a smooth trajectory for the robotic sensor. Simulations demonstrate that our approach is robust and efficient, and can be well applied in the CPSS field. PMID:28218649
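
    The end-point-constrained variant of the quintic trajectory is easy to sketch: six boundary conditions (position, velocity and acceleration at both ends) determine the six coefficients. This is a minimal stand-in for the paper's approach, which regresses optimizer-generated intermediate points onto the quintic rather than imposing boundary conditions alone.

        import numpy as np

        def quintic_coeffs(t0, t1, p0, v0, a0, p1, v1, a1):
            """Coefficients of x(t) = sum(c_i * t^i) matching position, velocity
            and acceleration at both endpoints (smooth, jerk-limited motion)."""
            def rows(t):
                return np.array([
                    [1, t, t**2,   t**3,    t**4,    t**5],    # position
                    [0, 1, 2*t,  3*t**2,  4*t**3,  5*t**4],    # velocity
                    [0, 0, 2,    6*t,    12*t**2, 20*t**3],    # acceleration
                ])
            A = np.vstack([rows(t0), rows(t1)])
            b = np.array([p0, v0, a0, p1, v1, a1])
            return np.linalg.solve(A, b)

        c = quintic_coeffs(0.0, 1.0, 0, 0, 0, 1, 0, 0)   # rest-to-rest unit move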

  16. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES

    Turner, A. J.; Jacob, D. J.

    2015-06-30

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  17. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-01-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  18. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-06-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  19. Image Discrimination Models With Stochastic Channel Selection

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Many models of human image processing feature a large fixed number of channels representing cortical units varying in spatial position (visual field direction and eccentricity) and spatial frequency (radial frequency and orientation). The values of these parameters are usually sampled at fixed values selected to ensure adequate overlap given the bandwidth and/or spread parameters, which are usually fixed. Even high levels of overlap do not always ensure that the performance of the model will vary smoothly with image translation or scale changes. Physiological measurements of bandwidth and/or spread parameters result in a broad distribution of estimated parameter values, and the prediction of some psychophysical results is facilitated by the assumption that these parameters also take on a range of values. Selecting a sample of channels from a continuum of channels rather than using a fixed set can make model performance vary smoothly with changes in image position, scale, and orientation. It also facilitates the addition of spatial inhomogeneity, nonlinear feature channels, and focus of attention to channel models.

  20. Transitions of interaction outcomes in a uni-directional consumer-resource system

    USGS Publications Warehouse

    Wang, Y.; DeAngelis, D.L.

    2011-01-01

    A uni-directional consumer-resource system of two species is analyzed. Our aim is to understand the mechanisms that determine how the interaction outcomes depend on the context of the interaction; that is, on the model parameters. The dynamic behavior of the model is described and, in particular, it is demonstrated that no periodic orbits exist. Then the parameter (factor) space is shown to be divided into four regions, which correspond to the four forms of interaction outcomes; i.e. mutualism, commensalism, parasitism and amensalism. It is shown that the interaction outcomes of the system transition smoothly among these four forms when the parameters of the system are varied continuously. Varying each parameter individually or varying pairs of parameters can also lead to smooth transitions between the interaction outcomes. The analysis leads to both conditions for which each species achieves its maximal density, and situations in which periodic oscillations of the interaction outcomes emerge. © 2011 Elsevier Ltd.

  1. Variational algorithms for nonlinear smoothing applications

    NASA Technical Reports Server (NTRS)

    Bach, R. E., Jr.

    1977-01-01

    A variational approach is presented for solving a nonlinear, fixed-interval smoothing problem with application to offline processing of noisy data for trajectory reconstruction and parameter estimation. The nonlinear problem is solved as a sequence of linear two-point boundary value problems. Second-order convergence properties are demonstrated. Algorithms for both continuous and discrete versions of the problem are given, and example solutions are provided.

  2. Improvement of the relaxation time and the order parameter of nematic liquid crystal using a hybrid alignment mixture of carbon nanotube and polyimide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyojin; Yang, Seungbin; Lee, Ji-Hoon, E-mail: jihoonlee@jbnu.ac.kr

    2014-05-12

    We examined the electro-optical properties of a nematic liquid crystal (LC) sample whose substrates were coated with a mixture of carbon nanotubes (CNT) and polyimide (PI). The relaxation time of the sample coated with the 1.5 wt. % CNT mixture was reduced by about 35% compared with the pure-polyimide sample. The elastic constant and the order parameter of the CNT-mixture sample were increased, and the fast relaxation of the LC could be approximated by mean-field theory. Atomic force microscopy images showed that the CNT-mixed polyimide formed a smoother surface than the pure PI, indicating that the increased order parameter is related to the smooth surface topology of the CNT-polyimide mixture.

  3. Fundamentals of cutting.

    PubMed

    Williams, J G; Patel, Y

    2016-06-06

    The process of cutting is analysed in fracture mechanics terms with a view to quantifying the various parameters involved. The model used is that of orthogonal cutting with a wedge removing a layer of material, or chip. The behaviour of the chip is governed by its thickness: for large radii of curvature the chip is elastic and smooth cutting occurs. For smaller thicknesses there is a transition, first to plastic bending and then to plastic shear, and smooth chips are formed. The governing parameters are the tool geometry, principally the wedge angle, and the material properties of elastic modulus, yield stress and fracture toughness. Friction can also be important. It is demonstrated that the cutting process may be quantified via these parameters, which could be useful in the study of cutting in biology.

  4. Love-type wave propagation in a pre-stressed viscoelastic medium influenced by smooth moving punch

    NASA Astrophysics Data System (ADS)

    Singh, A. K.; Parween, Z.; Chatterjee, M.; Chattopadhyay, A.

    2015-04-01

    In the present paper, a mathematical model is developed to study the effect of a smooth moving semi-infinite punch on the propagation of a Love-type wave in an initially stressed viscoelastic strip. The dynamic stress concentration due to the punch for a force of constant intensity has been obtained in closed form. A method based on the Wiener-Hopf technique, as indicated by Matczynski, has been employed. The study demonstrates the significant effect of various parameters, viz. the speed of the moving punch relative to the Love-type wave speed, horizontal compressive/tensile initial stress, vertical compressive/tensile initial stress, the frequency parameter, and the viscoelastic parameter, on the dynamic stress concentration due to the semi-infinite punch. Moreover, some important peculiarities have been traced out and depicted by means of graphs.

  5. Cariogenic potential of foods. II. Relationship of food composition, plaque microbial counts, and salivary parameters to caries in the rat model.

    PubMed

    Mundorff-Shrestha, S A; Featherstone, J D; Eisenberg, A D; Cowles, E; Curzon, M E; Espeland, M A; Shields, C P

    1994-01-01

A series of rat caries experiments was carried out to test the relative cariogenic potential and to identify the major cariogenic elements of 22 popular snack foods. Parameters that were measured included rat caries, number of cariogenic bacteria in plaque, salivary parameters including flow rate, buffering capacity, total protein, lysozyme and amylase content, and composition of test foods including protein, fat, phosphorus, calcium, fluoride, galactose, glucose, total reducing sugar, sucrose, and starch. Many interesting relationships were observed between food components, numbers of plaque bacteria, salivary components, and specific types of carious lesions. Protein, fat, and phosphorus in foods were all associated with inhibition of both sulcal and buccolingual (smooth-surface) caries. Food fluoride was associated with inhibition of buccolingual caries, whereas calcium was related to inhibition of sulcal caries. Glucose, reducing sugar, and sucrose in foods were all related to promotion of both sulcal and smooth-surface caries. The numbers of Streptococcus sobrinus in plaque were associated with promotion of smooth-surface caries only, whereas lactobacilli, non-mutans bacteria, and total viable flora were related to promotion of both smooth-surface and sulcal caries. The salivary flow rate was associated with inhibition of both buccolingual and sulcal caries. Salivary buffering capacity (at pH 7) and salivary lysozyme delivery were associated with inhibition of number and severity of sulcal caries, while the salivary amylase content was related to the promotion of the number of sulcal lesions.

  6. Non-linear dynamic compensation system

    NASA Technical Reports Server (NTRS)

    Lin, Yu-Hwan (Inventor); Lurie, Boris J. (Inventor)

    1992-01-01

A non-linear dynamic compensation subsystem is added in the feedback loop of a high-precision optical mirror positioning control system to smoothly alter the control system response bandwidth from a relatively wide response bandwidth, optimized for speed of control system response, to a bandwidth sufficiently narrow to reduce position errors resulting from the quantization noise inherent in the inductosyn used to measure mirror position. The non-linear dynamic compensation system includes a limiter for limiting the error signal within preselected limits, a compensator for modifying the limiter output to achieve the reduced bandwidth response, and an adder for combining the modified error signal with the difference between the limited and unlimited error signals. The adder output is applied to the control system motor so that the system response is optimized for accuracy when the error signal is within the preselected limits, optimized for speed of response when the error signal is substantially beyond the preselected limits, and smoothly varied therebetween as the error signal approaches the preselected limits.
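
    A minimal discrete-time sketch of the signal path described above - limiter, compensator, adder - may clarify the structure. The first-order low-pass below stands in for the (unspecified) narrow-band compensator, and all constants are assumptions; within the limits the loop sees the slow, accurate path, while large errors pass through the adder term unattenuated.

      import numpy as np

      def compensate(error, limit=1.0, alpha=0.05):
          """Limiter -> compensator -> adder, per the structure in the abstract."""
          out = np.empty(len(error))
          state = 0.0
          for k, e in enumerate(error):
              e_lim = float(np.clip(e, -limit, limit))  # limiter
              state += alpha * (e_lim - state)          # stand-in narrow-band compensator
              out[k] = state + (e - e_lim)              # adder: modified signal + overflow
          return out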

  7. Development of an adaptive hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1994-01-01

    In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.

  8. Mini-batch optimized full waveform inversion with geological constrained gradient filtering

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai

    2018-05-01

High computation cost and the generation of solutions without geological sense have hindered the wide application of Full Waveform Inversion (FWI). The source encoding technique is a way to dramatically reduce the cost of FWI, but it is subject to a fixed-spread acquisition requirement and converges slowly because of the need to suppress cross-talk. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to gradients generally gives non-geological inversion results, and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI by a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent optimization is adopted to reduce the computation time by choosing a subset of all shots for each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. The stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
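
    As a rough illustration of the mini-batch strategy only (not the authors' wave-equation implementation), the sketch below uses toy linear forward operators per "shot", draws a random shot subset each iteration, and smooths the stacked gradient along one axis as a crude stand-in for structure-oriented smoothing; all operators and constants are assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      rng = np.random.default_rng(0)
      nz, nx, nshots = 20, 30, 64
      m_true = rng.standard_normal((nz, nx))
      # One toy linear forward operator and data vector per "shot"
      A = [rng.standard_normal((50, nz * nx)) for _ in range(nshots)]
      d = [A_s @ m_true.ravel() for A_s in A]

      m = np.zeros((nz, nx))
      for k in range(200):
          batch = rng.choice(nshots, size=8, replace=False)        # shot subset
          g = sum(A[s].T @ (A[s] @ m.ravel() - d[s]) for s in batch)
          # Smoothing along x as a stand-in for structure-oriented smoothing
          g = gaussian_filter1d(g.reshape(nz, nx), sigma=2.0, axis=1)
          m -= 1e-4 * g                                            # mini-batch gradient step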

  9. Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.

    PubMed

    Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L

    2008-04-01

The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdf). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 x 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (σ²/μ) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved sharpness of the edges. In conclusion, a trade-off between preservation of sharpness of edges and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.
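
    The moment-based scale parameter of the Gamma-pdf (σ²/μ) mentioned above can be estimated locally with separable mean filters. The sketch below assumes a synthetic envelope image and a window size standing in for the "5 x 2 speckles" criterion.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def gamma_scale_map(env, win=(9, 5)):
          """Local moment-based estimate of the Gamma scale parameter, sigma^2/mu."""
          mu = uniform_filter(env, size=win)        # local mean
          mu2 = uniform_filter(env**2, size=win)    # local second moment
          var = np.maximum(mu2 - mu**2, 0.0)        # local variance
          return var / np.maximum(mu, 1e-9)

      env = np.random.default_rng(0).gamma(shape=2.0, scale=1.5, size=(128, 128))
      scale_img = gamma_scale_map(env)              # parametric image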

  10. An adaptive segment method for smoothing lidar signal based on noise estimation

    NASA Astrophysics Data System (ADS)

    Wang, Yuzhao; Luo, Pingping

    2014-10-01

An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise level is defined as 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve: if the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows in the signals are fixed. The ASSM instead creates changing end points for different signals, so the smoothing windows can be set adaptively. The windows are always set to half of the segment length, and average smoothing is then applied within each segment. An iterative process is required to reduce the end-point aberration effect of average smoothing, and two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, which means that frequency-domain disturbances are avoided. A lidar echo was simulated in the experimental work. The echo was assumed to be created by a space-borne lidar (e.g., CALIOP), and white Gaussian noise was added to the echo to represent the random noise arising from the environment and the detector. The novel method, ASSM, was applied to the noisy echo to filter the noise. In the test, N was set to 3 and the number of iterations to two. The results show that the signal can be smoothed adaptively by the ASSM, but N and the number of iterations may need to be optimized when the ASSM is applied to a different lidar.
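
    A compact sketch of the ASSM as described - noise level 3σ from the background, segment end points where the adjacent-point jump exceeds 3Nσ, and iterated moving-average smoothing within each segment - is given below; the window-size details are assumptions where the abstract is silent.

      import numpy as np

      def assm(signal, background, N=3, iterations=2):
          sigma = np.std(background)                        # noise level from background
          # Segment end points: adjacent-point jump exceeds 3*N*sigma
          jumps = np.where(np.abs(np.diff(signal)) > 3 * N * sigma)[0] + 1
          edges = np.concatenate(([0], jumps, [len(signal)]))
          out = signal.astype(float)                        # working copy
          for a, b in zip(edges[:-1], edges[1:]):
              seg = out[a:b]
              if len(seg) < 4:
                  continue
              win = max(3, (len(seg) // 2) | 1)             # ~half the segment, odd
              kernel = np.ones(win) / win
              for _ in range(iterations):                   # iterate to curb end effects
                  seg = np.convolve(seg, kernel, mode="same")
              out[a:b] = seg
          return out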

  11. On the optimal systems of subalgebras for the equations of hydrodynamic stability analysis of smooth shear flows and their group-invariant solutions

    NASA Astrophysics Data System (ADS)

    Hau, Jan-Niklas; Oberlack, Martin; Chagelishvili, George

    2017-04-01

    We present a unifying solution framework for the linearized compressible equations for two-dimensional linearly sheared unbounded flows using the Lie symmetry analysis. The full set of symmetries that are admitted by the underlying system of equations is employed to systematically derive the one- and two-dimensional optimal systems of subalgebras, whose connected group reductions lead to three distinct invariant ansatz functions for the governing sets of partial differential equations (PDEs). The purpose of this analysis is threefold and explicitly we show that (i) there are three invariant solutions that stem from the optimal system. These include a general ansatz function with two free parameters, as well as the ansatz functions of the Kelvin mode and the modal approach. Specifically, the first approach unifies these well-known ansatz functions. By considering two limiting cases of the free parameters and related algebraic transformations, the general ansatz function is reduced to either of them. This fact also proves the existence of a link between the Kelvin mode and modal ansatz functions, as these appear to be the limiting cases of the general one. (ii) The Lie algebra associated with the Lie group admitted by the PDEs governing the compressible dynamics is a subalgebra associated with the group admitted by the equations governing the incompressible dynamics, which allows an additional (scaling) symmetry. Hence, any consequences drawn from the compressible case equally hold for the incompressible counterpart. (iii) In any of the systems of ordinary differential equations, derived by the three ansatz functions in the compressible case, the linearized potential vorticity is a conserved quantity that allows us to analyze vortex and wave mode perturbations separately.

  12. Screening somatic cell nuclear transfer parameters for generation of transgenic cloned cattle with intragenomic integration of additional gene copies that encode bovine adipocyte-type fatty acid-binding protein (A-FABP).

    PubMed

    Guo, Yong; Li, Hejuan; Wang, Ying; Yan, Xingrong; Sheng, Xihui; Chang, Di; Qi, Xiaolong; Wang, Xiangguo; Liu, Yunhai; Li, Junya; Ni, Hemin

    2017-02-01

Somatic cell nuclear transfer (SCNT) is frequently used to produce transgenic cloned livestock, but it is still associated with low success rates. To our knowledge, we are the first to report successful production of transgenic cattle that overexpress bovine adipocyte-type fatty acid binding protein (A-FABP) with the aid of SCNT. Intragenomic integration of additional A-FABP gene copies has been found to be positively correlated with the intramuscular fat content in different farm livestock species. First, we optimized the cloning parameters used to produce bovine embryos integrated with A-FABP by SCNT, such as the applied voltage field strength and pulse duration for electrofusion, the morphology and size of donor cells, and the number of donor cell passages. Then, bovine fibroblast cells from Qinchuan cattle were transfected with A-FABP and used as donor cells for SCNT. Hybrids of Simmental and Luxi local cattle were selected as the recipient females for A-FABP transgenic SCNT-derived embryos. The results showed that a field strength of 2.5 kV/cm with two 10-μs electrical pulses was ideal for electrofusion, and that 4th-6th generation circular, smooth-type donor cells with diameters of 15-25 μm were optimal for producing transgenic bovine embryos by SCNT, resulting in higher fusion (80%), cleavage (73%), and blastocyst (27%) rates. In addition, we obtained two transgenic cloned calves that expressed additional bovine A-FABP gene copies, as detected by PCR-amplified cDNA sequencing. We propose a set of optimal protocols to produce transgenic SCNT-derived cattle with intragenomic integration of ectopic A-FABP exon sequences.

  13. Formulation Development, Process Optimization, and In Vitro Characterization of Spray-Dried Lansoprazole Enteric Microparticles

    PubMed Central

    Vora, Chintan; Patadia, Riddhish; Mittal, Karan; Mashru, Rajashree

    2016-01-01

This research focuses on the development of enteric microparticles of lansoprazole in a single step by employing the spray drying technique and studies the effects of variegated formulation/process variables on entrapment efficiency and in vitro gastric resistance. Preliminary trials were undertaken to optimize the type of Eudragit and its various levels. Further trials included the incorporation of the plasticizer triethyl citrate and combinations of other polymers with Eudragit S 100. Finally, various process parameters were varied to investigate their effects on microparticle properties. The results revealed Eudragit S 100 to be the best-performing polymer, giving the highest gastric resistance in comparison to Eudragit L 100-55 and L 100 due to its higher pH threshold and its polymeric backbone. Incorporation of plasticizer not only influenced entrapment efficiency but also severely diminished gastric resistance. On the contrary, polymeric combinations reduced entrapment efficiency for both sodium alginate and glyceryl behenate, but significantly influenced gastric resistance only for sodium alginate and not for glyceryl behenate. The optimized process parameters comprised an inlet temperature of 150°C, atomizing air pressure of 2 kg/cm2, feed solution concentration of 6% w/w, feed solution spray rate of 3 ml/min, and aspirator volume of 90%. The SEM analysis revealed smooth and spherical shape morphologies. The DSC and PXRD study divulged the amorphous nature of the drug. Regarding stability, the product was found to be stable under 3 months of accelerated and long-term stability conditions as per ICH Q1A(R2) guidelines. Thus, the technique offers a simple means to generate polymeric enteric microparticles that are ready to formulate and can be directly filled into hard gelatin capsules. PMID:27222612

  14. Reciprocating and Screw Compressor semi-empirical models for establishing minimum energy performance standards

    NASA Astrophysics Data System (ADS)

    Javed, Hassan; Armstrong, Peter

    2015-08-01

The efficiency bar for a Minimum Energy Performance Standard (MEPS) generally aims to minimize the energy consumption and life-cycle cost of a given chiller type and size category serving a typical load profile. Compressor type has a significant impact on chiller performance. The performance of screw and reciprocating compressors is expressed in terms of pressure ratio and speed for a given refrigerant and suction density. Isentropic efficiency for a screw compressor is strongly affected by under- and over-compression (UOC) processes. The theoretical simple physical UOC model involves a compressor-specific (but sometimes unknown) volume index parameter and the real-gas properties of the refrigerant used. Isentropic efficiency is estimated by the UOC model and a bi-cubic used to account for flow, friction and electrical losses. The unknown volume index, a smoothing parameter (to flatten the UOC model peak) and the bi-cubic coefficients are identified by curve fitting to minimize an appropriate residual norm. Chiller performance maps are produced for each compressor type by selecting optimized sub-cooling and condenser fan speed options in a generic component-based chiller model. SEER is computed from the hourly loads (for a typical building in the climate of interest) and the specific power at the same hourly conditions. An empirical UAE cooling load model, scalable to any equipment capacity, is used to establish the proposed UAE MEPS. Annual electricity use and cost, determined from the SEER and the annual cooling load, together with chiller component cost data, are used to find optimal chiller designs and perform a life-cycle cost comparison between screw and reciprocating compressor-based chillers. This process may be applied to any climate/load model in order to establish optimized MEPS for any country and/or region.

  15. String-averaging incremental subgradients for constrained convex optimization with applications to reconstruction of tomographic images

    NASA Astrophysics Data System (ADS)

    Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo

    2016-11-01

    We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
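
    The string-averaging scheme is easy to state in code: process the iterate along each string with incremental subgradient steps, then average the string end-points. The sketch below, with a toy non-smooth objective sum_i |x - a_i| and a diminishing step size, is a minimal illustration, not the authors' tomographic implementation.

      import numpy as np

      def sa_ism(subgrads, x0, strings, steps):
          x = np.asarray(x0, dtype=float)
          for alpha in steps:
              ends = []
              for string in strings:              # strings could run in parallel
                  z = x.copy()
                  for i in string:                # incremental subgradient pass
                      z = z - alpha * subgrads[i](z)
                  ends.append(z)
              x = np.mean(ends, axis=0)           # average the string end-points
          return x

      # Toy non-smooth convex problem: minimize sum_i |x - a_i|
      a = [0.0, 1.0, 2.0, 10.0]
      subgrads = [lambda x, ai=ai: np.sign(x - ai) for ai in a]
      x_star = sa_ism(subgrads, x0=[5.0], strings=[[0, 1], [2, 3]],
                      steps=(1.0 / (k + 1) for k in range(500)))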

  16. Re-designing a mechanism for higher speed: A case history from textile machinery

    NASA Astrophysics Data System (ADS)

    Douglas, S. S.; Rooney, G. T.

An issue in the generation of general mechanism design software, the formulation of suitable objective functions, is discussed. There is a consistent drive towards higher speeds in the development of industrial sewing machines. This led to experimental analyses of dynamic performance and to a search for improved design methods. The experimental work highlighted the importance of smoothness of motion at high speed, component inertias, and frame structural stiffness. Smoothness is associated with transmission properties and harmonic analysis. These are added to other design requirements of synchronization, mechanism size, and function. Some of the mechanism trains in overedge sewing machines are shown. All these trains are designed by digital optimization. The design software combines analysis of the sewing machine mechanisms, formulation of objectives in numerical terms, and suitable mathematical optimization techniques.

  17. Mathematical circulatory system model

    NASA Technical Reports Server (NTRS)

    Lakin, William D. (Inventor); Stevens, Scott A. (Inventor)

    2010-01-01

    A system and method of modeling a circulatory system including a regulatory mechanism parameter. In one embodiment, a regulatory mechanism parameter in a lumped parameter model is represented as a logistic function. In another embodiment, the circulatory system model includes a compliant vessel, the model having a parameter representing a change in pressure due to contraction of smooth muscles of a wall of the vessel.
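
    As a minimal illustration of representing a regulatory mechanism parameter by a logistic function, per the claim above (the function name and all constants are illustrative assumptions, not taken from the patent):

      import numpy as np

      def logistic_regulation(p, p_mid=90.0, slope=0.1, lo=0.2, hi=1.0):
          # Regulatory parameter rises smoothly from lo to hi around p_mid
          return lo + (hi - lo) / (1.0 + np.exp(-slope * (p - p_mid)))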

  18. Constraining smoothness parameter and the DD relation of Dyer-Roeder equation with supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Yu, Hao-Ran; Zhang, Tong-Jie, E-mail: yangwds@mail.bnu.edu.cn, E-mail: yu@bnu.edu.cn, E-mail: tjzhang@bnu.edu.cn

    2013-06-01

Our real universe is locally inhomogeneous. Dyer and Roeder introduced the smoothness parameter α to describe the influence of local inhomogeneity on the angular diameter distance, and they obtained an approximate angular diameter distance-redshift relation (the Dyer-Roeder equation) for a locally inhomogeneous universe. Furthermore, the distance-duality (DD) relation, D_L(z)(1+z)^{-2}/D_A(z) = 1, should be valid for all cosmological models that are described by Riemannian geometry, where D_L and D_A are, respectively, the luminosity and angular diameter distances. Therefore, it is necessary to test whether the approximate Dyer-Roeder equation satisfies the distance-duality relation. In this paper, we use Union2.1 SNe Ia data to constrain the smoothness parameter α and to test whether the Dyer-Roeder equation meets the DD relation. Using χ² minimization, we get α = 0.92 (+0.08/−0.32) at 1σ and α = 0.92 (+0.08/−0.65) at 2σ, and our results show that the Dyer-Roeder equation is in good consistency with the DD relation at 1σ.

  19. The computation of Laplacian smoothing splines with examples

    NASA Technical Reports Server (NTRS)

    Wendelberger, J. G.

    1982-01-01

    Laplacian smoothing splines (LSS) are presented as generalizations of graduation, cubic and thin plate splines. The method of generalized cross validation (GCV) to choose the smoothing parameter is described. The GCV is used in the algorithm for the computation of LSS's. An outline of a computer program which implements this algorithm is presented along with a description of the use of the program. Examples in one, two and three dimensions demonstrate how to obtain estimates of function values with confidence intervals and estimates of first and second derivatives. Probability plots are used as a diagnostic tool to check for model inadequacy.
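
    The GCV criterion can be illustrated with a discrete smoothing spline whose influence ("hat") matrix is available in closed form; the second-difference ridge smoother below is a simple one-dimensional stand-in for the Laplacian smoothing splines of the paper, with GCV(λ) = n · ||(I − A(λ))y||² / [tr(I − A(λ))]².

      import numpy as np

      def gcv_score(y, lam):
          n = len(y)
          D = np.diff(np.eye(n), n=2, axis=0)             # second-difference matrix
          A = np.linalg.inv(np.eye(n) + lam * D.T @ D)    # influence ("hat") matrix
          resid = y - A @ y
          return n * np.sum(resid**2) / np.trace(np.eye(n) - A) ** 2

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 200)
      y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)
      lams = np.logspace(-3, 3, 25)
      lam_gcv = lams[np.argmin([gcv_score(y, lam) for lam in lams])]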

  20. Research on polarization imaging information parsing method

    NASA Astrophysics Data System (ADS)

    Yuan, Hongwu; Zhou, Pucheng; Wang, Xiaolong

    2016-11-01

Polarization information parsing plays an important role in polarization imaging detection. This paper focuses on polarization information parsing methods. First, the general process of polarization information parsing is given, mainly including polarization image preprocessing, calculation of multiple polarization parameters, polarization image fusion, and polarization image tracking. Then the research achievements of these polarization information parsing methods are presented. In terms of polarization image preprocessing, a polarization image registration method based on maximum mutual information is designed; experiments show that this method improves registration precision and satisfies the needs of polarization information parsing. In terms of calculating multiple polarization parameters, an omnidirectional polarization inversion model is built, from which a variety of polarization parameter images are obtained with obviously improved inversion precision. In terms of polarization image fusion, an adaptive optimal fusion method for multiple polarization parameters using fuzzy integrals and sparse representation is given, and target detection in complex scenes is completed using a clustering image segmentation algorithm based on fractal characteristics. In polarization image tracking, a fusion tracking algorithm combining mean-shift (average displacement) polarization image features with auxiliary particle filtering is put forward to achieve smooth tracking of moving targets. Finally, the polarization information parsing methods are applied to the polarization imaging detection of typical targets such as camouflaged targets, fog, and latent fingerprints.

  1. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy

    NASA Astrophysics Data System (ADS)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng

    2014-06-01

    The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. The classical filtering algorithms rely on the cautious tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to surmount the sensitivity of the parameters, in this study, we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance to the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure to self-adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and the ASF gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Using the progressive densification strategy, regularization and self-adaption, both performance improvement and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by ISPRS, the ASF performs the best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.

  2. Influence of non-smooth surface on tribological properties of glass fiber-epoxy resin composite sliding against stainless steel under natural seawater lubrication

    NASA Astrophysics Data System (ADS)

    Wu, Shaofeng; Gao, Dianrong; Liang, Yingna; Chen, Bo

    2015-11-01

With the development of bionics, bionic non-smooth surfaces have been introduced to the field of tribology. Although non-smooth surfaces have been studied widely, studies of non-smooth surfaces under natural seawater lubrication are still very few, especially experimental ones. The influences of smooth and non-smooth surfaces on the frictional properties of a glass fiber-epoxy resin composite (GF/EPR) coupled with stainless steel 316L are investigated under natural seawater lubrication in this paper. The tested non-smooth surfaces include surfaces with semi-spherical pits, conical pits, cone-cylinder combined pits, cylindrical pits and through holes. The friction and wear tests are performed using a ring-on-disc test rig under a 60 N load and a 1000 r/min rotational speed. The test results show that GF/EPR with a bionic non-smooth surface has a considerably lower friction coefficient and better wear resistance than GF/EPR with a smooth surface without pits. The average friction coefficient of GF/EPR with semi-spherical pits is 0.088, the largest reduction, approximately 63.18% relative to GF/EPR with a smooth surface. In addition, the wear debris on the worn surfaces of GF/EPR was observed by a confocal scanning laser microscope. It is shown that the primary wear mechanism is abrasive wear. The research results provide some design parameters for non-smooth surfaces, and the experimental results can serve as a beneficial supplement to non-smooth surface studies.

  3. A new optimal seam method for seamless image stitching

    NASA Astrophysics Data System (ADS)

    Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng

    2017-07-01

A novel optimal seam method which aims to stitch images with an overlapping area more seamlessly is proposed. Because the traditional gradient-domain optimal seam method gives poor color-difference measurement and fusion algorithms take a long time, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, a simplified pixel correction and a weighted average method are utilized individually. The proposed method exhibits better performance in eliminating the stitching seam than the traditional gradient-domain optimal seam, and higher efficiency than the multi-band blending algorithm.
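
    A common way to realize an optimal seam over an energy map of the overlap region is dynamic programming; the sketch below finds a minimum-energy vertical seam, with a squared-difference energy as a simple stand-in for the paper's HSV-based energy function.

      import numpy as np

      def optimal_seam(energy):
          """Minimum-energy vertical seam through an energy map (dynamic programming)."""
          h, w = energy.shape
          cost = energy.astype(float)
          for r in range(1, h):                  # accumulate minimal path cost
              left = np.r_[np.inf, cost[r - 1, :-1]]
              right = np.r_[cost[r - 1, 1:], np.inf]
              cost[r] += np.minimum(np.minimum(left, cost[r - 1]), right)
          seam = np.zeros(h, dtype=int)
          seam[-1] = int(np.argmin(cost[-1]))
          for r in range(h - 2, -1, -1):         # backtrack, staying within +/-1 column
              c = seam[r + 1]
              lo, hi = max(0, c - 1), min(w, c + 2)
              seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
          return seam

      rng = np.random.default_rng(1)
      img1, img2 = rng.random((100, 40)), rng.random((100, 40))
      seam = optimal_seam((img1 - img2) ** 2)    # toy overlap energy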

  4. Improved Propulsion Modeling for Low-Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Knittel, Jeremy M.; Englander, Jacob A.; Ozimek, Martin T.; Atchison, Justin A.; Gould, Julian J.

    2017-01-01

    Low-thrust trajectory design is tightly coupled with spacecraft systems design. In particular, the propulsion and power characteristics of a low-thrust spacecraft are major drivers in the design of the optimal trajectory. Accurate modeling of the power and propulsion behavior is essential for meaningful low-thrust trajectory optimization. In this work, we discuss new techniques to improve the accuracy of propulsion modeling in low-thrust trajectory optimization while maintaining the smooth derivatives that are necessary for a gradient-based optimizer. The resulting model is significantly more realistic than the industry standard and performs well inside an optimizer. A variety of deep-space trajectory examples are presented.

  5. Quantification of microscopic surface features of single point diamond turned optics with subsequent chemical polishing

    NASA Astrophysics Data System (ADS)

    Cardenas, Nelson; Kyrish, Matthew; Taylor, Daniel; Fraelich, Margaret; Lechuga, Oscar; Claytor, Richard; Claytor, Nelson

    2015-03-01

Electro-chemical polishing is routinely used in the anodizing industry to achieve specular surface finishes on various metal products prior to anodizing. Electro-chemical polishing functions by leveling the microscopic peaks and valleys of the substrate, thereby increasing specularity and reducing light scattering. The rate of attack is dependent on the physical characteristics (height, depth, and width) of the microscopic structures that constitute the surface finish. To prepare the sample, mechanical polishing such as buffing or grinding is typically required before etching. This type of mechanical polishing produces random microscopic structures at varying depths and widths, so the electropolishing parameters are determined on an ad hoc basis. Alternatively, single point diamond turning offers excellent repeatability and highly specific control of substrate polishing parameters. While polishing, the diamond tool leaves behind an associated tool mark, which is related to the diamond tool geometry and the machining parameters. Machine parameters such as tool cutting depth, speed and step-over can be changed in situ, thus providing control of the spatial frequency of the microscopic structures characteristic of the surface topography of the substrate. By combining single point diamond turning with subsequent electro-chemical etching, ultra-smooth polishing of both rotationally symmetric and free-form mirrors and molds is possible. Additionally, machining parameters can be set to optimize post-polishing for increased surface quality and reduced processing times. In this work, we present a study of substrate surface finish based on diamond turning tool mark spatial frequency with subsequent electro-chemical polishing.

  6. Machine Learning Techniques for Global Sensitivity Analysis in Climate Models

    NASA Astrophysics Data System (ADS)

    Safta, C.; Sargsyan, K.; Ricciuto, D. M.

    2017-12-01

Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments, e.g., model calibration and prediction studies. In the first step, we compare the skill of machine-learning models (e.g., neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that partition the parameter space into regions with smooth input-output dependencies. These classifiers are coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we explore and compare sparse learning techniques with low-rank tensor decompositions. These models are used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we ascertain the performance of this workflow for multi-site ALM simulation ensembles.

  7. Optimal control of anthracnose using mixed strategies.

    PubMed

    Fotsa Mbogne, David Jaures; Thron, Christopher

    2015-11-01

    In this paper we propose and study a spatial diffusion model for the control of anthracnose disease in a bounded domain. The model is a generalization of the one previously developed in [15]. We use the model to simulate two different types of control strategies against anthracnose disease. Strategies that employ chemical fungicides are modeled using a continuous control function; while strategies that rely on cultivational practices (such as pruning and removal of mummified fruits) are modeled with a control function which is discrete in time (though not in space). For comparative purposes, we perform our analyses for a spatially-averaged model as well as the space-dependent diffusion model. Under weak smoothness conditions on parameters we demonstrate the well-posedness of both models by verifying existence and uniqueness of the solution for the growth inhibition rate for given initial conditions. We also show that the set [0, 1] is positively invariant. We first study control by impulsive strategies, then analyze the simultaneous use of mixed continuous and pulse strategies. In each case we specify a cost functional to be minimized, and we demonstrate the existence of optimal control strategies. In the case of pulse-only strategies, we provide explicit algorithms for finding the optimal control strategies for both the spatially-averaged model and the space-dependent model. We verify the algorithms for both models via simulation, and discuss properties of the optimal solutions. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. A Globally Optimal Particle Tracking Technique for Stereo Imaging Velocimetry Experiments

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2008-01-01

An important phase of any Stereo Imaging Velocimetry experiment is particle tracking. Particle tracking seeks to identify and characterize the motion of individual particles entrained in a fluid or air experiment. We analyze a cylindrical chamber filled with water and seeded with density-matched particles. In every four-frame sequence, we identify a particle track by assigning a unique track label for each camera image. The conventional approach to particle tracking is to use an exhaustive tree-search method utilizing greedy algorithms to reduce search times. However, these types of algorithms are not optimal due to a cascade effect of incorrect decisions upon adjacent tracks. We examine the use of a guided evolutionary neural net with simulated annealing to arrive at a globally optimal assignment of tracks. The net is guided both by the minimization of the search space through the use of prior limiting assumptions about valid tracks and by a strategy which seeks to avoid high-energy intermediate states which can trap the net in a local minimum. A stochastic search algorithm is used in place of back-propagation of error to further reduce the chance of being trapped in an energy well. Global optimization is achieved by minimizing an objective function, which includes both track smoothness and particle-image utilization parameters. In this paper we describe our model and present our experimental results. We compare our results with a non-optimizing, predictive tracker and obtain an average increase in valid track yield of 27 percent.

  9. Comparison of IMRT planning with two-step and one-step optimization: a strategy for improving therapeutic gain and reducing the integral dose

    NASA Astrophysics Data System (ADS)

    Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.

    2009-12-01

The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding windows (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup for five head-and-neck tumor patients and dose-volume constraints were applied for all optimization methods. Two-step plans were produced by converting the ideal fluence, with or without a smoothing filter, into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, 12, producing a directly deliverable sequence. Moreover, the plans were generated whether a split-beam was used or not. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP), which are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%), NTID (4%) as well as NTCP values. Differences of about 15-20% in the treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over the two-step IMRT planning.

  10. Optimization of Supersonic Transport Trajectories

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.; Windhorst, Robert; Phillips, James

    1998-01-01

This paper develops a near-optimal guidance law for generating minimum fuel, time, or cost fixed-range trajectories for supersonic transport aircraft. The approach uses a choice of new state variables along with singular perturbation techniques to time-scale decouple the dynamic equations into multiple equations of single order (second order for the fast dynamics). Application of the maximum principle to each of the decoupled equations, as opposed to application to the original coupled equations, avoids the two-point boundary value problem and transforms the problem from one of functional optimization to one of multiple function optimizations. It is shown that such an approach produces well-known aircraft performance results such as minimizing the Breguet factor for minimum fuel consumption and the energy climb path. Furthermore, the new state variables produce a consistent calculation of flight path angle along the trajectory, eliminating one of the deficiencies in the traditional energy state approximation. In addition, jumps in the energy climb path are smoothed out by integration of the original dynamic equations at constant load factor. Numerical results performed for a supersonic transport design show that a pushover dive followed by a pullout at nominal load factors are sufficient maneuvers to smooth the jump.

  11. Optimized statistical parametric mapping for partial-volume-corrected amyloid positron emission tomography in patients with Alzheimer's disease and Lewy body dementia

    NASA Astrophysics Data System (ADS)

    Oh, Jungsu S.; Kim, Jae Seung; Chae, Sun Young; Oh, Minyoung; Oh, Seung Jun; Cha, Seung Nam; Chang, Ho-Jong; Lee, Chong Sik; Lee, Jae Hong

    2017-03-01

We present an optimized voxelwise statistical parametric mapping (SPM) of partial-volume (PV)-corrected positron emission tomography (PET) of 11C Pittsburgh Compound B (PiB), incorporating the anatomical precision of magnetic resonance imaging (MRI) and the amyloid β (Aβ) burden-specificity of PiB PET. First, we applied region-based partial-volume correction (PVC), termed the geometric transfer matrix (GTM) method, to PiB PET, creating MRI-based lobar parcels filled with mean PiB uptakes. Then, we conducted a voxelwise PVC by multiplying the original PET by the ratio of a GTM-based PV-corrected PET to a 6-mm-smoothed PV-corrected PET. Finally, we conducted spatial normalizations of the PV-corrected PETs onto the study-specific template. As such, we increased the accuracy of the SPM normalization and the tissue specificity of the SPM results. Moreover, lobar smoothing (instead of whole-brain smoothing) was applied to increase the signal-to-noise ratio in the image without degrading the tissue specificity. Thereby, we could optimize a voxelwise group comparison between subjects with high and normal Aβ burdens (from 10 patients with Alzheimer's disease, 30 patients with Lewy body dementia, and 9 normal controls). Our SPM framework outperformed the conventional one in terms of the accuracy of the spatial normalization (85% of maximum likelihood tissue classification volume) and the tissue specificity (larger gray matter and smaller cerebrospinal fluid volume fractions in the SPM results). Our SPM framework optimized the SPM of a PV-corrected Aβ PET in terms of anatomical precision, normalization accuracy, and tissue specificity, resulting in better detection and localization of Aβ burdens in patients with Alzheimer's disease and Lewy body dementia.
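
    The voxelwise correction step described above - multiplying the original PET by the ratio of the GTM-based PV-corrected image to its 6-mm-smoothed version - can be sketched as follows; the voxel size, the FWHM-to-sigma conversion and the random stand-in volumes are assumptions for illustration.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      voxel_mm = 2.0
      sigma_vox = (6.0 / voxel_mm) / 2.355       # 6-mm FWHM -> sigma in voxels
      rng = np.random.default_rng(0)
      pet = rng.random((64, 64, 64))             # stand-in: original PiB PET
      gtm_pvc = rng.random((64, 64, 64))         # stand-in: GTM-based PV-corrected PET
      ratio = gtm_pvc / np.maximum(gaussian_filter(gtm_pvc, sigma_vox), 1e-6)
      pet_pvc = pet * ratio                      # voxelwise PV-corrected PET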

  12. Ayurvedic preparation of Zingiber officinale Roscoe: effects on cardiac and on smooth muscle parameters.

    PubMed

    Leoni, Alberto; Budriesi, Roberta; Poli, Ferruccio; Lianza, Mariacaterina; Graziadio, Alessandra; Venturini, Alice; Broccoli, Massimiliano; Micucci, Matteo

    2017-08-28

The rhizome of Zingiber officinale Roscoe, a biennial herb growing in South Asia, is commonly known as ginger. Ginger is used in clinical disorders such as constipation, dyspepsia, diarrhoea, nausea and vomiting, and its use is also recommended by traditional medicine for cardiopathy, high blood pressure, palpitations, and as a vasodilator to improve the circulation. The decoction of ginger rhizome is widely used in Ayurvedic medicine. In this paper, by high-performance liquid chromatography, we found that its main phytomarkers were 6-gingerol, 8-gingerol and 6-shogaol, and we report the effects of the decoction of ginger rhizome on cardiovascular parameters and on vascular and intestinal smooth muscle. In our experimental models, the decoction of ginger shows weak negative inotropic and chronotropic intrinsic activities but a significant intrinsic activity on smooth muscle, with a greater potency on ileum than on aorta: EC50 = 0.66 mg/mL versus EC50 = 1.45 mg/mL.

  13. Femtosecond laser structuring of titanium implants

    NASA Astrophysics Data System (ADS)

    Vorobyev, A. Y.; Guo, Chunlei

    2007-06-01

    In this study we perform the first femtosecond laser surface treatment of titanium in order to determine the potential of this technology for surface structuring of titanium implants. We find that the femtosecond laser produces a large variety of nanostructures (nanopores, nanoprotrusions) with a size down to 20 nm, multiple parallel grooved surface patterns with a period on the sub-micron level, microroughness in the range of 1-15 μm with various configurations, smooth surface with smooth micro-inhomogeneities, and smooth surface with sphere-like nanostructures down to 10 nm. Also, we have determined the optimal conditions for producing these surface structural modifications. Femtosecond laser treatment can produce a richer variety of surface structures on titanium for implants and other biomedical applications than long-pulse laser treatments.

  14. Bayesian estimation of dynamic matching function for U-V analysis in Japan

    NASA Astrophysics Data System (ADS)

    Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro

    2012-05-01

    In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancy are regarded as time varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time varying parameters as random variables and introduce smoothness priors. The model is then described in a state space representation, enabling the parameter estimation to be carried out using Kalman filter and fixed interval smoothing. In such a representation, dynamic features of the cyclic unemployment rate and the structural-frictional unemployment rate can be accurately captured.
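
    For a single time-varying parameter under a random-walk smoothness prior, the filter/smoother pair reduces to a few lines. The sketch below implements the scalar Kalman filter and fixed-interval (Rauch-Tung-Striebel) smoother for a local-level model; it is a simplified stand-in for the full matching-function state space, with illustrative variances.

      import numpy as np

      def kalman_rts(y, q=0.01, r=1.0):
          """Local-level model: beta_t = beta_{t-1} + w_t,  y_t = beta_t + v_t."""
          n = len(y)
          m = np.zeros(n); P = np.zeros(n)
          m_prev, P_prev = 0.0, 10.0             # vague initialization
          for t in range(n):                     # forward Kalman filter
              m_pred, P_pred = m_prev, P_prev + q
              K = P_pred / (P_pred + r)          # Kalman gain
              m[t] = m_pred + K * (y[t] - m_pred)
              P[t] = (1.0 - K) * P_pred
              m_prev, P_prev = m[t], P[t]
          ms = m.copy()
          for t in range(n - 2, -1, -1):         # backward fixed-interval smoothing
              C = P[t] / (P[t] + q)              # smoother gain
              ms[t] = m[t] + C * (ms[t + 1] - m[t])
          return ms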

  15. Further Evidence of Complex Motor Dysfunction in Drug Naive Children with Autism Using Automatic Motion Analysis of Gait

    ERIC Educational Resources Information Center

    Nobile, Maria; Perego, Paolo; Piccinini, Luigi; Mani, Elisa; Rossi, Agnese; Bellina, Monica; Molteni, Massimo

    2011-01-01

    In order to increase the knowledge of locomotor disturbances in children with autism, and of the mechanism underlying them, the objective of this exploratory study was to reliably and quantitatively evaluate linear gait parameters (spatio-temporal and kinematic parameters), upper body kinematic parameters, walk orientation and smoothness using an…

  16. Review of smoothing methods for enhancement of noisy data from heavy-duty LHD mining machines

    NASA Astrophysics Data System (ADS)

    Wodecki, Jacek; Michalak, Anna; Stefaniak, Paweł

    2018-01-01

Appropriate analysis of data measured on heavy-duty mining machines is essential for process monitoring, management and optimization. Some particular classes of machines, for example LHD (load-haul-dump) machines, hauling trucks, drilling/bolting machines etc., are characterized by cyclic operation. In those cases, identification of cycles and their segments - in other words, data segmentation - is key to evaluating their performance, which may be very useful from the management point of view, for example by enabling optimization of the process. However, in many cases such raw signals are contaminated with various artifacts and are in general expected to be very noisy, which makes the segmentation task very difficult or even impossible. To deal with that problem, there is a need for efficient smoothing methods that retain the informative trends in the signals while discarding noise and other undesired non-deterministic components. In this paper the authors present a review of various approaches to diagnostic data smoothing. The described methods can be used in a fast and efficient way, effectively cleaning the signals while preserving the informative deterministic behaviour that is crucial for precise segmentation and other approaches to industrial data analysis.

  17. Learning the dynamics of objects by optimal functional interpolation.

    PubMed

    Ahn, Jong-Hoon; Kim, In Young

    2012-09-01

    Many areas of science and engineering rely on functional data and their numerical analysis. The need to analyze time-varying functional data raises the general problem of interpolation, that is, how to learn a smooth time evolution from a finite number of observations. Here, we introduce optimal functional interpolation (OFI), a numerical algorithm that interpolates functional data over time. Unlike the usual interpolation or learning algorithms, the OFI algorithm obeys the continuity equation, which describes the transport of some types of conserved quantities, and its implementation shows smooth, continuous flows of quantities. Without the need to take into account equations of motion such as the Navier-Stokes equation or the diffusion equation, OFI is capable of learning the dynamics of objects such as those represented by mass, image intensity, particle concentration, heat, spectral density, and probability density.

  18. Optimization of orthotropic distributed-mode loudspeaker using attached masses and multi-exciters.

    PubMed

    Lu, Guochao; Shen, Yong; Liu, Ziyun

    2012-02-01

Based on an orthotropic model of the plate, a method to optimize the sound response of the distributed-mode loudspeaker (DML) using attached masses and multiple exciters has been investigated. The attached-masses method rebuilds the mode distribution of the plate, based on which the multi-exciter method smooths the sound response. The results indicate that the method can be used to optimize the sound response of the DML. © 2012 Acoustical Society of America

  19. The topographic development and areal parametric characterization of a stratified surface polished by mass finishing

    NASA Astrophysics Data System (ADS)

    Walton, Karl; Blunt, Liam; Fleming, Leigh

    2015-09-01

Mass finishing is amongst the most widely used finishing processes in modern manufacturing, in applications from deburring to edge radiusing and polishing. Processing objectives are varied, ranging from the cosmetic to the functionally critical. One such critical application is the hydraulically smooth polishing of aero engine component gas-washed surfaces. In this, and many other applications, the drive to improve process control and finish tolerance is ever present. Considering its widespread use, mass finishing has seen limited research activity, particularly with respect to surface characterization. The objectives of the current paper are to: characterise the mass finished stratified surface and its development process using areal surface parameters; provide guidance on the optimal parameters and sampling method to characterise this surface type for a given application; and detail the spatial variation in surface topography due to coupon edge shadowing. Blasted and peened square plate coupons in titanium alloy are wet (vibro) mass finished iteratively with increasing duration. Measurement fields are precisely relocated between iterations by fixturing and an image superimposition alignment technique. Surface topography development is detailed with ‘log of process duration’ plots of the ‘areal parameters for scale-limited stratified functional surfaces’ (the Sk family). Characteristic features of the Smr2 plot are seen to map out the processing of peak, core and dale regions in turn. These surface process regions also become apparent in the ‘log of process duration’ plot for Sq, where lower core and dale regions are well modelled by logarithmic functions. Surface finish (Ra or Sa) with mass finishing duration is currently predicted with an exponential model. This model is shown to be limited for the current surface type at a critical range of surface finishes. Statistical analysis provides a group of areal parameters, including Vvc, Sq, and Sdq, showing optimal discrimination for a specific range of surface finish outcomes. As a consequence of edge shadowing, surface segregation is suggested for characterization purposes.

  20. Poisson-Nernst-Planck equations with steric effects - non-convexity and multiple stationary solutions

    NASA Astrophysics Data System (ADS)

    Gavish, Nir

    2018-04-01

    We study the existence and stability of stationary solutions of Poisson-Nernst-Planck equations with steric effects (PNP-steric equations) with two counter-charged species. We show that within a range of parameters, steric effects give rise to multiple solutions of the corresponding stationary equation that are smooth. The PNP-steric equation, however, is found to be ill-posed at the parameter regime where multiple solutions arise. Following these findings, we introduce a novel PNP-Cahn-Hilliard model, show that it is well-posed and that it admits multiple stationary solutions that are smooth and stable. The various branches of stationary solutions and their stability are mapped utilizing bifurcation analysis and numerical continuation methods.

  1. Rational-Spline Subroutines

    NASA Technical Reports Server (NTRS)

    Schiess, James R.; Kerr, Patricia A.; Smith, Olivia C.

    1988-01-01

Smooth curves drawn among plotted data easily. Rational-Spline Approximation with Automatic Tension Adjustment algorithm leads to flexible, smooth representation of experimental data. "Tension" denotes mathematical analog of mechanical tension in spline or other mechanical curve-fitting tool, and "spline" denotes mathematical generalization of tool. Program differs from usual spline under tension in that it allows user to specify different values of tension between adjacent pairs of knots rather than constant tension over entire range of data. Subroutines use automatic adjustment scheme that varies tension parameter for each interval until maximum deviation of spline from line joining knots is less than or equal to amount specified by user. Procedure frees user from drudgery of adjusting individual tension parameters while still giving control over local behavior of spline.

  2. A self-adaptive-grid method with application to airfoil flow

    NASA Technical Reports Server (NTRS)

    Nakahashi, K.; Deiwert, G. S.

    1985-01-01

    A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
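
    In one dimension the spring analogy reduces to relaxing each interior node toward the stiffness-weighted average of its neighbours, with user-specified minimum and maximum spacings as constraints. The sketch below is an illustrative one-dimensional reduction, not the paper's multidimensional formulation; the error-weight function and all constants are assumptions.

      import numpy as np

      def adapt_grid(x, weight, dx_min=1e-3, dx_max=0.2, iters=200):
          x = x.astype(float)
          x0, x1 = x[0], x[-1]
          for _ in range(iters):
              # Spring stiffness per interval, evaluated at interval midpoints
              w = weight(0.5 * (x[:-1] + x[1:]))
              # Relax interior nodes to the stiffness-weighted neighbour average
              x[1:-1] = (w[:-1] * x[:-2] + w[1:] * x[2:]) / (w[:-1] + w[1:])
              # Enforce min/max spacings, then rescale back onto the domain
              dx = np.clip(np.diff(x), dx_min, dx_max)
              x = x0 + np.concatenate(([0.0], np.cumsum(dx)))
              x = x0 + (x - x0) * (x1 - x0) / (x[-1] - x0)
          return x

      # Points cluster near x = 0.5, where the error weight is large
      grid = adapt_grid(np.linspace(0.0, 1.0, 41),
                        lambda xc: 1.0 + 50.0 * np.exp(-200.0 * (xc - 0.5) ** 2))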

  3. Computational Investigations in Rectangular Convergent and Divergent Ribbed Channels

    NASA Astrophysics Data System (ADS)

    Sivakumar, Karthikeyan; Kulasekharan, N.; Natarajan, E.

    2018-05-01

Computational investigations of rib-turbulated flow inside convergent and divergent rectangular channels were carried out with square ribs of different heights and at different Reynolds numbers (Re = 20,000, 40,000 and 60,000). The ribs were arranged in a staggered fashion between the upper and lower surfaces of the test section. The investigations were performed using the computational fluid dynamics software ANSYS Fluent 14.0. Suitable solver settings, such as turbulence models and boundary conditions, were identified from the literature, and the simulations were carried out on a grid-independent solution. Computations were carried out for both convergent and divergent channels with 0 (smooth duct), 1.5, 3, 6, 9 and 12 mm rib heights to identify the ribbed channel with optimal performance, assessed using a thermo-hydraulic performance parameter. The convergent and divergent rectangular channels show higher Nu values than the standard correlation values.

  4. A TPMS-based method for modeling porous scaffolds for bionic bone tissue engineering.

    PubMed

    Shi, Jianping; Zhu, Liya; Li, Lan; Li, Zongan; Yang, Jiquan; Wang, Xingsong

    2018-05-09

    In the field of bone defect repair, gradient porous scaffolds have received increased attention because they provide a better environment for promoting tissue regeneration. In this study, we propose an effective method to generate bionic porous scaffolds based on the TPMS (triply periodic minimal surface) and SF (sigmoid function) methods. First, cortical bone morphological features (e.g., pore size and distribution) were determined for several regions of a rabbit femoral bone by analyzing CT-scans. A finite element method was used to evaluate the mechanical properties of the bone at these respective areas. These results were used to place different TPMS substructures into one scaffold domain with smooth transitions. The geometrical parameters of the scaffolds were optimized to match the elastic properties of a human bone. With this proposed method, a functional gradient porous scaffold could be designed and produced by an additive manufacturing method.
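
    A sketch of the TPMS + sigmoid construction: two gyroid substructures with different level-set offsets (hence different pore sizes) merged through a smooth sigmoid transition along one axis. The offsets, the steepness k, and the sign convention are illustrative assumptions, not the paper's parameters.

    ```python
    # Graded gyroid scaffold: sigmoid blend between two level-set offsets.
    import numpy as np

    def gyroid(x, y, z):
        return np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)

    def graded_scaffold(x, y, z, t_dense=-0.6, t_porous=0.6, z0=np.pi, k=4.0):
        s = 1.0 / (1.0 + np.exp(-k * (z - z0)))   # sigmoid weight: 0 -> 1 across z0
        t = (1 - s) * t_dense + s * t_porous      # spatially varying offset
        return gyroid(x, y, z) - t                # solid taken where field < 0, say

    g = np.linspace(0, 2 * np.pi, 64)
    X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
    solid = graded_scaffold(X, Y, Z) < 0.0        # voxel model; surface at level 0
    ```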

  5. Spline-Based Smoothing of Airfoil Curvatures

    NASA Technical Reports Server (NTRS)

    Li, W.; Krist, S.

    2008-01-01

    Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance. CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increasing curvature smoothness by eliminating curvature oscillations and bumps.
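
    The deviation-from-smoothness measure named above is directly computable: a cubic spline's third derivative is piecewise constant, so the squared jumps at the knots quantify curvature-profile roughness. A short sketch with SciPy:

    ```python
    # Sum of squared jumps in the third derivative of a cubic-spline interpolant.
    import numpy as np
    from scipy.interpolate import CubicSpline

    def third_derivative_jump_measure(x, y):
        cs = CubicSpline(x, y)
        d3 = 6.0 * cs.c[0]               # third derivative on each interval
        return float(np.sum(np.diff(d3) ** 2))
    ```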

  6. Adaptive control of a quadrotor aerial vehicle with input constraints and uncertain parameters

    NASA Astrophysics Data System (ADS)

    Tran, Trong-Toan; Ge, Shuzhi Sam; He, Wei

    2018-05-01

    In this paper, we address the problem of adaptive bounded control for the trajectory tracking of a Quadrotor Aerial Vehicle (QAV) while the input saturations and uncertain parameters with the known bounds are simultaneously taken into account. First, to deal with the underactuated property of the QAV model, we decouple and construct the QAV model as a cascaded structure which consists of two fully actuated subsystems. Second, to handle the input constraints and uncertain parameters, we use a combination of the smooth saturation function and smooth projection operator in the control design. Third, to ensure the stability of the overall system of the QAV, we develop the technique for the cascaded system in the presence of both the input constraints and uncertain parameters. Finally, the region of stability of the closed-loop system is constructed explicitly, and our design ensures the asymptotic convergence of the tracking errors to the origin. The simulation results are provided to illustrate the effectiveness of the proposed method.
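
    One common smooth saturation function (a plausible stand-in; the paper does not give its exact form) is a scaled tanh: it respects the input bound everywhere and remains differentiable, which is what a backstepping-style design over a cascaded structure requires.

    ```python
    # Smooth, everywhere-differentiable saturation (assumed form).
    import numpy as np

    def smooth_sat(u, u_max):
        """|smooth_sat(u)| < u_max for all u; slope ~1 near the origin."""
        return u_max * np.tanh(u / u_max)
    ```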

  7. Homogeneous solutions of stationary Navier-Stokes equations with isolated singularities on the unit sphere. II. Classification of axisymmetric no-swirl solutions

    NASA Astrophysics Data System (ADS)

    Li, Li; Li, YanYan; Yan, Xukai

    2018-05-01

    We classify all (-1)-homogeneous axisymmetric no-swirl solutions of the incompressible stationary Navier-Stokes equations in three dimensions which are smooth on the unit sphere minus the south and north poles, parameterizing them as a four-dimensional surface with boundary in appropriate function spaces. Then we establish smoothness properties of the solution surface in the four parameters. The smoothness properties will be used in a subsequent paper where we study the existence of (-1)-homogeneous axisymmetric solutions with non-zero swirl on S² ∖ {S, N}, emanating from the four-dimensional solution surface.

  8. Interactive Inverse Design Optimization of Fuselage Shape for Low-Boom Supersonic Concepts

    NASA Technical Reports Server (NTRS)

    Li, Wu; Shields, Elwood; Le, Daniel

    2008-01-01

    This paper introduces a tool called BOSS (Boom Optimization using Smoothest Shape modifications). BOSS utilizes interactive inverse design optimization to develop a fuselage shape that yields a low-boom aircraft configuration. A fundamental reason for developing BOSS is the need to generate feasible low-boom conceptual designs that are appropriate for further refinement using computational fluid dynamics (CFD) based preliminary design methods. BOSS was not developed to provide a numerical solution to the inverse design problem. Instead, BOSS was intended to help designers find the right configuration among an infinite number of possible configurations that are equally good using any numerical figure of merit. BOSS uses the smoothest shape modification strategy for modifying the fuselage radius distribution at 100 or more longitudinal locations to find a smooth fuselage shape that reduces the discrepancies between the design and target equivalent area distributions over any specified range of effective distance. For any given supersonic concept (with wing, fuselage, nacelles, tails, and/or canards), a designer can examine the differences between the design and target equivalent areas, decide which part of the design equivalent area curve needs to be modified, choose a desirable rate for the reduction of the discrepancies over the specified range, and select a parameter for smoothness control of the fuselage shape. BOSS will then generate a fuselage shape based on the designer's inputs in a matter of seconds. Using BOSS, within a few hours, a designer can either generate a realistic fuselage shape that yields a supersonic configuration with a low-boom ground signature or quickly eliminate any configuration that cannot achieve low-boom characteristics with fuselage shaping alone. A conceptual design case study is documented to demonstrate how BOSS can be used to develop a low-boom supersonic concept from a low-drag supersonic concept. The paper also contains a study on how perturbations in the equivalent area distribution affect the ground signature shape and how new target area distributions for low-boom signatures can be constructed using superposition of equivalent area distributions derived from the Seebass-George-Darden (SGD) theory.
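
    A hedged sketch of the smoothest-shape-modification idea: shrink a chosen fraction of the equivalent-area discrepancy d(x) while a second-difference penalty keeps the fuselage radius distribution smooth. The linearization dA ≈ 2πr·dr and all parameter names are assumptions, not the BOSS internals.

    ```python
    # Smoothness-penalized least-squares update of a radius distribution.
    import numpy as np

    def smoothest_radius_update(x, r, d, rate=0.5, smooth_weight=1e2):
        n = len(x)
        A = np.diag(2.0 * np.pi * r)             # radius change -> area change
        D2 = np.diff(np.eye(n), n=2, axis=0)     # second-difference operator
        lhs = A.T @ A + smooth_weight * (D2.T @ D2)
        rhs = A.T @ (rate * d)                   # match a fraction of the discrepancy
        return r + np.linalg.solve(lhs, rhs)
    ```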

  9. The influence of swarm deformation on the velocity behavior of falling swarms of particles

    NASA Astrophysics Data System (ADS)

    Mitchell, C. A.; Pyrak-Nolte, L. J.; Nitsche, L.

    2017-12-01

    Cohesive particle swarms have been shown to exhibit enhanced sedimentation in fractures for an optimal range of fracture apertures. Within this range, swarms travel farther and faster than a disperse (particulate) solution. This study aims to uncover the physics underlying the enhanced sedimentation. Swarm behavior at low Reynolds number in a quiescent unbounded fluid and between smooth rigid planar boundaries is investigated numerically using direct-summation, particle-mesh (PM) and particle-particle particle-mesh (P3M) methods, based upon mutually interacting viscous point forces (Stokeslet fields). Wall effects are treated with a least-squares boundary singularity method. Sub-structural effects beyond pseudo-liquid behavior (i.e., particle-scale interactions) are approximated by the P3M method much more efficiently than with direct summation. The model parameters are selected from particle swarm experiments to enable comparison. From the simulations, if the initial swarm geometry at release is unaffected by the fracture aperture, no enhanced transport occurs. The swarm velocity as a function of aperture increases monotonically until it asymptotes to the swarm velocity in an open tank. However, if the fracture aperture affects the initial swarm geometry, the swarm velocity no longer exhibits monotonic behavior. When swarms are released between two parallel smooth walls with very small apertures, the swarm is forced to reorganize and quickly deform, which results in dramatically reduced swarm velocities. At large apertures, the swarm evolution is similar to that of a swarm in an open tank, and the swarm quickly flattens into a slow-speed torus. In the optimal aperture range, the swarm maintains a cohesive unit behaving similarly to a falling sphere. Swarms falling in apertures less than or greater than the optimal range experience a level of anisotropy that considerably decreases their velocities. Unraveling the physics that drives swarm behavior in fractured porous media is important for understanding particle sedimentation and contaminant spreading in the subsurface. Acknowledgment: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Geosciences Research Program under Award Number (DE-FG02-09ER16022).
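
    The direct-summation building block named above, sketched for mutually interacting viscous point forces (Oseen tensors). It costs O(n²) per step, which is exactly what the PM/P3M machinery accelerates; the regularizing particle radius a is an illustrative choice.

    ```python
    # Direct summation of Stokeslet (Oseen tensor) interactions.
    import numpy as np

    def swarm_velocities(pos, f, mu=1.0, a=0.05):
        """pos: (n, 3) particle positions; f: (3,) force per particle."""
        n = pos.shape[0]
        vel = np.tile(f / (6 * np.pi * mu * a), (n, 1))   # isolated-sphere settling
        for i in range(n):
            r = pos[i] - pos
            d = np.linalg.norm(r, axis=1)
            m = d > 0                                     # exclude the self-term
            contrib = f / d[m, None] + r[m] * (r[m] @ f)[:, None] / d[m, None] ** 3
            vel[i] += contrib.sum(axis=0) / (8 * np.pi * mu)
        return vel
    ```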

  10. An efficient flexible-order model for 3D nonlinear water waves

    NASA Astrophysics Data System (ADS)

    Engsig-Karup, A. P.; Bingham, H. B.; Lindberg, O.

    2009-04-01

    The flexible-order, finite difference based fully nonlinear potential flow model described in [H.B. Bingham, H. Zhang, On the accuracy of finite difference solutions for nonlinear water waves, J. Eng. Math. 58 (2007) 211-228] is extended to three dimensions (3D). In order to obtain an optimal scaling of the solution effort, multigrid is employed to precondition a GMRES iterative solution of the discretized Laplace problem. A robust multigrid method based on Gauss-Seidel smoothing is found to require special treatment of the boundary conditions along solid boundaries, and in particular on the sea bottom. A new discretization scheme using one layer of grid points outside the fluid domain is presented and shown to provide convergent solutions over the full physical and discrete parameter space of interest. Linear analysis of the fundamental properties of the scheme with respect to accuracy, robustness and energy conservation is presented, together with demonstrations of grid-independent iteration counts and optimal scaling of the solution effort. Calculations are made for 3D nonlinear wave problems involving steep nonlinear waves and a shoaling problem, which show good agreement with experimental measurements and other calculations from the literature.
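
    A minimal two-grid sketch of the ingredients named above (Gauss-Seidel smoothing plus coarse-grid correction), for 1D Poisson -u'' = b with zero Dirichlet ends. Grid sizes of the form 2^k + 1 are assumed; the paper's flexible-order 3D scheme and boundary treatment are far richer.

    ```python
    # Two-grid cycle with Gauss-Seidel smoothing for 1D Poisson (illustrative).
    import numpy as np

    def gauss_seidel(u, b, h, sweeps):
        for _ in range(sweeps):
            for i in range(1, len(u) - 1):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * b[i])
        return u

    def two_grid(u, b, h):
        u = gauss_seidel(u, b, h, sweeps=3)                          # pre-smooth
        r = np.zeros_like(u)
        r[1:-1] = b[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2    # residual
        ec = gauss_seidel(np.zeros(len(u[::2])), r[::2].copy(), 2 * h, sweeps=50)
        u += np.interp(np.arange(len(u)), np.arange(len(u))[::2], ec)  # prolong
        return gauss_seidel(u, b, h, sweeps=3)                       # post-smooth

    n = 129
    u, b, h = np.zeros(n), np.ones(n), 1.0 / (n - 1)
    for _ in range(10):
        u = two_grid(u, b, h)   # in practice: a V-cycle preconditioner for GMRES
    ```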

  11. Gaussian Decomposition of Laser Altimeter Waveforms

    NASA Technical Reports Server (NTRS)

    Hofton, Michelle A.; Minster, J. Bernard; Blair, J. Bryan

    1999-01-01

    We develop a method to decompose a laser altimeter return waveform into its Gaussian components assuming that the position of each Gaussian within the waveform can be used to calculate the mean elevation of a specific reflecting surface within the laser footprint. We estimate the number of Gaussian components from the number of inflection points of a smoothed copy of the laser waveform, and obtain initial estimates of the Gaussian half-widths and positions from the positions of its consecutive inflection points. Initial amplitude estimates are obtained using a non-negative least-squares method. To reduce the likelihood of fitting the background noise within the waveform and to minimize the number of Gaussians needed in the approximation, we rank the "importance" of each Gaussian in the decomposition using its initial half-width and amplitude estimates. The initial parameter estimates of all Gaussians ranked "important" are optimized using the Levenberg-Marquardt method. If the sum of the Gaussians does not approximate the return waveform to a prescribed accuracy, then additional Gaussians are included in the optimization procedure. The Gaussian decomposition method is demonstrated on data collected by the airborne Laser Vegetation Imaging Sensor (LVIS) in October 1997 over the Sequoia National Forest, California.
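
    The recipe above maps closely onto a short script: smooth the waveform, locate inflection points, seed one Gaussian per pair of consecutive inflections, then refine all parameters by nonlinear least squares (SciPy's least_squares standing in for the Levenberg-Marquardt step). Ranking and incremental re-fitting are omitted in this sketch.

    ```python
    # Waveform decomposition into Gaussians seeded from inflection points.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.optimize import least_squares

    def decompose(t, w, smooth_sigma=3.0):
        ws = gaussian_filter1d(w, smooth_sigma)
        sign_change = np.diff(np.sign(np.diff(ws, 2))) != 0
        infl = np.where(sign_change)[0] + 1              # inflection-point indices
        p0 = []
        for i0, i1 in zip(infl[:-1:2], infl[1::2]):      # amp, centre, half-width
            p0 += [ws[(i0 + i1) // 2], t[(i0 + i1) // 2],
                   0.5 * (t[i1] - t[i0]) + 1e-9]

        def model(p):
            g = np.zeros_like(t, dtype=float)
            for a, mu, s in zip(p[0::3], p[1::3], p[2::3]):
                g += a * np.exp(-0.5 * ((t - mu) / s) ** 2)
            return g

        fit = least_squares(lambda p: model(p) - w, p0)  # refinement step
        return fit.x.reshape(-1, 3)                      # rows: (amp, centre, width)
    ```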

  12. Effect of time-of-flight and point spread function modeling on detectability of myocardial defects in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefferkoetter, Joshua, E-mail: dnrjds@nus.edu.sg; Ouyang, Jinsong; Rakvongthai, Yothin

    2014-06-15

    Purpose: A study was designed to investigate the impact of time-of-flight (TOF) and point spread function (PSF) modeling on the detectability of myocardial defects. Methods: Clinical FDG-PET data were used to generate populations of defect-present and defect-absent images. Defects were incorporated at three contrast levels, and images were reconstructed by ordered subset expectation maximization (OSEM) iterative methods including ordinary Poisson, alone and with PSF, TOF, and PSF+TOF. Channelized Hotelling observer signal-to-noise ratio (SNR) was the surrogate for human observer performance. Results: For three iterations, 12 subsets, and no postreconstruction smoothing, TOF improved overall defect detection SNR by 8.6% as compared to its non-TOF counterpart for all defect contrasts. Due to the slow convergence of PSF reconstruction, PSF yielded 4.4% less SNR than non-PSF. For reconstruction parameters (iteration number and postreconstruction smoothing kernel size) optimizing observer SNR, PSF showed larger improvement for faint defects. The combination of TOF and PSF improved mean detection SNR as compared to the non-TOF and non-PSF counterparts by 3.0% and 3.2%, respectively. Conclusions: For a typical reconstruction protocol used in clinical practice, i.e., fewer than five iterations, TOF improved defect detectability. In contrast, PSF generally yielded less detectability. For a large number of iterations, TOF+PSF yields the best observer performance.

  13. Adaptation of a cubic smoothing spline algorithm for multi-channel data stitching at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C; Adcock, A; Azevedo, S

    2010-12-28

    Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
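
    SciPy (>= 1.10) exposes a cubic smoothing spline whose smoothing level is chosen by GCV, which illustrates the single-channel core of the method; the NIF algorithm extends the same idea to multiple channels with redundant, non-uniform samples and per-channel noise weights.

    ```python
    # GCV-selected cubic smoothing spline on synthetic single-channel data.
    import numpy as np
    from scipy.interpolate import make_smoothing_spline

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 400)
    y = np.sin(t) + rng.normal(0.0, 0.2, t.size)

    spl = make_smoothing_spline(t, y)     # lam=None -> GCV selects the smoothing
    residuals = y - spl(t)                # inputs to the WSSR and zero-mean tests
    ```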

  14. Image interpolation via regularized local linear regression.

    PubMed

    Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang

    2011-12-01

    The linear regression model is a very attractive tool to design effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, it is shown that interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, we replace the OLS error norm with the moving least squares (MLS) error norm, which leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l(2)-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness preserving constraint. The optimal model parameters can be obtained with a closed-form solution by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with the state-of-the-art interpolation algorithms, especially in image edge structure preservation. © 2011 IEEE
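
    The closed-form solution mentioned above, in reduced form: distance-based MLS weights plus an l(2) ridge penalty give a weighted ridge regression solvable in one line of linear algebra. The manifold-regularization term of the full RLLR objective is omitted in this sketch.

    ```python
    # Weighted ridge regression: the MLS + l2 core of RLLR (simplified).
    import numpy as np

    def local_ridge(X, y, x0, h=1.0, lam=0.1):
        """Fit y ~ X @ beta around the query point x0; return beta."""
        w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2.0 * h * h))   # MLS weights
        A = (X * w[:, None]).T @ X + lam * np.eye(X.shape[1])
        return np.linalg.solve(A, (X * w[:, None]).T @ y)
    ```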

  15. Study on the synthesis and physicochemical properties of starch acetate with low substitution under microwave assistance.

    PubMed

    Lin, Derong; Zhou, Wei; Zhao, Jingjing; Lan, Weijie; Chen, Rongming; Li, Yutong; Xing, Baoshan; Li, Zhuohao; Xiao, Mengshi; Wu, Zhijun; Li, Xindan; Chen, Rongna; Zhang, Xingwen; Chen, Hong; Zhang, Qing; Qin, Wen; Li, Suqing

    2017-10-01

    In this study, the synthesis and physicochemical properties of starch acetate with low substitution under microwave assistance were studied. A three-level, three-factor Central Composite Design using Response Surface Methodology (RSM) was employed to optimize the reaction conditions. The optimal parameters are as follows: amount of acetic anhydride of 12%, radiation time of 11 min, and microwave power of 100 W. Under these optimal conditions predicted by RSM, the degree of substitution (DS) of the acetate starch was confirmed to be 0.0691 mg/g, and the physical and chemical properties of natural corn starch (NCS) and corn starch acetate (ACS) were further studied. The transparency, water separation, water absorption, expansion force, and solubility of the low-substitution ACS are better than those of NCS, while the hydrolysis percentage of NCS is higher than that of ACS, indicating that the modified corn starch performs better than native corn starch. The surface morphology of the corn starch acetate was examined by scanning electron microscopy (SEM), which showed a smooth surface and spherical and polygonal shapes, although the shapes of some samples were irregular. The crystal structure was observed by X-ray diffraction, which showed the extent to which the microwave treatment disrupted the crystalline and amorphous regions of the ACS. Fourier transform infrared (FTIR) spectroscopy shows a carbonyl signal around 1750 cm-1, confirming successful acetylation. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results using effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary remains a scientific question deserving of study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from 6 years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for predicting summer precipitation in the Greater Beijing Area because they are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from only 127 parameter samples, which shows that the ASMO method is highly efficient for optimizing WRF model parameters.
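
    A generic adaptive surrogate-model-based optimization loop in the spirit of ASMO (illustrative, not the authors' code): an RBF surrogate of the expensive objective, which in WRF's case is a full model run scored against observations and is stood in for here by any callable, is refit and re-optimized each round, so only one real evaluation per round is spent.

    ```python
    # Adaptive surrogate optimization sketch with an RBF surrogate.
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import differential_evolution

    def asmo_like(expensive, bounds, n_init=20, n_rounds=30, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        X = rng.uniform(lo, hi, size=(n_init, len(bounds)))
        f = np.array([expensive(p) for p in X])
        for _ in range(n_rounds):
            surrogate = RBFInterpolator(X, f)          # cheap stand-in model
            res = differential_evolution(lambda p: surrogate(p[None])[0], bounds,
                                         seed=int(rng.integers(1_000_000)))
            X = np.vstack([X, res.x])                  # one real run per round
            f = np.append(f, expensive(res.x))
        return X[np.argmin(f)], float(f.min())
    ```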

  17. Amplitude, Latency, and Peak Velocity in Accommodation and Disaccommodation Dynamics

    PubMed Central

    Papadatou, Eleni; Ferrer-Blasco, Teresa; Montés-Micó, Robert

    2017-01-01

    The aim of this work was to ascertain whether there are differences in the amplitude, latency, and peak velocity of accommodation and disaccommodation responses when different analysis strategies are used to compute them, such as fitting different functions to the responses or smoothing the data prior to computing the parameters. Accommodation and disaccommodation responses from four subjects to pulse changes in demand were recorded by means of aberrometry. Three different strategies were followed to analyze these responses: fitting an exponential function to the experimental data; fitting a Boltzmann sigmoid function to the data; and smoothing the data. Amplitude, latency, and peak velocity of the responses were extracted. Significant differences were found between the peak velocity in accommodation computed by fitting an exponential function and by smoothing the experimental data (mean difference 2.36 D/s). Regarding disaccommodation, significant differences were found in latency and peak velocity calculated with the same two strategies (mean differences of 0.15 s and −3.56 D/s, respectively). The strategy used to analyze accommodation and disaccommodation responses therefore affects the parameters that describe their dynamics. These results highlight the importance of choosing the most adequate analysis strategy for each individual to obtain the parameters that characterize accommodation and disaccommodation dynamics. PMID:29226128
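
    A sketch of two of the compared strategies (the exact function forms and parameter names are illustrative assumptions): fit an exponential or a Boltzmann sigmoid to a step response, then read amplitude and peak velocity off the fitted curve. Latency comes from the fitted onset parameter, which is exactly where the strategies diverge.

    ```python
    # Fit-and-extract sketch for accommodation step responses.
    import numpy as np
    from scipy.optimize import curve_fit

    def exponential(t, a, tau, t0, base):
        dt = np.clip(t - t0, 0.0, None)
        return base + a * (1.0 - np.exp(-dt / tau))

    def boltzmann(t, a, t50, k, base):
        return base + a / (1.0 + np.exp(-(t - t50) / k))

    def summarize(t, response, model, p0):
        p, _ = curve_fit(model, t, response, p0=p0, maxfev=20000)
        fit = model(t, *p)
        v = np.gradient(fit, t)                    # response velocity (D/s)
        return {"params": p,                       # latency sits in t0 or t50
                "amplitude": fit[-1] - fit[0],
                "peak_velocity": v[np.argmax(np.abs(v))]}
    ```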

  18. Research directed toward improved echelles for the ultraviolet. [Large space telescope spectrographs]

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Low-frequency gratings obtainable with present technology can meet the grating-efficiency design goals for potential space telescope spectrographs. Gratings made with changes in three specific parameters (the ruling tool profile, the coating material, and the lubricants used during the ruling process) were compared. A series of coatings and test gratings were fabricated and examined for surface smoothness with a Nomarski differential interference microscope and an electron microscope. Photomicrographs were obtained to show the difference in smoothness of the various coatings and rulings. Efficiency measurements were made for those test rulings that showed good groove characteristics: smoothness, proper ruling depth, and absence of defects (e.g., streaks, feathered edges and rough sides). Higher grating efficiency should be correlated with the degree of smoothness of both the coating and the grating groove.

  19. Analysis of mixed traffic flow with human-driving and autonomous cars based on car-following model

    NASA Astrophysics Data System (ADS)

    Zhu, Wen-Xing; Zhang, H. M.

    2018-04-01

    We investigated mixed traffic flow with human-driven and autonomous cars. A new mathematical model with adjustable sensitivity and a smooth factor was proposed to describe the autonomous car's moving behavior, in which the smooth factor balances the front and back headways in a flow. A lemma and a theorem were proved to support the stability criteria of the traffic flow. A series of simulations was carried out to analyze the mixed traffic flow, and fundamental diagrams were obtained from the numerical results. The varying sensitivity and smooth factor of the autonomous cars affect the traffic flux, which exhibits opposite trends with increasing parameter values before and after the critical density. Moreover, the sensitivity of the sensors and the smooth factor play an important role in stabilizing the mixed traffic flow and suppressing traffic jams.
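
    An optimal-velocity-style reading of the autonomous-car rule described above, offered as a sketch rather than the authors' exact equations: the target speed depends on a smooth-factor-weighted mix of front and back headways, scaled by an adjustable sensitivity, on a ring road.

    ```python
    # Car-following update with sensitivity `sens` and smooth factor `p`.
    import numpy as np

    def ov(h, v_max=2.0, h_c=4.0):
        return 0.5 * v_max * (np.tanh(h - h_c) + np.tanh(h_c))

    def step(x, v, dt=0.1, sens=1.5, p=0.7, L=400.0):
        h_front = np.roll(x, -1) - x
        h_back = x - np.roll(x, 1)
        h_front[-1] += L                            # ring-road wrap-around
        h_back[0] += L
        h_eff = p * h_front + (1 - p) * h_back      # smooth factor balances headways
        a = sens * (ov(h_eff) - v)                  # relax toward the target speed
        return x + v * dt, v + a * dt
    ```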

  20. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.

    2013-12-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
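
    The adaptive-bandwidth idea reduces to a few lines: each epicenter receives a Gaussian bandwidth equal to the distance to its n-th nearest neighbour, so dense clusters are smoothed tightly and sparse regions broadly. A sketch after Helmstetter et al., with completeness weighting and kernel details simplified:

    ```python
    # Adaptive nearest-neighbour kernel smoothing of epicenters (2D sketch).
    import numpy as np
    from scipy.spatial import cKDTree

    def adaptive_rate(epicenters, grid, n=3):
        d, _ = cKDTree(epicenters).query(epicenters, k=n + 1)  # k=1 is the point itself
        bw = d[:, -1]                                          # per-event distance
        rate = np.zeros(len(grid))
        for p, h in zip(epicenters, bw):
            r2 = np.sum((grid - p) ** 2, axis=1)
            rate += np.exp(-0.5 * r2 / h**2) / (2 * np.pi * h**2)
        return rate
    ```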

  1. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    USGS Publications Warehouse

    Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.

    2014-01-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.

  2. Inherent smoothness of intensity patterns for intensity modulated radiation therapy generated by simultaneous projection algorithms

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.

    2004-07-01

    The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions which meet given constraints for the tumour as well as any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. Including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not compromise this property in our experimental observations. The smoothness properties are compared with those of other optimization algorithms, including simulated annealing and the gradient descent method. The simultaneous property of these algorithms is ideally suited to parallel computing technologies.
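
    Cimmino's simultaneous projection method for a system of linear inequalities A x <= b, the core of the approach above: every violated half-space contributes a projection, and their weighted average gives the update, which is why the iterations parallelize so naturally. The equal weights and the nonnegativity step on beamlet intensities are illustrative choices.

    ```python
    # Cimmino's simultaneous projection method for A x <= b.
    import numpy as np

    def cimmino(A, b, x0, n_iter=500, relax=1.0):
        x = np.asarray(x0, dtype=float).copy()
        norms2 = np.sum(A * A, axis=1)
        w = np.full(len(b), 1.0 / len(b))          # equal weights, summing to 1
        for _ in range(n_iter):
            viol = np.maximum(A @ x - b, 0.0)      # only violated rows contribute
            x -= relax * (A.T @ (w * viol / norms2))
            np.maximum(x, 0.0, out=x)              # keep intensities nonnegative
        return x
    ```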

  3. Performance Trades Study for Robust Airfoil Shape Optimization

    NASA Technical Reports Server (NTRS)

    Li, Wu; Padula, Sharon

    2003-01-01

    From time to time, existing aircraft need to be redesigned for new missions with modified operating conditions such as required lift or cruise speed. This research is motivated by the needs of conceptual and preliminary design teams for smooth airfoil shapes that are similar to the baseline design but have improved drag performance over a range of flight conditions. The proposed modified profile optimization method (MPOM) modifies a large number of design variables to search for nonintuitive performance improvements, while avoiding off-design performance degradation. Given a good initial design, the MPOM generates fairly smooth airfoils that are better than the baseline without making drastic shape changes. Moreover, the MPOM allows users to gain valuable information by exploring performance trades over various design conditions. Four simulation cases of airfoil optimization in transonic viscous flow are included to demonstrate the usefulness of the MPOM as a performance trades study tool. Simulation results are obtained by solving fully turbulent Navier-Stokes equations and the corresponding discrete adjoint equations using the unstructured-grid computational fluid dynamics code FUN2D.

  4. Cisapride stimulates contraction of idiopathic megacolonic smooth muscle in cats.

    PubMed

    Hasler, A H; Washabau, R J

    1997-01-01

    We have previously shown that cisapride, a substituted piperidinyl benzamide, stimulates contraction of healthy feline colonic smooth muscle. The purpose of the present investigation was to determine the effect of cisapride on feline idiopathic megacolonic smooth muscle function. Longitudinal smooth muscle strips from ascending and descending colon were obtained from cats with idiopathic megacolon, suspended in a 1.5 mM Ca(2+)-HEPES buffer solution (37 degrees C, 100% O2, pH 7.4), attached to isometric force transducers, and stretched to optimal muscle length (Lo). Control responses were obtained at each muscle site with acetylcholine (10(-8) to 10(-4) M), substance P (10(-11) to 10(-7) M), or potassium chloride (10 to 80 mM). Muscles were then stimulated with cumulative (10(-9) to 10(-6) M) doses of cisapride in the absence or presence of tetrodotoxin (10(-6) M) and atropine (10(-6) M), or in a 0 calcium HEPES buffer solution. In cats with idiopathic megacolon, cisapride stimulated contractions of longitudinal smooth muscle from both the ascending and the descending colon. Cisapride-induced contractions were similar in magnitude to those induced by substance P and acetylcholine in the ascending colon, but were less than those observed in the descending colon. Cisapride-induced contractions in megacolonic smooth muscle were only partially inhibited by tetrodotoxin and atropine, but were virtually abolished by removal of extracellular calcium. We concluded that cisapride-induced contractions of feline megacolonic smooth muscle are largely smooth muscle mediated and dependent on influx of extracellular calcium. Cisapride-induced contractions in megacolonic smooth muscle are only partially dependent on enteric cholinergic nerves. Thus, cisapride may be useful in the treatment of cats with idiopathic megacolon.

  5. NUMERICAL CONVERGENCE IN SMOOTHED PARTICLE HYDRODYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Qirong; Li, Yuexing; Hernquist, Lars

    2015-02-10

    We study the convergence properties of smoothed particle hydrodynamics (SPH) using numerical tests and simple analytic considerations. Our analysis shows that formal numerical convergence is possible in SPH only in the joint limit N → ∞, h → 0, and N_nb → ∞, where N is the total number of particles, h is the smoothing length, and N_nb is the number of neighbor particles within the smoothing volume used to compute smoothed estimates. Previous work has generally assumed that the conditions N → ∞ and h → 0 are sufficient to achieve convergence, while holding N_nb fixed. We demonstrate that if N_nb is held fixed as the resolution is increased, there will be a residual source of error that does not vanish as N → ∞ and h → 0. Formal numerical convergence in SPH is possible only if N_nb is increased systematically as the resolution is improved. Using analytic arguments, we derive an optimal compromise scaling for N_nb by requiring that this source of error balance that present in the smoothing procedure. For typical choices of the smoothing kernel, we find N_nb ∝ N^0.5. This means that if SPH is to be used as a numerically convergent method, the required computational cost does not scale with particle number as O(N), but rather as O(N^(1+δ)), where δ ≈ 0.5, with a weak dependence on the form of the smoothing kernel.

  6. Task-based modeling and optimization of a cone-beam CT scanner for musculoskeletal imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prakash, P.; Zbijewski, W.; Gang, G. J.

    2011-10-15

    Purpose: This work applies a cascaded systems model for cone-beam CT imaging performance to the design and optimization of a system for musculoskeletal extremity imaging. The model provides a quantitative guide to the selection of system geometry, source and detector components, acquisition techniques, and reconstruction parameters. Methods: The model is based on cascaded systems analysis of the 3D noise-power spectrum (NPS) and noise-equivalent quanta (NEQ) combined with factors of system geometry (magnification, focal spot size, and scatter-to-primary ratio) and anatomical background clutter. The model was extended to task-based analysis of detectability index (d') for tasks ranging in contrast and frequency content, and d' was computed as a function of system magnification, detector pixel size, focal spot size, kVp, dose, electronic noise, voxel size, and reconstruction filter to examine trade-offs and optima among such factors in multivariate analysis. The model was tested quantitatively versus the measured NPS and qualitatively in cadaver images as a function of kVp, dose, pixel size, and reconstruction filter under conditions corresponding to the proposed scanner. Results: The analysis quantified trade-offs among factors of spatial resolution, noise, and dose. System magnification (M) was a critical design parameter with strong effect on spatial resolution, dose, and x-ray scatter, and a fairly robust optimum was identified at M ~ 1.3 for the imaging tasks considered. The results suggested kVp selection in the range of ~65-90 kVp, the lower end (65 kVp) maximizing subject contrast and the upper end maximizing NEQ (90 kVp). The analysis quantified fairly intuitive results, e.g., ~0.1-0.2 mm pixel size (and a sharp reconstruction filter) being optimal for high-frequency tasks (bone detail) compared to ~0.4 mm pixel size (and a smooth reconstruction filter) for low-frequency (soft-tissue) tasks. This result suggests a specific protocol of 1 x 1 (full-resolution) projection data acquisition followed by full-resolution reconstruction with a sharp filter for high-frequency tasks, along with 2 x 2 binning reconstruction with a smooth filter for low-frequency tasks. The analysis guided selection of specific source and detector components implemented on the proposed scanner. The analysis also quantified the potential benefits and points of diminishing return in focal spot size, reduced electronic noise, finer detector pixels, and low-dose limits of detectability. Theoretical results agreed quantitatively with the measured NPS and qualitatively with evaluation of cadaver images by a musculoskeletal radiologist. Conclusions: A fairly comprehensive model for 3D imaging performance in cone-beam CT combines factors of quantum noise, system geometry, anatomical background, and imaging task. The analysis provided a valuable, quantitative guide to design, optimization, and technique selection for a musculoskeletal extremities imaging system under development.

  7. Shape optimization of road tunnel cross-section by simulated annealing

    NASA Astrophysics Data System (ADS)

    Sobótka, Maciej; Pachnicz, Michał

    2016-06-01

    The paper concerns shape optimization of a tunnel excavation cross-section. The study incorporates the simulated annealing (SA) optimization procedure. The form of the cost function derives from the energetic optimality condition formulated in the authors' previous papers, and the algorithm takes advantage of the optimization procedure already published by the authors. Unlike other approaches presented in the literature, the one introduced in this paper takes into consideration the practical requirement of preserving a fixed clearance gauge. Itasca FLAC software is utilized in the numerical examples. The optimal excavation shapes are determined for five different in situ stress ratios. This factor significantly affects the optimal topology of the excavation: the resulting shapes are elongated in the direction of the greater principal stress. Moreover, the obtained optimal shapes have smooth contours circumscribing the gauge.
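
    A generic simulated-annealing skeleton of the kind used for the cross-section search. The paper's energetic cost function, the FLAC response, and the shape perturbation are all stood in for by caller-supplied functions, and the fixed clearance gauge enters as a feasibility rejection; all names here are placeholders.

    ```python
    # Simulated annealing with feasibility rejection (clearance gauge).
    import numpy as np

    def anneal(shape0, cost, perturb, feasible,
               t0=1.0, cool=0.995, n_iter=5000, seed=0):
        rng = np.random.default_rng(seed)
        s, c, t = shape0, cost(shape0), t0
        best, best_c = s, c
        for _ in range(n_iter):
            cand = perturb(s, rng)
            if feasible(cand):                     # preserve the clearance gauge
                dc = cost(cand) - c
                if dc < 0 or rng.random() < np.exp(-dc / t):
                    s, c = cand, c + dc            # accept (always if downhill)
                    if c < best_c:
                        best, best_c = s, c
            t *= cool                              # geometric cooling schedule
        return best, best_c
    ```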

  8. Quantum Monte Carlo calculations of NiO

    NASA Astrophysics Data System (ADS)

    Maezono, Ryo; Towler, Mike D.; Needs, Richard. J.

    2008-03-01

    We describe variational and diffusion quantum Monte Carlo (VMC and DMC) calculations [1] of NiO using a 1024-electron simulation cell. We have used a smooth, norm-conserving, Dirac-Fock pseudopotential [2] in our work. Our trial wave functions were of Slater-Jastrow form, containing orbitals generated in Gaussian-basis UHF periodic calculations [3]. The Jastrow factor is optimized using variance minimization with optimized cutoff lengths, following the same scheme as our previous work [4]. We apply the lattice-regularized scheme [5] to evaluate non-local pseudopotentials in DMC and find that the scheme improves the smoothness of the energy-volume curve. [1] CASINO ver.2.1 User Manual, University of Cambridge (2007). [2] J.R. Trail et al., J. Chem. Phys. 122, 014112 (2005). [3] CRYSTAL98 User's Manual, University of Torino (1998). [4] Ryo Maezono et al., Phys. Rev. Lett. 98, 025701 (2007). [5] Michele Casula, Phys. Rev. B 74, 161102(R) (2006).

  9. Results and Error Estimates from GRACE Forward Modeling over Antarctica

    NASA Astrophysics Data System (ADS)

    Bonin, Jennifer; Chambers, Don

    2013-04-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However, when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters that will result in correct high-resolution mass detection and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.

  10. Influence of Filler Wire Feed Rate in Laser-Arc Hybrid Welding of T-butt Joint in Shipbuilding Steel with Different Optical Setups

    NASA Astrophysics Data System (ADS)

    Unt, Anna; Poutiainen, Ilkka; Salminen, Antti

    In this paper, a study of laser-arc hybrid welding with three different process fibres was conducted to build knowledge about process behaviour and to discuss potential benefits for improving weld properties. The welding parameters affect the weld geometry considerably; for example, an increase in welding speed usually decreases the penetration, and a larger beam diameter usually widens the weld. A laser hybrid welding system equipped with process fibres of 200, 300 and 600 μm core diameter was used to produce fillet welds. Shipbuilding steel AH36 plates of 8 mm thickness were welded by hybrid laser-arc welding (HLAW) in an inverted T configuration, and the effects of the filler wire feed rate and the beam positioning distance from the joint plane were investigated. Based on metallographic cross-sections, the effect of the process parameters on the joint geometry was studied. Joints with optimized properties (full penetration, soundness, smooth transition from bead to base material) were produced with the 200 μm and 600 μm process fibres, while the fibre with 300 μm core diameter produced welds with unacceptable levels of porosity.

  11. Fiber Laser Welding-Brazing Characteristics of Dissimilar Metals AZ31B Mg Alloys to Copper with Mg-Based Filler

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaoye; Tan, Caiwang; Meng, Shenghao; Chen, Bo; Song, Xiaoguo; Li, Liqun; Feng, Jicai

    2018-03-01

    Fiber laser welding-brazing of 1-mm-thick AZ31B Mg alloy to 1.5-mm-thick copper (T2) with a Mg-based filler was performed in a lap configuration. The weld appearance, interfacial microstructure and mechanical properties were investigated at different heat inputs. The results indicated that the processing windows for appropriate welding parameters were relatively narrow in this case. Visually acceptable joints with adequate strength were achieved at appropriate welding parameters. The maximum tensile-shear fracture load of the laser-welded-brazed Mg/Cu joint reached 1730 N at a laser power of 1200 W, representing 64.1% joint efficiency relative to the AZ31 Mg base metal. A eutectic structure (α-Mg + Mg2Cu) and a Mg-Cu intermetallic compound were observed at the Mg/Cu interface, and Mg-Al-Cu ternary intermetallic compounds were identified between the intermetallics and the eutectic structure at high heat input. All the joints fractured at the Mg-Cu interface; however, the fracture mode was found to differ. At a laser power of 1200 W the fracture surface was characterized by tearing edges, while joints with poor strength were dominated by a smooth surface or flat tear pattern.

  12. Effect of Weld Tool Geometry on Friction Stir Welded Ti-6Al-4V

    NASA Technical Reports Server (NTRS)

    Querin, Joseph A.; Schneider, Judy A.

    2008-01-01

    In this study, flat 0.250" thick Ti-6Al-4V panels were friction stir welded (FSWed) using weld tools with tapered pins. The five pin geometries of the weld tools included 0 degree (straight cylinder), 15 degree, 30 degree, 45 degree, and 60 degree angles on the frustum. All weld tools had a smooth 7 degree concave shoulder and were made from microwave-sintered tungsten carbide. For each weld tool geometry, the FSW process parameters were optimized to eliminate internal defects. All the welds were produced in position control with a 2.5 degree lead angle using a butt joint configuration for the panels. The process parameters of spindle rpm and travel speed were varied, altering the hot-working conditions imparted to the workpiece. Load cells on the FSW machine allowed the torque, the plunge force, and the plow force to be recorded during welding. The resulting mechanical properties were evaluated from tensile test results of the FSW joints. Variations in the material flow were investigated by means of microstructural analysis including optical microscopy (OM), scanning electron microscopy (SEM), and orientation image mapping (OIM).

  13. Open-pit coal mine production sequencing incorporating grade blending and stockpiling options: An application from an Indian mine

    NASA Astrophysics Data System (ADS)

    Kumar, Ashish; Chatterjee, Snehamoy

    2017-05-01

    Production scheduling is a crucial aspect of the mining industry. An optimal and efficient production schedule can increase profits manifold and reduce the amount of waste to be handled. Production scheduling for coal mines is necessary to maintain consistency in the quality and quantity parameters of the coal supplied to power plants; irregularity in the quality parameters of the coal can lead to heavy losses in coal-fired power plants. Moreover, the stockpiling of coal poses environmental and fire problems owing to low incubation periods. This article proposes a production scheduling formulation for open-pit coal mines that includes stockpiling and blending opportunities, which play a major role in maintaining the quality and quantity of supplied coal. The proposed formulation was applied to a large open-pit coal mine in India. The formulation utilizes stockpiled coal within its incubation period while maximizing discounted cash flows, and maintains consistency in the quality and quantity of coal supplied to power plants through blending and stockpiling options to ensure smooth functioning.

  14. Smooth muscle-like tissue constructs with circumferentially oriented cells formed by the cell fiber technology.

    PubMed

    Hsiao, Amy Y; Okitsu, Teru; Onoe, Hiroaki; Kiyosawa, Mahiro; Teramae, Hiroki; Iwanaga, Shintaroh; Kazama, Tomohiko; Matsumoto, Taro; Takeuchi, Shoji

    2015-01-01

    The proper functioning of many organs and tissues containing smooth muscles greatly depends on the intricate organization of the smooth muscle cells oriented in appropriate directions. Consequently controlling the cellular orientation in three-dimensional (3D) cellular constructs is an important issue in engineering tissues of smooth muscles. However, the ability to precisely control the cellular orientation at the microscale cannot be achieved by various commonly used 3D tissue engineering building blocks such as spheroids. This paper presents the formation of coiled spring-shaped 3D cellular constructs containing circumferentially oriented smooth muscle-like cells differentiated from dedifferentiated fat (DFAT) cells. By using the cell fiber technology, DFAT cells suspended in a mixture of extracellular proteins possessing an optimized stiffness were encapsulated in the core region of alginate shell microfibers and uniformly aligned to the longitudinal direction. Upon differentiation induction to the smooth muscle lineage, DFAT cell fibers self-assembled to coiled spring structures where the cells became circumferentially oriented. By changing the initial core-shell microfiber diameter, we demonstrated that the spring pitch and diameter could be controlled. 21 days after differentiation induction, the cell fibers contained high percentages of ASMA-positive and calponin-positive cells. Our technology to create these smooth muscle-like spring constructs enabled precise control of cellular alignment and orientation in 3D. These constructs can further serve as tissue engineering building blocks for larger organs and cellular implants used in clinical treatments.

  15. Smooth Muscle-Like Tissue Constructs with Circumferentially Oriented Cells Formed by the Cell Fiber Technology

    PubMed Central

    Hsiao, Amy Y.; Okitsu, Teru; Onoe, Hiroaki; Kiyosawa, Mahiro; Teramae, Hiroki; Iwanaga, Shintaroh; Kazama, Tomohiko; Matsumoto, Taro; Takeuchi, Shoji

    2015-01-01

    The proper functioning of many organs and tissues containing smooth muscles greatly depends on the intricate organization of the smooth muscle cells oriented in appropriate directions. Consequently controlling the cellular orientation in three-dimensional (3D) cellular constructs is an important issue in engineering tissues of smooth muscles. However, the ability to precisely control the cellular orientation at the microscale cannot be achieved by various commonly used 3D tissue engineering building blocks such as spheroids. This paper presents the formation of coiled spring-shaped 3D cellular constructs containing circumferentially oriented smooth muscle-like cells differentiated from dedifferentiated fat (DFAT) cells. By using the cell fiber technology, DFAT cells suspended in a mixture of extracellular proteins possessing an optimized stiffness were encapsulated in the core region of alginate shell microfibers and uniformly aligned to the longitudinal direction. Upon differentiation induction to the smooth muscle lineage, DFAT cell fibers self-assembled to coiled spring structures where the cells became circumferentially oriented. By changing the initial core-shell microfiber diameter, we demonstrated that the spring pitch and diameter could be controlled. 21 days after differentiation induction, the cell fibers contained high percentages of ASMA-positive and calponin-positive cells. Our technology to create these smooth muscle-like spring constructs enabled precise control of cellular alignment and orientation in 3D. These constructs can further serve as tissue engineering building blocks for larger organs and cellular implants used in clinical treatments. PMID:25734774

  16. Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection

    PubMed Central

    Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose: To enable fast reconstruction of quantitative susceptibility maps with a Total Variation penalty and automatic regularization parameter selection. Methods: ℓ1-regularized susceptibility mapping is accelerated by variable splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results: Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers a 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. The utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion: Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
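
    As a hedged illustration of the variable-splitting scheme summarized above (a minimal sketch, not the authors' implementation: the function names, the single-axis gradient, and the fixed penalty weight lam and ADMM parameter rho are all assumptions), each iteration reduces to a diagonal FFT-domain solve followed by soft thresholding:

      import numpy as np

      def soft_threshold(x, lam):
          # Proximal operator of the l1 norm: elementwise shrinkage toward zero.
          return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

      def admm_tv_qsm(phi_k, D, lam=1e-3, rho=1.0, n_iter=50):
          # Sketch of variable splitting for
          #   min_chi ||D * F(chi) - phi_k||^2 + lam * ||G chi||_1,
          # with the gradient G taken along one axis only to keep the example short.
          # phi_k: k-space tissue phase; D: dipole kernel sampled in k-space.
          shape = phi_k.shape
          z = np.zeros(shape)
          u = np.zeros(shape)
          # k-space transfer function of the forward difference along axis 0
          k0 = np.fft.fftfreq(shape[0]).reshape(-1, *([1] * (len(shape) - 1)))
          G = np.exp(2j * np.pi * k0) - 1.0
          for _ in range(n_iter):
              # chi-update: the normal equations are diagonal in k-space
              rhs = np.conj(D) * phi_k + rho * np.conj(G) * np.fft.fftn(z - u)
              chi_k = rhs / (np.abs(D) ** 2 + rho * np.abs(G) ** 2 + 1e-12)
              gx = np.fft.ifftn(G * chi_k).real      # gradient of current estimate
              z = soft_threshold(gx + u, lam / rho)  # z-update: soft thresholding
              u = u + gx - z                         # dual update on the constraint
          return np.fft.ifftn(chi_k).real

    Because every update is either elementwise or an FFT, the per-iteration cost is O(N log N), which is the source of the reported speed-up over gradient-based solvers.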

  17. Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States

    NASA Technical Reports Server (NTRS)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.

    2017-01-01

    This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced-order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies the consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.

  18. Remote sensing of soil moisture content over bare fields at 1.4 GHz frequency

    NASA Technical Reports Server (NTRS)

    Wang, J. R.; Choudhury, B. J.

    1980-01-01

    A simple method of estimating the moisture content (W) of a bare soil from the observed brightness temperature (T_B) at 1.4 GHz is discussed. The method is based on a radiative transfer model calculation, which has been successfully used in the past to account for many observational results, with some modifications to take into account the effect of surface roughness. Besides the measured T_B's, the three additional inputs required by the method are the effective soil thermodynamic temperature, the precise relation between W and the smooth-field brightness temperature T_B, and a parameter specifying the surface roughness characteristics. The soil effective temperature can be readily measured, and the procedures for estimating the surface roughness parameter and obtaining the relation between W and the smooth-field brightness temperature are discussed in detail. Dual-polarized radiometric measurements at an off-nadir incidence angle are sufficient to estimate both the surface roughness parameter and W, provided that the relation between W and the smooth-field brightness temperature at the same angle is known. The method of W estimation is demonstrated with two sets of experimental data, one from a controlled field experiment by a mobile tower and the other from aircraft overflight. The results from both data sets are encouraging when the estimated W's are compared with the acquired ground truth of W's in the top 2 cm layer. An offset between the estimated and the measured W's exists in the results of the analyses, but it can be accounted for by the presently poor knowledge of the relationship between W and the smooth-field brightness temperature for various types of soils. An approach to quantify this relationship for different soils and thus improve the W estimation method is suggested.
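
    For context, a standard way to write the roughness modification described above (shown here in the generic Wang-Choudhury textbook form, not necessarily the exact formulation of this paper) relates the rough-surface reflectivity at polarization p to its smooth-field value through a roughness parameter h and incidence angle theta:

      r_p^{\mathrm{rough}} = r_p^{\mathrm{smooth}} \, e^{-h \cos^2 \theta},
      \qquad
      T_B^{\,p} \approx T_{\mathrm{soil}} \left( 1 - r_p^{\mathrm{rough}} \right)

    Dual-polarized measurements at a single off-nadir angle then supply two equations, from which both h and the moisture-dependent smooth-field reflectivity can be recovered.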

  19. Effect of 3-substituted 1,4-benzodiazepin-2-ones on bradykinin-induced smooth muscle contraction.

    PubMed

    Virych, P A; Shelyuk, O V; Kabanova, T A; Khalimova, E I; Martynyuk, V S; Pavlovsky, V I; Andronati, S A

    2017-01-01

    The biochemical properties of 3-substituted 1,4-benzodiazepines are determined by the characteristics of their chemical structure. The influence of 3-substituted 1,4-benzodiazepin-2-ones on the maximal normalized rate and amplitude of isometric smooth muscle contraction in rats was investigated. Compounds MX-1775 and MX-1828 inhibited bradykinin-induced smooth muscle contraction in a manner similar to that of des-Arg9-bradykinin acetate, a competitive inhibitor at bradykinin B2 receptors. MX-1626 demonstrated unidirectional changes in the maximal normalized rate and force of smooth muscle contraction that depended proportionally on the bradykinin concentration in the range 10^-10 to 10^-6 M. MX-1828 produced a statistically significant decrease in the normalized rate of smooth muscle contraction at bradykinin concentrations of 10^-10 and 10^-9 M (by 20.7 and 8.6%, respectively), whereas at an agonist concentration of 10^-6 M this parameter increased by 10.7% and the amplitude was reduced by 29.5%. Compounds MX-2011, MX-1785 and MX-2004 showed no consistent effect on bradykinin-induced smooth muscle contraction. Compounds MX-1775, MX-1828 and MX-1626 were selected for further research on their influence on the kinin-kallikrein system and pain perception.

  20. Rapid Airplane Parametric Input Design (RAPID)

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.

    1995-01-01

    RAPID is a methodology and software system to define a class of airplane configurations and directly evaluate surface grids, volume grids, and grid sensitivity on and about the configurations. A distinguishing characteristic which separates RAPID from other airplane surface modellers is that the output grids and grid sensitivity are directly applicable in CFD analysis. A small set of design parameters and grid control parameters govern the process, which is incorporated into interactive software for 'real time' visual analysis and into batch software for the application of optimization technology. The computed surface grids and volume grids are suitable for a wide range of Computational Fluid Dynamics (CFD) simulations. The general airplane configuration has wing, fuselage, horizontal tail, and vertical tail components. The double-delta wing and tail components are generated by solving a fourth-order partial differential equation (PDE) subject to Dirichlet and Neumann boundary conditions. The design parameters are incorporated into the boundary conditions and therefore govern the shapes of the surfaces. The PDE solution yields a smooth transition between boundaries. Surface grids suitable for CFD calculation are created by establishing an H-type topology about the configuration and incorporating grid spacing functions in the PDE for the lifting components and the fuselage definition equations. User-specified grid parameters govern the location and degree of grid concentration. A two-block volume grid about a configuration is calculated using the Control Point Form (CPF) technique. The interactive software, which runs on Silicon Graphics IRIS workstations, allows design parameters to be continuously varied and the resulting surface grid to be observed in real time. The batch software computes both the surface and volume grids and also computes the sensitivity of the output grid with respect to the input design parameters by applying the precompiler tool ADIFOR to the grid generation program. The output of ADIFOR is a new source code containing the old code plus expressions for derivatives of specified dependent variables (grid coordinates) with respect to specified independent variables (design parameters). The RAPID methodology and software provide a means of rapidly defining numerical prototypes, grids, and grid sensitivity of a class of airplane configurations. This technology and software are highly useful for CFD research and for preliminary design and optimization processes.

  1. Optimal interpolation analysis of leaf area index using MODIS data

    USGS Publications Warehouse

    Gu, Yingxin; Belair, Stephane; Mahfouf, Jean-Francois; Deblonde, Godelieve

    2006-01-01

    A simple data analysis technique for vegetation leaf area index (LAI) using Moderate Resolution Imaging Spectroradiometer (MODIS) data is presented. The objective is to generate LAI data appropriate for numerical weather prediction. A series of techniques and procedures, including data quality control, time-series data smoothing, and simple data analysis, is applied. The LAI analysis is an optimal combination of the MODIS observations and a derived climatology, weighted by their associated errors σo and σc. The “best estimate” LAI is derived from a simple three-point smoothing technique combined with a selection of maximum LAI (after data quality control) values to ensure higher quality. The LAI climatology is a time-smoothed mean value of the “best estimate” LAI during the years 2002–2004. The observation error is obtained by comparing the MODIS observed LAI with the “best estimate” LAI, and the climatological error is obtained by comparing the “best estimate” LAI with the climatological LAI value. The LAI analysis is the result of a weighting between these two errors. The method is demonstrated for the 15-km grid of the Meteorological Service of Canada (MSC) regional numerical weather prediction model. The final LAI analyses have a relatively smooth temporal evolution, which makes them more appropriate for environmental prediction than the original MODIS LAI observations. They are also more realistic than the LAI data currently used operationally at the MSC, which are based on land-cover databases.
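
    The error weighting described above has a simple closed form in the scalar case. The sketch below is illustrative only (variable names and the exact weighting are assumptions based on standard optimal interpolation, not code from the paper):

      import numpy as np

      def oi_lai(lai_obs, lai_clim, sigma_o, sigma_c):
          # Optimal-interpolation weight: trust the observation more when the
          # climatological error sigma_c dominates the observation error sigma_o.
          w = sigma_c**2 / (sigma_o**2 + sigma_c**2)
          return w * np.asarray(lai_obs) + (1.0 - w) * np.asarray(lai_clim)

    With sigma_o = sigma_c the analysis is the plain average of observation and climatology; as sigma_o grows, the analysis relaxes toward the climatology.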

  2. SPH with dynamical smoothing length adjustment based on the local flow kinematics

    NASA Astrophysics Data System (ADS)

    Olejnik, Michał; Szewc, Kamil; Pozorski, Jacek

    2017-11-01

    Due to the Lagrangian nature of Smoothed Particle Hydrodynamics (SPH), the adaptive resolution remains a challenging task. In this work, we first analyse the influence of the simulation parameters and the smoothing length on solution accuracy, in particular in high strain regions. Based on this analysis we develop a novel approach to dynamically adjust the kernel range for each SPH particle separately, accounting for the local flow kinematics. We use the Okubo-Weiss parameter that distinguishes the strain and vorticity dominated regions in the flow domain. The proposed development is relatively simple and implies only a moderate computational overhead. We validate the modified SPH algorithm for a selection of two-dimensional test cases: the Taylor-Green flow, the vortex spin-down, the lid-driven cavity and the dam-break flow against a sharp-edged obstacle. The simulation results show good agreement with the reference data and improvement of the long-term accuracy for unsteady flows. For the lid-driven cavity case, the proposed dynamical adjustment remedies the problem of tensile instability (particle clustering).
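
    The Okubo-Weiss criterion used above to separate strain- and vorticity-dominated regions is easy to state on a 2D velocity field. The grid-based sketch below is illustrative (the paper evaluates the equivalent quantity per SPH particle from kernel-interpolated velocity gradients):

      import numpy as np

      def okubo_weiss(u, v, dx, dy):
          # W = s_n^2 + s_s^2 - omega^2 for a 2D field sampled on a grid
          # (axis 0 = y, axis 1 = x). W > 0: strain-dominated region;
          # W < 0: vorticity-dominated region.
          du_dy, du_dx = np.gradient(u, dy, dx)
          dv_dy, dv_dx = np.gradient(v, dy, dx)
          s_n = du_dx - dv_dy        # normal strain
          s_s = dv_dx + du_dy        # shear strain
          omega = dv_dx - du_dy      # vorticity
          return s_n**2 + s_s**2 - omega**2

    The sign of W per particle can then drive the smoothing-length adjustment, e.g. shrinking the kernel range in strongly strained regions.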

  3. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    PubMed

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
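
    A minimal sketch of the core fitting step follows, using SciPy's radial basis function interface (the bootstrap selection of the smoothing parameter and the k-nearest-neighbour projection from the paper are not reproduced, and the data and smoothing value are hypothetical):

      import numpy as np
      from scipy.interpolate import Rbf

      rng = np.random.default_rng(0)
      # Hypothetical noisy height-field point set standing in for scanner output.
      x, y = rng.uniform(0.0, 1.0, (2, 200))
      z = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y) \
          + 0.05 * rng.standard_normal(200)

      # Thin-plate-spline RBF; 'smooth' is the parameter the paper selects by
      # bootstrap test-error estimation (the value 1e-3 here is arbitrary).
      tps = Rbf(x, y, z, function='thin_plate', smooth=1e-3)
      z_denoised = tps(x, y)   # points projected onto the fitted smooth surface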

  4. Landscape Encodings Enhance Optimization

    PubMed Central

    Klemm, Konstantin; Mehta, Anita; Stadler, Peter F.

    2012-01-01

    Hard combinatorial optimization problems deal with the search for the minimum cost solutions (ground states) of discrete systems under strong constraints. A transformation of state variables may enhance computational tractability. It has been argued that these state encodings are to be chosen invertible to retain the original size of the state space. Here we show how redundant non-invertible encodings enhance optimization by enriching the density of low-energy states. In addition, smooth landscapes may be established on encoded state spaces to guide local search dynamics towards the ground state. PMID:22496860

  5. Determinant Representation of N-Times Darboux Transformation for the Defocusing Nonlinear Schrödinger Equation

    NASA Astrophysics Data System (ADS)

    Han, Jingwei; Yu, Jing; He, Jingsong

    2013-10-01

    The determinant expression T[N] of a new Darboux transformation (DT) for the Ablowitz-Kaup-Newell-Segur equation is given in this paper. By making use of this DT under the reduction r = q*, we construct determinant expressions of the dark N-soliton solution for the defocusing NLS equation. Besides the known one-soliton solution, we provide smooth two-soliton and smooth N-soliton solutions on a certain domain of the parameter space for the defocusing NLS equation.

  6. Determination of the optimal mesh parameters for Iguassu centrifuge flow and separation calculations

    NASA Astrophysics Data System (ADS)

    Romanihin, S. M.; Tronin, I. V.

    2016-09-01

    We present the method and the results of the determination of optimal computational mesh parameters for axisymmetric modeling of flow and separation in the Iguassu gas centrifuge. The aim of this work was to determine the mesh parameters which provide relatively low computational cost without loss of accuracy. We use a direct search optimization algorithm to calculate optimal mesh parameters. The obtained parameters were tested by the calculation of the optimal working regime of the Iguassu GC. The separative power calculated using the optimal mesh parameters differs by less than 0.5% from the result obtained on the detailed mesh. The presented method can be used to determine optimal mesh parameters of the Iguassu GC with different rotor speeds.

  7. Intrathoracic airway wall detection using graph search and scanner PSF information

    NASA Astrophysics Data System (ADS)

    Reinhardt, Joseph M.; Park, Wonkyu; Hoffman, Eric A.; Sonka, Milan

    1997-05-01

    Measurements of the in vivo bronchial tree can be used to assess regional airway physiology. High-resolution CT (HRCT) provides detailed images of the lungs and has been used to evaluate bronchial airway geometry. Such measurements have been used to assess diseases affecting the airways, such as asthma and cystic fibrosis, to measure airway response to external stimuli, and to evaluate the mechanics of airway collapse in sleep apnea. To routinely use CT imaging in a clinical setting to evaluate the in vivo airway tree, there is a need for an objective, automatic technique for identifying the airway tree in the CT images and measuring airway geometry parameters. Manual or semi-automatic segmentation and measurement of the airway tree from a 3D data set may require several man-hours of work, and the manual approaches suffer from inter-observer and intra-observer variabilities. This paper describes a method for automatic airway tree analysis that combines accurate airway wall location estimation with a technique for optimal airway border smoothing. A fuzzy logic, rule-based system is used to identify the branches of the 3D airway tree in thin-slice HRCT images. Raycasting is combined with a model-based parameter estimation technique to identify the approximate inner and outer airway wall borders in 2D cross-sections through the image data set. Finally, a 2D graph search is used to optimize the estimated airway wall locations and obtain accurate airway borders. We demonstrate this technique using CT images of a plexiglass tube phantom.

  8. Vibration Sensor-Based Bearing Fault Diagnosis Using Ellipsoid-ARTMAP and Differential Evolution Algorithms

    PubMed Central

    Liu, Chang; Wang, Guofeng; Xie, Qinglu; Zhang, Yanchao

    2014-01-01

    Effective fault classification of rolling element bearings provides an important basis for ensuring safe operation of rotating machinery. In this paper, a novel vibration sensor-based fault diagnosis method using an Ellipsoid-ARTMAP network (EAM) and a differential evolution (DE) algorithm is proposed. The original features are firstly extracted from vibration signals based on wavelet packet decomposition. Then, a minimum-redundancy maximum-relevancy algorithm is introduced to select the most prominent features so as to decrease feature dimensions. Finally, a DE-based EAM (DE-EAM) classifier is constructed to realize the fault diagnosis. The major characteristic of EAM is that the sample distribution of each category is represented by a hyper-ellipsoid node and a smoothing operation algorithm. Therefore, it can depict the decision boundary of disperse samples accurately and effectively avoid over-fitting. To optimize the EAM network parameters, the DE algorithm is presented and two objectives, classification accuracy and node number, are simultaneously introduced as the fitness functions. Meanwhile, an exponential criterion is proposed to realize final selection of the optimal parameters. To prove the effectiveness of the proposed method, the vibration signals of four types of rolling element bearings under different loads were collected. Moreover, to improve the robustness of the classifier evaluation, a two-fold cross validation scheme is adopted and the order of feature samples is randomly arranged ten times within each fold. The results show that the DE-EAM classifier can recognize the fault categories of the rolling element bearings reliably and accurately. PMID:24936949

  9. Effect of processing parameters on microstructure of MoS{sub 2} ultra-thin films synthesized by chemical vapor deposition method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Yang; You, Suping; Sun, Kewei

    2015-06-15

    MoS{sub 2} ultra-thin layers are synthesized using a chemical vapor deposition method based on the sulfurization of molybdenum trioxide (MoO{sub 3}). The ultra-thin layers are characterized by X-ray diffraction (XRD), photoluminescence (PL) spectroscopy and atomic force microscopy (AFM). Based on our experimental results, all the processing parameters, such as the tilt angle of the substrate, applied voltage, heating time and the weight of source materials, have an effect on the microstructures of the layers. In this paper, the effects of such processing parameters on the crystal structures and morphologies of the as-grown layers are studied. It is found that the film obtained with the tilt angle of 0.06° is more uniform. A larger applied voltage is preferred for the growth of MoS{sub 2} thin films at a certain heating time. In order to obtain the ultra-thin layers of MoS{sub 2}, a weight of 0.003 g of source materials is preferred. Under our optimal experimental conditions, the surface of the film is smooth and composed of many uniformly distributed and aggregated particles, and the ultra-thin MoS{sub 2} atomic layers (1∼10 layers) cover an area of more than 2 mm×2 mm.

  10. Estimation of hyper-parameters for a hierarchical model of combined cortical and extra-brain current sources in the MEG inverse problem.

    PubMed

    Morishige, Ken-ichi; Yoshioka, Taku; Kawawaki, Dai; Hiroe, Nobuo; Sato, Masa-aki; Kawato, Mitsuo

    2014-11-01

    One of the major obstacles in estimating cortical currents from MEG signals is the disturbance caused by magnetic artifacts derived from extra-cortical current sources such as heartbeats and eye movements. To remove the effect of such extra-brain sources, we improved the hybrid hierarchical variational Bayesian method (hyVBED) proposed by Fujiwara et al. (NeuroImage, 2009). hyVBED simultaneously estimates cortical and extra-brain source currents by placing dipoles on cortical surfaces as well as extra-brain sources. This method requires EOG data for an EOG forward model that describes the relationship between eye dipoles and electric potentials. In contrast, our improved approach requires no EOG and less a priori knowledge about the current variance of extra-brain sources. We propose a new method, "extra-dipole," that optimally selects hyper-parameter values regarding current variances of the cortical surface and extra-brain source dipoles. With the selected parameter values, the cortical and extra-brain dipole currents were accurately estimated from the simulated MEG data. The performance of this method was demonstrated to be better than conventional approaches, such as principal component analysis and independent component analysis, which use only statistical properties of MEG signals. Furthermore, we applied our proposed method to measured MEG data during covert pursuit of a smoothly moving target and confirmed its effectiveness. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Eye-Target Synchrony and Attention

    NASA Astrophysics Data System (ADS)

    Contreras, R.; Kolster, R.; Basu, S.; Voss, H. U.; Ghajar, J.; Suh, M.; Bahar, S.

    2007-03-01

    Eye-target synchrony is critical during smooth pursuit. We apply stochastic phase synchronization to human pursuit of a moving target, in both normal and mild traumatic brain injured (TBI) subjects. Smooth pursuit utilizes the same neural networks used by attention. To test whether smooth pursuit is modulated by attention, subjects tracked a target while loaded with tasks involving working memory. Preliminary results suggest that additional cognitive load increases normal subjects' performance, while the effect is reversed in TBI patients. We correlate these results with eye-target synchrony. Additionally, we correlate eye-target synchrony with frequency of target motion, and discuss how the range of frequencies for optimal synchrony depends on the shift from attentional to automatic-response time scales. Synchrony deficits in TBI patients can be correlated with specific regions of brain damage imaged with diffusion tensor imaging (DTI).

  12. Smooth and vertical facet formation for AlGaN-based deep-UV laser diodes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogart, Katherine Huderle Andersen; Shul, Randy John; Stevens, Jeffrey

    2008-10-01

    Using a two-step method of plasma and wet chemical etching, we demonstrate smooth, vertical facets for use in Al{sub x} Ga{sub 1-x} N-based deep-ultraviolet laser-diode heterostructures where x = 0 to 0.5. Optimization of plasma-etching conditions included increasing both temperature and radiofrequency (RF) power to achieve a facet angle of 5 deg from vertical. Subsequent etching in AZ400K developer was investigated to reduce the facet surface roughness and improve facet verticality. The resulting combined processes produced improved facet sidewalls with an average angle of 0.7 deg from vertical and less than 2-nm root-mean-square (RMS) roughness, yielding an estimated reflectivity greater than 95% of that of a perfectly smooth and vertical facet.

  13. Interpreting SBUV Smoothing Errors: an Example Using the Quasi-biennial Oscillation

    NASA Technical Reports Server (NTRS)

    Kramarova, N. A.; Bhartia, Pawan K.; Frith, S. M.; McPeters, R. D.; Stolarski, R. S.

    2013-01-01

    The Solar Backscattered Ultraviolet (SBUV) observing system consists of a series of instruments that have been measuring both total ozone and the ozone profile since 1970. SBUV measures the profile in the upper stratosphere with a resolution that is adequate to resolve most of the important features of that region. In the lower stratosphere the limited vertical resolution of the SBUV system means that there are components of the profile variability that SBUV cannot measure. The smoothing error, as defined in the optimal estimation retrieval method, describes the components of the profile variability that the SBUV observing system cannot measure. In this paper we provide a simple visual interpretation of the SBUV smoothing error by comparing SBUV ozone anomalies in the lower tropical stratosphere associated with the quasi-biennial oscillation (QBO) to anomalies obtained from the Aura Microwave Limb Sounder (MLS). We describe a methodology for estimating the SBUV smoothing error for monthly zonal mean (mzm) profiles. We construct covariance matrices that describe the statistics of the inter-annual ozone variability using a 6 yr record of Aura MLS and ozonesonde data. We find that the smoothing error is of the order of 1 percent between 10 and 1 hPa, increasing up to 15-20 percent in the troposphere and up to 5 percent in the mesosphere. The smoothing error for total ozone columns is small, mostly less than 0.5 percent. We demonstrate that by merging the partial ozone columns from several layers in the lower stratosphere/troposphere into one thick layer, we can minimize the smoothing error. We recommend using the following layer combinations to reduce the smoothing error to about 1 percent: surface to 25 hPa (16 hPa) outside (inside) of the narrow equatorial zone 20° S-20° N.
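
    In the optimal estimation formalism invoked above, the smoothing error covariance follows directly from the averaging kernel matrix A of the retrieval and the covariance S_a of the true inter-annual variability (here constructed from the MLS and ozonesonde records). Written as the standard textbook expression (our notation):

      \mathbf{S}_s = \left( \mathbf{A} - \mathbf{I} \right) \mathbf{S}_a \left( \mathbf{A} - \mathbf{I} \right)^{T}

    The diagonal of S_s gives the per-layer smoothing error quoted in the abstract; merging layers amounts to aggregating the corresponding covariance block, which is why thick-layer columns carry a smaller relative smoothing error.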

  14. Non-smooth Hopf-type bifurcations arising from impact–friction contact events in rotating machinery

    PubMed Central

    Mora, Karin; Budd, Chris; Glendinning, Paul; Keogh, Patrick

    2014-01-01

    We analyse the novel dynamics arising in a nonlinear rotor dynamic system by investigating the discontinuity-induced bifurcations corresponding to collisions with the rotor housing (touchdown bearing surface interactions). The simplified Föppl/Jeffcott rotor with clearance and mass unbalance is modelled by a two degree of freedom impact–friction oscillator, as appropriate for a rigid rotor levitated by magnetic bearings. Two types of motion observed in experiments are of interest in this paper: no contact and repeated instantaneous contact. We study how these are affected by damping and stiffness present in the system using analytical and numerical piecewise-smooth dynamical systems methods. By studying the impact map, we show that these types of motion arise at a novel non-smooth Hopf-type bifurcation from a boundary equilibrium bifurcation point for certain parameter values. A local analysis of this bifurcation point allows us a complete understanding of this behaviour in a general setting. The analysis identifies criteria for the existence of such smooth and non-smooth bifurcations, which is an essential step towards achieving reliable and robust controllers that can take compensating action. PMID:25383034

  15. Trajectory planning and optimal tracking for an industrial mobile robot

    NASA Astrophysics Data System (ADS)

    Hu, Huosheng; Brady, J. Michael; Probert, Penelope J.

    1994-02-01

    This paper introduces a unified approach to trajectory planning and tracking for an industrial mobile robot subject to non-holonomic constraints. We show (1) how a smooth trajectory is generated that takes into account the constraints from the dynamic environment and the robot kinematics; and (2) how a general predictive controller works to provide optimal tracking capability for nonlinear systems. The tracking performance of the proposed guidance system is analyzed by simulation.

  16. Normal aging affects movement execution but not visual motion working memory and decision-making delay during cue-dependent memory-based smooth-pursuit.

    PubMed

    Fukushima, Kikuro; Barnes, Graham R; Ito, Norie; Olley, Peter M; Warabi, Tateo

    2014-07-01

    Aging affects virtually all functions including sensory/motor and cognitive activities. While retinal image motion is the primary input for smooth-pursuit, its efficiency/accuracy depends on cognitive processes. Elderly subjects exhibit gain decrease during initial and steady-state pursuit, but reports on latencies are conflicting. Using a cue-dependent memory-based smooth-pursuit task, we identified important extra-retinal mechanisms for initial pursuit in young adults including cue information priming and extra-retinal drive components (Ito et al. in Exp Brain Res 229:23-35, 2013). We examined aging effects on parameters for smooth-pursuit using the same tasks. Elderly subjects were tested during three task conditions as previously described: memory-based pursuit, simple ramp-pursuit just to follow motion of a single spot, and popping-out of the correct spot during memory-based pursuit to enhance retinal image motion. Simple ramp-pursuit was used as a task that did not require visual motion working memory. To clarify aging effects, we then compared the results with the previous young subject data. During memory-based pursuit, elderly subjects exhibited normal working memory of cue information. Most movement-parameters including pursuit latencies differed significantly between memory-based pursuit and simple ramp-pursuit and also between young and elderly subjects. Popping-out of the correct spot motion was ineffective for enhancing initial pursuit in elderly subjects. However, the latency difference between memory-based pursuit and simple ramp-pursuit in individual subjects, which includes decision-making delay in the memory task, was similar between the two groups. Our results suggest that smooth-pursuit latencies depend on task conditions and that, although the extra-retinal mechanisms were functional for initial pursuit in elderly subjects, they were less effective.

  17. A comparison of regional flood frequency analysis approaches in a simulation framework

    NASA Astrophysics Data System (ADS)

    Ganora, D.; Laio, F.

    2016-07-01

    Regional frequency analysis (RFA) is a well-established methodology to provide an estimate of the flood frequency curve at ungauged (or scarcely gauged) sites. Different RFA approaches exist, depending on the way the information is transferred to the site of interest, but it is not clear in the literature whether a specific method systematically outperforms the others. The aim of this study is to provide a framework for carrying out this intercomparison by building up a virtual environment based on synthetically generated data. The considered regional approaches include: (i) a unique regional curve for the whole region; (ii) a multiple-region model where homogeneous subregions are determined through cluster analysis; (iii) a Region-of-Influence model which defines a homogeneous subregion for each site; (iv) a spatially smooth estimation procedure where the parameters of the regional model vary continuously across space. Virtual environments are generated considering different patterns of heterogeneity, including step changes and smooth variations. If the region is heterogeneous, with the parent distribution changing continuously within the region, the spatially smooth regional approach outperforms the others, with overall errors 10-50% lower than those of the other methods. In the case of a step change, the spatially smooth and clustering procedures perform similarly if the heterogeneity is moderate, while clustering procedures work better when the step change is severe. To extend our findings, an extensive sensitivity analysis has been performed to investigate the effect of sample length, number of virtual stations, return period of the predicted quantile, variability of the scale parameter of the parent distribution, number of predictor variables and different parent distributions. Overall, the spatially smooth approach appears to be the most robust, as its performance is more stable across different patterns of heterogeneity, especially when short records are considered.

  18. 2D dynamic studies combined with the surface curvature analysis to predict Arias Intensity amplification

    NASA Astrophysics Data System (ADS)

    Torgoev, Almaz; Havenith, Hans-Balder

    2016-07-01

    A 2D elasto-dynamic modelling of the pure topographic seismic response is performed for six models with a total length of around 23.0 km. These models are reconstructed from the real topographic settings of the landslide-prone slopes situated in the Mailuu-Suu River Valley, Southern Kyrgyzstan. The main studied parameter is the Arias Intensity (Ia, m/sec), which is applied in the GIS-based Newmark method to regionally map the seismically-induced landslide susceptibility. This method maps the Ia values via empirical attenuation laws, and our studies investigate the potential to include topographic input into them. The numerical studies analyse several signals with varying shape and changing central frequency values. All tests demonstrate that the spectral amplification patterns directly affect the amplification of the Ia values. These results allow us to link the 2D distribution of the topographically amplified Ia values with a parameter referred to as smoothed curvature. The amplification values for the low-frequency signals are better correlated with the curvature smoothed over a larger spatial extent, while those for the high-frequency signals are more closely linked to the curvature with a smaller smoothing extent. The best predictions are provided by the curvature smoothed over the extent calculated according to Geli's law. Sample equations predicting the Ia amplification based on the smoothed curvature are presented for the sinusoid-shaped input signals. These laws cannot be directly implemented in the regional Newmark method, as 3D amplification of the Ia values involves additional complexities which are not studied here. Nevertheless, our 2D results prepare the theoretical framework which can potentially be applied to the 3D domain and, therefore, represent a robust basis for these future research targets.
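
    For reference, the Arias Intensity of an acceleration time history a(t) of duration T_d is the standard integral measure

      I_a = \frac{\pi}{2g} \int_0^{T_d} a(t)^2 \, dt,

    with g the gravitational acceleration; the topographic amplification studied above is then, in essence, the ratio between I_a computed at a point on the surface model and I_a of the reference input motion.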

  19. Humans make near-optimal adjustments of control to initial body configuration in vertical squat jumping.

    PubMed

    Bobbert, Maarten F; Richard Casius, L J; Kistemaker, Dinant A

    2013-05-01

    We investigated adjustments of control to initial posture in squat jumping. Eleven male subjects jumped from three initial postures: preferred initial posture (PP), a posture in which the trunk was rotated 18° more backward (BP) and a posture in which it was rotated 15° more forward (FP) than in PP. Kinematics, ground reaction forces and electromyograms (EMG) were collected. EMG was rectified and smoothed to obtain smoothed rectified EMG (srEMG). Subjects showed adjustments in srEMG histories, most conspicuously a shift in srEMG-onset of rectus femoris (REC): from early in BP to late in FP. Jumps from the subjects' initial postures were simulated with a musculoskeletal model comprising four segments and six Hill-type muscles, which had muscle stimulation (STIM) over time as input. STIM of each muscle changed from initial to maximal at STIM-onset, and STIM-onsets were optimized using jump height as criterion. Optimal simulated jumps from BP, PP and FP were similar to jumps of the subjects. Optimal solutions primarily differed in STIM-onset of REC: from early in BP to late in FP. Because the subjects' adjustments in srEMG-onsets were similar to adjustments of the model's optimal STIM-onsets, it was concluded that the former were near-optimal. With the model we also showed that near-maximum jumps from BP, PP and FP could be achieved when STIM-onset of REC depended on initial hip joint angle and STIM-onsets of the other muscles were posture-independent. A control theory that relies on a mapping from initial posture to STIM-onsets seems a parsimonious alternative to theories relying on internal optimal control models. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  20. Controlled laboratory testing of arthroscopic shaver systems: do blades, contact pressure, and speed influence their performance?

    PubMed

    Wieser, Karl; Erschbamer, Matthias; Neuhofer, Stefan; Ek, Eugene T; Gerber, Christian; Meyer, Dominik C

    2012-10-01

    The purposes of this study were (1) to establish a reproducible, standardized testing protocol to evaluate the performance of different shaver systems and blades in a controlled, laboratory setting, and (2) to determine the optimal use of different blades with respect to the influence of contact pressure and speed of blade rotation. A holding device was developed for reproducible testing of soft-tissue (tendon and meniscal) resection performance in a submerged environment, after loading of the shaver with interchangeable weights. The Karl Storz Powershaver S2 (Karl Storz, Tuttlingen, Germany), the Stryker Power Shaver System (Stryker, Kalamazoo, MI), and the Dyonics Power Shaver System (Smith & Nephew, Andover, MA) were tested, with different 5.5-mm shaver blades and varied contact pressure and rotation speed. For quality testing, serrated shaver blades were evaluated at 40× image magnification. Overall, more than 150 test cycles were performed. No significant differences could be detected between comparable blade types from different manufacturers. Shavers with a serrated inner blade and smooth outer blade performed significantly better than the standard smooth resectors (P < .001). Teeth on the outer layer of the blade did not lead to any further improvement of resection (P = .482). Optimal contact pressure ranged between 6 and 8 N, and optimal speed was found to be 2,000 to 2,500 rpm. Minimal blunting of the shaver blades occurred after soft-tissue resection; however, with bone resection, progressive blunting of the shaver blades was observed. Arthroscopic shavers can be tested in a controlled setting. The performance of the tested shaver types appears to be fairly independent of the manufacturer. For tendon resection, a smooth outer blade and serrated inner blade were optimal. This is one of the first established independent and quantitative assessments of arthroscopic shaver systems and blades. We believe that this study will assist the surgeon in choosing the optimal tool for the desired effect. Copyright © 2012 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  1. Efficient Mean Field Variational Algorithm for Data Assimilation (Invited)

    NASA Astrophysics Data System (ADS)

    Vrettas, M. D.; Cornford, D.; Opper, M.

    2013-12-01

    Data assimilation algorithms combine available observations of physical systems with the assumed model dynamics in a systematic manner, to produce better estimates of initial conditions for prediction. Broadly they can be categorized into three main approaches: (a) sequential algorithms, (b) sampling methods and (c) variational algorithms which transform the density estimation problem to an optimization problem. However, given finite computational resources, only a handful of ensemble Kalman filters and 4DVar algorithms have been applied operationally to very high dimensional geophysical applications, such as weather forecasting. In this paper we present a recent extension to our variational Bayesian algorithm which seeks the 'optimal' posterior distribution over the continuous-time states, within a family of non-stationary Gaussian processes. Our initial work on variational Bayesian approaches to data assimilation, unlike the well-known 4DVar method which seeks only the most probable solution, computes the best time-varying Gaussian process approximation to the posterior smoothing distribution for dynamical systems that can be represented by stochastic differential equations. This approach was based on minimising the Kullback-Leibler divergence, over paths, between the true posterior and our Gaussian process approximation. Whilst the observations were informative enough to keep the posterior smoothing density close to Gaussian, the algorithm proved very effective on low-dimensional systems (e.g. O(10)D). However, for higher dimensional systems, the high computational demands make the algorithm prohibitively expensive. To overcome the difficulties presented in the original framework and make our approach more efficient in higher dimensional systems, we have been developing a new mean field version of the algorithm which treats the state variables at any given time as being independent in the posterior approximation, while still accounting for their relationships in the mean solution arising from the original system dynamics. Here we present this new mean field approach, illustrating its performance on a range of benchmark data assimilation problems whose dimensionality varies from O(10) to O(10^3)D. We emphasise that the variational Bayesian approach we adopt, unlike other variational approaches, provides a natural bound on the marginal likelihood of the observations given the model parameters, which also allows for inference of (hyper-) parameters such as observational errors, parameters in the dynamical model and model error representation. We also stress that since our approach is intrinsically parallel it can be implemented very efficiently to address very long data assimilation time windows. Moreover, like most traditional variational approaches, our Bayesian variational method has the benefit of being posed as an optimisation problem, so its complexity can be tuned to the available computational resources. We finish with a sketch of possible future directions.

  2. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
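
    The inverse-filter construction described above is compact enough to sketch directly (illustrative code, not the paper's; a well-conditioned response with no zero DFT values is assumed, and the filter-length optimization is only hinted at):

      import numpy as np

      def inverse_filter(response, n_data, length=None):
          # Filter = inverse DFT of the reciprocal of the DFT of the response.
          # No regularization is applied here, which is why noise removal
          # (e.g. Morrison's smoothing) must precede deconvolution.
          H = np.fft.fft(response, n=n_data)
          h_inv = np.fft.ifft(1.0 / H).real
          if length is not None:
              h_inv = h_inv[:length]   # truncated filter length, to be optimized
          return h_inv

      def deconvolve(data, response, length=None):
          return np.convolve(data, inverse_filter(response, len(data), length),
                             mode='same')

    Plotting deconvolution error against the truncation length reproduces the kind of error-versus-filter-length curves used in the study to pick the most accurate filters.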

  3. Research on bulbous bow optimization based on the improved PSO algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng-long; Zhang, Bao-ji; Tezdogan, Tahsin; Xu, Le-ping; Lai, Yu-yang

    2017-08-01

    In order to reduce the total resistance of a hull, an optimization framework for bulbous bow optimization is presented. The total resistance in calm water was selected as the objective function, and the overset mesh technique was used for mesh generation. The RANS method was used to calculate the total resistance of the hull. In order to improve the efficiency and smoothness of the geometric reconstruction, the arbitrary shape deformation (ASD) technique was introduced to change the shape of the bulbous bow. To improve the global search ability of the particle swarm optimization (PSO) algorithm, an improved particle swarm optimization (IPSO) algorithm was proposed to set up the optimization model. After a series of optimization analyses, the optimal hull form was found. It can be concluded that the simulation-based design framework built in this paper is a promising method for bulbous bow optimization.
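
    As background for the optimizer, a canonical particle swarm update is sketched below (this is the baseline PSO, not the paper's IPSO variant, whose global-search improvements are not reproduced; all names and constants are illustrative):

      import numpy as np

      def pso_minimize(f, lo, hi, n_particles=20, n_iter=100,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
          # lo, hi: per-dimension bounds of the design variables
          rng = np.random.default_rng(seed)
          dim = len(lo)
          x = rng.uniform(lo, hi, (n_particles, dim))
          v = np.zeros_like(x)
          pbest = x.copy()
          pbest_f = np.array([f(p) for p in x])
          g = pbest[np.argmin(pbest_f)].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, dim))
              # inertia + cognitive pull toward pbest + social pull toward gbest
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              fx = np.array([f(p) for p in x])
              better = fx < pbest_f
              pbest[better], pbest_f[better] = x[better], fx[better]
              g = pbest[np.argmin(pbest_f)].copy()
          return g, pbest_f.min()

    In the hull-optimization setting, f would wrap the RANS resistance evaluation of a candidate bulbous-bow shape parameterized through the ASD deformation variables.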

  4. The clustering of galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: effect of smoothing of density field on reconstruction and anisotropic BAO analysis

    NASA Astrophysics Data System (ADS)

    Vargas-Magaña, Mariana; Ho, Shirley; Fromenteau, Sebastien.; Cuesta, Antonio. J.

    2017-05-01

    The reconstruction algorithm introduced by Eisenstein et al., which is widely used in clustering analysis, is based on the inference of the first-order Lagrangian displacement field from the Gaussian smoothed galaxy density field in redshift space. The smoothing scale applied to the density field affects the inferred displacement field that is used to move the galaxies, and partially erases the non-linear evolution of the density field. In this article, we explore this crucial step in the reconstruction algorithm. We study the performance of the reconstruction technique using two metrics: first, we study the performance using the anisotropic clustering, extending previous studies focused on isotropic clustering; secondly, we study its effect on the displacement field. We find that smoothing has a strong effect on the quadrupole of the correlation function and affects the accuracy and precision with which we can measure DA(z) and H(z). We find that the optimal smoothing scale to use in the reconstruction algorithm applied to the Baryon Oscillation Spectroscopic Survey Constant (stellar) Mass (CMASS) sample is between 5 and 10 h^-1 Mpc. Varying from the 'usual' 15 h^-1 Mpc to 5 h^-1 Mpc shows ~0.3 per cent variations in DA(z) and ~0.4 per cent in H(z), and uncertainties are also reduced by 40 per cent and 30 per cent, respectively. We also find that the accuracy of velocity field reconstruction depends strongly on the smoothing scale used for the density field. We measure the bias and uncertainties associated with different choices of smoothing length.
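
    The smoothing step examined in this paper is a Fourier-space multiplication by a Gaussian kernel; a minimal sketch on a periodic grid follows (illustrative code, with the function name and the cubic-box assumption being ours, not the authors'):

      import numpy as np

      def smooth_density(delta, box_size, R):
          # delta: 3D overdensity field on an n^3 periodic grid
          # box_size: box side length; R: Gaussian smoothing scale (same units)
          n = delta.shape[0]
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
          kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
          k2 = kx**2 + ky**2 + kz**2
          window = np.exp(-0.5 * k2 * R**2)   # Gaussian kernel in k-space
          return np.fft.ifftn(np.fft.fftn(delta) * window).real

    The paper's finding is essentially about the choice of R in this kernel: 5-10 h^-1 Mpc yields a better-behaved displacement field for CMASS than the usual 15 h^-1 Mpc.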

  5. The biophysics of asthmatic airway smooth muscle.

    PubMed

    Stephens, Newman L; Li, Weilong; Jiang, He; Unruh, H; Ma, Xuefei

    2003-09-16

    It is clear that significant advances have been made in the understanding of the physiology, biochemistry and molecular biology of airway smooth muscle (ASM) contraction, and in how the knowledge obtained from these approaches may be used to elucidate the pathogenesis of asthma. Not to belittle other theories of smooth muscle contraction extant in the field, perhaps the most outstanding development has been the formulation of plasticity theory. This may radically alter our understanding of smooth muscle contraction. Its message is that while shortening velocity and capacity are linear functions of length, active force is length-independent. These changes are explained by the ability of thick-filament protein to depolymerize at short lengths and to increase the number of contractile units in series at lengths greater than the optimal length, L(ref). Other advances are represented by the reports that the major part of ASM shortening is complete within the first 20% of contraction time, that the nature and history of loading determine the extent of shortening, and that these findings can be explained by the observation that the crossbridges cycle four times faster during that period than in the remaining time. Another unexpected finding is that late in the course of isotonic relaxation the muscle undergoes spontaneous activation, which delays relaxation and smoothes it out; speculatively, this could minimize turbulence of airflow. On the applied front, evidence now shows that the shortening ability of bronchial smooth muscle from human subjects with asthma is significantly increased. Measurements also indicate that increased smooth muscle myosin light chain kinase content, via increased actomyosin ATPase activity, could be responsible for the changes in contractility.

  6. Localized states in an unbounded neural field equation with smooth firing rate function: a multi-parameter analysis.

    PubMed

    Faye, Grégory; Rankin, James; Chossat, Pascal

    2013-05-01

    The existence of spatially localized solutions in neural networks is an important topic in neuroscience as these solutions are considered to characterize working (short-term) memory. We work with an unbounded neural network represented by the neural field equation with smooth firing rate function and a wizard hat spatial connectivity. Noting that stationary solutions of our neural field equation are equivalent to homoclinic orbits in a related fourth order ordinary differential equation, we apply normal form theory for a reversible Hopf bifurcation to prove the existence of localized solutions; further, we present results concerning their stability. Numerical continuation is used to compute branches of localized solution that exhibit snaking-type behaviour. We describe in terms of three parameters the exact regions for which localized solutions persist.

  7. Development of Advanced Methods of Structural and Trajectory Analysis for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.; Windhorst, Robert; Phillips, James

    1998-01-01

    This paper develops a near-optimal guidance law for generating minimum-fuel, minimum-time, or minimum-cost fixed-range trajectories for supersonic transport aircraft. The approach uses a choice of new state variables along with singular perturbation techniques to time-scale decouple the dynamic equations into multiple equations of single order (second order for the fast dynamics). Application of the maximum principle to each of the decoupled equations, as opposed to application to the original coupled equations, avoids the two-point boundary value problem and transforms the problem from a functional optimization into multiple function optimizations. It is shown that such an approach produces well-known aircraft performance results such as the Breguet factor for minimum fuel consumption and the energy climb path. Furthermore, the new state variables produce a consistent calculation of flight path angle along the trajectory, eliminating one of the deficiencies in the traditional energy state approximation. In addition, jumps in the energy climb path are smoothed out by integration of the original dynamic equations at constant load factor. Numerical results for a supersonic transport design show that a pushover dive followed by a pullout at nominal load factors are sufficient maneuvers to smooth the jump.

  8. Smoothing strategies combined with ARIMA and neural networks to improve the forecasting of traffic accidents.

    PubMed

    Barba, Lida; Rodríguez, Nibaldo; Montt, Cecilia

    2014-01-01

    Two smoothing strategies combined with autoregressive integrated moving average (ARIMA) and autoregressive neural networks (ANNs) models to improve the forecasting of time series are presented. The strategy of forecasting is implemented using two stages. In the first stage the time series is smoothed using either 3-point moving average smoothing, or singular value decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs for one-step-ahead time series forecasting are used. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents of the Valparaíso region, Chile, from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO with a MAPE of 15.51%.
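
    Both first-stage smoothers are simple to state. The sketch below is illustrative (function names and the Hankel window length are our assumptions; the paper's exact HSVD configuration is not reproduced):

      import numpy as np

      def moving_average_3(x):
          # 3-point moving-average smoothing
          return np.convolve(x, np.ones(3) / 3.0, mode='same')

      def hsvd_smooth(x, L=None, rank=1):
          # Smooth a series via a rank-truncated SVD of its Hankel embedding,
          # then return to a series by averaging the anti-diagonals.
          x = np.asarray(x, dtype=float)
          n = len(x)
          L = L or n // 2
          K = n - L + 1
          H = np.lib.stride_tricks.sliding_window_view(x, K)   # L x K Hankel
          U, s, Vt = np.linalg.svd(H, full_matrices=False)
          Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
          out = np.zeros(n)
          cnt = np.zeros(n)
          for i in range(L):
              out[i:i + K] += Hr[i]    # Hr[i, j] contributes to position i + j
              cnt[i:i + K] += 1.0
          return out / cnt

    The smoothed series then feeds the second-stage ARIMA or neural network forecaster.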

  9. A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications

    NASA Astrophysics Data System (ADS)

    Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.

    2018-04-01

    Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separated by hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative to reconstruct the image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least squares method in recovering the model parameters with far fewer data, yet preserving the sharp resistivity fronts separated by geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.

  10. Seeking to Improve Low Energy Neutral Atom Detection in Space

    NASA Technical Reports Server (NTRS)

    Shappirio, M.; Coplan, M.; Chornay, D.; Collier, M.; Herrero, F.; Ogilvie, K.; Williams, E.

    2007-01-01

    The detection of energetic neutral atoms allows for the remote examination of the interactions between plasmas and neutral populations in space. Before these neutral atoms can be measured, they must first be converted to ions. For the low-energy end of this spectrum, interaction with a conversion surface is often the most efficient method to convert neutrals into ions. It is generally thought that the most efficient surfaces are low-work-function materials. However, by their very nature, these surfaces are highly reactive and unstable, and therefore are not suitable for space missions, where conditions cannot be controlled as they are in a laboratory. We are therefore looking to optimize a stable surface for conversion efficiency. Conversion efficiency can be increased by changing the incident angle of the neutral particles to grazing incidence and by using stable surfaces with high conversion efficiencies. We have examined how to increase the angle of incidence from approximately 80 degrees to approximately 89 degrees, while maintaining or improving the total active conversion surface area without increasing the overall volume of the instrument. We are developing a method to micro-machine silicon, which will reduce the volume-to-surface-area ratio by a factor of 60. We have also examined the material properties that affect the conversion efficiency of stable surfaces. Some of the parameters we have examined are work function, smoothness, and bond structure. We find that for stable surfaces, the most important property is the smoothness of the surface.

  11. Ultrasound window-modulated compounding Nakagami imaging: Resolution improvement and computational acceleration for liver characterization.

    PubMed

    Ma, Hsiang-Yang; Lin, Ying-Hsiu; Wang, Chiao-Yin; Chen, Chiung-Nien; Ho, Ming-Chih; Tsui, Po-Hsiang

    2016-08-01

    Ultrasound Nakagami imaging is an attractive method for visualizing changes in envelope statistics. Window-modulated compounding (WMC) Nakagami imaging was reported to improve image smoothness. The sliding window technique is typically used for constructing ultrasound parametric and Nakagami images. Using a large window overlap ratio may improve the WMC Nakagami image resolution but reduces computational efficiency. Therefore, the objectives of this study were: (i) to explore the effects of the window overlap ratio on the resolution and smoothness of WMC Nakagami images; (ii) to propose a fast algorithm based on the convolution operator (FACO) to accelerate WMC Nakagami imaging. Computer simulations and preliminary clinical tests on liver fibrosis samples (n=48) were performed to validate the FACO-based WMC Nakagami imaging. The results demonstrated that the width of the autocorrelation function and the spread of the parameter distribution of the WMC Nakagami image decrease as the window overlap ratio increases. One-pixel shifting (i.e., sliding the window over the image data in steps of one pixel for parametric imaging) as the maximum overlap ratio significantly improves the WMC Nakagami image quality. Concurrently, the proposed FACO method combined with a computational platform that optimizes the matrix computation can accelerate WMC Nakagami imaging, allowing the detection of liver fibrosis-induced changes in envelope statistics. FACO-accelerated WMC Nakagami imaging is a new-generation Nakagami imaging technique with improved image quality and fast computation. Copyright © 2016 Elsevier B.V. All rights reserved.
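
    For illustration, the sketch below computes a moment-based Nakagami m-parameter map with the window slid in steps of one pixel, implemented as a mean-filter convolution in the spirit (though not the detail) of the convolution-based acceleration described above; the envelope data and window size are assumptions.

    ```python
    # Minimal sketch of one-pixel-shift Nakagami parametric imaging via convolution.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def nakagami_map(env, win=15):
        """Moment-based Nakagami m-parameter at every pixel.

        m = E[R^2]^2 / Var(R^2); local moments are computed with a mean filter,
        which is equivalent to sliding the window in one-pixel steps.
        """
        r2 = env ** 2
        e_r2 = uniform_filter(r2, size=win)         # local E[R^2]
        e_r4 = uniform_filter(r2 ** 2, size=win)    # local E[R^4]
        var_r2 = np.maximum(e_r4 - e_r2 ** 2, 1e-12)
        return e_r2 ** 2 / var_r2

    # Toy envelope data: Rayleigh-distributed speckle, for which the true m = 1.
    rng = np.random.default_rng(1)
    env = rng.rayleigh(scale=1.0, size=(128, 128))
    m_img = nakagami_map(env)
    print("median m estimate:", np.median(m_img))   # should be close to 1
    ```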

  12. MO-G-17A-07: Improved Image Quality in Brain F-18 FDG PET Using Penalized-Likelihood Image Reconstruction Via a Generalized Preconditioned Alternating Projection Algorithm: The First Patient Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidtlein, CR; Beattie, B; Humm, J

    2014-06-15

    Purpose: To investigate the performance of a new penalized-likelihood PET image reconstruction algorithm using the l1-norm total-variation (TV) sum of the 1st- through 4th-order gradients as the penalty. Simulated and brain patient data sets were analyzed. Methods: This work represents an extension of the preconditioned alternating projection algorithm (PAPA) for emission-computed tomography. In this new generalized algorithm (GPAPA), the penalty term is expanded to allow multiple components, in this case the sum of the 1st- to 4th-order gradients, to reduce artificial piece-wise constant regions (“staircase” artifacts typical for TV) seen in PAPA images penalized with only the 1st-order gradient. Simulated data were used to test for “staircase” artifacts and to optimize the penalty hyper-parameter in the root-mean-squared-error (RMSE) sense. Patient FDG brain scans were acquired on a GE D690 PET/CT (370 MBq at 1-hour post-injection for 10 minutes) in time-of-flight mode and in all cases were reconstructed using resolution-recovery projectors. GPAPA images were compared to PAPA and to RMSE-optimally filtered OSEM (fully converged) in simulations, and to clinical OSEM reconstructions (3 iterations, 32 subsets) with 2.6 mm XY Gaussian and standard 3-point axial smoothing post-filters. Results: The results from the simulated data show a significant reduction in the “staircase” artifact for GPAPA compared to PAPA and lower RMSE (up to 35%) compared to optimally filtered OSEM. A simple power-law relationship between the RMSE-optimal hyper-parameters and the noise-equivalent counts (NEC) per voxel is revealed. Qualitatively, the patient images appear much sharper and with less noise than standard clinical images. The convergence rate is similar to OSEM. Conclusions: GPAPA reconstructions using the l1-norm total-variation sum of the 1st- through 4th-order gradients as the penalty show great promise for the improvement of image quality over that currently achieved with clinical OSEM reconstructions.
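
    The composite penalty can be written, in simplified axis-wise form, as a weighted sum of l1 norms of 1st- through 4th-order finite differences. The sketch below evaluates such a penalty; the weights are assumptions, and the full GPAPA iteration (preconditioned alternating projections) is not reproduced here.

    ```python
    # Minimal sketch of a multi-order TV-style penalty on an image.
    import numpy as np

    def multi_order_tv(img, weights=(1.0, 0.5, 0.25, 0.125)):
        """Weighted sum over orders k of ||D^k img||_1 along each image axis."""
        penalty = 0.0
        for axis in (0, 1):
            for k, w in enumerate(weights, start=1):
                # np.diff with n=k gives the k-th order finite difference.
                penalty += w * np.abs(np.diff(img, n=k, axis=axis)).sum()
        return penalty

    # A piecewise-constant image incurs no higher-order cost on its flat regions,
    # while "staircase" ramps are penalized by the 2nd- to 4th-order terms.
    img = np.zeros((64, 64))
    img[:, 32:] = 1.0
    print(multi_order_tv(img))
    ```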

  13. Pair Identity and Smooth Variation Rules Applicable for the Spectroscopic Parameters of H2O Transitions Involving High-J States

    NASA Technical Reports Server (NTRS)

    Ma, Q.; Tipping, R. H.; Lavrentieva, N. N.

    2010-01-01

    Two basic rules (i.e. the pair identity and the smooth variation) applicable for H2O transitions involving high-J states have been discovered. The origins of these rules are the properties of the energy levels and wavefunctions of H2O states with the quantum number J above certain boundaries. As a result, for lines involving high-J states in individually defined groups, all their spectroscopic parameters (i.e. the transition wavenumber, intensity, pressure-broadened half-width, pressure-induced shift, and temperature exponent) must follow these rules. One can use these rules to screen spectroscopic data provided by databases and to identify possible errors. In addition, by using extrapolation methods within the individual groups, one is able to predict the spectroscopic parameters for lines in this group involving very high-J states. The latter are required in developing high-temperature molecular spectroscopic databases such as HITEMP.

  14. On configurational forces for gradient-enhanced inelasticity

    NASA Astrophysics Data System (ADS)

    Floros, Dimosthenis; Larsson, Fredrik; Runesson, Kenneth

    2018-04-01

    In this paper we discuss how configurational forces can be computed in an efficient and robust manner when a constitutive continuum model of gradient-enhanced viscoplasticity is adopted, whereby a suitably tailored mixed variational formulation in terms of displacements and micro-stresses is used. It is demonstrated that such a formulation produces sufficient regularity to overcome numerical difficulties that are notorious for a local constitutive model. In particular, no nodal smoothing of the internal variable fields is required. Moreover, the pathological mesh sensitivity that has been reported in the literature for a standard local model is no longer present. Numerical results in terms of configurational forces are shown for (1) a smooth interface and (2) a discrete edge crack. The corresponding configurational forces are computed for different values of the intrinsic length parameter. It is concluded that the convergence of the computed configurational forces with mesh refinement depends strongly on this parameter value. Moreover, the convergence behavior for the limit situation of rate-independent plasticity is unaffected by the relaxation time parameter.

  15. Nanostructured lipid carriers for oral bioavailability enhancement of raloxifene: Design and in vivo study.

    PubMed

    Shah, Nirmal V; Seth, Avinash K; Balaraman, R; Aundhia, Chintan J; Maheshwari, Rajesh A; Parmar, Ghanshyam R

    2016-05-01

    The objective of the present work was to utilize the potential of nanostructured lipid carriers (NLCs) for improvement in the oral bioavailability of raloxifene hydrochloride (RLX). RLX-loaded NLCs were prepared by the solvent diffusion method using glyceryl monostearate and Capmul MCM C8 as solid lipid and liquid lipid, respectively. A full 3² factorial design was utilized to study the effect of two independent parameters, namely the solid lipid to liquid lipid ratio and the concentration of stabilizer, on the entrapment efficiency of the prepared NLCs. The statistical evaluation confirmed a pronounced improvement in entrapment efficiency when the liquid lipid content in the formulation increased from 5% w/w to 15% w/w. Solid-state characterization studies (DSC and XRD) of the optimized formulation NLC-8 revealed transformation of RLX from crystalline to amorphous form. The optimized formulation showed a 32.50 ± 5.12 nm average particle size and a -12.8 ± 3.2 mV zeta potential, which impart good stability to the NLC dispersion. An in vitro release study showed burst release for the initial 8 h followed by sustained release up to 36 h. A TEM study confirmed smooth-surfaced, discrete, spherical, nano-sized particles. To draw a final conclusion, an in vivo pharmacokinetic study was carried out, which showed a 3.75-fold enhancement in bioavailability with the optimized NLC formulation compared with plain drug suspension. These results show the potential of NLCs for significant improvement in the oral bioavailability of poorly soluble RLX.

  16. Nanostructured lipid carriers for oral bioavailability enhancement of raloxifene: Design and in vivo study

    PubMed Central

    Shah, Nirmal V.; Seth, Avinash K.; Balaraman, R.; Aundhia, Chintan J.; Maheshwari, Rajesh A.; Parmar, Ghanshyam R.

    2016-01-01

    The objective of the present work was to utilize the potential of nanostructured lipid carriers (NLCs) for improvement in the oral bioavailability of raloxifene hydrochloride (RLX). RLX-loaded NLCs were prepared by the solvent diffusion method using glyceryl monostearate and Capmul MCM C8 as solid lipid and liquid lipid, respectively. A full 3² factorial design was utilized to study the effect of two independent parameters, namely the solid lipid to liquid lipid ratio and the concentration of stabilizer, on the entrapment efficiency of the prepared NLCs. The statistical evaluation confirmed a pronounced improvement in entrapment efficiency when the liquid lipid content in the formulation increased from 5% w/w to 15% w/w. Solid-state characterization studies (DSC and XRD) of the optimized formulation NLC-8 revealed transformation of RLX from crystalline to amorphous form. The optimized formulation showed a 32.50 ± 5.12 nm average particle size and a −12.8 ± 3.2 mV zeta potential, which impart good stability to the NLC dispersion. An in vitro release study showed burst release for the initial 8 h followed by sustained release up to 36 h. A TEM study confirmed smooth-surfaced, discrete, spherical, nano-sized particles. To draw a final conclusion, an in vivo pharmacokinetic study was carried out, which showed a 3.75-fold enhancement in bioavailability with the optimized NLC formulation compared with plain drug suspension. These results show the potential of NLCs for significant improvement in the oral bioavailability of poorly soluble RLX. PMID:27222747

  17. Smoothing analysis of slug tests data for aquifer characterization at laboratory scale

    NASA Astrophysics Data System (ADS)

    Aristodemo, Francesco; Ianchello, Mario; Fallico, Carmine

    2018-07-01

    The present paper proposes a smoothing analysis of hydraulic head data sets obtained by means of different slug tests introduced in a confined aquifer. Laboratory experiments were performed through a 3D large-scale physical model built at the University of Calabria. The hydraulic head data were obtained by a pressure transducer placed in the injection well and subjected to a processing operation to smooth out the high-frequency noise occurring in the recorded signals. The adopted smoothing techniques, working in the time, frequency and time-frequency domains, are the Savitzky-Golay filter modeled by a third-order polynomial, the Fourier Transform and two types of Wavelet Transform (Mexican hat and Morlet). The performances of the filtered time series of the hydraulic heads for different slug volumes and measurement frequencies were statistically analyzed in terms of optimal fitting of the classical Cooper's equation. For practical purposes, the hydraulic heads smoothed by the involved techniques were used to determine the hydraulic conductivity of the aquifer. The energy contents and the frequency oscillations of the hydraulic head variations in the aquifer were explored in the time-frequency domain by means of the Wavelet Transform, as were the non-linear features of the observed hydraulic head oscillations around the theoretical Cooper's equation.
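
    For illustration, the sketch below smooths a noisy slug-test head record with a third-order Savitzky-Golay filter and a simple FFT low-pass, two of the technique families compared in the study. The sampling rate, window length and cutoff are assumptions, not the authors' settings.

    ```python
    # Minimal sketch of Savitzky-Golay and Fourier smoothing of head data.
    import numpy as np
    from scipy.signal import savgol_filter

    fs = 50.0                                   # assumed sampling frequency [Hz]
    t = np.arange(0, 20, 1 / fs)
    head = 0.5 * np.exp(-t / 6.0)               # idealized Cooper-type recovery curve
    noisy = head + 0.01 * np.random.default_rng(2).standard_normal(t.size)

    # Savitzky-Golay: local third-order polynomial over a 51-sample window.
    sg = savgol_filter(noisy, window_length=51, polyorder=3)

    # FFT low-pass: zero out components above an assumed 1 Hz cutoff.
    H = np.fft.rfft(noisy)
    f = np.fft.rfftfreq(noisy.size, d=1 / fs)
    H[f > 1.0] = 0.0
    fft_smooth = np.fft.irfft(H, n=noisy.size)

    print("RMSE raw:", np.sqrt(np.mean((noisy - head) ** 2)))
    print("RMSE S-G:", np.sqrt(np.mean((sg - head) ** 2)))
    print("RMSE FFT:", np.sqrt(np.mean((fft_smooth - head) ** 2)))
    ```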

  18. GlobiPack v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe

    2010-03-31

    GlobiPack contains a small collection of optimization globalization algorithms. These algorithms are used by optimization and various nonlinear equation solver algorithms, serving as the line-search procedure in Newton and quasi-Newton optimization and nonlinear equation solver methods. These are standard published 1-D line-search algorithms such as those described in Nocedal and Wright, Numerical Optimization, 2nd edition, 2006. One set of algorithms was copied and refactored from the existing open-source Trilinos package MOOCHO, where the line-search code is used to globalize SQP methods. This software is generic to any mathematical optimization problem where smooth derivatives exist. There is no specific connection or mention whatsoever of any specific application, period. You cannot find more general mathematical software.

  19. Comparative Study of Speckle Filtering Methods in PolSAR Radar Images

    NASA Astrophysics Data System (ADS)

    Boutarfa, S.; Bouchemakh, L.; Smara, Y.

    2015-04-01

    Images acquired by polarimetric SAR (PolSAR) radar systems are characterized by the presence of a noise called speckle. This noise has a multiplicative nature and corrupts both the amplitude and phase images, which complicates data interpretation, degrades segmentation performance and reduces the detectability of targets. Hence the need to preprocess the images with adapted filtering methods before analysis. In this paper, we present a comparative study of implemented methods for reducing speckle in PolSAR images. These filters are: the refined Lee filter based on minimum mean square error (MMSE) estimation; the improved Sigma filter with detection of strong scatterers, based on the calculation of the coherency matrix to detect the different scatterers in order to preserve the polarization signature and maintain structures that are necessary for image interpretation; filtering by the stationary wavelet transform (SWT) using multi-scale edge detection and the technique for improving the wavelet coefficients called SSC (sum of squared coefficients); and the Turbo filter, a combination of two complementary filters (the refined Lee filter and the SWT wavelet transform) in which each filter can boost the results of the other. The originality of our work is based on the application of these methods to several types of images (amplitude, intensity and complex, from satellite or airborne radar) and on the optimization of wavelet filtering by adding a parameter to the calculation of the threshold. This parameter controls the filtering effect to achieve a good compromise between smoothing homogeneous areas and preserving linear structures. The methods are applied to fully polarimetric RADARSAT-2 images (HH, HV, VH, VV) acquired over Algiers, Algeria, in C-band and to three polarimetric E-SAR images (HH, HV, VV) acquired over the Oberpfaffenhofen area near Munich, Germany, in P-band. To evaluate the performance of each filter, we used the following criteria: smoothing of homogeneous areas, preservation of edges and preservation of polarimetric information. Experimental results are included to illustrate the different implemented methods.
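
    The tunable-threshold idea can be illustrated with a generic wavelet soft-thresholding pass in which a scale factor k multiplies the universal threshold; smaller k preserves structure, larger k smooths homogeneous areas more. This is a hedged sketch using PyWavelets, not the authors' SWT/SSC implementation; the wavelet, level and k value are assumptions.

    ```python
    # Minimal sketch of wavelet despeckling with a tunable threshold parameter k.
    import numpy as np
    import pywt

    def wavelet_despeckle(img, k=1.0, wavelet="db2", level=2):
        """Soft-threshold detail coefficients at k * sigma * sqrt(2 ln N)."""
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        # Robust noise estimate from the finest diagonal subband.
        sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
        thr = k * sigma * np.sqrt(2.0 * np.log(img.size))
        new_coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
            for detail in coeffs[1:]
        ]
        out = pywt.waverec2(new_coeffs, wavelet)
        return out[: img.shape[0], : img.shape[1]]   # crop if reconstruction pads

    rng = np.random.default_rng(3)
    img = rng.gamma(shape=4.0, scale=0.25, size=(128, 128))  # speckle-like intensity
    smoothed = wavelet_despeckle(img, k=0.8)
    print(smoothed.shape)
    ```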

  20. SU-E-T-314: Dosimetric Effect of Smooth Drilling On Proton Compensators in Prostate Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyhan, M; Yue, N; Zou, J

    2015-06-15

    Purpose: To evaluate the dosimetric effect of smooth drilling of proton compensators in proton prostate plans when compared to typical plunge-drilling settings. Methods: Twelve prostate patients were planned in the Eclipse treatment planning system using three different drill settings: Smooth, Plunge drill A, and Plunge drill B. The differences between A and B were: spacing X [cm]: 0.4 (A), 0.1 (B); spacing Y [cm]: 0.35 (A), 0.1 (B); row offset [cm]: 0.2 (A), 0 (B). Planning parameters were kept consistent between the different plans, which utilized a two opposed lateral beam arrangement. Mean absolute dosimetric differences in OAR constraints are presented. Results: The smooth-drilled compensator-based plans yielded equivalent target coverage to the plans generated with drill settings A and B. Overall, the smooth compensators reduced dose to the majority of organs at risk compared to settings A and B. Constraints were reduced for the following OARs: rectal V75 by 2.12 and 2.48%, V70 by 2.45 and 2.91%, V65 by 2.85 and 3.37%, V50 by 2.3 and 5.1%, bladder V65 by 4.49 and 3.67%, penile bulb mean by 3.7 and 4.2 Gy, and the maximum plan dose by 5.3 and 7.4 Gy, for option A vs. smooth and option B vs. smooth, respectively. The femoral head constraint (V50<5%) was met by all plans, but it was not consistently lower for the smooth-drilling plan. Conclusion: Smooth-drilled compensators provide equivalent target coverage and overall slightly cooler plans for the majority of organs at risk; smooth drilling also minimizes the potential dosimetric impacts caused by patient positioning uncertainty.

  1. Control and optimization system

    DOEpatents

    Xinsheng, Lou

    2013-02-12

    A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  2. Distortion control in 20MnCr5 bevel gears after liquid nitriding process to maintain precision dimensions

    NASA Astrophysics Data System (ADS)

    Mahendiran, M.; Kavitha, M.

    2018-02-01

    Robotic and automotive gears are generally very high-precision components with tight tolerances. Bevel gears are widely used, dimensionally very close-tolerance components that need stability, without any backlash or distortion, for smooth and trouble-free function. Nitriding is carried out to enhance the wear resistance of the surface. The aim of this paper is to reduce the distortion in the liquid nitriding process, though plasma nitriding is preferred for high-precision components. Various trials were conducted to optimize the process parameters, considering pre-dimensional settings for nominal nitriding layer growth. Surface cleaning, suitable fixtures and stress-relieving operations were also performed to optimize the process. Microstructural analysis and Vickers hardness testing were carried out to analyze the phase changes and the variation in surface hardness and case depth. A CNC gear testing machine was used to determine the distortion level. A white layer of about 10-15 μm was found within the case depth of 250 ± 3.5 μm, showing an average surface hardness of 670 HV. Hence the economical liquid nitriding process was successfully used to produce a high-hardness, wear-resistant coating on 20MnCr5 material with less distortion and a reduced need for secondary grinding for dimensional control.

  3. Spatio-temporal Granger causality: a new framework

    PubMed Central

    Luo, Qiang; Lu, Wenlian; Cheng, Wei; Valdes-Sosa, Pedro A.; Wen, Xiaotong; Ding, Mingzhou; Feng, Jianfeng

    2015-01-01

    That physiological oscillations of various frequencies are present in fMRI signals is the rule, not the exception. Herein, we propose a novel theoretical framework, spatio-temporal Granger causality, which allows us to more reliably and precisely estimate the Granger causality from experimental datasets possessing time-varying properties caused by physiological oscillations. Within this framework, Granger causality is redefined as a global index measuring the directed information flow between two time series with time-varying properties. Both theoretical analyses and numerical examples demonstrate that Granger causality is a monotonically increasing function of the temporal resolution used in the estimation. This is consistent with the general principle of coarse graining, which causes information loss by smoothing out very fine-scale details in time and space. Our results confirm that the Granger causality at the finer spatio-temporal scales considerably outperforms the traditional approach in terms of an improved consistency between two resting-state scans of the same subject. To optimally estimate the Granger causality, the proposed theoretical framework is implemented through a combination of several approaches, such as dividing the optimal time window and estimating the parameters at the fine temporal and spatial scales. Taken together, our approach provides a novel and robust framework for estimating the Granger causality from fMRI, EEG, and other related data. PMID:23643924
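
    The quantity being generalized above is the classical Granger causality. As a baseline illustration only (the spatio-temporal framework itself is not reproduced), the sketch below estimates bivariate Granger causality as the log ratio of restricted to full autoregressive residual variances; the model order and toy data are assumptions.

    ```python
    # Minimal sketch of classical bivariate Granger causality via AR model fits.
    import numpy as np

    def granger_xy(x, y, p=2):
        """log(var_restricted / var_full) for predicting y, with/without lags of x."""
        n = len(y)
        Y = y[p:]
        lag_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
        lag_xy = np.column_stack([lag_y] + [x[p - k:n - k] for k in range(1, p + 1)])
        res_r = Y - lag_y @ np.linalg.lstsq(lag_y, Y, rcond=None)[0]
        res_f = Y - lag_xy @ np.linalg.lstsq(lag_xy, Y, rcond=None)[0]
        return float(np.log(res_r.var() / res_f.var()))

    rng = np.random.default_rng(0)
    x = rng.standard_normal(2000)
    y = np.zeros(2000)
    for t in range(1, 2000):                   # y is driven by lagged x
        y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.standard_normal()

    print("GC x->y:", granger_xy(x, y))        # clearly positive
    print("GC y->x:", granger_xy(y, x))        # near zero
    ```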

  4. A study of acoustic-to-articulatory inversion of speech by analysis-by-synthesis using chain matrices and the Maeda articulatory model

    PubMed Central

    Panchapagesan, Sankaran; Alwan, Abeer

    2011-01-01

    In this paper, a quantitative study of acoustic-to-articulatory inversion for vowel speech sounds by analysis-by-synthesis using the Maeda articulatory model is performed. For chain matrix calculation of vocal tract (VT) acoustics, the chain matrix derivatives with respect to area function are calculated and used in a quasi-Newton method for optimizing articulatory trajectories. The cost function includes a distance measure between natural and synthesized first three formants, and parameter regularization and continuity terms. Calibration of the Maeda model to two speakers, one male and one female, from the University of Wisconsin x-ray microbeam (XRMB) database, using a cost function, is discussed. Model adaptation includes scaling the overall VT and the pharyngeal region and modifying the outer VT outline using measured palate and pharyngeal traces. The inversion optimization is initialized by a fast search of an articulatory codebook, which was pruned using XRMB data to improve inversion results. Good agreement between estimated midsagittal VT outlines and measured XRMB tongue pellet positions was achieved for several vowels and diphthongs for the male speaker, with average pellet-VT outline distances around 0.15 cm, smooth articulatory trajectories, and less than 1% average error in the first three formants. PMID:21476670

  5. Hermite WENO limiting for multi-moment finite-volume methods using the ADER-DT time discretization for 1-D systems of conservation laws

    DOE PAGES

    Norman, Matthew R.

    2014-11-24

    New Hermite Weighted Essentially Non-Oscillatory (HWENO) interpolants are developed and investigated within the Multi-Moment Finite-Volume (MMFV) formulation using the ADER-DT time discretization. Whereas traditional WENO methods interpolate pointwise, function-based WENO methods explicitly form a non-oscillatory, high-order polynomial over the cell in question. This study chooses a function-based approach and details how fast convergence to optimal weights for smooth flow is ensured. Methods of sixth-, eighth-, and tenth-order accuracy are developed. We compare these against traditional single-moment WENO methods of fifth-, seventh-, ninth-, and eleventh-order accuracy to compare against more familiar methods from the literature. The new HWENO methods improve upon existing HWENO methods (1) by giving a better resolution of unreinforced contact discontinuities and (2) by only needing a single HWENO polynomial to update both the cell mean value and cell mean derivative. Test cases to validate and assess these methods include 1-D linear transport, the 1-D inviscid Burgers' equation, and the 1-D inviscid Euler equations. Smooth and non-smooth flows are used for evaluation. These HWENO methods performed better than comparable literature-standard WENO methods for all regimes of discontinuity and smoothness in all tests herein. They exhibit improved optimal accuracy due to the use of derivatives, and they collapse to solutions similar to typical WENO methods when limiting is required. The study concludes that the new HWENO methods are robust and effective when used in the ADER-DT MMFV framework. Finally, these results are intended to demonstrate capability rather than exhaust all possible implementations.

  6. A new smoothing function to introduce long-range electrostatic effects in QM/MM calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Dong; Department of Chemistry, University of Wisconsin, Madison, Wisconsin 53706; Duke, Robert E.

    2015-07-28

    A new method to account for long-range electrostatic contributions is proposed and implemented for quantum mechanics/molecular mechanics long-range electrostatic correction (QM/MM-LREC) calculations. This method involves the use of the minimum image convention under periodic boundary conditions and a new smoothing function for energies and forces at the cutoff boundary for the Coulomb interactions. Compared to conventional QM/MM calculations without long-range electrostatic corrections, the new method effectively includes effects on the MM environment in the primary image from its replicas in the neighborhood. QM/MM-LREC offers three useful features: the avoidance of calculations in reciprocal space (k-space), the concomitant avoidance of having to reproduce (analytically or approximately) the QM charge density in k-space, and the straightforward availability of analytical Hessians. The new method is tested and compared with results from smooth particle mesh Ewald (PME) for three systems: a box of neat water, a double proton transfer reaction, and the geometry optimization of the critical-point structures for the rate-limiting step of the DNA dealkylase AlkB. As with other smoothing or shifting functions, relatively large cutoffs are necessary to achieve accuracy comparable with PME. For the double-proton-transfer reaction, the use of a 22 Å cutoff with QM/MM-LREC shows a close reaction energy profile and geometries of stationary structures compared to conventional QM/MM with no truncation. Geometry optimization of stationary structures for the hydrogen abstraction step by AlkB shows some differences between QM/MM-LREC and conventional QM/MM. These differences underscore the necessity of including the long-range electrostatic contribution.

  7. Electron parallel closures for various ion charge numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Jeong-Young, E-mail: j.ji@usu.edu; Held, Eric D.; Kim, Sang-Kyeun

    2016-03-15

    Electron parallel closures for the ion charge number Z = 1 [J.-Y. Ji and E. D. Held, Phys. Plasmas 21, 122116 (2014)] are extended for 1 ≤ Z ≤ 10. Parameters are computed for various Z with the same form of the Z = 1 kernels adopted. The parameters are smoothly varying in Z and hence can be used to interpolate parameters and closures for noninteger, effective ion charge numbers.

  8. Applications of Sharp Interface Method for Flow Dynamics, Scattering and Control Problems

    DTIC Science & Technology

    2012-07-30

    Reynolds number, Advances in Applied Mathematics and Mechanics, to appear. 17. K. Ito and K. Kunisch, Optimal Control of Parabolic Variational ... provides more precise and detailed sensitivity of the solution and describes the dynamical change due to the variation in the Reynolds number. The immersed ... Inequalities, Journal de Math. Pures et Appl., 93 (2010), no. 4, 329-360. 18. K. Ito and K. Kunisch, Semi-smooth Newton Methods for Time-Optimal Control for a

  9. Projection of postgraduate students flow with a smoothing matrix transition diagram of Markov chain

    NASA Astrophysics Data System (ADS)

    Rahim, Rahela; Ibrahim, Haslinda; Adnan, Farah Adibah

    2013-04-01

    This paper presents a case study of modeling postgraduate student flow at the College of Arts and Sciences, Universiti Utara Malaysia. First, full-time postgraduate students and the semester they were in are identified. Then administrative data were used to estimate the transitions between these semesters for the 2001-2005 period. A Markov chain model is developed to calculate the 5- and 10-year projections of postgraduate student flow at the college. The optimization question addressed in this study is: which transitions would sustain the desired structure in a dynamic situation, such as the trend towards graduation? Smoothed transition probabilities are proposed to estimate the 16 × 16 transition probability matrix. The results show that, using smoothed transition probabilities, the projected numbers of postgraduate students enrolled in the respective semesters are closer to the actual values than those obtained using the conventional steady-state transition probabilities.
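
    As a small illustration of the approach (not the study's 16 × 16 matrix), the sketch below estimates a transition matrix from counts with additive (Laplace) smoothing, one simple way to obtain smoothed transition probabilities, and projects head counts forward; the 4-state chain, counts and smoothing strength are assumptions.

    ```python
    # Minimal sketch of a Markov-chain enrollment projection with smoothed
    # transition probabilities.
    import numpy as np

    counts = np.array([            # observed semester-to-semester transitions
        [ 5, 80,  0,   5],
        [ 0, 10, 75,   5],
        [ 0,  0, 20,  70],
        [ 0,  0,  0, 100],
    ], dtype=float)

    alpha = 0.5                                      # Laplace smoothing strength
    P = (counts + alpha) / (counts + alpha).sum(axis=1, keepdims=True)

    enrolled = np.array([120.0, 100.0, 90.0, 60.0])  # current head counts per state
    for year in range(1, 6):                         # 5-year projection
        enrolled = enrolled @ P                      # one transition step
        print(f"year {year}: {np.round(enrolled, 1)}")
    ```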

  10. Optimal spatial filtering and transfer function for SAR ocean wave spectra

    NASA Technical Reports Server (NTRS)

    Beal, R. C.; Tilley, D. G.

    1981-01-01

    The impulse response of the SAR system is not a delta function, and the spectra represent the product of the underlying image spectrum with the transform of the impulse response, which must be removed. A digitally computed spectrum of SEASAT imagery of the Atlantic Ocean east of Cape Hatteras was smoothed with a 5 x 5 convolution filter, and the trend was sampled in a direction normal to the predominant wave direction. This yielded the transform of a noise-like process; the smoothed value of this trend is the transform of the impulse response. This trend is fit with either a second- or fourth-order polynomial, which is then used to correct the entire spectrum. A 16 x 16 smoothing of the spectrum shows the presence of two distinct swells. Correction for the effects of speckle is effected by the subtraction of a bias from the spectrum.
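
    A hedged sketch of that correction pipeline, on synthetic data standing in for the SEASAT spectrum: 5 x 5 smoothing, sampling the trend normal to the wave direction, a fourth-order polynomial fit, division of the spectrum by the fitted trend, and bias subtraction.

    ```python
    # Minimal sketch of impulse-response trend removal from a 2-D image spectrum.
    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(0)
    kx = np.arange(256) - 128
    spec = rng.exponential(1.0, size=(256, 256))            # speckle-like spectrum
    spec *= np.exp(-(kx[None, :] ** 2) / (2 * 40.0 ** 2))   # impulse-response roll-off

    smoothed = uniform_filter(spec, size=5)                 # 5 x 5 convolution filter
    trend = smoothed[128, :]                                # cut normal to wave direction
    poly = np.polyfit(kx, trend, 4)                         # fourth-order polynomial fit
    correction = np.maximum(np.polyval(poly, kx), 1e-6)
    corrected = spec / correction[None, :]                  # divide out impulse response
    corrected -= np.median(corrected)                       # subtract speckle bias
    print(corrected.shape)
    ```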

  11. An n -material thresholding method for improving integerness of solutions in topology optimization

    DOE PAGES

    Watts, Seth; Tortorelli, Daniel A.

    2016-04-10

    It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer-valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.
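
    For the two-material case the idea reduces to a smoothed Heaviside projection of the volume fraction, which the sketch below illustrates with the common tanh-based form (a stand-in, not the paper's n-material simplex construction); beta controls sharpness and eta the threshold, and the analytic derivative is what makes gradient-based optimization possible.

    ```python
    # Minimal sketch of smooth thresholding of a volume fraction field.
    import numpy as np

    def smooth_threshold(x, beta=8.0, eta=0.5):
        """Tanh-based projection of volume fraction x in [0, 1] toward {0, 1}."""
        num = np.tanh(beta * eta) + np.tanh(beta * (x - eta))
        den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
        return num / den

    def smooth_threshold_grad(x, beta=8.0, eta=0.5):
        """Analytic derivative, as needed for the sensitivity analysis."""
        den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
        return beta * (1.0 - np.tanh(beta * (x - eta)) ** 2) / den

    x = np.linspace(0.0, 1.0, 5)
    print(smooth_threshold(x))        # intermediate fractions pushed toward 0 or 1
    print(smooth_threshold_grad(x))   # smooth, nonzero derivatives everywhere
    ```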

  12. Fat water decomposition using globally optimal surface estimation (GOOSE) algorithm.

    PubMed

    Cui, Chen; Wu, Xiaodong; Newell, John D; Jacob, Mathews

    2015-03-01

    This article focuses on developing a novel noniterative fat water decomposition algorithm more robust to fat water swaps and related ambiguities. Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field map-induced frequency shift between adjacent voxels are constrained to be in a finite range. The discretization of the above problem yields a graph optimization scheme, where each node of the graph is only connected with few other nodes. Thanks to the low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes. Quantitative comparisons are also made against reference data. The proposed algorithm is observed to yield more robust fat water estimates with fewer fat water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. The proposed algorithm is capable of considerably reducing the swaps in challenging fat water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and solving the problem using a globally convergent graph-cut optimization algorithm. © 2014 Wiley Periodicals, Inc.

  13. Deep Laser-Assisted Lamellar Anterior Keratoplasty with Microkeratome-Cut Grafts

    PubMed Central

    Yokogawa, Hideaki; Tang, Maolong; Li, Yan; Liu, Liang; Chamberlain, Winston; Huang, David

    2016-01-01

    Background The goals of this laboratory study were to evaluate the interface quality in laser-assisted lamellar anterior keratoplasty (LALAK) with microkeratome-cut grafts, and to achieve good graft–host apposition. Methods Simulated LALAK surgeries were performed on six pairs of eye bank corneoscleral discs. Anterior lamellar grafts were precut with microkeratomes. Deep femtosecond (FS) laser cuts were performed on host corneas followed by excimer laser smoothing. Different parameters of FS laser cuts and excimer laser smoothing were tested. OCT was used to measure corneal pachymetry and evaluate graft-host apposition. The interface quality was quantified in a masked fashion using a 5-point scale based on scanning electron microscopy images. Results Deep FS laser cuts at 226–380 μm resulted in visible ridges on the host bed. Excimer laser smoothing with central ablation depth of 29 μm and saline as a smoothing agent did not adequately reduce ridges (score = 4.0). Deeper excimer laser ablation of 58 μm and Optisol-GS as a smoothing agent smoothed ridges to an acceptable level (score = 2.1). Same sizing of the graft and host cut diameters with an approximately 50 μm deeper host side-cut relative to the central graft thickness provided the best graft–host fit. Conclusions Deep excimer laser ablation with a viscous smoothing agent was needed to remove ridges after deep FS lamellar cuts. The host side cut should be deep enough to accommodate thicker graft peripheral thickness compared to the center. This LALAK design provides smooth lamellar interfaces, moderately thick grafts, and good graft-host fits. PMID:26890667

  14. Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation

    NASA Astrophysics Data System (ADS)

    Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah

    2018-04-01

    The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values, and lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, a simple, easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results while converging quickly.
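
    The modelling stage can be illustrated with a bare-bones Extreme Learning Machine: hidden weights are random and fixed, and only the output weights are solved in closed form by least squares (PSO would then search cutting-parameter space over this surrogate). The toy inputs and response below are assumptions, standing in for cutting parameters and a performance measure.

    ```python
    # Minimal sketch of an Extreme Learning Machine regression model.
    import numpy as np

    def elm_train(X, y, n_hidden=50, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))  # random input weights
        b = rng.uniform(-1, 1, size=n_hidden)                # random biases
        H = np.tanh(X @ W + b)                               # hidden-layer outputs
        beta = np.linalg.pinv(H) @ y                         # closed-form output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # Toy data: 2 cutting parameters -> surface-roughness-like response.
    rng = np.random.default_rng(6)
    X = rng.uniform(0, 1, size=(200, 2))
    y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)
    W, b, beta = elm_train(X, y)
    pred = elm_predict(X, W, b, beta)
    print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
    ```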

  15. Robust optimization for nonlinear time-delay dynamical system of dha regulon with cost sensitivity constraint in batch culture

    NASA Astrophysics Data System (ADS)

    Yuan, Jinlong; Zhang, Xu; Liu, Chongyang; Chang, Liang; Xie, Jun; Feng, Enmin; Yin, Hongchao; Xiu, Zhilong

    2016-09-01

    Time-delay dynamical systems, which depend on both the current state of the system and the state at delayed times, have been an active area of research in many real-world applications. In this paper, we consider a nonlinear time-delay dynamical system of the dha regulon, with unknown time-delays, in batch culture of glycerol bioconversion to 1,3-propanediol induced by Klebsiella pneumoniae. Some important properties and strong positive invariance are discussed. Because of the difficulty in accurately measuring the concentrations of intracellular substances and the absence of equilibrium points for the time-delay system, a quantitative biological robustness for the concentrations of intracellular substances is defined by penalizing a weighted sum of the expectation and variance of the relative deviation between system outputs before and after the time-delays are perturbed. Our goal is to determine optimal values of the time-delays. To this end, we formulate an optimization problem in which the time-delays are decision variables and the cost function is to minimize the biological robustness. This optimization problem is subject to the time-delay system, parameter constraints, continuous state inequality constraints for ensuring that the concentrations of extracellular and intracellular substances lie within specified limits, a quality constraint to reflect operational requirements and a cost sensitivity constraint for ensuring that an acceptable level of system performance is achieved. It is approximated as a sequence of nonlinear programming sub-problems through the application of constraint transcription and local smoothing approximation techniques. Due to the highly complex nature of this optimization problem, the computational cost is high. Thus, a parallel algorithm based on the filled function method is proposed to solve these nonlinear programming sub-problems. Finally, numerical simulations show that the obtained optimal estimates for the time-delays are highly satisfactory.

  16. Growth of perturbations in dark energy parametrization scenarios

    NASA Astrophysics Data System (ADS)

    Mehrabi, Ahmad

    2018-04-01

    In this paper, we study the evolution of dark matter perturbations in the linear regime by considering the possibility of dark energy perturbations. To do this, two popular parametrizations, Chevallier-Polarski-Linder (CPL) and Barboza-Alcaniz (BA), with the same number of free parameters and different redshift dependencies, have been considered. We integrate the full relativistic equations to obtain the growth of matter fluctuations for both clustering and smooth versions of CPL and BA dark energy. The growth rate is larger (smaller) than in ΛCDM in the smooth cases when w < -1 (w > -1), but dark energy clustering gives a larger (smaller) growth index when w > -1 (w < -1). We measure the relative difference of the growth rate with respect to the concordance ΛCDM model and study how it changes depending on the free parameters. Furthermore, it is found that the difference in growth rates between smooth CPL and BA is negligible, less than 0.5%, while for the clustering case the difference is considerable and might be as large as 2%. Eventually, using the latest geometrical and growth rate observational data, we perform an overall likelihood analysis and show that both smooth and clustering cases of the CPL and BA parametrizations are consistent with observations. In particular, we find the dark energy figure of merit is approximately 70 for BA and approximately 30 for CPL, which indicates that the BA parametrization is constrained relatively better than the CPL one.
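
    For reference, the standard forms of these two equation-of-state parametrizations, with free parameters w0 and wa and redshift z (these are the commonly cited expressions, not quoted from the paper itself):

    ```latex
    w_{\mathrm{CPL}}(z) = w_0 + w_a \, \frac{z}{1+z},
    \qquad
    w_{\mathrm{BA}}(z) = w_0 + w_a \, \frac{z\,(1+z)}{1+z^{2}} .
    ```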

  17. Length adaptation of airway smooth muscle.

    PubMed

    Bossé, Ynuk; Sobieszek, Apolinary; Paré, Peter D; Seow, Chun Y

    2008-01-01

    Many types of smooth muscle, including airway smooth muscle (ASM), are capable of generating maximal force over a large length range due to length adaptation, which is a relatively rapid process in which smooth muscle regains contractility after experiencing a force decrease induced by length fluctuation. Although the underlying mechanism is unclear, it is believed that structural malleability of smooth muscle cells is essential for the adaptation to occur. The process is triggered by strain on the cell cytoskeleton that results in a series of yet undefined biochemical and biophysical events leading to restructuring of the cytoskeleton and contractile apparatus and consequently optimization of the overlap between the myosin and actin filaments. Although length adaptability is an intrinsic property of smooth muscle, maladaptation of ASM could result in excessive constriction of the airways and the inability of deep inspirations to dilate them. In this article, we describe the phenomenon of length adaptation in ASM and some possible underlying mechanisms that involve the myosin filament assembly and disassembly. We discuss a possible role of maladaptation of ASM in the pathogenesis of asthma. We believe that length adaptation in ASM is mediated by specific proteins and their posttranslational regulations involving covalent modifications, such as phosphorylation. The discovery of these molecules and the processes that regulate their activity will greatly enhance our understanding of the basic mechanisms of ASM contraction and will suggest molecular targets to alleviate asthma exacerbation related to excessive constriction of the airways.

  18. Cosmological information in Gaussianized weak lensing signals

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.; Kiessling, A.

    2011-11-01

    Gaussianizing the one-point distribution of the weak gravitational lensing convergence has recently been shown to increase the signal-to-noise ratio contained in two-point statistics. We investigate the information on cosmology that can be extracted from the transformed convergence fields. Employing Box-Cox transformations to determine optimal transformations to Gaussianity, we develop analytical models for the transformed power spectrum, including effects of noise and smoothing. We find that optimized Box-Cox transformations perform substantially better than an offset logarithmic transformation in Gaussianizing the convergence, but both yield very similar results for the signal-to-noise ratio. None of the transformations is capable of eliminating correlations of the power spectra between different angular frequencies, which we demonstrate to have a significant impact on the errors in cosmology. Analytic models of the Gaussianized power spectrum yield good fits to the simulations and produce unbiased parameter estimates in the majority of cases, where the exceptions can be traced back to the limitations in modelling the higher order correlations of the original convergence. In the ideal case, without galaxy shape noise, we find an increase in the cumulative signal-to-noise ratio by a factor of 2.6 for angular frequencies up to ℓ= 1500, and a decrease in the area of the confidence region in the Ωm-σ8 plane, measured in terms of q-values, by a factor of 4.4 for the best performing transformation. When adding a realistic level of shape noise, all transformations perform poorly with little decorrelation of angular frequencies, a maximum increase in signal-to-noise ratio of 34 per cent, and even slightly degraded errors on cosmological parameters. We argue that to find Gaussianizing transformations of practical use, it will be necessary to go beyond transformations of the one-point distribution of the convergence, extend the analysis deeper into the non-linear regime and resort to an exploration of parameter space via simulations.
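
    As a hedged illustration of the Gaussianization step, the sketch below applies a Box-Cox transformation to a skewed, convergence-like field; since scipy.stats.boxcox requires positive data, an offset is added first, and both the offset and the simulated field are assumptions for illustration.

    ```python
    # Minimal sketch of Box-Cox Gaussianization of a skewed field.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    kappa = rng.lognormal(mean=0.0, sigma=0.5, size=100_000) - 1.0  # skewed toy field

    offset = 1.0 - kappa.min() + 1e-6            # shift to strictly positive values
    kappa_t, lam = stats.boxcox(kappa + offset)  # lambda chosen by maximum likelihood

    print("optimal lambda:", lam)
    print("skewness before:", stats.skew(kappa))
    print("skewness after: ", stats.skew(kappa_t))
    ```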

  19. An optimized method to calculate error correction capability of tool influence function in frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan

    2017-10-01

    An optimized method to calculate the error correction capability of the tool influence function (TIF) under given polishing conditions is proposed, based on a smoothing spectral function. The basic mathematical model for this method is established theoretically. A set of polishing experimental data obtained with a rigid conformal tool is used to validate the optimized method. The calculated results can quantitatively indicate the error correction capability of the TIF for different spatial-frequency errors under given polishing conditions. A comparative analysis with the previous method shows that the optimized method is simpler in form and achieves the same accuracy with less computation time.

  20. Gait parameters are differently affected by concurrent smartphone-based activities with scaled levels of cognitive effort.

    PubMed

    Caramia, Carlotta; Bernabucci, Ivan; D'Anna, Carmen; De Marchis, Cristiano; Schmid, Maurizio

    2017-01-01

    The widespread and pervasive use of smartphones for sending messages, calling, and entertainment purposes, mainly among young adults, is often accompanied by the concurrent execution of other tasks. Recent studies have analyzed how texting, reading or calling while walking, in some specific conditions, might significantly influence gait parameters. The aim of this study is to examine the effect of different smartphone activities on walking by evaluating the variations of several gait parameters. Ten young healthy students (all proficient smartphone users) were instructed to text chat (with two different levels of cognitive load), call, surf on a social network or play a math game while walking in a real-life outdoor setting. Each of these activities is characterized by a different cognitive load. Using an inertial measurement unit on the lower trunk, spatio-temporal gait parameters, together with regularity, symmetry and smoothness parameters, were extracted and grouped for comparison among normal walking and the different dual-task demands. An overall significant effect of task type on the aforementioned parameter groups was observed. The alterations in gait parameters vary as a function of cognitive effort. In particular, stride frequency, step length and gait speed decrease, while step time increases, as a function of cognitive effort. Smoothness, regularity and symmetry parameters are significantly altered for specific dual-task conditions, mainly along the mediolateral direction. These results may lead to a better understanding of the possible risks related to walking during concurrent smartphone use.

  1. Gait parameters are differently affected by concurrent smartphone-based activities with scaled levels of cognitive effort

    PubMed Central

    Bernabucci, Ivan; D'Anna, Carmen; De Marchis, Cristiano; Schmid, Maurizio

    2017-01-01

    The widespread and pervasive use of smartphones for sending messages, calling, and entertainment purposes, mainly among young adults, is often accompanied by the concurrent execution of other tasks. Recent studies have analyzed how texting, reading or calling while walking, in some specific conditions, might significantly influence gait parameters. The aim of this study is to examine the effect of different smartphone activities on walking by evaluating the variations of several gait parameters. Ten young healthy students (all proficient smartphone users) were instructed to text chat (with two different levels of cognitive load), call, surf on a social network or play a math game while walking in a real-life outdoor setting. Each of these activities is characterized by a different cognitive load. Using an inertial measurement unit on the lower trunk, spatio-temporal gait parameters, together with regularity, symmetry and smoothness parameters, were extracted and grouped for comparison among normal walking and the different dual-task demands. An overall significant effect of task type on the aforementioned parameter groups was observed. The alterations in gait parameters vary as a function of cognitive effort. In particular, stride frequency, step length and gait speed decrease, while step time increases, as a function of cognitive effort. Smoothness, regularity and symmetry parameters are significantly altered for specific dual-task conditions, mainly along the mediolateral direction. These results may lead to a better understanding of the possible risks related to walking during concurrent smartphone use. PMID:29023456

  2. Parallel algorithms for the molecular conformation problem

    NASA Astrophysics Data System (ADS)

    Rajan, Kumar

    Given a set of objects, and some of the pairwise distances between them, the problem of identifying the positions of the objects in Euclidean space is referred to as the molecular conformation problem. This problem is known to be computationally difficult. One of the most important applications of this problem is the determination of the structure of molecules. In the case of molecular structure determination, usually only the lower and upper bounds on some of the interatomic distances are available. The process of obtaining a tighter set of bounds between all pairs of atoms, using the available interatomic distance bounds, is referred to as bound-smoothing. One method for bound-smoothing is to use the limits imposed by the triangle inequality. The distance bounds so obtained can often be tightened further by applying the tetrangle inequality, the limits imposed on the six pairwise distances among a set of four atoms (instead of three for the triangle inequalities). The tetrangle inequality is expressed by the Cayley-Menger determinants. The sequential tetrangle-inequality bound-smoothing algorithm considers a quadruple of atoms at a time and tightens the bounds on each of its six distances. The sequential algorithm is computationally expensive, and its application is limited to molecules with up to a few hundred atoms. Here, we conduct an experimental study of tetrangle-inequality bound-smoothing and reduce the sequential time by identifying the most computationally expensive portions of the process. We also present a simple criterion to determine which of the quadruples of atoms are likely to be tightened the most by tetrangle-inequality bound-smoothing. This test could be used to enhance the applicability of the process to large molecules. We map the problem of parallelizing tetrangle-inequality bound-smoothing to that of generating disjoint packing designs of a certain kind. We map this, in turn, to a regular-graph coloring problem, and present a simple, parallel algorithm for tetrangle-inequality bound-smoothing. We implement the parallel algorithm on the Intel Paragon XP/S and apply it to real-life molecules. Our results show that with this parallel algorithm, the tetrangle inequality can be applied to large molecules in a reasonable amount of time. We extend the regular graph to represent more general packing designs, and present a coloring algorithm for this graph. This can be used to generate constant-weight binary codes in parallel. Once a tighter set of distance bounds is obtained, the molecular conformation problem is usually formulated as a non-linear optimization problem, and a global optimization algorithm is then used to solve the problem. Here we present a parallel, deterministic algorithm for the optimization problem based on Interval Analysis. We implement our algorithm, using dynamic load balancing, on a network of Sun UltraSPARC workstations. Our experience with this algorithm shows that its application is limited to small instances of the molecular conformation problem, where the number of measured pairwise distances is close to the maximum value. However, since the interval method eliminates a substantial portion of the initial search space very quickly, it can be used to prune the search space before any of the more efficient, nondeterministic methods are applied.
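
    The Cayley-Menger determinant underlying the tetrangle inequality is easy to compute directly: for four points embeddable in 3-D space it equals 288 V², with V the tetrahedron volume, so it must be non-negative for a consistent set of distances. The sketch below evaluates it for an illustrative quadruple (the distances are assumptions, not data from the dissertation).

    ```python
    # Minimal sketch of the Cayley-Menger determinant for a quadruple of atoms.
    import numpy as np

    def cayley_menger(d):
        """Cayley-Menger determinant from a 4x4 matrix of pairwise distances."""
        n = d.shape[0]
        M = np.ones((n + 1, n + 1))
        M[0, 0] = 0.0
        M[1:, 1:] = d ** 2          # bordered matrix of squared distances
        return np.linalg.det(M)

    # Unit regular tetrahedron: a valid 3-D embedding, so the determinant is
    # positive (288 * V^2 = 288/72 = 4 for unit edge length).
    d = np.ones((4, 4)) - np.eye(4)
    print(cayley_menger(d))         # prints ~4.0
    ```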

  3. Metallic Zinc Exhibits Optimal Biocompatibility for Bioabsorbable Endovascular Stents

    PubMed Central

    Bowen, Patrick K.; Guillory, Roger J.; Shearier, Emily R.; Seitz, Jan-Marten; Drelich, Jaroslaw; Bocks, Martin; Zhao, Feng; Goldman, Jeremy

    2015-01-01

    Although corrosion resistant bare metal stents are considered generally effective, their permanent presence in a diseased artery is an increasingly recognized limitation due to the potential for long-term complications. We previously reported that metallic zinc exhibited an ideal biocorrosion rate within murine aortas, thus raising the possibility of zinc as a candidate base material for endovascular stenting applications. This study was undertaken to further assess the arterial biocompatibility of metallic zinc. Metallic zinc wires were punctured and advanced into the rat abdominal aorta lumen for up to 6.5 months. This study demonstrated that metallic zinc did not provoke responses that often contribute to restenosis. Low cell densities and neointimal tissue thickness, along with tissue regeneration within the corroding implant, point to optimal biocompatibility of corroding zinc. Furthermore, the lack of progression in neointimal tissue thickness over 6.5 months or the presence of smooth muscle cells near the zinc implant suggest that the products of zinc corrosion may suppress the activities of inflammatory and smooth muscle cells. PMID:26249616

  4. Molecular beam epitaxy growth of high electron mobility InAs/AlSb deep quantum well structure

    NASA Astrophysics Data System (ADS)

    Wang, Juan; Wang, Guo-Wei; Xu, Ying-Qiang; Xing, Jun-Liang; Xiang, Wei; Tang, Bao; Zhu, Yan; Ren, Zheng-Wei; He, Zhen-Hong; Niu, Zhi-Chuan

    2013-07-01

    InAs/AlSb deep quantum well (QW) structures with high electron mobility were grown by molecular beam epitaxy (MBE) on semi-insulating GaAs substrates. AlSb and Al0.75Ga0.25Sb buffer layers were grown to accommodate the lattice mismatch (7%) between the InAs/AlSb QW active region and the GaAs substrate. Transmission electron microscopy shows abrupt interfaces, and atomic force microscopy measurements display smooth surface morphology. Growth conditions of the AlSb and Al0.75Ga0.25Sb buffers were optimized; the results indicate that Al0.75Ga0.25Sb is better than AlSb as a buffer layer. The sample with the optimal Al0.75Ga0.25Sb buffer layer shows a smooth surface morphology with a root-mean-square roughness of 6.67 Å. The electron mobility reached as high as 27 000 cm²/V·s with a sheet density of 4.54 × 10¹¹/cm² at room temperature.

  5. Development of a new integrated local trajectory planning and tracking control framework for autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Li, Xiaohui; Sun, Zhenping; Cao, Dongpu; Liu, Daxue; He, Hangen

    2017-03-01

    This study proposes a novel integrated local trajectory planning and tracking control (ILTPTC) framework for autonomous vehicles driving along a reference path with obstacle avoidance. For this ILTPTC framework, an efficient state-space sampling-based trajectory planning scheme is employed to smoothly follow the reference path. A model-based predictive path generation algorithm is applied to produce a set of smooth and kinematically feasible paths connecting the initial state with the sampled terminal states. A velocity control law is then designed to assign a speed value at each of the points along the generated paths. An objective function considering both safety and comfort performance is carefully formulated for assessing the generated trajectories and selecting the optimal one. For accurately tracking the optimal trajectory while overcoming external disturbances and model uncertainties, a combined feedforward and feedback controller is developed. Both simulation analyses and vehicle testing are performed to verify the effectiveness of the proposed ILTPTC framework, and future research is also briefly discussed.

  6. Determining the Optimal Spectral Sampling Frequency and Uncertainty Thresholds for Hyperspectral Remote Sensing of Ocean Color

    NASA Technical Reports Server (NTRS)

    Vandermeulen, Ryan A.; Mannino, Antonio; Neeley, Aimee; Werdell, Jeremy; Arnone, Robert

    2017-01-01

    Using a modified geostatistical technique, empirical variograms were constructed from the first derivative of several diverse remote sensing reflectance and phytoplankton absorbance spectra to describe how data points are correlated with distance across the spectra. The maximum rate of information gain is measured as a function of the kurtosis associated with the Gaussian structure of the output, and is determined for discrete segments of spectra obtained from a variety of water types (turbid river filaments, coastal waters, shelf waters, a dense Microcystis bloom, and oligotrophic waters), as well as individual and mixed phytoplankton functional types (PFTs; diatoms, chlorophytes, cyanobacteria, coccolithophores). Results show that a continuous spectrum of 5 to 7 nm spectral resolution is optimal to resolve the variability across mixed reflectance and absorbance spectra. In addition, the impact of uncertainty on subsequent derivative analysis is assessed, showing that a limit of 3% Gaussian noise (SNR 66) is tolerated without smoothing the spectrum, and 13% (SNR 15) noise is tolerated with smoothing.
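
    A minimal sketch of the underlying computation, assuming an evenly sampled reflectance spectrum: take the first derivative, then build an empirical semivariogram over wavelength lags. The function below is illustrative; the paper's modified geostatistical technique adds steps (e.g., the kurtosis analysis of the Gaussian structure) that are not reproduced here.

      import numpy as np

      def derivative_variogram(wavelength_nm, rrs, max_lag_nm=50.0):
          """Semivariogram of the first-derivative spectrum vs. wavelength lag."""
          step = wavelength_nm[1] - wavelength_nm[0]   # assumes even sampling
          deriv = np.gradient(rrs, wavelength_nm)      # first-derivative spectrum
          lags = np.arange(1, int(max_lag_nm / step) + 1)
          gamma = np.array([0.5 * np.mean((deriv[k:] - deriv[:-k]) ** 2)
                            for k in lags])
          return lags * step, gamma                    # lag (nm), semivariance

      # The lag at which gamma levels off indicates the scale over which the
      # derivative spectrum decorrelates, motivating a sampling choice near it.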

  7. Revision of the Phenomenological Characteristics of the Algol-Type Stars Using the Nav Algorithm

    NASA Astrophysics Data System (ADS)

    Tkachenko, M. G.; Andronov, I. L.; Chinarova, L. L.

    Phenomenological characteristics of the sample of the Algol-type stars are revised using a recently developed NAV ("New Algol Variable") algorithm (2012Ap.....55..536A, 2012arXiv 1212.6707A) and compared to those obtained using common methods of Trigonometric Polynomial Fit (TP) or local Algebraic Polynomial (A) fit of a fixed or (alternately) statistically optimal degree (1994OAP.....7...49A, 2003ASPC..292..391A). The computer program NAV is introduced, which allows one to determine the best fit with 7 "linear" and 5 "nonlinear" parameters and their error estimates. The number of parameters is much smaller than for the TP fit (typically 20-40 for the Algol-type stars, depending on the width of the eclipse, and 5-20 for the W UMa and β Lyrae-type stars). This yields a smoother approximation, taking into account the reflection and ellipsoidal effects (TP2) and the generally different shapes of the primary and secondary eclipses. An application of the method to two-color CCD photometry of the recently discovered eclipsing variable 2MASS J18024395 + 4003309 = VSX J180243.9 +400331 (2015JASS...32..101A) allowed estimates of the physical parameters of the binary system to be made based on the phenomenological parameters of the light curve. The phenomenological parameters of the light curves were determined for the sample of newly discovered EA and EW-type stars (VSX J223429.3+552903, VSX J223421.4+553013, VSX J223416.2+553424, USNO-B1.0 1347-0483658, UCAC3-191-085589, VSX J180755.6+074711= UCAC3 196-166827). Although we used the original observations published by the discoverers, the accuracy of the period estimates obtained with the NAV method is typically better than that of the original determinations.
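
    For context, the baseline trigonometric polynomial (TP) fit of degree s is an ordinary least-squares problem over a truncated Fourier basis in phase; a hedged sketch follows. This is the comparison method, not the NAV fit itself.

      import numpy as np

      def tp_fit(phase, mag, degree):
          """Fit m(phi) = a0 + sum_k [a_k cos(2 pi k phi) + b_k sin(2 pi k phi)]."""
          cols = [np.ones_like(phase)]
          for k in range(1, degree + 1):
              cols += [np.cos(2 * np.pi * k * phase),
                       np.sin(2 * np.pi * k * phase)]
          A = np.column_stack(cols)          # (n, 2*degree + 1) design matrix
          coef, *_ = np.linalg.lstsq(A, mag, rcond=None)
          return coef, A @ coef              # coefficients, fitted light curve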

  8. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.

    PubMed

    Pang, Jiahao; Cheung, Gene

    2017-04-01

    Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior-the graph Laplacian regularizer-assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
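
    A minimal discrete-domain sketch of the prior in action on a single vectorized patch: build a Gaussian-weighted graph over pixels, form the combinatorial Laplacian L = D − W, and solve min_x ||x − y||² + λ xᵀLx, whose closed form is x = (I + λL)⁻¹y. The feature choice, kernel, and λ are illustrative assumptions; the paper's optimal metric-space construction is not reproduced.

      import numpy as np

      def graph_laplacian_denoise(y, features, lam=2.0, sigma=0.1):
          """y: noisy patch flattened to (n,); features: (n, d) per-pixel features."""
          # Edge weights from pairwise feature distances (exponential kernel).
          d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(axis=2)
          W = np.exp(-d2 / (2.0 * sigma**2))
          np.fill_diagonal(W, 0.0)
          L = np.diag(W.sum(axis=1)) - W     # combinatorial graph Laplacian
          # min_x ||x - y||^2 + lam x^T L x  =>  (I + lam L) x = y
          return np.linalg.solve(np.eye(len(y)) + lam * L, y)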

  9. Research directed toward improved echelles for the ultraviolet

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Research was undertaken to demonstrate that improved efficiencies for low frequency gratings are obtainable with the careful application of present technology. The motivation for the study was the desire to be assured that the grating-efficiency design goals for potential Space Telescope spectrographs can be achieved. The work was organized to compare gratings made with changes in the three specific parameters: the ruling tool profile, the coating material, and the lubricants used during the ruling process. A series of coatings and test gratings were fabricated and were examined for surface smoothness with a Nomarski Differential Interference Microscope and an electron microscope. Photomicrographs were obtained to show the difference in smoothness of the various coatings and rulings. Efficiency measurements were made for those test rulings that showed good groove characteristics: smoothness, proper ruling depth, and absence of defects. The intuitive feeling that higher grating efficiency should be correlated with the degree of smoothness of both the coating and the grating is supported by the results.

  10. Rough versus smooth topography along oceanic hotspot tracks: Observations and scaling analysis

    NASA Astrophysics Data System (ADS)

    Orellana-Rovirosa, Felipe; Richards, Mark

    2017-05-01

    Some hotspot tracks are topographically smooth and broad (Nazca, Carnegie/Cocos/Galápagos, Walvis, Iceland), while others are rough and discontinuous (Easter/Sala y Gomez, Tristan-Gough, Louisville, St. Helena, Hawaiian-Emperor). Smooth topography occurs when the lithospheric age at emplacement is young, favoring intrusive magmatism, whereas rough topography is due to isolated volcanic edifices constructed on older/thicker lithosphere. The main controls on the balance of intrusive versus extrusive magmatism are expected to be the hotspot swell volume flux Q_s, the plate-hotspot relative speed v, and the lithospheric elastic thickness T_e, which can be combined into a dimensionless parameter R = (Q_s/v)^{1/2}/T_e that represents the ratio of plume heat to lithospheric heat capacity. Observational constraints show that, except for the Ninetyeast Ridge, R is a good predictor of topographic character: for R < 1.5 hotspot tracks are topographically rough and dominated by volcanic edifices, whereas for R > 3 they are smooth and dominated by intrusion.
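
    The classification rule reduces to a one-line computation once the three quantities are in consistent units; a trivial sketch with the abstract's thresholds:

      import numpy as np

      def hotspot_track_character(Q_s, v, T_e):
          """R = (Q_s/v)**0.5 / T_e; rough for R < 1.5, smooth for R > 3."""
          R = np.sqrt(Q_s / v) / T_e   # Q_s/v has units of area; sqrt gives length
          if R < 1.5:
              return R, "rough (isolated volcanic edifices)"
          if R > 3.0:
              return R, "smooth (intrusion-dominated)"
          return R, "transitional"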

  11. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem. In the first stage, the infiltration parameters are obtained, and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using genetic algorithms. A reduction factor is used in the penalty parameter approach so that the optimal infiltration parameters already obtained are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, simple in concept, and also has potential for field application.

  12. Fast function-on-scalar regression with penalized basis expansions.

    PubMed

    Reiss, Philip T; Huang, Lei; Mennes, Maarten

    2010-01-01

    Regression models for functional responses and scalar predictors are often fitted by means of basis functions, with quadratic roughness penalties applied to avoid overfitting. The fitting approach described by Ramsay and Silverman in the 1990s amounts to a penalized ordinary least squares (P-OLS) estimator of the coefficient functions. We recast this estimator as a generalized ridge regression estimator, and present a penalized generalized least squares (P-GLS) alternative. We describe algorithms by which both estimators can be implemented, with automatic selection of optimal smoothing parameters, in a more computationally efficient manner than has heretofore been available. We discuss pointwise confidence intervals for the coefficient functions, simultaneous inference by permutation tests, and model selection, including a novel notion of pointwise model selection. P-OLS and P-GLS are compared in a simulation study. Our methods are illustrated with an analysis of age effects in a functional magnetic resonance imaging data set, as well as a reanalysis of a now-classic Canadian weather data set. An R package implementing the methods is publicly available.
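
    The core building block of such penalized estimators is ridge-type penalized least squares; a hedged single-response sketch with a second-difference penalty standing in for the quadratic roughness penalty (the full function-on-scalar machinery and the P-GLS variant are not reproduced):

      import numpy as np

      def penalized_basis_fit(B, y, lam):
          """Solve min_c ||y - B c||^2 + lam ||D c||^2 for basis matrix B (T x K).

          D is the second-difference operator, so the penalty discourages
          roughness in the fitted coefficient function; lam would be chosen
          automatically, e.g. by (generalized) cross-validation."""
          K = B.shape[1]
          D = np.diff(np.eye(K), n=2, axis=0)   # (K-2, K) second differences
          c = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
          return c, B @ c                        # coefficients, fitted values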

  13. Advances in mosquito dynamics modeling

    NASA Astrophysics Data System (ADS)

    Wijaya, Karunia Putra; Götz, Thomas; Soewono, Edy

    2016-11-01

    Aedes mosquitoes live in close proximity to humans and their dwellings and give rise to a broad spectrum of diseases: dengue, yellow fever, and chikungunya. In this paper, we explore a multi-age-class model for a mosquito population secondarily classified into indoor-outdoor dynamics. We accentuate a novel design for the model in which the periodicity of the time-varying environmental condition is taken into account. Application of optimal control with a collocated measure, as opposed to the widely used prototypic smooth time-continuous measure, is also considered. Using two approaches, least-squares and maximum likelihood, we estimate several undetermined parameters. We analyze the model's consistency with the biological point of view, namely existence, uniqueness, positivity and boundedness of solution trajectories, as well as existence and stability of (non)trivial periodic solution(s), by means of the basic mosquito offspring number. Numerical tests are presented in the remainder of the paper as a compact, realistic visualization of the model.

  14. Intraventricular Flow Velocity Vector Visualization Based on the Continuity Equation and Measurements of Vorticity and Wall Shear Stress

    NASA Astrophysics Data System (ADS)

    Itatani, Keiichi; Okada, Takashi; Uejima, Tokuhisa; Tanaka, Tomohiko; Ono, Minoru; Miyaji, Kagami; Takenaka, Katsu

    2013-07-01

    We have developed a system to estimate velocity vector fields inside the cardiac ventricle by echocardiography and to evaluate several flow dynamical parameters to assess the pathophysiology of cardiovascular diseases. A two-dimensional continuity equation was applied to color Doppler data using speckle tracking data as boundary conditions, and the velocity component perpendicular to the echo beam line was obtained. We determined the optimal smoothing method for the color Doppler data: a Gaussian filter with an 8-pixel standard deviation provided vorticity without nonphysiological stripe-shaped noise. We also determined the weight function at the bilateral boundaries given by the speckle tracking data of the ventricle or vascular wall motion; a weight function linear in the distance from the boundary provided accurate flow velocities not only inside the vortex flow but also in near-wall regions, based on validation against a digital phantom of a pipe flow model.
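
    As a sketch of the smoothing-then-vorticity step, assuming a 2-D velocity field on a regular pixel grid: Gaussian filtering with the 8-pixel standard deviation the authors found optimal, followed by a finite-difference curl. This uses scipy's standard filter; the echo-specific boundary weighting is not reproduced.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def smoothed_vorticity(u, v, sigma_px=8.0, dx=1.0, dy=1.0):
          """u, v: 2-D velocity components (rows = y, cols = x)."""
          u_s = gaussian_filter(u, sigma=sigma_px)   # suppress stripe-shaped noise
          v_s = gaussian_filter(v, sigma=sigma_px)
          dvdx = np.gradient(v_s, dx, axis=1)
          dudy = np.gradient(u_s, dy, axis=0)
          return dvdx - dudy                          # out-of-plane vorticity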

  15. Uniformity analysis for a direct-drive laser fusion reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lund, L.D.; Skupsky, S.; Goldman, L.M.

    1983-01-01

    We show the results of an analysis of the uniformity for a direct-drive reactor using 20, 32, 60, or 96 beams. Several of these options achieve less than the 1% nonuniformity that is required. These options are considered for the cases where the solid angle fraction of the beam ports is 2% and 8%. The analysis is facilitated by separating the contributions due to the geometrical effects related to the number and orientation of the beams from those due to the spatial profile of the individual beams. Emphasis is placed on the wavelength of the nonuniformities, as the shorter wavelength nonuniformities are more easily smoothed by thermal conduction within the target. The analysis demonstrates that the longer wavelengths can be minimized by suitable choices of geometry and by maintaining beam balance, whereas the shorter wavelength nonuniformities can be reduced by optimizing parameters such as the focal position and the spatial intensity profile of each beam. The tolerances required for beam-to-beam energy balance will be discussed.

  16. Calibrated Multivariate Regression with Application to Neural Semantic Basis Discovery.

    PubMed

    Liu, Han; Wang, Lie; Zhao, Tuo

    2015-08-01

    We propose a calibrated multivariate regression method named CMR for fitting high dimensional multivariate regression models. Compared with existing methods, CMR calibrates regularization for each regression task with respect to its noise level so that it simultaneously attains improved finite-sample performance and tuning insensitiveness. Theoretically, we provide sufficient conditions under which CMR achieves the optimal rate of convergence in parameter estimation. Computationally, we propose an efficient smoothed proximal gradient algorithm with a worst-case numerical rate of convergence O(1/ε), where ε is a pre-specified accuracy of the objective function value. We conduct thorough numerical simulations to illustrate that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR to solve a brain activity prediction problem and find that it is as competitive as a handcrafted model created by human experts. The R package camel implementing the proposed method is available on the Comprehensive R Archive Network http://cran.r-project.org/web/packages/camel/.

  17. Surface morphology of refractive-index waveguide gratings fabricated in polymer films

    NASA Astrophysics Data System (ADS)

    Dong, Yi; Song, Yan-fang; Ma, Lei; Gao, Fang-fang

    2016-09-01

    Characteristic modifications of the surface of a polymeric waveguide film during volume-grating fabrication are reported. The light from a mode-locked 76 MHz femtosecond laser with a pulse duration of 200 fs and a wavelength of 800 nm is focused normal to the surface of the sample. The surface morphology modifications are ascribed to surface swelling that occurs during the process. A periodic micro-structure is inscribed as the incident power increases. The laser-induced swelling threshold on the grating is verified to be about 20 mW, higher than that of two-photon initiated photo-polymerization (TPIP) (8 mW). It is feasible to enhance the surface smoothness of integrated optics devices for further encapsulation. The variation of modulation depth is studied for different values of incident power and scan spacing. Ablation accompanied by surface swelling appears when the power is higher. By optimizing the laser carving parameters, highly efficient grating devices can be fabricated.

  18. Adaptive Fuzzy Bounded Control for Consensus of Multiple Strict-Feedback Nonlinear Systems.

    PubMed

    Wang, Wei; Tong, Shaocheng

    2018-02-01

    This paper studies the adaptive fuzzy bounded control problem for leader-follower multiagent systems, where each follower is modeled by an uncertain nonlinear strict-feedback system. Combining fuzzy approximation with dynamic surface control, an adaptive fuzzy control scheme is developed to guarantee the output consensus of all agents under directed communication topologies. Different from the existing results, the bounds of the control inputs are known a priori, and they can be determined by the feedback control gains. To realize smooth and fast learning, a predictor is introduced to estimate each error surface, and the corresponding predictor error is employed to learn the optimal fuzzy parameter vector. It is proved that the developed adaptive fuzzy control scheme guarantees the uniform ultimate boundedness of the closed-loop systems, and the tracking error converges to a small neighborhood of the origin. Simulation results and comparisons are provided to show the validity of the control strategy presented in this paper.

  19. A new measurement of the intergalactic temperature at z ˜ 2.55-2.95

    NASA Astrophysics Data System (ADS)

    Rorai, Alberto; Carswell, Robert F.; Haehnelt, Martin G.; Becker, George D.; Bolton, James S.; Murphy, Michael T.

    2018-03-01

    We present two measurements of the temperature-density relationship (TDR) of the intergalactic medium (IGM) in the redshift range 2.55 < z < 2.95 using a sample of 13 high-quality quasar spectra and high resolution numerical simulations of the IGM. Our approach is based on fitting the neutral hydrogen column density N_{H I} and the Doppler parameter b of the absorption lines in the Lyα forest. The first measurement is obtained using a novel Bayesian scheme that takes into account the statistical correlations between the parameters characterizing the lower cut-off of the b-N_{H I} distribution and the power-law parameters T_0 and γ describing the TDR. This approach yields T_0/10^3 K = 15.6 ± 4.4 and γ = 1.45 ± 0.17 independent of the assumed pressure smoothing of the small-scale density field. In order to explore the information contained in the overall b-N_{H I} distribution rather than only the lower cut-off, we obtain a second measurement based on a similar Bayesian analysis of the median Doppler parameter for separate column-density ranges of the absorbers. In this case, we obtain T_0/10^3 K = 14.6 ± 3.7 and γ = 1.37 ± 0.17 in good agreement with the first measurement. Our Bayesian analysis reveals strong anticorrelations between the inferred T_0 and γ for both methods as well as an anticorrelation of the inferred T_0 and the pressure smoothing length for the second method, suggesting that the measurement accuracy can in the latter case be substantially increased if independent constraints on the smoothing are obtained. Our results are in good agreement with other recent measurements of the thermal state of the IGM probing similar (over-)density ranges.
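
    For reference, the TDR parametrized by T_0 and γ is conventionally written T(Δ) = T_0 Δ^(γ−1), with Δ the gas overdensity; a trivial sketch using the first measurement's central values (the functional form is the standard one, not specific to this paper):

      def igm_temperature(delta, T0=15.6e3, gamma=1.45):
          """Temperature-density relation T = T0 * Delta**(gamma - 1), in K."""
          return T0 * delta ** (gamma - 1.0)

      # e.g. gas at twice the mean density: igm_temperature(2.0) ~ 2.1e4 K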

  20. Smoothing and Predicting Celestial Pole Offsets using a Kalman Filter and Smoother

    NASA Astrophysics Data System (ADS)

    Nastula, J.; Chin, T. M.; Gross, R. S.; Winska, M.; Winska, J.

    2017-12-01

    Since the early days of interplanetary spaceflight, accounting for changes in the Earth's rotation has been recognized as critical for accurate navigation. In the 1960s, tracking anomalies during the Ranger VII and VIII lunar missions were traced to errors in the Earth orientation parameters. As a result, Earth orientation calibration methods were improved to support the Mariner IV and V planetary missions. Today, accurate Earth orientation parameters are used to track and navigate every interplanetary spaceflight mission. The interplanetary spacecraft tracking and navigation teams at JPL require the UT1 and polar motion parameters, which are estimated by a Kalman filter that combines past measurements of these parameters and predicts their future evolution. A separate model was used to provide the nutation/precession components of the Earth's orientation, so variations caused by the free core nutation were not taken into account. For the highest accuracy, however, these variations must be considered. JPL therefore recently developed an approach based upon the use of a Kalman filter and smoother to provide smoothed and predicted celestial pole offsets (CPOs) to the interplanetary spacecraft tracking and navigation teams. The approach used at JPL and an evaluation of the accuracy of the predicted CPOs are given here.
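
    A hedged toy version of the idea: a random-walk state model for one celestial pole offset component, filtered forward and then predicted ahead. The actual JPL filter/smoother, its noise models, and its free-core-nutation treatment are far more elaborate; all parameters below are illustrative.

      import numpy as np

      def random_walk_kalman(obs, obs_var, q=1e-4, n_predict=10):
          """Scalar random-walk Kalman filter plus constant-state prediction."""
          x, P = obs[0], obs_var          # initialize from the first measurement
          states = []
          for z in obs[1:]:
              P += q                      # predict: random-walk process noise
              K = P / (P + obs_var)       # update: Kalman gain
              x += K * (z - x)
              P *= (1.0 - K)
              states.append(x)
          # A random-walk model forecasts the last state, with variance
          # growing by q at each prediction step.
          return np.array(states), np.full(n_predict, x)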

  1. Minimax Estimation of Functionals of Discrete Distributions

    PubMed Central

    Jiao, Jiantao; Venkat, Kartik; Han, Yanjun; Weissman, Tsachy

    2017-01-01

    We propose a general methodology for the construction and analysis of essentially minimax estimators for a wide class of functionals of finite dimensional parameters, and elaborate on the case of discrete distributions, where the support size S is unknown and may be comparable with or even much larger than the number of observations n. We treat the respective regions where the functional is nonsmooth and smooth separately. In the nonsmooth regime, we apply an unbiased estimator for the best polynomial approximation of the functional whereas, in the smooth regime, we apply a bias-corrected version of the maximum likelihood estimator (MLE). We illustrate the merit of this approach by thoroughly analyzing the performance of the resulting schemes for estimating two important information measures: 1) the entropy H(P) = ∑_{i=1}^{S} −p_i ln p_i and 2) F_α(P) = ∑_{i=1}^{S} p_i^α, α > 0. We obtain the minimax L2 rates for estimating these functionals. In particular, we demonstrate that our estimator achieves the optimal sample complexity n ≍ S/ln S for entropy estimation. We also demonstrate that the sample complexity for estimating F_α(P), 0 < α < 1, is n ≍ S^{1/α}/ln S, which can be achieved by our estimator but not the MLE. For 1 < α < 3/2, we show the minimax L2 rate for estimating F_α(P) is (n ln n)^{−2(α−1)} for infinite support size, while the maximum L2 rate for the MLE is n^{−2(α−1)}. For all the above cases, the behavior of the minimax rate-optimal estimators with n samples is essentially that of the MLE (plug-in rule) with n ln n samples, which we term "effective sample size enlargement." We highlight the practical advantages of our schemes for the estimation of entropy and mutual information. We compare our performance with various existing approaches, and demonstrate that our approach reduces running time and boosts the accuracy. Moreover, we show that the minimax rate-optimal mutual information estimator yielded by our framework leads to significant performance boosts over the Chow–Liu algorithm in learning graphical models. The wide use of information measure estimation suggests that the insights and estimators obtained in this paper could be broadly applicable. PMID:29375152
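
    To make the plug-in versus bias-corrected contrast concrete, here is the classical MLE (plug-in) entropy estimator alongside the textbook Miller-Madow bias correction; this standard correction is shown for illustration only and is not the minimax-optimal estimator constructed in the paper.

      import numpy as np

      def entropy_estimates(counts):
          """Plug-in and Miller-Madow entropy estimates (nats) from counts."""
          counts = np.asarray(counts, dtype=float)
          n = counts.sum()
          p = counts[counts > 0] / n
          h_plugin = -np.sum(p * np.log(p))               # MLE / plug-in
          s_observed = np.count_nonzero(counts)           # observed support size
          h_mm = h_plugin + (s_observed - 1) / (2.0 * n)  # Miller-Madow correction
          return h_plugin, h_mm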

  2. Long-Term Safety of Textured and Smooth Breast Implants.

    PubMed

    Calobrace, M Bradley; Schwartz, Michael R; Zeidler, Kamakshi R; Pittman, Troy A; Cohen, Robert; Stevens, W Grant

    2017-12-13

    In this review, the authors provide a 20-year review and comparison of implant options and describe the evolution of breast implant surface textures; compare available implant surfaces; present long-term safety data from the 10-year US-based Core clinical studies; list the key benefits and risks associated with smooth and textured implants; and provide perspectives on breast implant-associated anaplastic large cell lymphoma (BIA-ALCL). The authors explore the key benefits and risks associated with all available devices so that optimal and safe patient outcomes can be achieved.

  3. Development of a Control Optimization System for Real Time Monitoring of Managed Aquifer Recharge and Recovery Systems Using Intelligent Sensors

    NASA Astrophysics Data System (ADS)

    Smits, K. M.; Drumheller, Z. W.; Lee, J. H.; Illangasekare, T. H.; Regnery, J.; Kitanidis, P. K.

    2015-12-01

    Aquifers around the world show troubling signs of irreversible depletion and seawater intrusion as climate change, population growth, and urbanization lead to reduced natural recharge rates and overuse. Scientists and engineers have begun to revisit the technology of managed aquifer recharge and recovery (MAR) as a means to increase the reliability of the diminishing and increasingly variable groundwater supply. Unfortunately, MAR systems remain fraught with operational challenges related to the quality and quantity of recharged and recovered water, stemming from a lack of data-driven, real-time control. This research seeks to develop and validate a general simulation-based control optimization algorithm that relies on real-time data collected through embedded sensors and can be used to ease the operational challenges of MAR facilities. Experiments to validate the control algorithm were conducted at the laboratory scale in a two-dimensional synthetic aquifer under both homogeneous and heterogeneous packing configurations. The synthetic aquifer used well-characterized technical sands and the electrical conductivity signal of an inorganic conservative tracer as a surrogate measure for water quality. The synthetic aquifer was outfitted with an array of sensors and an autonomous pumping system. Experimental results verified the feasibility of the approach and suggested that the system can improve the operation of MAR facilities. The dynamic parameter inversion reduced the average error between the simulated and observed pressures by between 12.5% and 71.4%. The control optimization algorithm ran smoothly and generated optimal control decisions. Overall, results suggest that with some improvements to the inversion and interpolation algorithms, which can be further advanced through testing with laboratory experiments using sensors, the concept can successfully improve the operation of MAR facilities.

  4. Nonclassical states of light with a smooth P function

    NASA Astrophysics Data System (ADS)

    Damanet, François; Kübler, Jonas; Martin, John; Braun, Daniel

    2018-02-01

    There is a common understanding in quantum optics that nonclassical states of light are states that do not have a positive semidefinite and sufficiently regular Glauber-Sudarshan P function. Almost all known nonclassical states have P functions that are highly irregular, which makes working with them difficult and direct experimental reconstruction impossible. Here we introduce classes of nonclassical states with regular, non-positive-definite P functions. They are constructed by "puncturing" regular smooth positive P functions with negative Dirac-δ peaks or other sufficiently narrow smooth negative functions. We determine the parameter ranges for which such punctures are possible without losing the positivity of the state, the regimes yielding antibunching of light, and the expressions of the Wigner functions for all investigated punctured states. Finally, we propose some possible experimental realizations of such states.

  5. Vestibular-Related Frontal Cortical Areas and Their Roles in Smooth-Pursuit Eye Movements: Representation of Neck Velocity, Neck-Vestibular Interactions, and Memory-Based Smooth-Pursuit

    PubMed Central

    Fukushima, Kikuro; Fukushima, Junko; Warabi, Tateo

    2011-01-01

    Smooth-pursuit eye movements are voluntary responses to small slow-moving objects in the fronto-parallel plane. They evolved in primates, who possess high-acuity foveae, to ensure clear vision of the moving target. The primate frontal cortex contains two smooth-pursuit related areas: the caudal part of the frontal eye fields (FEF) and the supplementary eye fields (SEF). Both areas receive vestibular inputs. We review functional differences between the two areas in smooth-pursuit. Most FEF pursuit neurons signal pursuit parameters such as eye velocity and gaze-velocity, and are involved in canceling the vestibulo-ocular reflex by linear addition of vestibular and smooth-pursuit responses. In contrast, gaze-velocity signals are rarely represented in the SEF. Most FEF pursuit neurons receive neck velocity inputs, while discharge modulation during pursuit and trunk-on-head rotation adds linearly. Linear addition also occurs between neck velocity responses and vestibular responses during head-on-trunk rotation in a task-dependent manner. During cross-axis pursuit–vestibular interactions, vestibular signals effectively initiate predictive pursuit eye movements. Most FEF pursuit neurons discharge during the interaction training after the onset of pursuit eye velocity, making their involvement unlikely in the initial stages of generating predictive pursuit. Comparison of representative signals in the two areas and the results of chemical inactivation during a memory-based smooth-pursuit task indicate that they have different roles: the SEF plans smooth-pursuit, including working memory of motion direction, whereas the caudal FEF generates motor commands for pursuit eye movements. Patients with idiopathic Parkinson's disease were asked to perform this task, since impaired smooth-pursuit and visual working memory deficits during cognitive tasks have been reported in most patients. Preliminary results suggested specific roles of the basal ganglia in memory-based smooth-pursuit. PMID:22174706

  6. Analysis of Flatness Deviations for Austenitic Stainless Steel Workpieces after Efficient Surface Machining

    NASA Astrophysics Data System (ADS)

    Nadolny, K.; Kapłonek, W.

    2014-08-01

    The following work is an analysis of flatness deviations of a workpiece made of X2CrNiMo17-12-2 austenitic stainless steel. The workpiece surface was shaped using efficient machining techniques (milling, grinding, and smoothing). After the machining was completed, all surfaces underwent stylus measurements in order to obtain surface flatness and roughness parameters. For this purpose the stylus profilometer Hommel-Tester T8000 by Hommelwerke with HommelMap software was used. The research results are presented in the form of 2D surface maps, 3D surface topographies with extracted single profiles, Abbott-Firestone curves, and graphical studies of the Sk parameters. The results of these experimental tests demonstrated a possible correlation between flatness and roughness parameters, and enabled an analysis of changes in these parameters from shaping and rough grinding through to finish machining. The main novelty of this paper is a comprehensive analysis of measurement results obtained during a three-step machining process of austenitic stainless steel. Simultaneous analysis of the individual machining steps (milling, grinding, and smoothing) enabled a complementary assessment of the process of shaping the workpiece surface macro- and micro-geometry, giving special consideration to minimizing flatness deviations.

  7. Design and technology parameters influence on durability for heat exchangers tube to tubesheet joints

    NASA Astrophysics Data System (ADS)

    Ripeanu, R. G.

    2017-02-01

    The main failures of heat exchangers are corrosion of the tubes and jacket, tube blockage, and failure of the tube-to-tubesheet joints, also by corrosion. The most critical zone is the tube-to-tubesheet joint. Considering the types of tube-to-tubesheet joints, and in order to better respect conditions of tension and compression, this paper analyses the effect of tubesheet hole shape, smooth or grooved, on corrosion behavior. In the case of welding tubes to the tubesheet, the welding parameters modify corrosion behavior. Welded joints were produced using three welding regimes and corrosion-tested in two media, tap water and industrial water. Samples made of smooth tubes, finned tubes, and tubes coated with a passive product as applied by a heat exchanger manufacturer were also tested. For all samples, the roughness parameters were measured before and after the corrosion tests. The obtained corrosion rates show that stress values and their distribution along the joint modify the corrosion behavior. The optimum welding parameters were established in order to increase the joint durability. The paper shows that the passive product used was not properly chosen and that the technology of producing rolled-thread pipes diminishes tube durability by increasing the corrosion rate.

  8. Simultaneous versus sequential optimal experiment design for the identification of multi-parameter microbial growth kinetics as a function of temperature.

    PubMed

    Van Derlinden, E; Bernaerts, K; Van Impe, J F

    2010-05-21

    Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable.

  9. Options for Robust Airfoil Optimization under Uncertainty

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Li, Wu

    2002-01-01

    A robust optimization method is developed to overcome point-optimization at the sampled design points. This method combines the best features from several preliminary methods proposed by the authors and their colleagues. The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of spline control points as design variables yet the resulting airfoil shape does not need to be smoothed, and (3) it allows the user to make a tradeoff between the level of optimization and the amount of computing time consumed. For illustration purposes, the robust optimization method is used to solve a lift-constrained drag minimization problem for a two-dimensional (2-D) airfoil in Euler flow with 20 geometric design variables.

  10. Translator for Optimizing Fluid-Handling Components

    NASA Technical Reports Server (NTRS)

    Landon, Mark; Perry, Ernest

    2007-01-01

    A software interface has been devised to facilitate optimization of the shapes of valves, elbows, fittings, and other components used to handle fluids under extreme conditions. This software interface translates data files generated by PLOT3D (a NASA grid-based plotting-and- data-display program) and by computational fluid dynamics (CFD) software into a format in which the files can be read by Sculptor, which is a shape-deformation- and-optimization program. Sculptor enables the user to interactively, smoothly, and arbitrarily deform the surfaces and volumes in two- and three-dimensional CFD models. Sculptor also includes design-optimization algorithms that can be used in conjunction with the arbitrary-shape-deformation components to perform automatic shape optimization. In the optimization process, the output of the CFD software is used as feedback while the optimizer strives to satisfy design criteria that could include, for example, improved values of pressure loss, velocity, flow quality, mass flow, etc.

  11. Multi-parameter optimization of piezoelectric actuators for multi-mode active vibration control of cylindrical shells

    NASA Astrophysics Data System (ADS)

    Hu, K. M.; Li, Hua

    2018-07-01

    A novel technique for the multi-parameter optimization of distributed piezoelectric actuators is presented in this paper. The proposed method is designed to improve the performance of multi-mode vibration control in cylindrical shells. The optimization parameters of the actuator patch configuration include position, size, and tilt angle. The modal control force of tilted orthotropic piezoelectric actuators is derived and the multi-parameter cylindrical shell optimization model is established. The linear quadratic energy index is employed as the optimization criterion. A geometric constraint is proposed to prevent overlap between tilted actuators; this constraint is plugged into a genetic algorithm that searches for the optimal configuration parameters. A simply-supported closed cylindrical shell with two actuators serves as a case study. The vibration control efficiencies of various parameter sets are evaluated via frequency response and transient response simulations. The results show that the linear quadratic energy indexes of position and size optimization decreased by 14.0% compared to position optimization; those of position and tilt angle optimization decreased by 16.8%; and those of position, size, and tilt angle optimization decreased by 25.9%. This indicates that adding configuration optimization parameters is an efficient approach to improving the vibration control performance of piezoelectric actuators on shells.

  12. On the Pontryagin maximum principle for systems with delays. Economic applications

    NASA Astrophysics Data System (ADS)

    Kim, A. V.; Kormyshev, V. M.; Kwon, O. B.; Mukhametshin, E. R.

    2017-11-01

    The Pontryagin maximum principle [6] is the keystone of finite-dimensional optimal control theory [1, 2, 5]. Since its discovery, it has been important to extend the maximum principle to various classes of dynamical systems. In this paper we consider some aspects of the application of i-smooth analysis [3, 4] in the theory of the Pontryagin maximum principle [6] for systems with delays; the results obtained can be applied in elaborating optimal program controls for economic models with delays.

  13. Optimizing the G/T ratio of the DSS-13 34-meter beam-waveguide antenna

    NASA Technical Reports Server (NTRS)

    Esquivel, M. S.

    1992-01-01

    Calculations using Physical Optics computer software were done to optimize the gain-to-noise temperature (G/T) ratio of DSS-13, the DSN's 34-m beam-waveguide antenna, at X-band for operation with the ultra-low-noise amplifier maser system. A better G/T value was obtained by using a 24.2-dB far-field-gain smooth-wall dual-mode horn than by using the standard X-band 22.5-dB-gain corrugated horn.

  14. Estimating Optimal Transformations for Multiple Regression and Correlation.

    DTIC Science & Technology

    1982-07-01

    The optimal transformations are estimated by an alternating algorithm: we minimize (2.4) e²(θ, φ_1, …, φ_p) = E[θ(Y) − ∑_{j=1}^{p} φ_j(X_j)]², holding Eθ² = 1 and Eθ = Eφ_1 = ⋯ = Eφ_p = 0, through a series of single-function minimizations. A theorem (5.16) characterizes the optimal transformations for regression, together with a converse statement. The record also cites Gasser, T. and Rosenblatt, M. (eds.) (1979), Smoothing Techniques for Curve Estimation, Lecture Notes in Mathematics.
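
    A compact sketch of the alternating minimization for a single predictor (p = 1), using a crude quantile-bin smoother as the conditional-expectation estimate; the smoother, iteration count, and normalizations are illustrative choices, not the report's implementation.

      import numpy as np

      def bin_smooth(x, target, nbins=20):
          """Estimate E[target | x] with a quantile-bin smoother."""
          edges = np.quantile(x, np.linspace(0.0, 1.0, nbins + 1))
          idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, nbins - 1)
          means = np.array([target[idx == b].mean() if np.any(idx == b) else 0.0
                            for b in range(nbins)])
          return means[idx]

      def ace_single_predictor(x, y, n_iter=50):
          """Alternate phi(x) <- E[theta(y) | x] and theta(y) <- E[phi(x) | y]."""
          theta = (y - y.mean()) / y.std()
          for _ in range(n_iter):
              phi = bin_smooth(x, theta)
              phi -= phi.mean()                      # enforce E[phi] = 0
              theta = bin_smooth(y, phi)
              theta = (theta - theta.mean()) / theta.std()  # E=0, E[theta^2]=1
          return theta, phi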

  15. A comparison of back propagation and Generalized Regression Neural Networks performance in neutron spectrometry.

    PubMed

    Martínez-Blanco, Ma Del Rosario; Ornelas-Vargas, Gerardo; Solís-Sánchez, Luis Octavio; Castañeda-Miranada, Rodrigo; Vega-Carrillo, Héctor René; Celaya-Padilla, José M; Garza-Veloz, Idalia; Martínez-Fierro, Margarita; Ortiz-Rodríguez, José Manuel

    2016-11-01

    The process of unfolding the neutron energy spectrum has been a subject of research for many years. Monte Carlo, iterative methods, Bayesian theory, and the principle of maximum entropy are some of the methods used. The drawbacks associated with traditional unfolding procedures have motivated research into complementary approaches. Back Propagation Neural Networks (BPNN) have been applied with success in the neutron spectrometry and dosimetry domains; however, the structure and learning parameters are factors that highly impact network performance. In the ANN domain, the Generalized Regression Neural Network (GRNN) is one of the simplest neural networks in terms of network architecture and learning algorithm. The learning is instantaneous, requiring no time for training. In contrast to a BPNN, a GRNN is formed instantly with just a one-pass training on the development data. In the network development phase, the only hurdle is optimizing the hyper-parameter known as sigma, which governs the smoothness of the network. The aim of this work was to compare the performance of BPNN and GRNN in the solution of the neutron spectrometry problem. From the results obtained it can be observed that, despite producing very similar results, GRNN performs better than BPNN.
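
    A GRNN prediction is essentially Nadaraya-Watson kernel regression over the stored training set, with sigma as the single smoothing hyper-parameter; a minimal hedged sketch:

      import numpy as np

      def grnn_predict(X_train, y_train, X_query, sigma=0.5):
          """GRNN: Gaussian-kernel weighted average of the training targets."""
          d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
          W = np.exp(-d2 / (2.0 * sigma**2))    # pattern-layer activations
          return (W @ y_train) / W.sum(axis=1)  # summation / division layers

      # "Training" reduces to storing the data and choosing sigma,
      # e.g. by cross-validation on held-out spectra.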

  16. Neuromorphic learning of continuous-valued mappings from noise-corrupted data. Application to real-time adaptive control

    NASA Technical Reports Server (NTRS)

    Troudet, Terry; Merrill, Walter C.

    1990-01-01

    The ability of feed-forward neural network architectures to learn continuous-valued mappings in the presence of noise was demonstrated in relation to parameter identification and real-time adaptive control applications. An error function was introduced to help optimize parameter values such as the number of training iterations, observation time, sampling rate, and scaling of the control signal. The learning performance depended essentially on the degree of embodiment of the control law in the training data set and on the degree of uniformity of the probability distribution function of the data presented to the net during the training sequence. When a control law was corrupted by noise, the fluctuations of the training data biased the probability distribution function of the training data sequence. Only if the noise contamination is minimized and the degree of embodiment of the control law is maximized can a neural net develop a good representation of the mapping and be used as a neurocontroller. A multilayer net was trained with back-error-propagation to control a cart-pole system for linear and nonlinear control laws in the presence of data processing noise and measurement noise. The neurocontroller exhibited noise-filtering properties and was found to operate more smoothly than the teacher in the presence of measurement noise.

  17. Vector splines on the sphere with application to the estimation of vorticity and divergence from discrete, noisy data

    NASA Technical Reports Server (NTRS)

    Wahba, G.

    1982-01-01

    Vector smoothing splines on the sphere are defined. Theoretical properties are briefly alluded to. The appropriate Hilbert space norms used in a specific meteorological application are described and justified via a duality theorem. Numerical procedures for computing the splines as well as the cross validation estimate of two smoothing parameters are given. A Monte Carlo study is described which suggests the accuracy with which upper air vorticity and divergence can be estimated using measured wind vectors from the North American radiosonde network.
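
    The cross-validation machinery referenced here reduces, for a generic linear smoother with hat matrix A(λ), to minimizing the generalized cross-validation score GCV(λ) = n ||(I − A)y||² / [tr(I − A)]²; a hedged scalar sketch (the spherical vector-spline setting itself is substantially more involved):

      import numpy as np

      def gcv_score(B, y, D, lam):
          """GCV score for the penalized smoother with hat matrix A(lam)."""
          n = len(y)
          A = B @ np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T)
          r = y - A @ y
          return n * float(r @ r) / (n - np.trace(A)) ** 2

      # Choose the smoothing parameter on a grid, e.g.:
      # lam_best = min(np.logspace(-6, 2, 50), key=lambda l: gcv_score(B, y, D, l))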

  18. Analysis of unstable periodic orbits and chaotic orbits in the one-dimensional linear piecewise-smooth discontinuous map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajpathak, Bhooshan, E-mail: bhooshan@ee.iitb.ac.in; Pillai, Harish K., E-mail: hp@ee.iitb.ac.in; Bandyopadhyay, Santanu, E-mail: santanu@me.iitb.ac.in

    2015-10-15

    In this paper, we analytically examine the unstable periodic orbits and chaotic orbits of the 1-D linear piecewise-smooth discontinuous map. We explore the existence of unstable orbits and the effect of variation in parameters on their coexistence. Further, we show that the structure of these orbits differs from the well-known period-adding cascade structure associated with the stable periodic orbits of the same map. Finally, we analytically prove the existence of chaotic orbits for this map.
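
    For concreteness, one common normal form of this map is x_{n+1} = a x_n + μ for x_n < 0 and x_{n+1} = b x_n + μ + l for x_n ≥ 0, where l is the discontinuity gap; a hedged iteration sketch (the parameter names follow this normal form and may differ from the paper's notation):

      def iterate_map(x0, a, b, mu, ell, n_steps=1000):
          """Iterate the 1-D linear piecewise-smooth discontinuous map."""
          orbit = [x0]
          x = x0
          for _ in range(n_steps):
              x = a * x + mu if x < 0 else b * x + mu + ell
              orbit.append(x)
          return orbit

      # Periodic orbits can be found by composing the two affine branches in a
      # given symbol sequence and solving the resulting linear fixed-point
      # equation; the orbit is unstable when the product of slopes exceeds 1
      # in magnitude.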

  19. Evaluation of earthquake potential in China

    NASA Astrophysics Data System (ADS)

    Rong, Yufang

    I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity-, geologic slip rate-, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a kind of Gutenberg-Richter magnitude distribution with modifications at higher magnitude. The assumed magnitude distribution depends on three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimations, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model fits the earthquake data better than the GSHAP model. By smoothing the geodetic strain rate, another potential model was constructed and tested. I derived the upper magnitude limit from the special catalog, and assumed local "a-values" proportional to geodetic strain rates. "Prospective" tests show that the geodetic strain rate model is quite compatible with earthquakes. By assuming the smoothed seismicity model as a null hypothesis, I tested every other model against it. Test results indicate that the smoothed seismicity model performs best.

  20. A novel JEAnS analysis of the Fornax dwarf using evolutionary algorithms: mass follows light with signs of an off-centre merger

    NASA Astrophysics Data System (ADS)

    Diakogiannis, Foivos I.; Lewis, Geraint F.; Ibata, Rodrigo A.; Guglielmo, Magda; Kafle, Prajwal R.; Wilkinson, Mark I.; Power, Chris

    2017-09-01

    Dwarf galaxies, among the most dark matter dominated structures of our Universe, are excellent test-beds for dark matter theories. Unfortunately, mass modelling of these systems suffers from the well-documented mass-velocity anisotropy degeneracy. For the case of spherically symmetric systems, we describe a method for non-parametric modelling of the radial and tangential velocity moments. The method is a numerical velocity anisotropy 'inversion', with parametric mass models, where the radial velocity dispersion profile, σ_rr², is modelled as a B-spline, and the optimization is a three-step process that consists of (I) an evolutionary modelling to determine the mass model form and the best B-spline basis to represent σ_rr²; (II) an optimization of the smoothing parameters; and (III) a Markov chain Monte Carlo analysis to determine the physical parameters. The mass-anisotropy degeneracy is reduced into mass model inference, irrespective of kinematics. We test our method using synthetic data. Our algorithm constructs the best kinematic profile and discriminates between competing dark matter models. We apply our method to the Fornax dwarf spheroidal galaxy. Using a King brightness profile and testing various dark matter mass models, our model inference favours a simple mass-follows-light system. We find that the anisotropy profile of Fornax is tangential (β(r) < 0) and we estimate a total mass of M_{tot} = 1.613^{+0.050}_{-0.075} × 10^8 M_{⊙}, and a mass-to-light ratio of Υ_V = 8.93^{+0.32}_{-0.47} (M_{⊙}/L_{⊙}). The algorithm we present is a robust and computationally inexpensive method for non-parametric modelling of spherical clusters independent of the mass-anisotropy degeneracy.

  1. On the sensitivity of teleseismic full-waveform inversion to earth parametrization, initial model and acquisition design

    NASA Astrophysics Data System (ADS)

    Beller, S.; Monteiller, V.; Combe, L.; Operto, S.; Nolet, G.

    2018-02-01

    Full-waveform inversion (FWI) is not yet a mature imaging technology for lithospheric imaging from teleseismic data. Therefore, its promise and pitfalls need to be assessed more accurately according to the specifications of teleseismic experiments. Three important issues are related to (1) the choice of the lithospheric parametrization for optimization and visualization, (2) the initial model and (3) the acquisition design, in particular in terms of receiver spread and sampling. These three issues are investigated with a realistic synthetic example inspired by the CIFALPS experiment in the Western Alps. Isotropic elastic FWI is implemented with an adjoint-state formalism and aims to update three parameter classes by minimization of a classical least-squares difference-based misfit function. Three different subsurface parametrizations, combining density (ρ) with P and S wave speeds (V_p and V_s), P and S impedances (I_p and I_s), or elastic moduli (λ and μ), are first discussed based on their radiation patterns before their assessment by FWI. We conclude that the (ρ, λ, μ) parametrization provides the FWI models that best correlate with the true ones after recombining a posteriori the (ρ, λ, μ) optimization parameters into I_p and I_s. Owing to the low frequency content of teleseismic data, 1-D reference global models such as PREM provide sufficiently accurate initial models for FWI after the smoothing that is necessary to remove the imprint of the layering. Two kinds of station deployment are assessed: a coarse areal geometry versus a dense linear one. We unambiguously conclude that a coarse areal geometry should be favoured, as it dramatically increases the penetration in depth of the imaging as well as the horizontal resolution. This is because the areal geometry significantly increases local wavenumber coverage, through a broader sampling of the scattering and dip angles, compared to a linear deployment.

  2. Development & characterization of alumina coating by atmospheric plasma spraying

    NASA Astrophysics Data System (ADS)

    Sebastian, Jobin; Scaria, Abyson; Kurian, Don George

    2018-03-01

    Ceramic coatings are applied on metals to protect them from oxidation and corrosion at room as well as elevated temperatures. The service environment, mechanisms of protection, chemical and mechanical compatibility, application method, control of coating quality, and ability of the coating to be repaired are the factors that need to be considered while selecting the required coating. Coatings based on oxide materials provide a high degree of thermal insulation and protection against oxidation at high temperatures for the underlying substrate materials. These coatings are usually applied by flame or plasma spraying methods. Surface cleanliness needs to be ensured before spraying. Abrasive blasting can be used to provide the required surface roughness for good adhesion between the substrate and the coating. A bond coat such as nickel-chromium can be applied to the substrate material before spraying the oxide coating to avoid poor adhesion between the oxide coating and the metallic substrate. Plasma spraying produces oxide coatings of greater density, higher hardness, and smoother surface finish than the flame spraying process. An inert gas is often used for generation of the plasma so as to avoid oxidation of the substrate material. The work focuses on developing, characterizing, and optimizing the parameters used in Al2O3 coating of a transition stainless steel substrate material, with the aims of minimizing the wear rate and maximizing leak tightness using the plasma spray process. The experiment is designed using Taguchi's L9 orthogonal array. The parameters to be optimized are plasma voltage, spraying distance, and cooling jet pressure. The characterization techniques include micro-hardness and porosity tests followed by grey relational analysis of the results.

  3. Network intrusion detection based on a general regression neural network optimized by an improved artificial immune algorithm.

    PubMed

    Wu, Jianfa; Peng, Dahao; Li, Zhuping; Zhao, Li; Ling, Huanzhang

    2015-01-01

    To effectively and accurately detect and classify network intrusion data, this paper introduces a general regression neural network (GRNN) based on the artificial immune algorithm with elitist strategies (AIAE). The elitist archive and elitist crossover were combined with the artificial immune algorithm (AIA) to produce the AIAE-GRNN algorithm, with the aim of improving its adaptivity and accuracy. In this paper, the mean square errors (MSEs) were considered the affinity function. The AIAE was used to optimize the smooth factors of the GRNN; the optimal smooth factor was then solved and substituted into the trained GRNN, and the intrusive data were classified. The paper selected GRNNs separately optimized using a genetic algorithm (GA), particle swarm optimization (PSO), and fuzzy C-means clustering (FCM) to enable a comparison of these approaches. The results show that the AIAE-GRNN achieves a higher classification accuracy than PSO-GRNN, although its running time is long. FCM-GRNN and GA-GRNN were eliminated because of their deficiencies in terms of accuracy and convergence. To improve the running speed, the paper adopted principal component analysis (PCA) to reduce the dimensions of the intrusive data. With the reduction in dimensionality, the PCA-AIAE-GRNN loses less accuracy and converges better than the PCA-PSO-GRNN, and its running speed is relatively improved. The experimental results show that the AIAE-GRNN has higher robustness and accuracy than the other algorithms considered and can thus be used to classify the intrusive data.

  4. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    NASA Astrophysics Data System (ADS)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of the SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present the standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded into the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to conveniently view and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check consistency and to guide further deconvolution optimization. Deconvolved data, together with the loaded original measurement and SRM sensor response data, can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.
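
    The sketch below illustrates the shape of such an optimization on synthetic data: a Tikhonov-regularized deconvolution whose smoothness weight is tuned with the Nelder-Mead simplex, one of the two search modes named above. It is a toy stand-in, not the Oda and Xuan (2014) algorithm: the sensor response, the signal, and the truth-based objective are all synthetic assumptions (a real scheme would score a data-driven criterion instead).

    import numpy as np
    from scipy.optimize import minimize
    from scipy.linalg import convolution_matrix

    rng = np.random.default_rng(1)
    n = 120
    z = np.linspace(0, 1, n)
    true = np.sin(6 * np.pi * z) * np.exp(-3 * z)        # synthetic signal
    resp = np.exp(-0.5 * np.linspace(-3, 3, 21) ** 2)    # stand-in sensor response
    resp /= resp.sum()
    G = convolution_matrix(resp, n, mode="same")         # convolution operator
    meas = G @ true + 0.01 * rng.normal(size=n)          # smoothed measurement

    D = np.diff(np.eye(n), 2, axis=0)                    # 2nd-difference roughness

    def deconvolve(lam):
        # Tikhonov-regularized deconvolution with smoothness weight lam
        return np.linalg.solve(G.T @ G + lam * D.T @ D, G.T @ meas)

    def objective(log_lam):
        # In this synthetic demo the truth is known, so we score against it;
        # real software would use a data-driven criterion instead.
        return np.mean((deconvolve(10.0 ** log_lam[0]) - true) ** 2)

    res = minimize(objective, x0=[-2.0], method="Nelder-Mead")
    print("optimal smoothness ~ 10^%.2f" % res.x[0])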

  5. Optimization of a centrifugal compressor impeller using CFD: the choice of simulation model parameters

    NASA Astrophysics Data System (ADS)

    Neverov, V. V.; Kozhukhov, Y. V.; Yablokov, A. M.; Lebedev, A. A.

    2017-08-01

    Nowadays, optimization using computational fluid dynamics (CFD) plays an important role in the design process of turbomachines. However, for successful and productive optimization it is necessary to define the simulation model correctly and rationally. The article deals with the choice of grid and computational-domain parameters for the optimization of centrifugal compressor impellers using CFD. Finding and applying optimal parameters for the grid model, the computational domain, and the solver settings allows engineers to carry out high-accuracy modelling and to use computational capability effectively. The presented research was conducted using the Numeca Fine/Turbo package with the Spalart-Allmaras and Shear Stress Transport turbulence models. Two radial impellers were investigated: a high-pressure impeller at ψT=0.71 and a low-pressure impeller at ψT=0.43. The following parameters of the computational model were considered: the location of the inlet and outlet boundaries, the mesh topology, the mesh size, and the mesh parameter y+. The results demonstrate that choosing optimal parameters leads to a significant reduction in computational time: compared with non-optimal but visually similar parameters, optimal ones can reduce the calculation time by up to a factor of 4. It is also established that some parameters have a major impact on the modelling results.

  6. Generation of Plausible Hurricane Tracks for Preparedness Exercises

    DTIC Science & Technology

    2017-04-25

    wind extents are simulated by Poisson regression and temporal filtering. The un-optimized MATLAB code runs in less than a minute and is integrated into...of real hurricanes. After wind radii have been simulated for the entire track, median filtering, attenuation over land, and smoothing clean up the wind

  7. Improved algorithm for estimating optical properties of food and biological materials using spatially-resolved diffuse reflectance

    USDA-ARS?s Scientific Manuscript database

    In this research, the inverse algorithm for estimating optical properties of food and biological materials from spatially-resolved diffuse reflectance was optimized in terms of data smoothing, normalization and spatial region of reflectance profile for curve fitting. Monte Carlo simulation was used ...

  8. Recursive inverse kinematics for robot arms via Kalman filtering and Bryson-Frazier smoothing

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1987-01-01

    This paper applies linear filtering and smoothing theory to solve the inverse kinematics problem for serial multilink manipulators recursively. The problem is to find a set of joint angles that achieves a prescribed tip position and/or orientation. A widely applicable numerical search solution is presented. The approach finds the minimum of a generalized distance between the desired and the actual manipulator tip position and/or orientation. Both a first-order steepest-descent gradient search and a second-order Newton-Raphson search are developed. The optimal relaxation factor required for the steepest-descent method is computed recursively using an outward/inward procedure similar to those typically used for recursive inverse dynamics calculations. The second-order search requires evaluation of a gradient and an approximate Hessian. A Gauss-Markov approach is used to approximate the Hessian matrix in terms of products of first-order derivatives. This matrix is inverted recursively using a two-stage process of inward Kalman filtering followed by outward smoothing. This two-stage process is analogous to that recently developed by the author to solve the forward dynamics problem for serial manipulators by means of spatial filtering and smoothing.
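
    A planar two-link arm makes the first-order variant concrete: the generalized distance here is the squared tip-position error, whose gradient is 2 J^T times the error. This is a minimal sketch under assumed link lengths and a fixed relaxation factor; the paper computes the optimal relaxation factor recursively instead.

    import numpy as np

    L1, L2 = 1.0, 0.8            # link lengths, chosen for illustration

    def tip(theta):
        """Tip position of a planar two-link arm."""
        t1, t2 = theta
        return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                         L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

    def jacobian(theta):
        t1, t2 = theta
        return np.array([
            [-L1 * np.sin(t1) - L2 * np.sin(t1 + t2), -L2 * np.sin(t1 + t2)],
            [ L1 * np.cos(t1) + L2 * np.cos(t1 + t2),  L2 * np.cos(t1 + t2)],
        ])

    def ik_steepest_descent(target, theta0, alpha=0.1, tol=1e-10, iters=2000):
        """Minimize ||tip(theta) - target||^2 by steepest descent; a fixed
        relaxation factor alpha stands in for the recursively computed
        optimal factor of the paper."""
        theta = np.array(theta0, dtype=float)
        for _ in range(iters):
            err = tip(theta) - target
            if err @ err < tol:
                break
            theta -= alpha * 2.0 * jacobian(theta).T @ err   # gradient step
        return theta

    theta = ik_steepest_descent(target=np.array([1.2, 0.6]), theta0=[0.3, 0.3])
    print("joint angles:", theta, "tip:", tip(theta))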

  9. Contrast and assimilation in motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2007-09-01

    The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
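
    Schematically, the two computations can be written as follows, where v_t and v_c denote target and context velocities and the weights are free parameters of this sketch rather than values from the study:

    \hat{v}_{\mathrm{perceived}} = v_t - w_c \, v_c \quad \text{(motion contrast: context subtracted)}

    \hat{v}_{\mathrm{pursuit}} = \frac{v_t + w_a \, v_c}{1 + w_a} \quad \text{(motion assimilation: motion average)}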

  10. Smoothing Strategies Combined with ARIMA and Neural Networks to Improve the Forecasting of Traffic Accidents

    PubMed Central

    Rodríguez, Nibaldo

    2014-01-01

    Two smoothing strategies combined with autoregressive integrated moving average (ARIMA) and autoregressive neural network (ANN) models to improve the forecasting of time series are presented. The forecasting strategy is implemented in two stages. In the first stage, the time series is smoothed using either 3-point moving-average smoothing or singular value decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs for one-step-ahead time series forecasting are used. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents in the Valparaíso region of Chile from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO, with a MAPE of 15.51%. PMID:25243200
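
    The first-stage smoothing and the reported error metric are simple to reproduce. Below is a minimal sketch of the 3-point moving average and the MAPE score on hypothetical weekly counts; the second stage (ARIMA or an ANN) would then be fit to the smoothed series rather than the raw one.

    import numpy as np

    def moving_average_3(x):
        """Stage 1: 3-point moving-average smoothing (endpoints kept as-is)."""
        x = np.asarray(x, dtype=float)
        s = x.copy()
        s[1:-1] = (x[:-2] + x[1:-1] + x[2:]) / 3.0
        return s

    def mape(actual, forecast):
        """Mean absolute percentage error, the accuracy metric quoted above."""
        actual, forecast = np.asarray(actual), np.asarray(forecast)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    # Illustrative weekly accident counts, not data from the study.
    x = np.array([12, 15, 11, 14, 18, 16, 13, 17, 19, 15], dtype=float)
    s = moving_average_3(x)
    print(np.round(s, 2), "naive MAPE:", round(mape(x[1:], s[:-1]), 2), "%")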

  11. Using High Resolution Design Spaces for Aerodynamic Shape Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Li, Wu; Padula, Sharon

    2004-01-01

    This paper explains why high-resolution design spaces encourage traditional airfoil optimization algorithms to generate noisy shape modifications, which lead to inaccurate linear predictions of aerodynamic coefficients and potential failure of descent methods. By using auxiliary drag constraints for simultaneous drag reduction at all design points, and the least shape distortion needed to achieve the targeted drag reduction, an improved algorithm generates relatively smooth optimal airfoils with no severe off-design performance degradation over a range of flight conditions, in high-resolution design spaces parameterized by cubic B-spline functions. Simulation results using FUN2D in Euler flows are included to show the capability of the robust aerodynamic shape optimization method over a range of flight conditions.

  12. Numerical optimization methods for controlled systems with parameters

    NASA Astrophysics Data System (ADS)

    Tyatyushkin, A. I.

    2017-10-01

    First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method, based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and with parameters entering the right-hand sides of the controlled system and the initial conditions is considered. This complicated problem is reduced to a mathematical programming problem, followed by a search for the optimal parameter values and control functions using a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
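
    A minimal sketch of the first-order approach: the control parameters of a toy linear system are tuned by conjugate-gradient search to hit a terminal state. The plant, the parameterization u(t) = p0 + p1*t, and the Euler integration are illustrative assumptions, not the paper's setup.

    import numpy as np
    from scipy.optimize import minimize

    A = np.array([[0.0, 1.0], [-1.0, -0.2]])   # illustrative plant
    B = np.array([0.0, 1.0])
    x_target = np.array([1.0, 0.0])
    T, steps = 2.0, 200

    def simulate(p):
        """Euler-integrate x' = A x + B u(t), with u(t) = p0 + p1*t."""
        x, dt = np.zeros(2), T / steps
        for k in range(steps):
            t = k * dt
            x = x + dt * (A @ x + B * (p[0] + p[1] * t))
        return x

    def cost(p):
        e = simulate(p) - x_target
        return e @ e

    # Conjugate-gradient search over the control parameters, as in the
    # first-order method described above (gradient by finite differences).
    res = minimize(cost, x0=[0.0, 0.0], method="CG")
    print("optimal parameters:", res.x, "cost:", res.fun)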

  13. A modular approach to large-scale design optimization of aerospace systems

    NASA Astrophysics Data System (ADS)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft components, providing differentiability. An unstructured quadrilateral mesh generation algorithm is also developed to automate the creation of detailed meshes for aircraft structures, and a mesh convergence study is performed to verify that the quality of the mesh is maintained as it is refined. As a demonstration, high-fidelity aerostructural analysis is performed for two unconventional configurations with detailed structures included, and aerodynamic shape optimization is applied to the truss-braced wing, which finds and eliminates a shock in the region bounded by the struts and the wing.

  14. [Correlation between physical characteristics of sticks and quality of traditional Chinese medicine pills prepared by plastic molded method].

    PubMed

    Wang, Ling; Xian, Jiechen; Hong, Yanlong; Lin, Xiao; Feng, Yi

    2012-05-01

    To quantify the physical characteristics of sticks of traditional Chinese medicine (TCM) honeyed pills prepared by the plastic molded method, and to correlate the adhesiveness- and plasticity-related parameters of sticks with pill quality, in order to find the major parameters impacting pill quality and their appropriate ranges. Sticks were measured with a texture analyzer for physical characteristic parameters such as hardness and compression behavior, and pill quality was assessed by visual evaluation. The correlation between the two datasets was determined by stepwise discriminant analysis. The stick parameter l(CD) depicts adhesiveness, with the discriminant equation Y0 - Y1 = 6.415 - 41.594l(CD). When Y0 < Y1, pills separated well; when Y0 > Y1, pills adhered to each other. The parameters l(CD), l(AC), Ar, and Tr depict the smoothness of pills, with the discriminant equation Z0 - Z1 = -195.318 + 78.79l(AC) - 3258.982Ar + 3437.935Tr. When Z0 < Z1, pill surfaces were smooth; when Z0 > Z1, they were rough. The stepwise discriminant analysis shows a clear correlation between the key stick parameters l(CD), l(AC), Ar, and Tr and the appearance quality of the pills, defining the molding process for preparing pills by the plastic molded method and the qualifying ranges of the key physical parameters characterizing intermediate sticks, in order to provide a theoretical basis for prescription screening and adjustment of technical parameters.
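
    The two reported discriminant equations translate directly into decision rules. The sketch below evaluates them for hypothetical stick measurements; the input values are made up for illustration, only the coefficients come from the abstract.

    def adhesiveness_ok(l_cd):
        """Adhesiveness discriminant reported above:
        Y0 - Y1 = 6.415 - 41.594*l_CD; pills separate well when Y0 < Y1."""
        return 6.415 - 41.594 * l_cd < 0

    def surface_smooth(l_ac, a_r, t_r):
        """Smoothness discriminant: Z0 - Z1 = -195.318 + 78.79*l_AC
        - 3258.982*Ar + 3437.935*Tr; the surface is smooth when Z0 < Z1."""
        return -195.318 + 78.79 * l_ac - 3258.982 * a_r + 3437.935 * t_r < 0

    # Hypothetical stick measurements from the texture analyzer:
    print(adhesiveness_ok(l_cd=0.20), surface_smooth(l_ac=2.1, a_r=0.05, t_r=0.04))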

  15. Non-Linear Relationship between Economic Growth and CO2 Emissions in China: An Empirical Study Based on Panel Smooth Transition Regression Models

    PubMed Central

    Wang, Zheng-Xin; Hao, Peng; Yao, Pei-Yi

    2017-01-01

    The non-linear relationship between provincial economic growth and carbon emissions is investigated by using panel smooth transition regression (PSTR) models. The research indicates that, on the condition of separately taking Gross Domestic Product per capita (GDPpc), energy structure (Es), and urbanisation level (Ul) as transition variables, three models all reject the null hypothesis of a linear relationship, i.e., a non-linear relationship exists. The results show that the three models all contain only one transition function but different numbers of location parameters. The model taking GDPpc as the transition variable has two location parameters, while the other two models separately considering Es and Ul as the transition variables both contain one location parameter. The three models applied in the study all favourably describe the non-linear relationship between economic growth and CO2 emissions in China. It also can be seen that the conversion rate of the influence of Ul on per capita CO2 emissions is significantly higher than those of GDPpc and Es on per capita CO2 emissions. PMID:29236083
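
    For reference, the standard two-regime PSTR form that such studies estimate is shown below; this is the textbook formulation, not reproduced from the paper, with y_it per capita CO2 emissions, q_it the transition variable (GDPpc, Es, or Ul), γ the slope, m the number of location parameters, and c_j the location parameters:

    y_{it} = \mu_i + \beta_0' x_{it} + \beta_1' x_{it} \, g(q_{it}; \gamma, c) + u_{it}

    g(q_{it}; \gamma, c) = \left( 1 + \exp\!\left( -\gamma \prod_{j=1}^{m} (q_{it} - c_j) \right) \right)^{-1}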

  16. A least-squares parameter estimation algorithm for switched hammerstein systems with applications to the VOR

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Kearney, Robert E.; Galiana, Henrietta L.

    2005-01-01

    A "Multimode" or "switched" system is one that switches between various modes of operation. When a switch occurs from one mode to another, a discontinuity may result followed by a smooth evolution under the new regime. Characterizing the switching behavior of these systems is not well understood and, therefore, identification of multimode systems typically requires a preprocessing step to classify the observed data according to a mode of operation. A further consequence of the switched nature of these systems is that data available for parameter estimation of any subsystem may be inadequate. As such, identification and parameter estimation of multimode systems remains an unresolved problem. In this paper, we 1) show that the NARMAX model structure can be used to describe the impulsive-smooth behavior of switched systems, 2) propose a modified extended least squares (MELS) algorithm to estimate the coefficients of such models, and 3) demonstrate its applicability to simulated and real data from the Vestibulo-Ocular Reflex (VOR). The approach will also allow the identification of other nonlinear bio-systems, suspected of containing "hard" nonlinearities.

  17. Wind-Tunnel Study of Scalar Transfer Phenomena for Surfaces of Block Arrays and Smooth Walls with Dry Patches

    NASA Astrophysics Data System (ADS)

    Chung, Juyeon; Hagishima, Aya; Ikegaya, Naoki; Tanimoto, Jun

    2015-11-01

    We report the results of a wind-tunnel experiment measuring the scalar transfer efficiency of three types of surfaces (wet street surfaces of cube arrays, wet smooth surfaces with dry patches, and fully wet smooth surfaces) to examine the effects of roughness topography and scalar source allocation. Scalar transfer coefficients defined by the source area, C_E,wet, for an underlying wet street surface of dry block arrays show a convex trend against the block density λ_p. Comparison with past data, and results for wet smooth surfaces including dry patches, reveal that the positive peak of C_E,wet with increasing λ_p is caused by reduced horizontal advection due to block roughness and enhanced evaporation due to a heterogeneous scalar source distribution. In contrast, scalar transfer coefficients defined by a lot area including wet and dry areas, C_E,lot, for smooth surfaces with dry patches indicate enhanced evaporation compared to the fully wet smooth surface (the oasis effect) for all three dry plan-area ratios up to 31%. Relationships between the local Sherwood and Reynolds numbers derived from the experimental data suggest that the attenuation of C_E,wet for a wet street of cube arrays with streamwise distance is weaker than for a wet smooth surface because of canopy flow around the blocks. The ratio of roughness lengths for momentum and scalar, B^{-1}, was calculated from the observational data. The result implies that B^{-1} possibly increases with block roughness and decreases with the partitioning of the scalar boundary layer by dry patches.

  18. Machining Parameters Optimization using Hybrid Firefly Algorithm and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Farahlina Johari, Nur; Zain, Azlan Mohd; Haszlinna Mustaffa, Noorfa; Udin, Amirmudin

    2017-09-01

    The Firefly Algorithm (FA) is a metaheuristic inspired by the flashing behavior of fireflies and the phenomenon of bioluminescent communication; in this research it is used to optimize the machining parameters (feed rate, depth of cut, and spindle speed). The algorithm is hybridized with Particle Swarm Optimization (PSO) to explore the search space more effectively. The objective function of previous research is used to optimize the machining parameters in a turning operation. The optimal cutting parameters estimated by FA, which lead to minimum surface roughness, are validated using an ANOVA test.
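
    To make the FA side concrete, the sketch below runs the core firefly update (attractiveness decaying with distance, plus a random walk) on a hypothetical surrogate for surface roughness. The objective function, bounds, and FA constants are illustrative assumptions, and the PSO hybridization is omitted.

    import numpy as np

    rng = np.random.default_rng(42)

    def roughness(p):
        """Hypothetical surrogate for surface roughness Ra as a function of
        (feed rate, depth of cut, spindle speed); it stands in for the
        empirical objective of the cited work."""
        f, d, s = p
        return 1.0 + 4.0 * f ** 2 + 0.8 * d - 0.002 * s + 1.5e-6 * s ** 2

    lo = np.array([0.05, 0.5, 200.0])         # illustrative parameter bounds
    hi = np.array([0.50, 2.0, 1000.0])
    X = lo + (hi - lo) * rng.random((15, 3))  # 15 fireflies

    beta0, gamma, alpha = 1.0, 1.0, 0.05
    for _ in range(100):
        I = np.array([roughness(x) for x in X])   # brightness once per sweep
        for i in range(len(X)):
            for j in range(len(X)):
                if I[j] < I[i]:                   # move toward brighter firefly
                    r2 = np.sum(((X[i] - X[j]) / (hi - lo)) ** 2)
                    X[i] += beta0 * np.exp(-gamma * r2) * (X[j] - X[i])
                    X[i] += alpha * (hi - lo) * (rng.random(3) - 0.5)
                    X[i] = np.clip(X[i], lo, hi)
    best = X[np.argmin([roughness(x) for x in X])]
    print("best (feed, depth, speed):", np.round(best, 3))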

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muslimov, A. E., E-mail: amuslimov@mail.ru; Butashin, A. V.; Kanevsky, V. M.

    The (001) cleavage surface of a vanadium pentoxide (V2O5) crystal has been studied by scanning tunneling microscopy (STM). It is shown that the surface is not reconstructed; the STM image allows the geometric lattice parameters to be determined with high accuracy. The nanostructure formed on the (001) cleavage surface of the crystal consists of atomically smooth steps with heights that are multiples of the unit-cell parameter c = 4.37 Å. V2O5 crystal cleavages can be used as references for calibrating a scanning tunneling microscope under atmospheric conditions, both in the (x, y) surface plane and normal to the sample surface (along the z axis). It is found that the terrace surface is not perfectly atomically smooth; its roughness is estimated to be ~0.5 Å. This circumstance may introduce an additional error into the microscope calibration along the z coordinate.
