Sample records for variable problem parameters

  1. Heat and mass transfer of Williamson nanofluid flow yield by an inclined Lorentz force over a nonlinear stretching sheet

    NASA Astrophysics Data System (ADS)

    Khan, Mair; Malik, M. Y.; Salahuddin, T.; Hussian, Arif.

    2018-03-01

    The present analysis is devoted to exploring the computational solution of the problem of variable viscosity and inclined Lorentz force effects on Williamson nanofluid flow over a stretching sheet. The viscosity is assumed to vary as a linear function of temperature. The governing mathematical model, a system of nonlinear PDEs, is converted into ODEs by applying suitable transformations. Computational solutions are then obtained with an efficient numerical technique, the shooting method. The effects of the controlling parameters, i.e. the stretching index, inclination angle, Hartmann number, Weissenberg number, variable viscosity parameter, mixed convection parameter, Brownian motion parameter, Prandtl number, Lewis number, thermophoresis parameter and chemically reactive species parameter, on the concentration, temperature and velocity profiles are examined. Additionally, the friction factor coefficient, Nusselt number and Sherwood number are described with the help of graphs as well as tables versus the flow controlling parameters.
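
    To illustrate the shooting technique this record relies on, the sketch below applies it to the classical Blasius boundary-layer similarity equation rather than the full Williamson nanofluid system, which is not given in the record. The far-field condition f'(∞) = 1 is enforced by root-finding on the unknown wall value f''(0); SciPy is assumed available.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        def blasius_rhs(eta, s):
            f, fp, fpp = s
            return [fp, fpp, -0.5 * f * fpp]      # Blasius similarity equation f''' + 0.5 f f'' = 0

        def residual(fpp0, eta_max=10.0):
            # integrate from the wall with a guessed f''(0) and return the miss in f'(inf) = 1
            sol = solve_ivp(blasius_rhs, (0.0, eta_max), [0.0, 0.0, fpp0], rtol=1e-8, atol=1e-10)
            return sol.y[1, -1] - 1.0

        fpp0 = brentq(residual, 0.1, 1.0)          # shoot on the unknown wall shear parameter
        print(f"f''(0) = {fpp0:.4f}")              # about 0.3321 for the Blasius problem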

  2. A black box optimization approach to parameter estimation in a model for long/short term variations dynamics of commodity prices

    NASA Astrophysics Data System (ADS)

    De Santis, Alberto; Dellepiane, Umberto; Lucidi, Stefano

    2012-11-01

    In this paper we investigate the estimation problem for a model of commodity prices. This model is a stochastic state space dynamical model, and the problem unknowns are the state variables and the system parameters. Data are represented by commodity spot prices, since time series of futures contracts are very seldom freely available. Both the joint likelihood function of the system (state variables and parameters) and the marginal likelihood function (with the state variables eliminated) are addressed.

  3. An efficient variable projection formulation for separable nonlinear least squares problems.

    PubMed

    Gan, Min; Li, Han-Xiong

    2014-05-01

    We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than in previous formulations. The Levenberg-Marquardt algorithm using the finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
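
    A minimal sketch of the variable projection idea for a separable model y ≈ Σ_j c_j exp(-a_j t): the linear amplitudes are eliminated by linear least squares inside the residual, and only the nonlinear decay rates are passed to a Levenberg-Marquardt solver with finite-difference derivatives. This is generic illustration code, not the matrix-decomposition formulation proposed in the record.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 4.0, 200)
        y = 2.0 * np.exp(-1.0 * t) + 0.5 * np.exp(-3.0 * t) + 0.01 * rng.standard_normal(t.size)

        def basis(alpha):
            # columns are the nonlinear basis functions exp(-alpha_j * t)
            return np.exp(-np.outer(t, alpha))

        def projected_residual(alpha):
            # eliminate the linear amplitudes by linear least squares at every iteration
            Phi = basis(alpha)
            c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
            return Phi @ c - y

        fit = least_squares(projected_residual, x0=[0.5, 2.0], method="lm")   # only nonlinear parameters iterated
        alpha_hat = fit.x
        c_hat, *_ = np.linalg.lstsq(basis(alpha_hat), y, rcond=None)
        print("decay rates:", alpha_hat, "amplitudes:", c_hat)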

  4. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.

  5. On the orbital evolution of radiating binary systems

    NASA Astrophysics Data System (ADS)

    Bekov, A. A.; Momynov, S. B.

    2018-05-01

    The evolution of the dynamic parameters of radiating binary systems with variable mass is studied. As the dynamic model, the problem of two gravitating and radiating bodies is considered, taking into account the gravitational attraction and the light pressure of the interacting bodies, with the additional assumption that their masses vary isotropically. The problem combines the Gylden-Meshchersky problem, which acquires a new physical meaning, and the two-body photogravitational Radzievsky problem. Unlike the Keplerian case, the evolving orbit is described by varying orbital elements, the orbital parameter and eccentricity, defined by the mass parameter µ(t), the area integral C and the quasi-integral of energy h(t). Adiabatic invariants of the problem, which are of interest for the slow evolution of orbits, are determined. The general course of the evolution of the orbits of radiating binary systems is determined by the change of the parameter µ(t) and the total energy of the system.

  6. Aerodynamic optimization by simultaneously updating flow variables and design parameters with application to advanced propeller designs

    NASA Technical Reports Server (NTRS)

    Rizk, Magdi H.

    1988-01-01

    A scheme is developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The scheme updates the design parameter iterative solutions and the flow variable iterative solutions simultaneously. It is applied to an advanced propeller design problem with the Euler equations used as the flow governing equations. The scheme's accuracy, efficiency and sensitivity to the computational parameters are tested.

  7. A reduced successive quadratic programming strategy for errors-in-variables estimation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.

    Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors in variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases at least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.
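
    The reduced SQP strategy of this record is not reproduced here; as a much smaller illustration of errors-in-variables fitting, the sketch below uses orthogonal distance regression from SciPy, which likewise weights measurement errors on both the inputs and the outputs. The data and error levels are made up for the example.

        import numpy as np
        from scipy import odr

        rng = np.random.default_rng(1)
        x_true = np.linspace(0.0, 10.0, 40)
        x_obs = x_true + 0.3 * rng.standard_normal(x_true.size)    # error in the "independent" variable
        y_obs = 2.5 * x_true + 1.0 + 0.5 * rng.standard_normal(x_true.size)

        def linear(beta, x):
            return beta[0] * x + beta[1]

        data = odr.RealData(x_obs, y_obs, sx=0.3, sy=0.5)           # measurement errors on both variables
        fit = odr.ODR(data, odr.Model(linear), beta0=[1.0, 0.0]).run()
        print("slope and intercept:", fit.beta)                     # fit accounting for errors in x and y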

  8. Parametri, Variabili e altro: Un ripensamento su come questi concetti sono presentati in classe (Parameters, Variables, and More: A Rethinking of How These Concepts Are Presented in Class).

    ERIC Educational Resources Information Center

    Chiarugi, Ivana; And Others

    1995-01-01

    This paper considers the problem of variables, in particular parameters, analyzes how these concepts are presented in textbooks, comments on paradigms of exercises in which parameters intervene, and points out difficulties encountered by students. Discusses results of teacher interviews concerning their dealing with parameters in class.…

  9. Bayesian LASSO, scale space and decision making in association genetics.

    PubMed

    Pasanen, Leena; Holmström, Lasse; Sillanpää, Mikko J

    2015-01-01

    LASSO is a penalized regression method that facilitates model fitting in situations where there are as many, or even more explanatory variables than observations, and only a few variables are relevant in explaining the data. We focus on the Bayesian version of LASSO and consider four problems that need special attention: (i) controlling false positives, (ii) multiple comparisons, (iii) collinearity among explanatory variables, and (iv) the choice of the tuning parameter that controls the amount of shrinkage and the sparsity of the estimates. The particular application considered is association genetics, where LASSO regression can be used to find links between chromosome locations and phenotypic traits in a biological organism. However, the proposed techniques are relevant also in other contexts where LASSO is used for variable selection. We separate the true associations from false positives using the posterior distribution of the effects (regression coefficients) provided by Bayesian LASSO. We propose to solve the multiple comparisons problem by using simultaneous inference based on the joint posterior distribution of the effects. Bayesian LASSO also tends to distribute an effect among collinear variables, making detection of an association difficult. We propose to solve this problem by considering not only individual effects but also their functionals (i.e. sums and differences). Finally, whereas in Bayesian LASSO the tuning parameter is often regarded as a random variable, we adopt a scale space view and consider a whole range of fixed tuning parameters, instead. The effect estimates and the associated inference are considered for all tuning parameters in the selected range and the results are visualized with color maps that provide useful insights into data and the association problem considered. The methods are illustrated using two sets of artificial data and one real data set, all representing typical settings in association genetics.
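
    A small frequentist stand-in for the scale space idea described above: ordinary LASSO (scikit-learn) is fit over a whole range of fixed tuning parameters instead of a single cross-validated one, and the surviving effects are reported for each value. The simulated marker matrix and effect sizes are illustrative only; the Bayesian machinery and simultaneous inference of the record are not implemented here.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(2)
        n, p = 100, 300                                   # more candidate markers than observations
        X = rng.standard_normal((n, p))
        beta = np.zeros(p)
        beta[[10, 50, 200]] = [2.0, -1.5, 1.0]            # only a few true effects
        y = X @ beta + rng.standard_normal(n)

        for alpha in np.logspace(-2, 0, 9):               # a whole range of fixed tuning parameters
            coefs = Lasso(alpha=alpha, max_iter=20000).fit(X, y).coef_
            print(f"alpha = {alpha:6.3f}   selected variables: {np.flatnonzero(coefs)}")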

  10. Multiple regression for physiological data analysis: the problem of multicollinearity.

    PubMed

    Slinker, B K; Glantz, S A

    1985-07-01

    Multiple linear regression, in which several predictor variables are related to a response variable, is a powerful statistical tool for gaining quantitative insight into complex in vivo physiological systems. For these insights to be correct, all predictor variables must be uncorrelated. However, in many physiological experiments the predictor variables cannot be precisely controlled and thus change in parallel (i.e., they are highly correlated). There is a redundancy of information about the response, a situation called multicollinearity, that leads to numerical problems in estimating the parameters in regression equations; the parameters are often of incorrect magnitude or sign or have large standard errors. Although multicollinearity can be avoided with good experimental design, not all interesting physiological questions can be studied without encountering multicollinearity. In these cases various ad hoc procedures have been proposed to mitigate multicollinearity. Although many of these procedures are controversial, they can be helpful in applying multiple linear regression to some physiological problems.
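
    The sketch below reproduces the multicollinearity symptom described in this record on synthetic data: two nearly identical predictors are both related to the response, and the ordinary least-squares parameter covariance shows the inflated standard errors (and unstable magnitudes or signs) that result.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 200
        x1 = rng.standard_normal(n)
        x2 = x1 + 0.05 * rng.standard_normal(n)            # second predictor nearly collinear with the first
        y = 1.0 * x1 + 1.0 * x2 + rng.standard_normal(n)

        X = np.column_stack([np.ones(n), x1, x2])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        sigma2 = resid @ resid / (n - X.shape[1])
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
        print("coefficients:   ", np.round(coef, 2))
        print("standard errors:", np.round(se, 2))          # errors on x1 and x2 are inflated by the redundancy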

  11. A random utility model of delay discounting and its application to people with externalizing psychopathology.

    PubMed

    Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R

    2016-10-01

    Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter, a measure of delay discounting, was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference point method might be inappropriate. Second, the estimation procedure using the R² measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has 2 parameters, 1 for delay discounting and 1 for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
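
    A hedged sketch of a random utility formulation in the spirit of this record: hyperbolically discounted value differences are passed through a logistic choice rule with a separate choice-variability parameter, and both parameters are recovered by maximum likelihood. The task design, parameter values, and likelihood details are invented for illustration and are not taken from the published data set.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        n_trials = 300
        ss_amount = rng.uniform(10.0, 95.0, n_trials)                    # hypothetical smaller-sooner amounts
        delay = rng.choice([7.0, 30.0, 90.0, 180.0, 365.0], n_trials)    # delay of a fixed $100 reward, in days

        def p_choose_delayed(k, beta, ss, d, larger=100.0):
            v_delayed = larger / (1.0 + k * d)                           # hyperbolic discounting
            return 1.0 / (1.0 + np.exp(-beta * (v_delayed - ss)))        # logistic (random utility) choice rule

        choices = rng.random(n_trials) < p_choose_delayed(0.02, 0.3, ss_amount, delay)  # simulated participant

        def neg_log_lik(log_params):
            k, beta = np.exp(log_params)                                 # log parameterization keeps k, beta > 0
            p = np.clip(p_choose_delayed(k, beta, ss_amount, delay), 1e-12, 1 - 1e-12)
            return -np.sum(np.where(choices, np.log(p), np.log(1.0 - p)))

        fit = minimize(neg_log_lik, x0=np.log([0.01, 0.1]))
        print("recovered discounting rate and choice variability:", np.exp(fit.x))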

  12. SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.

    PubMed

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru

    2014-01-01

    The support vector machine (SVM) has recently shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM functions well only on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases.

  13. SVM-RFE Based Feature Selection and Taguchi Parameters Optimization for Multiclass SVM Classifier

    PubMed Central

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W. M.; Li, R. K.; Jiang, Bo-Ru

    2014-01-01

    The support vector machine (SVM) has recently shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM functions well only on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier in order to optimize the parameters C and γ and increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can exceed 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases. PMID:25295306
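
    Since the Dermatology and Zoo databases are not bundled with common Python libraries, the sketch below runs the same two-stage idea on scikit-learn's wine data: SVM-RFE ranks and selects features with a linear SVM, and C and γ of a multiclass RBF SVM are then tuned, here with a plain grid search standing in for the Taguchi orthogonal array.

        from sklearn.datasets import load_wine
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import GridSearchCV
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = load_wine(return_X_y=True)                  # stand-in multiclass data: 13 features, 3 classes

        # SVM-RFE step: rank features with a linear SVM and keep the top 8
        scaled = StandardScaler().fit_transform(X)
        selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=8).fit(scaled, y)
        X_sel = X[:, selector.support_]

        # parameter optimization step: tune C and gamma of the multiclass RBF SVM
        grid = GridSearchCV(
            make_pipeline(StandardScaler(), SVC(kernel="rbf")),
            param_grid={"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.001, 0.01, 0.1, 1]},
            cv=5,
        )
        grid.fit(X_sel, y)
        print("best parameters:", grid.best_params_)
        print("cross-validated accuracy:", round(grid.best_score_, 3))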

  14. Comparison Between Two Methods for Estimating the Vertical Scale of Fluctuation for Modeling Random Geotechnical Problems

    NASA Astrophysics Data System (ADS)

    Pieczyńska-Kozłowska, Joanna M.

    2015-12-01

    The design process in geotechnical engineering requires the most accurate mapping of soil possible. The difficulty lies in the spatial variability of soil parameters, which has been investigated by many researchers for many years. This study analyses the soil-modeling problem by suggesting two effective methods of acquiring information for modeling the variability observed in cone penetration test (CPT) data. The first method has been used in geotechnical engineering, but the second has not been associated with geotechnics so far. Both methods are applied to a case study in which the parameters of variability are estimated. Knowledge of the variability of the parameters allows, in the long term, more effective estimation of, for example, the probability of bearing-capacity failure.
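
    One common way to quantify the vertical variability the record discusses is the scale of fluctuation estimated from an autocorrelation model fitted to a detrended CPT profile. The sketch below does this on a synthetic profile with an exponential (Markov) correlation structure; the record's two specific estimation methods are not spelled out in the abstract and are not reproduced.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(5)
        dz = 0.02                                             # CPT sampling interval [m]
        z = np.arange(0.0, 20.0, dz)
        theta_true = 0.5                                      # vertical scale of fluctuation [m]

        # synthesize a cone-resistance profile: linear trend plus an exponentially correlated fluctuation
        a = np.exp(-2.0 * dz / theta_true)                    # lag-one correlation of the Markov model
        e = np.empty(z.size)
        e[0] = rng.standard_normal()
        for i in range(1, z.size):
            e[i] = a * e[i - 1] + np.sqrt(1.0 - a**2) * rng.standard_normal()
        qc = 2.0 + 0.1 * z + 0.4 * e

        resid = qc - np.polyval(np.polyfit(z, qc, 1), z)      # detrend before estimating the correlation
        lags = np.arange(1, 150)
        acf = np.array([np.corrcoef(resid[:-k], resid[k:])[0, 1] for k in lags])
        theta_hat, _ = curve_fit(lambda tau, th: np.exp(-2.0 * tau / th), lags * dz, acf, p0=[0.3])
        print(f"estimated vertical scale of fluctuation: {theta_hat[0]:.2f} m (true {theta_true} m)")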

  15. Modeling and simulation of different and representative engineering problems using Network Simulation Method

    PubMed Central

    2018-01-01

    Mathematical models simulating different representative engineering problems, namely atomic dry friction, moving front problems, and elastic and solid mechanics, are presented in the form of sets of coupled or uncoupled nonlinear differential equations. For the different parameter values that influence the solution, each problem is solved numerically by the network method, which provides all the variables of the problem. Although the models are extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the models. PMID:29518121

  16. Modeling and simulation of different and representative engineering problems using Network Simulation Method.

    PubMed

    Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F

    2018-01-01

    Mathematical models simulating different representative engineering problems, namely atomic dry friction, moving front problems, and elastic and solid mechanics, are presented in the form of sets of coupled or uncoupled nonlinear differential equations. For the different parameter values that influence the solution, each problem is solved numerically by the network method, which provides all the variables of the problem. Although the models are extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the models.

  17. Stability of libration points in the restricted four-body problem with variable mass

    NASA Astrophysics Data System (ADS)

    Mittal, Amit; Aggarwal, Rajiv; Suraj, Md. Sanam; Bisht, Virender Singh

    2016-10-01

    We have investigated the stability of the Lagrangian solutions for the restricted four-body problem with variable mass. It has been assumed that the three primaries with masses m1, m2 and m3 form an equilateral triangle, wherein m2=m3. According to Jeans' law (Astronomy and Cosmogony, Cambridge University Press, Cambridge, 1928), the infinitesimal body varies its mass m with time. The space-time transformations of Meshcherskii (Studies on the Mechanics of Bodies of Variable Mass, GITTL, Moscow, 1949) are used by taking the values of the parameters q=1/2, k=0, n=1. The equations of motion of the infinitesimal body with variable mass have been determined. The equations of motion of the current problem differ from the ones of the restricted four-body problem with constant mass. There exist eight libration points, out of which two are collinear with the primary m1 and the rest are non-collinear for fixed values of the parameters γ (the mass ratio m(t)/m(t0), 0<γ≤1), α (the proportionality constant in Jeans' law (Astronomy and Cosmogony, Cambridge University Press, Cambridge, 1928), 0≤α≤2.2) and μ=0.019 (the mass parameter). All the libration points are found to be unstable. The zero velocity surfaces (ZVS) are also drawn and regions of motion are discussed.

  18. [Application of CWT to extract characteristic monitoring parameters during spine surgery].

    PubMed

    Chen, Penghui; Wu, Baoming; Hu, Yong

    2005-10-01

    It is necessary to monitor intraoperative spinal function in order to prevent spinal neurological deficits during spine surgery. This study aims to extract characteristic electrophysiological monitoring parameters during the surgical treatment of scoliosis. It also addresses the problem that the monitoring parameters in the time domain are highly variable and sensitive to noise. By using the continuous wavelet transform to analyze the intraoperative cortical somatosensory evoked potential (CSEP), three new characteristic monitoring parameters in the time-frequency domain (TFD) are extracted. The results indicate that the variability of the CSEP characteristic parameters in the TFD is lower than the variability of those in the time domain. Therefore, the TFD characteristic monitoring parameters are more stable and reliable than the latency and amplitude parameters in the time domain. The application of TFD monitoring parameters during spine surgery may help avoid spinal injury effectively.
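
    A minimal sketch of extracting a time-frequency feature from an evoked-potential-like waveform with the continuous wavelet transform. The signal is synthetic, the Morlet wavelet and the PyWavelets package are assumptions, and the three specific TFD parameters proposed in the record are not defined in the abstract, so only the dominant time-frequency peak is reported.

        import numpy as np
        import pywt                                           # PyWavelets, an assumed dependency

        fs = 2000.0                                           # sampling rate [Hz]
        t = np.arange(0.0, 0.1, 1.0 / fs)                     # 100 ms post-stimulus epoch
        rng = np.random.default_rng(6)
        # toy CSEP-like waveform: a damped 150 Hz oscillation buried in noise
        sep = np.exp(-t / 0.02) * np.sin(2.0 * np.pi * 150.0 * t) + 0.2 * rng.standard_normal(t.size)

        scales = np.arange(1, 64)
        coefs, freqs = pywt.cwt(sep, scales, "morl", sampling_period=1.0 / fs)

        power = np.abs(coefs) ** 2
        i, j = np.unravel_index(power.argmax(), power.shape)  # dominant time-frequency feature
        print(f"peak power near {freqs[i]:.0f} Hz at {t[j] * 1000:.1f} ms")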

  19. Optimization of a Small Scale Linear Reluctance Accelerator

    NASA Astrophysics Data System (ADS)

    Barrera, Thor; Beard, Robby

    2011-11-01

    Reluctance accelerators are extremely promising future methods of transportation. Several problems still plague these devices, most prominently low efficiency. The variables involved in overcoming the efficiency problems are many, and it is difficult to correlate how they affect the accelerator. The study examined several variables that present potential challenges in optimizing the efficiency of reluctance accelerators, including coil and projectile design, power supplies, switching, and the elusive gradient inductance problem. Extensive research in these areas, ranging from computational and theoretical to experimental, has been performed. The findings show that these parameters share significant similarity with transformer design elements; the currently optimized parameters are therefore suggested as a baseline for further research and design. A demonstration of these findings will be offered at the time of presentation.

  20. Bayesian model comparison and parameter inference in systems biology using nested sampling.

    PubMed

    Pullen, Nick; Morris, Richard J

    2014-01-01

    Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling's nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multi-dimensional integral to a 1D integration over likelihood space. This approach focuses on the computation of the marginal likelihood or evidence. The ratio of evidences of different models leads to the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system's behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design.
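
    A small sketch of nested sampling for evidence estimation on a toy exponential-decay model, using the dynesty package (an assumed dependency; the record uses Skilling's nested sampling but does not prescribe an implementation). Running it for two competing models and differencing the log-evidences gives the log Bayes factor.

        import numpy as np
        from dynesty import NestedSampler                     # assumed dependency

        rng = np.random.default_rng(7)
        t = np.linspace(0.0, 10.0, 50)
        data = 2.0 * np.exp(-0.5 * t) + 0.1 * rng.standard_normal(t.size)   # synthetic decay measurements

        def log_like(theta):
            a, k = theta
            resid = data - a * np.exp(-k * t)
            return -0.5 * np.sum((resid / 0.1) ** 2)

        def prior_transform(u):
            # map the unit hypercube to uniform priors a ~ U(0, 5), k ~ U(0, 2)
            return np.array([5.0 * u[0], 2.0 * u[1]])

        sampler = NestedSampler(log_like, prior_transform, ndim=2, nlive=200)
        sampler.run_nested(print_progress=False)
        print("log evidence:", sampler.results.logz[-1])      # differences of log Z give Bayes factors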

  1. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
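
    The inflation and bias described in this record can be reproduced in a few lines: when the rainfall input is measured with error, the least-squares slope relating runoff to rainfall is attenuated toward zero. The numbers below are synthetic and purely illustrative.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 5000
        precip_true = rng.gamma(shape=2.0, scale=10.0, size=n)      # "true" storm precipitation
        runoff = 5.0 + 0.6 * precip_true + rng.normal(0.0, 3.0, n)  # true linear rainfall-runoff relation
        precip_obs = precip_true + rng.normal(0.0, 8.0, n)          # rain-gauge measurement error

        slope_clean = np.polyfit(precip_true, runoff, 1)[0]
        slope_noisy = np.polyfit(precip_obs, runoff, 1)[0]
        print(f"slope with error-free input: {slope_clean:.3f}")
        print(f"slope with erroneous input:  {slope_noisy:.3f} (biased toward zero)")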

  2. A Framework for Multifaceted Evaluation of Student Models

    ERIC Educational Resources Information Center

    Huang, Yun; González-Brenes, José P.; Kumar, Rohit; Brusilovsky, Peter

    2015-01-01

    Latent variable models, such as the popular Knowledge Tracing method, are often used to enable adaptive tutoring systems to personalize education. However, finding optimal model parameters is usually a difficult non-convex optimization problem when considering latent variable models. Prior work has reported that latent variable models obtained…

  3. Variable structure control of spacecraft reorientation maneuvers

    NASA Technical Reports Server (NTRS)

    Sira-Ramirez, H.; Dwyer, T. A. W., III

    1986-01-01

    A Variable Structure Control (VSC) approach is presented for multi-axial spacecraft reorientation maneuvers. A nonlinear sliding surface is proposed which results in an asymptotically stable, ideal linear sliding motion of the Cayley-Rodrigues attitude parameters. By imposing a desired equivalent dynamics on the attitude parameters, the approach is devoid of optimal control considerations. The single-axis case provides a design scheme for the multiple-axis design problem. Illustrative examples are presented.

  4. Parameter identification using a creeping-random-search algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1971-01-01

    A creeping-random-search algorithm is applied to different types of problems in the field of parameter identification. The studies are intended to demonstrate that a random-search algorithm can be applied successfully to these various problems, which often cannot be handled by conventional deterministic methods, and, also, to introduce methods that speed convergence to an extremal of the problem under investigation. Six two-parameter identification problems with analytic solutions are solved, and two application problems are discussed in some detail. Results of the study show that a modified version of the basic creeping-random-search algorithm chosen does speed convergence in comparison with the unmodified version. The results also show that the algorithm can successfully solve problems that contain limits on state or control variables, inequality constraints (both independent and dependent, and linear and nonlinear), or stochastic models.
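
    A compact, self-contained version of a creeping-random-search identifier for a two-parameter problem: the current estimate is perturbed by a small random step, improvements are accepted, and the step size adapts with success or failure. The adaptation rule here is one simple choice, not necessarily the modification studied in the record.

        import numpy as np

        rng = np.random.default_rng(9)
        t = np.linspace(0.0, 5.0, 100)
        y_obs = 3.0 * np.exp(-0.8 * t) + 0.05 * rng.standard_normal(t.size)   # data from a two-parameter model

        def cost(p):
            a, k = p
            return np.sum((y_obs - a * np.exp(-k * t)) ** 2)

        def creeping_random_search(cost, p0, step=0.5, iters=3000):
            p = np.asarray(p0, dtype=float)
            best = cost(p)
            for _ in range(iters):
                trial = p + step * rng.standard_normal(p.size)   # creep: random perturbation of current point
                c = cost(trial)
                if c < best:
                    p, best = trial, c
                    step *= 1.2                                  # expand the neighborhood after a success
                else:
                    step = max(step * 0.97, 1e-4)                # shrink it gently after a failure
            return p, best

        p_hat, _ = creeping_random_search(cost, p0=[1.0, 0.1])
        print("identified parameters:", np.round(p_hat, 3))      # should approach a = 3.0, k = 0.8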

  5. Choice of Variables and Preconditioning for Time Dependent Problems

    NASA Technical Reports Server (NTRS)

    Turkel, Eli; Vatsa, Verr N.

    2003-01-01

    We consider the use of low speed preconditioning for time dependent problems. These are solved using a dual time step approach. We consider the effect of this dual time step on the parameter of the low speed preconditioning. In addition, we compare the use of two sets of variables, conservation and primitive variables, to solve the system. We show the effect of these choices on both the convergence to a steady state and the accuracy of the numerical solutions for low Mach number steady state and time dependent flows.

  6. Simultaneous escaping of explicit and hidden free energy barriers: application of the orthogonal space random walk strategy in generalized ensemble based conformational sampling.

    PubMed

    Zheng, Lianqing; Chen, Mengen; Yang, Wei

    2009-06-21

    To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures, etc. As usually observed, in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction and these residual free energy barriers could greatly degrade the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimensional subset of the target system; then the "Hamiltonian lagging" problem, which reveals the fact that necessary structural relaxation falls behind the move of the collective variable, may be likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.

  7. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115

  8. Evaluation of Kurtosis into the product of two normally distributed variables

    NASA Astrophysics Data System (ADS)

    Oliveira, Amílcar; Oliveira, Teresa; Seijas-Macías, Antonio

    2016-06-01

    Kurtosis (κ) is a measure of the "peakedness" of the distribution of a real-valued random variable. We study the evolution of the kurtosis of the product of two normally distributed variables. The product of two normal variables is a very common problem in areas of study such as physics, economics and psychology. Normal variables have a constant value of kurtosis (κ = 3), independently of the values of the two parameters, mean and variance. In fact, the excess kurtosis is defined as κ - 3, and the kurtosis of the normal distribution is zero in excess terms. The product of two normally distributed variables is a function of the parameters of the two variables and the correlation between them, and the range of the (excess) kurtosis is [0, 6] for independent variables and [0, 12] when correlation between them is allowed.
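
    The limiting values quoted in the record can be checked by simulation: for zero-mean standard normal factors, the excess kurtosis of the product is about 6 when the factors are independent and about 12 when they are perfectly correlated (the product is then a chi-squared variable with one degree of freedom).

        import numpy as np
        from scipy.stats import kurtosis

        rng = np.random.default_rng(10)
        n = 1_000_000

        for rho in (0.0, 0.5, 1.0):
            cov = [[1.0, rho], [rho, 1.0]]                    # standard normal pair with correlation rho
            x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
            k_excess = kurtosis(x * y)                        # Fisher (excess) kurtosis; 0 for a normal variable
            print(f"rho = {rho:.1f}   excess kurtosis of the product = {k_excess:.2f}")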

  9. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    PubMed

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective based technique to estimate parameter or models ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that gave both the mean of the training data for conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.

  10. Efficiency versus bias: the role of distributional parameters in count contingent behaviour models

    Treesearch

    Joseph Englin; Arwin Pang; Thomas Holmes

    2011-01-01

    One of the challenges facing many applications of non-market valuations is to find data with enough variation in the variable(s) of interest to estimate econometrically their effects on the quantity demanded. A solution to this problem was the introduction of stated preference surveys. These surveys can introduce variation into variables where there is no natural...

  11. Inverse Ising problem in continuous time: A latent variable approach

    NASA Astrophysics Data System (ADS)

    Donner, Christian; Opper, Manfred

    2017-12-01

    We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.

  12. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population.

    PubMed

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.

  13. A variable-gain output feedback control design methodology

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Moerder, Daniel D.; Broussard, John R.; Taylor, Deborah B.

    1989-01-01

    A digital control system design technique is developed in which the control system gain matrix varies with the plant operating point parameters. The design technique is obtained by formulating the problem as an optimal stochastic output feedback control law with variable gains. This approach provides a control theory framework within which the operating range of a control law can be significantly extended. Furthermore, the approach avoids the major shortcomings of the conventional gain-scheduling techniques. The optimal variable gain output feedback control problem is solved by embedding the Multi-Configuration Control (MCC) problem, previously solved at ICS. An algorithm to compute the optimal variable gain output feedback control gain matrices is developed. The algorithm is a modified version of the MCC algorithm improved so as to handle the large dimensionality which arises particularly in variable-gain control problems. The design methodology developed is applied to a reconfigurable aircraft control problem. A variable-gain output feedback control problem was formulated to design a flight control law for an AFTI F-16 aircraft which can automatically reconfigure its control strategy to accommodate failures in the horizontal tail control surface. Simulations of the closed-loop reconfigurable system show that the approach produces a control design which can accommodate such failures with relative ease. The technique can be applied to many other problems including sensor failure accommodation, mode switching control laws and super agility.

  14. Direct Multiple Shooting Optimization with Variable Problem Parameters

    NASA Technical Reports Server (NTRS)

    Whitley, Ryan J.; Ocampo, Cesar A.

    2009-01-01

    Taking advantage of a novel approach to the design of the orbital transfer optimization problem and advanced non-linear programming algorithms, several optimal transfer trajectories are found for problems with and without known analytic solutions. This method treats the fixed known gravitational constants as optimization variables in order to reduce the need for an advanced initial guess. Complex periodic orbits are targeted with very simple guesses and the ability to find optimal transfers in spite of these bad guesses is successfully demonstrated. Impulsive transfers are considered for orbits in both the 2-body frame as well as the circular restricted three-body problem (CRTBP). The results with this new approach demonstrate the potential for increasing robustness for all types of orbit transfer problems.
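
    The record's trajectory problems are not reproducible from the abstract alone, but the direct multiple shooting structure it builds on can be shown on a simple two-point boundary value problem: the state at the start of each segment is an unknown, and the solver drives the segment-to-segment continuity defects and boundary conditions to zero. In the spirit of the record, a "known" constant of the problem could also be appended to the unknown vector as an extra optimization variable.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        # model two-point BVP: y'' = -4 y with y(0) = 0 and y(1) = sin(2); exact solution y = sin(2x)
        def rhs(x, s):
            y, v = s
            return [v, -4.0 * y]

        m = 4                                                  # number of shooting segments
        nodes = np.linspace(0.0, 1.0, m + 1)

        def defects(z):
            starts = z.reshape(m, 2)                           # unknown (y, y') at the start of every segment
            ends = [solve_ivp(rhs, (nodes[i], nodes[i + 1]), starts[i], rtol=1e-9, atol=1e-9).y[:, -1]
                    for i in range(m)]
            res = [ends[i] - starts[i + 1] for i in range(m - 1)]          # continuity defects at interior nodes
            res.append([starts[0, 0] - 0.0, ends[-1][0] - np.sin(2.0)])    # boundary conditions at x = 0 and x = 1
            return np.concatenate(res)

        z0 = np.zeros(2 * m)
        z0[1::2] = 1.0                                         # crude initial guess: y = 0, y' = 1 everywhere
        sol = least_squares(defects, z0)
        print("recovered y'(0):", round(sol.x[1], 4), "(exact value 2.0)")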

  15. Linguistic Parameters in Performance Models.

    ERIC Educational Resources Information Center

    Mansell, Philip

    This paper deals with problems concerning the nature of the input to a phonetic processor. Several assumptions provide the basis for consideration of the problem. There is a phonological level of processing which reflects the sound structure of the language; the rules associated with it are not affected by variables associated either with the…

  16. Hamilton's Equations with Euler Parameters for Rigid Body Dynamics Modeling. Chapter 3

    NASA Technical Reports Server (NTRS)

    Shivarama, Ravishankar; Fahrenthold, Eric P.

    2004-01-01

    A combination of Euler parameter kinematics and Hamiltonian mechanics provides a rigid body dynamics model well suited for use in strongly nonlinear problems involving arbitrarily large rotations. The model is unconstrained, free of singularities, includes a general potential energy function and a minimum set of momentum variables, and takes an explicit state space form convenient for numerical implementation. The general formulation may be specialized to address particular applications, as illustrated in several three dimensional example problems.
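
    A minimal sketch of the Euler-parameter (unit quaternion) kinematics the record combines with Hamiltonian mechanics: the attitude is propagated through an unconstrained, singularity-free first-order system, and the unit norm is preserved automatically because the coefficient matrix is skew-symmetric. The rotational dynamics and potential energy terms of the full model are omitted.

        import numpy as np
        from scipy.integrate import solve_ivp

        def quat_rates(t, q, omega):
            # Euler-parameter kinematics q_dot = 0.5 * Omega(omega) * q with q = [q0, q1, q2, q3] (scalar first)
            wx, wy, wz = omega
            Omega = np.array([[0.0, -wx, -wy, -wz],
                              [ wx, 0.0,  wz, -wy],
                              [ wy, -wz, 0.0,  wx],
                              [ wz,  wy, -wx, 0.0]])
            return 0.5 * Omega @ q

        omega = np.array([0.1, 0.2, -0.05])                    # constant body angular velocity [rad/s]
        q0 = np.array([1.0, 0.0, 0.0, 0.0])                    # identity attitude
        sol = solve_ivp(quat_rates, (0.0, 60.0), q0, args=(omega,), rtol=1e-10, atol=1e-12)

        q_final = sol.y[:, -1]
        print("final Euler parameters:", np.round(q_final, 6))
        print("norm:", np.linalg.norm(q_final))                # stays at 1 without any explicit constraint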

  17. Pose-free structure from motion using depth from motion constraints.

    PubMed

    Zhang, Ji; Boutin, Mireille; Aliaga, Daniel G

    2011-10-01

    Structure from motion (SFM) is the problem of recovering the geometry of a scene from a stream of images taken from unknown viewpoints. One popular approach to estimate the geometry of a scene is to track scene features on several images and reconstruct their position in 3-D. During this process, the unknown camera pose must also be recovered. Unfortunately, recovering the pose can be an ill-conditioned problem which, in turn, can make the SFM problem difficult to solve accurately. We propose an alternative formulation of the SFM problem with fixed internal camera parameters known a priori. In this formulation, obtained by algebraic variable elimination, the external camera pose parameters do not appear. As a result, the problem is better conditioned in addition to involving much fewer variables. Variable elimination is done in three steps. First, we take the standard SFM equations in projective coordinates and eliminate the camera orientations from the equations. We then further eliminate the camera center positions. Finally, we also eliminate all 3-D point positions coordinates, except for their depths with respect to the camera center, thus obtaining a set of simple polynomial equations of degree two and three. We show that, when there are merely a few points and pictures, these "depth-only equations" can be solved in a global fashion using homotopy methods. We also show that, in general, these same equations can be used to formulate a pose-free cost function to refine SFM solutions in a way that is more accurate than by minimizing the total reprojection error, as done when using the bundle adjustment method. The generalization of our approach to the case of varying internal camera parameters is briefly discussed. © 2011 IEEE

  18. Investigating a hybrid perturbation-Galerkin technique using computer algebra

    NASA Technical Reports Server (NTRS)

    Andersen, Carl M.; Geer, James F.

    1988-01-01

    A two-step hybrid perturbation-Galerkin method is presented for the solution of a variety of differential equation type problems which involve a scalar parameter. The resulting (approximate) solution has the form of a sum where each term consists of the product of two functions. The first function is a function of the independent field variable(s) x, and the second is a function of the parameter lambda. In step one the functions of x are determined by forming a perturbation expansion in lambda. In step two the functions of lambda are determined through the use of the classical Bubnov-Galerkin method. The resulting hybrid method has the potential of overcoming some of the drawbacks of the perturbation and Bubnov-Galerkin methods applied separately, while combining some of the good features of each. In particular, the results can be useful well beyond the radius of convergence associated with the perturbation expansion. The hybrid method is applied with the aid of computer algebra to a simple two-point boundary value problem where the radius of convergence is finite and to a quantum eigenvalue problem where the radius of convergence is zero. For both problems the hybrid method apparently converges for an infinite range of the parameter lambda. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the applicability of the hybrid method to broader problem areas is discussed.

  19. Nonlinear Inference in Partially Observed Physical Systems and Deep Neural Networks

    NASA Astrophysics Data System (ADS)

    Rozdeba, Paul J.

    The problem of model state and parameter estimation is a significant challenge in nonlinear systems. Due to practical considerations of experimental design, it is often the case that physical systems are partially observed, meaning that data are only available for a subset of the degrees of freedom required to fully model the observed system's behaviors and, ultimately, predict future observations. Estimation in this context is highly complicated by the presence of chaos, stochasticity, and measurement noise in dynamical systems. One of the aims of this dissertation is to analyze state and parameter estimation simultaneously as a regularized inverse problem, where the introduction of a model makes it possible to reverse the forward problem of partial, noisy observation, and as a statistical inference problem using data assimilation to transfer information from measurements to the model states and parameters. Ultimately these two formulations achieve the same goal. Similar aspects that appear in both are highlighted as a means for better understanding the structure of the nonlinear inference problem. An alternative approach to data assimilation that uses model reduction is then examined as a way to eliminate unresolved nonlinear gating variables from neuron models. In this formulation, only measured variables enter into the model, and the resulting errors are themselves modeled by nonlinear stochastic processes with memory. Finally, variational annealing, a data assimilation method previously applied to dynamical systems, is introduced as a potentially useful tool for understanding deep neural network training in machine learning by exploiting similarities between the two problems.

  20. Eigenvectors phase correction in inverse modal problem

    NASA Astrophysics Data System (ADS)

    Qiao, Guandong; Rahmatalla, Salam

    2017-12-01

    The solution of the inverse modal problem for the spatial parameters of mechanical and structural systems is heavily dependent on the quality of the modal parameters obtained from experiments. Since experimental and environmental noise will always exist during modal testing, the resulting modal parameters are expected to be corrupted with different levels of noise. A novel methodology is presented in this work to mitigate the errors in the eigenvectors when solving the inverse modal problem for the spatial parameters. The phases of the eigenvector components are utilized as design variables within an optimization problem that minimizes the difference between the calculated and experimental transfer functions. The equation of motion in terms of the modal and spatial parameters is used as a constraint in the optimization problem. Constraints that preserve the positive and semi-positive definiteness and the inter-connectivity of the spatial matrices are implemented using semi-definite programming. Numerical examples utilizing eigenvectors corrupted with additive Gaussian white noise of 1%, 5%, and 10% are used to demonstrate the efficacy of the proposed method. The results show that the proposed method is superior when compared with a known method in the literature.

  1. Decomposition and model selection for large contingency tables.

    PubMed

    Dahinden, Corinne; Kalisch, Markus; Bühlmann, Peter

    2010-04-01

    Large contingency tables summarizing categorical variables arise in many areas. One example is in biology, where large numbers of biomarkers are cross-tabulated according to their discrete expression level. Interactions of the variables are of great interest and are generally studied with log-linear models. The structure of a log-linear model can be visually represented by a graph from which the conditional independence structure can then be easily read off. However, since the number of parameters in a saturated model grows exponentially in the number of variables, this generally comes with a heavy computational burden. Even if we restrict ourselves to models of lower-order interactions or other sparse structures, we are faced with the problem of a large number of cells which play the role of sample size. This is in sharp contrast to high-dimensional regression or classification procedures because, in addition to a high-dimensional parameter, we also have to deal with the analogue of a huge sample size. Furthermore, high-dimensional tables naturally feature a large number of sampling zeros which often leads to the nonexistence of the maximum likelihood estimate. We therefore present a decomposition approach, where we first divide the problem into several lower-dimensional problems and then combine these to form a global solution. Our methodology is computationally feasible for log-linear interaction models with many categorical variables each or some of them having many levels. We demonstrate the proposed method on simulated data and apply it to a bio-medical problem in cancer research.

  2. GVIPS Models and Software

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Gendy, Atef; Saleeb, Atef F.; Mark, John; Wilt, Thomas E.

    2007-01-01

    Two reports discuss, respectively, (1) the generalized viscoplasticity with potential structure (GVIPS) class of mathematical models and (2) the Constitutive Material Parameter Estimator (COMPARE) computer program. GVIPS models are constructed within a thermodynamics- and potential-based theoretical framework, wherein one uses internal state variables and derives constitutive equations for both the reversible (elastic) and the irreversible (viscoplastic) behaviors of materials. Because of the underlying potential structure, GVIPS models not only capture a variety of material behaviors but also are very computationally efficient. COMPARE comprises (1) an analysis core and (2) a C++-language subprogram that implements a Windows-based graphical user interface (GUI) for controlling the core. The GUI relieves the user of the sometimes tedious task of preparing data for the analysis core, freeing the user to concentrate on the task of fitting experimental data and ultimately obtaining a set of material parameters. The analysis core consists of three modules: one for GVIPS material models, an analysis module containing a specialized finite-element solution algorithm, and an optimization module. COMPARE solves the problem of finding GVIPS material parameters in the manner of a design-optimization problem in which the parameters are the design variables.

  3. Linear theory for filtering nonlinear multiscale systems with model error

    PubMed Central

    Berry, Tyrus; Harlim, John

    2014-01-01

    In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure, simultaneously produce accurate filtering and equilibrium statistical prediction. In contrast, an offline estimation technique based on a linear regression, which fits the parameters to a training dataset without using the filter, yields filter estimates which are worse than the observations or even divergent when the slow variables are not fully observed. This finding does not imply that all offline methods are inherently inferior to the online method for nonlinear estimation problems, it only suggests that an ideal estimation technique should estimate all parameters simultaneously whether it is online or offline. PMID:25002829

  4. Variable mass diffusion effects on free convection flow past an impulsively started infinite vertical plate

    NASA Astrophysics Data System (ADS)

    Rushi Kumar, B.; Jayakar, R.; Vijay Kumar, A. G.

    2017-11-01

    An exact analysis is performed of the free convection flow of a viscous, incompressible, chemically reacting fluid past an infinite vertical plate, where the flow is due to the impulsive motion of the plate with Newtonian heating in the presence of thermal radiation and variable mass diffusion. The resulting governing equations were tackled by the Laplace transform technique. Finally, the effects of pertinent flow parameters such as the radiation parameter, chemical reaction parameter, buoyancy ratio parameter, thermal Grashof number, Schmidt number, Prandtl number and time on the velocity, temperature, concentration and skin friction for both aiding and opposing flows were examined in detail for Pr = 0.71 (conducting air) and Pr = 7.0 (water).

  5. Optimal Decision Making in a Class of Uncertain Systems Based on Uncertain Variables

    NASA Astrophysics Data System (ADS)

    Bubnicki, Z.

    2006-06-01

    The paper is concerned with a class of uncertain systems described by relational knowledge representations with unknown parameters which are assumed to be values of uncertain variables characterized by a user in the form of certainty distributions. The first part presents the basic optimization problem of finding the decision that maximizes the certainty index that a user-specified requirement is satisfied. The main part is devoted to the description of the optimization problem with a given certainty threshold. It is shown how the approach presented in the paper may be applied to some problems for anticipatory systems.

  6. Multiobjective optimization in structural design with uncertain parameters and stochastic processes

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    The application of multiobjective optimization techniques to structural design problems involving uncertain parameters and random processes is studied. The design of a cantilever beam with a tip mass subjected to a stochastic base excitation is considered for illustration. Several of the problem parameters are assumed to be random variables, and the structural mass, fatigue damage, and negative of the natural frequency of vibration are considered for minimization. The solution of this three-criteria design problem is found by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It is observed that the game theory approach is superior in finding a better optimum solution, assuming a proper balance of the various objective functions. The procedures used in the present investigation are expected to be useful in the design of general dynamic systems involving uncertain parameters, stochastic processes, and multiple objectives.

  7. Identification and stochastic control of helicopter dynamic modes

    NASA Technical Reports Server (NTRS)

    Molusis, J. A.; Bar-Shalom, Y.

    1983-01-01

    A general treatment of parameter identification and stochastic control for use on helicopter dynamic systems is presented. Rotor dynamic models, including specific applications to rotor blade flapping and the helicopter ground resonance problem, are emphasized. Dynamic systems which are governed by periodic coefficients as well as constant coefficient models are addressed. The dynamic systems are modeled by linear state variable equations which are used in the identification and stochastic control formulation. Both the pure identification problem and the stochastic control problem, which includes combined identification and control for dynamic systems, are addressed. The stochastic control problem includes the effect of parameter uncertainty on the solution and the concept of learning, and how this is affected by the control's dual effect. The identification formulation requires algorithms suitable for online use, and thus recursive identification algorithms are considered. The applications presented use the recursive extended Kalman filter for parameter identification, which has excellent convergence for systems without process noise.

  8. A fully Sinc-Galerkin method for Euler-Bernoulli beam models

    NASA Technical Reports Server (NTRS)

    Smith, R. C.; Bowers, K. L.; Lund, J.

    1990-01-01

    A fully Sinc-Galerkin method in both space and time is presented for fourth-order time-dependent partial differential equations with fixed and cantilever boundary conditions. The Sinc discretizations for the second-order temporal problem and the fourth-order spatial problems are presented. Alternate formulations for variable parameter fourth-order problems are given which prove to be especially useful when applying the forward techniques to parameter recovery problems. The discrete system which corresponds to the time-dependent partial differential equations of interest is then formulated. Computational issues are discussed and a robust and efficient algorithm for solving the resulting matrix system is outlined. Numerical results which highlight the method are given for problems with both analytic and singular solutions as well as fixed and cantilever boundary conditions.

  9. Transoptr — A second order beam transport design code with optimization and constraints

    NASA Astrophysics Data System (ADS)

    Heighway, E. A.; Hutcheon, R. M.

    1981-08-01

    This code was written initially to design an achromatic and isochronous reflecting magnet and has been extended to compete in capability (for constrained problems) with TRANSPORT. Its advantage is its flexibility in that the user writes a routine to describe his transport system. The routine allows the definition of general variables from which the system parameters can be derived. Further, the user can write any constraints he requires as algebraic equations relating the parameters. All variables may be used in either a first or second order optimization.
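
    The flexibility described above, a user routine that derives the system parameters from a few general variables together with user-written algebraic constraints, can be sketched with a standard optimizer. The quantities below (drift lengths, a quadrupole strength, the figure of merit) are hypothetical stand-ins, not TRANSOPTR input.

      # Hedged sketch of the idea (not TRANSOPTR itself): a user routine maps a
      # few general variables to system parameters, algebraic constraints relate
      # the parameters, and a standard optimizer adjusts the variables.
      import numpy as np
      from scipy.optimize import minimize

      def system_parameters(v):
          """Hypothetical user routine: derive two drift lengths and a quad
          strength from the general variables v = (v1, v2)."""
          d1 = 1.0 + v[0]
          d2 = 2.0 - 0.5 * v[1]
          k = 0.5 * v[1]
          return d1, d2, k

      def objective(v):
          d1, d2, k = system_parameters(v)
          # hypothetical figure of merit, e.g. a beam-size surrogate
          return (d1 * k - 1.0) ** 2 + 0.1 * d2 ** 2

      constraints = [
          # user-written algebraic constraint relating the parameters: d1 + d2 = 3
          {"type": "eq", "fun": lambda v: sum(system_parameters(v)[:2]) - 3.0},
          # keep the quad strength below a bound
          {"type": "ineq", "fun": lambda v: 1.0 - system_parameters(v)[2]},
      ]

      result = minimize(objective, x0=[0.0, 1.0], method="SLSQP", constraints=constraints)
      print(result.x, result.fun)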

  10. Combined Parameter and State Estimation Problem in a Complex Domain: RF Hyperthermia Treatment Using Nanoparticles

    NASA Astrophysics Data System (ADS)

    Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.

    2016-09-01

    Particle filter methods have been widely used to solve inverse problems with sequential Bayesian inference in dynamic models, simultaneously estimating sequential state variables and fixed model parameters. These methods approximate the sequence of probability distributions of interest using a large set of random samples, in the presence of uncertainties in the model, the measurements, and the parameters. In this paper the main focus is the combined parameter and state estimation problem in radiofrequency hyperthermia with nanoparticles in a complex domain. This domain contains different tissues, such as muscle, pancreas, lungs and small intestine, and a tumor which is loaded with iron oxide nanoparticles. The results indicate that excellent agreement between the estimated and exact values is obtained.
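
    As a rough illustration of the sequential estimation idea (a scalar toy model, not the bioheat problem of the paper), a bootstrap particle filter can carry both a state and a fixed parameter in each particle:

      # Illustrative sketch only: each particle holds a state x and a fixed
      # model parameter theta, so both are estimated sequentially from data.
      import numpy as np

      rng = np.random.default_rng(1)
      theta_true, q, r, n, n_part = 0.7, 0.05, 0.1, 300, 2000

      # simulate data from x_{k+1} = theta * x_k + w_k, y_k = x_k + v_k
      x_true = np.zeros(n)
      for k in range(1, n):
          x_true[k] = theta_true * x_true[k - 1] + np.sqrt(q) * rng.standard_normal()
      y = x_true + np.sqrt(r) * rng.standard_normal(n)

      x = rng.standard_normal(n_part)               # state particles
      theta = rng.uniform(0.0, 1.0, n_part)         # parameter particles

      for k in range(n):
          # propagate: state evolves; parameter gets a tiny jitter to avoid collapse
          x = theta * x + np.sqrt(q) * rng.standard_normal(n_part)
          theta = theta + 0.005 * rng.standard_normal(n_part)
          # weight by the observation likelihood and resample
          w = np.exp(-0.5 * (y[k] - x) ** 2 / r)
          w /= w.sum()
          idx = rng.choice(n_part, size=n_part, p=w)
          x, theta = x[idx], theta[idx]

      print("posterior mean of theta:", theta.mean(), " true:", theta_true)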

  11. Optimal radiotherapy dose schedules under parametric uncertainty

    NASA Astrophysics Data System (ADS)

    Badri, Hamidreza; Watanabe, Yoichi; Leder, Kevin

    2016-01-01

    We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or in the sparing factor of the organs-at-risk (OAR) is not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is required to avoid extremal solutions, i.e. a single large dose or very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.
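
    The probabilistic constraint idea can be illustrated by a Monte Carlo check of an organ-at-risk constraint in the linear-quadratic model under a random sparing factor. All numbers below are hypothetical, and the check is by sampling rather than the analytical reformulation used in the paper.

      # Hedged illustration: probability that an OAR biologically effective dose
      # (BED) constraint holds when the sparing factor varies between patients.
      import numpy as np

      rng = np.random.default_rng(2)
      n_fractions, dose_per_fraction = 30, 2.0          # candidate schedule
      alpha_beta_oar = 3.0                               # Gy, assumed OAR alpha/beta
      bed_limit = 100.0                                  # Gy, assumed OAR tolerance

      # inter-patient variability in the OAR sparing factor (fraction of tumor dose)
      sparing = rng.normal(loc=0.7, scale=0.1, size=100_000)

      d_oar = sparing * dose_per_fraction
      bed_oar = n_fractions * d_oar * (1.0 + d_oar / alpha_beta_oar)
      prob_ok = np.mean(bed_oar <= bed_limit)
      print(f"P(OAR BED <= {bed_limit} Gy) = {prob_ok:.3f}")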

  12. Application of an automatic approach to calibrate the NEMURO nutrient-phytoplankton-zooplankton food web model in the Oyashio region

    NASA Astrophysics Data System (ADS)

    Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi

    2010-10-01

    The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrients, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables. PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D-NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend the use of PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.

  13. Effect of multiplicative noise on stationary stochastic process

    NASA Astrophysics Data System (ADS)

    Kargovsky, A. V.; Chikishev, A. Yu.; Chichigina, O. A.

    2018-03-01

    An open system that can be analyzed using the Langevin equation with multiplicative noise is considered. The stationary state of the system results from a balance of deterministic damping and random pumping simulated as noise with controlled periodicity. The dependence of statistical moments of the variable that characterizes the system on parameters of the problem is studied. A nontrivial decrease in the mean value of the main variable with an increase in noise stochasticity is revealed. Applications of the results in several physical, chemical, biological, and technical problems of natural and humanitarian sciences are discussed.
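
    Moments of a Langevin model with multiplicative noise can be estimated by direct Euler-Maruyama simulation. The sketch below uses generic white noise and a constant pumping term, not the periodically controlled pumping studied in the paper, so it only illustrates how the dependence of the moments on the parameters would be computed.

      # Euler-Maruyama for dx = (a - gamma*x) dt + sigma*x dW, with moments
      # estimated over an ensemble of paths for several noise strengths.
      import numpy as np

      rng = np.random.default_rng(3)
      a, gamma, dt, n_steps, n_paths = 1.0, 1.0, 1e-3, 20_000, 200

      for sigma in (0.2, 0.6, 1.0):
          x = np.ones(n_paths)
          for _ in range(n_steps):
              dw = np.sqrt(dt) * rng.standard_normal(n_paths)
              x += (a - gamma * x) * dt + sigma * x * dw
          print(f"sigma={sigma:.1f}  <x>={x.mean():.3f}  <x^2>={np.mean(x**2):.3f}")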

  14. Water quality parameter measurement using spectral signatures

    NASA Technical Reports Server (NTRS)

    White, P. E.

    1973-01-01

    Regression analysis is applied to the problem of measuring water quality parameters from remote sensing spectral signature data. The equations necessary to perform regression analysis are presented and methods of testing the strength and reliability of a regression are described. An efficient algorithm for selecting an optimal subset of the independent variables available for a regression is also presented.
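
    A minimal version of such a variable-selection step, not necessarily the report's algorithm, is an exhaustive search over subsets of candidate spectral bands scored by adjusted R-squared. The band radiances and the water quality parameter below are synthetic.

      # Exhaustive subset selection for an ordinary least-squares regression,
      # scored by adjusted R^2 (synthetic stand-in data).
      import itertools
      import numpy as np

      rng = np.random.default_rng(4)
      n, bands = 60, 6
      X = rng.standard_normal((n, bands))                  # synthetic band radiances
      y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.3 * rng.standard_normal(n)  # e.g. turbidity

      def adjusted_r2(X_sub, y):
          A = np.column_stack([np.ones(len(y)), X_sub])
          beta, *_ = np.linalg.lstsq(A, y, rcond=None)
          resid = y - A @ beta
          ss_res, ss_tot = resid @ resid, ((y - y.mean()) ** 2).sum()
          p = X_sub.shape[1]
          return 1.0 - (ss_res / (len(y) - p - 1)) / (ss_tot / (len(y) - 1))

      best = max(
          (subset for k in range(1, bands + 1)
                  for subset in itertools.combinations(range(bands), k)),
          key=lambda s: adjusted_r2(X[:, list(s)], y),
      )
      print("selected bands:", best,
            "adj R^2:", round(adjusted_r2(X[:, list(best)], y), 3))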

  15. The contribution of social capital and coping strategies to functioning and quality of life of patients with fibromyalgia.

    PubMed

    Boehm, Amnon; Eisenberg, Elon; Lampel, Shirly

    2011-01-01

    The study aimed to determine the degree to which social capital (a combination of social resources that can be beneficial to a person's physical health and well-being), personal coping strategies, and additional personal and disease-related factors contribute to the functioning and quality of life (QoL) of fibromyalgia (FM) patients. In the assessment of their functioning and QoL, 175 Israeli FM patients completed the Fibromyalgia Impact Questionnaire (FIQ) and the Short-Form Health Survey (SF-36) (dependent variables). In addition, they completed a modified Social Capital Questionnaire (which tests 3 subtypes of social capital: bonding, bridging, and linking), the COPE Multidimensional Coping Inventory (which measures the use of problem- vs. emotion-focused coping strategies), and a personal demographic questionnaire (independent variables). A multivariate regression analysis was used to assess the relative contribution of each independent variable to functioning and QoL of these patients. The regression analysis showed that: (1) Bonding social capital, and particularly the friend-connections component of bonding social capital, contributed to the FIQ score and to the SF-36 parameters of social function, mental health, and bodily pain. (2) A problem-focused coping strategy contributed to the mental health parameter of the SF-36, whereas an emotion-focused coping strategy contributed negatively to the FIQ score and to the mental health, general health, and bodily pain parameters of the SF-36. (3) In addition, duration of FM symptoms contributed to the SF-36 parameters of general health, social function, mental health, and bodily pain but not to the FIQ score, whereas work status contributed significantly to the variance of FIQ. Bonding social capital, problem-solving coping strategies, and the duration of FM contribute positively to functioning and QoL of FM patients, whereas emotion-focused coping strategies do the opposite. Further research to test the effects of strengthened social capital and enhanced problem-solving rather than emotion-focused coping strategies on functioning and QoL of FM patients is warranted.

  16. Prepositioning emergency supplies under uncertainty: a parametric optimization method

    NASA Astrophysics Data System (ADS)

    Bai, Xuejie; Gao, Jinwu; Liu, Yankui

    2018-07-01

    Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event and are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance-constrained programming under different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic-scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.

  17. Hypergeometric Series Solution to a Class of Second-Order Boundary Value Problems via Laplace Transform with Applications to Nanofluids

    NASA Astrophysics Data System (ADS)

    Ebaid, Abdelhalim; Wazwaz, Abdul-Majid; Alali, Elham; Masaedeh, Basem S.

    2017-03-01

    Very recently, it was observed that the temperature of nanofluids is ultimately governed by second-order ordinary differential equations with variable coefficients of exponential order. Such coefficients were then transformed to polynomial type by using new independent variables. In this paper, a class of second-order ordinary differential equations with variable coefficients of polynomial type has been solved analytically. The analytical solution is expressed in terms of a hypergeometric function with generalized parameters. Moreover, the present results have been applied to selected nanofluid problems from the literature. The exact solutions in the literature were derived as special cases of our generalized analytical solution.

  18. Simulating variable source problems via post processing of individual particle tallies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.

    2000-10-20

    Monte Carlo is an extremely powerful method of simulating complex, three-dimensional environments without excessive problem simplification. However, it is often time-consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors, which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
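
    The post-processing idea can be sketched as reweighting each recorded particle's tally score by the ratio of a new source density to the density it was originally sampled from, with no additional transport. The tally-file layout, source spectra, and scores below are hypothetical.

      # Conceptual sketch of tally post-processing: evaluate a new source
      # spectrum by reweighting previously recorded per-particle scores.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 100_000
      # the original simulation sampled source energies uniformly on [0, 1] MeV
      energies = rng.uniform(0.0, 1.0, n)
      scores = np.exp(-3.0 * energies) * rng.exponential(1.0, n)   # stand-in tally scores

      def original_pdf(e):
          return np.ones_like(e)                 # uniform on [0, 1]

      def new_pdf(e):
          return 2.0 * e                         # hypothetical alternative spectrum

      weights = new_pdf(energies) / original_pdf(energies)
      print("original-source tally :", scores.mean())
      print("new-source tally      :", np.mean(weights * scores))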

  19. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for SAT constraint satisfaction problems and for unconstrained minimization of NK functions.

  20. Genetic algorithm optimization of transcutaneous energy transmission systems for implantable ventricular assist devices.

    PubMed

    Byron, Kelly; Bluvshtein, Vlad; Lucke, Lori

    2013-01-01

    Transcutaneous energy transmission systems (TETS) wirelessly transmit power through the skin. TETS is particularly desirable for ventricular assist devices (VAD), which currently require cables through the skin to power the implanted pump. Optimizing the inductive link of the TET system is a multi-parameter problem. Most current techniques to optimize the design simplify the problem by combining parameters leading to sub-optimal solutions. In this paper we present an optimization method using a genetic algorithm to handle a larger set of parameters, which leads to a more optimal design. Using this approach, we were able to increase efficiency while also reducing power variability in a prototype, compared to a traditional manual design method.
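
    A generic real-coded genetic algorithm of the kind described, with a placeholder objective rather than the actual inductive-link model, looks like the following sketch; the parameter bounds and fitness are hypothetical.

      # Real-coded GA: tournament selection, blend crossover, Gaussian mutation
      # over a vector of design parameters (placeholder fitness function).
      import numpy as np

      rng = np.random.default_rng(6)

      def fitness(p):
          # hypothetical stand-in for "efficiency minus power variability"
          return -np.sum((p - np.array([0.3, 1.2, 5.0, 0.8])) ** 2)

      lo = np.array([0.0, 0.5, 1.0, 0.0])         # bounds on the design parameters
      hi = np.array([1.0, 2.0, 10.0, 2.0])
      pop = rng.uniform(lo, hi, size=(60, 4))

      for _ in range(200):
          fit = np.array([fitness(p) for p in pop])
          new_pop = [pop[fit.argmax()].copy()]                   # elitism
          while len(new_pop) < len(pop):
              i, j = rng.integers(len(pop), size=2)
              a = pop[i] if fit[i] > fit[j] else pop[j]          # tournament parent 1
              i, j = rng.integers(len(pop), size=2)
              b = pop[i] if fit[i] > fit[j] else pop[j]          # tournament parent 2
              alpha = rng.random(4)
              child = alpha * a + (1 - alpha) * b                # blend crossover
              child += 0.02 * (hi - lo) * rng.standard_normal(4) # mutation
              new_pop.append(np.clip(child, lo, hi))
          pop = np.array(new_pop)

      print("best parameters:", pop[np.argmax([fitness(p) for p in pop])])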

  1. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
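
    A Python analogue of two of the approaches (the paper's code targets Stata and LIMDEP), applied to a simple nonlinear function of OLS estimates, g(beta) = beta1 / beta2, with synthetic data:

      # Delta method and Krinsky-Robb standard errors for g(beta) = beta1/beta2.
      import numpy as np

      rng = np.random.default_rng(7)
      n = 500
      X = np.column_stack([np.ones(n), rng.standard_normal(n), rng.standard_normal(n)])
      y = X @ np.array([1.0, 2.0, 0.5]) + rng.standard_normal(n)

      beta = np.linalg.solve(X.T @ X, X.T @ y)
      resid = y - X @ beta
      cov = np.linalg.inv(X.T @ X) * (resid @ resid) / (n - X.shape[1])

      g = beta[1] / beta[2]
      # delta method: grad(g) Cov grad(g)^T
      grad = np.array([0.0, 1.0 / beta[2], -beta[1] / beta[2] ** 2])
      se_delta = np.sqrt(grad @ cov @ grad)

      # Krinsky-Robb: draw parameters from their estimated sampling distribution
      draws = rng.multivariate_normal(beta, cov, size=20_000)
      se_kr = np.std(draws[:, 1] / draws[:, 2])

      print(f"g = {g:.3f}, SE(delta) = {se_delta:.3f}, SE(Krinsky-Robb) = {se_kr:.3f}")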

  2. An investigation of using an RQP based method to calculate parameter sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many of our modern day algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second order information about the Lagrangian at the current point, and (2) the estimates assume no change in the active set of constraints. The first of these two problems is addressed here and a new algorithm is proposed that does not require explicit calculation of second order information.

  3. State-space models’ dirty little secrets: even simple linear Gaussian models can have estimation problems

    NASA Astrophysics Data System (ADS)

    Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M.; Derocher, Andrew E.; Lewis, Mark A.; Jonsen, Ian D.; Mills Flemming, Joanna

    2016-05-01

    State-space models (SSMs) are increasingly used in ecology to model time-series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of a SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.

  4. Simultaneous analysis and design

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1984-01-01

    Optimization techniques are increasingly being used for performing nonlinear structural analysis. The development of element by element (EBE) preconditioned conjugate gradient (CG) techniques is expected to extend this trend to linear analysis. Under these circumstances the structural design problem can be viewed as a nested optimization problem. There are computational benefits to treating this nested problem as a large single optimization problem. The response variables (such as displacements) and the structural parameters are all treated as design variables in a unified formulation which performs simultaneously the design and analysis. Two examples are used for demonstration. A seventy-two bar truss is optimized subject to linear stress constraints and a wing box structure is optimized subject to nonlinear collapse constraints. Both examples show substantial computational savings with the unified approach as compared to the traditional nested approach.
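
    The simultaneous ("one-shot") formulation can be illustrated on a toy bar-sizing problem: the displacement and the cross-sectional area are both optimization unknowns, and the equilibrium equation is imposed as an equality constraint instead of being solved in a nested analysis at each design iteration. The data are in normalized, hypothetical units.

      # Toy simultaneous analysis and design: z = [A, u], minimize mass with
      # equilibrium K(A) u = P as an equality constraint.
      from scipy.optimize import minimize

      E, L, P, u_max = 1.0, 1.0, 2.0, 0.5       # normalized hypothetical data

      def mass(z):                              # mass is proportional to area
          return z[0] * L

      cons = [
          {"type": "eq",   "fun": lambda z: (E * z[0] / L) * z[1] - P},   # equilibrium
          {"type": "ineq", "fun": lambda z: u_max - z[1]},                # displacement limit
      ]
      res = minimize(mass, x0=[1.0, 1.0], bounds=[(1e-3, 10.0), (0.0, 1.0)],
                     method="SLSQP", constraints=cons)
      A_opt, u_opt = res.x
      print(f"A = {A_opt:.3f}, u = {u_opt:.3f}")   # expect A = P*L/(E*u_max), u = u_max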

  5. Using Indirect Turbulence Measurements for Real-Time Parameter Estimation in Turbulent Air

    NASA Technical Reports Server (NTRS)

    Martos, Borja; Morelli, Eugene A.

    2012-01-01

    The use of indirect turbulence measurements for real-time estimation of parameters in a linear longitudinal dynamics model in atmospheric turbulence was studied. It is shown that measuring the atmospheric turbulence makes it possible to treat the turbulence as a measured explanatory variable in the parameter estimation problem. Commercial off-the-shelf sensors were researched and evaluated, then compared to air data booms. Sources of colored noise in the explanatory variables resulting from typical turbulence measurement techniques were identified and studied. A major source of colored noise in the explanatory variables was identified as frequency dependent upwash and time delay. The resulting upwash and time delay corrections were analyzed and compared to previous time shift dynamic modeling research. Simulation data as well as flight test data in atmospheric turbulence were used to verify the time delay behavior. Recommendations are given for follow on flight research and instrumentation.

  6. Selecting Design Parameters for Flying Vehicles

    NASA Astrophysics Data System (ADS)

    Makeev, V. I.; Strel'nikova, E. A.; Trofimenko, P. E.; Bondar', A. V.

    2013-09-01

    Studying the influence of a number of design parameters of solid-propellant rockets on the longitudinal and lateral dispersion is an important applied problem. A mathematical model of a rigid body of variable mass moving in a disturbed medium exerting both wave drag and friction is considered. The model makes it possible to determine the coefficients of aerodynamic forces and moments, which affect the motion of vehicles, and to assess the effect of design parameters on their accuracy.

  7. Investigation on the effects of temperature dependency of material parameters on a thermoelastic loading problem

    NASA Astrophysics Data System (ADS)

    Kumar, Anil; Mukhopadhyay, Santwana

    2017-08-01

    The present work is concerned with the investigation of thermoelastic interactions inside a spherical shell with temperature-dependent material parameters. We employ the heat conduction model with a single delay term. The problem is studied by considering three different kinds of time-dependent temperature and stress distributions applied at the inner and outer surfaces of the shell. The problem is formulated by considering that the thermal properties vary as linear functions of temperature, which yields nonlinear governing equations. The problem is solved by applying the Kirchhoff transformation along with an integral transform technique. The numerical results for the field variables are shown in graphs to study the influence of temperature-dependent thermal parameters in various cases. It is shown that the temperature-dependence effect is more prominent for the stress distribution than for the other fields, and that the effect is significant when thermal shock is applied at the two boundary surfaces of the spherical shell.

  8. Optimal positions and parameters of translational and rotational mass dampers in beams subjected to random excitation

    NASA Astrophysics Data System (ADS)

    Łatas, Waldemar

    2018-01-01

    The problem of vibrations of a beam with an attached system of translational and rotational dynamic mass dampers subjected to random excitations with peaked power spectral densities is presented in this paper. The Euler-Bernoulli beam model is applied, while for solving the equation of motion the Galerkin method and the Laplace time transform are used. The obtained transfer functions make it possible to determine power spectral densities of the beam deflection and other dependent variables. Numerical examples present simple optimization problems for the mass damper parameters with local and global objective functions.

  9. On the performance of exponential integrators for problems in magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas; Tokman, Mayya; Loffeld, John

    2017-02-01

    Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
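
    The basic mechanism behind exponential integrators can be shown with the first-order exponential Euler scheme (the paper itself uses higher-order EPIRK methods): for u' = Au + g(u), the update is u_{n+1} = exp(hA) u_n + h*phi1(hA) g(u_n), where phi1(z) = (e^z - 1)/z. The test problem below, a 1-D diffusion operator with a mild nonlinearity, is only illustrative.

      # Exponential Euler for a stiff semilinear system u' = A u + g(u).
      import numpy as np
      from scipy.linalg import expm, solve

      m = 50
      x = np.linspace(0.0, 1.0, m)
      dx = x[1] - x[0]
      A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
           + np.diag(np.ones(m - 1), -1)) / dx**2          # discrete Laplacian (stiff part)

      def g(u):
          return u - u**3                                  # mild nonlinearity

      h, n_steps = 1e-2, 200
      u = np.exp(-100.0 * (x - 0.5) ** 2)                  # initial condition

      E_h = expm(h * A)
      phi1 = solve(h * A, E_h - np.eye(m))                 # phi1(hA) = (hA)^{-1}(e^{hA} - I)

      for _ in range(n_steps):
          u = E_h @ u + h * (phi1 @ g(u))

      print("final max |u|:", np.abs(u).max())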

  10. Wormholes and the cosmological constant problem.

    NASA Astrophysics Data System (ADS)

    Klebanov, I.

    The author reviews the cosmological constant problem and the recently proposed wormhole mechanism for its solution. Summation over wormholes in the Euclidean path integral for gravity turns all the coupling parameters into dynamical variables, sampled from a probability distribution. A formal saddle point analysis results in a distribution with a sharp peak at the cosmological constant equal to zero, which appears to solve the cosmological constant problem. He discusses the instabilities of the gravitational Euclidean path integral and the difficulties with its interpretation. He presents an alternate formalism for baby universes, based on the "third quantization" of the Wheeler-De Witt equation. This approach is analyzed in a minisuperspace model for quantum gravity, where it reduces to simple quantum mechanics. Once again, the coupling parameters become dynamical. Unfortunately, the a priori probability distribution for the cosmological constant and other parameters is typically a smooth function, with no sharp peaks.

  11. An algorithm for analytical solution of basic problems featuring elastostatic bodies with cavities and surface flaws

    NASA Astrophysics Data System (ADS)

    Penkov, V. B.; Levina, L. V.; Novikova, O. S.; Shulmin, A. S.

    2018-03-01

    Herein we propose a methodology for structuring a full parametric analytical solution to problems featuring elastostatic media based on state-of-the-art computing facilities that support computerized algebra. The methodology includes: direct and reverse application of P-Theorem; methods of accounting for physical properties of media; accounting for variable geometrical parameters of bodies, parameters of boundary states, independent parameters of volume forces, and remote stress factors. An efficient tool to address the task is the sustainable method of boundary states originally designed for the purposes of computerized algebra and based on the isomorphism of Hilbertian spaces of internal states and boundary states of bodies. We performed full parametric solutions of basic problems featuring a ball with a nonconcentric spherical cavity, a ball with a near-surface flaw, and an unlimited medium with two spherical cavities.

  12. A Lagrange multiplier and Hopfield-type barrier function method for the traveling salesman problem.

    PubMed

    Dang, Chuangyin; Xu, Lei

    2002-02-01

    A Lagrange multiplier and Hopfield-type barrier function method is proposed for approximating a solution of the traveling salesman problem. The method is derived from applications of Lagrange multipliers and a Hopfield-type barrier function and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the method searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that lower and upper bounds on variables are always satisfied automatically if the step length is a number between zero and one. At each iteration, the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the method converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the method seems more effective and efficient than the softassign algorithm.

  13. Inverse problem of the vibrational band gap of periodically supported beam

    NASA Astrophysics Data System (ADS)

    Shi, Xiaona; Shu, Haisheng; Dong, Fuzhen; Zhao, Lei

    2017-04-01

    Research on periodic structures has a long history, largely confined to the forward problem. In this paper, the inverse problem is considered and an overall framework is proposed which includes two main stages, i.e., the band gap criterion and its optimization. As a preliminary investigation, the inverse problem of the flexural vibrational band gap of a periodically supported beam is analyzed. According to existing knowledge of its forward problem, the band gap criterion is given in implicit form. Then, two cases with three independent parameters, namely the double supported case and the triple one, are studied in detail and explicit expressions of the feasible domain are constructed by numerical fitting. Finally, the parameter optimization of the double supported case with three variables is conducted using a genetic algorithm, aiming for the best mean attenuation within a specified frequency band.

  14. The Mathematics of Psychotherapy: A Nonlinear Model of Change Dynamics.

    PubMed

    Schiepek, Gunter; Aas, Benjamin; Viol, Kathrin

    2016-07-01

    Psychotherapy is a dynamic process produced by a complex system of interacting variables. Even though there are qualitative models of such systems, the link between structure and function, between network and network dynamics, is still missing. The aim of this study is to realize these links. The proposed model is composed of five state variables (P: problem severity, S: success and therapeutic progress, M: motivation to change, E: emotions, I: insight and new perspectives) interconnected by 16 functions. The shape of each function is modified by four parameters (a: capability to form a trustful working alliance, c: mentalization and emotion regulation, r: behavioral resources and skills, m: self-efficacy and reward expectation). Psychologically, the parameters play the role of competencies or traits, which translate into the concept of control parameters in synergetics. The qualitative model was transferred into five coupled, deterministic, nonlinear difference equations generating the dynamics of each variable as a function of other variables. The mathematical model is able to reproduce important features of psychotherapy processes. Examples of parameter-dependent bifurcation diagrams are given. Beyond the illustrated similarities between simulated and empirical dynamics, the model has to be further developed, systematically tested by simulated experiments, and compared to empirical data.

  15. Hybrid Genetic Algorithms and Line Search Method for Industrial Production Planning with Non-Linear Fitness Function

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian; Barsoum, Nader

    2008-10-01

    Many engineering, science, information technology and management optimization problems can be considered as nonlinear programming real-world problems in which all or some of the parameters and variables involved are uncertain in nature. These can only be quantified using intelligent computational techniques such as evolutionary computation and fuzzy logic. The main objective of this research paper is to solve a nonlinear fuzzy optimization problem in which the technological coefficients in the constraints are fuzzy numbers represented by logistic membership functions, using a hybrid evolutionary optimization approach. To explore the applicability of the present study, a numerical example is considered to determine the production planning for the decision variables and the profit of the company.

  16. Against Laplacian Reduction of Newtonian Mass to Spatiotemporal Quantities

    NASA Astrophysics Data System (ADS)

    Martens, Niels C. M.

    2018-05-01

    Laplace wondered about the minimal choice of initial variables and parameters corresponding to a well-posed initial value problem. Discussions of Laplace's problem in the literature have focused on choosing between spatiotemporal variables relative to absolute space (i.e. substantivalism) or merely relative to other material bodies (i.e. relationalism) and between absolute masses (i.e. absolutism) or merely mass ratios (i.e. comparativism). This paper extends these discussions of Laplace's problem, in the context of Newtonian Gravity, by asking whether mass needs to be included in the initial state at all, or whether a purely spatiotemporal initial state suffices. It is argued that mass indeed needs to be included; removing mass from the initial state drastically reduces the predictive and explanatory power of Newtonian Gravity.

  17. Separation of variables in anisotropic models: anisotropic Rabi and elliptic Gaudin model in an external magnetic field

    NASA Astrophysics Data System (ADS)

    Skrypnyk, T.

    2017-08-01

    We study the problem of separation of variables for classical integrable Hamiltonian systems governed by non-skew-symmetric, non-dynamical so(3)⊗so(3)-valued elliptic r-matrices with spectral parameters. We consider several examples of such models, and perform separation of variables for classical anisotropic one- and two-spin Gaudin-type models in an external magnetic field, and for Jaynes-Cummings-Dicke-type models without the rotating wave approximation.

  18. Singularity problems of the power law for modeling creep compliance

    NASA Technical Reports Server (NTRS)

    Dillard, D. A.; Hiel, C.

    1985-01-01

    An explanation is offered for the extreme sensitivity that has been observed in the power law parameters of the T300/934 graphite epoxy material systems during experiments to evaluate the system's viscoelastic response. It is shown that the singularity associated with the power law can explain the sensitivity as well as the observed variability in the calculated parameters. Techniques for minimizing errors are suggested.

  19. Using Bayesian regression to test hypotheses about relationships between parameters and covariates in cognitive models.

    PubMed

    Boehm, Udo; Steingroever, Helen; Wagenmakers, Eric-Jan

    2018-06-01

    Important tools in the advancement of cognitive science are quantitative models that represent different cognitive variables in terms of model parameters. To evaluate such models, their parameters are typically tested for relationships with behavioral and physiological variables that are thought to reflect specific cognitive processes. However, many models do not come equipped with the statistical framework needed to relate model parameters to covariates. Instead, researchers often revert to classifying participants into groups depending on their values on the covariates, and subsequently comparing the estimated model parameters between these groups. Here we develop a comprehensive solution to the covariate problem in the form of a Bayesian regression framework. Our framework can be easily added to existing cognitive models and allows researchers to quantify the evidential support for relationships between covariates and model parameters using Bayes factors. Moreover, we present a simulation study that demonstrates the superiority of the Bayesian regression framework to the conventional classification-based approach.
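
    A bare-bones illustration of quantifying a parameter-covariate relationship with a Bayes factor, using a grid-based Savage-Dickey density ratio for a regression slope rather than the paper's hierarchical framework, is sketched below with synthetic data and an assumed known noise level.

      # Grid-based Savage-Dickey Bayes factor for "slope = 0" in a simple
      # regression of estimated model parameters on a covariate.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      n = 40
      covariate = rng.standard_normal(n)                       # e.g. a physiological measure
      param = 0.4 * covariate + rng.normal(0.0, 0.5, n)        # estimated model parameters

      slopes = np.linspace(-2.0, 2.0, 4001)
      prior = stats.norm.pdf(slopes, 0.0, 1.0)                 # N(0, 1) prior on the slope

      # likelihood of the parameter estimates for each candidate slope (sigma assumed known)
      loglik = np.array([
          stats.norm.logpdf(param, b * covariate, 0.5).sum() for b in slopes
      ])
      post = prior * np.exp(loglik - loglik.max())
      post /= post.sum() * (slopes[1] - slopes[0])             # normalize on the grid

      # Savage-Dickey: BF01 = posterior density at slope = 0 over prior density at 0
      bf01 = np.interp(0.0, slopes, post) / stats.norm.pdf(0.0, 0.0, 1.0)
      print("BF01 (evidence for no relationship):", round(bf01, 4))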

  1. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS On control of kinematic parameters of ultracold neutrons in waveguides

    NASA Astrophysics Data System (ADS)

    Rivlin, Lev A.

    2010-10-01

    The possibility of controlling the kinematic parameters of ultracold neutrons (UCNs) is analysed using two examples: the waveguide transfer and transformation of 2D images in ultracold neutrons, and the increase in concentration and the deceleration/acceleration of ultracold neutrons during their transport in a waveguide with a variable cross section. The critical parameters of the problem are estimated, which indicates both the consistency of the proposed approach and the emerging experimental limitations.

  2. Robustness and Actuator Bandwidth of MRP-Based Sliding Mode Control for Spacecraft Attitude Control Problems

    NASA Astrophysics Data System (ADS)

    Keum, Jung-Hoon; Ra, Sung-Woong

    2009-12-01

    Nonlinear sliding surface design in variable structure systems for spacecraft attitude control problems is studied. A robustness analysis is performed for regular form of system, and calculation of actuator bandwidth is presented by reviewing sliding surface dynamics. To achieve non-singular attitude description and minimal parameterization, spacecraft attitude control problems are considered based on modified Rodrigues parameters (MRP). It is shown that the derived controller ensures the sliding motion in pre-determined region irrespective of unmodeled effects and disturbances.

  3. Gas evolution from spheres

    NASA Astrophysics Data System (ADS)

    Longhurst, G. R.

    1991-04-01

    Gas evolution from spherical solids or liquids where no convective processes are active is analyzed. Three problem classes are considered: (1) constant concentration boundary, (2) Henry's law (first order) boundary, and (3) Sieverts' law (second order) boundary. General expressions are derived for dimensionless times and transport parameters appropriate to each of the classes considered. However, in the second order case, the non-linearities of the problem require the presence of explicit dimensional variables in the solution. Sample problems are solved to illustrate the method.

  4. Structural identifiability of cyclic graphical models of biological networks with latent variables.

    PubMed

    Wang, Yulin; Lu, Na; Miao, Hongyu

    2016-06-13

    Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually partially observed in experiment, which thus introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches necessarily calls for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations to binary matrices (called the identifiability matrix). Several matrix operations are introduced for identifiability matrix reduction with system equivalency maintained. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. Also, this method is capable of determining the identifiability of each single parameter and is thus of higher resolution in comparison with many existing approaches. Overall, this study provides a basis for systematic examination and refinement of graphical models of biological networks from the identifiability point of view, and it has a significant potential to be extended to more complex network structures or high-dimensional systems.

  5. Peristaltic transport of a fractional Burgers' fluid with variable viscosity through an inclined tube

    NASA Astrophysics Data System (ADS)

    Rachid, Hassan

    2015-12-01

    In the present study, we investigate the unsteady peristaltic transport of a viscoelastic fluid with the fractional Burgers' model in an inclined tube. We suppose that the viscosity is variable in the radial direction. This analysis has been carried out under low Reynolds number and long-wavelength approximations. An analytical solution to the problem is obtained using a fractional calculus approach. Figures are plotted to show the effects of the angle of inclination, Reynolds number, Froude number, material constants, fractional parameters, viscosity parameter and amplitude ratio on the pressure gradient, pressure rise, friction force, axial velocity and on the mechanical efficiency.

  6. Plate falling in a fluid: Regular and chaotic dynamics of finite-dimensional models

    NASA Astrophysics Data System (ADS)

    Kuznetsov, Sergey P.

    2015-05-01

    Results are reviewed concerning the planar problem of a plate falling in a resisting medium studied with models based on ordinary differential equations for a small number of dynamical variables. A unified model is introduced to conduct a comparative analysis of the dynamical behaviors of models of Kozlov, Tanabe-Kaneko, Belmonte-Eisenberg-Moses and Andersen-Pesavento-Wang using common dimensionless variables and parameters. It is shown that the overall structure of the parameter spaces for the different models manifests certain similarities caused by the same inherent symmetry and by the universal nature of the phenomena involved in nonlinear dynamics (fixed points, limit cycles, attractors, and bifurcations).

  7. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
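
    One common way to realize the two-dimensional mapping mentioned above is principal component analysis (other techniques serve the same purpose). The sketch below projects a synthetic many-input dataset onto its first two components and colors the points by the outcome variable for visual inspection.

      # PCA via SVD: map a 12-input dataset to two dimensions and plot it.
      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(9)
      X = rng.standard_normal((300, 12))                 # 12 input parameters
      y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)      # synthetic nominal outcome

      Xc = X - X.mean(axis=0)
      _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
      coords = Xc @ Vt[:2].T                             # scores on the first two PCs

      plt.scatter(coords[:, 0], coords[:, 1], c=y, cmap="coolwarm", s=15)
      plt.xlabel("PC 1"); plt.ylabel("PC 2")
      plt.title("2-D view of a 12-input dataset")
      plt.show()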

  8. Strict Constraint Feasibility in Analysis and Design of Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enable us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.

  9. Mesh refinement and numerical sensitivity analysis for parameter calibration of partial differential equations

    NASA Astrophysics Data System (ADS)

    Becker, Roland; Vexler, Boris

    2005-06-01

    We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach at hand of a parameter calibration problem for a model flow problem.

  10. Dynamic analysis of four bar planar mechanism extended to six-bar planar mechanism with variable topology

    NASA Astrophysics Data System (ADS)

    Belleri, Basayya K.; Kerur, Shravankumar B.

    2018-04-01

    A computer-oriented procedure for solving the dynamic force analysis problem for general planar mechanisms is presented. This paper provides the position, velocity, acceleration and force analysis of a six-bar mechanism using a variable-topology approach. The six-bar mechanism is constructed by joining two simple four-bar mechanisms. Initially, the position, velocity and acceleration of the first four-bar mechanism are determined from the input parameters. The outputs of the first four-bar mechanism (angular displacement, velocity and acceleration of the rocker) are used as input parameters for the second four-bar mechanism, whose position, velocity, acceleration and forces are then analyzed. With the output parameters of the second four-bar mechanism, the force analysis of the first four-bar mechanism is carried out.
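
    A minimal numerical sketch of the position-analysis step is given below: the vector-loop (closure) equations of a four-bar linkage are solved for the coupler and rocker angles given a crank angle, the kind of output that would then feed the second four-bar in the variable-topology chain. Link lengths and the crank angle are hypothetical.

      # Sketch: four-bar position analysis from the loop-closure equations.
      import numpy as np
      from scipy.optimize import fsolve

      r1, r2, r3, r4 = 0.30, 0.10, 0.25, 0.20    # ground, crank, coupler, rocker [m]
      theta2 = np.radians(40.0)                   # crank input angle

      def loop_closure(angles):
          theta3, theta4 = angles
          return [r2*np.cos(theta2) + r3*np.cos(theta3) - r4*np.cos(theta4) - r1,
                  r2*np.sin(theta2) + r3*np.sin(theta3) - r4*np.sin(theta4)]

      theta3, theta4 = fsolve(loop_closure, x0=[0.3, 1.5])
      print("coupler angle [deg]:", np.degrees(theta3),
            "rocker angle [deg]:", np.degrees(theta4))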

  11. Variation of parameters using Battin's universal functions

    NASA Astrophysics Data System (ADS)

    Burton, James R., III; Melton, Robert G.

    This paper presents a variation of parameters analysis, suitable for use in situations involving small perturbations to the two-body problem, using Battin's universal functions. Unlike the universal variable formulation, this approach avoids the need to switch among different functional representations if the orbit transitions from elliptical, through parabolic, to hyperbolic state, making it attractive for use in simulating low-thrust trajectories ascending to escape or capturing into orbit.

  12. Simulating parameters of lunar physical libration on the basis of its analytical theory

    NASA Astrophysics Data System (ADS)

    Petrova, N.; Zagidullin, A.; Nefediev, Yu.

    2014-04-01

    Results of simulating the behavior of lunar physical libration parameters are presented. Some features in the rate of change of the impulse variables are revealed: fast periodic changes in p2 and long-periodic changes in p3. The problem of finding a dynamical explanation for this phenomenon is posed. The simulation was performed on the basis of the analytical libration theory [1] in the VBA programming environment.

  13. Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations

    PubMed Central

    Chaspari, Theodora; Tsiartas, Andreas; Tsilifis, Panagiotis; Narayanan, Shrikanth

    2016-01-01

    Parametric dictionaries can increase the ability of sparse representations to meaningfully capture and interpret the underlying signal information, such as encountered in biomedical problems. Given a mapping function from the atom parameter space to the actual atoms, we propose a sparse Bayesian framework for learning the atom parameters, because of its ability to provide full posterior estimates, take uncertainty into account and generalize on unseen data. Inference is performed with Markov Chain Monte Carlo, that uses block sampling to generate the variables of the Bayesian problem. Since the parameterization of dictionary atoms results in posteriors that cannot be analytically computed, we use a Metropolis-Hastings-within-Gibbs framework, according to which variables with closed-form posteriors are generated with the Gibbs sampler, while the remaining ones with the Metropolis Hastings from appropriate candidate-generating densities. We further show that the corresponding Markov Chain is uniformly ergodic ensuring its convergence to a stationary distribution independently of the initial state. Results on synthetic data and real biomedical signals indicate that our approach offers advantages in terms of signal reconstruction compared to previously proposed Steepest Descent and Equiangular Tight Frame methods. This paper demonstrates the ability of Bayesian learning to generate parametric dictionaries that can reliably represent the exemplar data and provides the foundation towards inferring the entire variable set of the sparse approximation problem for signal denoising, adaptation and other applications. PMID:28649173
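
    A toy Metropolis-Hastings-within-Gibbs sketch is given below, under stated assumptions: in a model with observations y ~ N(mu, sigma^2), the conditional of mu is Gaussian and is sampled directly (Gibbs step), while sigma, whose conditional is not available in closed form, is updated with a random-walk Metropolis step. The model and priors are illustrative only and are not the dictionary-learning posterior of the paper.

      # Sketch of a Metropolis-Hastings-within-Gibbs sampler on a toy model.
      import numpy as np

      rng = np.random.default_rng(1)
      y = rng.normal(2.0, 1.5, size=200)
      n, prior_var = y.size, 10.0 ** 2

      def log_post_sigma(sigma, mu):
          if sigma <= 0:
              return -np.inf
          # Gaussian likelihood + half-Cauchy(1) prior on sigma (assumed here)
          return (-n * np.log(sigma) - np.sum((y - mu) ** 2) / (2 * sigma ** 2)
                  - np.log(1 + sigma ** 2))

      mu, sigma = 0.0, 1.0
      samples = []
      for it in range(5000):
          # Gibbs step: the conditional of mu is Gaussian (closed form)
          post_var = 1.0 / (n / sigma ** 2 + 1.0 / prior_var)
          post_mean = post_var * np.sum(y) / sigma ** 2
          mu = rng.normal(post_mean, np.sqrt(post_var))

          # Metropolis step for sigma (random-walk proposal)
          prop = sigma + rng.normal(0, 0.1)
          if np.log(rng.uniform()) < log_post_sigma(prop, mu) - log_post_sigma(sigma, mu):
              sigma = prop
          samples.append((mu, sigma))

      print("posterior means (mu, sigma):", np.mean(samples[1000:], axis=0))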

  14. On the photo-gravitational restricted four-body problem with variable mass

    NASA Astrophysics Data System (ADS)

    Mittal, Amit; Agarwal, Rajiv; Suraj, Md Sanam; Arora, Monika

    2018-05-01

    This paper deals with the photo-gravitational restricted four-body problem (PR4BP) with variable mass. Following the procedure given by Gascheau (C. R. 16:393-394, 1843) and Routh (Proc. Lond. Math. Soc. 6:86-97, 1875), the conditions of linear stability of Lagrange triangle solution in the PR4BP are determined. The three radiating primaries having masses m1, m2 and m3 in an equilateral triangle with m2=m3 will be stable as long as they satisfy the linear stability condition of the Lagrangian triangle solution. We have derived the equations of motion of the mentioned problem and observed that there exist eight libration points for a fixed value of parameters γ (m at time t/m at initial time, 0<γ≤1 ), α (the proportionality constant in Jeans' law (Astronomy and Cosmogony, Cambridge University Press, Cambridge, 1928), 0≤α≤2.2), the mass parameter μ=0.005 and radiation parameters qi, (0< qi≤1, i=1, 2, 3). All the libration points are non-collinear if q2≠ q3. It has been observed that the collinear and out-of-plane libration points also exist for q2=q3. In all the cases, each libration point is found to be unstable. Further, zero velocity curves (ZVCs) and Newton-Raphson basins of attraction are also discussed.

  15. VARIABILITY OF PARAMETERS MEASURED DURING THE RESUSPENSION OF SEDIMENTS WITH A PARTICULATE ENTRAINMENT SIMULATOR

    EPA Science Inventory

    Contaminated sediments are a problem facing many environmental managers concerned with issues such as maintenance dredging, habitat restoration and dredged material placement. Currently, there are few methods which can be used to assess contaminant remobilization potential from ...

  16. Techniques for shuttle trajectory optimization

    NASA Technical Reports Server (NTRS)

    Edge, E. R.; Shieh, C. J.; Powers, W. F.

    1973-01-01

    The application of recently developed function-space Davidon-type techniques to the shuttle ascent trajectory optimization problem is discussed along with an investigation of the recently developed PRAXIS algorithm for parameter optimization. At the outset of this analysis, the major deficiency of the function-space algorithms was their potential storage problems. Since most previous analyses of the methods were with relatively low-dimension problems, no storage problems were encountered. However, in shuttle trajectory optimization, storage is a problem, and this problem was handled efficiently. Topics discussed include: the shuttle ascent model and the development of the particular optimization equations; the function-space algorithms; the operation of the algorithm and typical simulations; variable final-time problem considerations; and a modification of Powell's algorithm.

  17. TRUMP. Transient & S-State Temperature Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.C.; Turner, W.D.

    1992-03-03

    TRUMP solves a general nonlinear parabolic partial differential equation describing flow in various kinds of potential fields, such as fields of temperature, pressure, or electricity and magnetism; simultaneously, it will solve two additional equations representing, in thermal problems, heat production by decomposition of two reactants having rate constants with a general Arrhenius temperature dependence. Steady-state and transient flow in one, two, or three dimensions are considered in geometrical configurations having simple or complex shapes and structures. Problem parameters may vary with spatial position, time, or primary dependent variables, temperature, pressure, or field strength. Initial conditions may vary with spatial position, and among the criteria that may be specified for ending a problem are upper and lower limits on the size of the primary dependent variable, upper limits on the problem time or on the number of time-steps or on the computer time, and attainment of steady state.

  19. Analysis of variance in investigations on anisotropy of Cu ore deposits

    NASA Astrophysics Data System (ADS)

    Namysłowska-Wilczyńska, B.

    1986-10-01

    The problem of the variability of copper grades and ore thickness in the Lubin copper ore deposit in southwestern Poland is presented. Results of a statistical analysis of the variation of ledge parameters, carried out for three exploited regions of the mine representing different types of lithological profile, show considerable differences. Variability of copper grades occurs in vertical profiles as well as along the extent of the field (the copper-bearing series). Against the background of a complex, well-substantiated description of the spatial variability in the Lubin deposit, a methodology is presented that has been applied for the determination of homogeneous ore blocks. The method is a two-factor (cross) analysis of variance with the special tests of Tukey, Scheffe and Duncan. Blocks of homogeneous sandstone ore have dimensions of up to 160,000 m2 and 60,000 m2 in the case of the Cu content parameter and 200,000 m2 and 10,000 m2 for the thickness parameter.

  20. CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.

    USGS Publications Warehouse

    Cooley, Richard L.; Vecchia, Aldo V.

    1987-01-01

    A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
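
    A minimal sketch of the simple Monte Carlo step described above: uncertain parameters are sampled within prescribed extreme ranges, the model output is computed, and quantiles give a confidence interval; adding a random error in the dependent variable widens the interval into a prediction interval. The model, parameter ranges and error level are hypothetical.

      # Sketch: Monte Carlo quantiles for confidence and prediction intervals.
      import numpy as np

      rng = np.random.default_rng(2)
      n_draws = 20_000

      # uncertain parameters sampled within prescribed extreme ranges (hypothetical)
      k = rng.uniform(1e-5, 1e-3, n_draws)     # e.g. hydraulic conductivity
      r = rng.uniform(0.1, 0.5, n_draws)       # e.g. recharge factor

      head = 50.0 + r / k * 1e-3               # nonlinear model output (illustrative)

      conf_lo, conf_hi = np.quantile(head, [0.025, 0.975])          # confidence interval
      noise = rng.normal(0.0, 0.5, n_draws)                         # random error in data
      pred_lo, pred_hi = np.quantile(head + noise, [0.025, 0.975])  # prediction interval

      print(f"95% confidence interval: [{conf_lo:.2f}, {conf_hi:.2f}]")
      print(f"95% prediction interval: [{pred_lo:.2f}, {pred_hi:.2f}]")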

  1. An application of robust ridge regression model in the presence of outliers to real data problem

    NASA Astrophysics Data System (ADS)

    Shariff, N. S. Md.; Ferdaos, N. A.

    2017-09-01

    Multicollinearity and outliers often lead to inconsistent and unreliable parameter estimates in regression analysis. The well-known procedure that is robust to the multicollinearity problem is the ridge regression method. This method, however, is believed to be affected by the presence of outliers. The combination of GM-estimation and a ridge parameter, which is robust to both problems, is of interest in this study. Both techniques are therefore employed to investigate the relationship between stock market prices and macroeconomic variables in Malaysia, since the data set involves both multicollinearity and outliers. Four macroeconomic factors are selected for this study: the Consumer Price Index (CPI), Gross Domestic Product (GDP), the Base Lending Rate (BLR) and the Money Supply (M1). The results demonstrate that the proposed procedure is able to produce reliable results in the presence of multicollinearity and outliers in the real data.
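
    As a sketch of the ridge component, the code below computes the closed-form ridge estimator beta = (X'X + kI)^(-1) X'y on hypothetical, nearly collinear macroeconomic-style data and compares it with ordinary least squares. The robust (GM-type) part of the proposed procedure, which would additionally down-weight outlying observations, is not reproduced here.

      # Sketch: ridge regression to stabilise estimates under multicollinearity.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 120
      cpi = rng.normal(size=n)
      gdp = cpi + 0.05 * rng.normal(size=n)        # nearly collinear with CPI
      blr = rng.normal(size=n)
      m1 = rng.normal(size=n)
      X = np.column_stack([np.ones(n), cpi, gdp, blr, m1])
      y = 1.0 + 2.0 * cpi + 0.5 * blr + rng.normal(scale=0.3, size=n)

      k = 0.1                                      # ridge parameter
      beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
      beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)
      print("OLS:  ", np.round(beta_ols, 3))
      print("Ridge:", np.round(beta_ridge, 3))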

  2. Inverse modeling for seawater intrusion in coastal aquifers: Insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem

    USGS Publications Warehouse

    Sanz, E.; Voss, C.I.

    2006-01-01

    Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only concentration observations. Permeability, freshwater inflow, solute molecular diffusivity, and porosity can be estimated with roughly equivalent confidence using observations of only the logarithm of concentration. Furthermore, covariance analysis allows a logical reduction of the number of estimated parameters for ill-posed inverse seawater intrusion problems. Ill-posed problems may exhibit poor estimation convergence, have a non-unique solution, have multiple minima, or require excessive computational effort, and the condition often occurs when estimating too many or co-dependent parameters. For the Henry problem, such analysis allows selection of the two parameters that control system physics from among all possible system parameters. ?? 2005 Elsevier Ltd. All rights reserved.

  3. Investigating the relationship between a soils classification and the spatial parameters of a conceptual catchment-scale hydrological model

    NASA Astrophysics Data System (ADS)

    Dunn, S. M.; Lilly, A.

    2001-10-01

    There are now many examples of hydrological models that utilise the capabilities of Geographic Information Systems to generate spatially distributed predictions of behaviour. However, the spatial variability of hydrological parameters relating to distributions of soils and vegetation can be hard to establish. In this paper, the relationship between a soil hydrological classification Hydrology of Soil Types (HOST) and the spatial parameters of a conceptual catchment-scale model is investigated. A procedure involving inverse modelling using Monte-Carlo simulations on two catchments is developed to identify relative values for soil related parameters of the DIY model. The relative values determine the internal variability of hydrological processes as a function of the soil type. For three out of the four soil parameters studied, the variability between HOST classes was found to be consistent across two catchments when tested independently. Problems in identifying values for the fourth 'fast response distance' parameter have highlighted a potential limitation with the present structure of the model. The present assumption that this parameter can be related simply to soil type rather than topography appears to be inadequate. With the exclusion of this parameter, calibrated parameter sets from one catchment can be converted into equivalent parameter sets for the alternate catchment on the basis of their HOST distributions, to give a reasonable simulation of flow. Following further testing on different catchments, and modifications to the definition of the fast response distance parameter, the technique provides a methodology whereby it is possible to directly derive spatial soil parameters for new catchments.

  4. Surface quality and topographic inspection of variable compliance part after precise turning

    NASA Astrophysics Data System (ADS)

    Nieslony, P.; Krolczyk, G. M.; Wojciechowski, S.; Chudy, R.; Zak, K.; Maruda, R. W.

    2018-03-01

    The paper presents the problem of precise turning of mould parts with variable compliance and demonstrates a topographic inspection of the machined surface quality. The study was conducted for coated cemented-carbide cutting tools over a range of cutting parameters. A long shaft with a special axial hole, made of hardened 55NiCrMoV6 steel, was selected as the workpiece. The study included measurement of the stiffness of the machining system as well as investigation of the cutting force components. In this context, the surface topography parameters were evaluated with a stylus profilometer and analysed. The research revealed that the surface topography, alongside the 3D functional parameters and the PSD, influences the performance of the machined surface. The lowest surface roughness parameter values, equal to Sa = 1 μm and Sz = 4.3 μm, were obtained during turning with a cutting speed of vc = 90 m/min. Stable turning of the variable-compliance part produces a surface texture with a unidirectional, perpendicular, anisotropic structure. In the case of unstable turning, characteristic chatter marks are observed, and the process dynamics makes a greater contribution to the formation of the surface finish than the turning kinematics and the elastic-plastic deformation of the workpiece.

  5. Small area estimation (SAE) model: Case study of poverty in West Java Province

    NASA Astrophysics Data System (ADS)

    Suhartini, Titin; Sadik, Kusman; Indahwati

    2016-02-01

    This paper compares direct estimation with an indirect/Small Area Estimation (SAE) model. Model selection included resolving the multicollinearity problem in the auxiliary variables, either by retaining only non-collinear variables or by applying principal components (PC). The parameters of concern were the area-level proportions of agricultural-venture poor households and agricultural poor households in West Java Province. These parameters can be estimated by direct estimation or by SAE. The problem with direct estimation is that three areas had sample sizes that were small or even zero, so direct estimation could not be carried out there. The estimated proportion of agricultural-venture poor households was 19.22% and that of agricultural poor households was 46.79%. The best model for agricultural-venture poor households was obtained by retaining only non-collinear variables, and the best model for agricultural poor households by applying PC. SAE performed better than direct estimation for both proportions at the area level in West Java Province. Small area estimation thus overcomes the small-sample-size problem and yields area-level estimates with higher accuracy and better precision than the direct estimator.

  6. Estimation of the Ratio of Scale Parameters in the Two Sample Problem with Arbitrary Right Censorship.

    DTIC Science & Technology

    1980-06-01

    A two-sample version of the Cramér-von Mises statistic for right-censored data is considered, together with an estimator for exponential distributions. KEY WORDS: Cramér-von Mises distance; Kaplan-Meier estimators; Right censorship; Scale parameter. Suppose that two positive random variables differ in distribution only by their scale parameters; that is, there exists a positive ...

  7. A dynamic programming-based particle swarm optimization algorithm for an inventory management problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Zeng, Ziqiang; Han, Bernard; Lei, Xiao

    2013-07-01

    This article presents a dynamic programming-based particle swarm optimization (DP-based PSO) algorithm for solving an inventory management problem for large-scale construction projects under a fuzzy random environment. By taking into account the purchasing behaviour and strategy under rules of international bidding, a multi-objective fuzzy random dynamic programming model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform fuzzy random parameters into fuzzy variables that are subsequently defuzzified by using an expected value operator with optimistic-pessimistic index. The iterative nature of the authors' model motivates them to develop a DP-based PSO algorithm. More specifically, their approach treats the state variables as hidden parameters. This in turn eliminates many redundant feasibility checks during initialization and particle updates at each iteration. Results and sensitivity analysis are presented to highlight the performance of the authors' optimization method, which is very effective as compared to the standard PSO algorithm.
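
    The sketch below shows only the plain particle swarm mechanics (inertia, cognitive and social terms) on a toy cost function; the authors' DP-based variant additionally embeds the state variables in a dynamic-programming recursion and handles fuzzy random parameters, which is not reproduced here. The cost function and parameter values are hypothetical.

      # Minimal particle swarm optimization sketch on a toy inventory-style cost.
      import numpy as np

      rng = np.random.default_rng(4)

      def cost(x):                      # toy cost function (hypothetical)
          return np.sum((x - 3.0) ** 2) + 5.0 * np.abs(np.sum(x) - 12.0)

      dim, n_particles = 4, 30
      pos = rng.uniform(0, 10, (n_particles, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)].copy()

      w, c1, c2 = 0.7, 1.5, 1.5         # inertia, cognitive, social coefficients
      for it in range(200):
          r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, 0, 10)
          vals = np.array([cost(p) for p in pos])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("best order quantities:", np.round(gbest, 2), "cost:", round(cost(gbest), 3))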

  8. Recent experience in simultaneous control-structure optimization

    NASA Technical Reports Server (NTRS)

    Salama, M.; Ramaker, R.; Milman, M.

    1989-01-01

    To show the feasibility of simultaneous optimization as design procedure, low order problems were used in conjunction with simple control formulations. The numerical results indicate that simultaneous optimization is not only feasible, but also advantageous. Such advantages come at the expense of introducing complexities beyond those encountered in structure optimization alone, or control optimization alone. Examples include: larger design parameter space, optimization may combine continuous and combinatoric variables, and the combined objective function may be nonconvex. Future extensions to include large order problems, more complex objective functions and constraints, and more sophisticated control formulations will require further research to ensure that the additional complexities do not outweigh the advantages of simultaneous optimization. Some areas requiring more efficient tools than currently available include: multiobjective criteria and nonconvex optimization. Efficient techniques to deal with optimization over combinatoric and continuous variables, and with truncation issues for structure and control parameters of both the model space as well as the design space need to be developed.

  9. R programming for parameters estimation of geographically weighted ordinal logistic regression (GWOLR) model based on Newton Raphson

    NASA Astrophysics Data System (ADS)

    Zuhdi, Shaifudin; Saputro, Dewi Retno Sari

    2017-03-01

    The GWOLR model is used to represent the relationship between a dependent variable whose categories are measured on an ordinal scale and independent variables that are influenced by the geographical location of the observation site. Parameter estimation for the GWOLR model by maximum likelihood leads to a system of nonlinear equations whose solution is hard to obtain analytically. Solving it amounts to finding the maximum of the likelihood, which is an optimization problem. The nonlinear system of equations is therefore solved by numerical approximation, namely the Newton-Raphson method. The purpose of this research is to construct the Newton-Raphson iteration algorithm and a program in the R software to estimate the GWOLR model. The research shows that the R program can be used to estimate the parameters of the GWOLR model by forming a syntax program with the "while" command.
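
    A minimal Newton-Raphson sketch is given below for an ordinary binary logistic regression (rather than the full geographically weighted ordinal model), with the iteration driven by a while loop analogous to the R implementation described in the paper. The simulated data and tolerance are assumptions for illustration.

      # Sketch: Newton-Raphson maximum-likelihood iteration for logistic regression.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 300
      X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
      true_beta = np.array([-0.5, 1.2, -0.8])
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))

      beta = np.zeros(3)
      step = np.inf
      while step > 1e-8:                       # Newton-Raphson loop
          p = 1.0 / (1.0 + np.exp(-X @ beta))
          score = X.T @ (y - p)                # gradient of the log-likelihood
          hessian = -(X * (p * (1 - p))[:, None]).T @ X
          delta = np.linalg.solve(hessian, -score)
          beta = beta + delta
          step = np.linalg.norm(delta)

      print("estimated coefficients:", np.round(beta, 3))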

  10. Do spatiotemporal parameters and gait variability differ across the lifespan of healthy adults? A systematic review.

    PubMed

    Herssens, Nolan; Verbecque, Evi; Hallemans, Ann; Vereeck, Luc; Van Rompaey, Vincent; Saeys, Wim

    2018-06-12

    Aging is often associated with changes in the musculoskeletal system, peripheral and central nervous system. These age-related changes often result in mobility problems influencing gait performance. Compensatory strategies are used as a way to adapt to these physiological changes. The aim of this review is to investigate the differences in spatiotemporal and gait variability measures throughout the healthy adult life. This systematic review was conducted according to the PRISMA guidelines and registered in the PROSPERO database (no. CRD42017057720). Databases MEDLINE (Pubmed), Web of Science (Web of Knowledge), Cochrane Library and ScienceDirect were systematically searched until March 2018. Eighteen of the 3195 original studies met the eligibility criteria and were included in this review. The majority of studies reported spatiotemporal and gait variability measures in adults above the age of 65, followed by the young adult population, information of middle-aged adults is lacking. Spatiotemporal parameters and gait variability measures were extracted from 2112 healthy adults between 18 and 98 years old and, in general, tend to deteriorate with increasing age. Variability measures were only reported in an elderly population and show great variety between studies. The findings of this review suggest that most spatiotemporal parameters significantly differ across different age groups. Elderly populations show a reduction of preferred walking speed, cadence, step and stride length, all related to a more cautious gait, while gait variability measures remain stable over time. A preliminary framework of normative reference data is provided, enabling insights into the influence of aging on spatiotemporal parameters, however spatiotemporal parameters of middle-aged adults should be investigated more thoroughly. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Integral Method of Boundary Characteristics: Neumann Condition

    NASA Astrophysics Data System (ADS)

    Kot, V. A.

    2018-05-01

    A new algorithm, based on systems of identical equalities with integral and differential boundary characteristics, is proposed for solving boundary-value problems of heat conduction in bodies of canonical shape under a Neumann boundary condition. Results of a numerical analysis of the accuracy of solving heat-conduction problems with variable boundary conditions using this algorithm are presented. The solutions obtained with it can be considered exact, because their errors amount to hundredths to ten-thousandths of a percent over a wide range of problem parameters.

  12. Adaptable structural synthesis using advanced analysis and optimization coupled by a computer operating system

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Bhat, R. B.

    1979-01-01

    A finite element program is linked with a general purpose optimization program in a 'programing system' which includes user supplied codes that contain problem dependent formulations of the design variables, objective function and constraints. The result is a system adaptable to a wide spectrum of structural optimization problems. In a sample of numerical examples, the design variables are the cross-sectional dimensions and the parameters of overall shape geometry, constraints are applied to stresses, displacements, buckling and vibration characteristics, and structural mass is the objective function. Thin-walled, built-up structures and frameworks are included in the sample. Details of the system organization and characteristics of the component programs are given.

  13. Integrated Controls-Structures Design Methodology for Flexible Spacecraft

    NASA Technical Reports Server (NTRS)

    Maghami, P. G.; Joshi, S. M.; Price, D. B.

    1995-01-01

    This paper proposes an approach for the design of flexible spacecraft, wherein the structural design and the control system design are performed simultaneously. The integrated design problem is posed as an optimization problem in which both the structural parameters and the control system parameters constitute the design variables, which are used to optimize a common objective function, thereby resulting in an optimal overall design. The approach is demonstrated by application to the integrated design of a geostationary platform, and to a ground-based flexible structure experiment. The numerical results obtained indicate that the integrated design approach generally yields spacecraft designs that are substantially superior to the conventional approach, wherein the structural design and control design are performed sequentially.

  14. Quantum Adiabatic Optimization and Combinatorial Landscapes

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Knysh, S.; Morris, R. D.

    2003-01-01

    In this paper we analyze the performance of the Quantum Adiabatic Evolution (QAE) algorithm on a variant of the Satisfiability problem for an ensemble of random graphs parametrized by the ratio of clauses to variables, gamma = M / N. We introduce a set of macroscopic parameters (landscapes) and put forward an ansatz of universality for random bit flips. We then formulate the problem of finding the smallest eigenvalue and the excitation gap as a statistical mechanics problem. We use the so-called annealing approximation with a refinement that a finite set of macroscopic variables (versus only energy) is used, and are able to show the existence of a dynamic threshold gamma = gamma_d, beyond which QAE should take an exponentially long time to find a solution. We compare the results for extended and simplified sets of landscapes and provide numerical evidence in support of our universality ansatz.

  15. Matching experimental and three dimensional numerical models for structural vibration problems with uncertainties

    NASA Astrophysics Data System (ADS)

    Langer, P.; Sepahvand, K.; Guist, C.; Bär, J.; Peplow, A.; Marburg, S.

    2018-03-01

    The simulation model which examines the dynamic behavior of real structures needs to address the impact of uncertainty in both geometry and material parameters. This article investigates three-dimensional finite element models for structural dynamics problems with respect to both model and parameter uncertainties. The parameter uncertainties are determined via laboratory measurements on several beam-like samples. The parameters are then considered as random variables to the finite element model for exploring the uncertainty effects on the quality of the model outputs, i.e. natural frequencies. The accuracy of the output predictions from the model is compared with the experimental results. To this end, the non-contact experimental modal analysis is conducted to identify the natural frequency of the samples. The results show a good agreement compared with experimental data. Furthermore, it is demonstrated that geometrical uncertainties have more influence on the natural frequencies compared to material parameters and material uncertainties are about two times higher than geometrical uncertainties. This gives valuable insights for improving the finite element model due to various parameter ranges required in a modeling process involving uncertainty.
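
    A minimal sketch, under stated assumptions, of propagating parameter uncertainty to a natural frequency: geometric and material properties of a cantilever beam are sampled as random variables and the scatter of the first bending frequency is computed by Monte Carlo. The nominal dimensions, scatter levels and aluminium-like material data are hypothetical, and the closed-form beam formula stands in for the finite element model.

      # Sketch: Monte Carlo propagation of geometric and material uncertainty
      # to the first bending natural frequency of a cantilever beam.
      import numpy as np

      rng = np.random.default_rng(6)
      n = 50_000

      L = rng.normal(0.300, 0.001, n)          # length [m], geometric uncertainty
      h = rng.normal(0.010, 0.0001, n)         # thickness [m]
      b = 0.020                                # width [m], taken as exact
      E = rng.normal(70e9, 2e9, n)             # Young's modulus [Pa], material uncertainty
      rho = rng.normal(2700.0, 50.0, n)        # density [kg/m^3]

      I = b * h ** 3 / 12.0                    # second moment of area
      A = b * h
      f1 = (1.8751 ** 2 / (2 * np.pi)) * np.sqrt(E * I / (rho * A * L ** 4))

      print(f"first natural frequency: {f1.mean():.1f} Hz "
            f"+/- {f1.std():.1f} Hz (1 sigma)")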

  16. Using Simulation Technique to overcome the multi-collinearity problem for estimating fuzzy linear regression parameters.

    NASA Astrophysics Data System (ADS)

    Mansoor Gorgees, Hazim; Hilal, Mariam Mohammed

    2018-05-01

    Fatigue cracking is one of the common types of pavement distresses and is an indicator of structural failure; cracks allow moisture infiltration, roughness, may further deteriorate to a pothole. Some causes of pavement deterioration are: traffic loading; environment influences; drainage deficiencies; materials quality problems; construction deficiencies and external contributors. Many researchers have made models that contain many variables like asphalt content, asphalt viscosity, fatigue life, stiffness of asphalt mixture, temperature and other parameters that affect the fatigue life. For this situation, a fuzzy linear regression model was employed and analyzed by using the traditional methods and our proposed method in order to overcome the multi-collinearity problem. The total spread error was used as a criterion to compare the performance of the studied methods. Simulation program was used to obtain the required results.

  17. Investigation of heat and mass transfer under the influence of variable diffusion coefficient and thermal conductivity

    NASA Astrophysics Data System (ADS)

    Mohyud Din, S. T.; Zubair, T.; Usman, M.; Hamid, M.; Rafiq, M.; Mohsin, S.

    2018-04-01

    This study is devoted to analyzing the influence of a variable diffusion coefficient and variable thermal conductivity on heat and mass transfer in Casson fluid flow. The behavior of the concentration and temperature profiles in the presence of Joule heating and viscous dissipation is also studied. The dimensionless conservation laws with suitable boundary conditions are solved via the Modified Gegenbauer Wavelets Method (MGWM). It is observed that an increase in the Casson fluid parameter (β) and the parameter ɛ enhances the Nusselt number. Moreover, the Nusselt number of a Newtonian fluid is less than that of the Casson fluid. Mass transport is enhanced more by a solute with a variable diffusion coefficient than by one with a constant diffusion coefficient. A detailed analysis of the results is presented. The obtained results, error estimates, and convergence analysis confirm the credibility of the proposed algorithm. It is concluded that MGWM is an appropriate tool for tackling nonlinear physical models and may be extended to other nonlinear problems of diverse physical nature.

  18. Structural Analysis of Covariance and Correlation Matrices.

    ERIC Educational Resources Information Center

    Joreskog, Karl G.

    1978-01-01

    A general approach to analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest. The statistical problems of identification, estimation and testing of such covariance or correlation structures are discussed.…

  19. Propagation of variability in railway dynamic simulations: application to virtual homologation

    NASA Astrophysics Data System (ADS)

    Funfschilling, Christine; Perrin, Guillaume; Kraft, Sönke

    2012-01-01

    Railway dynamic simulations are increasingly used to predict and analyse the behaviour of the vehicle and of the track during their whole life cycle. Up to now, however, no simulation has been used in the certification procedure, even though the expected benefits are important: cheaper and shorter procedures, more objectivity, better knowledge of the behaviour around critical situations. Deterministic simulations are nevertheless too poor to represent the whole physics of the track/vehicle system, which contains several sources of variability: variability of the mechanical parameters of a train among a class of vehicles (mass, stiffness and damping of different suspensions), variability of the contact parameters (friction coefficient, wheel and rail profiles) and variability of the track design and quality. This variability plays an important role in safety, in ride quality, and thus in the certification criteria. When using simulation for certification purposes, it therefore seems crucial to take into account the variability of the different inputs. The main goal of this article is thus to propose a method to introduce variability in railway dynamics. A four-step method is described, namely the definition of the stochastic problem, the modelling of the input variability, the propagation, and the analysis of the output. Each step is illustrated with railway examples.

  20. Annual Research Review: Reaction time variability in ADHD and autism spectrum disorders: measurement and mechanisms of a proposed trans-diagnostic phenotype

    PubMed Central

    Karalunas, Sarah L.; Geurts, Hilde M.; Konrad, Kerstin; Bender, Stephan; Nigg, Joel T.

    2014-01-01

    Background Intraindividual variability in reaction time (RT) has received extensive discussion as an indicator of cognitive performance, a putative intermediate phenotype of many clinical disorders, and a possible trans-diagnostic phenotype that may elucidate shared risk factors for mechanisms of psychiatric illnesses. Scope and Methodology Using the examples of attention deficit hyperactivity disorder (ADHD) and autism spectrum disorders (ASD), we discuss RT variability. We first present a new meta-analysis of RT variability in ASD with and without comorbid ADHD. We then discuss potential mechanisms that may account for RT variability and statistical models that disentangle the cognitive processes affecting RTs. We then report a second meta-analysis comparing ADHD and non-ADHD children on diffusion model parameters. We consider how findings inform the search for neural correlates of RT variability. Findings Results suggest that RT variability is increased in ASD only when children with comorbid ADHD are included in the sample. Furthermore, RT variability in ADHD is explained by moderate to large increases (d = 0.63–0.99) in the ex-Gaussian parameter τ and the diffusion parameter drift rate, as well as by smaller differences (d = 0.32) in the diffusion parameter of nondecision time. The former may suggest problems in state regulation or arousal and difficulty detecting signal from noise, whereas the latter may reflect contributions from deficits in motor organization or output. The neuroimaging literature converges with this multicomponent interpretation and also highlights the role of top-down control circuits. Conclusion We underscore the importance of considering the interactions between top-down control, state regulation (e.g. arousal), and motor preparation when interpreting RT variability and conclude that decomposition of the RT signal provides superior interpretive power and suggests mechanisms convergent with those implicated using other cognitive paradigms. We conclude with specific recommendations for the field for next steps in the study of RT variability in neurodevelopmental disorders. PMID:24628425

  1. Variability estimation of urban wastewater biodegradable fractions by respirometry.

    PubMed

    Lagarde, Fabienne; Tusseau-Vuillemin, Marie-Hélène; Lessard, Paul; Héduit, Alain; Dutrop, François; Mouchel, Jean-Marie

    2005-11-01

    This paper presents a methodology for assessing the variability of biodegradable chemical oxygen demand (COD) fractions in urban wastewaters. Thirteen raw wastewater samples from combined and separate sewers feeding the same plant were characterised, and two optimisation procedures were applied in order to evaluate the variability in biodegradable fractions and related kinetic parameters. Through an overall optimisation on all the samples, a unique kinetic parameter set was obtained with a three-substrate model including an adsorption stage. This method required powerful numerical treatment, but improved the identifiability problem compared to the usual sample-to-sample optimisation. The results showed that the fractionation of samples collected in the combined sewer was much more variable (standard deviation of 70% of the mean values) than the fractionation of the separate sewer samples, and the slowly biodegradable COD fraction was the most significant fraction (45% of the total COD on average). Because these samples were collected under various rain conditions, the standard deviations obtained here on the combined sewer biodegradable fractions could be used as a first estimation of the variability of this type of sewer system.

  2. Solution of the weighted symmetric similarity transformations based on quaternions

    NASA Astrophysics Data System (ADS)

    Mercan, H.; Akyilmaz, O.; Aydin, C.

    2017-12-01

    A new method through Gauss-Helmert model of adjustment is presented for the solution of the similarity transformations, either 3D or 2D, in the frame of errors-in-variables (EIV) model. EIV model assumes that all the variables in the mathematical model are contaminated by random errors. Total least squares estimation technique may be used to solve the EIV model. Accounting for the heteroscedastic uncertainty both in the target and the source coordinates, that is the more common and general case in practice, leads to a more realistic estimation of the transformation parameters. The presented algorithm can handle the heteroscedastic transformation problems, i.e., positions of the both target and the source points may have full covariance matrices. Therefore, there is no limitation such as the isotropic or the homogenous accuracy for the reference point coordinates. The developed algorithm takes the advantage of the quaternion definition which uniquely represents a 3D rotation matrix. The transformation parameters: scale, translations, and the quaternion (so that the rotation matrix) along with their covariances, are iteratively estimated with rapid convergence. Moreover, prior least squares (LS) estimation of the unknown transformation parameters is not required to start the iterations. We also show that the developed method can also be used to estimate the 2D similarity transformation parameters by simply treating the problem as a 3D transformation problem with zero (0) values assigned for the z-components of both target and source points. The efficiency of the new algorithm is presented with the numerical examples and comparisons with the results of the previous studies which use the same data set. Simulation experiments for the evaluation and comparison of the proposed and the conventional weighted LS (WLS) method is also presented.
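
    The sketch below shows the quaternion parametrisation on which the method relies: a unit quaternion is mapped to a rotation matrix and a similarity transformation y = t + s R(q) x is applied to a source point. The numerical values are hypothetical; the actual estimation of q, s and t with full covariance matrices is performed iteratively in the Gauss-Helmert adjustment and is not reproduced here.

      # Sketch: unit quaternion -> rotation matrix, applied in a 3D similarity
      # transformation y = t + s * R(q) * x.
      import numpy as np

      def quat_to_rot(q):
          w, x, y, z = q / np.linalg.norm(q)        # enforce a unit quaternion
          return np.array([
              [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
              [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
              [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
          ])

      q = np.array([0.96, 0.10, 0.20, 0.15])        # quaternion (hypothetical)
      s = 1.00002                                   # scale
      t = np.array([10.0, -5.0, 2.0])               # translation [m]

      x_src = np.array([100.0, 200.0, 50.0])        # source coordinates
      y_tgt = t + s * quat_to_rot(q) @ x_src
      print("transformed point:", np.round(y_tgt, 4))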

  3. Regression dilution in the proportional hazards model.

    PubMed

    Hughes, M D

    1993-12-01

    The problem of regression dilution arising from covariate measurement error is investigated for survival data using the proportional hazards model. The naive approach to parameter estimation is considered whereby observed covariate values are used, inappropriately, in the usual analysis instead of the underlying covariate values. A relationship between the estimated parameter in large samples and the true parameter is obtained showing that the bias does not depend on the form of the baseline hazard function when the errors are normally distributed. With high censorship, adjustment of the naive estimate by the factor 1 + lambda, where lambda is the ratio of within-person variability about an underlying mean level to the variability of these levels in the population sampled, removes the bias. As censorship increases, the adjustment required increases and when there is no censorship is markedly higher than 1 + lambda and depends also on the true risk relationship.
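
    A small worked sketch of the high-censorship adjustment described above, with illustrative numbers: the naive coefficient is multiplied by (1 + lambda), where lambda is the ratio of within-person measurement variance to the between-person variance of the underlying covariate levels.

      # Sketch: regression-dilution adjustment of a naive coefficient (illustrative values).
      within_var = 0.04      # variability of repeat measurements about a person's mean
      between_var = 0.20     # variability of underlying mean levels in the population
      lam = within_var / between_var

      beta_naive = 0.35      # coefficient estimated from observed (error-prone) covariates
      beta_adjusted = beta_naive * (1 + lam)
      print(f"lambda = {lam:.2f}, adjusted coefficient = {beta_adjusted:.3f}")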

  4. Numerical Simulation for Magneto Nanofluid Flow Through a Porous Space with Melting Heat Transfer

    NASA Astrophysics Data System (ADS)

    Hayat, T.; Shah, Faisal; Alsaedi, A.; Waqas, M.

    2018-02-01

    Melting heat transfer and non-Darcy porous medium effects in MHD stagnation point flow toward a stretching surface of variable thickness are addressed. Brownian motion and thermophoresis are retained in the nanofluid modeling. A zero mass flux condition for the concentration at the surface is imposed. The resulting system of ordinary differential equations is analyzed numerically through a shooting technique. The effects of various physical variables on the velocity, temperature and concentration are studied graphically. The skin friction coefficient, local Nusselt number and Sherwood number are also addressed through tabulated values. The results described here illustrate that the velocity field is higher for a larger melting parameter, whereas the reverse situation is observed for the Hartmann number. Moreover, the influence of the thermophoresis parameter on temperature and concentration is found to be similar.
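
    As a minimal sketch of the shooting technique mentioned above, the code below solves a standard two-point boundary value problem (y'' = 1.5 y^2 with y(0) = 4, y(1) = 1), adjusting the unknown initial slope with a root finder until the far boundary condition is met; the nanofluid equations themselves are not reproduced.

      # Sketch: shooting method for a simple two-point boundary value problem.
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import brentq

      def rhs(x, u):
          y, dy = u
          return [dy, 1.5 * y ** 2]

      def boundary_miss(slope):
          sol = solve_ivp(rhs, (0.0, 1.0), [4.0, slope], rtol=1e-8)
          return sol.y[0, -1] - 1.0            # residual of the condition y(1) = 1

      slope = brentq(boundary_miss, -10.0, -5.0)   # bracket known to contain the root
      print("initial slope found by shooting:", round(slope, 4))   # approx -8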

  5. Reverse design and characteristic study of multi-range HMCVT

    NASA Astrophysics Data System (ADS)

    Zhu, Zhen; Chen, Long; Zeng, Falin

    2017-09-01

    Reducing fuel consumption and increasing transmission efficiency are among the key problems of agricultural machinery. Many promising technologies such as hydromechanical continuously variable transmissions (HMCVT) are the focus of research and investment, but there is little technical documentation that describes the design principle and presents the design parameters. This paper presents the design idea and a characteristic study of an HMCVT in order to find a suitable scheme for high-horsepower tractors. The kinematics and dynamics of a large-horsepower tractor are analyzed and, according to the characteristic parameters, a hydro-mechanical continuously variable transmission is designed. Comparison of the experimental and theoretical curves of the stepless speed regulation of the transmission demonstrates the rationality of the design scheme.

  6. Numerical Simulation for Magneto Nanofluid Flow Through a Porous Space with Melting Heat Transfer

    NASA Astrophysics Data System (ADS)

    Hayat, T.; Shah, Faisal; Alsaedi, A.; Waqas, M.

    2018-05-01

    Melting heat transfer and non-Darcy porous medium effects in MHD stagnation point flow toward a stretching surface of variable thickness are addressed. Brownian motion and thermophoresis are retained in the nanofluid modeling. A zero mass flux condition for the concentration at the surface is imposed. The resulting system of ordinary differential equations is analyzed numerically through a shooting technique. The effects of various physical variables on the velocity, temperature and concentration are studied graphically. The skin friction coefficient, local Nusselt number and Sherwood number are also addressed through tabulated values. The results described here illustrate that the velocity field is higher for a larger melting parameter, whereas the reverse situation is observed for the Hartmann number. Moreover, the influence of the thermophoresis parameter on temperature and concentration is found to be similar.

  7. Effects of variable electrical conductivity and thermal conductivity on unsteady MHD free convection flow past an exponential accelerated inclined plate

    NASA Astrophysics Data System (ADS)

    Rana, B. M. Jewel; Ahmed, Rubel; Ahmmed, S. F.

    2017-06-01

    An analysis is carried out to investigate the effects of variable viscosity, thermal radiation, absorption of radiation and cross diffusion on flow past an inclined exponentially accelerated plate under the influence of variable heat and mass transfer. A set of suitable transformations is used to obtain the non-dimensional coupled governing equations. An explicit finite difference technique is used to obtain numerical solutions of the present problem. A stability and convergence analysis of the finite difference scheme is carried out for this problem. Compaq Visual Fortran 6.6a is used to calculate the numerical results. The effects of various physical parameters on the fluid velocity, temperature, concentration, skin friction coefficient, rate of heat transfer, rate of mass transfer, streamlines and isotherms of the flow field are presented graphically and discussed in detail.

  8. One shot methods for optimal control of distributed parameter systems 1: Finite dimensional control

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1991-01-01

    The efficient numerical treatment of optimal control problems governed by elliptic partial differential equations (PDEs) and systems of elliptic PDEs, where the control is finite dimensional is discussed. Distributed control as well as boundary control cases are discussed. The main characteristic of the new methods is that they are designed to solve the full optimization problem directly, rather than accelerating a descent method by an efficient multigrid solver for the equations involved. The methods use the adjoint state in order to achieve efficient smoother and a robust coarsening strategy. The main idea is the treatment of the control variables on appropriate scales, i.e., control variables that correspond to smooth functions are solved for on coarse grids depending on the smoothness of these functions. Solution of the control problems is achieved with the cost of solving the constraint equations about two to three times (by a multigrid solver). Numerical examples demonstrate the effectiveness of the method proposed in distributed control case, pointwise control and boundary control problems.

  9. The use of auxiliary variables in capture-recapture and removal experiments

    USGS Publications Warehouse

    Pollock, K.H.; Hines, J.E.; Nichols, J.D.

    1984-01-01

    The dependence of animal capture probabilities on auxiliary variables is an important practical problem which has not been considered in the development of estimation procedures for capture-recapture and removal experiments. In this paper the linear logistic binary regression model is used to relate the probability of capture to continuous auxiliary variables. The auxiliary variables could be environmental quantities such as air or water temperature, or characteristics of individual animals, such as body length or weight. Maximum likelihood estimators of the population parameters are considered for a variety of models which all assume a closed population. Testing between models is also considered. The models can also be used when one auxiliary variable is a measure of the effort expended in obtaining the sample.
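
    A minimal sketch of the linear logistic link described above: capture indicators are simulated with a probability that depends on a hypothetical continuous covariate (body weight), and the link coefficients are recovered with a logistic fit for a single occasion. The full capture-recapture likelihood, which also estimates population size over several occasions, is not reproduced here.

      # Hypothetical illustration: capture probability follows a linear logistic
      # model in a continuous auxiliary variable (body weight).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n_animals = 500
      weight = rng.normal(30.0, 5.0, n_animals)                  # auxiliary variable
      p_capture = 1.0 / (1.0 + np.exp(-(-4.0 + 0.12 * weight)))  # logistic link
      captured = rng.binomial(1, p_capture)                      # one trapping occasion

      fit = sm.Logit(captured, sm.add_constant(weight)).fit(disp=0)
      print("estimated intercept and slope:", np.round(fit.params, 3))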

  10. A program for identification of linear systems

    NASA Technical Reports Server (NTRS)

    Buell, J.; Kalaba, R.; Ruspini, E.; Yakush, A.

    1971-01-01

    A program has been written for the identification of parameters in certain linear systems. These systems appear in biomedical problems, particularly in compartmental models of pharmacokinetics. The method presented here assumes that some of the state variables are regularly modified by jump conditions. This simulates administration of drugs following some prescribed drug regime. Parameters are identified by a least-square fit of the linear differential system to a set of experimental observations. The method is especially suited when the interval of observation of the system is very long.

  11. On determining important aspects of mathematical models: Application to problems in physics and chemistry

    NASA Technical Reports Server (NTRS)

    Rabitz, Herschel

    1987-01-01

    The use of parametric and functional gradient sensitivity analysis techniques is considered for models described by partial differential equations. By interchanging appropriate dependent and independent variables, questions of inverse sensitivity may be addressed to gain insight into the inversion of observational data for parameter and function identification in mathematical models. It may be argued that the presence of a subset of dominantly strong coupled dependent variables will result in the overall system sensitivity behavior collapsing into a simple set of scaling and self similarity relations amongst elements of the entire matrix of sensitivity coefficients. These general tools are generic in nature, but herein their application to problems arising in selected areas of physics and chemistry is presented.

  12. How is good and poor sleep in older adults and college students related to daytime sleepiness, fatigue, and ability to concentrate?

    PubMed

    Alapin, I; Fichten, C S; Libman, E; Creti, L; Bailes, S; Wright, J

    2000-11-01

    We compared good sleepers with minimally and highly distressed poor sleepers on three measures of daytime functioning: self-reported fatigue, sleepiness, and cognitive inefficiency. In two samples (194 older adults, 136 college students), we tested the hypotheses that (1) poor sleepers experience more problems with daytime functioning than good sleepers, (2) highly distressed poor sleepers report greater impairment in functioning during the day than either good sleepers or minimally distressed poor sleepers, (3) daytime symptoms are more closely related to psychological adjustment and to psychologically laden sleep variables than to quantitative sleep parameters, and (4) daytime symptoms are more closely related to longer nocturnal wake times than to shorter sleep times. Results in both samples indicated that poor sleepers reported more daytime difficulties than good sleepers. While low- and high-distress poor sleepers did not differ on sleep parameters, highly distressed poor sleepers reported consistently more difficulty in functioning during the day and experienced greater tension and depression than minimally distressed poor sleepers. Severity of all three daytime problems was generally significantly and positively related to poor psychological adjustment, psychologically laden sleep variables, and, with the exception of sleepiness, to quantitative sleep parameters. Results are used to discuss discrepancies between experiential and quantitative measures of daytime functioning.

  13. Development of weighting value for ecodrainage implementation assessment criteria

    NASA Astrophysics Data System (ADS)

    Andajani, S.; Hidayat, D. P. A.; Yuwono, B. E.

    2018-01-01

    This research aims to generate a weighting value for each factor and to identify the most influential factors for assessing the implementation of the ecodrain concept, using factor loadings and Cronbach's alpha. Drainage problems, especially in urban areas, are becoming more complex and need to be handled as soon as possible. Flood and drought problems cannot be solved by the conventional drainage paradigm (draining runoff as quickly as possible to the nearest drainage area). A new drainage paradigm based on an environmental approach, called "ecodrain", can address both flood and drought problems. To obtain optimal results, ecodrain should be applied from the smallest (domestic) scale up to the largest (city) scale, which requires identifying drainage conditions from an environmental standpoint. This research operationalizes the ecodrain concept through guidelines consisting of parameters and assessment criteria, yielding 2 variables, 7 indicators, and 63 key factors drawn from previous research and related regulations. The research concludes that the most influential indicator for the technical management variable is the storage system, while for the non-technical management variable it is the government role.

  14. A numerical algorithm for optimal feedback gains in high dimensional linear quadratic regulator problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1991-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated to efficiently compute directly the feedback gains rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of these ideas is presented.
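
    For illustration, the following is a minimal sketch of a plain Newton-Kleinman iteration for the LQR gain on a small dense example; the paper's Chandrasekhar system and variable-acceleration Smith solves are not reproduced, and the Lyapunov equations are simply handed to a dense solver.

```python
# Sketch of a plain Newton-Kleinman iteration for the LQR gain K = R^{-1} B^T P.
# The hybrid Chandrasekhar/Smith machinery of the paper is not shown; a dense
# Lyapunov solver stands in for the accelerated Smith iterations.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def newton_kleinman(A, B, Q, R, K0, iters=20):
    """Iterate K_{j+1} = R^{-1} B^T P_j, where P_j solves the Lyapunov equation
    (A - B K_j)^T P + P (A - B K_j) = -(Q + K_j^T R K_j).  K0 must be stabilizing."""
    K = K0
    for _ in range(iters):
        Acl = A - B @ K
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)
    return K, P

# Small illustrative system (A is stable, so K0 = 0 is stabilizing).
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K, P = newton_kleinman(A, B, Q, R, K0=np.zeros((1, 2)))
print("feedback gain K:", K)
```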

  15. A numerical algorithm for optimal feedback gains in high dimensional LQR problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1986-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated so as to efficiently compute directly the feedback gains rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of our ideas is presented.

  16. Fuzzy simulation in concurrent engineering

    NASA Technical Reports Server (NTRS)

    Kraslawski, A.; Nystrom, L.

    1992-01-01

    Concurrent engineering is becoming a very important practice in manufacturing. A problem in concurrent engineering is the uncertainty associated with the values of the input variables and operating conditions. The problem discussed in this paper concerns the simulation of processes where the raw materials and the operational parameters possess fuzzy characteristics. The processing of fuzzy input information is performed by the vertex method and the commercial simulation packages POLYMATH and GEMS. The examples are presented to illustrate the usefulness of the method in the simulation of chemical engineering processes.
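
    As a rough illustration of the vertex method mentioned above, the sketch below evaluates a hypothetical process model at every combination of interval endpoints obtained from one alpha-cut of the fuzzy inputs; the model and numbers are invented for illustration only.

```python
# Sketch of the vertex method at a single alpha-cut: every fuzzy input is
# represented by an interval, the (hypothetical) process model is evaluated
# at all 2^n combinations of interval endpoints, and the output interval is
# the min/max over those vertices.  Monotone models make this exact.
from itertools import product

def reactor_yield(temperature, feed_rate):
    """Hypothetical monotone process model used only for illustration."""
    return 0.8 * temperature - 2.0 * feed_rate

intervals = {
    "temperature": (340.0, 360.0),   # alpha-cut of a fuzzy temperature (K)
    "feed_rate": (1.0, 1.4),         # alpha-cut of a fuzzy feed rate (kg/s)
}

vertices = list(product(*intervals.values()))
values = [reactor_yield(T, F) for T, F in vertices]
print("output interval at this alpha-cut:", (min(values), max(values)))
```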

  17. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.

  18. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.

  19. Topology Synthesis of Structures Using Parameter Relaxation and Geometric Refinement

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.

    2007-01-01

    Typically, structural topology optimization problems undergo relaxation of certain design parameters to allow the existence of intermediate variable optimum topologies. Relaxation permits the use of a variety of gradient-based search techniques and has been shown to guarantee the existence of optimal solutions and eliminate mesh dependencies. This Technical Publication (TP) will demonstrate the application of relaxation to a control point discretization of the design workspace for the structural topology optimization process. The control point parameterization with subdivision has been offered as an alternative to the traditional method of discretized finite element design domain. The principle of relaxation demonstrates the increased utility of the control point parameterization. One of the significant results of the relaxation process offered in this TP is that direct manufacturability of the optimized design will be maintained without the need for designer intervention or translation. In addition, it will be shown that relaxation of certain parameters may extend the range of problems that can be addressed; e.g., in permitting limited out-of-plane motion to be included in a path generation problem.

  20. Measurement problem and local hidden variables with entangled photons

    NASA Astrophysics Data System (ADS)

    Muchowski, Eugen

    2017-12-01

    It is shown that there is no remote action with polarization measurements of photons in singlet state. A model is presented introducing a hidden parameter which determines the polarizer output. This model is able to explain the polarization measurement results with entangled photons. It is not ruled out by Bell's Theorem.

  1. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
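
    As an illustration of the parallel tempering idea described above (and not of any specific code from the abstract), the sketch below runs several Metropolis walkers at different temperatures on a toy bimodal density and occasionally swaps neighbouring chains using the Metropolis swap criterion; the target density and tuning constants are arbitrary choices.

```python
# Hedged sketch of parallel tempering on a toy bimodal density.  Several
# Metropolis walkers run at different "temperatures"; neighbouring chains
# occasionally exchange states with the Metropolis swap criterion.
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    # Two well-separated modes at -5 and +5.
    return np.logaddexp(-0.5 * (x + 5.0) ** 2, -0.5 * (x - 5.0) ** 2)

temps = np.array([1.0, 2.0, 4.0, 8.0])        # temperature ladder
x = rng.normal(size=temps.size)               # one walker per temperature
samples = []

for step in range(20000):
    # Within-chain Metropolis updates at each temperature (target ** (1/T)).
    for i, T in enumerate(temps):
        prop = x[i] + rng.normal(scale=1.0)
        if np.log(rng.random()) < (log_target(prop) - log_target(x[i])) / T:
            x[i] = prop
    # Occasional swap attempt between adjacent temperatures.
    if step % 10 == 0:
        i = rng.integers(temps.size - 1)
        delta = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
        if np.log(rng.random()) < delta:
            x[i], x[i + 1] = x[i + 1], x[i]
    samples.append(x[0])                      # keep only the T = 1 chain

print("fraction of T=1 samples in the right-hand mode:",
      np.mean(np.array(samples[5000:]) > 0))
```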

  2. Scalability of surrogate-assisted multi-objective optimization of antenna structures exploiting variable-fidelity electromagnetic simulation models

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2016-10-01

    Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.

  3. How the 2SLS/IV estimator can handle equality constraints in structural equation models: a system-of-equations approach.

    PubMed

    Nestler, Steffen

    2014-05-01

    Parameters in structural equation models are typically estimated using the maximum likelihood (ML) approach. Bollen (1996) proposed an alternative non-iterative, equation-by-equation estimator that uses instrumental variables. Although this two-stage least squares/instrumental variables (2SLS/IV) estimator has good statistical properties, one problem with its application is that parameter equality constraints cannot be imposed. This paper presents a mathematical solution to this problem that is based on an extension of the 2SLS/IV approach to a system of equations. We present an example in which our approach was used to examine strong longitudinal measurement invariance. We also investigated the new approach in a simulation study that compared it with ML in the examination of the equality of two latent regression coefficients and strong measurement invariance. Overall, the results show that the suggested approach is a useful extension of the original 2SLS/IV estimator and allows for the effective handling of equality constraints in structural equation models. © 2013 The British Psychological Society.
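
    For orientation, the following is a minimal sketch of the basic single-equation 2SLS/IV idea on simulated data: the endogenous regressor is projected onto the instruments, and the outcome is then regressed on that projection. The paper's system-of-equations extension with equality constraints is not reproduced, and the data-generating values are illustrative assumptions.

```python
# Minimal sketch of single-equation 2SLS/IV: stage 1 projects the endogenous
# regressor on the instruments, stage 2 regresses the outcome on the fitted
# values.  Data-generating values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
z = rng.normal(size=(n, 2))                                     # instruments
u = rng.normal(size=n)                                          # shared disturbance
x = z @ np.array([1.0, -0.5]) + 0.8 * u + rng.normal(size=n)    # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)                            # true coefficient = 2

def add_const(m):
    return np.column_stack([np.ones(len(m)), m])

Z = add_const(z)
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]                # stage 1
beta_2sls = np.linalg.lstsq(add_const(x_hat), y, rcond=None)[0] # stage 2
print("2SLS estimate of the x coefficient:", beta_2sls[1])
```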

  4. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method shall feature the minimal degree of subjectivism and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.

  5. Natural variability of biochemical biomarkers in the macro-zoobenthos: Dependence on life stage and environmental factors.

    PubMed

    Scarduelli, Lucia; Giacchini, Roberto; Parenti, Paolo; Migliorati, Sonia; Di Brisco, Agnese Maria; Vighi, Marco

    2017-11-01

    Biomarkers are widely used in ecotoxicology as indicators of exposure to toxicants. However, their ability to provide ecologically relevant information remains controversial. One of the major problems is understanding whether the measured responses are determined by stress factors or lie within the natural variability range. In a previous work, the natural variability of enzymatic levels in invertebrates sampled in pristine rivers was proven to be relevant across both space and time. In the present study, the experimental design was improved by considering different life stages of the selected taxa and by measuring more environmental parameters. The experimental design considered sampling sites in 2 different rivers, 8 sampling dates covering the whole seasonal cycle, 4 species from 3 different taxonomic groups (Plecoptera, Perla grandis; Ephemeroptera, Baetis alpinus and Epeorus alpicula; Trichoptera, Hydropsyche pellucidula), different life stages for each species, and 4 enzymes (acetylcholinesterase, glutathione S-transferase, alkaline phosphatase, and catalase). Biomarker levels were related to environmental (physicochemical) parameters to verify any kind of dependence. Data were statistically analyzed using hierarchical multilevel Bayesian models. Natural variability was found to be relevant across both space and time. The results of the present study proved that care should be taken when interpreting biomarker results. Further research is needed to better understand the dependence of the natural variability on environmental parameters. Environ Toxicol Chem 2017;36:3158-3167. © 2017 SETAC.

  6. Computing the structural influence matrix for biological systems.

    PubMed

    Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco

    2016-06-01

    We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.

  7. Parameter Estimation as a Problem in Statistical Thermodynamics.

    PubMed

    Earle, Keith A; Schneider, David J

    2011-03-14

    In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work[1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.

  8. Bell's theorem and the problem of decidability between the views of Einstein and Bohr.

    PubMed

    Hess, K; Philipp, W

    2001-12-04

    Einstein, Podolsky, and Rosen (EPR) have designed a gedanken experiment that suggested a theory that was more complete than quantum mechanics. The EPR design was later realized in various forms, with experimental results close to the quantum mechanical prediction. The experimental results by themselves have no bearing on the EPR claim that quantum mechanics must be incomplete nor on the existence of hidden parameters. However, the well known inequalities of Bell are based on the assumption that local hidden parameters exist and, when combined with conflicting experimental results, do appear to prove that local hidden parameters cannot exist. This fact leaves only instantaneous actions at a distance (called "spooky" by Einstein) to explain the experiments. The Bell inequalities are based on a mathematical model of the EPR experiments. They have no experimental confirmation, because they contradict the results of all EPR experiments. In addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions; for instance, he assumes that the hidden parameters are governed by a single probability measure independent of the analyzer settings. We argue that the mathematical model of Bell excludes a large set of local hidden variables and a large variety of probability densities. Our set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does permit derivation of the quantum result and is consistent with all known experiments.

  9. Independent contrasts and PGLS regression estimators are equivalent.

    PubMed

    Blomberg, Simon P; Lefevre, James G; Wells, Jessie A; Waterhouse, Mary

    2012-05-01

    We prove that the slope parameter of the ordinary least squares regression of phylogenetically independent contrasts (PICs) conducted through the origin is identical to the slope parameter of the method of generalized least squares (GLSs) regression under a Brownian motion model of evolution. This equivalence has several implications: 1. Understanding the structure of the linear model for GLS regression provides insight into when and why phylogeny is important in comparative studies. 2. The limitations of the PIC regression analysis are the same as the limitations of the GLS model. In particular, phylogenetic covariance applies only to the response variable in the regression and the explanatory variable should be regarded as fixed. Calculation of PICs for explanatory variables should be treated as a mathematical idiosyncrasy of the PIC regression algorithm. 3. Since the GLS estimator is the best linear unbiased estimator (BLUE), the slope parameter estimated using PICs is also BLUE. 4. If the slope is estimated using different branch lengths for the explanatory and response variables in the PIC algorithm, the estimator is no longer the BLUE, so this is not recommended. Finally, we discuss whether or not and how to accommodate phylogenetic covariance in regression analyses, particularly in relation to the problem of phylogenetic uncertainty. This discussion is from both frequentist and Bayesian perspectives.

  10. Analysis of a Two-Dimensional Thermal Cloaking Problem on the Basis of Optimization

    NASA Astrophysics Data System (ADS)

    Alekseev, G. V.

    2018-04-01

    For a two-dimensional model of thermal scattering, inverse problems arising in the development of tools for cloaking material bodies on the basis of a mixed thermal cloaking strategy are considered. By applying the optimization approach, these problems are reduced to optimization ones in which the role of controls is played by variable parameters of the medium occupying the cloaking shell and by the heat flux through a boundary segment of the basic domain. The solvability of the direct and optimization problems is proved, and an optimality system is derived. Based on its analysis, sufficient conditions on the input data are established that ensure the uniqueness and stability of optimal solutions.

  11. Hybrid Metaheuristics for Solving a Fuzzy Single Batch-Processing Machine Scheduling Problem

    PubMed Central

    Molla-Alizadeh-Zavardehi, S.; Tavakkoli-Moghaddam, R.; Lotfi, F. Hosseinzadeh

    2014-01-01

    This paper deals with a problem of minimizing total weighted tardiness of jobs in a real-world single batch-processing machine (SBPM) scheduling in the presence of fuzzy due date. In this paper, first a fuzzy mixed integer linear programming model is developed. Then, due to the complexity of the problem, which is NP-hard, we design two hybrid metaheuristics called GA-VNS and VNS-SA applying the advantages of genetic algorithm (GA), variable neighborhood search (VNS), and simulated annealing (SA) frameworks. Besides, we propose three fuzzy earliest due date heuristics to solve the given problem. Through computational experiments with several random test problems, a robust calibration is applied on the parameters. Finally, computational results on different-scale test problems are presented to compare the proposed algorithms. PMID:24883359

  12. Synchronization of a Josephson junction array in terms of global variables

    NASA Astrophysics Data System (ADS)

    Vlasov, Vladimir; Pikovsky, Arkady

    2013-08-01

    We consider an array of Josephson junctions with a common LCR load. Application of the Watanabe-Strogatz approach [Physica D 74, 197 (1994); doi:10.1016/0167-2789(94)90196-1] allows us to formulate the dynamics of the array via the global variables only. For identical junctions this is a finite set of equations, analysis of which reveals the regions of bistability of the synchronous and asynchronous states. For disordered arrays with distributed parameters of the junctions, the problem is formulated as an integro-differential equation for the global variables; here stability of the asynchronous states and the properties of the transition synchrony-asynchrony are established numerically.

  13. On parametric Gevrey asymptotics for some nonlinear initial value Cauchy problems

    NASA Astrophysics Data System (ADS)

    Lastra, A.; Malek, S.

    2015-11-01

    We study a nonlinear initial value Cauchy problem depending upon a complex perturbation parameter ɛ with vanishing initial data at complex time t = 0 and whose coefficients depend analytically on (ɛ, t) near the origin in C2 and are bounded holomorphic on some horizontal strip in C w.r.t. the space variable. This problem is assumed to be non-Kowalevskian in time t, therefore analytic solutions at t = 0 cannot be expected in general. Nevertheless, we are able to construct a family of actual holomorphic solutions defined on a common bounded open sector with vertex at 0 in time and on the given strip above in space, when the complex parameter ɛ belongs to a suitably chosen set of open bounded sectors whose union form a covering of some neighborhood Ω of 0 in C*. These solutions are achieved by means of Laplace and Fourier inverse transforms of some common ɛ-depending function on C × R, analytic near the origin and with exponential growth on some unbounded sectors with appropriate bisecting directions in the first variable and exponential decay in the second, when the perturbation parameter belongs to Ω. Moreover, these solutions satisfy the remarkable property that the difference between any two of them is exponentially flat for some integer order w.r.t. ɛ. With the help of the classical Ramis-Sibuya theorem, we obtain the existence of a formal series (generally divergent) in ɛ which is the common Gevrey asymptotic expansion of the built up actual solutions considered above.

  14. Theoretical regime diagrams for thermally driven flows in a beta-plane channel in the presence of variable gravity

    NASA Technical Reports Server (NTRS)

    Geisler, J. E.; Fowlis, W. W.

    1980-01-01

    The effect of a power law gravity field on baroclinic instability is examined, with a focus on the case of inverse fifth power gravity, since this is the power law produced when terrestrial gravity is simulated in spherical geometry by a dielectric force. Growth rates are obtained of unstable normal modes as a function of parameters of the problem by solving a second order differential equation numerically. It is concluded that over the range of parameter space explored, there is no significant change in the character of theoretical regime diagrams if the vertically averaged gravity is used as parameter.

  15. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
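
    For contrast with the RBSA innovation, the sketch below implements only the conventional single-trajectory SA baseline described in the abstract, on a toy one-dimensional objective; the branching structure, trust regions, and gradient fine-tuning are not reproduced, and the cooling and region-shrinking rates are arbitrary choices.

```python
# Sketch of the conventional single-trajectory simulated annealing baseline
# described above.  Objective, cooling schedule, and shrink rate are toy choices.
import numpy as np

rng = np.random.default_rng(4)

def objective(x):
    # Toy multimodal objective; global minimum at x = 0.
    return x**2 + 10.0 * np.sin(3.0 * x) ** 2

x = rng.uniform(-10, 10)           # random starting configuration
best = x
T = 5.0                            # annealing temperature
scale = 5.0                        # size of the region new configurations come from

for step in range(5000):
    prop = x + rng.normal(scale=scale)
    delta = objective(prop) - objective(x)
    # Always accept improvements; accept worse moves with Boltzmann probability.
    if delta < 0 or rng.random() < np.exp(-delta / T):
        x = prop
    if objective(x) < objective(best):
        best = x
    T *= 0.999                     # lower the temperature
    scale *= 0.9995                # shrink the selection region

print("best configuration found:", best, "objective:", objective(best))
```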

  16. VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA

    PubMed Central

    Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu

    2009-01-01

    We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation penalty (SCAD) and adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. Particularly, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial is presented to illustrate the proposed methodology. PMID:20336190

  17. Quaternion Regularization of the Equations of the Perturbed Spatial Restricted Three-Body Problem: I

    NASA Astrophysics Data System (ADS)

    Chelnokov, Yu. N.

    2017-11-01

    We develop a quaternion method for regularizing the differential equations of the perturbed spatial restricted three-body problem by using the Kustaanheimo-Stiefel variables, which is methodologically closely related to the quaternion method for regularizing the differential equations of perturbed spatial two-body problem, which was proposed by the author of the present paper. A survey of papers related to the regularization of the differential equations of the two- and three-body problems is given. The original Newtonian equations of perturbed spatial restricted three-body problem are considered, and the problem of their regularization is posed; the energy relations and the differential equations describing the variations in the energies of the system in the perturbed spatial restricted three-body problem are given, as well as the first integrals of the differential equations of the unperturbed spatial restricted circular three-body problem (Jacobi integrals); the equations of perturbed spatial restricted three-body problem written in terms of rotating coordinate systems whose angular motion is described by the rotation quaternions (Euler (Rodrigues-Hamilton) parameters) are considered; and the differential equations for angular momenta in the restricted three-body problem are given. Local regular quaternion differential equations of perturbed spatial restricted three-body problem in the Kustaanheimo-Stiefel variables, i.e., equations regular in a neighborhood of the first and second body of finite mass, are obtained. The equations are systems of nonlinear nonstationary eleventh-order differential equations. These equations employ, as additional dependent variables, the energy characteristics of motion of the body under study (a body of a negligibly small mass) and the time whose derivative with respect to a new independent variable is equal to the distance from the body of negligibly small mass to the first or second body of finite mass. The equations obtained in the paper permit developing regular methods for determining solutions, in analytical or numerical form, of problems difficult for classical methods, such as the motion of a body of negligibly small mass in a neighborhood of the other two bodies of finite masses.

  18. Analytic Approximations to the Free Boundary and Multi-dimensional Problems in Financial Derivatives Pricing

    NASA Astrophysics Data System (ADS)

    Lau, Chun Sing

    This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary condition. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in details, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising the accuracy. The multi-stage scheme further allows the approximate results to systematically converge to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread option. Since the final formula is in closed form, all the hedging parameters can also be derived in closed form. Numerical examples demonstrate that the pricing and hedging errors are in general less than 1% relative to the benchmark prices obtained by numerical integration or Monte Carlo simulation. By exploiting an explicit relationship between the option price and the underlying probability distribution, we further derive an approximate distribution function for the general basket-spread variable. It can be used to approximate the transition probability distribution of any linear combination of correlated GBMs. Finally, an implicit perturbation is applied to reduce the pricing errors by factors of up to 100. When compared against the existing methods, the basket-spread option formula coupled with the implicit perturbation turns out to be one of the most robust and accurate approximation methods.
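
    As context for the basket-spread generalization, the sketch below implements the two-asset Kirk (1995) spread-call approximation under the usual lognormal-forward assumptions; the parameter values are illustrative, and the thesis's multi-asset formula and implicit perturbation are not reproduced.

```python
# Sketch of the Kirk (1995) two-asset spread-call approximation that the
# thesis generalizes to basket-spread options: the asset F2 + K is treated as
# approximately lognormal.  All numerical values are illustrative.
import numpy as np
from scipy.stats import norm

def kirk_spread_call(F1, F2, K, sigma1, sigma2, rho, T, r):
    w = F2 / (F2 + K)
    sigma = np.sqrt(sigma1**2 - 2.0 * rho * sigma1 * sigma2 * w + (sigma2 * w) ** 2)
    d1 = (np.log(F1 / (F2 + K)) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return np.exp(-r * T) * (F1 * norm.cdf(d1) - (F2 + K) * norm.cdf(d2))

print(kirk_spread_call(F1=110.0, F2=100.0, K=5.0,
                       sigma1=0.3, sigma2=0.25, rho=0.6, T=1.0, r=0.02))
```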

  19. On Chaotic and Hyperchaotic Complex Nonlinear Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Mahmoud, Gamal M.

    Dynamical systems described by real and complex variables are currently one of the most popular areas of scientific research. These systems play an important role in several fields of physics, engineering, and computer sciences, for example, laser systems, control (or chaos suppression), secure communications, and information science. Dynamical basic properties, chaos (hyperchaos) synchronization, chaos control, and generating hyperchaotic behavior of these systems are briefly summarized. The main advantage of introducing complex variables is the reduction of phase space dimensions by a half. They are also used to describe and simulate the physics of detuned laser and thermal convection of liquid flows, where the electric field and the atomic polarization amplitudes are both complex. Clearly, if the variables of the system are complex the equations involve twice as many variables and control parameters, thus making it that much harder for a hostile agent to intercept and decipher the coded message. Chaotic and hyperchaotic complex systems are stated as examples. Finally there are many open problems in the study of chaotic and hyperchaotic complex nonlinear dynamical systems, which need further investigations. Some of these open problems are given.

  20. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  1. MODFLOW-2000, the U.S. Geological Survey Modular Ground-Water Model--Documentation of the SEAWAT-2000 Version with the Variable-Density Flow Process (VDF) and the Integrated MT3DMS Transport Process (IMT)

    USGS Publications Warehouse

    Langevin, Christian D.; Shoemaker, W. Barclay; Guo, Weixing

    2003-01-01

    SEAWAT-2000 is the latest release of the SEAWAT computer program for simulation of three-dimensional, variable-density, transient ground-water flow in porous media. SEAWAT-2000 was designed by combining a modified version of MODFLOW-2000 and MT3DMS into a single computer program. The code was developed using the MODFLOW-2000 concept of a process, which is defined as "part of the code that solves a fundamental equation by a specified numerical method." SEAWAT-2000 contains all of the processes distributed with MODFLOW-2000 and also includes the Variable-Density Flow Process (as an alternative to the constant-density Ground-Water Flow Process) and the Integrated MT3DMS Transport Process. Processes may be active or inactive, depending on simulation objectives; however, not all processes are compatible. For example, the Sensitivity and Parameter Estimation Processes are not compatible with the Variable-Density Flow and Integrated MT3DMS Transport Processes. The SEAWAT-2000 computer code was tested with the common variable-density benchmark problems and also with problems representing evaporation from a salt lake and rotation of immiscible fluids.

  2. A chaos wolf optimization algorithm with self-adaptive variable step-size

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To address the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step-size was proposed. The algorithm is based on the swarm intelligence of the wolf pack and fully simulates the predation behavior and prey-distribution habits of wolves. It possesses three intelligent behaviors: migration, summons, and siege. The "winner-take-all" competition rule and the "survival of the fittest" update mechanism are further characteristics of the algorithm. Moreover, it combines self-adaptive variable step-size search with chaos optimization. The CWOA was applied to parameter optimization of twelve typical, complex nonlinear functions, and the results were compared with those of many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm, and the leader wolf pack search algorithm. The results indicate that the CWOA possesses superior optimization ability, with advantages in optimization accuracy and convergence rate. Furthermore, it demonstrates high robustness and global searching ability.

  3. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 3: Illustrative test problems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.; Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  4. Stochastic Simulation Tool for Aerospace Structural Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F.; Moore, David F.

    2006-01-01

    Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a tool to interrogate their structural design based on their mathematical description of the design problem using finite element analysis methods. This tool leverages the analyst's prior investment in finite element model development of a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations that are to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical-user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is chosen to be a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly being able to identify design input variables whose variability causes the most influence in response output parameters.

  5. Magnetohydrodynamic dissipative flow across the slendering stretching sheet with temperature dependent variable viscosity

    NASA Astrophysics Data System (ADS)

    Jayachandra Babu, M.; Sandeep, N.; Ali, M. E.; Nuhait, Abdullah O.

    The boundary layer flow across a slendering stretching sheet has received considerable attention due to its abundant practical applications in nuclear reactor technology, acoustical components, and chemical and manufacturing processes such as polymer extrusion and machine design. With this in view, we analyze the two-dimensional MHD flow across a slendering stretching sheet in the presence of variable viscosity and viscous dissipation. The sheet is assumed to be convectively heated, and convective boundary conditions for heat and mass are employed. Similarity transformations are used to convert the governing nonlinear partial differential equations into a set of nonlinear ordinary differential equations, and a Runge-Kutta based shooting technique is used to solve the transformed equations. The friction factor and the local Nusselt and Sherwood numbers are computed numerically for the physical parameters involved in the problem. The viscosity variation parameter and the chemical reaction parameter have opposite effects on the concentration profile, while the heat and mass transfer Biot numbers enhance the temperature and concentration, respectively.

  6. Centrifugal compressor fault diagnosis based on qualitative simulation and thermal parameters

    NASA Astrophysics Data System (ADS)

    Lu, Yunsong; Wang, Fuli; Jia, Mingxing; Qi, Yuanchen

    2016-12-01

    This paper concerns fault diagnosis of a centrifugal compressor based on thermal parameters. An improved qualitative simulation (QSIM) based fault diagnosis method is proposed to diagnose the faults of a centrifugal compressor in a gas-steam combined-cycle power plant (CCPP). The qualitative models under normal and two faulty conditions have been built through the analysis of the principle of the centrifugal compressor. To solve the problem of qualitative description of the observations of system variables, a qualitative trend extraction algorithm is applied to extract the trends of the observations. For qualitative state matching, a sliding window based matching strategy which consists of variable operating range constraints and qualitative constraints is proposed. The matching results are used to determine which QSIM model is more consistent with the running state of the system. The correct diagnosis of two typical faults, seal leakage and valve sticking, in the centrifugal compressor has validated the targeted performance of the proposed method, showing the advantages of fault roots contained in thermal parameters.

  7. Variable neighborhood search to solve the vehicle routing problem for hazardous materials transportation.

    PubMed

    Bula, Gustavo Alfredo; Prodhon, Caroline; Gonzalez, Fabio Augusto; Afsar, H Murat; Velasco, Nubia

    2017-02-15

    This work focuses on the Heterogeneous Fleet Vehicle Routing Problem (HFVRP) in the context of hazardous materials (HazMat) transportation. The objective is to determine a set of routes that minimizes the total expected routing risk. This is a nonlinear function, and it depends on the vehicle load and the population exposed when an incident occurs. Thus, a piecewise linear approximation is used to estimate it. For solving the problem, a variant of the Variable Neighborhood Search (VNS) algorithm is employed. To improve its performance, a post-optimization procedure is implemented via a Set Partitioning (SP) problem. The SP is solved on a pool of routes obtained from executions of the local search procedure embedded in the VNS. The algorithm is tested on two sets of HFVRP instances from the literature with up to 100 nodes; the instances are modified to include vehicle and arc risk parameters. The results are competitive in terms of computational efficiency and solution quality, as attested by a comparison with a previously proposed Mixed Integer Linear Programming (MILP) formulation. Copyright © 2016 Elsevier B.V. All rights reserved.
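
    To make the VNS structure concrete, the sketch below shows a generic shake-and-local-search VNS skeleton on a toy single-route distance-minimization problem; the paper's risk objective, heterogeneous fleet handling, and set-partitioning post-optimization are not reproduced.

```python
# Generic variable neighborhood search skeleton on a toy single-route problem
# (minimize tour length over a permutation).  Locations and neighborhood
# choices are illustrative only.
import numpy as np

rng = np.random.default_rng(5)
pts = rng.uniform(0, 100, size=(15, 2))        # hypothetical customer locations

def cost(route):
    return sum(np.linalg.norm(pts[route[i]] - pts[route[i - 1]]) for i in range(len(route)))

def shake(route, k):
    """k-th neighborhood: k random swaps."""
    r = route.copy()
    for _ in range(k):
        i, j = rng.integers(len(r), size=2)
        r[i], r[j] = r[j], r[i]
    return r

def local_search(route):
    """First-improvement pairwise-swap local search."""
    improved = True
    while improved:
        improved = False
        for i in range(len(route)):
            for j in range(i + 1, len(route)):
                cand = route.copy()
                cand[i], cand[j] = cand[j], cand[i]
                if cost(cand) < cost(route):
                    route, improved = cand, True
    return route

best = list(range(len(pts)))
for _ in range(50):                             # VNS main loop
    k = 1
    while k <= 3:
        trial = local_search(shake(best, k))
        if cost(trial) < cost(best):
            best, k = trial, 1                  # move and restart neighborhoods
        else:
            k += 1
print("best tour length:", round(cost(best), 1))
```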

  8. Solution of Tikhonov's Motion-Separation Problem Using the Modified Newton-Kantorovich Theorem

    NASA Astrophysics Data System (ADS)

    Belolipetskii, A. A.; Ter-Krikorov, A. M.

    2018-02-01

    The paper presents a new way to prove the existence of a solution of the well-known Tikhonov's problem on systems of ordinary differential equations in which one part of the variables performs "fast" motions and the other part, "slow" motions. Tikhonov's problem has been the subject of a large number of works in connection with its applications to a wide range of mathematical models in natural science and economics. Only a short list of publications, which present the proof of the existence of solutions in this problem, is cited. The aim of the paper is to demonstrate the possibility of applying the modified Newton-Kantorovich theorem to prove the existence of a solution in Tikhonov's problem. The technique proposed can be used to prove the existence of solutions of other classes of problems with a small parameter.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, J.D.; Woan, G.

    Data from the Laser Interferometer Space Antenna (LISA) is expected to be dominated by frequency noise from its lasers. However, the noise from any one laser appears more than once in the data and there are combinations of the data that are insensitive to this noise. These combinations, called time delay interferometry (TDI) variables, have received careful study and point the way to how LISA data analysis may be performed. Here we approach the problem from the direction of statistical inference, and show that these variables are a direct consequence of a principal component analysis of the problem. We present a formal analysis for a simple LISA model and show that there are eigenvectors of the noise covariance matrix that do not depend on laser frequency noise. Importantly, these orthogonal basis vectors correspond to linear combinations of TDI variables. As a result we show that the likelihood function for source parameters using LISA data can be based on TDI combinations of the data without loss of information.

  10. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory

    USGS Publications Warehouse

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-01-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.
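
    As a concrete illustration of a two-point estimate of the kind discussed in this and the following record, the sketch below evaluates a hypothetical response model at the mean plus or minus one standard deviation of each of two uncertain inputs and forms equally weighted output moments; symmetric, uncorrelated inputs are assumed, and the model is a stand-in rather than a real groundwater solver.

```python
# Sketch of a Rosenblueth-style two-point estimate for a hypothetical
# response model with two uncertain inputs.  Each input is evaluated at
# mean +/- one standard deviation; the 2^n equally weighted model runs give
# approximate output moments.  Symmetric, uncorrelated inputs are assumed.
from itertools import product
import numpy as np

def head_model(T, S):
    """Illustrative stand-in for a groundwater response, not a real solver."""
    return 50.0 + 3.0 / T - 0.5 * np.log(S)

means = {"T": 100.0, "S": 1e-3}
cvs = {"T": 0.10, "S": 0.20}                 # coefficients of variation

points = {k: (means[k] * (1 - cvs[k]), means[k] * (1 + cvs[k])) for k in means}
runs = [head_model(T, S) for T, S in product(points["T"], points["S"])]

mean_h = np.mean(runs)                        # equal weights 2^-n
var_h = np.mean((np.array(runs) - mean_h) ** 2)
print("estimated head mean and std:", mean_h, np.sqrt(var_h))
```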

  11. An Efficient Deterministic-Probabilistic Approach to Modeling Regional Groundwater Flow: 1. Theory

    NASA Astrophysics Data System (ADS)

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-07-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which only requires prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is only generally valid for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method provided the number of uncertain variables is less than eight.

  12. A single-loop optimization method for reliability analysis with second order uncertainty

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2015-08-01

    Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
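
    The double-loop structure that the single-loop KKT formulation is designed to avoid can be sketched as follows: an inner reliability analysis nested inside an outer search over the interval-valued distribution parameter. The linear limit state and numerical values are assumptions chosen so that the inner FORM step has a closed form.

```python
# Sketch of the double loop: inner reliability analysis (closed-form FORM index
# for a linear limit state g = R - S, an assumption of this sketch) and outer
# interval analysis over an uncertain distribution parameter.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

sigma_R, mu_S, sigma_S = 5.0, 20.0, 4.0
mu_R_interval = (28.0, 32.0)          # interval-valued mean due to limited data

def reliability(mu_R):
    beta = (mu_R - mu_S) / np.hypot(sigma_R, sigma_S)   # inner loop: FORM index
    return norm.cdf(beta)

# Outer loop: extreme reliabilities over the interval parameter.
res_min = minimize_scalar(reliability, bounds=mu_R_interval, method="bounded")
res_max = minimize_scalar(lambda m: -reliability(m), bounds=mu_R_interval, method="bounded")
print("reliability in [%.4f, %.4f]" % (res_min.fun, -res_max.fun))
```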

  13. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    PubMed

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, regular common-value-shrinkage estimators, or estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
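
    The general benefit of shrinking unreliable per-variable variance estimates can be illustrated with a simple common-value shrinkage toward the pooled variance; this is only a baseline version of the idea and not the clustering-based MVR procedure, and the data dimensions and shrinkage weight are assumptions.

```python
# Common-value variance shrinkage for many variables with few samples
# (a simple baseline, not the MVR procedure of the paper).
import numpy as np

rng = np.random.default_rng(0)
p, n = 5000, 6                       # many variables, tiny sample size
true_sd = rng.uniform(0.5, 2.0, p)
X = rng.normal(0.0, true_sd[:, None], (p, n))

s2 = X.var(axis=1, ddof=1)           # unreliable per-variable variances
pooled = s2.mean()
lam = 0.5                            # shrinkage weight (tuning choice in this sketch)
s2_shrunk = lam * pooled + (1.0 - lam) * s2

mse = lambda est: np.mean((est - true_sd**2) ** 2)
print(mse(s2), mse(s2_shrunk))       # shrinkage typically reduces the MSE here
```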

  14. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data

    PubMed Central

    Dazard, Jean-Eudes; Rao, J. Sunil

    2012-01-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, regular common-value-shrinkage estimators, or estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950

  15. Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates

    NASA Astrophysics Data System (ADS)

    Todorovic, Andrijana; Plavsic, Jasna

    2015-04-01

    A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method, and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which determines the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by a year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts with the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or in flows; however, there is a statistically significant increasing trend in temperatures in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters. Correlation coefficients between optimised model parameters and total precipitation P, mean temperature T and mean flow Q are calculated to give an insight into parameter dependence on the hydrometeorological drivers. The results reveal high sensitivity of almost all model parameters to the calibration period. The highest variability is displayed by the refreezing coefficient, water holding capacity, and temperature gradient. The only statistically significant (decreasing) trend is detected in the evapotranspiration reduction threshold. Statistically significant correlation is detected between the precipitation gradient and precipitation depth, and between the time-area histogram base and flows. All other correlations are not statistically significant, implying that changes in optimised parameters cannot generally be linked to the changes in P, T or Q. As for the model performance, the model reproduces the observed runoff satisfactorily, though the runoff is slightly overestimated in wet periods. The Nash-Sutcliffe efficiency coefficient (NSE) ranges from 0.44 to 0.79. Higher NSE values are obtained over wetter periods, which is supported by a statistically significant correlation between NSE and flows. Overall, no systematic variations in parameters or in model performance are detected. Parameter variability may therefore rather be attributed to errors in data or inadequacies in the model structure. Further research is required to examine the impact of the calibration strategy or model structure on the variability of optimised parameters in time.

  16. Application of multivariable search techniques to the optimization of airfoils in a low speed nonlinear inviscid flow field

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Merz, A. W.

    1975-01-01

    Multivariable search techniques are applied to a particular class of airfoil optimization problems. These are the maximization of lift and the minimization of disturbance pressure magnitude in an inviscid nonlinear flow field. A variety of multivariable search techniques contained in an existing nonlinear optimization code, AESOP, are applied to this design problem. These techniques include elementary single-parameter perturbation methods, organized searches such as steepest-descent, quadratic, and Davidon methods, randomized procedures, and a generalized search acceleration technique. Airfoil design variables are seven in number and define perturbations to the profile of an existing NACA airfoil. The relative efficiency of the techniques is compared. It is shown that elementary one-parameter-at-a-time and random techniques compare favorably with organized searches in the class of problems considered. It is also shown that significant reductions in disturbance pressure magnitude can be made while retaining reasonable lift coefficient values at low free stream Mach numbers.

  17. Optimal design of earth-moving machine elements with cusp catastrophe theory application

    NASA Astrophysics Data System (ADS)

    Pitukhin, A. V.; Skobtsov, I. G.

    2017-10-01

    This paper deals with the optimal design problem solution for the operator of an earth-moving machine with a roll-over protective structure (ROPS) in terms of catastrophe theory. A brief description of catastrophe theory is presented, the cusp catastrophe is considered, and control parameters are viewed as Gaussian stochastic quantities in the first part of the paper. The statement of the optimal design problem is given in the second part of the paper. It includes the choice of the objective function and independent design variables, and the establishment of system limits. The objective function is determined as the mean total cost, which includes the initial cost and the cost of failure according to the cusp catastrophe probability. An algorithm of the random search method with interval reduction subject to side and functional constraints is given in the last part of the paper. The proposed solution approach can be applied to choose rational ROPS parameters, which will increase safety and reduce production and exploitation expenses.

  18. Inverse problems and computational cell metabolic models: a statistical approach

    NASA Astrophysics Data System (ADS)

    Calvetti, D.; Somersalo, E.

    2008-07-01

    In this article, we give an overview of the Bayesian modelling of metabolic systems at the cellular and subcellular level. The models are based on detailed description of key biochemical reactions occurring in tissue, which may in turn be compartmentalized into cytosol and mitochondria, and of transports between the compartments. The classical deterministic approach which models metabolic systems as dynamical systems with Michaelis-Menten kinetics, is replaced by a stochastic extension where the model parameters are interpreted as random variables with an appropriate probability density. The inverse problem of cell metabolism in this setting consists of estimating the density of the model parameters. After discussing some possible approaches to solving the problem, we address the issue of how to assess the reliability of the predictions of a stochastic model by proposing an output analysis in terms of model uncertainties. Visualization modalities for organizing the large amount of information provided by the Bayesian dynamic sensitivity analysis are also illustrated.

  19. Kalman Filtering with Inequality Constraints for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2003-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops two analytic methods of incorporating state variable inequality constraints in the Kalman filter. The first method is a general technique of using hard constraints to enforce inequalities on the state variable estimates. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The second method uses soft constraints to estimate state variables that are known to vary slowly with time. (Soft constraints are constraints that are required to be approximately satisfied rather than exactly satisfied.) The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results. The use of the algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate health parameters. The turbofan engine model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
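
    The hard-constraint variant can be sketched as a projection step: after the standard Kalman update, the estimate is moved to the nearest point of the feasible set in the metric of the inverse error covariance by solving a small quadratic program. The state dimension, covariance and constraint below are hypothetical.

```python
# Minimal sketch of the "hard constraint" idea: project the unconstrained
# Kalman estimate onto the inequality-constrained set, weighted by the
# inverse error covariance.
import numpy as np
from scipy.optimize import minimize

x_hat = np.array([1.2, -0.3])                # unconstrained Kalman estimate
P = np.array([[0.5, 0.1], [0.1, 0.3]])       # its error covariance
W = np.linalg.inv(P)

# Example constraint (hypothetical): both health parameters must be <= 1.0.
A = np.eye(2)
b = np.array([1.0, 1.0])

objective = lambda x: (x - x_hat) @ W @ (x - x_hat)
cons = {"type": "ineq", "fun": lambda x: b - A @ x}   # enforces A x <= b
res = minimize(objective, x_hat, constraints=[cons])
print(res.x)    # first component is pulled back to the boundary at 1.0
```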

  20. Ordinary differential equations with applications in molecular biology.

    PubMed

    Ilea, M; Turnea, M; Rotariu, M

    2012-01-01

    Differential equations are of basic importance in molecular biology mathematics because many biological laws and relations appear mathematically in the form of a differential equation. In this article we present some applications of mathematical models represented by ordinary differential equations in molecular biology. The vast majority of quantitative models in cell and molecular biology are formulated in terms of ordinary differential equations for the time evolution of concentrations of molecular species. Assuming that the diffusion in the cell is high enough to make the spatial distribution of molecules homogeneous, these equations describe systems with many participating molecules of each kind. We propose an original mathematical model with a small parameter for the biological phospholipid pathway. The equation system includes a small parameter epsilon, whose smallness is relative to the size of the solution domain. If we reduce the size of the solution region, the same small epsilon will result in a different condition number; the solution for a smaller region is less difficult. We introduce the mathematical technique known as the boundary function method for singularly perturbed systems. In such a system, the small parameter is an asymptotic variable, different from the independent variable, and in general the solutions of such equations exhibit multiscale phenomena. Singularly perturbed problems form a special class of problems containing a small parameter which may tend to zero. Many molecular biology processes can be quantitatively characterized by ordinary differential equations. Mathematical cell biology is a very active and fast growing interdisciplinary area in which mathematical concepts, techniques, and models are applied to a variety of problems in developmental medicine and bioengineering. Among the different modeling approaches, ordinary differential equations (ODEs) are particularly important and have led to significant advances. Ordinary differential equations are used to model biological processes on various levels, ranging from DNA molecules to phospholipid biosynthesis at the cellular level.

  1. Un-reduction in field theory.

    PubMed

    Arnaudon, Alexis; López, Marco Castrillón; Holm, Darryl D

    2018-01-01

    The un-reduction procedure introduced previously in the context of classical mechanics is extended to covariant field theory. The new covariant un-reduction procedure is applied to the problem of shape matching of images which depend on more than one independent variable (for instance, time and an additional labelling parameter). Other possibilities are also explored: nonlinear [Formula: see text]-models and the hyperbolic flows of curves.

  2. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

    The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are demanding in terms of the number of function evaluations required. Thus, the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as searched variables, the system geometry is modeled in terms of truncated series of orthogonal space-functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of searched variables, and has a potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
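
    The parameterization idea can be illustrated on a toy eigenvalue problem, shown below: the profile is expanded in a truncated cosine series and only the few amplitude coefficients are searched so that computed eigenfrequencies approach a target set. A simple string model and a local optimizer stand in for the paper's structural model and simulated annealing; all numbers are assumptions.

```python
# Toy version of the shape parameterization: optimize a few cosine-series
# coefficients of a density profile so that the lowest eigenfrequencies of a
# string-like model approach a target set (illustrative, not a marimba bar).
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

n = 200
x = np.linspace(0.0, 1.0, n)
h = 1.0 / (n - 1)
D2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2

def density(coeffs):
    # Profile = 1 + truncated cosine series; clipped to stay positive.
    rho = 1.0 + sum(c * np.cos((k + 1) * np.pi * x) for k, c in enumerate(coeffs))
    return np.clip(rho, 0.1, None)

def modal_freqs(coeffs, n_modes=3):
    # Generalized eigenproblem -u'' = lambda * rho(x) * u on a fixed-end FD grid.
    lam = eigh(-D2, np.diag(density(coeffs)), eigvals_only=True)
    return np.sqrt(np.abs(lam[:n_modes]))

target = np.array([1.15, 2.1, 3.1]) * np.pi        # desired modal frequencies (assumed)
error = lambda c: np.sum((modal_freqs(c) - target) ** 2)

res = minimize(error, np.zeros(4), method="Nelder-Mead")   # only 4 searched variables
print(res.fun, res.x)
```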

  4. Differential equations with applications in cancer diseases.

    PubMed

    Ilea, M; Turnea, M; Rotariu, M

    2013-01-01

    Mathematical modeling is a process by which a real world problem is described by a mathematical formulation. Cancer modeling is a highly challenging problem at the frontier of applied mathematics. A variety of modeling strategies have been developed, each focusing on one or more aspects of cancer. The vast majority of mathematical models in cancer biology are formulated in terms of differential equations. We propose an original mathematical model with a small parameter for the interactions between two cancer cell sub-populations, together with a mathematical model of a vascular tumor. We work on the assumption that the quiescent cells' nutrient consumption is long. One of the equation systems includes a small parameter epsilon, whose smallness is relative to the size of the solution domain. Using MATLAB simulations of the transition rate obtained under this assumption, we show a similar asymptotic behavior for two solutions of the perturbed problem. In this system, the small parameter is an asymptotic variable, different from the independent variable. The graphical output for a mathematical model of a vascular tumor shows the differences in the evolution of the tumor populations of proliferating, quiescent and necrotic cells. The nutrient concentration decreases sharply through the viable rim and tends to a constant level in the core due to the nearly complete necrosis in this region. Many mathematical models can be quantitatively characterized by ordinary differential equations or partial differential equations. The use of MATLAB in this article illustrates the important role of informatics in research on mathematical modeling. The study of avascular tumor growth is an exciting and important topic in cancer research and will profit considerably from theoretical input. We interpret these results as a call for permanent collaboration between mathematicians and medical oncologists.

  5. Parameter assessment for virtual Stackelberg game in aerodynamic shape optimization

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Xie, Fangfang; Zheng, Yao; Zhang, Jifa

    2018-05-01

    In this paper, parametric studies of virtual Stackelberg game (VSG) are conducted to assess the impact of critical parameters on aerodynamic shape optimization, including design cycle, split of design variables and role assignment. Typical numerical cases, including the inverse design and drag reduction design of airfoil, have been carried out. The numerical results confirm the effectiveness and efficiency of VSG. Furthermore, the most significant parameters are identified, e.g. the increase of design cycle can improve the optimization results but it will also add computational burden. These studies will maximize the productivity of the effort in aerodynamic optimization for more complicated engineering problems, such as the multi-element airfoil and wing-body configurations.

  6. Doubly stochastic radial basis function methods

    NASA Astrophysics Data System (ADS)

    Yang, Fenglian; Yan, Liang; Ling, Leevan

    2018-06-01

    We propose a doubly stochastic radial basis function (DSRBF) method for function recoveries. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distributions are determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our methods. The overhead cost for setting up the proposed DSRBF method is O(n^2) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method not only outperforms the constant-shape-parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
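
    For context, the deterministic baseline that the DSRBF method randomizes can be sketched as a constant shape parameter chosen by leave-one-out cross validation, using Rippa's closed-form LOOCV residuals; the test function and candidate range below are assumptions.

```python
# Constant-shape-parameter RBF baseline: pick the Gaussian shape parameter by
# leave-one-out cross validation (Rippa's formula). The DSRBF method of the
# paper instead samples the shape parameters from a fitted distribution.
import numpy as np

rng = np.random.default_rng(0)
xs = np.sort(rng.uniform(-1, 1, 25))
ys = np.tanh(4 * xs) + 0.01 * rng.standard_normal(25)   # assumed test function

def loocv_error(eps):
    # Gaussian RBF interpolation matrix and closed-form LOOCV residuals.
    A = np.exp(-(eps * (xs[:, None] - xs[None, :])) ** 2)
    Ainv = np.linalg.inv(A)
    coeffs = Ainv @ ys
    return np.linalg.norm(coeffs / np.diag(Ainv))

candidates = np.linspace(1.0, 10.0, 40)
best = candidates[np.argmin([loocv_error(e) for e in candidates])]
print("selected shape parameter:", best)
```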

  7. Optimal dynamic pricing and replenishment policy for perishable items with inventory-level-dependent demand

    NASA Astrophysics Data System (ADS)

    Lu, Lihao; Zhang, Jianxiong; Tang, Wansheng

    2016-04-01

    An inventory system for perishable items with limited replenishment capacity is introduced in this paper. The demand rate depends on the stock quantity displayed in the store as well as the sales price. With the goal of profit maximisation, an optimisation problem is addressed to seek the optimal joint dynamic pricing and replenishment policy, which is obtained by solving the optimisation problem with Pontryagin's maximum principle. A joint mixed policy, in which the sales price is a static decision variable and the replenishment rate remains a dynamic decision variable, is presented for comparison with the joint dynamic policy. Numerical results demonstrate the advantages of the joint dynamic policy, and further show the effects of different system parameters on the optimal joint dynamic policy and the maximal total profit.

  8. On Some Separated Algorithms for Separable Nonlinear Least Squares Problems.

    PubMed

    Gan, Min; Chen, C L Philip; Chen, Guang-Yong; Chen, Long

    2017-10-03

    For a class of nonlinear least squares problems, it is usually very beneficial to separate the variables into a linear and a nonlinear part and take full advantage of reliable linear least squares techniques. Consequently, the original problem is turned into a reduced problem which involves only the nonlinear parameters. We consider in this paper four separated algorithms for such problems. The first one is the variable projection (VP) algorithm with the full Jacobian matrix of Golub and Pereyra. The second and third ones are VP algorithms with the simplified Jacobian matrices proposed by Kaufman and by Ruano et al., respectively. The fourth one only uses the gradient of the reduced problem. Monte Carlo experiments are conducted to compare the performance of these four algorithms. From the results of the experiments, we find that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm; moreover, it may render the algorithm hard to converge; 2) the fourth algorithm performs moderately among the four; 3) the VP algorithm with the full Jacobian matrix performs more stably than the VP algorithm with Kaufman's simplified one; and 4) the combination of the VP algorithm and the Levenberg-Marquardt method is more effective than the combination of the VP algorithm and the Gauss-Newton method.
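
    The variable projection idea itself is compact: for fixed nonlinear parameters the optimal linear coefficients follow from a linear least squares solve, so the outer optimizer only sees the nonlinear parameters. The sketch below uses a numerical Jacobian rather than the Golub-Pereyra or Kaufman forms, and the two-exponential model and data are assumptions.

```python
# Variable projection for a separable model y ~ c1*exp(-a1*t) + c2*exp(-a2*t):
# the linear coefficients are eliminated by least squares and only the
# nonlinear parameters are optimized.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 80)
y = 2.0 * np.exp(-1.0 * t) + 0.5 * np.exp(-3.0 * t) + 0.01 * rng.standard_normal(t.size)

def reduced_residual(alpha):
    Phi = np.exp(-np.outer(t, alpha))              # basis depends only on nonlinear params
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # project out the linear params
    return Phi @ c - y

res = least_squares(reduced_residual, x0=[0.5, 5.0])  # optimize nonlinear params only
alpha = res.x
Phi = np.exp(-np.outer(t, alpha))
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("nonlinear:", alpha, "linear:", c)
```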

  9. HEATING 7.1 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, K.W.

    1991-07-01

    HEATING is a FORTRAN program designed to solve steady-state and/or transient heat conduction problems in one-, two-, or three-dimensional Cartesian, cylindrical, or spherical coordinates. A model may include multiple materials, and the thermal conductivity, density, and specific heat of each material may be both time- and temperature-dependent. The thermal conductivity may be anisotropic. Materials may undergo change of phase. Thermal properties of materials may be input or may be extracted from a material properties library. Heat generation rates may be dependent on time, temperature, and position, and boundary temperatures may be time- and position-dependent. The boundary conditions, which may be surface-to-boundary or surface-to-surface, may be specified temperatures or any combination of prescribed heat flux, forced convection, natural convection, and radiation. The boundary condition parameters may be time- and/or temperature-dependent. General graybody radiation problems may be modeled with user-defined factors for radiant exchange. The mesh spacing may be variable along each axis. HEATING is variably dimensioned and utilizes free-form input. Three steady-state solution techniques are available: point-successive-overrelaxation iterative method with extrapolation, direct solution (for one-dimensional or two-dimensional problems), and conjugate gradient. Transient problems may be solved using one of several finite-difference schemes: Crank-Nicolson implicit, Classical Implicit Procedure (CIP), Classical Explicit Procedure (CEP), or Levy explicit method (which for some circumstances allows a time step greater than the CEP stability criterion). The solution of the system of equations arising from the implicit techniques is accomplished by point-successive-overrelaxation iteration and includes procedures to estimate the optimum acceleration parameter.

  10. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.

  11. High-Resolution Source Parameter and Site Characteristics Using Near-Field Recordings - Decoding the Trade-off Problems Between Site and Source

    NASA Astrophysics Data System (ADS)

    Chen, X.; Abercrombie, R. E.; Pennington, C.

    2017-12-01

    Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, whose corner frequencies fall within ranges similar to typical kappa values, so direct spectrum fitting often leads to systematic biases that depend on corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km surrounding the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods in other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum; they are not biased by earthquake magnitudes. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimations.

  12. Some issues in the simulation of two-phase flows: The relative velocity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gräbel, J.; Hensel, S.; Ueberholz, P.

    In this paper we compare numerical approximations for solving the Riemann problem for a hyperbolic two-phase flow model in two-dimensional space. The model is based on mixture parameters of state where the relative velocity between the two-phase systems is taken into account. This relative velocity appears as a main discontinuous flow variable through the complete wave structure and cannot be recovered correctly by some numerical techniques when simulating the associated Riemann problem. Simulations are validated by comparing the results of the numerical calculation qualitatively with OpenFOAM software. Simulations also indicate that OpenFOAM is unable to resolve the relative velocity associated with the Riemann problem.

  13. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.

  14. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.

  15. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.

  16. Efficient geostatistical inversion of transient groundwater flow using preconditioned nonlinear conjugate gradients

    NASA Astrophysics Data System (ADS)

    Klein, Ole; Cirpka, Olaf A.; Bastian, Peter; Ippisch, Olaf

    2017-04-01

    In the geostatistical inverse problem of subsurface hydrology, continuous hydraulic parameter fields, in most cases hydraulic conductivity, are estimated from measurements of dependent variables, such as hydraulic heads, under the assumption that the parameter fields are autocorrelated random space functions. Upon discretization, the continuous fields become large parameter vectors with O(10^4-10^7) elements. While cokriging-like inversion methods have been shown to be efficient for highly resolved parameter fields when the number of measurements is small, they require the calculation of the sensitivity of each measurement with respect to all parameters, which may become prohibitive with large sets of measured data such as those arising from transient groundwater flow. We present a Preconditioned Conjugate Gradient method for the geostatistical inverse problem, in which a single adjoint equation needs to be solved to obtain the gradient of the objective function. Using the autocovariance matrix of the parameters as preconditioning matrix, expensive multiplications with its inverse can be avoided, and the number of iterations is significantly reduced. We use a randomized spectral decomposition of the posterior covariance matrix of the parameters to perform a linearized uncertainty quantification of the parameter estimate. The feasibility of the method is tested by virtual examples of head observations in steady-state and transient groundwater flow. These synthetic tests demonstrate that transient data can reduce both parameter uncertainty and time spent conducting experiments, while the presented methods are able to handle the resulting large number of measurements.

  17. Assimilating AmeriFlux Site Data into the Community Land Model with Carbon-Nitrogen Coupling via the Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Pettijohn, J. C.; Law, B. E.; Williams, M. D.; Stoeckli, R.; Thornton, P. E.; Hudiburg, T. M.; Thomas, C. K.; Martin, J.; Hill, T. C.

    2009-12-01

    The assimilation of terrestrial carbon, water and nutrient cycle measurements into land surface models of these processes is fundamental to improving our ability to predict how these ecosystems may respond to climate change. A combination of measurements and models, each with their own systematic biases, must be considered when constraining the nonlinear behavior of these coupled dynamics. As such, we use the sequential Ensemble Kalman Filter (EnKF) to assimilate eddy covariance (EC) and other site-level AmeriFlux measurements into the NCAR Community Land Model with Carbon-Nitrogen coupling (CLM-CN v3.5), run in single-column mode at a 30-minute time step, to improve estimates of relatively unconstrained model state variables and parameters. Specifically, we focus on a semi-arid ponderosa pine site (US-ME2) in the Pacific Northwest to identify the mechanisms by which this ecosystem responds to severe late summer drought. Our EnKF analysis includes water, carbon, energy and nitrogen state variables (e.g., 10 volumetric soil moisture levels (0-3.43 m), ponderosa pine and shrub evapotranspiration and net ecosystem exchange of carbon dioxide stocks and flux components, snow depth, etc.) and associated parameters (e.g., PFT-level rooting distribution parameters, maximum subsurface runoff coefficient, soil hydraulic conductivity decay factor, snow aging parameters, maximum canopy conductance, C:N ratios, etc.). The effectiveness of the EnKF in constraining state variables and associated parameters is sensitive to their relative frequencies, in that C-N state variables and parameters with long time constants require similarly long time series in the analysis. We apply the EnKF kernel perturbation routine to disrupt preliminary convergence of covariances, which has been found in recent studies to be a problem more characteristic of low frequency vegetation state variables and parameters than high frequency ones more heavily coupled with highly varying climate (e.g., shallow soil moisture, snow depth). Preliminary results demonstrate that the assimilation of EC and other available AmeriFlux site physical, chemical and biological data significantly helps quantify and reduce CLM-CN model uncertainties and helps to constrain ‘hidden’ states and parameters that are essential in the coupled water, carbon, energy and nutrient dynamics of these sites. Such site-level calibration of CLM-CN is an initial step in identifying model deficiencies and in forecasts of future ecosystem responses to climate change.

  18. Improved modeling of clinical data with kernel methods.

    PubMed

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems. Copyright © 2011 Elsevier B.V. All rights reserved.
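
    A sketch along the lines described, in which each variable contributes a range-normalized (or, for nominal variables, exact-match) similarity and the contributions are averaged, is given below; the exact published kernel may differ in details, and the example variables, ranges and patients are hypothetical.

```python
# Clinical-style kernel sketch: every variable contributes a similarity in [0, 1]
# (range-normalized for numeric variables, equality for nominal ones), and the
# contributions are averaged so that no variable dominates because of its scale.
import numpy as np

def clinical_kernel(x, z, ranges, nominal_mask):
    sims = np.where(nominal_mask,
                    (x == z).astype(float),                       # nominal: exact match
                    (ranges - np.abs(x - z)) / ranges)            # numeric: range-normalized
    return sims.mean()

# Hypothetical patients: [age, parity, tumour type (nominal code)]
ranges = np.array([60.0, 10.0, 1.0])          # observed ranges of the numeric variables
nominal = np.array([False, False, True])
p1 = np.array([34.0, 2.0, 1.0])
p2 = np.array([51.0, 3.0, 1.0])
print(clinical_kernel(p1, p2, ranges, nominal))
```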

  19. Representing general theoretical concepts in structural equation models: The role of composite variables

    USGS Publications Warehouse

    Grace, J.B.; Bollen, K.A.

    2008-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables is often of interest. © Springer Science+Business Media, LLC 2007.

  20. A methodology for airplane parameter estimation and confidence interval determination in nonlinear estimation problems. Ph.D. Thesis - George Washington Univ., Apr. 1985

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1986-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.

  1. Vortex conception of rotor and mutual effect of screw/propellers

    NASA Technical Reports Server (NTRS)

    Lepilkin, A. M.

    1986-01-01

    A vortex theory of screw propellers with circulation varying along the blade and with its azimuth is proposed; the problem is formulated and the circulation is expanded in a Fourier series. Equations are given for the induced velocities in space for screws, including those with an infinitely large number of blades, and for the expansion of the induced velocity by blade azimuth of a second screw. Multiparameter improper integrals are given as combinations of elliptic integrals and elementary functions, and it is shown how to reduce elliptic integrals of the third kind with a complex parameter to integrals with a real parameter.

  2. Using internal discharge data in a distributed conceptual model to reduce uncertainty in streamflow simulations

    NASA Astrophysics Data System (ADS)

    Guerrero, J.; Halldin, S.; Xu, C.; Lundin, L.

    2011-12-01

    Distributed hydrological models are important tools in water management as they account for the spatial variability of the hydrological data, as well as being able to produce spatially distributed outputs. They can directly incorporate and assess potential changes in the characteristics of our basins. A recognized problem for models in general is equifinality, which is only exacerbated for distributed models, which tend to have a large number of parameters. We need to deal with the fundamentally ill-posed nature of the problem that such models force us to face, i.e. a large number of parameters and very few variables that can be used to constrain them, often only the catchment discharge. There is a growing but still limited literature showing how the internal states of a distributed model can be used to calibrate/validate its predictions. In this paper, a distributed version of WASMOD, a conceptual rainfall-runoff model with only three parameters, combined with a routing algorithm based on the high-resolution HydroSHEDS data, was used to simulate the discharge in the Paso La Ceiba basin in Honduras. The parameter space was explored using Monte Carlo simulations, and the region of space containing the parameter sets considered behavioral according to two different criteria was delimited using the geometric concept of alpha-shapes. The discharge data from five internal sub-basins were used to aid in the calibration of the model and to answer the following questions: Can this information improve the simulations at the outlet of the catchment, or decrease their uncertainty? Also, after reducing the number of model parameters needing calibration through sensitivity analysis: Is it possible to relate them to basin characteristics? The analysis revealed that in most cases the internal discharge data can be used to reduce the uncertainty in the discharge at the outlet, albeit with little improvement in the overall simulation results.

  3. The Role of Heart-Rate Variability Parameters in Activity Recognition and Energy-Expenditure Estimation Using Wearable Sensors.

    PubMed

    Park, Heesu; Dong, Suh-Yeon; Lee, Miran; Youn, Inchan

    2017-07-24

    Human-activity recognition (HAR) and energy-expenditure (EE) estimation are major functions in the mobile healthcare system. Both functions have been investigated for a long time; however, several challenges remain unsolved, such as the confusion between activities and the recognition of energy-consuming activities involving little or no movement. To solve these problems, we propose a novel approach using an accelerometer and electrocardiogram (ECG). First, we collected a database of six activities (sitting, standing, walking, ascending, resting and running) of 13 voluntary participants. We compared the HAR performances of three models with respect to the input data type (with none, all, or some of the heart-rate variability (HRV) parameters). The best recognition performance was 96.35%, which was obtained with some selected HRV parameters. EE was also estimated for different choices of the input data type (with or without HRV parameters) and the model type (single and activity-specific). The best estimation performance was found in the case of the activity-specific model with HRV parameters. Our findings indicate that the use of human physiological data, obtained by wearable sensors, has a significant impact on both HAR and EE estimation, which are crucial functions in the mobile healthcare system.

  4. [Cardiovascular risk parameters, metabolic syndrome and alcohol consumption by workers].

    PubMed

    Vicente-Herrero, María Teófila; López González, Ángel Arturo; Ramírez-Iñiguez de la Torre, María Victoria; Capdevila-García, Luisa; Terradillos-García, María Jesús; Aguilar-Jiménez, Encarna

    2015-04-01

    The prevalence of alcohol consumption is high in the general population and generates specific problems in the workplace. The objective was to establish benchmarks between levels of alcohol consumption, cardiovascular risk variables and metabolic syndrome. A cross-sectional study of 7,644 workers of Spanish companies (2,828 females and 4,816 males) was performed. Alcohol consumption and its relation to cardiovascular risk were assessed using Framingham calibrated for the Spanish population (REGICOR) and SCORE, and metabolic syndrome was assessed using modified ATPIII and IDF criteria, the Castelli and atherogenic indices, and the triglycerides/HDL ratio. A multivariate analysis was performed using logistic regression, and odds ratios were estimated. Statistically significant differences were seen in the mean values of the different parameters studied and in the prevalence of metabolic syndrome, for both sexes and with modified ATPIII, IDF, REGICOR and SCORE. The sex, age, alcohol and smoking variables were associated with the cardiovascular risk parameters and metabolic syndrome; physical exercise and stress were associated with only some of them. Alcohol consumption affects all cardiovascular risk parameters and metabolic syndrome, with more negative results in high-level drinkers. Copyright © 2014 SEEN. Published by Elsevier España, S.L.U. All rights reserved.

  5. INM Integrated Noise Model Version 2. Programmer’s Guide

    DTIC Science & Technology

    1979-09-01

    cost, turnaround time, and system-dependent limitations. Section 3.2, Conversion Problems, tabulates items by description and category: (1) BLOCK DATA initialization (IBM Restricted); (2) Boolean operations (Differences); (3) Call statement parameters (Extensions); (4) Data initialization (IBM Restricted); (5) ENTRY (Differences); (6) EQUIVALENCE (Machine Dependent); (7) Format: A (CDC Extension); (8) Hollerith strings (IBM Restricted); (9) Hollerith variables (IBM Restricted); (10) Identifier names (CDC Extension).

  6. Regularization of the Perturbed Spatial Restricted Three-Body Problem by L-Transformations

    NASA Astrophysics Data System (ADS)

    Poleshchikov, S. M.

    2018-03-01

    Equations of motion for the perturbed circular restricted three-body problem have been regularized in canonical variables in a moving coordinate system. Two different L-matrices of the fourth order are used in the regularization. Conditions for generalized symplecticity of the constructed transform have been checked. In the unperturbed case, the regular equations have a polynomial structure. The regular equations have been numerically integrated using the Runge-Kutta-Fehlberg method. The results of numerical experiments are given for the Earth-Moon system parameters taking into account the perturbation of the Sun for different L-matrices.

  7. Stability analysis of a liquid fuel annular combustion chamber. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Mcdonald, G. H.

    1978-01-01

    High-frequency combustion instability problems in a liquid fuel annular combustion chamber are examined. A modified Galerkin method was used to produce a set of modal amplitude equations from the general nonlinear partial differential acoustic wave equation in order to analyze the instability problem. From these modal amplitude equations, the two-variable perturbation method was used to develop a set of approximate equations of a given order of magnitude. These equations were modeled to show the effects of velocity-sensitive combustion instabilities by evaluating the effects of certain parameters in the given set of equations.

  8. Using ridge regression in systematic pointing error corrections

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.

    1988-01-01

    A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
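
    A minimal sketch of the general idea (not the antenna pointing model itself): with nearly collinear regressors, ordinary least squares gives unstable coefficients, while ridge regression trades a small bias for a large reduction in variance. The data and the ridge parameter are illustrative assumptions.

```python
# Illustrative sketch of ridge regression stabilizing estimates under multicollinearity.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + 1e-3 * rng.normal(size=200)          # nearly collinear second regressor
X = np.column_stack([x1, x2])
y = 2.0 * x1 + 1.0 * x2 + 0.1 * rng.normal(size=200)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)             # alpha is the ridge (biasing) parameter
print("OLS coefficients:  ", ols.coef_)        # typically large and unstable
print("Ridge coefficients:", ridge.coef_)      # shrunk, far less sensitive to the collinearity
```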

  9. Stochastic analysis of uncertain thermal parameters for random thermal regime of frozen soil around a single freezing pipe

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhou, Guoqing; Wang, Jianzhou; Zhou, Lei

    2018-03-01

    The artificial ground freezing method (AGF) is widely used in civil and mining engineering, and the thermal regime of frozen soil around the freezing pipe affects the safety of design and construction. The thermal parameters can be truly random due to the heterogeneity of soil properties, which leads to randomness in the thermal regime of frozen soil around the freezing pipe. The purpose of this paper is to study the one-dimensional (1D) random thermal regime problem on the basis of a stochastic analysis model and the Monte Carlo (MC) method. Treating the uncertain thermal parameters of frozen soil as random variables, stochastic processes and random fields, the corresponding stochastic thermal regimes of frozen soil around a single freezing pipe are obtained and analyzed. Taking the variability of each stochastic parameter into account individually, the influence of each stochastic thermal parameter on the stochastic thermal regime is investigated. The results show that the mean temperatures of frozen soil around the single freezing pipe obtained with the three analogy methods are the same, while the standard deviations are different. The distributions of standard deviation differ greatly at different radial coordinate locations, and the larger standard deviations occur mainly in the phase-change area. The temperatures computed with the random variable and stochastic process methods differ greatly from the measured data, while those computed with the random field method agree well with the measured data. Each uncertain thermal parameter has a different effect on the standard deviation of the frozen soil temperature around the single freezing pipe. These results can provide a theoretical basis for the design and construction of AGF.

  10. Structural Damage Detection Using Virtual Passive Controllers

    NASA Technical Reports Server (NTRS)

    Lew, Jiann-Shiun; Juang, Jer-Nan

    2001-01-01

    This paper presents novel approaches for structural damage detection that use virtual passive controllers attached to structures, where the passive controllers are energy-dissipative devices and thus guarantee closed-loop stability. Using the identified parameters of various closed-loop systems addresses the problem that reliably identified parameters, such as the natural frequencies of the open-loop system, may not provide enough information for damage detection. Only a small number of sensors are required for the proposed approaches. The identified natural frequencies, which are generally much less sensitive to noise and more reliable than other identified modal parameters, are used for damage detection. Two damage detection techniques are presented. One technique is based on structures with direct output feedback controllers while the other uses second-order dynamic feedback controllers. A least-squares technique, based on the sensitivity of natural frequencies to damage variables, is used for accurately identifying the damage variables.

  11. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.

    2017-01-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119

  12. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    PubMed

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  13. [Hypothesis on the equilibrium point and variability of amplitude, speed and time of single-joint movement].

    PubMed

    Latash, M; Gottleib, G

    1990-01-01

    Problems of single-joint movement variability are analysed in the framework of the equilibrium-point hypothesis (the lambda-model). Control of the movements is described with three parameters related to movement amplitude, speed, and time. Three strategies emerge from this description. Only one of them is likely to lead to a Fitts'-type speed-accuracy trade-off. Experiments were performed to test one of the predictions of the model. Subjects performed identical sets of single-joint fast movements with open or closed eyes and somewhat different instructions. Movements performed with closed eyes were characterized by higher peak speeds and unchanged variability, in seeming violation of Fitts' law and in good correspondence with the model.

  14. Incorporating imperfect detection into joint models of communities: A response to Warton et al.

    USGS Publications Warehouse

    Beissinger, Steven R.; Iknayan, Kelly J.; Guillera-Arroita, Gurutzeta; Zipkin, Elise; Dorazio, Robert; Royle, Andy; Kery, Marc

    2016-01-01

    Warton et al. [1] advance community ecology by describing a statistical framework that can jointly model abundances (or distributions) across many taxa to quantify how community properties respond to environmental variables. This framework specifies the effects of both measured and unmeasured (latent) variables on the abundance (or occurrence) of each species. Latent variables are random effects that capture the effects of both missing environmental predictors and correlations in parameter values among different species. As presented in Warton et al., however, the joint modeling framework fails to account for the common problem of detection or measurement errors that always accompany field sampling of abundance or occupancy, and are well known to obscure species- and community-level inferences.

  15. What are the most important variables for Poaceae airborne pollen forecasting?

    PubMed

    Navares, Ricardo; Aznarte, José Luis

    2017-02-01

    In this paper, the problem of predicting future concentrations of airborne pollen is solved through a computational intelligence data-driven approach. The proposed method is able to identify the most important variables among those considered by other authors (mainly recent pollen concentrations and weather parameters), without any prior assumptions about the phenological relevance of the variables. Furthermore, an inferential procedure based on non-parametric hypothesis testing is presented to provide statistical evidence for the results, which are consistent with the literature and outperform previous proposals in terms of accuracy. The study is built upon Poaceae airborne pollen concentrations recorded in seven different locations across the Spanish province of Madrid. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Finite-size analysis of a continuous-variable quantum key distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leverrier, Anthony; Grosshans, Frederic; Grangier, Philippe

    2010-06-15

    The goal of this paper is to extend the framework of finite-size analysis recently developed for quantum key distribution to continuous-variable protocols. We do not solve this problem completely here, and we mainly consider the finite-size effects on the parameter estimation procedure. Despite the fact that some questions are left open, we are able to give an estimation of the secret key rate for protocols which do not contain a postselection procedure. As expected, these results are significantly more pessimistic than those obtained in the asymptotic regime. However, we show that recent continuous-variable protocols are able to provide fully secure secret keys in the finite-size scenario, over distances larger than 50 km.

  17. An Engineering Approach to the Variable Fluid Property Problem in Free Convection

    NASA Technical Reports Server (NTRS)

    Gregg, J. L.; Sparrow, E. M.

    1956-01-01

    An analysis is made for the variable fluid property problem for laminar free convection on an isothermal vertical flat plate. For a number of specific cases, solutions of the boundary layer equations appropriate to the variable property situation were carried out for gases and liquid mercury. Utilizing these findings, a simple and accurate shorthand procedure is presented for calculating free convection heat transfer under variable property conditions. This calculation method is well established in the heat transfer field. It involves the use of results which have been derived for constant property fluids, and of a set of rules (called reference temperatures) for extending these constant property results to variable property situations. For gases, the constant property heat transfer results are generalized to the variable property situation by replacing β (the expansion coefficient) by 1/T_∞ and evaluating the other properties at T_r = T_w - 0.38 (T_w - T_∞). For liquid mercury, the generalization may be accomplished by evaluating all the properties (including β) at this same T_r. It is worthwhile noting that for these fluids, the film temperature (with β = 1/T_∞ for gases) appears to serve as an adequate reference temperature for most applications. Results are also presented for boundary layer thickness and velocity parameters.
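
    For concreteness, here is a tiny numerical example of the reference-temperature rule quoted above; the wall and ambient temperatures are assumed values for illustration only.

```python
# Minimal sketch of the reference-temperature rule for a gas:
# properties evaluated at T_r = T_w - 0.38 (T_w - T_inf), with beta = 1 / T_inf.
T_w, T_inf = 600.0, 300.0                      # wall and ambient temperatures, K (example values)
T_r = T_w - 0.38 * (T_w - T_inf)
beta = 1.0 / T_inf
print(f"reference temperature T_r = {T_r:.1f} K, beta = {beta:.5f} 1/K")
```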

  18. Energy balance in Saturn's upper atmosphere: Joint Lyman-α airglow observations with HST and Cassini

    NASA Astrophysics Data System (ADS)

    Ben-Jaffel, L.; Baines, K. H.; Ballester, G.; Holberg, H. B.; Koskinen, T.; Moses, J. I.; West, R. A.; Yelle, R. V.

    2017-12-01

    We are conducting Hubble Space Telescope UV spectroscopy of Saturn's disk-reflected Lyman-α line (Ly-α) at the same time as Cassini airglow measurements. Saturn's Ly-α emission is composed of solar and interplanetary (IPH) Ly-α photons scattered by its upper atmosphere. The H I Ly-α line probes different upper atmospheric layers down to the homopause, providing an independent way to investigate the H I abundance and energy balance. However, this is a degenerate, multi-parameter, radiative-transfer problem that depends on: H I column density, scattering by thermal and superthermal hydrogen, time-variable solar and IPH sources, and instrument calibration. Our joint HST-Cassini campaign should break the degeneracy in the Saturn airglow problem. First, line-integrated fluxes simultaneously measured by HST/STIS (dayside) and Cassini/UVIS (nightside), avoiding solar variability, should resolve the solar and IPH sources. Second, high-resolution spectroscopy with STIS will reveal superthermal line broadening not accessible with a low-resolution spectrometer like UVIS. Third, a second visit observing the same limb of Saturn will cross-calibrate the instruments and, with the STIS linewidth information, will yield the H I abundance, a key photochemical parameter not measured by Cassini. Finally, the STIS latitudinal mapping of the Ly-α linewidth will be correlated with Cassini's latitudinal temperature profile of the thermosphere, to provide an independent constraint on the thermospheric energy budget, a fundamental outstanding problem for giant planets. Here, we report the first results from the HST-Cassini campaign.

  19. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    PubMed

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
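
    The behaviour described above is easy to reproduce in outline. The sketch below uses synthetic sparse weak-signal data and sklearn's LassoCV as a stand-in for the oracle methods, repeating 10-fold cross-validation with different fold assignments and recording how many variables are selected each time.

```python
# Sketch of cross-validation variability in variable selection (synthetic data).
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(2)
n, p = 100, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 0.3                                   # sparse, small signals
y = X @ beta + rng.normal(size=n)

counts = []
for seed in range(20):
    folds = KFold(n_splits=10, shuffle=True, random_state=seed)   # different fold splits
    model = LassoCV(cv=folds).fit(X, y)
    counts.append(int((model.coef_ != 0).sum()))
print("number of selected variables over 20 CV runs:", counts)
```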

  20. Computing multiple periodic solutions of nonlinear vibration problems using the harmonic balance method and Groebner bases

    NASA Astrophysics Data System (ADS)

    Grolet, Aurelien; Thouverez, Fabrice

    2015-02-01

    This paper is devoted to the study of vibration of mechanical systems with geometric nonlinearities. The harmonic balance method is used to derive systems of polynomial equations whose solutions give the frequency components of the possible steady states. Groebner basis methods are used for computing all solutions of the polynomial systems. This approach allows the complete system to be reduced to a unique polynomial equation in one variable that drives all solutions of the problem. In addition, in order to decrease the number of variables, we propose to first work on the undamped system, and to recover the solution of the damped system using a continuation on the damping parameter. The search for multiple solutions is illustrated on a simple system, where the influence of the retained number of harmonics is studied. Finally, the procedure is applied to a simple cyclic system and we give a representation of the multiple states versus frequency.
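
    As a toy illustration of the Groebner-basis step, the sketch below reduces a small polynomial system (a stand-in for harmonic-balance equations, not the paper's system) to a single univariate polynomial using sympy with lexicographic ordering.

```python
# Toy Groebner-basis reduction: eliminate y and obtain one polynomial in x alone.
import sympy as sp

x, y = sp.symbols('x y')
polys = [x**2 + y**2 - 1, x**3 - y]              # example system, not a real HBM system
G = sp.groebner(polys, y, x, order='lex')        # lex order with y > x eliminates y
univariate = [g for g in G.exprs if sp.degree(g, y) == 0][0]
print("univariate driving polynomial:", univariate)
print("real roots:", [r.evalf(6) for r in sp.real_roots(univariate)])
```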

  1. New solitary wave and multiple soliton solutions for fifth order nonlinear evolution equation with time variable coefficients

    NASA Astrophysics Data System (ADS)

    Jaradat, H. M.; Syam, Muhammed; Jaradat, M. M. M.; Mustafa, Zead; Moman, S.

    2018-03-01

    In this paper, we investigate the multiple soliton solutions and multiple singular soliton solutions of a class of fifth-order nonlinear evolution equations with time-dependent coefficients using the simplified bilinear method based on a transformation combined with Hirota's bilinear sense. In addition, we present an analysis of some parameters such as the soliton amplitude and the characteristic line. Several equations in the literature, such as the Caudrey-Dodd-Gibbon and Sawada-Kotera equations, are special cases of the class we discuss. Comparisons with several methods in the literature, such as the Helmholtz solution of the inverse variational problem, the rational exponential function method, the tanh method, the homotopy perturbation method, the exp-function method, and the coth method, are made. From these comparisons, we conclude that the proposed method is efficient and our solutions are correct. It is worth mentioning that the proposed solution can solve many physical problems.

  2. Variance-based interaction index measuring heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom

    2016-06-01

    This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of the first-order sensitivity indices of Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower-dimensional functions which may then be analyzed separately.
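
    For reference, here is a minimal Monte Carlo estimator of the first-order Sobol' indices that the proposed interaction index is compared against; the Ishigami-style test function, sample sizes and estimator choice are illustrative assumptions, not the authors' benchmark.

```python
# Minimal pick-freeze Monte Carlo estimate of first-order Sobol' indices.
import numpy as np

def model(X):
    # simple test function with interacting inputs (Ishigami-type)
    return np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1])**2 + 0.1 * X[:, 2]**4 * np.sin(X[:, 0])

rng = np.random.default_rng(3)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                                  # "pick-freeze" column swap
    S_i = np.mean(fB * (model(ABi) - fA)) / var_y        # Saltelli (2010) estimator
    print(f"first-order index S_{i + 1} ~ {S_i:.3f}")
```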

  3. Axisymmetric flow of Casson fluid by a swirling cylinder

    NASA Astrophysics Data System (ADS)

    Javed, Muhammad Faisal; Khan, Muhammad Imran; Khan, Niaz Bahadur; Muhammad, Riaz; Rehman, Muftooh Ur; Khan, Sajjad Wali; Khan, Tufail A.

    2018-06-01

    The present communication aims to investigate the influence of heat generation/absorption on axisymmetric Casson liquid flow over a stretched cylinder. The flow is caused by the torsional motion of the cylinder. The governing physical problem is modelled and transformed into a set of coupled nonlinear ordinary differential equations. These equations are solved numerically using a built-in shooting method. The influences of sundry variables on the swirling velocity, temperature, skin friction coefficient and heat transfer rate are computed and analyzed in a physical manner. The magnitude of the axial skin friction is enhanced for larger Reynolds number and magnetic parameter, while the local Nusselt number decays with enhancement of the Casson parameter, heat generation/absorption and magnetic parameter.

  4. MHD Jeffrey nanofluid past a stretching sheet with viscous dissipation effect

    NASA Astrophysics Data System (ADS)

    Zokri, S. M.; Arifin, N. S.; Salleh, M. Z.; Kasim, A. R. M.; Mohammad, N. F.; Yusoff, W. N. S. W.

    2017-09-01

    This study investigates the influence of viscous dissipation on magnetohydrodynamic (MHD) flow of Jeffrey nanofluid over a stretching sheet with convective boundary conditions. The nonlinear partial differential equations are reduced into the nonlinear ordinary differential equations by utilizing the similarity transformation variables. The Runge-Kutta Fehlberg method is used to solve the problem numerically. The numerical solutions obtained are presented graphically for several dimensionless parameters such as Brownian motion, Lewis number and Eckert number on the specified temperature and concentration profiles. It is noted that the temperature profile is accelerated due to increasing values of Brownian motion parameter and Eckert number. In contrast, both the Brownian motion parameter and Lewis number have caused the deceleration in the concentration profiles.
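
    The shooting idea used for similarity equations of this kind can be sketched on a simpler classical example. The code below is an assumption for illustration, not the Jeffrey-nanofluid system: it solves the Blasius problem f''' + 0.5 f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1 by shooting on the wall value f''(0).

```python
# Generic shooting-method sketch on the Blasius similarity equation.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def shoot(fpp0, eta_max=10.0):
    sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 0.0, fpp0], rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0                  # residual of the far-field condition f'(inf) = 1

fpp0 = brentq(shoot, 0.1, 1.0)                 # wall value that satisfies the boundary condition
print(f"f''(0) ~ {fpp0:.5f}")                  # classical value is about 0.33206
```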

  5. Multiobjective sampling design for parameter estimation and model discrimination in groundwater solute transport

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1989-01-01

    Sampling design for site characterization studies of solute transport in porous media is formulated as a multiobjective problem. Optimal design of a sampling network is a sequential process in which the next phase of sampling is designed on the basis of all available physical knowledge of the system. Three objectives are considered: model discrimination, parameter estimation, and cost minimization. For the first two objectives, physically based measures of the value of information obtained from a set of observations are specified. In model discrimination, value of information of an observation point is measured in terms of the difference in solute concentration predicted by hypothesized models of transport. Points of greatest difference in predictions can contribute the most information to the discriminatory power of a sampling design. Sensitivity of solute concentration to a change in a parameter contributes information on the relative variance of a parameter estimate. Inclusion of points in a sampling design with high sensitivities to parameters tends to reduce variance in parameter estimates. Cost minimization accounts for both the capital cost of well installation and the operating costs of collection and analysis of field samples. Sensitivities, discrimination information, and well installation and sampling costs are used to form coefficients in the multiobjective problem in which the decision variables are binary (zero/one), each corresponding to the selection of an observation point in time and space. The solution to the multiobjective problem is a noninferior set of designs. To gain insight into effective design strategies, a one-dimensional solute transport problem is hypothesized. Then, an approximation of the noninferior set is found by enumerating 120 designs and evaluating objective functions for each of the designs. Trade-offs between pairs of objectives are demonstrated among the models. The value of an objective function for a given design is shown to correspond to the ability of a design to actually meet an objective.
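
    A compact sketch of the final step described above, extracting the noninferior (Pareto) set from a table of enumerated designs; the objective values here are random placeholders rather than transport-model results.

```python
# Pareto (noninferior) filtering of enumerated candidate designs.
import numpy as np

rng = np.random.default_rng(4)
# columns: discrimination value (maximize), parameter-information value (maximize), cost (minimize)
designs = rng.uniform(size=(120, 3))
# convert to pure minimization by negating the two "maximize" columns
objs = np.column_stack([-designs[:, 0], -designs[:, 1], designs[:, 2]])

def is_dominated(i):
    others = np.delete(objs, i, axis=0)
    return np.any(np.all(others <= objs[i], axis=1) & np.any(others < objs[i], axis=1))

noninferior = [i for i in range(len(objs)) if not is_dominated(i)]
print(f"{len(noninferior)} noninferior designs out of {len(objs)}")
```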

  6. The Interface Between Theory and Data in Structural Equation Models

    USGS Publications Warehouse

    Grace, James B.; Bollen, Kenneth A.

    2006-01-01

    Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite, for representing general concepts. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling general relationships of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially reduced form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influences of suites of variables are often of interest.

  7. Axi-symmetric generalized thermoelastic diffusion problem with two-temperature and initial stress under fractional order heat conduction

    NASA Astrophysics Data System (ADS)

    Deswal, Sunita; Kalkal, Kapil Kumar; Sheoran, Sandeep Singh

    2016-09-01

    A mathematical model of fractional order two-temperature generalized thermoelasticity with diffusion and initial stress is proposed to analyze the transient wave phenomenon in an infinite thermoelastic half-space. The governing equations are derived in cylindrical coordinates for a two dimensional axi-symmetric problem. The analytical solution is procured by employing the Laplace and Hankel transforms for time and space variables respectively. The solutions are investigated in detail for a time dependent heat source. By using numerical inversion method of integral transforms, we obtain the solutions for displacement, stress, temperature and diffusion fields in physical domain. Computations are carried out for copper material and displayed graphically. The effect of fractional order parameter, two-temperature parameter, diffusion, initial stress and time on the different thermoelastic and diffusion fields is analyzed on the basis of analytical and numerical results. Some special cases have also been deduced from the present investigation.

  8. Sensitivity Analysis of Genetic Algorithm Parameters for Optimal Groundwater Monitoring Network Design

    NASA Astrophysics Data System (ADS)

    Abdeh-Kolahchi, A.; Satish, M.; Datta, B.

    2004-05-01

    A state-of-the-art groundwater monitoring network design is introduced. The method combines groundwater flow and transport results with Genetic Algorithm (GA) optimization to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested optimal groundwater monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models, included as separate modules within the Groundwater Modeling System (GMS), are used to develop three-dimensional groundwater flow and contamination transport simulations. The groundwater flow and contamination simulation results are introduced as input to the optimization model, which uses a Genetic Algorithm (GA) to identify the optimal monitoring network design based on several candidate monitoring locations. The monitoring network design model uses a Genetic Algorithm with binary variables representing potential monitoring locations. As the number of decision variables and constraints increases, the nonlinearity of the objective function also increases, which makes it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique capable of finding the optimal solution for many complex problems. In this study, the GA approach capable of finding the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10^18 feasible solutions will be discussed. However, to ensure the efficiency of the solution process and the global optimality of the solution obtained using the GA, it is necessary that appropriate GA parameter values be specified. The sensitivity analysis of genetic algorithm parameters such as the random number seed, crossover probability, mutation probability, and elitism is discussed for the solution of the monitoring network design problem.
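
    A hedged sketch of a GA over binary location variables is given below; the fitness function is a simple placeholder with a well-count penalty, not the MODFLOW/MT3DMS-based plume-tracking objective used in the study.

```python
# Toy binary GA: one bit per candidate monitoring location, with elitism,
# tournament selection, one-point crossover and bit-flip mutation.
import numpy as np

rng = np.random.default_rng(5)
n_locations, pop_size, n_generations = 30, 60, 100
n_wells = 6                                    # budget: number of wells allowed
value = rng.uniform(size=n_locations)          # placeholder detection value per location

def fitness(individual):
    score = value @ individual
    penalty = 10.0 * max(0, individual.sum() - n_wells)   # penalize exceeding the budget
    return score - penalty

pop = (rng.uniform(size=(pop_size, n_locations)) < 0.2).astype(int)
for gen in range(n_generations):
    fit = np.array([fitness(ind) for ind in pop])
    elite = pop[fit.argmax()].copy()                       # elitism: keep the best individual
    # tournament selection
    idx = rng.integers(0, pop_size, size=(pop_size, 2))
    parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # one-point crossover (probability 0.8) and bit-flip mutation (probability 0.02)
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        if rng.uniform() < 0.8:
            cut = rng.integers(1, n_locations)
            children[i, cut:], children[i + 1, cut:] = parents[i + 1, cut:], parents[i, cut:]
    children ^= (rng.uniform(size=children.shape) < 0.02).astype(int)
    children[0] = elite
    pop = children

best = pop[np.array([fitness(ind) for ind in pop]).argmax()]
print("selected locations:", np.flatnonzero(best))
```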

  9. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static systems; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and the perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  10. Sources of Uncertainty in the Prediction of LAI / fPAR from MODIS

    NASA Technical Reports Server (NTRS)

    Dungan, Jennifer L.; Ganapol, Barry D.; Brass, James A. (Technical Monitor)

    2002-01-01

    To explicate the sources of uncertainty in the prediction of biophysical variables over space, consider the general equation in which z is a variable with values on some nominal, ordinal, interval or ratio scale; y is a vector of input variables; u is the spatial support of y and z; x and u are the spatial locations of y and z, respectively; f is a model and B is the vector of the parameters of this model. Any y or z has a value and a spatial extent, which is called its support. Viewed in this way, categories of uncertainty are variable (e.g. measurement), parameter, positional, support and model (e.g. structural) sources. The prediction of Leaf Area Index (LAI) and the fraction of absorbed photosynthetically active radiation (fPAR) are examples of z variables predicted using model(s) as a function of y variables and spatially constant parameters. The MOD15 algorithm is an example of f, called f_1, with parameters including those defined by one of six biome types and solar and view angles. The Leaf Canopy Model (LCM2), a nested model that combines leaf radiative transfer with a full canopy reflectance model through the phase function, is a simpler though similar radiative transfer approach to f_1. In a previous study, MOD15 and LCM2 gave similar results for the broadleaf forest biome. Differences between these two models can be used to consider the structural uncertainty in prediction results. In an effort to quantify each of the five sources of uncertainty and rank their relative importance for the LAI/fPAR prediction problem, we used recent data for an EOS Core Validation Site in the broadleaf biome with coincident surface reflectance, vegetation index, fPAR and LAI products from the Moderate Resolution Imaging Spectrometer (MODIS). Uncertainty due to support in the input reflectance variable was characterized using Landsat ETM+ data. Input uncertainties were propagated through the LCM2 model and compared with published uncertainties from the MOD15 algorithm.

  11. Optimum strata boundaries and sample sizes in health surveys using auxiliary variables

    PubMed Central

    2018-01-01

    Using convenient stratification criteria such as geographical regions or other natural conditions like age, gender, etc., is not beneficial in order to maximize the precision of the estimates of variables of interest. Thus, one has to look for an efficient stratification design to divide the whole population into homogeneous strata that achieves higher precision in the estimation. In this paper, a procedure for determining Optimum Stratum Boundaries (OSB) and Optimum Sample Sizes (OSS) for each stratum of a variable of interest in health surveys is developed. The determination of OSB and OSS based on the study variable is not feasible in practice since the study variable is not available prior to the survey. Since many variables in health surveys are generally skewed, the proposed technique considers the readily-available auxiliary variables to determine the OSB and OSS. This stratification problem is formulated into a Mathematical Programming Problem (MPP) that seeks minimization of the variance of the estimated population parameter under Neyman allocation. It is then solved for the OSB by using a dynamic programming (DP) technique. A numerical example with a real data set of a population, aiming to estimate the Haemoglobin content in women in a national Iron Deficiency Anaemia survey, is presented to illustrate the procedure developed in this paper. Upon comparisons with other methods available in literature, results reveal that the proposed approach yields a substantial gain in efficiency over the other methods. A simulation study also reveals similar results. PMID:29621265

  12. Optimum strata boundaries and sample sizes in health surveys using auxiliary variables.

    PubMed

    Reddy, Karuna Garan; Khan, Mohammad G M; Khan, Sabiha

    2018-01-01

    Using convenient stratification criteria such as geographical regions or other natural conditions like age, gender, etc., is not beneficial in order to maximize the precision of the estimates of variables of interest. Thus, one has to look for an efficient stratification design to divide the whole population into homogeneous strata that achieves higher precision in the estimation. In this paper, a procedure for determining Optimum Stratum Boundaries (OSB) and Optimum Sample Sizes (OSS) for each stratum of a variable of interest in health surveys is developed. The determination of OSB and OSS based on the study variable is not feasible in practice since the study variable is not available prior to the survey. Since many variables in health surveys are generally skewed, the proposed technique considers the readily-available auxiliary variables to determine the OSB and OSS. This stratification problem is formulated into a Mathematical Programming Problem (MPP) that seeks minimization of the variance of the estimated population parameter under Neyman allocation. It is then solved for the OSB by using a dynamic programming (DP) technique. A numerical example with a real data set of a population, aiming to estimate the Haemoglobin content in women in a national Iron Deficiency Anaemia survey, is presented to illustrate the procedure developed in this paper. Upon comparisons with other methods available in literature, results reveal that the proposed approach yields a substantial gain in efficiency over the other methods. A simulation study also reveals similar results.
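
    A simplified sketch of the dynamic-programming idea is shown below: stratum boundaries on a sorted auxiliary variable are chosen to minimize the Neyman-allocation term sum_h W_h S_h. This is a stand-in for the paper's full MPP formulation, using made-up skewed data and a direct O(L N^2) recursion.

```python
# Dynamic programming over prefix partitions of a sorted auxiliary variable.
import numpy as np

def stratum_cost(x_sorted, i, j, N):
    seg = x_sorted[i:j]
    return (len(seg) / N) * seg.std()          # W_h * S_h for stratum [i, j)

def optimal_boundaries(x, n_strata):
    x_sorted = np.sort(x)
    N = len(x_sorted)
    INF = float("inf")
    # best[h][j]: minimal cost of splitting the first j units into h strata
    best = [[INF] * (N + 1) for _ in range(n_strata + 1)]
    back = [[0] * (N + 1) for _ in range(n_strata + 1)]
    best[0][0] = 0.0
    for h in range(1, n_strata + 1):
        for j in range(h, N + 1):
            for i in range(h - 1, j):
                c = best[h - 1][i] + stratum_cost(x_sorted, i, j, N)
                if c < best[h][j]:
                    best[h][j], back[h][j] = c, i
    # recover the cut points as values of the auxiliary variable
    cuts, j = [], N
    for h in range(n_strata, 0, -1):
        i = back[h][j]
        if i > 0:
            cuts.append(x_sorted[i])
        j = i
    return sorted(cuts), best[n_strata][N]

rng = np.random.default_rng(6)
x = rng.lognormal(mean=2.0, sigma=0.6, size=200)   # skewed auxiliary variable
boundaries, cost = optimal_boundaries(x, n_strata=4)
print("optimum stratum boundaries:", np.round(boundaries, 2), "objective:", round(cost, 4))
```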

  13. Systematic investigation of non-Boussinesq effects in variable-density groundwater flow simulations.

    PubMed

    Guevara Morel, Carlos R; van Reeuwijk, Maarten; Graf, Thomas

    2015-12-01

    The validity of three mathematical models describing variable-density groundwater flow is systematically evaluated: (i) a model which invokes the Oberbeck-Boussinesq approximation (OB approximation), (ii) a model of intermediate complexity (NOB1) and (iii) a model which solves the full set of equations (NOB2). The NOB1 and NOB2 descriptions have been added to the HydroGeoSphere (HGS) model, which originally contained an implementation of the OB description. We define the Boussinesq parameter ε_ρ = β_ω Δω, where β_ω is the solutal expansivity and Δω is the characteristic difference in solute mass fraction. The Boussinesq parameter ε_ρ is used to systematically investigate three flow scenarios covering a range of free and mixed convection problems: 1) the low Rayleigh number Elder problem (Van Reeuwijk et al., 2009), 2) a convective fingering problem (Xie et al., 2011) and 3) a mixed convective problem (Schincariol et al., 1994). Results indicate that small density differences (ε_ρ ≤ 0.05) produce no apparent changes in the total solute mass in the system, plume penetration depth, center of mass and mass flux, independent of the mathematical model used. Deviations between OB, NOB1 and NOB2 occur for large density differences (ε_ρ > 0.12), where lower description levels will underestimate the vertical plume position and overestimate mass flux. Based on the cases considered here, we suggest the following guidelines for saline convection: the OB approximation is valid for cases with ε_ρ < 0.05, and the full NOB set of equations needs to be used for cases with ε_ρ > 0.10. Whether NOB effects are important in the intermediate region differs from case to case. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the in-flight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostics, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.

  15. The role of simulated small-scale ocean variability in inverse computations for ocean acoustic tomography.

    PubMed

    Dushaw, Brian D; Sagen, Hanne

    2017-12-01

    Ocean acoustic tomography depends on a suitable reference ocean environment with which to set the basic parameters of the inverse problem. Some inverse problems may require a reference ocean that includes the small-scale variations from internal waves, small mesoscale, or spice. Tomographic inversions that employ data of stable shadow zone arrivals, such as those that have been observed in the North Pacific and Canary Basin, are an example. Estimating temperature from the unique acoustic data that have been obtained in Fram Strait is another example. The addition of small-scale variability to augment a smooth reference ocean is essential to understanding the acoustic forward problem in these cases. Rather than a hindrance, the stochastic influences of the small scale can be exploited to obtain accurate inverse estimates. Inverse solutions are readily obtained, and they give computed arrival patterns that matched the observations. The approach is not ad hoc, but universal, and it has allowed inverse estimates for ocean temperature variations in Fram Strait to be readily computed on several acoustic paths for which tomographic data were obtained.

  16. Fitting ordinary differential equations to short time course data.

    PubMed

    Brewer, Daniel; Barenco, Martino; Callard, Robin; Hubank, Michael; Stark, Jaroslav

    2008-02-28

    Ordinary differential equations (ODEs) are widely used to model many systems in physics, chemistry, engineering and biology. Often one wants to compare such equations with observed time course data, and use this to estimate parameters. Surprisingly, practical algorithms for doing this are relatively poorly developed, particularly in comparison with the sophistication of numerical methods for solving both initial and boundary value problems for differential equations, and for locating and analysing bifurcations. A lack of good numerical fitting methods is particularly problematic in the context of systems biology where only a handful of time points may be available. In this paper, we present a survey of existing algorithms and describe the main approaches. We also introduce and evaluate a new efficient technique for estimating ODEs linear in parameters particularly suited to situations where noise levels are high and the number of data points is low. It employs a spline-based collocation scheme and alternates linear least squares minimization steps with repeated estimates of the noise-free values of the variables. This is reminiscent of expectation-maximization methods widely used for problems with nuisance parameters or missing data.
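
    A minimal version of the spline-based idea for an ODE linear in its parameters might look like the following; the model dx/dt = a - b x, the noise level and the smoothing factor are illustrative assumptions, not the authors' test system.

```python
# Spline smoothing plus linear least squares for an ODE linear in its parameters.
import numpy as np
from scipy.interpolate import UnivariateSpline

a_true, b_true, x0 = 2.0, 0.5, 0.0
t = np.linspace(0.0, 8.0, 15)                  # short, sparse time course
x_true = (a_true / b_true) * (1.0 - np.exp(-b_true * t)) + x0 * np.exp(-b_true * t)
rng = np.random.default_rng(7)
x_obs = x_true + 0.1 * rng.normal(size=t.size)

spline = UnivariateSpline(t, x_obs, k=4, s=len(t) * 0.1**2)   # smoothing spline fit
x_hat = spline(t)
dxdt_hat = spline.derivative()(t)

# dx/dt = a - b*x  =>  linear regression of dxdt_hat on [1, -x_hat]
design = np.column_stack([np.ones_like(t), -x_hat])
(a_est, b_est), *_ = np.linalg.lstsq(design, dxdt_hat, rcond=None)
print(f"estimated a ~ {a_est:.2f} (true {a_true}), b ~ {b_est:.2f} (true {b_true})")
```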

  17. Optimization of design and operating parameters of a space-based optical-electronic system with a distributed aperture.

    PubMed

    Tcherniavski, Iouri; Kahrizi, Mojtaba

    2008-11-20

    Using a gradient optimization method with objective functions formulated in terms of a signal-to-noise ratio (SNR) calculated at given values of the prescribed spatial ground resolution, optimization problems of geometrical parameters of a distributed optical system and a charge-coupled device of a space-based optical-electronic system are solved for samples of the optical systems consisting of two and three annular subapertures. The modulation transfer function (MTF) of the distributed aperture is expressed in terms of an average MTF taking residual image alignment (IA) and optical path difference (OPD) errors into account. The results show optimal solutions of the optimization problems depending on diverse variable parameters. The information on the magnitudes of the SNR can be used to determine the number of the subapertures and their sizes, while the information on the SNR decrease depending on the IA and OPD errors can be useful in design of a beam combination control system to produce the necessary requirements to its accuracy on the basis of the permissible deterioration in the image quality.

  18. Computation of solar perturbations with Poisson series

    NASA Technical Reports Server (NTRS)

    Broucke, R.

    1974-01-01

    Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.

  19. Multiobjective Optimization of Atmospheric Plasma Spray Process Parameters to Deposit Yttria-Stabilized Zirconia Coatings Using Response Surface Methodology

    NASA Astrophysics Data System (ADS)

    Ramachandran, C. S.; Balasubramanian, V.; Ananthapadmanabhan, P. V.

    2011-03-01

    Atmospheric plasma spraying is used extensively to make Thermal Barrier Coatings of 7-8% yttria-stabilized zirconia powders. The main problem faced in the manufacture of yttria-stabilized zirconia coatings by the atmospheric plasma spraying process is the selection of the optimum combination of input variables for achieving the required qualities of coating. This problem can be solved by the development of empirical relationships between the process parameters (input power, primary gas flow rate, stand-off distance, powder feed rate, and carrier gas flow rate) and the coating quality characteristics (deposition efficiency, tensile bond strength, lap shear bond strength, porosity, and hardness) through effective and strategic planning and the execution of experiments by response surface methodology. This article highlights the use of response surface methodology by designing a five-factor five-level central composite rotatable design matrix with full replication for planning, conduction, execution, and development of empirical relationships. Further, response surface methodology was used for the selection of optimum process parameters to achieve desired quality of yttria-stabilized zirconia coating deposits.

  20. Proposed Test of Relative Phase as Hidden Variable in Quantum Mechanics

    DTIC Science & Technology

    2012-01-01

    implicitly due to its ubiquity in quantum theory, but searches for dependence of measurement outcome on other parameters have been lacking. For a two-state ... implementation for the specific case of an atomic two-state system with laser-induced fluorescence for measurement. Keywords: quantum measurement, measurement postulate, Born rule. Quantum theory prescribes probabilities for outcomes of measurements

  1. The Prediction of Transducer Element Performance from In-Air Measurements.

    DTIC Science & Technology

    1982-01-19

    [Front-matter figure list: Predicted and Measured Transducer Impedance; Principle of Operation of Fotonic Sensor; Experimental Set-up.] ... inferred from tests of the assembled element, and cannot account for assembly problems such as misalignment and improper glue joints. Thus, the ... results neither predict nor account for the element variability found in actual practice. Our purpose, then, is to derive the lumped-parameter

  2. Application of the simplex method to the optimal adjustment of the parameters of a ventilation network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamba, G.M.; Jacques, E.; Patigny, J.

    1995-12-31

    Literature is rather abundant on the topic of steady-state network analysis programs. Many versions exist, and some of them have extended facilities such as full graphical manipulation, fire simulation in motion, etc. These programs are certainly of great help to any ventilation planning and often assist the ventilation engineer in operational decision making. However, whatever the efficiency of the calculation algorithms might be, their weak point is still the overall validity of the model. This numerical model, apart from the possibly questionable application of some physical laws, depends directly on the quality of the data used to identify its most influential parameters, such as the passive (resistance) or active (fan) characteristic of each of the branches in the network. Considering the non-linear character of the problem and the great number of variables involved, finding the closest numerical model of a real mine ventilation network is without any doubt a very difficult problem. This problem, often referred to as the parameter adjustment problem, is in almost every practical case solved on an experimental and "feeling" basis. Only a few papers put forward a mathematical solution based on a least-squares approach as the best-fit criterion. The aim of this paper is to examine the possibility of applying the well-known simplex method to this problem. The performance of this method and its capability to reach the global optimum corresponding to the best fit are discussed and compared to those of other methods.
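
    The flavour of the approach can be sketched with a toy two-branch network: branch resistances are adjusted by the downhill simplex (Nelder-Mead) method so that computed pressure drops match "measured" ones in the least-squares sense. The quadratic resistance law and all numbers below are assumptions for illustration, not the paper's network.

```python
# Toy parameter-adjustment sketch using the Nelder-Mead downhill simplex.
import numpy as np
from scipy.optimize import minimize

flows = np.array([10.0, 25.0, 40.0])                # measured airflows (placeholder units)
r_true = np.array([0.012, 0.030])                   # "true" branch resistances to recover
measured_dp = np.array([[r * q**2 for r in r_true] for q in flows])   # dp = R * Q^2

def misfit(r):
    predicted = np.array([[ri * q**2 for ri in r] for q in flows])
    return np.sum((predicted - measured_dp)**2)     # least-squares best-fit criterion

result = minimize(misfit, x0=[0.05, 0.05], method="Nelder-Mead")
print("adjusted resistances:", np.round(result.x, 4), "true:", r_true)
```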

  3. Sequential estimation of intrinsic activity and synaptic input in single neurons by particle filtering with optimal importance density

    NASA Astrophysics Data System (ADS)

    Closas, Pau; Guillamon, Antoni

    2017-12-01

    This paper deals with the problem of inferring the signals and parameters that cause neural activity to occur. With the ultimate challenge being to unveil the brain's connectivity, here we focus on a microscopic vision of the problem, where single neurons (potentially connected to a network of peers) are at the core of our study. The sole observations available are noisy, sampled voltage traces obtained from intracellular recordings. We design algorithms and inference methods using the tools provided by stochastic filtering that allow a probabilistic interpretation and treatment of the problem. Using particle filtering, we are able to reconstruct traces of voltages and estimate the time course of auxiliary variables. By extending the algorithm through PMCMC methodology, we are able to estimate hidden physiological parameters as well, such as intrinsic conductances or reversal potentials. Last, but not least, the method is applied to estimate synaptic conductances arriving at a target cell, thus reconstructing the synaptic excitatory/inhibitory input traces. Notably, these estimates achieve the theoretical lower bounds even in spiking regimes.
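
    A minimal bootstrap (sequential importance resampling) particle filter for a scalar hidden state is sketched below as a stand-in for the voltage-trace filtering described above; the AR(1) dynamics, noise levels and particle count are assumptions, not the authors' neuron model.

```python
# Bootstrap particle filter for a scalar "voltage-like" hidden state.
import numpy as np

rng = np.random.default_rng(8)
T, n_particles = 200, 500
a, proc_std, obs_std = 0.95, 0.3, 0.5

# simulate a hidden AR(1) trace and noisy observations of it
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + proc_std * rng.normal()
y = x + obs_std * rng.normal(size=T)

particles = rng.normal(size=n_particles)
estimates = np.zeros(T)
for t in range(T):
    particles = a * particles + proc_std * rng.normal(size=n_particles)    # propagate
    weights = np.exp(-0.5 * ((y[t] - particles) / obs_std) ** 2) + 1e-300  # Gaussian likelihood
    weights /= weights.sum()
    estimates[t] = np.sum(weights * particles)                             # posterior mean
    # systematic resampling
    positions = (rng.uniform() + np.arange(n_particles)) / n_particles
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n_particles - 1)
    particles = particles[idx]

rmse = float(np.sqrt(np.mean((estimates - x) ** 2)))
print(f"RMS error of the filtered state: {rmse:.3f}")
```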

  4. An interior-point method for total variation regularized positron emission tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using Poisson noise model and TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.

  5. Application of modern control theory to scheduling and path-stretching maneuvers of aircraft in the near terminal area

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1974-01-01

    A design concept of the dynamic control of aircraft in the near terminal area is discussed. An arbitrary set of nominal air routes, with possible multiple merging points, all leading to a single runway, is considered. The system allows for the automated determination of acceleration/deceleration of aircraft along the nominal air routes, as well as for the automated determination of path-stretching delay maneuvers. In addition to normal operating conditions, the system accommodates: (1) variable commanded separations over the outer marker to allow for takeoffs and between successive landings and (2) emergency conditions under which aircraft in distress have priority. The system design is based on a combination of three distinct optimal control problems involving a standard linear-quadratic problem, a parameter optimization problem, and a minimum-time rendezvous problem.

  6. Numerical investigation of CO2 emission and thermal stability of a convective and radiative stockpile of reactive material in a cylindrical pipe of variable thermal conductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lebelo, Ramoshweu Solomon, E-mail: sollyl@vut.ac.za

    In this paper the CO2 emission and thermal stability in a long cylindrical pipe of combustible reactive material with variable thermal conductivity are investigated. It is assumed that the cylindrical pipe loses heat by both convection and radiation at the surface. The nonlinear differential equations governing the problem are tackled numerically using the Runge-Kutta-Fehlberg method coupled with a shooting technique. The effects of various thermophysical parameters on the temperature and carbon dioxide fields, together with the critical conditions for thermal ignition, are illustrated and discussed quantitatively.

  7. An exploration of viscosity models in the realm of kinetic theory of liquids originated fluids

    NASA Astrophysics Data System (ADS)

    Hussain, Azad; Ghafoor, Saadia; Malik, M. Y.; Jamal, Sarmad

    The principal aim of this article is to study the flow of an Eyring-Powell fluid past a penetrable plate. To capture the effects of variable viscosity on the fluid model, the continuity, momentum and energy equations are elaborated. Here, viscosity is taken as a function of temperature. To describe this dependence, the Reynolds and Vogel models of variable viscosity are incorporated. The highly nonlinear partial differential equations are transformed into ordinary differential equations with the help of suitable similarity transformations. The numerical solution of the problem is presented. Graphs are plotted to visualize the behavior of the pertinent parameters on the velocity and temperature profiles.

  8. Optimal observables for multiparameter seismic tomography

    NASA Astrophysics Data System (ADS)

    Bernauer, Moritz; Fichtner, Andreas; Igel, Heiner

    2014-08-01

    We propose a method for the design of seismic observables with maximum sensitivity to a target model parameter class, and minimum sensitivity to all remaining parameter classes. The resulting optimal observables thereby minimize interparameter trade-offs in multiparameter inverse problems. Our method is based on the linear combination of fundamental observables that can be any scalar measurement extracted from seismic waveforms. Optimal weights of the fundamental observables are determined with an efficient global search algorithm. While most optimal design methods assume variable source and/or receiver positions, our method has the flexibility to operate with a fixed source-receiver geometry, making it particularly attractive in studies where the mobility of sources and receivers is limited. In a series of examples we illustrate the construction of optimal observables, and assess the potential and limitations of the method. The combination of Rayleigh-wave traveltimes in four frequency bands yields an observable with strongly enhanced sensitivity to 3-D density structure. Simultaneously, sensitivity to S velocity is reduced, and sensitivity to P velocity is eliminated. The original three-parameter problem thereby collapses into a simpler two-parameter problem with one dominant parameter. By defining parameter classes to equal earth model properties within specific regions, our approach mimics the Backus-Gilbert method where data are combined to focus sensitivity in a target region. This concept is illustrated using rotational ground motion measurements as fundamental observables. Forcing dominant sensitivity in the near-receiver region produces an observable that is insensitive to the Earth structure at more than a few wavelengths' distance from the receiver. This observable may be used for local tomography with teleseismic data. While our test examples use a small number of well-understood fundamental observables, few parameter classes and a radially symmetric earth model, the method itself does not impose such restrictions. It can easily be applied to large numbers of fundamental observables and parameter classes, as well as to 3-D heterogeneous earth models.

  9. Testing life history predictions in a long-lived seabird: A population matrix approach with improved parameter estimation

    USGS Publications Warehouse

    Doherty, P.F.; Schreiber, E.A.; Nichols, J.D.; Hines, J.E.; Link, W.A.; Schenk, G.A.; Schreiber, R.W.

    2004-01-01

    Life history theory and associated empirical generalizations predict that population growth rate (λ) in long-lived animals should be most sensitive to adult survival; the rates to which λ is most sensitive should be those with the smallest temporal variances; and stochastic environmental events should most affect the rates to which λ is least sensitive. To date, most analyses attempting to examine these predictions have been inadequate, their validity being called into question by problems in estimating parameters, problems in estimating the variability of parameters, and problems in measuring population sensitivities to parameters. We use improved methodologies in these three areas and test these life-history predictions in a population of red-tailed tropicbirds (Phaethon rubricauda). We find support for our first prediction that λ is most sensitive to survival rates. However, the support for the second prediction, that these rates have the smallest temporal variance, was equivocal. Previous support for the second prediction may be an artifact of a high survival estimate near the upper boundary of 1 and not a result of natural selection canalizing variances alone. We did not find support for our third prediction that effects of environmental stochasticity (El Niño) would most likely be detected in vital rates to which λ was least sensitive and which are thought to have high temporal variances. Comparative data-sets on other seabirds, within and among orders, and in other locations, are needed to understand these environmental effects.
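
    The sensitivity statements above rest on a standard matrix calculation: λ is the dominant eigenvalue of the population projection matrix, and its sensitivity to each vital rate follows from the left and right eigenvectors. The sketch below illustrates that calculation with an arbitrary three-stage matrix, not the tropicbird estimates.

```python
import numpy as np

# Illustrative 3-stage projection matrix (fecundities in the top row, survival and
# transition rates below); these are NOT the tropicbird parameter estimates.
A = np.array([[0.0, 0.2, 0.5],
              [0.6, 0.0, 0.0],
              [0.0, 0.8, 0.9]])

vals, right = np.linalg.eig(A)
i = np.argmax(vals.real)
lam = vals.real[i]                                   # population growth rate (dominant eigenvalue)
w = np.abs(right[:, i].real)                         # right eigenvector: stable stage structure
vals_l, left = np.linalg.eig(A.T)
v = np.abs(left[:, np.argmax(vals_l.real)].real)     # left eigenvector: reproductive values

# Caswell sensitivity matrix: s_ij = d(lambda)/d(a_ij) = v_i * w_j / <v, w>
S = np.outer(v, w) / np.dot(v, w)
E = S * A / lam                                      # elasticities (proportional sensitivities)

print(f"lambda = {lam:.3f}")
print("sensitivities:\n", np.round(S, 3))
print("elasticities:\n", np.round(E, 3))
```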

  10. Geographically weighted regression and multicollinearity: dispelling the myth

    NASA Astrophysics Data System (ADS)

    Fotheringham, A. Stewart; Oshan, Taylor M.

    2016-10-01

    Geographically weighted regression (GWR) extends the familiar regression framework by estimating a set of parameters for any number of locations within a study area, rather than producing a single parameter estimate for each relationship specified in the model. Recent literature has suggested that GWR is highly susceptible to the effects of multicollinearity between explanatory variables and has proposed a series of local measures of multicollinearity as an indicator of potential problems. In this paper, we employ a controlled simulation to demonstrate that GWR is in fact very robust to the effects of multicollinearity. Consequently, the contention that GWR is highly susceptible to multicollinearity issues needs rethinking.
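
    At its core, GWR fits a separate weighted least-squares regression at each calibration location, with weights decaying with distance. The sketch below illustrates that local fit with a Gaussian kernel on synthetic data; the coordinates, bandwidth and variables are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic study area: 200 locations, one spatially varying coefficient
coords = rng.uniform(0, 10, size=(200, 2))
x = rng.normal(size=200)
beta_true = 1.0 + 0.2 * coords[:, 0]          # slope drifts across the study area
y = beta_true * x + rng.normal(scale=0.3, size=200)
X = np.column_stack([np.ones(200), x])        # intercept + one explanatory variable

def gwr_at(i, bandwidth=2.0):
    """Weighted least-squares fit centred on location i (Gaussian distance kernel)."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta                                # local [intercept, slope] estimate

local_betas = np.array([gwr_at(i) for i in range(200)])
print("local slope range:", local_betas[:, 1].min(), local_betas[:, 1].max())
```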

  11. Design and performance study of an orthopaedic surgery robotized module for automatic bone drilling.

    PubMed

    Boiadjiev, George; Kastelov, Rumen; Boiadjiev, Tony; Kotev, Vladimir; Delchev, Kamen; Zagurski, Kazimir; Vitkov, Vladimir

    2013-12-01

    Many orthopaedic operations involve drilling and tapping before the insertion of screws into a bone. This drilling is usually performed manually, thus introducing many problems. These include attaining a specific drilling accuracy, preventing blood vessels from breaking, and minimizing drill oscillations that would widen the hole. Bone overheating is the most important problem. To avoid such problems and reduce the subjective factor, automated drilling is recommended. Because numerous parameters influence the drilling process, this study examined some experimental methods. These concerned the experimental identification of technical drilling parameters, including the bone resistance force and temperature in the drilling process. During the drilling process, the following parameters were monitored: time, linear velocity, angular velocity, resistance force, penetration depth, and temperature. Specific drilling effects were revealed during the experiments. The accuracy was improved at the starting point of the drilling, and the error for the entire process was less than 0.2 mm. The temperature deviations were kept within tolerable limits. The results of various experiments with different drilling velocities, drill bit diameters, and penetration depths are presented in tables, as well as the curves of the resistance force and temperature with respect to time. Real-time digital indications of the progress of the drilling process are shown. Automatic bone drilling could entirely solve the problems that usually arise during manual drilling. An experimental setup was designed to identify bone drilling parameters such as the resistance force arising from variable bone density, appropriate mechanical drilling torque, linear speed of the drill, and electromechanical characteristics of the motors, drives, and corresponding controllers. Automatic drilling guarantees greater safety for the patient. Moreover, the robot presented is user-friendly because it is simple to set robot tasks, and process data are collected in real time. Copyright © 2013 John Wiley & Sons, Ltd.

  12. Reliability analysis of composite structures

    NASA Technical Reports Server (NTRS)

    Kan, Han-Pin

    1992-01-01

    A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters are then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, fabrication and assembly processes. The influence of structural geometry and mode of failure are also considerations in the evaluation. Example problems are given to illustrate various levels of analytical complexity.

  13. Self-Learning Variable Structure Control for a Class of Sensor-Actuator Systems

    PubMed Central

    Chen, Sanfeng; Li, Shuai; Liu, Bo; Lou, Yuesheng; Liang, Yongsheng

    2012-01-01

    Variable structure strategy is widely used for the control of sensor-actuator systems modeled by Euler-Lagrange equations. However, accurate knowledge of the model structure and model parameters is often required for the control design. In this paper, we consider model-free variable structure control of a class of sensor-actuator systems, where only the online input and output of the system are available while the mathematical model of the system is unknown. The problem is formulated from an optimal control perspective and the implicit form of the control law is analytically obtained by using the principle of optimality. The control law and the optimal cost function are explicitly solved iteratively. Simulations demonstrate the effectiveness and the efficiency of the proposed method. PMID:22778633

  14. Visualization of Global Sensitivity Analysis Results Based on a Combination of Linearly Dependent and Independent Directions

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Gundy-Burlet, Karen

    2010-01-01

    A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and caused by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.

  15. A Two-Timescale Discretization Scheme for Collocation

    NASA Technical Reports Server (NTRS)

    Desai, Prasun; Conway, Bruce A.

    2004-01-01

    The development of a two-timescale discretization scheme for collocation is presented. This scheme allows a larger discretization to be utilized for smoothly varying state variables and a second, finer discretization to be utilized for state variables having higher frequency dynamics. As such, the discretization scheme can be tailored to the dynamics of the particular state variables. In so doing, the size of the overall Nonlinear Programming (NLP) problem can be reduced significantly. Two architectures for the two-timescale discretization scheme are described. Comparison of results between the two-timescale method and conventional collocation shows very good agreement. Differences of less than 0.5 percent are observed. Consequently, a significant reduction (by two-thirds) in the number of NLP parameters and iterations required for convergence can be achieved without sacrificing solution accuracy.

  16. Sensitivity analysis of discrete structural systems: A survey

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.

    1984-01-01

    Methods for calculating sensitivity derivatives for discrete structural systems are surveyed, primarily covering literature published during the past two decades. Methods are described for calculating derivatives of static displacements and stresses, eigenvalues and eigenvectors, transient structural response, and derivatives of optimum structural designs with respect to problem parameters. The survey is focused on publications addressed to structural analysis, but also includes a number of methods developed in nonstructural fields such as electronics, controls, and physical chemistry which are directly applicable to structural problems. Most notable among the nonstructural-based methods are the adjoint variable technique from control theory, and the Green's function and FAST methods from physical chemistry.

  17. The Concept of Resource Use Efficiency as a Theoretical Basis for Promising Coal Mining Technologies

    NASA Astrophysics Data System (ADS)

    Mikhalchenko, Vadim

    2017-11-01

    The article is devoted to solving one of the most relevant problems of the coal mining industry - its high resource use efficiency, which results in high environmental and economic costs of operating enterprises. It is shown that it is the high resource use efficiency of traditional, historically developed coal production systems that generates a conflict between indicators of economic efficiency and indicators of resistance to uncertainty and variability of market environment parameters. The traditional technological paradigm of exploitation of coal deposits also predetermines high, technology-driven, economic risks. The solution is shown and a real example of the problem solution is considered.

  18. Algorithms for Maneuvering Spacecraft Around Small Bodies

    NASA Technical Reports Server (NTRS)

    Acikmese, A. Bechet; Bayard, David

    2006-01-01

    A document describes mathematical derivations and applications of autonomous guidance algorithms for maneuvering spacecraft in the vicinities of small astronomical bodies like comets or asteroids. These algorithms compute fuel- or energy-optimal trajectories for typical maneuvers by solving the associated optimal-control problems with relevant control and state constraints. In the derivations, these problems are converted from their original continuous (infinite-dimensional) forms to finite-dimensional forms through (1) discretization of the time axis and (2) spectral discretization of control inputs via a finite number of Chebyshev basis functions. In these doubly discretized problems, the Chebyshev coefficients are the variables. These problems are, variously, either convex programming problems or programming problems that can be convexified. The resulting discrete problems are convex parameter-optimization problems; this is desirable because one can take advantage of very efficient and robust algorithms that have been developed previously and are well established for solving such problems. These algorithms are fast, do not require initial guesses, and always converge to global optima. Following the derivations, the algorithms are demonstrated by applying them to numerical examples of flyby, descent-to-hover, and ascent-from-hover maneuvers.
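
    The document's discretization strategy can be sketched in miniature: represent the control history with a handful of Chebyshev coefficients, discretize the time axis, and hand the resulting finite-dimensional parameter-optimization problem to a standard solver. The toy example below uses a rest-to-rest double integrator, with terminal conditions handled by quadratic penalties and a generic quasi-Newton solver rather than the convex-programming formulation and spacecraft dynamics of the actual algorithms.

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.optimize import minimize

# Toy rest-to-rest maneuver of a double integrator on t in [0, 1]
N_CHEB = 6                       # number of Chebyshev coefficients describing the control
N_STEPS = 200                    # time discretization
t = np.linspace(0.0, 1.0, N_STEPS + 1)
tau = 2.0 * t - 1.0              # map the time axis onto the Chebyshev domain [-1, 1]
dt = t[1] - t[0]

def simulate(c):
    """Integrate x'' = u(t), with u expanded in Chebyshev polynomials with coefficients c."""
    u = C.chebval(tau, c)
    x = v = 0.0
    for k in range(N_STEPS):     # simple explicit Euler march
        x += v * dt
        v += u[k] * dt
    return x, v, u

def cost(c):
    # control-energy objective plus quadratic penalties on the terminal conditions
    # (the document's algorithms instead impose these as constraints in a convex program)
    x, v, u = simulate(c)
    energy = np.sum(u[:-1] ** 2) * dt
    return energy + 1e3 * ((x - 1.0) ** 2 + v ** 2)

res = minimize(cost, np.zeros(N_CHEB), method="BFGS")
xf, vf, _ = simulate(res.x)
print("final position and velocity:", round(xf, 4), round(vf, 4))
```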

  19. Aquifer Hydrogeologic Layer Zonation at the Hanford Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savelieva-Trofimova, Elena A.; Kanevski, Mikhail; timonin, v.

    2003-09-10

    Sedimentary aquifer layers are characterized by spatial variability of hydraulic properties. Nevertheless, zones with similar values of hydraulic parameters (parameter zones) can be distinguished. This parameter zonation approach is an alternative to the analysis of spatial variation of the continuous hydraulic parameters. The parameter zonation approach is primarily motivated by the lack of measurements that would be needed for direct spatial modeling of the hydraulic properties. The current work is devoted to the problem of zonation of the Hanford formation, the uppermost sedimentary aquifer unit (U1) included in hydrogeologic models at the Hanford site. U1 is characterized by 5 zones with different hydraulic properties. Each sampled location is ascribed to a parameter zone by an expert. This initial classification is accompanied by a measure of quality (also indicated by an expert) that addresses the level of classification confidence. In the current study, the conceptual zonation map developed by an expert geologist was used as an a priori model. The parameter zonation problem was formulated as a multiclass classification task. Different geostatistical and machine learning algorithms were adapted and applied to solve this problem, including: indicator kriging, conditional simulations, neural networks of different architectures, and support vector machines. All methods were trained using additional soft information based on expert estimates. Regularization methods were used to overcome possible overfitting. The zonation problem was complicated by the small number of samples for some zones (classes) and by the spatial non-stationarity of the data. Special approaches were developed to overcome these complications. The comparison of different methods was performed using qualitative and quantitative statistical methods and image analysis. We examined the correspondence of the results with the geologically based interpretation, including the reproduction of the spatial orientation of the different classes and the spatial correlation structure of the classes. The uncertainty of the classification task was examined using both probabilistic interpretation of the estimators and by examining the results of a set of stochastic realizations. Characterization of the classification uncertainty is the main advantage of the proposed methods.

  20. On the treatment of evapotranspiration, soil moisture accounting, and aquifer recharge in monthly water balance models

    USGS Publications Warehouse

    Alley, William M.

    1984-01-01

    Several two- to six-parameter regional water balance models are examined by using 50-year records of monthly streamflow at 10 sites in New Jersey. These models include variants of the Thornthwaite-Mather model, the Palmer model, and the more recent Thomas abcd model. Prediction errors are relatively similar among the models. However, simulated values of state variables such as soil moisture storage differ substantially among the models, and fitted parameter values for different models sometimes indicated an entirely different type of basin response to precipitation. Some problems in parameter identification are noted, including difficulties in identifying an appropriate time lag factor for the Thornthwaite-Mather-type model for basins with little groundwater storage, very high correlations between upper and lower storages in the Palmer-type model, and large sensitivity of parameter a of the abcd model to bias in estimates of precipitation and potential evapotranspiration. Modifications to the threshold concept of the Thornthwaite-Mather model were statistically valid for the six stations in northern New Jersey. The abcd model resulted in a simulated seasonal cycle of groundwater levels similar to fluctuations observed in nearby wells but with greater persistence. These results suggest that extreme caution should be used in attaching physical significance to model parameters and in using the state variables of the models in indices of drought and basin productivity.
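
    For concreteness, the sketch below implements one monthly step of the Thomas abcd model in its commonly stated form, with soil and groundwater storages as state variables; the parameter values and forcing data are arbitrary, not the New Jersey calibrations.

```python
import numpy as np

def abcd_step(P, PE, S_prev, G_prev, a=0.98, b=250.0, c=0.6, d=0.1):
    """One monthly step of the Thomas abcd water balance model (common formulation).
    Returns streamflow Q and the updated soil (S) and groundwater (G) storages."""
    W = P + S_prev                                   # available water
    # "evapotranspiration opportunity" Y(W): smaller root of the abcd quadratic
    term = (W + b) / (2.0 * a)
    Y = term - np.sqrt(term ** 2 - W * b / a)
    S = Y * np.exp(-PE / b)                          # end-of-month soil moisture
    E = Y - S                                        # actual evapotranspiration
    avail = W - Y                                    # water available for runoff and recharge
    G = (G_prev + c * avail) / (1.0 + d)             # groundwater storage after recharge
    Q = (1.0 - c) * avail + d * G                    # direct runoff + baseflow
    return Q, S, G

# quick synthetic run with arbitrary monthly forcing (mm)
rng = np.random.default_rng(0)
S, G = 100.0, 50.0
for month in range(12):
    P, PE = rng.uniform(20, 150), rng.uniform(10, 120)
    Q, S, G = abcd_step(P, PE, S, G)
    print(f"month {month + 1:2d}: Q = {Q:6.1f} mm")
```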

  1. A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing

    NASA Astrophysics Data System (ADS)

    Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.

    2018-05-01

    Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.

  2. Multi-criteria optimization of chassis parameters of Nissan 200 SX for drifting competitions

    NASA Astrophysics Data System (ADS)

    Maniowski, M.

    2016-09-01

    The objective of this work is to increase the performance of a Nissan 200sx S13 prepared for a quasi-static state of drifting on a circular path with a given constant radius (R=15 m) and tyre-road friction coefficient (μ = 0.9). First, a high-fidelity "miMA" multibody model of the vehicle is formulated. Then, a multicriteria optimization problem is solved, with one of the goals being to maximize the stable drift angle (β) of the vehicle. The decision variables contain 11 parameters of the vehicle chassis (describing the wheel suspension stiffness and geometry) and 2 parameters responsible for the driver steering and accelerator actions that control this extreme closed-loop manoeuvre. The optimized chassis setup results in a drift angle increase of 14%, from 35 to 40 deg.

  3. Coupling-parameter expansion in thermodynamic perturbation theory.

    PubMed

    Ramana, A Sai Venkata; Menon, S V G

    2013-02-01

    An approach to the coupling-parameter expansion in the liquid state theory of simple fluids is presented by combining the ideas of thermodynamic perturbation theory and integral equation theories. This hybrid scheme avoids the problems of the latter in the two-phase region. A method to compute the perturbation series to any arbitrary order is developed and applied to square well fluids. Apart from the Helmholtz free energy, the method also gives the radial distribution function and the direct correlation function of the perturbed system. The theory is applied to square well fluids of variable ranges and compared with simulation data. While the convergence of the perturbation series and the overall performance of the theory are good, improvements are needed for potentials with shorter ranges. Possible directions for further developments in the coupling-parameter expansion are indicated.

  4. A GA based penalty function technique for solving constrained redundancy allocation problem of series system with interval valued reliability of components

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Bhunia, A. K.; Roy, D.

    2009-10-01

    In this paper, we have considered the problem of constrained redundancy allocation for a series system with interval valued reliability of components. For maximizing the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients via a penalty function technique and solved by an advanced GA for integer variables with an interval fitness function, tournament selection, uniform crossover, uniform mutation and elitism. As a special case, the corresponding problem has been solved by taking the lower and upper bounds of the interval valued reliabilities of the components to be the same. The model has been illustrated with some numerical examples, and the results of the series redundancy allocation problem with fixed values of component reliability have been compared with the existing results available in the literature. Finally, sensitivity analyses have been shown graphically to study the stability of the developed GA with respect to the different GA parameters.
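
    The penalty-function transformation plus GA loop described above can be sketched as follows for a made-up three-subsystem series system; for brevity the component reliabilities are crisp rather than interval valued, and the GA settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-subsystem series system: component reliabilities and costs (assumed values)
r = np.array([0.80, 0.85, 0.90])      # single-component reliabilities
cost = np.array([2.0, 3.0, 4.0])      # cost per redundant component
budget = 25.0
n_max = 6                             # maximum redundancy level per subsystem

def system_reliability(x):
    # series system of parallel groups: prod(1 - (1 - r_i)^x_i)
    return np.prod(1.0 - (1.0 - r) ** x)

def fitness(x, penalty=10.0):
    # penalty-function transform of the constrained problem
    violation = max(0.0, float(np.dot(cost, x) - budget))
    return system_reliability(x) - penalty * violation

def tournament(pop, fit):
    i, j = rng.integers(len(pop), size=2)
    return pop[i] if fit[i] >= fit[j] else pop[j]

pop = rng.integers(1, n_max + 1, size=(40, 3))
for gen in range(200):
    fit = np.array([fitness(ind) for ind in pop])
    new = [pop[np.argmax(fit)].copy()]          # elitism: carry over the best individual
    while len(new) < len(pop):
        p1, p2 = tournament(pop, fit), tournament(pop, fit)
        mask = rng.random(3) < 0.5              # uniform crossover
        child = np.where(mask, p1, p2)
        mut = rng.random(3) < 0.1               # uniform mutation
        child = np.where(mut, rng.integers(1, n_max + 1, size=3), child)
        new.append(child)
    pop = np.array(new)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(best, system_reliability(best), np.dot(cost, best))
```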

  5. Nonparametric relevance-shifted multiple testing procedures for the analysis of high-dimensional multivariate data with small sample sizes.

    PubMed

    Frömke, Cornelia; Hothorn, Ludwig A; Kropf, Siegfried

    2008-01-27

    In many research areas it is necessary to find differences between treatment groups with several variables. For example, studies of microarray data seek to find a significant difference in location parameters from zero (or from one for ratios thereof) for each variable. However, in some studies a significant deviation of the difference in locations from zero (or 1 in terms of the ratio) is biologically meaningless. A relevant difference or ratio is sought in such cases. This article addresses the use of relevance-shifted tests on ratios for a multivariate parallel two-sample group design. Two empirical procedures are proposed which embed the relevance-shifted test on ratios. As both procedures test a hypothesis for each variable, the resulting multiple testing problem has to be considered. Hence, the procedures include a multiplicity correction. Both procedures are extensions of available procedures for point null hypotheses achieving exact control of the familywise error rate. Whereas the shift of the null hypothesis alone would give straightforward solutions, the problems that motivate the empirical considerations discussed here arise from the fact that the shift is considered in both directions and the whole parameter space in between these two limits has to be accepted as the null hypothesis. The first algorithm to be discussed uses a permutation algorithm, and is appropriate for designs with a moderately large number of observations. However, many experiments have limited sample sizes. Then the second procedure might be more appropriate, where multiplicity is corrected according to a concept of data-driven order of hypotheses.

  6. Role of morphometry in the cytological differentiation of benign and malignant thyroid lesions

    PubMed Central

    Khatri, Pallavi; Choudhury, Monisha; Jain, Manjula; Thomas, Shaji

    2017-01-01

    Context: Thyroid nodules represent a common problem, with an estimated prevalence of 4–7%. Although fine needle aspiration cytology (FNAC) has been accepted as a first line diagnostic test, the rate of false negative reports of malignancy is still high. Nuclear morphometry is the measurement of nuclear parameters by image analysis. Image analysis can merge the advantages of morphologic interpretation with those of quantitative data. Aims: To evaluate the nuclear morphometric parameters in fine needle aspirates of thyroid lesions and to study its role in differentiating benign from malignant thyroid lesions. Material and Methods: The study included 19 benign and 16 malignant thyroid lesions. Image analysis was performed on Giemsa-stained FNAC slides by Nikon NIS-Elements Advanced Research software (Version 4.00). Nuclear morphometric parameters analyzed included nuclear size, shape, texture, and density parameters. Statistical Analysis: Normally distributed continuous variables were compared using the unpaired t-test for two groups and analysis of variance was used for three or more groups. Tukey or Tamhane's T2 multiple comparison test was used to assess the differences between the individual groups. Categorical variables were analyzed using the chi square test. Results and Conclusion: Five out of the six nuclear size parameters as well as all the texture and density parameters studied were significant in distinguishing between benign and malignant thyroid lesions (P < 0.05). Cut-off values were derived to differentiate between benign and malignant cases. PMID:28182069

  7. More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server

    PubMed Central

    Ho, Qirong; Cipar, James; Cui, Henggang; Kim, Jin Kyu; Lee, Seunghak; Gibbons, Phillip B.; Gibson, Garth A.; Ganger, Gregory R.; Xing, Eric P.

    2014-01-01

    We propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared interface for read/write access to an ML model’s values (parameters and variables), and the SSP model allows distributed workers to read older, stale versions of these values from a local cache, instead of waiting to get them from a central storage. This significantly increases the proportion of time workers spend computing, as opposed to waiting. Furthermore, the SSP model ensures ML algorithm correctness by limiting the maximum age of the stale values. We provide a proof of correctness under SSP, as well as empirical results demonstrating that the SSP model achieves faster algorithm convergence on several different ML problems, compared to fully-synchronous and asynchronous schemes. PMID:25400488

  8. An adaptive tracking observer for failure-detection systems

    NASA Technical Reports Server (NTRS)

    Sidar, M.

    1982-01-01

    The design problem of adaptive observers applied to linear, constant- and variable-parameter, multi-input, multi-output systems is considered. It is shown that, in order to keep the observer's (or Kalman filter's) false-alarm rate (FAR) under a certain specified value, it is necessary to have an acceptably close matching between the observer (or KF) model and the system parameters. An adaptive observer algorithm is introduced in order to maintain the desired system-observer model matching, despite initial mismatching and/or system parameter variations. Only a properly designed adaptive observer is able to detect abrupt changes in the system (actuator, sensor failures, etc.) with adequate reliability and FAR. Conditions for convergence of the adaptive process were obtained, leading to a simple adaptive law (algorithm) with the possibility of an a priori choice of fixed adaptive gains. Simulation results show good tracking performance with small observer output errors and accurate and fast parameter identification, in both deterministic and stochastic cases.

  9. A new numerical benchmark for variably saturated variable-density flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Guevara, Carlos; Graf, Thomas

    2016-04-01

    In subsurface hydrological systems, spatial and temporal variations in solute concentration and/or temperature may affect fluid density and viscosity. These variations can lead to potentially unstable situations, in which a dense fluid overlies a less dense fluid. Such situations can produce instabilities that appear as dense plume fingers migrating downwards, counteracted by vertical upwards flow of freshwater (Simmons et al., Transp. Porous Medium, 2002). As a result of unstable variable-density flow, solute transport rates are increased over large distances and times as compared to constant-density flow. The numerical simulation of variable-density flow in saturated and unsaturated media requires corresponding benchmark problems against which a computer model is validated (Diersch and Kolditz, Adv. Water Resour, 2002). Recorded data from a laboratory-scale experiment of variable-density flow and solute transport in saturated and unsaturated porous media (Simmons et al., Transp. Porous Medium, 2002) are used to define a new numerical benchmark. The HydroGeoSphere code (Therrien et al., 2004) coupled with PEST (www.pesthomepage.org) is used to obtain an optimized parameter set capable of adequately representing the data set of Simmons et al. (2002). Fingering in the numerical model is triggered using random hydraulic conductivity fields. Due to the inherent randomness, a large number of simulations were conducted in this study. The optimized benchmark model adequately predicts the plume behavior and the fate of solutes. This benchmark is useful for model verification of variable-density flow problems in saturated and/or unsaturated media.

  10. Conceptual Layout of Wing Structure Using Topology Optimization for Morphing Micro Air Vehicles in a Perching Maneuver

    DTIC Science & Technology

    2012-03-22

    the fraction of the design space to be filled with material (termed “volume fraction”), and any other desired design restrictions such as a ...topology problem is called a distributed parameter system because the design variables represent a field or continuum with infinite degrees of freedom... with the addition of a few solutions that were a combination of honeycomb and fiber cells. Unlike

  11. LSENS, A General Chemical Kinetics and Sensitivity Analysis Code for Homogeneous Gas-Phase Reactions. Part 2; Code Description and Usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  12. Estimating model parameters for an impact-produced shock-wave simulation: Optimal use of partial data with the extended Kalman filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, Jim; Flicker, Dawn; Ide, Kayo

    2006-05-20

    This paper builds upon our recent data assimilation work with the extended Kalman filter (EKF) method [J. Kao, D. Flicker, R. Henninger, S. Frey, M. Ghil, K. Ide, Data assimilation with an extended Kalman filter for an impact-produced shock-wave study, J. Comp. Phys. 196 (2004) 705-723.]. The purpose is to test the capability of EKF in optimizing a model's physical parameters. The problem is to simulate the evolution of a shock produced through a high-speed flyer plate. In the earlier work, we showed that the EKF allows one to estimate the evolving state of the shock wave from a single pressure measurement, assuming that all model parameters are known. In the present paper, we show that imperfectly known model parameters can also be estimated accordingly, along with the evolving model state, from the same single measurement. The model parameter optimization using the EKF can be achieved through a simple modification of the original EKF formalism by including the model parameters into an augmented state variable vector. While the regular state variables are governed by both deterministic and stochastic forcing mechanisms, the parameters are only subject to the latter. The optimally estimated model parameters are thus obtained through a unified assimilation operation. We show that improving the accuracy of the model parameters also improves the state estimate. The time variation of the optimized model parameters results from blending the data and the corresponding values generated from the model and lies within a small range, of less than 2%, from the parameter values of the original model. The solution computed with the optimized parameters performs considerably better and has a smaller total variance than its counterpart using the original time-constant parameters. These results indicate that the model parameters play a dominant role in the performance of the shock-wave hydrodynamic code at hand.
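
    The augmentation described above amounts to appending the uncertain parameter to the state vector, letting it evolve only through a small random walk, and running the usual EKF recursion on the augmented system. The sketch below does this for a toy scalar model with one unknown parameter; it is a stand-in for, not a reproduction of, the shock-wave hydrocode setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy truth: x_{t+1} = a*x_t + 0.1 + w,  y_t = x_t + v, with unknown parameter a = 0.85
a_true, q, r = 0.85, 0.05, 0.2
x = 1.0
ys = []
for _ in range(400):
    x = a_true * x + 0.1 + rng.normal(0, q)      # small forcing keeps the state excited
    ys.append(x + rng.normal(0, r))

# augmented state z = [x, a]; the parameter evolves only through process noise
z = np.array([0.0, 0.5])                         # initial guesses for state and parameter
P = np.diag([1.0, 1.0])
Q = np.diag([q ** 2, 1e-5])                      # tiny random walk assigned to the parameter
R = np.array([[r ** 2]])
H = np.array([[1.0, 0.0]])

for y in ys:
    # predict: f(z) = [a*x + 0.1, a]; F is its Jacobian with respect to [x, a]
    F = np.array([[z[1], z[0]],
                  [0.0, 1.0]])
    z = np.array([z[1] * z[0] + 0.1, z[1]])
    P = F @ P @ F.T + Q
    # update with the scalar measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z + K @ (np.array([y]) - H @ z)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated parameter a = {z[1]:.3f} (truth {a_true})")
```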

  13. Generalized self-similar unsteady gas flows behind the strong shock wave front

    NASA Astrophysics Data System (ADS)

    Bogatko, V. I.; Potekhina, E. A.

    2018-05-01

    Two-dimensional (plane and axially symmetric) nonstationary gas flows behind the front of a strong shock wave are considered. All the gas parameters are functions of the ratio of the Cartesian coordinates to a power of time, t^n, where n is the self-similarity index. The problem is solved in Lagrangian variables. It is shown that the resulting system of partial differential equations is suitable for constructing an iterative process. The "thin shock layer" method is used to construct an approximate analytical solution of the problem. The limit solution of the problem is constructed. A formula for determining the path traversed by a gas particle in the shock layer along the front of a shock wave is obtained. A system of equations for determining the first approximation corrections is constructed.

  14. An Automated Solution of the Low-Thrust Interplanetary Trajectory Problem.

    PubMed

    Englander, Jacob A; Conway, Bruce A

    2017-01-01

    Preliminary design of low-thrust interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys, the bodies at which those flybys are performed, and in some cases the final destination. In addition, a time-history of control variables must be chosen that defines the trajectory. There are often many thousands, if not millions, of possible trajectories to be evaluated, which can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the mission design problem as a hybrid optimal control problem. The method is demonstrated on hypothetical missions to Mercury, the main asteroid belt, and Pluto.

  15. An Automated Solution of the Low-Thrust Interplanetary Trajectory Problem

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Conway, Bruce

    2016-01-01

    Preliminary design of low-thrust interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys, the bodies at which those flybys are performed, and in some cases the final destination. In addition, a time-history of control variables must be chosen that defines the trajectory. There are often many thousands, if not millions, of possible trajectories to be evaluated, which can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the mission design problem as a hybrid optimal control problem. The method is demonstrated on hypothetical missions to Mercury, the main asteroid belt, and Pluto.

  16. An Automated Solution of the Low-Thrust Interplanetary Trajectory Problem

    PubMed Central

    Englander, Jacob A.; Conway, Bruce A.

    2017-01-01

    Preliminary design of low-thrust interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys, the bodies at which those flybys are performed, and in some cases the final destination. In addition, a time-history of control variables must be chosen that defines the trajectory. There are often many thousands, if not millions, of possible trajectories to be evaluated, which can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the mission design problem as a hybrid optimal control problem. The method is demonstrated on hypothetical missions to Mercury, the main asteroid belt, and Pluto. PMID:29515289

  17. Aircraft Turbofan Engine Health Estimation Using Constrained Kalman Filtering

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2003-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results obtained from application to a turbofan engine model. This model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
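
    One common way to realize the combination of a Kalman filter with a quadratic program is to project the unconstrained estimate onto the constraint set, weighting by the inverse covariance. The sketch below does this with a generic SciPy solver on a made-up two-state system; it illustrates the projection idea only and is not the turbofan health-estimation model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# made-up linear system with two states that must stay non-negative and sum to <= 1
F = np.array([[0.95, 0.02], [0.01, 0.97]])
H = np.eye(2)
Q = 1e-4 * np.eye(2)
R = 1e-2 * np.eye(2)

def project(x_hat, P):
    """Project the unconstrained estimate onto {x >= 0, x1 + x2 <= 1},
    weighting the distance by the inverse covariance (a standard choice)."""
    W = np.linalg.inv(P)
    cons = [{"type": "ineq", "fun": lambda x: x},              # x >= 0 (componentwise)
            {"type": "ineq", "fun": lambda x: 1.0 - x.sum()}]  # x1 + x2 <= 1
    res = minimize(lambda x: (x - x_hat) @ W @ (x - x_hat), x_hat,
                   constraints=cons, method="SLSQP")
    return res.x

x_true = np.array([0.6, 0.3])
x_est, P = np.array([0.5, 0.5]), np.eye(2)
for _ in range(50):
    x_true = F @ x_true
    y = H @ x_true + rng.multivariate_normal(np.zeros(2), R)
    # standard Kalman predict/update
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (y - H @ x_est)
    P = (np.eye(2) - K @ H) @ P
    x_est = project(x_est, P)                                  # enforce the constraints

print(np.round(x_est, 3), np.round(x_true, 3))
```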

  18. GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.

    PubMed

    Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N

    2018-01-01

    Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.

  19. TRUMP; transient and steady state temperature distribution. [IBM360,370; CDC7600; FORTRAN IV (95%) and BAL (5%) (IBM); FORTRAN IV (CDC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elrod, D.C.; Turner, W.D.

    TRUMP solves a general nonlinear parabolic partial differential equation describing flow in various kinds of potential fields, such as fields of temperature, pressure, or electricity and magnetism; simultaneously, it will solve two additional equations representing, in thermal problems, heat production by decomposition of two reactants having rate constants with a general Arrhenius temperature dependence. Steady-state and transient flow in one, two, or three dimensions are considered in geometrical configurations having simple or complex shapes and structures. Problem parameters may vary with spatial position, time, or primary dependent variables--temperature, pressure, or field strength. Initial conditions may vary with spatial position, and among the criteria that may be specified for ending a problem are upper and lower limits on the size of the primary dependent variable, upper limits on the problem time or on the number of time-steps or on the computer time, and attainment of steady state. IBM360,370; CDC7600; FORTRAN IV (95%) and BAL (5%) (IBM); FORTRAN IV (CDC); OS/360 (IBM360), OS/370 (IBM370), SCOPE 2.1.5 (CDC7600). As dimensioned, the program requires 400K bytes of storage on an IBM370 and 145,100 (octal) words on a CDC7600.

  20. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.

  1. A differential game solution to the Coplanar tail-chase aerial combat problem

    NASA Technical Reports Server (NTRS)

    Merz, A. W.; Hague, D. S.

    1976-01-01

    Numerical results obtained in a simplified version of the one-on-one aerial combat problem are presented. The primary aim of the data is to specify the roles of pursuer and evader as functions of the relative geometry and of the significant physical parameters of the problem. Numerical results are given for a case in which the slower aircraft is more maneuverable than the faster aircraft. A third-order dynamic model of the relative motion is described, for which the state variables are relative range, bearing, and heading. The ranges at termination are arbitrary in the present version of the problem, so the weapon systems of both aircraft can be visualized as forward-firing, high-velocity weapons, which must be aimed at the tail pipe of the evader. It was found that, for the great majority of the relative geometries, each aircraft can evade the weapon system of the other.

  2. Optimal solution of full fuzzy transportation problems using total integral ranking

    NASA Astrophysics Data System (ADS)

    Sam’an, M.; Farikhin; Hariyanto, S.; Surarso, B.

    2018-03-01

    The full fuzzy transportation problem (FFTP) is a transportation problem in which transport costs, demand, supply and decision variables are all expressed as fuzzy numbers. To solve a fuzzy transportation problem, the fuzzy parameters must be converted to crisp numbers, a step known as defuzzification. In this work, a new total integral ranking method is applied to fuzzy numbers obtained by converting trapezoidal fuzzy numbers into hexagonal fuzzy numbers; the defuzzification is found to be consistent for symmetric hexagonal and non-symmetric type-2 fuzzy numbers as well as for triangular fuzzy numbers. The optimum solution of the FFTP is then computed with a fuzzy transportation algorithm based on the least-cost method. From this optimum solution, it is found that the choice of fuzzy number form in the total integral ranking, together with the index of optimism, gives different optimum values. In addition, the total integral ranking using hexagonal fuzzy numbers yields a better optimal value than the total integral ranking using trapezoidal fuzzy numbers.

  3. Online quantitative analysis of multispectral images of human body tissues

    NASA Astrophysics Data System (ADS)

    Lisenko, S. A.

    2013-08-01

    A method is developed for online monitoring of structural and morphological parameters of biological tissues (haemoglobin concentration, degree of blood oxygenation, average diameter of capillaries and the parameter characterising the average size of tissue scatterers), which involves multispectral tissue imaging, image normalisation to one of its spectral layers and determination of unknown parameters based on their stable regression relation with the spectral characteristics of the normalised image. Regression is obtained by simulating numerically the diffuse reflectance spectrum of the tissue by the Monte Carlo method at a wide variation of model parameters. The correctness of the model calculations is confirmed by the good agreement with the experimental data. The error of the method is estimated under conditions of general variability of structural and morphological parameters of the tissue. The method developed is compared with the traditional methods of interpretation of multispectral images of biological tissues, based on the solution of the inverse problem for each pixel of the image in the approximation of different analytical models.

  4. Computational modeling of unsteady third-grade fluid flow over a vertical cylinder: A study of heat transfer visualization

    NASA Astrophysics Data System (ADS)

    Reddy, G. Janardhana; Hiremath, Ashwini; Kumar, Mahesh

    2018-03-01

    The present paper aims to investigate the effect of the Prandtl number on unsteady third-grade fluid flow over a uniformly heated vertical cylinder using Bejan's heat function concept. The mathematical model of this problem is given by highly time-dependent non-linear coupled equations and is resolved by an efficient, unconditionally stable implicit scheme. The time histories of the average values of the momentum and heat transport coefficients, as well as the steady-state flow variables, are displayed graphically for distinct values of the non-dimensional control parameters arising in the system. As the non-dimensional parameter value is amplified, the time taken for the fluid flow variables to attain the time-independent state decreases. The dimensionless heat function values are closely associated with the overall rate of heat transfer. Thermal energy transfer visualization implies that the heat function contours are compact in the neighborhood of the leading edge of the hot cylindrical wall. It is noticed that the deviations of the flow-field variables from the hot wall for a non-Newtonian third-grade fluid flow are significant compared to the usual Newtonian fluid flow.

  5. Effect of mobile phone radiation on heart rate variability.

    PubMed

    Ahamed, V I Thajudin; Karthick, N G; Joseph, Paul K

    2008-06-01

    The rapid increase in the use of mobile phones (MPs) in recent years has raised the problem of health risk connected with high-frequency electromagnetic fields. There are reports of headache, dizziness, numbness in the thigh, and heaviness in the chest among MP users. This paper deals with the neurological effect of electromagnetic fields radiated from MPs, by studies on heart rate variability (HRV) of 14 male volunteers. As heart rate is modulated by the autonomic nervous system, study of HRV can be used for assessing the neurological effect. The parameters used in this study for quantifying the effect on HRV are scaling exponent and sample entropy. The result indicates an increase in both the parameters when MP is kept close to the chest and a decrease when kept close to the head. MP has caused changes in HRV indices and the change varied with its position, but the changes cannot be considered significant as the p values are high.

  6. Some Open Issues on Rockfall Hazard Analysis in Fractured Rock Mass: Problems and Prospects

    NASA Astrophysics Data System (ADS)

    Ferrero, Anna Maria; Migliazza, Maria Rita; Pirulli, Marina; Umili, Gessica

    2016-09-01

    Risk is part of every sector of engineering design. It is a consequence of the uncertainties connected with the cognitive boundaries and with the natural variability of the relevant variables. In soil and rock engineering, in particular, uncertainties are linked to geometrical and mechanical aspects and to the model used for the problem schematization. While the uncertainties due to cognitive gaps could be filled by improving the quality of numerical codes and measuring instruments, nothing can be done to remove the randomness of natural variables, except defining their variability with stochastic approaches. Probabilistic analyses represent a useful tool to run parametric analyses and to identify the more significant aspects of a given phenomenon: they can be used for a rational quantification and mitigation of risk. The connection between the cognitive level and the probability of failure is at the basis of hazard determination, which is often quantified through the assignment of safety factors. But these factors suffer from conceptual limits, which can only be overcome by adopting mathematical techniques with sound foundations that have so far seen little use (Einstein et al. in rock mechanics in civil and environmental engineering, CRC Press, London, 3-13, 2010; Brown in J Rock Mech Geotech Eng 4(3):193-204, 2012). The present paper describes the problems and the more reliable techniques used to quantify the uncertainties that characterize the large number of parameters involved in rock slope hazard assessment, through a real case specifically related to rockfall. Limits of the existing approaches and future developments of the research are also provided.

  7. Optimization of Composite Structures with Curved Fiber Trajectories

    NASA Astrophysics Data System (ADS)

    Lemaire, Etienne; Zein, Samih; Bruyneel, Michael

    2014-06-01

    This paper studies the problem of optimizing composite shells manufactured using Automated Tape Layup (ATL) or Automated Fiber Placement (AFP) processes. The optimization procedure relies on a new approach to generate equidistant fiber trajectories based on the Fast Marching Method. Starting with a (possibly curved) reference fiber direction defined on a (possibly curved) meshed surface, the new method allows determining the fiber orientations resulting from a uniform-thickness layup. The design variables are the parameters defining the position and the shape of the reference curve, which results in very few design variables. Thanks to this efficient parameterization, maximum-stiffness optimization numerical applications are proposed. The shape of the design space is discussed, regarding local and global optimal solutions.

  8. Estimating urban ground-level PM10 using MODIS 3km AOD product and meteorological parameters from WRF model

    NASA Astrophysics Data System (ADS)

    Ghotbi, Saba; Sotoudeheian, Saeed; Arhami, Mohammad

    2016-09-01

    Satellite remote sensing products of AOD from MODIS, along with appropriate meteorological parameters, were used to develop statistical models and estimate ground-level PM10. Most previous studies obtained meteorological data from synoptic weather stations, with rather sparse spatial distribution, and used them along with the 10 km AOD product to develop statistical models applicable to PM variations at regional scale (resolution of ≥10 km). In the current study, meteorological parameters were simulated with 3 km resolution using the WRF model and used along with the rather new 3 km AOD product (launched in 2014). The resulting PM statistical models were assessed for a polluted and highly variable urban area, Tehran, Iran. Despite the critical particulate pollution problem, very few PM studies have been conducted in this area. Rather poor direct PM-AOD associations were an issue, due to factors such as variations in particle optical properties, in addition to the bright-background problem for satellite retrievals, since the studied area is located in the semi-arid Middle East. The statistical approach of linear mixed effects (LME) was used, and three types of statistical models were examined: a single-variable LME model (using AOD as the independent variable) and multivariable LME models using meteorological data from two sources, the WRF model and synoptic stations. Meteorological simulations were performed using a multiscale approach and an appropriate physics configuration for the studied region, and the results showed rather good agreement with recordings of the synoptic stations. The single-variable LME model was able to explain about 61%-73% of daily PM10 variations, reflecting a rather acceptable performance. The performance of the statistical models improved when using the multivariable LME and incorporating meteorological data as auxiliary variables, particularly with fine-resolution outputs from WRF (R2 = 0.73-0.81). In addition, PM estimates were mapped at rather fine resolution for the studied city, and the resulting concentration maps were consistent with PM recordings at the existing stations.
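
    A minimal sketch of a day-grouped linear mixed-effects PM10-AOD model of this kind can be written with statsmodels; the file name, column names (pm10, aod, blh, rh, t2, day) and the choice of a day-specific random intercept and AOD slope are illustrative assumptions, not the study's exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: pm10 (station PM10), aod (MODIS 3 km AOD), blh, rh, t2
# (WRF boundary-layer height, relative humidity, 2 m temperature), day (grouping factor).
df = pd.read_csv("tehran_pm10_aod_wrf.csv").dropna()

# Single-variable LME: AOD as fixed effect, day-specific random intercept and AOD slope.
m1 = smf.mixedlm("pm10 ~ aod", df, groups=df["day"], re_formula="~aod").fit()

# Multivariable LME: WRF meteorology added as auxiliary fixed effects.
m2 = smf.mixedlm("pm10 ~ aod + blh + rh + t2", df, groups=df["day"], re_formula="~aod").fit()
print(m1.summary())
print(m2.summary())
```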

  9. Impacts of variable thermal conductivity on stagnation point boundary layer flow past a Riga plate with variable thickness using generalized Fourier's law

    NASA Astrophysics Data System (ADS)

    Shah, S.; Hussain, S.; Sagheer, M.

    2018-06-01

    This article explores the problem of steady, laminar, two-dimensional boundary layer stagnation-point slip flow over a Riga plate. The incompressible upper-convected Maxwell fluid has been considered as the rheological fluid model. The heat transfer characteristics are investigated with a generalized Fourier's law. The fluid thermal conductivity is assumed to be temperature dependent in this study. A system of partial differential equations governing the flow of an upper-convected Maxwell fluid and the heat and mass transfer under the generalized Fourier's law is developed. The main objective of the article is to inspect the impacts of pertinent physical parameters such as the stretching ratio parameter (0 ⩽ A ⩽ 0.3), Deborah number (0 ⩽ β ⩽ 0.6), thermal relaxation parameter (0 ⩽ γ ⩽ 0.5), wall thickness parameter (0.1 ⩽ α ⩽ 3.5), slip parameter (0 ⩽ R ⩽ 1.5), thermal conductivity parameter (0.1 ⩽ δ ⩽ 1.0) and modified Hartmann number (0 ⩽ Q ⩽ 3) on the velocity and temperature profiles. Suitable local similarity transformations have been used to obtain a system of non-linear ODEs from the governing PDEs. The numerical solutions for the dimensionless velocity and temperature distributions have been achieved by employing an effective numerical method, the shooting method. The velocity profile shows a reduction in velocity for higher values of the viscoelastic parameter and the thermal relaxation parameter. In addition, to enhance the reliability of the numerical results obtained by the shooting method, the MATLAB built-in solver bvp4c has also been utilized.
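
    The shooting idea used here reduces a two-point boundary value problem to repeated initial value solves, adjusting the unknown initial slope until the far boundary condition is met. The sketch below applies it to a toy linear BVP (y'' = -y, y(0) = 0, y(1) = 1), not to the paper's similarity equations, purely to illustrate the technique with SciPy.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(t, z):                         # y'' = -y written as a first-order system
    y, yp = z
    return [yp, -y]

def boundary_residual(slope):
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope], rtol=1e-9, atol=1e-9)
    return sol.y[0, -1] - 1.0          # miss distance at the far boundary y(1) = 1

slope = brentq(boundary_residual, 0.1, 5.0)   # shoot until the far BC is satisfied
print(slope, 1.0 / np.sin(1.0))               # both ≈ 1.1884 (exact slope is 1/sin(1))
```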

  10. Technical support for creating an artificial intelligence system for feature extraction and experimental design

    NASA Technical Reports Server (NTRS)

    Glick, B. J.

    1985-01-01

    Techniques for classifying objects into groups or classes go under many different names including, most commonly, cluster analysis. Mathematically, the general problem is to find the best mapping of objects into an index set consisting of class identifiers. When an a priori grouping of objects exists, the process of deriving the classification rules from samples of classified objects is known as discrimination. When such rules are applied to objects of unknown class, the process is denoted classification. The specific problem addressed involves the group classification of a set of objects that are each associated with a series of measurements (ratio, interval, ordinal, or nominal levels of measurement). Each measurement produces one variable in a multidimensional variable space. Cluster analysis techniques are reviewed and methods for including geographic location, distance measures, and spatial pattern (distribution) as parameters in clustering are examined. For the case of patterning, measures of spatial autocorrelation are discussed in terms of the kind of data (nominal, ordinal, or interval scaled) to which they may be applied.
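
    As an example of a spatial autocorrelation measure of the kind discussed here (for interval-scaled data), the following sketch computes Moran's I from a value vector and a user-supplied spatial weight matrix; the toy sites and rook-contiguity weights are assumptions chosen only for illustration.

```python
import numpy as np

def morans_i(x, w):
    """Moran's I for values x at n sites with spatial weight matrix w (n x n, zero diagonal)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    num = np.sum(w * np.outer(z, z))
    return len(x) / w.sum() * num / np.sum(z ** 2)

# Example: 4 sites along a line with rook-style contiguity weights.
x = [1.0, 2.0, 2.5, 4.0]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i(x, w))   # positive value -> positive spatial autocorrelation
```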

  11. A diffusion model of protected population on bilocal habitat with generalized resource

    NASA Astrophysics Data System (ADS)

    Vasilyev, Maxim D.; Trofimtsev, Yuri I.; Vasilyeva, Natalya V.

    2017-11-01

    A model of population distribution in a two-dimensional area divided by an ecological barrier, i.e., the boundary of a natural reserve, is considered. The distribution of the population is defined by diffusion, directed migrations and the areal resource. An exchange of specimens occurs between the two parts of the habitat. The mathematical model is presented in the form of a boundary value problem for a system of non-linear parabolic equations with variable parameters of diffusion and growth function. Splitting of the space variables, the sweep (Thomas) method and simple iteration were used for the numerical solution of the system. A set of programs was coded in Python. Numerical simulation results for the two-dimensional unsteady non-linear problem are analyzed in detail. The influence of migration flow coefficients and of the natural birth/death rate functions on the distributions of population densities is investigated. The results of the research allow describing the conditions for the stable and sustainable existence of populations in a bilocal habitat containing protected and non-protected zones.
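
    The sweep method referred to here is the Thomas algorithm for the tridiagonal systems that arise after operator splitting of the parabolic equations; a generic Python implementation is sketched below (the example system is illustrative, not the authors' code).

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (the 'sweep' method): a sub-, b main-, c super-diagonal, d RHS."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the kind of system produced by one implicit diffusion half-step (values illustrative).
n = 5
a = np.full(n, -1.0); b = np.full(n, 2.1); c = np.full(n, -1.0); d = np.ones(n)
print(thomas(a, b, c, d))
```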

  12. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context

    PubMed Central

    Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan

    2012-01-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals are large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNPs), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso. PMID:22347720
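
    The instability can be reproduced with an ordinary Lasso in place of SCAD or the Adaptive Lasso: repeating 10-fold cross-validation with different random splits on a sparse, weak-signal design changes the number of selected variables from run to run. The sketch below uses scikit-learn; the dimensions and signal strengths are arbitrary choices, not those of the paper.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.standard_normal((n, p))              # SNP-like design (here simply Gaussian)
beta = np.zeros(p); beta[:10] = 0.3          # sparse, weak signals
y = X @ beta + rng.standard_normal(n)

counts = []
for seed in range(20):                        # repeat 10-fold CV with different splits
    cv = KFold(n_splits=10, shuffle=True, random_state=seed)
    fit = LassoCV(cv=cv).fit(X, y)
    counts.append(int(np.sum(fit.coef_ != 0)))
print(counts)                                 # number of selected variables varies run to run
```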

  13. Multiple Attribute Group Decision-Making Methods Based on Trapezoidal Fuzzy Two-Dimensional Linguistic Partitioned Bonferroni Mean Aggregation Operators.

    PubMed

    Yin, Kedong; Yang, Benshuo; Li, Xuemei

    2018-01-24

    In this paper, we investigate multiple attribute group decision making (MAGDM) problems where decision makers represent their evaluation of alternatives by trapezoidal fuzzy two-dimensional uncertain linguistic variables. To begin with, we introduce the definition, properties, expectation and operational laws of trapezoidal fuzzy two-dimensional linguistic information. Then, to improve the accuracy of decision making in cases where there is some interrelationship among the attributes, we analyze the partitioned Bonferroni mean (PBM) operator in a trapezoidal fuzzy two-dimensional linguistic environment and develop two operators: the trapezoidal fuzzy two-dimensional linguistic partitioned Bonferroni mean (TF2DLPBM) aggregation operator and the trapezoidal fuzzy two-dimensional linguistic weighted partitioned Bonferroni mean (TF2DLWPBM) aggregation operator. Furthermore, we develop a novel method to solve MAGDM problems based on the TF2DLWPBM aggregation operator. Finally, a practical example is presented to illustrate the effectiveness of this method and to analyze the impact of different parameters on the results of decision-making.
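
    For orientation, the partitioned Bonferroni mean on ordinary crisp numbers averages a Bonferroni-type mean computed separately within each attribute partition, so that only attributes in the same partition interact. The sketch below implements one common crisp form as a reference point; the partition structure, p, q and sample values are illustrative assumptions, and the paper's trapezoidal fuzzy two-dimensional linguistic operational laws are not reproduced.

```python
def pbm(values, partitions, p=1.0, q=1.0):
    """Crisp partitioned Bonferroni mean; partitions is a list of index lists,
    each assumed to contain at least two attributes."""
    terms = []
    for P in partitions:
        inner = 0.0
        for i in P:
            others = sum(values[j] ** q for j in P if j != i) / (len(P) - 1)
            inner += values[i] ** p * others
        terms.append((inner / len(P)) ** (1.0 / (p + q)))
    return sum(terms) / len(partitions)

# Four attribute scores split into two partitions of interrelated attributes.
print(pbm([0.6, 0.8, 0.5, 0.9], [[0, 1], [2, 3]], p=1, q=1))
```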

  14. Multiple Attribute Group Decision-Making Methods Based on Trapezoidal Fuzzy Two-Dimensional Linguistic Partitioned Bonferroni Mean Aggregation Operators

    PubMed Central

    Yin, Kedong; Yang, Benshuo

    2018-01-01

    In this paper, we investigate multiple attribute group decision making (MAGDM) problems where decision makers represent their evaluation of alternatives by trapezoidal fuzzy two-dimensional uncertain linguistic variables. To begin with, we introduce the definition, properties, expectation and operational laws of trapezoidal fuzzy two-dimensional linguistic information. Then, to improve the accuracy of decision making in cases where there is some interrelationship among the attributes, we analyze the partitioned Bonferroni mean (PBM) operator in a trapezoidal fuzzy two-dimensional linguistic environment and develop two operators: the trapezoidal fuzzy two-dimensional linguistic partitioned Bonferroni mean (TF2DLPBM) aggregation operator and the trapezoidal fuzzy two-dimensional linguistic weighted partitioned Bonferroni mean (TF2DLWPBM) aggregation operator. Furthermore, we develop a novel method to solve MAGDM problems based on the TF2DLWPBM aggregation operator. Finally, a practical example is presented to illustrate the effectiveness of this method and to analyze the impact of different parameters on the results of decision-making. PMID:29364849

  15. A hierarchical Bayesian method for vibration-based time domain force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Li, Qiaofeng; Lu, Qiuhai

    2018-05-01

    Traditional force reconstruction techniques require prior knowledge of the nature of the force to determine the regularization term. When such information is unavailable, an inappropriate term is easily chosen and the reconstruction result becomes unsatisfactory. In this paper, we propose a novel method to automatically determine the appropriate q in ℓq regularization and reconstruct the force history. The method incorporates all to-be-determined variables, such as the force history, precision parameters and q, into a hierarchical Bayesian formulation. The posterior distributions of the variables are evaluated by a Metropolis-within-Gibbs sampler. Point estimates of the variables and their uncertainties are given. Simulations of a cantilever beam and a space truss under various loading conditions validate the proposed method in providing adaptive determination of q and better reconstruction performance than existing Bayesian methods.

  16. A Jump Systems Formalism for Optimal Broke Recirculation in a Paper Machine

    NASA Astrophysics Data System (ADS)

    Khanbaghi, Maryam

    Increasing closure of white water circuits is making mill productivity and the quality of the paper produced increasingly affected by the occurrence of paper breaks. The main objective of this thesis is the development of white water and broke recirculation policies. The thesis consists of three main parts, respectively corresponding to the synthesis of a statistical model of paper breaks in a paper mill, the basic mathematical setup for the formulation of white water and broke recirculation policies in the mill as a jump linear quadratic regulation problem, and finally the tuning of the control law based on first passage-time theory and its extension to the case of control-sensitive paper break rates. More specifically, in the first part a statistical model of paper machine breaks is developed. We start from the hypothesis that the break process is a Markov chain with three states: the first state is the operational one, while the two others are associated with the general types of paper breaks that can take place in the mill (wet breaks and dry breaks). The Markovian hypothesis is empirically validated. We also establish how paper-break rates are correlated with machine speed and broke recirculation ratio. Subsequently, we show how the obtained Markov chain model of paper breaks can be used to formulate a machine operating speed parameter optimization problem. In the second part, upon recognizing that paper breaks can be modelled as a Markov chain type of process which, when interacting with the continuous mill dynamics, yields a jump Markov model, jump linear theory is proposed as a means of constructing white water and broke recirculation strategies which minimize process variability. Reduced process variability comes at the expense of relatively large swings in white water and broke tank levels. Since the linear design does not specifically account for constraints on the state space, under the resulting law damaging events of tank overflow or emptiness can occur. A heuristic simulation-based approach is proposed to choose the performance measure design parameters so as to keep the mean time between incidents of fluid in the broke and white water tanks either overflowing or reaching dangerously low levels sufficiently long. In the third part, a methodology, mainly founded on the first passage-time theory of stochastic processes, is proposed to choose the performance measure design parameters so as to limit process variability while accounting for the possibility of undesirable tank overflow or emptiness. The heart of the approach is an approximation technique for evaluating mean first passage times of the controlled tank levels. This technique appears to have an applicability that largely exceeds the problem area it was designed for. Furthermore, the introduction of control-sensitive break rates and the analysis of the ensuing control problem are presented. This accounts for the experimentally observed increase in breaks concomitant with flow rate variability.

  17. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    NASA Astrophysics Data System (ADS)

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and a PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation, for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime-algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators, and with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA- and PSO-optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and additions required to generate successful problem-specific parameter sets.
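
    A generic global-best PSO with the usual velocity update equation can be sketched as follows; the sphere function stands in for the MSER detection-accuracy fitness, and the exponentially decaying inertia weight is only a stand-in for the paper's exponential velocity decay and other problem-specific additions.

```python
import numpy as np

def sphere(x):                       # stand-in fitness; the paper uses MSER detection accuracy
    return np.sum(x ** 2)

rng = np.random.default_rng(1)
n_particles, dim, iters = 20, 5, 100
lo, hi = -5.0, 5.0
x = rng.uniform(lo, hi, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([sphere(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

c1 = c2 = 1.5                        # cognitive and social influence scalars
w0, decay = 0.9, 0.98                # exponentially decaying inertia weight
for t in range(iters):
    w = w0 * decay ** t
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update equation
    x = np.clip(x + v, lo, hi)
    f = np.array([sphere(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
print(gbest, pbest_f.min())
```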

  18. Catchment Tomography - Joint Estimation of Surface Roughness and Hydraulic Conductivity with the EnKF

    NASA Astrophysics Data System (ADS)

    Baatz, D.; Kurtz, W.; Hendricks Franssen, H. J.; Vereecken, H.; Kollet, S. J.

    2017-12-01

    Parameter estimation for physically based, distributed hydrological models becomes increasingly challenging with increasing model complexity. The number of parameters is usually large and the number of observations relatively small, which results in large uncertainties. Catchment tomography provides a moving transmitter-receiver concept to estimate spatially distributed hydrological parameters. In this concept, precipitation, highly variable in time and space, serves as a moving transmitter. In response to precipitation, runoff and stream discharge are generated along different paths and time scales, depending on surface and subsurface flow properties. Stream water levels are thus an integrated signal of upstream parameters, measured by stream gauges which serve as the receivers. These stream water level observations are assimilated into a distributed hydrological model, which is forced with high-resolution, radar-based precipitation estimates. Applying a joint state-parameter update with the Ensemble Kalman Filter, the spatially distributed Manning's roughness coefficient and saturated hydraulic conductivity are estimated jointly. The sequential data assimilation continuously integrates new information into the parameter estimation problem, especially during precipitation events. Every precipitation event constrains the possible parameter space. In this approach, forward simulations are performed with ParFlow, a variably saturated subsurface and overland flow model. ParFlow is coupled to the Parallel Data Assimilation Framework for the data assimilation and the joint state-parameter update. In synthetic, 3-dimensional experiments including surface and subsurface flow, hydraulic conductivity and the Manning's coefficient are efficiently estimated with the catchment tomography approach. A joint update of the Manning's coefficient and hydraulic conductivity tends to improve the parameter estimation compared to a single-parameter update, especially in cases of biased initial parameter ensembles. The computational experiments additionally show up to which degree of spatial heterogeneity, and up to which degree of uncertainty in the subsurface flow parameters, the Manning's coefficient and hydraulic conductivity can be estimated efficiently.
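
    A joint state-parameter EnKF analysis step of the kind used here can be sketched with numpy: states and parameters are stacked into one augmented ensemble and updated with the stochastic (perturbed-observation) formula. Dimensions, the observation operator and all variable names below are placeholders, not the actual ParFlow/PDAF setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_state, n_param, n_obs = 50, 100, 20, 5

# Augmented forecast ensemble: model states (e.g. pressure heads) stacked with
# parameters (e.g. log Manning's n, log K); values here are random placeholders.
A = rng.standard_normal((n_state + n_param, n_ens))
H = np.zeros((n_obs, n_state + n_param))
H[np.arange(n_obs), np.arange(n_obs)] = 1.0        # observe a few state entries (stream levels)
R = 0.1 * np.eye(n_obs)                            # observation error covariance
d = rng.standard_normal(n_obs)                     # observed stream water levels

Am = A - A.mean(axis=1, keepdims=True)
P = Am @ Am.T / (n_ens - 1)                        # ensemble covariance of the augmented vector
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
D = d[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T  # perturbed observations
A_analysis = A + K @ (D - H @ A)                   # joint state-parameter update
```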

  19. f1: a code to compute Appell's F1 hypergeometric function

    NASA Astrophysics Data System (ADS)

    Colavecchia, F. D.; Gasaneo, G.

    2004-02-01

    In this work we present the FORTRAN code to compute the hypergeometric function F1( α, β1, β2, γ, x, y) of Appell. The program can compute the F1 function for real values of the variables { x, y} and complex values of the parameters { α, β1, β2, γ}. The code uses different strategies to calculate the function according to the ideas outlined in [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29]. Program summary: Title of the program: f1. Catalogue identifier: ADSJ. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSJ. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computers: PC compatibles, SGI Origin2∗. Operating system under which the program has been tested: Linux, IRIX. Programming language used: Fortran 90. Memory required to execute with typical data: 4 kbytes. No. of bits in a word: 32. No. of bytes in distributed program, including test data, etc.: 52 325. Distribution format: tar gzip file. External subprograms used: Numerical Recipes hypgeo [W.H. Press et al., Numerical Recipes in Fortran 77, Cambridge Univ. Press, 1996] or the chyp routine of R.C. Forrey [J. Comput. Phys. 137 (1997) 79], and rkf45 [L.F. Shampine and H.H. Watts, Rep. SAND76-0585, 1976]. Keywords: numerical methods, special functions, hypergeometric functions, Appell functions, Gauss function. Nature of the physical problem: Computing the Appell F1 function is relevant in atomic collisions and elementary particle physics. It is usually the result of multidimensional integrals involving Coulomb continuum states. Method of solution: The F1 function has a convergent-series definition for | x|<1 and | y|<1, and several analytic continuations for other regions of the variable space. The code tests the values of the variables and selects one of the preceding cases. In the convergence region the program uses the series definition near the origin of coordinates, and a numerical integration of the third-order differential parametric equation for the F1 function. It also detects several special cases according to the values of the parameters. Restrictions on the complexity of the problem: The code is restricted to real values of the variables { x, y}. Also, there are some parameter domains that are not covered. These usually imply differences between integer parameters that lead to negative integer arguments of Gamma functions. Typical running time: Depends basically on the variables. The computation of Table 4 of [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29] (64 functions) requires approximately 0.33 s on an Athlon 900 MHz processor.
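
    Independent spot checks of such a code can be made with the mpmath library, which provides Appell's F1 through its appellf1 routine; a minimal example (argument values chosen arbitrarily) is sketched below.

```python
from mpmath import mp, appellf1

mp.dps = 25   # working precision in decimal digits
# F1(alpha, beta1, beta2, gamma; x, y) for real variables and (possibly complex) parameters.
print(appellf1(0.5, 1.0, 1.5, 2.0, 0.3, -0.4))
```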

  20. String limit of the isotropic Heisenberg chain in the four-particle sector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antipov, A. G., E-mail: aga2@csa.ru; Komarov, I. V., E-mail: ivkoma@rambler.r

    2008-05-15

    The quantum method of variable separation is applied to the spectral problem of the isotropic Heisenberg model. The Baxter difference equation is resolved by means of a special quasiclassical asymptotic expansion. States are identified by multiplicities of limiting values of the Bethe parameters. The string limit of the four-particle sector is investigated. String solutions are singled out and classified. It is shown that only a minor fraction of solutions demonstrate string behavior.

  1. A model of a fishery with fish stock involving delay equations.

    PubMed

    Auger, P; Ducrot, Arnaud

    2009-12-13

    The aim of this paper is to provide a new mathematical model for a fishery by including a stock variable for the resource. This model takes the form of an infinite delay differential equation. It is studied mathematically and a bifurcation analysis of the steady states is carried out. Depending on the different parameters of the problem, we show that a Hopf bifurcation may occur, leading to oscillating behaviour of the system. The mathematical results are finally discussed.

  2. Seismic modeling with radial basis function-generated finite differences (RBF-FD) – a simplified treatment of interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Bradley, E-mail: brma7253@colorado.edu; Fornberg, Bengt, E-mail: Fornberg@colorado.edu

    In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.

  3. Stochastic Control Synthesis of Systems with Structured Uncertainty

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L. (Technical Monitor); Crespo, Luis G.

    2003-01-01

    This paper presents a study on the design of robust controllers by using random variables to model structured uncertainty for both SISO and MIMO feedback systems. Once the parameter uncertainty is prescribed with probability density functions, its effects are propagated through the analysis, leading to stochastic metrics for the system's output. Control designs that aim for satisfactory performance while guaranteeing robust closed-loop stability are attained by solving constrained non-linear optimization problems in the frequency domain. This approach permits not only quantifying the probability of unstable and unfavorable responses for a particular control design but also searching for controls while favoring the parameter values with a higher chance of occurrence. In this manner, robust optimality is achieved while the characteristic conservatism of conventional robust control methods is eliminated. Examples that admit closed-form expressions for the probabilistic metrics of the output are used to elucidate the nature of the problem at hand and validate the proposed formulations.

  4. Seismic modeling with radial basis function-generated finite differences (RBF-FD) - a simplified treatment of interfaces

    NASA Astrophysics Data System (ADS)

    Martin, Bradley; Fornberg, Bengt

    2017-04-01

    In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.

  5. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee

    2015-08-01

    This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted.

  6. Some Muirhead Mean Operators for Intuitionistic Fuzzy Numbers and Their Applications to Group Decision Making.

    PubMed

    Liu, Peide; Li, Dengfeng

    2017-01-01

    The Muirhead mean (MM) is a well-known aggregation operator which can consider interrelationships among any number of arguments, assigned by a variable vector. Besides, it is a universal operator since it contains other general operators as special cases for particular parameter values. However, the MM can only process crisp numbers. Inspired by the MM's advantages, the aim of this paper is to extend the MM to process intuitionistic fuzzy numbers (IFNs) and then to solve multi-attribute group decision making (MAGDM) problems. Firstly, we develop some intuitionistic fuzzy Muirhead mean (IFMM) operators by extending the MM to intuitionistic fuzzy information. Then, we prove some properties and discuss some special cases with respect to the parameter vector. Moreover, we present two new methods to deal with MAGDM problems with intuitionistic fuzzy information based on the proposed MM operators. Finally, we verify the validity and reliability of our methods by using an application example, and analyze the advantages of our methods by comparing them with other existing methods.
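
    Since the MM on crisp numbers underlies these extensions, a direct crisp implementation is a useful reference point; the sketch below follows the standard definition with a symmetric sum over permutations (the sample values and parameter vectors are arbitrary, and the intuitionistic fuzzy operational laws of the paper are not reproduced).

```python
import math
from itertools import permutations

def muirhead_mean(a, p):
    """Crisp Muirhead mean with parameter vector p."""
    n = len(a)
    total = 0.0
    for sigma in permutations(range(n)):
        term = 1.0
        for j in range(n):
            term *= a[sigma[j]] ** p[j]
        total += term
    return (total / math.factorial(n)) ** (1.0 / sum(p))

# Special cases: p = (1,0,0) gives the arithmetic mean, p = (1/3,1/3,1/3) the geometric mean.
print(muirhead_mean([0.2, 0.5, 0.7], [1, 0, 0]))
print(muirhead_mean([0.2, 0.5, 0.7], [1/3, 1/3, 1/3]))
```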

  7. Finding Bayesian Optimal Designs for Nonlinear Models: A Semidefinite Programming-Based Approach

    PubMed Central

    Duarte, Belmiro P. M.; Wong, Weng Kee

    2014-01-01

    Summary This paper uses semidefinite programming (SDP) to construct Bayesian optimal design for nonlinear regression models. The setup here extends the formulation of the optimal designs problem as an SDP problem from linear to nonlinear models. Gaussian quadrature formulas (GQF) are used to compute the expectation in the Bayesian design criterion, such as D-, A- or E-optimality. As an illustrative example, we demonstrate the approach using the power-logistic model and compare results in the literature. Additionally, we investigate how the optimal design is impacted by different discretising schemes for the design space, different amounts of uncertainty in the parameter values, different choices of GQF and different prior distributions for the vector of model parameters, including normal priors with and without correlated components. Further applications to find Bayesian D-optimal designs with two regressors for a logistic model and a two-variable generalised linear model with a gamma distributed response are discussed, and some limitations of our approach are noted. PMID:26512159

  8. Modeling the human development index and the percentage of poor people using quantile smoothing splines

    NASA Astrophysics Data System (ADS)

    Mulyani, Sri; Andriyana, Yudhie; Sudartianto

    2017-03-01

    Mean regression is a statistical method to explain the relationship between the response variable and the predictor variable based on the central tendency (mean) of the response variable. Parameter estimation in mean regression (with Ordinary Least Squares, OLS) becomes problematic if we apply it to data that are asymmetric, fat-tailed, or contain outliers. Hence, an alternative method is necessary for that kind of data, for example the quantile regression method. Quantile regression is robust to outliers. This model can explain the relationship between the response variable and the predictor variable not only at the central tendency of the data (median) but also at various quantiles, in order to obtain complete information about that relationship. In this study, a quantile regression is developed with a nonparametric approach, namely smoothing splines. A nonparametric approach is used when the model is difficult to prespecify and the relation between the two variables follows an unknown function. We apply the proposed method to poverty data, estimating the Percentage of Poor People as the response variable with the Human Development Index (HDI) as the predictor variable.
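
    For orientation, the sketch below fits linear quantile regressions of the poverty rate on HDI at several quantiles with statsmodels; it is a parametric stand-in for the paper's quantile smoothing splines, and the file and column names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: poor_pct (Percentage of Poor People), hdi (Human Development Index).
df = pd.read_csv("poverty_hdi.csv")

fits = {q: smf.quantreg("poor_pct ~ hdi", df).fit(q=q) for q in (0.1, 0.25, 0.5, 0.75, 0.9)}
for q, res in fits.items():
    print(q, res.params["hdi"])   # slope of HDI at each quantile
```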

  9. Model reduction in integrated controls-structures design

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.

    1993-01-01

    It is the objective of this paper to present a model reduction technique developed for the integrated controls-structures design of flexible structures. Integrated controls-structures design problems are typically posed as nonlinear mathematical programming problems, where the design variables consist of both structural and control parameters. In the solution process, both structural and control design variables are constantly changing; therefore, the dynamic characteristics of the structure are also changing. This presents a problem in obtaining a reduced-order model for active control design and analysis which will be valid for all design points within the design space. In other words, the frequency and number of the significant modes of the structure (modes that should be included) may vary considerably throughout the design process. This is also true as the locations and/or masses of the sensors and actuators change. Moreover, since the number of design evaluations in the integrated design process could easily run into thousands, any feasible order-reduction method should not require model reduction analysis at every design iteration. In this paper a novel and efficient technique for model reduction in the integrated controls-structures design process, which addresses these issues, is presented.

  10. Multilevel structural equation models for assessing moderation within and across levels of analysis.

    PubMed

    Preacher, Kristopher J; Zhang, Zhen; Zyphur, Michael J

    2016-06-01

    Social scientists are increasingly interested in multilevel hypotheses, data, and statistical models as well as moderation or interactions among predictors. The result is a focus on hypotheses and tests of multilevel moderation within and across levels of analysis. Unfortunately, existing approaches to multilevel moderation have a variety of shortcomings, including conflated effects across levels of analysis and bias due to using observed cluster averages instead of latent variables (i.e., "random intercepts") to represent higher-level constructs. To overcome these problems and elucidate the nature of multilevel moderation effects, we introduce a multilevel structural equation modeling (MSEM) logic that clarifies the nature of the problems with existing practices and remedies them with latent variable interactions. This remedy uses random coefficients and/or latent moderated structural equations (LMS) for unbiased tests of multilevel moderation. We describe our approach and provide an example using the publicly available High School and Beyond data, with Mplus syntax in the appendix. Our MSEM method eliminates problems of conflated multilevel effects and reduces bias in parameter estimates while offering a coherent framework for conceptualizing and testing multilevel moderation effects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Bayesian estimation of dynamic matching function for U-V analysis in Japan

    NASA Astrophysics Data System (ADS)

    Kyo, Koki; Noda, Hideo; Kitagawa, Genshiro

    2012-05-01

    In this paper we propose a Bayesian method for analyzing unemployment dynamics. We derive a Beveridge curve for unemployment and vacancy (U-V) analysis from a Bayesian model based on a labor market matching function. In our framework, the efficiency of matching and the elasticities of new hiring with respect to unemployment and vacancy are regarded as time-varying parameters. To construct a flexible model and obtain reasonable estimates in an underdetermined estimation problem, we treat the time-varying parameters as random variables and introduce smoothness priors. The model is then described in a state space representation, enabling the parameter estimation to be carried out using the Kalman filter and fixed-interval smoothing. In such a representation, dynamic features of the cyclical unemployment rate and the structural-frictional unemployment rate can be accurately captured.
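
    A minimal sketch of the filtering step for such a state space model, with random-walk time-varying coefficients, is given below; the matching-function variables, noise variances and synthetic data are illustrative assumptions, and the fixed-interval smoother used in the paper is not included.

```python
import numpy as np

def kalman_tvp(y, X, q=1e-4, r=1e-2):
    """Filter y_t = X_t @ beta_t + e_t with random-walk coefficients beta_t = beta_{t-1} + w_t."""
    T, k = X.shape
    beta, P = np.zeros(k), np.eye(k)
    Q = q * np.eye(k)
    path = np.zeros((T, k))
    for t in range(T):
        P = P + Q                                  # time update (random-walk prior)
        x = X[t]
        S = x @ P @ x + r                          # innovation variance
        K = P @ x / S                              # Kalman gain
        beta = beta + K * (y[t] - x @ beta)        # measurement update
        P = P - np.outer(K, x @ P)
        path[t] = beta
    return path

# Illustrative use for a matching function log H_t = a_t + b_t log U_t + c_t log V_t + e_t.
rng = np.random.default_rng(0)
T = 200
logU, logV = rng.normal(1.0, 0.3, T), rng.normal(0.5, 0.3, T)
b_true = 0.6 + 0.2 * np.sin(np.linspace(0, 3, T))          # slowly varying elasticity
logH = 0.1 + b_true * logU + 0.3 * logV + rng.normal(0, 0.05, T)
X = np.column_stack([np.ones(T), logU, logV])
print(kalman_tvp(logH, X)[-1])                              # final estimates of (a_t, b_t, c_t)
```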

  12. The solution of the problem of oil spill risk control in the Baltic Sea taking into account the processes of oil propagation and degradation

    NASA Astrophysics Data System (ADS)

    Aseev, Nikita; Agoshkov, Valery

    2015-04-01

    The report is devoted to an approach to the problem of oil spill risk control for protected areas in the Baltic Sea (Aseev et al., 2014). By the problem of risk control we mean the problem of determining the optimal quantity of resources necessary for decreasing the risk to some acceptable value. It is supposed that only the moment of the accident is a random variable. The mass of the oil slick is chosen as the control function. For each realization of the random variable a quadratic 'functional of cost' is introduced. It comprises cleaning costs and the deviation of the oil pollution damage from its acceptable value. The problem of minimization of this functional is solved based on the methods of optimal control and the theory of adjoint equations (Agoshkov, 2003; Agoshkov et al., 2012). The solution of this problem is found explicitly. In order to solve the realistic problem of oil spill risk control in the Baltic Sea, a 2D model of oil spill propagation on the sea surface, based on the Seatrack Web model (Liungman, Mattson, 2011), is developed. The model takes into account such processes as oil transport by sea currents and wind, turbulent diffusion, spreading, evaporation from the sea surface, dispersion and formation of the 'water-in-oil' emulsion. The model allows calculating basic oil slick parameters: localization, mass, volume, thickness, density of oil, water content and viscosity of the emulsion. The results of several numerical experiments in the Baltic Sea using the model and the methodology of oil spill risk control are presented. Along with the moment of the accident, other parameters of the oil spill and the environment could be chosen as random variables. The methodology of solution of the oil spill risk control problem would remain the same, but the computational complexity would increase. The control function should then be converted into a quantity of resources, taking into account the methods of pollution removal. As a result, the developed 2D model of oil spill propagation, combined with the methodology of solution of the oil spill risk control problem, could provide the basis for oil spill simulation systems, systems for evaluation and control of oil spill risk and damage in seas, or decision support systems. References: V.I. Agoshkov. The methods of optimal control and adjoint equations in problems of mathematical physics. Moscow: INM RAS, 2003, 256 p. (in Russian). V.I. Agoshkov, N.A. Aseev, I.S. Novikov. The methods of investigation and solution of the problems of local sources and local or integral observations. Moscow: INM RAS, 2012, 151 p. (in Russian). N.A. Aseev, V.I. Agoshkov, V.B. Zalesny, R. Aps, P. Kujala, and J. Rytkonen. The problem of control of oil pollution risk in the Baltic Sea. Russ. J. Numer. Analysis and Math. Modelling, 2014, V 29, No. 2, 93-105. O. Liungman, J. Mattson. Scientific documentation of Seatrack Web; physical processes, algorithms and references, 2011. https://stw-helcom.smhi.se/

  13. Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Freels, J. D.

    1989-01-01

    A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.

  14. Trajectory optimization of spacecraft high-thrust orbit transfer using a modified evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Shirazi, Abolfazl

    2016-10-01

    This article introduces a new method to optimize finite-burn orbital manoeuvres based on a modified evolutionary algorithm. Optimization is carried out based on conversion of the orbital manoeuvre into a parameter optimization problem by assigning inverse tangential functions to the changes in direction angles of the thrust vector. The problem is analysed using boundary delimitation in a common optimization algorithm. A method is introduced to achieve acceptable values for optimization variables using nonlinear simulation, which results in an enlarged convergence domain. The presented algorithm benefits from high optimality and fast convergence time. A numerical example of a three-dimensional optimal orbital transfer is presented and the accuracy of the proposed algorithm is shown.

  15. Biomechanical and functional efficacy of knee sleeves: A literature review.

    PubMed

    Mohd Sharif, Nahdatul Aishah; Goh, Siew-Li; Usman, Juliana; Wan Safwani, Wan Kamarul Zaman

    2017-11-01

    Knee sleeves are widely used for the symptomatic relief and subjective improvements of knee problems. To date, however, their biomechanical effects have not been well understood. To determine whether knee sleeves can significantly improve the biomechanical variables for knee problems. Systematic literature search was conducted on four online databases - PubMed, Web of Science, ScienceDirect and Springer Link - to find peer-reviewed and relevant scientific papers on knee sleeves published from January 2005 to January 2015. Study quality was assessed using the Structured Effectiveness Quality Evaluation Scale (SEQES). Twenty studies on knee sleeves usage identified from the search were included in the review because of their heterogeneous scope of coverage. Twelve studies found significant improvement in gait parameters (3) and functional parameters (9), while eight studies did not find any significant effects of knee sleeves usage. Most improvements were observed in: proprioception for healthy knees, gait and balance for osteoarthritic knees, and functional improvement of injured knees. This review suggests that knee sleeves can effect functional improvements to knee problems. However, further work is needed to confirm this hypothesis, due to the lack of homogeneity and rigor of existing studies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Heat transfer enhancement in free convection flow of CNTs Maxwell nanofluids with four different types of molecular liquids.

    PubMed

    Aman, Sidra; Khan, Ilyas; Ismail, Zulkhibri; Salleh, Mohd Zuki; Al-Mdallal, Qasem M

    2017-05-26

    This article investigates heat transfer enhancement in free convection flow of Maxwell nanofluids with carbon nanotubes (CNTs) over a vertical static plate with constant wall temperature. Two kinds of CNTs, i.e. single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs), are suspended in four different types of base liquids (kerosene oil, engine oil, water and ethylene glycol). Kerosene oil-based nanofluids are given special consideration due to their higher thermal conductivities, unique properties and applications. The problem is modelled in terms of PDEs with initial and boundary conditions. Relevant non-dimensional variables are introduced in order to transform the governing problem into dimensionless form. The resulting problem is solved via the Laplace transform technique and exact solutions for velocity, shear stress and temperature are obtained. These solutions are significantly controlled by the variations of parameters including the relaxation time, Prandtl number, Grashof number and nanoparticle volume fraction. Velocity and temperature increase with elevation in the Grashof number, while shear stress decreases with increasing Maxwell parameter. A comparison between SWCNTs and MWCNTs in each case is made. Moreover, a graph showing the comparison among the four different types of nanofluids for both CNTs is also plotted.

  17. Multi-objective optimization of combustion, performance and emission parameters in a jatropha biodiesel engine using Non-dominated sorting genetic algorithm-II

    NASA Astrophysics Data System (ADS)

    Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar

    2014-03-01

    The present work studies and identifies the different variables that affect the output parameters of a single-cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on central composite design (CCD) is used to design the experiments. Mathematical models are developed for the combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE) and the emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of the combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multiobjective optimization problem is formulated. The Non-dominated sorting genetic algorithm-II is used to predict the Pareto optimal sets of solutions. Experiments are performed at suitable optimal solutions to predict the combustion, performance and emission parameters and check the adequacy of the proposed model. The Pareto optimal sets of solutions can be used as guidelines for end users to select an optimal combination of engine output and emission parameters depending upon their own requirements.
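
    The core of NSGA-II is the repeated partition of candidate solutions into Pareto fronts; a minimal non-dominated sorting sketch (all objectives minimized, e.g. BSFC and emissions with BTE negated) is shown below. It omits NSGA-II's crowding distance and genetic operators, and the toy objective values are arbitrary.

```python
import numpy as np

def non_dominated_sort(F):
    """Rank objective vectors F (n_points x n_objectives, all minimized) into Pareto fronts."""
    n = len(F)
    dominates = lambda i, j: np.all(F[i] <= F[j]) and np.any(F[i] < F[j])
    ranks = np.full(n, -1)
    remaining = set(range(n))
    front = 0
    while remaining:
        current = {i for i in remaining
                   if not any(dominates(j, i) for j in remaining if j != i)}
        for i in current:
            ranks[i] = front
        remaining -= current
        front += 1
    return ranks

# Toy example with two minimized objectives.
F = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]])
print(non_dominated_sort(F))   # first three points form the Pareto front (rank 0)
```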

  18. The electrical MHD and Hall current impact on micropolar nanofluid flow between rotating parallel plates

    NASA Astrophysics Data System (ADS)

    Shah, Zahir; Islam, Saeed; Gul, Taza; Bonyah, Ebenezer; Altaf Khan, Muhammad

    2018-06-01

    The current research aims to examine the combined effect of magnetic and electric fields on micropolar nanofluid flow between two parallel plates in a rotating system. The nanofluid flow between the two parallel plates is taken under the influence of the Hall current. The flow of the micropolar nanofluid has been assumed to be in steady state. The governing equations have been reduced to a set of coupled nonlinear differential equations using suitable similarity variables. An optimal approach has been used to acquire the solution of the modelled problems. The convergence of the method has been shown numerically. The skin friction for the velocity profile, the Nusselt number for the temperature profile and the Sherwood number for the concentration profile have been studied. The influences of the Hall currents, rotation, Brownian motion and thermophoresis on the micropolar nanofluid have been the main focus of this work. Moreover, to understand the physical behaviour of the embedded parameters, that is, the coupling parameter N1 , viscosity parameter Re , spin gradient viscosity parameter N2 , rotation parameter Kr , micropolar fluid constant N3 , magnetic parameter M , Prandtl number Pr , thermophoretic parameter Nt , Brownian motion parameter Nb , and Schmidt number Sc , their effects have been plotted and discussed graphically.

  19. Multi-objective optimal design of magnetorheological engine mount based on an improved non-dominated sorting genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong

    2014-03-01

    A novel flow-mode magneto-rheological (MR) engine mount integrating a diaphragm de-coupler and a spoiler plate is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome stiffening at high frequency. A lumped parameter model of the MR engine mount in a single-degree-of-freedom system is further developed based on the bond graph method to predict the performance of the MR engine mount accurately. The optimization mathematical model is established to minimize the total force transmissibility over the several frequency ranges addressed. In this mathematical model, the lumped parameters are considered as design variables. The maximum force transmissibility and the corresponding frequency in the low-frequency range, as well as the individual lumped parameters, are limited as constraints. A multiple-interval sensitivity analysis method is developed to select the optimization variables and improve the efficiency of the optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. The synthesized distance between individuals in the Pareto set and individuals in the engineering-feasible set is defined and calculated. A set of real design parameters is thus obtained from the internal relationship between the optimal lumped parameters and the practical design parameters for the MR engine mount. The program flowchart for the improved non-dominated sorting genetic algorithm (NSGA-II) is given. The obtained results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the several frequency ranges addressed.

  20. Variable Viscosity Effects on Time Dependent Magnetic Nanofluid Flow past a Stretchable Rotating Plate

    NASA Astrophysics Data System (ADS)

    Ram, Paras; Joshi, Vimal Kumar; Sharma, Kushal; Walia, Mittu; Yadav, Nisha

    2016-01-01

    An attempt has been made to describe the effects of geothermal viscosity with viscous dissipation on the three-dimensional time-dependent boundary layer flow of magnetic nanofluids due to a stretchable rotating plate in the presence of a porous medium. The modelled time-dependent governing equations are transformed from a boundary value problem to an initial value problem, and thereafter solved by a fourth-order Runge-Kutta method in MATLAB with a shooting technique for the initial guess. The influences of the mixed temperature, depth-dependent viscosity, and rotation strength parameter on the flow and temperature fields generated on the plate surface are investigated. The derived results have direct impact on problems of heat transfer in high-speed computer disks (Herrero et al. [1]) and turbine rotor systems (Owen and Rogers [2]).

  1. Estimating multivariate response surface model with data outliers, case study in enhancing surface layer properties of an aircraft aluminium alloy

    NASA Astrophysics Data System (ADS)

    Widodo, Edy; Kariyam

    2017-03-01

    Response Surface Methodology (RSM) is used to determine the input variable settings that create the optimal compromise in the response variables. There are three primary steps in RSM, namely data collection, modelling, and optimization. This study focuses on the establishment of response surface models, under the assumption that the collected data are correct. Usually the response surface model parameters are estimated by OLS. However, this method is highly sensitive to outliers. Outliers can generate substantial residuals and often distort the estimated models. The resulting estimates can be biased and can lead to errors in determining the optimal point, so that the main purpose of RSM is not achieved. Meanwhile, in real life, the collected data often contain several response variables and a set of independent variables. Treating each response separately and applying single-response procedures can result in wrong interpretations. So we need a model developed for the multi-response case. Therefore, a multivariate response surface model that is resistant to outliers is required. As an alternative, this study discusses M-estimation as a parameter estimator in multivariate response surface models containing outliers. As an illustration, a case study is presented on experimental results for enhancing the surface layer properties of an aircraft aluminium alloy by shot peening.
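
    A robust M-estimation fit of a second-order response surface can be sketched with statsmodels; the synthetic shot-peening factors, the Huber loss choice, and the single response below are illustrative assumptions, not the study's data or its multivariate estimator.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)   # coded process factors (illustrative)
y = 5 + 2 * x1 - 1.5 * x2 + 0.8 * x1 * x2 - x1**2 + rng.normal(0, 0.2, 30)
y[3] += 8.0                                                # a gross outlier in one response value

X = sm.add_constant(np.column_stack([x1, x2, x1 * x2, x1**2, x2**2]))  # second-order RSM terms
huber = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()                 # M-estimation (Huber)
ols = sm.OLS(y, X).fit()
print(np.round(huber.params, 2))   # close to the true surface despite the outlier
print(np.round(ols.params, 2))     # pulled toward the outlier
```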

  2. A multicenter study to standardize reporting and analyses of fluorescence-activated cell-sorted murine intestinal epithelial cells

    PubMed Central

    Magness, Scott T.; Puthoff, Brent J.; Crissey, Mary Ann; Dunn, James; Henning, Susan J.; Houchen, Courtney; Kaddis, John S.; Kuo, Calvin J.; Li, Linheng; Lynch, John; Martin, Martin G.; May, Randal; Niland, Joyce C.; Olack, Barbara; Qian, Dajun; Stelzner, Matthias; Swain, John R.; Wang, Fengchao; Wang, Jiafang; Wang, Xinwei; Yan, Kelley; Yu, Jian

    2013-01-01

    Fluorescence-activated cell sorting (FACS) is an essential tool for studies requiring isolation of distinct intestinal epithelial cell populations. Inconsistent or lack of reporting of the critical parameters associated with FACS methodologies has complicated interpretation, comparison, and reproduction of important findings. To address this problem a comprehensive multicenter study was designed to develop guidelines that limit experimental and data reporting variability and provide a foundation for accurate comparison of data between studies. Common methodologies and data reporting protocols for tissue dissociation, cell yield, cell viability, FACS, and postsort purity were established. Seven centers tested the standardized methods by FACS-isolating a specific crypt-based epithelial population (EpCAM+/CD44+) from murine small intestine. Genetic biomarkers for stem/progenitor (Lgr5 and Atoh 1) and differentiated cell lineages (lysozyme, mucin2, chromogranin A, and sucrase isomaltase) were interrogated in target and control populations to assess intra- and intercenter variability. Wilcoxon's rank sum test on gene expression levels showed limited intracenter variability between biological replicates. Principal component analysis demonstrated significant intercenter reproducibility among four centers. Analysis of data collected by standardized cell isolation methods and data reporting requirements readily identified methodological problems, indicating that standard reporting parameters facilitate post hoc error identification. These results indicate that the complexity of FACS isolation of target intestinal epithelial populations can be highly reproducible between biological replicates and different institutions by adherence to common cell isolation methods and FACS gating strategies. This study can be considered a foundation for continued method development and a starting point for investigators that are developing cell isolation expertise to study physiology and pathophysiology of the intestinal epithelium. PMID:23928185

  3. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
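
    For comparison with the standard approach mentioned here, a conventional dense Levenberg-Marquardt fit of a small synthetic inverse problem looks as follows with SciPy; this is the textbook LM on a toy exponential-decay model, not the authors' Krylov-subspace-recycling variant or their Julia/MADS implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy inverse problem: recover two parameters of an exponential decay from noisy observations.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
true = np.array([2.0, 0.4])
obs = true[0] * np.exp(-true[1] * t) + rng.normal(0, 0.05, t.size)

def residuals(p):
    return p[0] * np.exp(-p[1] * t) - obs

fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")   # standard Levenberg-Marquardt
print(fit.x)                                                  # ≈ [2.0, 0.4]
```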

  4. Are Subject-Specific Musculoskeletal Models Robust to the Uncertainties in Parameter Identification?

    PubMed Central

    Valente, Giordano; Pitto, Lorenzo; Testi, Debora; Seth, Ajay; Delp, Scott L.; Stagni, Rita; Viceconti, Marco; Taddei, Fulvia

    2014-01-01

    Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed maximum standard deviation of 0.3 times body-weight and maximum range of 2.1 times body-weight. In addition, the output variables significantly correlated with only a few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found that the subject-specific models were not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force predictions could be affected by an uncertainty of the same order of magnitude as their value, although this condition has a low probability of occurring. PMID:25390896
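
    The Monte-Carlo perturbation strategy described above can be sketched generically: sample each parameter around its nominal value according to its assumed uncertainty, re-run the model, and summarize the spread of the outputs and the input-output correlations. The model function, parameter names, and uncertainty magnitudes below are placeholders for illustration, not values from the study.

```python
import numpy as np

def model(params):
    """Placeholder for the musculoskeletal simulation: returns a scalar
    output (e.g., a peak joint contact force) from a parameter vector."""
    landmark_offset, max_tension, via_point_shift = params
    return 3.0 + 0.8 * max_tension + 0.3 * landmark_offset**2 - 0.5 * via_point_shift

rng = np.random.default_rng(42)
n_models = 500                       # as in the study, 500 perturbed models
nominal = np.array([0.0, 1.0, 0.0])  # nominal parameter values (placeholders)
sigma = np.array([0.01, 0.1, 0.005]) # assumed parameter uncertainties (placeholders)

samples = nominal + sigma * rng.standard_normal((n_models, 3))
outputs = np.array([model(p) for p in samples])

print("mean output:", outputs.mean())
print("std of output:", outputs.std(ddof=1))
print("range of output:", outputs.max() - outputs.min())
# rank inputs by correlation with the output (a crude sensitivity measure)
for name, col in zip(["landmark", "max_tension", "via_point"], samples.T):
    print(name, np.corrcoef(col, outputs)[0, 1])
```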

  5. Scaling Linguistic Characterization of Precipitation Variability

    NASA Astrophysics Data System (ADS)

    Primo, C.; Gutierrez, J. M.

    2003-04-01

    Rainfall variability is influenced by changes in the aggregation of daily rainfall. This problem is of great importance for hydrological, agricultural and ecological applications. Rainfall averages, or accumulations, are widely used as standard climatic parameters. However, different aggregation schemes may lead to the same average or accumulated values. In this paper we present a fractal method to characterize different aggregation schemes. The method provides scaling exponents characterizing weekly or monthly rainfall patterns for a given station. To this aim, we establish an analogy with linguistic analysis, considering precipitation as a discrete variable (e.g., rain, no rain). Each weekly, or monthly, symbolic sequence of observed precipitation is then considered as a "word" (in this case, a binary word) which defines a specific weekly rainfall pattern. Thus, each site defines a "language" characterized by the words observed in that site during a period representative of the climatology. Then, the more variable the observed weekly precipitation sequences, the more complex the obtained language. To characterize these languages, we first applied Zipf's method, obtaining scaling histograms of rank-ordered frequencies. However, to obtain significant exponents, the scaling must be maintained over several orders of magnitude, requiring long sequences of daily precipitation which are not available at individual stations. Thus, this analysis is not suitable for applications involving individual stations (such as regionalization). We therefore introduce an alternative fractal method applicable to data from local stations. The so-called Chaos-Game method uses Iterated Function Systems (IFS) for graphically representing rainfall languages, in a way that complex languages define complex graphical patterns. The box-counting dimension and the entropy of the resulting patterns are used as linguistic parameters to quantitatively characterize the complexity of the patterns. We illustrate the high climatological discrimination power of the linguistic parameters in the Iberian Peninsula, when compared with other standard techniques (such as seasonal mean accumulated precipitation). As an example, standard and linguistic parameters are used as inputs for a clustering regionalization method, comparing the resulting clusters.
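
    A schematic version of the chaos-game construction and the box-counting measurement is sketched below for a binary rain/no-rain sequence; a one-dimensional variant of the chaos game (two attractor points on the unit interval) is used for brevity, and the daily record is synthetic. This is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def chaos_game_1d(symbols):
    """Map a binary symbol sequence onto the unit interval: each new point
    moves halfway toward the attractor of the current symbol (0 or 1)."""
    x, points = 0.5, []
    for s in symbols:
        x = 0.5 * (x + s)
        points.append(x)
    return np.array(points)

def box_counting_dimension(points, epsilons):
    """Estimate the box-counting dimension of a point set on [0, 1]."""
    counts = [len(np.unique(np.floor(points / eps))) for eps in epsilons]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
    return slope

# synthetic daily rain/no-rain record (placeholder for station data)
rng = np.random.default_rng(1)
rain = (rng.random(5000) < 0.35).astype(int)

pts = chaos_game_1d(rain)
eps = [2.0**-k for k in range(2, 9)]
print("box-counting dimension:", box_counting_dimension(pts, eps))
```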

  6. Extracting Prior Distributions from a Large Dataset of In-Situ Measurements to Support SWOT-based Estimation of River Discharge

    NASA Astrophysics Data System (ADS)

    Hagemann, M.; Gleason, C. J.

    2017-12-01

    The upcoming (2021) Surface Water and Ocean Topography (SWOT) NASA satellite mission aims, in part, to estimate discharge on major rivers worldwide using reach-scale measurements of stream width, slope, and height. Current formalizations of channel and floodplain hydraulics are insufficient to fully constrain this problem mathematically, resulting in an infinitely large solution set for any set of satellite observations. Recent work has reformulated this problem in a Bayesian statistical setting, in which the likelihood distributions derive directly from hydraulic flow-law equations. When coupled with prior distributions on unknown flow-law parameters, this formulation probabilistically constrains the parameter space, and results in a computationally tractable description of discharge. Using a curated dataset of over 200,000 in-situ acoustic Doppler current profiler (ADCP) discharge measurements from over 10,000 USGS gaging stations throughout the United States, we developed empirical prior distributions for flow-law parameters that are not observable by SWOT, but that are required in order to estimate discharge. This analysis quantified prior uncertainties on quantities including cross-sectional area, at-a-station hydraulic geometry width exponent, and discharge variability, that are dependent on SWOT-observable variables including reach-scale statistics of width and height. When compared against discharge estimation approaches that do not use this prior information, the Bayesian approach using ADCP-derived priors demonstrated consistently improved performance across a range of performance metrics. This Bayesian approach formally transfers information from in-situ gaging stations to remote-sensed estimation of discharge, in which the desired quantities are not directly observable. Further investigation using large in-situ datasets is therefore a promising way forward in improving satellite-based estimates of river discharge.

  7. Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana

    2017-12-01

    Fabric reinforced polymeric composites are high performance materials with a rather complex fabric geometry. Therefore, modelling this type of material is a cumbersome task, especially when an efficient use is targeted. One of the most important issues in its design process is the optimisation of the individual laminae and of the laminated structure as a whole. To that end, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex process of optimisation. The input parameters involved in this work include the widths and heights of the tows and the laminate stacking sequence, which are discrete variables, and the gaps between adjacent tows and the height of the neat matrix, which are continuous variables. This work is one of the first attempts at using a Genetic Algorithm (GA) to optimise the geometrical parameters of satin reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material which is able to withstand a given set of external in-plane loads. The optimisation process has been performed using a fitness function which can analyse and compare the mechanical behaviour of different fabric reinforced composites; the results are correlated with the ultimate strains, which demonstrate the efficiency of the composite structure.
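
    A generic sketch of a genetic algorithm handling the mixed variable types mentioned above (discrete tow widths and stacking sequence, continuous gaps and neat-matrix height) is given below. It is not the SOMGA code; the design space, bounds, and fitness function are placeholders.

```python
import random

# placeholder design space: discrete tow-width options and ply angles,
# continuous gap and neat-matrix height (bounds are illustrative only)
TOW_WIDTHS = [1.0, 1.5, 2.0, 2.5]          # mm
PLY_ANGLES = [0, 45, -45, 90]              # degrees
GAP_BOUNDS = (0.05, 0.50)                  # mm
MATRIX_BOUNDS = (0.02, 0.20)               # mm

def random_individual():
    return {
        "tow_width": random.choice(TOW_WIDTHS),
        "stacking": [random.choice(PLY_ANGLES) for _ in range(4)],
        "gap": random.uniform(*GAP_BOUNDS),
        "matrix_h": random.uniform(*MATRIX_BOUNDS),
    }

def fitness(ind):
    """Placeholder for the mechanical-performance evaluation."""
    balance = -abs(sum(ind["stacking"])) / 90.0      # favour balanced lay-ups
    return balance - ind["gap"] - ind["matrix_h"] + 0.1 * ind["tow_width"]

def crossover(a, b):
    child = {k: random.choice([a[k], b[k]]) for k in ("tow_width", "gap", "matrix_h")}
    cut = random.randrange(1, 4)
    child["stacking"] = a["stacking"][:cut] + b["stacking"][cut:]
    return child

def mutate(ind, rate=0.2):
    if random.random() < rate:
        ind["tow_width"] = random.choice(TOW_WIDTHS)          # discrete gene
    if random.random() < rate:
        ind["gap"] = random.uniform(*GAP_BOUNDS)              # continuous gene
    if random.random() < rate:
        i = random.randrange(4)
        ind["stacking"][i] = random.choice(PLY_ANGLES)
    return ind

population = [random_individual() for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                  # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children
print("best design:", max(population, key=fitness))
```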

  8. Dissecting Magnetar Variability with Bayesian Hierarchical Models

    NASA Astrophysics Data System (ADS)

    Huppenkothen, Daniela; Brewer, Brendon J.; Hogg, David W.; Murray, Iain; Frean, Marcus; Elenbaas, Chris; Watts, Anna L.; Levin, Yuri; van der Horst, Alexander J.; Kouveliotou, Chryssa

    2015-09-01

    Neutron stars are a prime laboratory for testing physical processes under conditions of strong gravity, high density, and extreme magnetic fields. Among the zoo of neutron star phenomena, magnetars stand out for their bursting behavior, ranging from extremely bright, rare giant flares to numerous, less energetic recurrent bursts. The exact trigger and emission mechanisms for these bursts are not known; favored models involve either a crust fracture and subsequent energy release into the magnetosphere, or explosive reconnection of magnetic field lines. In the absence of a predictive model, understanding the physical processes responsible for magnetar burst variability is difficult. Here, we develop an empirical model that decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. The cascades of spikes that we model might be formed by avalanches of reconnection, or crust rupture aftershocks. Using Markov Chain Monte Carlo sampling augmented with reversible jumps between models with different numbers of parameters, we characterize the posterior distributions of the model parameters and the number of components per burst. We relate these model parameters to physical quantities in the system, and show for the first time that the variability within a burst does not conform to predictions from ideas of self-organized criticality. We also examine how well the properties of the spikes fit the predictions of simplified cascade models for the different trigger mechanisms.

  9. DISSECTING MAGNETAR VARIABILITY WITH BAYESIAN HIERARCHICAL MODELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huppenkothen, Daniela; Elenbaas, Chris; Watts, Anna L.

    Neutron stars are a prime laboratory for testing physical processes under conditions of strong gravity, high density, and extreme magnetic fields. Among the zoo of neutron star phenomena, magnetars stand out for their bursting behavior, ranging from extremely bright, rare giant flares to numerous, less energetic recurrent bursts. The exact trigger and emission mechanisms for these bursts are not known; favored models involve either a crust fracture and subsequent energy release into the magnetosphere, or explosive reconnection of magnetic field lines. In the absence of a predictive model, understanding the physical processes responsible for magnetar burst variability is difficult. Here, we develop an empirical model that decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. The cascades of spikes that we model might be formed by avalanches of reconnection, or crust rupture aftershocks. Using Markov Chain Monte Carlo sampling augmented with reversible jumps between models with different numbers of parameters, we characterize the posterior distributions of the model parameters and the number of components per burst. We relate these model parameters to physical quantities in the system, and show for the first time that the variability within a burst does not conform to predictions from ideas of self-organized criticality. We also examine how well the properties of the spikes fit the predictions of simplified cascade models for the different trigger mechanisms.

  10. Altered vision destabilizes gait in older persons.

    PubMed

    Helbostad, Jorunn L; Vereijken, Beatrix; Hesseberg, Karin; Sletvold, Olav

    2009-08-01

    This study assessed the effects of dim light and four experimentally induced changes in vision on gait speed and footfall and trunk parameters in older persons walking on level ground. Using a quasi-experimental design, gait characteristics were assessed in full light, dim light, and in dim light combined with manipulations resulting in reduced depth vision, double vision, blurred vision, and tunnel vision, respectively. A convenience sample of 24 home-dwelling older women and men (mean age 78.5 years, SD 3.4) with normal vision for their age and able to walk at least 10 m without assistance participated. Outcome measures were gait speed and spatial and temporal parameters of footfall and trunk acceleration, derived from an electronic gait mat and accelerometers. Dim light alone had no effect. Vision manipulations combined with dim light affected most footfall parameters but few trunk parameters. The largest effects were found for double and tunnel vision. Men increased and women decreased gait speed following manipulations (p=0.017), with gender differences also in stride velocity variability (p=0.017) and inter-stride medio-lateral trunk acceleration variability (p=0.014). Gender effects were related to differences in body height and physical functioning. Results indicate that visual problems lead to a more cautious and unstable gait pattern even under relatively simple conditions. This points to the importance of assessing vision in older persons and correcting visual impairments where possible.

  11. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    NASA Astrophysics Data System (ADS)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.

  12. A pseudo-penalized quasi-likelihood approach to the spatial misalignment problem with non-normal data.

    PubMed

    Lopiano, Kenneth K; Young, Linda J; Gotway, Carol A

    2014-09-01

    Spatially referenced datasets arising from multiple sources are routinely combined to assess relationships among various outcomes and covariates. The geographical units associated with the data, such as the geographical coordinates or areal-level administrative units, are often spatially misaligned, that is, observed at different locations or aggregated over different geographical units. As a result, the covariate is often predicted at the locations where the response is observed. The method used to align disparate datasets must be accounted for when subsequently modeling the aligned data. Here we consider the case where kriging is used to align datasets in point-to-point and point-to-areal misalignment problems when the response variable is non-normally distributed. If the relationship is modeled using generalized linear models, the additional uncertainty induced from using the kriging mean as a covariate introduces a Berkson error structure. In this article, we develop a pseudo-penalized quasi-likelihood algorithm to account for the additional uncertainty when estimating regression parameters and associated measures of uncertainty. The method is applied to a point-to-point example assessing the relationship between low birth weights and PM2.5 levels after the onset of the largest wildfire in Florida history, the Bugaboo scrub fire. A point-to-areal misalignment problem is presented where the relationship between asthma events in Florida's counties and PM2.5 levels after the onset of the fire is assessed. Finally, the method is evaluated using a simulation study. Our results indicate that the method performs well in terms of coverage for 95% confidence intervals and that naive methods which ignore the additional uncertainty tend to underestimate the variability associated with parameter estimates. The underestimation is most profound in Poisson regression models. © 2014, The International Biometric Society.

  13. Demographic behavior and the welfare state: econometric issues in the identification of the effects of tax and transfer programs.

    PubMed

    Moffitt, R

    1989-01-01

    It is difficult and risky to identify the effects of tax and transfer programs on demographic behavior. The primary concern of this article is to see if real exogenous variation in these programs' parameters exists to adequately evaluate the effects of the programs on behavior. A 1982 study examined the effect of the Aid to Families with Dependent Children (AFDC), a commonly used example of a US transfer program, on the probability that a female heads a household with children under 18 years old and no adult male present. The dependent variable merged household, marital status, and fertility choice into one variable. The independent variables included leisure hours and income, which also defined a woman's utility function. In this study, the parameters used to represent AFDC effects were not identified solely by variation in the AFDC variables. Two other studies attempting to examine AFDC's effects on demographic behavior (Hutchens [1979] and Ellwood and Bane [1985]) also failed to identify these effects. Ellwood and Bane appropriately concentrated on exogenous program variation (since benefits vary from state to state) and how it might be used in evaluating the effects of AFDC on behavior. They erroneously determined, however, that state variation should not be considered in their model. The studies reviewed in this article looked at AFDC, a program with significant intracountry parameter variation, yet these studies relied on potentially illegitimate sources of variation. Intracountry program variation is less likely to occur in Western Europe and therefore the problem of identifying effects of tax and transfer programs on demographic behavior is apt to be even more severe. Any further such studies should address these issues.

  14. Tracking variable sedimentation rates in orbitally forced paleoclimate proxy series

    NASA Astrophysics Data System (ADS)

    Li, M.; Kump, L. R.; Hinnov, L.

    2017-12-01

    This study addresses two fundamental issues in cyclostratigraphy: quantitative testing of orbital forcing in cyclic sedimentary sequences and tracking variable sedimentation rates. The methodology proposed here addresses these issues as an inverse problem, and estimates the product-moment correlation coefficient between the frequency spectra of orbital solutions and paleoclimate proxy series over a range of "test" sedimentation rates. It is inspired by the ASM method (1). The number of orbital parameters involved in the estimation is also considered. The method relies on the hypothesis that orbital forcing had a significant impact on the paleoclimate proxy variations, and thus is also tested. The null hypothesis of no astronomical forcing is evaluated using the Beta distribution, for which the shape parameters are estimated using a Monte Carlo simulation approach. We introduce a metric to estimate the most likely sedimentation rate using the product-moment correlation coefficient, H0 significance level, and the number of contributing orbital parameters, i.e., the CHO value. The CHO metric is applied with a sliding window to track variable sedimentation rates along the paleoclimate proxy series. Two forward models with uniform and variable sedimentation rates are evaluated to demonstrate the robustness of the method. The CHO method is applied to the classical Late Triassic Newark depth rank series; the estimated sedimentation rates match closely with previously published sedimentation rates and provide a more highly time-resolved estimate (2,3). References: (1) Meyers, S.R., Sageman, B.B., Amer. J. Sci., 307, 773-792, 2007; (2) Kent, D.V., Olsen, P.E., Muttoni, G., Earth-Sci. Rev.166, 153-180, 2017; (3) Li, M., Zhang, Y., Huang, C., Ogg, J., Hinnov, L., Wang, Y., Zou, Z., Li, L., 2017. Earth Plant. Sc. Lett. doi:10.1016/j.epsl.2017.07.015
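
    The sedimentation-rate scan at the heart of such methods can be sketched as follows: for each test rate the depth series is assigned a time step, its spectrum is computed, and that spectrum is correlated against a target built from orbital frequencies. The sketch below is a simplified illustration (no significance testing, no sliding window, synthetic data), not the CHO implementation.

```python
import numpy as np

orbital_freqs = np.array([1/405.0, 1/125.0, 1/95.0, 1/41.0, 1/23.7, 1/22.4])  # cycles/kyr

def spectrum(series, dt):
    freqs = np.fft.rfftfreq(len(series), d=dt)
    power = np.abs(np.fft.rfft(series - series.mean()))**2
    return freqs, power

def orbital_target(freqs, width=0.002):
    """Smooth target spectrum with peaks at the orbital frequencies."""
    return sum(np.exp(-0.5 * ((freqs - f) / width)**2) for f in orbital_freqs)

def scan_sedimentation_rates(proxy, dz_cm, rates_cm_per_kyr):
    """Correlate the proxy spectrum with the orbital target for each test rate."""
    scores = []
    for s in rates_cm_per_kyr:
        dt = dz_cm / s                      # kyr per sample at this test rate
        freqs, power = spectrum(proxy, dt)
        keep = freqs > 0
        r = np.corrcoef(power[keep], orbital_target(freqs[keep]))[0, 1]
        scores.append(r)
    return np.array(scores)

# synthetic proxy sampled every 2 cm, true rate 4 cm/kyr (so dt = 0.5 kyr)
rng = np.random.default_rng(3)
t = np.arange(4000) * 0.5
proxy = sum(np.cos(2 * np.pi * f * t) for f in orbital_freqs) + rng.standard_normal(t.size)

rates = np.linspace(1.0, 10.0, 91)
scores = scan_sedimentation_rates(proxy, dz_cm=2.0, rates_cm_per_kyr=rates)
print("best rate (cm/kyr):", rates[np.argmax(scores)])
```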

  15. Groebner Basis Solutions to Satellite Trajectory Control by Pole Placement

    NASA Astrophysics Data System (ADS)

    Kukelova, Z.; Krsek, P.; Smutny, V.; Pajdla, T.

    2013-09-01

    Satellites play an important role, e.g., in telecommunication, navigation and weather monitoring. Controlling their trajectories is an important problem. In [1], an approach to pole placement for the synthesis of a linear controller was presented. It leads to solving five polynomial equations in nine unknown elements of the state space matrices of a compensator. This is an underconstrained system and therefore four of the unknown elements need to be considered as free parameters and set to some prior values to obtain a system of five equations in five unknowns. In [1], this system was solved for one chosen set of free parameters with the help of Dixon resultants. In this work, we study and present Groebner basis solutions to this problem of computation of a dynamic compensator for the satellite for different combinations of input free parameters. We show that the Groebner basis method for solving systems of polynomial equations leads to very simple solutions for all combinations of free parameters. These solutions require only the Gauss-Jordan elimination of a small matrix and the computation of the roots of a single-variable polynomial. The maximum degree of this polynomial is not greater than six in general, but for most combinations of the input free parameters its degree is even lower. [1] B. Palancz. Application of Dixon resultant to satellite trajectory control by pole placement. Journal of Symbolic Computation, Volume 50, March 2013, Pages 79-99, Elsevier.
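
    The general pattern (fix the free parameters, compute a lexicographic Groebner basis, and read off a univariate polynomial whose roots give the remaining unknowns) can be reproduced on a toy system with a computer algebra package. The system below is illustrative only; it is not the compensator equations from [1].

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

# toy polynomial system standing in for the compensator equations
# (the real problem has five equations in five unknowns after fixing
#  four free parameters)
eqs = [x**2 + y + z - 1,
       x + y**2 + z - 1,
       x + y + z**2 - 1]

G = sp.groebner(eqs, x, y, z, order="lex")
print(G)                      # the last element is a polynomial in z alone

univariate = sp.Poly(G.exprs[-1], z)
print("degree in z:", univariate.degree())
print("roots for z:", sp.solve(univariate, z))
```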

  16. What keeps low-SES children from sleeping well: the role of presleep worries and sleep environment

    PubMed Central

    Bagley, Erika J.; Kelly, Ryan J.; Buckhalt, Joseph A.; El-Sheikh, Mona

    2014-01-01

    Objectives Children in families of low socioeconomic status (SES) have been found to have poor sleep, yet the reasons for this finding are unclear. Two possible mediators, presleep worries and home environment conditions, were investigated as indirect pathways between SES and children’s sleep. Participants/Methods The participants consisted of 271 children (M (age) = 11.33 years; standard deviation (SD) = 7.74 months) from families varying in SES as indexed by the income-to-needs ratio. Sleep was assessed with actigraphy (sleep minutes, night waking duration, and variability in sleep schedule) and child self-reported sleep/wake problems (e.g., oversleeping and trouble falling asleep) and sleepiness (e.g., sleeping in class and falling asleep while doing homework). Presleep worries and home environment conditions were assessed with questionnaires. Results Lower SES was associated with more subjective sleep/wake problems and daytime sleepiness, and increased exposure to disruptive sleep conditions and greater presleep worries were mediators of these associations. In addition, environmental conditions served as an intervening variable linking SES to variability in an actigraphy-derived sleep schedule, and, similarly, presleep worry was an intervening variable linking SES to actigraphy-based night waking duration. Across sleep parameters, the model explained 5–29% of variance. Conclusions Sleep environment and psychological factors are associated with socioeconomic disparities, which affect children’s sleep. PMID:25701537

  17. What keeps low-SES children from sleeping well: the role of presleep worries and sleep environment.

    PubMed

    Bagley, Erika J; Kelly, Ryan J; Buckhalt, Joseph A; El-Sheikh, Mona

    2015-04-01

    Children in families of low socioeconomic status (SES) have been found to have poor sleep, yet the reasons for this finding are unclear. Two possible mediators, presleep worries and home environment conditions, were investigated as indirect pathways between SES and children's sleep. The participants consisted of 271 children (M (age) = 11.33 years; standard deviation (SD) = 7.74 months) from families varying in SES as indexed by the income-to-needs ratio. Sleep was assessed with actigraphy (sleep minutes, night waking duration, and variability in sleep schedule) and child self-reported sleep/wake problems (e.g., oversleeping and trouble falling asleep) and sleepiness (e.g., sleeping in class and falling asleep while doing homework). Presleep worries and home environment conditions were assessed with questionnaires. Lower SES was associated with more subjective sleep/wake problems and daytime sleepiness, and increased exposure to disruptive sleep conditions and greater presleep worries were mediators of these associations. In addition, environmental conditions served as an intervening variable linking SES to variability in an actigraphy-derived sleep schedule, and, similarly, presleep worry was an intervening variable linking SES to actigraphy-based night waking duration. Across sleep parameters, the model explained 5-29% of variance. Sleep environment and psychological factors are associated with socioeconomic disparities, which affect children's sleep. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Concurrently adjusting interrelated control parameters to achieve optimal engine performance

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-12-01

    Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.

  19. Using Bayesian hierarchical models to better understand nitrate sources and sinks in agricultural watersheds.

    PubMed

    Xia, Yongqiu; Weller, Donald E; Williams, Meghan N; Jordan, Thomas E; Yan, Xiaoyuan

    2016-11-15

    Export coefficient models (ECMs) are often used to predict nutrient sources and sinks in watersheds because ECMs can flexibly incorporate processes and have minimal data requirements. However, ECMs do not quantify uncertainties in model structure, parameters, or predictions; nor do they account for spatial and temporal variability in land characteristics, weather, and management practices. We applied Bayesian hierarchical methods to address these problems in ECMs used to predict nitrate concentration in streams. We compared four model formulations, a basic ECM and three models with additional terms to represent competing hypotheses about the sources of error in ECMs and about spatial and temporal variability of coefficients: an ADditive Error Model (ADEM), a SpatioTemporal Parameter Model (STPM), and a Dynamic Parameter Model (DPM). The DPM incorporates a first-order random walk to represent spatial correlation among parameters and a dynamic linear model to accommodate temporal correlation. We tested the modeling approach in a proof of concept using watershed characteristics and nitrate export measurements from watersheds in the Coastal Plain physiographic province of the Chesapeake Bay drainage. Among the four models, the DPM was the best--it had the lowest mean error, explained the most variability (R2 = 0.99), had the narrowest prediction intervals, and provided the most effective tradeoff between fit and complexity (its deviance information criterion, DIC, was 45.6 units lower than that of any other model, indicating overwhelming support for the DPM). The superiority of the DPM supports its underlying hypothesis that the main source of error in ECMs is their failure to account for parameter variability rather than structural error. Analysis of the fitted DPM coefficients for cropland export and instream retention revealed some of the factors controlling nitrate concentration: cropland nitrate exports were positively related to stream flow and watershed average slope, while instream nitrate retention was positively correlated with nitrate concentration. By quantifying spatial and temporal variability in sources and sinks, the DPM provides new information to better target management actions to the most effective times and places. Given the wide use of ECMs as research and management tools, our approach can be broadly applied in other watersheds and to other materials. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Adaptive Decision Making and Coordination in Variable Structure Organizations

    DTIC Science & Technology

    1994-09-01

    behavior of the net. The design problem is addressed by (a) focusing on algorithms that relate structural properties of the Petri Net model to... behavioral characteristics; and (b) by incorporating design requirements in the Lattice algorithm. ...the more resource-consuming the process is. The architecture designer has to deal with these two parameters and perform some tradeoffs. The more

  1. Hydrological Parameter Estimations from a Conservative Tracer Test With Variable-Density Effects at the Boise Hydrogeophysical Research Site

    DTIC Science & Technology

    2011-12-15

    the measured porosity values can be taken as equivalent to effective porosity values for this aquifer with the risk of only very limited overestimation...information to constrain/control an increasingly ill-posed problem, and (3) risk estimation of a model with more heterogeneity than is needed to explain...coarse fluvial deposits: Boise Hydrogeophysical Research Site, Geological Society of America Bulletin, 116(9–10), 1059–1073. Barrash, W., T. Clemo

  2. A framework model for water-sharing among co-basin states of a river basin

    NASA Astrophysics Data System (ADS)

    Garg, N. K.; Azad, Shambhu

    2018-05-01

    A new framework model is presented in this study for sharing of water in a river basin using certain governing variables, in an effort to enhance the objectivity for a reasonable and equitable allocation of water among co-basin states. The governing variables were normalised to bring the governing variables of the different co-basin states of a river basin onto the same scale. In the absence of objective methods for evaluating the weights to be assigned to co-basin states for water allocation, a framework was conceptualised and formulated to determine the normalised weighting factors of different co-basin states as a function of the governing variables. The water allocation to any co-basin state was assumed to be proportional to its struggle for equity, which in turn was assumed to be a function of the normalised discontent, satisfaction, and weighting factors of each co-basin state. System dynamics was used effectively to represent and solve the proposed model formulation. The proposed model was successfully applied to the Vamsadhara river basin located in the South-Eastern part of India, and a sensitivity analysis of the proposed model parameters was carried out to demonstrate its robustness in terms of the proposed model's convergence and validity over a broad spectrum of values of the proposed model parameters. The solution converged quickly to a final allocation of 1444 million cubic metres (MCM) in the case of the Odisha co-basin state, and to 1067 MCM for the Andhra Pradesh co-basin state. The sensitivity analysis showed that the proposed model's allocation varied from 1584 MCM to 1336 MCM for Odisha state and from 927 to 1175 MCM for Andhra Pradesh, depending upon the importance weights given to the governing variables for the calculation of the weighting factors. Thus, the proposed model was found to be very flexible in exploring various policy options to arrive at a decision in a water-sharing problem. It can therefore be effectively applied to any trans-boundary problem where there is conflict about water-sharing among co-basin states.

  3. Cognitive models of risky choice: parameter stability and predictive accuracy of prospect theory.

    PubMed

    Glöckner, Andreas; Pachur, Thorsten

    2012-04-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are individual differences as measured by model parameters stable enough to improve the ability to predict behavior as compared to modeling without adjustable parameters? We examined this issue in cumulative prospect theory (CPT), arguably the most widely used framework to model decisions under risk. Specifically, we examined (a) the temporal stability of CPT's parameters; and (b) how well different implementations of CPT, varying in the number of adjustable parameters, predict individual choice relative to models with no adjustable parameters (such as CPT with fixed parameters, expected value theory, and various heuristics). We presented participants with risky choice problems and fitted CPT to each individual's choices in two separate sessions (which were 1 week apart). All parameters were correlated across time, in particular when using a simple implementation of CPT. CPT allowing for individual variability in parameter values predicted individual choice better than CPT with fixed parameters, expected value theory, and the heuristics. CPT's parameters thus seem to pick up stable individual differences that need to be considered when predicting risky choice. Copyright © 2011 Elsevier B.V. All rights reserved.
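
    The CPT machinery being fitted in such studies can be made concrete with the standard Tversky-Kahneman value and probability-weighting functions; the parameter values below are conventional illustrative choices, not the estimates reported in the study.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value function: concave for gains, convex and
    steeper (loss aversion lam) for losses."""
    return x**alpha if x >= 0 else -lam * (-x)**beta

def weight(p, gamma=0.61):
    """Inverse-S probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

def cpt_value_two_outcome_gain(x_high, p_high, x_low):
    """CPT value of a gamble with two nonnegative outcomes x_high > x_low >= 0.
    Under rank-dependent weighting the higher outcome gets w(p_high) and the
    lower one the complementary decision weight."""
    w_high = weight(p_high)
    return w_high * value(x_high) + (1 - w_high) * value(x_low)

# choose between a risky option and a sure amount
risky = cpt_value_two_outcome_gain(100.0, 0.5, 0.0)
sure = value(40.0)
print("risky:", round(risky, 2), "sure:", round(sure, 2),
      "-> predicted choice:", "risky" if risky > sure else "sure")
```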

  4. Visualizing the ill-posedness of the inversion of a canopy radiative transfer model: A case study for Sentinel-2

    NASA Astrophysics Data System (ADS)

    Zurita-Milla, R.; Laurent, V. C. E.; van Gijsel, J. A. E.

    2015-12-01

    Monitoring biophysical and biochemical vegetation variables in space and time is key to understand the earth system. Operational approaches using remote sensing imagery rely on the inversion of radiative transfer models, which describe the interactions between light and vegetation canopies. The inversion required to estimate vegetation variables is, however, an ill-posed problem because of variable compensation effects that can cause different combinations of soil and canopy variables to yield extremely similar spectral responses. In this contribution, we present a novel approach to visualise the ill-posed problem using self-organizing maps (SOM), which are a type of unsupervised neural network. The approach is demonstrated with simulations for Sentinel-2 data (13 bands) made with the Soil-Leaf-Canopy (SLC) radiative transfer model. A look-up table of 100,000 entries was built by randomly sampling 14 SLC model input variables between their minimum and maximum allowed values while using both a dark and a bright soil. The Sentinel-2 spectral simulations were used to train a SOM of 200 × 125 neurons. The training projected similar spectral signatures onto either the same, or contiguous, neuron(s). Tracing back the inputs that generated each spectral signature, we created a 200 × 125 map for each of the SLC variables. The lack of spatial patterns and the variability in these maps indicate ill-posed situations, where similar spectral signatures correspond to different canopy variables. For Sentinel-2, our results showed that leaf area index, crown cover and leaf chlorophyll, water and brown pigment content are less confused in the inversion than variables with noisier maps like fraction of brown canopy area, leaf dry matter content and the PROSPECT mesophyll parameter. This study supports both educational and on-going research activities on inversion algorithms and might be useful to evaluate the uncertainties of retrieved canopy biophysical and biochemical state variables.
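
    A minimal self-organizing map, written from scratch and much smaller than the 200 x 125 map used in the study, illustrates the two steps the approach relies on: training the map on simulated spectra, then tracing each neuron back to the input variables that generated the spectra mapped onto it. The toy forward model and the two input variables below are placeholders for the SLC look-up table; a noisy per-variable neuron map signals confusion between variables (ill-posedness), while a smooth one indicates the variable is well constrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# simulated "look-up table": 2 input variables -> 13-band spectra (placeholder
# forward model standing in for SLC; the real LUT had 14 variables, 100,000 entries)
n_samples, n_bands = 2000, 13
lai = rng.uniform(0.0, 6.0, n_samples)
chl = rng.uniform(10.0, 80.0, n_samples)
bands = np.linspace(0.4, 2.4, n_bands)
spectra = (0.1 + 0.05 * lai[:, None] * np.exp(-bands[None, :])
           + 0.002 * chl[:, None] * np.sin(3 * bands[None, :])
           + 0.01 * rng.standard_normal((n_samples, n_bands)))

# train a small rectangular SOM on the spectra
rows, cols, n_iter = 20, 15, 20000
weights = rng.random((rows, cols, n_bands))
grid_r, grid_c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
for it in range(n_iter):
    frac = it / n_iter
    lr, radius = 0.5 * (1 - frac) + 0.01, 5.0 * (1 - frac) + 0.5
    x = spectra[rng.integers(n_samples)]
    dists = np.linalg.norm(weights - x, axis=2)
    br, bc = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
    h = np.exp(-((grid_r - br)**2 + (grid_c - bc)**2) / (2 * radius**2))
    weights += lr * h[:, :, None] * (x - weights)

# map each LUT entry to its winning neuron and average one input variable per neuron
lai_map = np.full((rows, cols), np.nan)
hits = np.zeros((rows, cols))
sums = np.zeros((rows, cols))
for x, v in zip(spectra, lai):
    d = np.linalg.norm(weights - x, axis=2)
    r, c = np.unravel_index(np.argmin(d), d.shape)
    hits[r, c] += 1
    sums[r, c] += v
lai_map[hits > 0] = sums[hits > 0] / hits[hits > 0]
print("fraction of neurons hit:", (hits > 0).mean())
print("spread of LAI across the map:", np.nanstd(lai_map))
```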

  5. The role of under-determined approximations in engineering and science application

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1992-01-01

    There is currently a great deal of interest in using response surfaces in the optimization of aircraft performance. The objective function and/or constraint equations involved in these optimization problems may come from numerous disciplines such as structures, aerodynamics, environmental engineering, etc. In each of these disciplines, the mathematical complexity of the governing equations usually dictates that numerical results be obtained from large computer programs such as a finite element method program. Thus, when performing optimization studies, response surfaces are a convenient way of transferring information from the various disciplines to the optimization algorithm as opposed to bringing all the sundry computer programs together in a massive computer code. Response surfaces offer another advantage in the optimization of aircraft structures. A characteristic of these types of optimization problems is that evaluation of the objective function and response equations (referred to as a functional evaluation) can be very expensive in a computational sense. Because of the computational expense in obtaining functional evaluations, the present study was undertaken to investigate under-determined approximations. An under-determined approximation is one in which there are fewer training pairs (pieces of information about a function) than there are undetermined parameters (coefficients or weights) associated with the approximation. Both polynomial approximations and neural net approximations were examined. Three main example problems were investigated: (1) a function of one design variable was considered; (2) a function of two design variables was considered; and (3) a 35-bar truss with 4 design variables was considered.
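
    What "under-determined" means in practice can be shown in a few lines: with fewer training pairs than polynomial coefficients, the least-squares system has infinitely many solutions, but a minimum-norm solution still defines a usable response surface. The test function and sampling below are illustrative.

```python
import numpy as np

def test_function(x):
    return np.sin(3 * x) + 0.5 * x          # placeholder "expensive" analysis

rng = np.random.default_rng(7)
x_train = rng.uniform(-1, 1, 5)             # only 5 training pairs...
y_train = test_function(x_train)

degree = 9                                   # ...but 10 polynomial coefficients
A = np.vander(x_train, degree + 1, increasing=True)

# minimum-norm least-squares solution of the under-determined system
coeffs, *_ = np.linalg.lstsq(A, y_train, rcond=None)

x_test = np.linspace(-1, 1, 200)
approx = np.vander(x_test, degree + 1, increasing=True) @ coeffs
error = np.max(np.abs(approx - test_function(x_test)))
print("training residual:", np.linalg.norm(A @ coeffs - y_train))
print("max error on [-1, 1]:", error)
```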

  6. Geochemical variability of natural soils and reclaimed minespoil soils in the San Juan Basin, New Mexico

    USGS Publications Warehouse

    Gough, L.P.; Severson, R.C.

    1981-01-01

    An inventory of total- and extractable-element concentrations in soils was made for three areas of the San Juan Basin in New Mexico: (1) the broad area likely to be affected by energy-related development; (2) an area of soils considered to have potential for use as topsoil in mined-land reclamation; and (3) an area of the San Juan coal mine that has been regraded, topsoiled, and revegetated. Maps made of concentrations of 16 elements in area 1 soils show no gradational pattern across the region. Further, these maps do not correspond to those showing geology or soil types. Sodic or saline problems, and a possible but unproven deficiency of zinc available to plants, may make some of the soils in this area undesirable for use as topsoil in mined-land reclamation. Taxonomic great groups of soil in this area cannot be distinguished because each great group tends to have a large within-group variability when compared to the between-group variability. In area 2 the major soils sampled were of the Sheppard, Shiprock, and Doak association. These soils are quite uniform in chemical composition and are not greatly saline or sodic. As in area 1 soils, zinc deficiency may cause a problem in revegetating most of these soils. It is difficult to distinguish soil taxonomic families by using their respective chemical compositions, because of small between-family variability. Topsoil from a reclaimed area of the San Juan mine (area 3) most closely resembles the chemical composition of natural C horizons of soil from area 1. Spoil material that has not been topsoiled is likely to cause sodic- and saline-related problems in revegetation and may cause boron toxicity in plants. Topsoiling has apparently ameliorated these potential problems for plant growth on mine spoil. Total and extractable concentrations for elements and other parameters for each area of the San Juan Basin provide background information for the evaluation of the chemical quality of soils in each area.

  7. Simulation of light propagation in the thin-film waveguide lens

    NASA Astrophysics Data System (ADS)

    Malykh, M. D.; Divakov, D. V.; Sevastianov, L. A.; Sevastianov, A. L.

    2018-04-01

    In this paper we investigate the solution of the problem of modeling the propagation of electromagnetic radiation in three-dimensional integrated optical structures, such as waveguide lenses. When propagating through three-dimensional waveguide structures, the waveguide modes can be hybridized, so the mathematical model of their propagation must take into account the coupling of the TE- and TM-mode components. Therefore, an adequate consideration of hybridization of the waveguide modes is possible only in a vector formulation of the problem. An example of a three-dimensional structure that hybridizes waveguide modes is the Luneburg waveguide lens, which also has focusing properties. If the waveguide lens has a radius of the order of several tens of wavelengths, its variable thickness at distances of the order of several wavelengths is almost constant. Assuming in this case that the electromagnetic field also varies slowly in the direction perpendicular to the direction of propagation, one can introduce a small parameter characterizing this slow variation and expand the solution in powers of the small parameter. In this approach, in the zeroth approximation, scalar diffraction problems are obtained, the solution of which is less resource-consuming than the solution of vector problems. The calculated first-order corrections describe the coupling of the TE- and TM-modes, so the solutions obtained are weakly hybridized modes. The formulation of problems and methods for their numerical solution in this paper are based on the authors' research on waveguide diffraction on a lens in a scalar formulation.

  8. Fractional Order Two-Temperature Dual-Phase-Lag Thermoelasticity with Variable Thermal Conductivity

    PubMed Central

    Mallik, Sadek Hossain; Kanoria, M.

    2014-01-01

    A new theory of two-temperature generalized thermoelasticity is constructed in the context of a new consideration of dual-phase-lag heat conduction with fractional orders. The theory is then adopted to study thermoelastic interaction in an isotropic, homogeneous, semi-infinite generalized thermoelastic solid with variable thermal conductivity whose boundary is subjected to thermal and mechanical loading. The basic equations of the problem have been written in the form of a vector-matrix differential equation in the Laplace transform domain, which is then solved by using a state space approach. The inversion of the Laplace transforms is computed numerically using a Fourier series expansion technique. The numerical estimates of the quantities of physical interest are obtained and depicted graphically. Some comparisons of the thermophysical quantities are shown in figures to study the effects of the variable thermal conductivity, temperature discrepancy, and the fractional order parameter. PMID:27419210

  9. Recourse-based facility-location problems in hybrid uncertain environment.

    PubMed

    Wang, Shuming; Watada, Junzo; Pedrycz, Witold

    2010-08-01

    The objective of this paper is to study facility-location problems in the presence of a hybrid uncertain environment involving both randomness and fuzziness. A two-stage fuzzy-random facility-location model with recourse (FR-FLMR) is developed in which both the demands and costs are assumed to be fuzzy-random variables. The bounds of the optimal objective value of the two-stage FR-FLMR are derived. As, in general, the fuzzy-random parameters of the FR-FLMR can be regarded as continuous fuzzy-random variables with an infinite number of realizations, the computation of the recourse requires solving infinite second-stage programming problems. Owing to this requirement, the recourse function cannot be determined analytically, and, hence, the model cannot benefit from the use of techniques of classical mathematical programming. In order to solve the location problems of this nature, we first develop a technique of fuzzy-random simulation to compute the recourse function. The convergence of such simulation scenarios is discussed. In the sequel, we propose a hybrid mutation-based binary ant-colony optimization (MBACO) approach to the two-stage FR-FLMR, which comprises the fuzzy-random simulation and the simplex algorithm. A numerical experiment illustrates the application of the hybrid MBACO algorithm. The comparison shows that the hybrid MBACO finds better solutions than the one using other discrete metaheuristic algorithms, such as binary particle-swarm optimization, genetic algorithm, and tabu search.

  10. Computational method for analysis of polyethylene biodegradation

    NASA Astrophysics Data System (ADS)

    Watanabe, Masaji; Kawai, Fusako; Shibata, Masaru; Yokoyama, Shigeo; Sudate, Yasuhiro

    2003-12-01

    In a previous study concerning the biodegradation of polyethylene, we proposed a mathematical model based on two primary factors: the direct consumption or absorption of small molecules and the successive weight loss of large molecules due to β-oxidation. Our model is an initial value problem consisting of a differential equation whose independent variable is time. Its unknown variable represents the total weight of all the polyethylene molecules that belong to a molecular-weight class specified by a parameter. In this paper, we describe a numerical technique to introduce experimental results into analysis of our model. We first establish its mathematical foundation in order to guarantee its validity, by showing that the initial value problem associated with the differential equation has a unique solution. Our computational technique is based on a linear system of differential equations derived from the original problem. We introduce some numerical results to illustrate our technique as a practical application of the linear approximation. In particular, we show how to solve the inverse problem to determine the consumption rate and the β-oxidation rate numerically, and illustrate our numerical technique by analyzing the GPC patterns of polyethylene wax obtained before and after 5 weeks cultivation of a fungus, Aspergillus sp. AK-3. A numerical simulation based on these degradation rates confirms that the primary factors of the polyethylene biodegradation posed in modeling are indeed appropriate.
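
    The structure of such a model (one coupled equation per molecular-weight class, with weight flowing from heavier classes to lighter ones and the lightest class also being consumed directly) can be sketched with a generic linear ODE system; the class grid, rate values, and initial distribution below are made up for illustration and are not the rates identified in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

n_classes = 50                       # molecular-weight classes, heaviest last
beta = 0.05 * np.ones(n_classes)     # beta-oxidation (chain-shortening) rates, 1/week
rho = 0.2                            # direct consumption rate of the lightest class

def rhs(t, w):
    """dw/dt: each class loses weight through beta-oxidation; class i-1 gains
    the shortened chains from class i; the lightest class is also consumed."""
    dw = -beta * w
    dw[:-1] += beta[1:] * w[1:]
    dw[0] -= rho * w[0]
    return dw

w0 = np.exp(-0.5 * ((np.arange(n_classes) - 35) / 6.0)**2)   # initial GPC-like distribution
sol = solve_ivp(rhs, (0.0, 5.0), w0, t_eval=np.linspace(0.0, 5.0, 6))

total = sol.y.sum(axis=0)
print("relative total weight by week:", np.round(total / total[0], 3))
```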

  11. On implementation of the extended interior penalty function. [optimum structural design

    NASA Technical Reports Server (NTRS)

    Cassis, J. H.; Schmit, L. A., Jr.

    1976-01-01

    The extended interior penalty function formulation is implemented. A rational method for determining the transition between the interior and extended parts is set forth. The formulation includes a straightforward method for avoiding design points with some negative components, which are physically meaningless in structural analysis. The technique, when extended to problems involving parametric constraints, can facilitate closed form integration of the penalty terms over the most important parts of the parameter interval. The method lends itself well to the use of approximation concepts, such as design variable linking, constraint deletion and Taylor series expansions of response quantities in terms of design variables. Examples demonstrating the algorithm, in the context of planar orthogonal frames subjected to ground motion, are included.
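
    For reference, one common linear form of the extended interior penalty for constraints written as g(x) <= 0 keeps the classical -1/g term inside the feasible region and switches to a linear extension at g = -epsilon, chosen so that value and slope match at the transition. The sketch below uses that textbook form with the transition parameter tied to the penalty multiplier; it is a generic illustration under those assumptions, not necessarily the exact variant implemented in the paper.

```python
import numpy as np

def extended_interior_penalty(g, eps):
    """Penalty term for a constraint g(x) <= 0.

    Interior part:    -1/g                for g <= -eps
    Linear extension: (g + 2*eps)/eps**2  for g >  -eps
    Value and slope match at g = -eps, so the term is C^1 and stays finite
    for infeasible points (g >= 0), unlike the pure interior form."""
    g = np.asarray(g, dtype=float)
    return np.where(g <= -eps, -1.0 / g, (g + 2.0 * eps) / eps**2)

def pseudo_objective(x, r):
    """Example: minimize f(x) = x subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
    The transition parameter eps shrinks with the penalty multiplier r so the
    extension stays steep enough as r is reduced (here eps proportional to
    sqrt(r), one common recommendation)."""
    eps = 0.3 * np.sqrt(r)
    return x + r * extended_interior_penalty(1.0 - x, eps)

xs = np.linspace(0.5, 3.0, 20001)
for r in [1e-1, 1e-2, 1e-3, 1e-4]:
    x_star = xs[np.argmin(pseudo_objective(xs, r))]
    print(f"r = {r:g}: x* = {x_star:.4f}")   # tends to the constrained optimum x = 1
```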

  12. Modeling the zonal disintegration of rocks near deep level tunnels by gradient internal variable continuous phase transition theory

    NASA Astrophysics Data System (ADS)

    Haoxiang, Chen; Qi, Chengzhi; Peng, Liu; Kairui, Li; Aifantis, Elias C.

    2015-12-01

    The occurrence of alternating damage zones surrounding underground openings (commonly known as zonal disintegration) is treated as a "far from thermodynamic equilibrium" dynamical process or a nonlinear continuous phase transition phenomenon. The approach of internal variable gradient theory with diffusive transport, which may be viewed as a subclass of Landau's phase transition theory, is adopted. The order parameter is identified with an irreversible strain quantity, the gradient of which enters into the expression for the free energy of the rock system. The gradient term stabilizes the material behavior in the post-softening regime, where zonal disintegration occurs. The results of a simplified linearized analysis are confirmed by the numerical solution of the nonlinear problem.

  13. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.

  14. Generalized Radiative Transfer as an Efficient Computational Tool for Spatial and/or Spectral Integration over Unresolved Variability in Multi-Angle Observations

    NASA Astrophysics Data System (ADS)

    Davis, A. B.; Xu, F.; Diner, D. J.

    2017-12-01

    Two perennial problems in applied theoretical and computational radiative transfer (RT) are: (1) the impact of unresolved spatial variability on large-scale fluxes (in climate models) or radiances (in remote sensing); and (2) efficient-yet-accurate estimation of broadband spectral integrals in radiant energy budget estimation as well as in remote sensing, in particular, of trace gases. Generalized RT (GRT) is a modification of classic RT in an optical medium with uniform extinction where Beer's exponential law for direct transmission is replaced by a monotonically decreasing function with a slower power-law decay. In a convenient parameterized version of GRT, mean extinction replaces the uniform value and just one new property is introduced. As a non-dimensional metric for the unresolved variability, we use the square of the mean extinction coefficient divided by its variance. This parameter is also the exponent of the power-law tail of the modified transmission law. This specific form of sub-exponential transmission has been explored for almost two decades in application to spatial variability in the presence of long-range correlations, much like in turbulent media such as clouds, with a focus on multiple scattering. It has also been proposed by Conley and Collins (JQSRT, 112, 1525-, 2011) to improve on the standard (weak-line) implementation of the correlated-k technique for efficient spectral integration. We have merged these two applications within a rigorous formulation of the combined problem, and solve the new integral RT equations in the single-scattering limit. The result is illustrated by addressing practical problems in multi-angle remote sensing of aerosols using the O2 A-band, an emerging methodology for passive profiling of coarse aerosols and clouds.
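
    The contrast between Beer's law and a power-law transmission can be shown in a few lines. The parameterization T(tau) = (1 + tau/a)^(-a), with a the mean-squared-over-variance variability parameter, is one commonly used form that recovers exp(-tau) as a grows large; it is stated here as an assumption for illustration rather than taken from the abstract.

```python
import numpy as np

def beer_lambert(tau):
    """Classical direct transmission through a uniform medium."""
    return np.exp(-tau)

def generalized_transmission(tau, a):
    """Power-law transmission for an unresolved-variability parameter
    a = mean(extinction)**2 / var(extinction); recovers Beer's law as a -> inf."""
    return (1.0 + tau / a) ** (-a)

tau = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
for a in [0.5, 2.0, 10.0, 1e6]:
    print(f"a = {a:g}:", np.round(generalized_transmission(tau, a), 4))
print("Beer :", np.round(beer_lambert(tau), 4))
# the power-law tail transmits far more radiation at large optical depth,
# which is the coarse-grained effect of unresolved variability
```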

  15. Influence plots for LASSO

    DOE PAGES

    Jang, Dae -Heung; Anderson-Cook, Christine Michaela

    2016-11-22

    With many predictors in regression, fitting the full model can induce multicollinearity problems. Least Absolute Shrinkage and Selection Operator (LASSO) is useful when the effects of many explanatory variables are sparse in a high-dimensional dataset. Influential points can have a disproportionate impact on the estimated values of model parameters. Here, this paper describes a new influence plot that can be used to increase understanding of the contributions of individual observations and the robustness of results. This can serve as a complement to other regression diagnostics techniques in the LASSO regression setting. Using this influence plot, we can find influential points and their impact on shrinkage of model parameters and model selection. Lastly, we provide two examples to illustrate the methods.
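
    One simple way to probe the influence of a single observation on LASSO results (refit with that observation held out and record how the coefficients and the selected variable set change) can be sketched with scikit-learn; this is a generic leave-one-out diagnostic, not the influence plot proposed in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 60, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                  # sparse true effects
y = X @ beta + 0.5 * rng.standard_normal(n)
y[0] += 8.0                                  # contaminate one observation

full = Lasso(alpha=0.1).fit(X, y)
selected_full = set(np.flatnonzero(full.coef_))

influence = np.zeros(n)
for i in range(n):
    keep = np.arange(n) != i
    fit_i = Lasso(alpha=0.1).fit(X[keep], y[keep])
    influence[i] = np.linalg.norm(fit_i.coef_ - full.coef_)   # coefficient shift
    if set(np.flatnonzero(fit_i.coef_)) != selected_full:
        print(f"removing observation {i} changes the selected variable set")

print("most influential observations:", np.argsort(influence)[-3:][::-1])
```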

  16. Renormalization Group Tutorial

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.

    2004-01-01

    Complex physical systems sometimes have statistical behavior characterized by power- law dependence on the parameters of the system and spatial variability with no particular characteristic scale as the parameters approach critical values. The renormalization group (RG) approach was developed in the fields of statistical mechanics and quantum field theory to derive quantitative predictions of such behavior in cases where conventional methods of analysis fail. Techniques based on these ideas have since been extended to treat problems in many different fields, and in particular, the behavior of turbulent fluids. This lecture will describe a relatively simple but nontrivial example of the RG approach applied to the diffusion of photons out of a stellar medium when the photons have wavelengths near that of an emission line of atoms in the medium.

  17. Dynamical mechanism in aero-engine gas path system using minimum spanning tree and detrended cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Dong, Keqiang; Zhang, Hong; Gao, You

    2017-01-01

    Identifying the mutual interactions in an aero-engine gas path system is a crucial problem that facilitates the understanding of emerging structures in complex systems. By applying the multiscale multifractal detrended cross-correlation analysis method to the aero-engine gas path system, the cross-correlation characteristics between gas path system parameters are established. Further, we apply the multiscale multifractal detrended cross-correlation distance matrix and the minimum spanning tree to investigate the mutual interactions of gas path variables. The results indicate that the low-spool rotor speed (N1) and the engine pressure ratio (EPR) are the main gas path parameters. The application of the proposed method helps to promote our understanding of the internal mechanisms and structures of aero-engine dynamics.
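
    A minimal sketch of the distance-matrix-plus-minimum-spanning-tree step is given below; it uses an ordinary correlation-based distance on synthetic series as a stand-in for the multiscale multifractal detrended cross-correlation coefficient of the paper, and the parameter names are illustrative.

      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree

      rng = np.random.default_rng(1)
      # Toy stand-ins for gas path time series (rows: parameters such as N1, EPR).
      names = ["N1", "N2", "EPR", "EGT", "FF"]
      series = rng.normal(size=(len(names), 500)).cumsum(axis=1)

      # Correlation-based distance d = sqrt(2 * (1 - rho)); the paper instead builds
      # the distance from a multiscale multifractal DCCA coefficient.
      rho = np.corrcoef(series)
      dist = np.sqrt(2.0 * (1.0 - rho))
      np.fill_diagonal(dist, 0.0)

      mst = minimum_spanning_tree(dist).toarray()
      for i, j in np.argwhere(mst > 0):
          print(f"{names[i]} -- {names[j]}  (distance {mst[i, j]:.2f})")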

  18. Influence plots for LASSO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, Dae-Heung; Anderson-Cook, Christine Michaela

    With many predictors in regression, fitting the full model can induce multicollinearity problems. Least Absolute Shrinkage and Selection Operator (LASSO) regression is useful when the effects of many explanatory variables are sparse in a high-dimensional dataset. Influential points can have a disproportionate impact on the estimated values of model parameters. This paper describes a new influence plot that can be used to increase understanding of the contributions of individual observations and the robustness of results. This can serve as a complement to other regression diagnostics techniques in the LASSO regression setting. Using this influence plot, we can find influential points and their impact on shrinkage of model parameters and model selection. Lastly, we provide two examples to illustrate the methods.

  19. Phase transitions in distributed control systems with multiplicative noise

    NASA Astrophysics Data System (ADS)

    Allegra, Nicolas; Bamieh, Bassam; Mitra, Partha; Sire, Clément

    2018-01-01

    Contemporary technological challenges often involve many degrees of freedom in a distributed or networked setting. Three aspects are notable: the variables are usually associated with the nodes of a graph with limited communication resources, hindering centralized control; the communication is subject to noise; and the number of variables can be very large. These three aspects make tools and techniques from statistical physics particularly suitable for the performance analysis of such networked systems in the limit of many variables (analogous to the thermodynamic limit in statistical physics). Perhaps not surprisingly, phase-transition-like phenomena appear in these systems, where a sharp change in performance can be observed with a smooth parameter variation, with the change becoming discontinuous or singular in the limit of infinite system size. In this paper, we analyze the so-called network consensus problem, prototypical of the above considerations, that has previously been analyzed mostly in the context of additive noise. We show that qualitatively new phase-transition-like phenomena appear for this problem in the presence of multiplicative noise. Depending on dimensions, and on the presence or absence of a conservation law, the system performance shows a discontinuous change at a threshold value of the multiplicative noise strength. In the absence of the conservation law, and for graph spectral dimension less than two, the multiplicative noise threshold (the stability margin of the control problem) is zero. This is reminiscent of the absence of robust controllers for certain classes of centralized control problems. Although our study involves a ‘toy’ model, we believe that the qualitative features are generic, with implications for the robust stability of distributed control systems, as well as the effect of roundoff errors and communication noise on distributed algorithms.
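
    The flavor of the result can be reproduced with a toy simulation: a one-dimensional ring of nodes relaxing toward their neighbours, with the link gains fluctuating multiplicatively and a small additive drive sustaining the dynamics. The graph, gain values, and dispersion measure below are illustrative assumptions, not the paper's model.

      import numpy as np

      def consensus_dispersion(n=200, steps=5000, gain=0.05, mult=0.0, add=0.01, seed=0):
          """Toy ring consensus: each node relaxes toward its two neighbours.

          `mult` scales multiplicative fluctuations of the link gains, `add` is a
          small additive drive; the time-averaged dispersion about the network mean
          is a rough stand-in for the performance measure discussed in the paper.
          """
          rng = np.random.default_rng(seed)
          x = np.zeros(n)
          disp = []
          for _ in range(steps):
              left, right = np.roll(x, 1), np.roll(x, -1)
              g = gain * (1.0 + mult * rng.normal(size=(2, n)))
              x = x + g[0] * (left - x) + g[1] * (right - x) + add * rng.normal(size=n)
              disp.append(np.var(x - x.mean()))
          return float(np.mean(disp[steps // 2:]))

      # Dispersion grows with the multiplicative noise strength and can blow up
      # beyond a threshold, echoing the phase-transition-like behaviour.
      for m in (0.0, 0.5, 1.0, 1.5):
          print(f"multiplicative noise {m}: dispersion ~ {consensus_dispersion(mult=m):.3e}")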

  20. Channel Simulation in Quantum Metrology

    NASA Astrophysics Data System (ADS)

    Laurenza, Riccardo; Lupo, Cosmo; Spedalieri, Gaetana; Braunstein, Samuel L.; Pirandola, Stefano

    2018-04-01

    In this review we discuss how channel simulation can be used to simplify the most general protocols of quantum parameter estimation, where unlimited entanglement and adaptive joint operations may be employed. Whenever the unknown parameter encoded in a quantum channel is completely transferred in an environmental program state simulating the channel, the optimal adaptive estimation cannot beat the standard quantum limit. In this setting, we elucidate the crucial role of quantum teleportation as a primitive operation which allows one to completely reduce adaptive protocols over suitable teleportation-covariant channels and derive matching upper and lower bounds for parameter estimation. For these channels, we may express the quantum Cramér-Rao bound directly in terms of their Choi matrices. Our review considers both discrete- and continuous-variable systems, also presenting some new results for bosonic Gaussian channels using an alternative sub-optimal simulation. It is an open problem to design simulations for quantum channels that achieve the Heisenberg limit.

  1. A mathematical model for mixed convective flow of chemically reactive Oldroyd-B fluid between isothermal stretching disks

    NASA Astrophysics Data System (ADS)

    Hashmi, M. S.; Khan, N.; Ullah Khan, Sami; Rashidi, M. M.

    In this study, we have constructed a mathematical model to investigate the heat source/sink effects in mixed convection axisymmetric flow of an incompressible, electrically conducting Oldroyd-B fluid between two infinite isothermal stretching disks. The effects of viscous dissipation and Joule heating are also considered in the heat equation. The governing partial differential equations are converted into ordinary differential equations by using appropriate similarity variables. The series solution of these dimensionless equations is constructed by using the homotopy analysis method. The convergence of the obtained solution is carefully examined. The effects of various involved parameters on pressure, velocity and temperature profiles are comprehensively studied. A graphical analysis has been presented for various values of problem parameters. The numerical values of wall shear stress and Nusselt number are computed at both upper and lower disks. Moreover, a graphical and tabular explanation is given for the critical values of the Frank-Kamenetskii parameter with respect to the other flow parameters.

  2. A Marked Poisson Process Driven Latent Shape Model for 3D Segmentation of Reflectance Confocal Microscopy Image Stacks of Human Skin.

    PubMed

    Ghanta, Sindhu; Jordan, Michael I; Kose, Kivanc; Brooks, Dana H; Rajadhyaksha, Milind; Dy, Jennifer G

    2017-01-01

    Segmenting objects of interest from 3D data sets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution, and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, the shapes of objects in tissue can be highly variable and design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance, and unknown locations. The driving application that inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear, and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease, and cancer usually start. Detecting the DEJ is challenging, because it is a 2D surface in a 3D volume which has a strong but highly variable number of irregularly spaced and variably shaped "peaks and valleys." In addition, RCM imaging resolution, contrast, and intensity vary with depth. Thus, a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. Our novel strategy to model this structure combines a spatial Poisson process with shape priors and performs inference using Gibbs sampling. Experimental results show that the proposed unsupervised model is able to automatically detect the DEJ with physiologically relevant accuracy in the range 10-20 μm.
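
    To make the prior construction concrete, the sketch below draws one realisation of a marked spatial Poisson process: a Poisson-distributed number of points placed uniformly over the field of view, each carrying marks that loosely mimic a peak/valley amplitude and radius. The rate, mark distributions, and field size are illustrative assumptions, not the paper's fitted prior.

      import numpy as np

      rng = np.random.default_rng(2)

      def sample_marked_poisson(rate=0.002, width=500, height=500):
          """One realisation of a marked spatial Poisson process: the point count is
          Poisson(rate * area), locations are uniform over the field of view, and
          each point carries marks (here an amplitude and a radius for a bump)."""
          n = rng.poisson(rate * width * height)
          xy = rng.uniform([0.0, 0.0], [width, height], size=(n, 2))
          amplitude = rng.lognormal(mean=1.0, sigma=0.4, size=n)   # mark 1
          radius = rng.uniform(5.0, 25.0, size=n)                  # mark 2
          return xy, amplitude, radius

      xy, amp, rad = sample_marked_poisson()
      print(f"sampled {len(xy)} marked points; first point:", xy[0], amp[0], rad[0])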

  3. A Marked Poisson Process Driven Latent Shape Model for 3D Segmentation of Reflectance Confocal Microscopy Image Stacks of Human Skin

    PubMed Central

    Ghanta, Sindhu; Jordan, Michael I.; Kose, Kivanc; Brooks, Dana H.; Rajadhyaksha, Milind; Dy, Jennifer G.

    2016-01-01

    Segmenting objects of interest from 3D datasets is a common problem encountered in biological data. Small field of view and intrinsic biological variability combined with optically subtle changes of intensity, resolution and low contrast in images make the task of segmentation difficult, especially for microscopy of unstained living or freshly excised thick tissues. Incorporating shape information in addition to the appearance of the object of interest can often help improve segmentation performance. However, shapes of objects in tissue can be highly variable and design of a flexible shape model that encompasses these variations is challenging. To address such complex segmentation problems, we propose a unified probabilistic framework that can incorporate the uncertainty associated with complex shapes, variable appearance and unknown locations. The driving application which inspired the development of this framework is a biologically important segmentation problem: the task of automatically detecting and segmenting the dermal-epidermal junction (DEJ) in 3D reflectance confocal microscopy (RCM) images of human skin. RCM imaging allows noninvasive observation of cellular, nuclear and morphological detail. The DEJ is an important morphological feature as it is where disorder, disease and cancer usually start. Detecting the DEJ is challenging because it is a 2D surface in a 3D volume which has a strong but highly variable number of irregularly spaced and variably shaped “peaks and valleys”. In addition, RCM imaging resolution, contrast and intensity vary with depth. Thus a prior model needs to incorporate the intrinsic structure while allowing variability in essentially all its parameters. We propose a model which can incorporate objects of interest with complex shapes and variable appearance in an unsupervised setting by utilizing domain knowledge to build appropriate priors of the model. Our novel strategy to model this structure combines a spatial Poisson process with shape priors and performs inference using Gibbs sampling. Experimental results show that the proposed unsupervised model is able to automatically detect the DEJ with physiologically relevant accuracy in the range 10-20 µm. PMID:27723590

  4. Integrating models that depend on variable data

    NASA Astrophysics Data System (ADS)

    Banks, A. T.; Hill, M. C.

    2016-12-01

    Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MATLAB. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log-transformation. Greater consistency is obtained by imposing smaller (by up to a factor of 1/35) weights on the smaller dependent-variable values. From an error-based perspective, the small weights are consistent with large standard deviations. This work considers the consequences of these two common ways of addressing variable data.
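
    The two options compared in the abstract can be written down in a few lines of multiple linear regression. The sketch below fits a log-transformed ordinary least squares model and a weighted least squares model whose weights come from an assumed constant coefficient of variation; the synthetic data, the coefficient of variation, and the single-predictor setup are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 200
      x = rng.uniform(0.0, 4.0, size=n)
      # Dependent variable spanning several orders of magnitude with roughly
      # multiplicative (log-normal) errors.
      y = 10 ** (0.8 * x + 0.2) * rng.lognormal(sigma=0.3, size=n)

      X = np.column_stack([np.ones(n), x])

      # (a) Log transformation: ordinary least squares on log10(y).
      b_log, *_ = np.linalg.lstsq(X, np.log10(y), rcond=None)

      # (b) Error-based weighting: w_i = 1 / sigma_i^2 with a constant coefficient
      # of variation, i.e. sigma_i proportional to y_i.
      w = 1.0 / (0.3 * y) ** 2
      sw = np.sqrt(w)
      b_wls, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)

      print("log-transform fit (log10 space):", b_log)
      print("CV-weighted fit   (linear space):", b_wls)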

  5. A New Paradigm for Satellite Retrieval of Hydrologic Variables: The CDRD Methodology

    NASA Astrophysics Data System (ADS)

    Smith, E. A.; Mugnai, A.; Tripoli, G. J.

    2009-09-01

    Historically, retrieval of thermodynamically active geophysical variables in the atmosphere (e.g., temperature, moisture, precipitation) involved some type of inversion scheme - embedded within the retrieval algorithm - to transform radiometric observations (a vector) to the desired geophysical parameter(s) (either a scalar or a vector). Inversion is fundamentally a mathematical operation involving some type of integral-differential radiative transfer equation - often resisting a straightforward algebraic solution - in which the integral side of the equation (typically the right-hand side) contains the desired geophysical vector, while the left-hand side contains the radiative measurement vector often free of operators. Inversion was considered more desirable than forward modeling because the forward model solution had to be selected from a generally unmanageable set of parameter-observation relationships. However, in the classical inversion problem for retrieval of temperature using multiple radiative frequencies along the wing of an absorption band (or line) of a well-mixed radiatively active gas, in either the infrared or microwave spectra, the inversion equation to be solved consists of a Fredholm integral equation of the 2nd kind - a specific type of transform problem in which there are an infinite number of solutions. This meant that special treatment of the transform process was required in order to obtain a single solution. Inversion had become the method of choice for retrieval in the 1950s because it appealed to the use of mathematical elegance, and because the numerical approaches used to solve the problems (typically some type of relaxation or perturbation scheme) were computationally fast in an age when computer speeds were slow. Like many solution schemes, inversion has lingered on regardless of the fact that computer speeds have increased many orders of magnitude and forward modeling itself has become far more elegant in combination with Bayesian averaging procedures given that the a priori probabilities of occurrence in the true environment of the parameter(s) in question can be approximated (or are actually known). In this presentation, the theory of the more modern retrieval approach using a combination of cloud, radiation and other specialized forward models in conjunction with Bayesian weighted averaging will be reviewed in light of a brief history of inversion. The application of the theory will be cast in the framework of what we call the Cloud-Dynamics-Radiation-Database (CDRD) methodology - which we now use for the retrieval of precipitation from spaceborne passive microwave radiometers. In a companion presentation, we will specifically describe the CDRD methodology and present results for its application within the Mediterranean basin.
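
    The Bayesian weighted averaging step can be illustrated with a toy database retrieval: forward-modelled observation vectors are compared with a measurement, each database entry is weighted by its Gaussian likelihood, and the retrieved parameter is the weighted mean. The database size, channel count, noise level, and the linear forward model below are illustrative assumptions, not the CDRD forward models.

      import numpy as np

      rng = np.random.default_rng(4)

      # Toy database: each entry pairs a geophysical parameter (say, a rain rate)
      # with forward-modelled brightness temperatures at a few channels.
      n_db, n_chan = 5000, 4
      rain = rng.gamma(shape=2.0, scale=2.0, size=n_db)            # a priori sample
      tb_db = 250.0 - 3.0 * rain[:, None] + rng.normal(0.0, 1.0, (n_db, n_chan))

      tb_obs = 250.0 - 3.0 * 5.0 + rng.normal(0.0, 1.0, n_chan)    # "measured" vector
      sigma = 1.5                                                  # channel noise (K)

      # Bayesian weighted average over the database: weights from the Gaussian
      # likelihood of each simulated observation vector given the measurement.
      chi2 = np.sum((tb_db - tb_obs) ** 2, axis=1) / sigma ** 2
      w = np.exp(-0.5 * (chi2 - chi2.min()))
      retrieved = np.sum(w * rain) / np.sum(w)
      print(f"retrieved rain rate ~ {retrieved:.2f} (value used to make tb_obs: 5.0)")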

  6. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    PubMed

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging station's density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles.
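
    The structure of such a model can be sketched as a one-dimensional cost minimisation over station density: an installation term that grows with density plus access and congestion terms that shrink with it. The functional forms and every numerical value below are illustrative assumptions, not the ERDEC model's calibrated parameters.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def total_cost(density, install=5.0e4, demand=120.0, value_of_time=20.0,
                     charger_rate=2.0):
          """Toy daily cost per km^2 as a function of station density (stations/km^2).

          install: amortised daily cost of one station; demand: charging requests per
          km^2 per day; the access term assumes the typical detour shrinks like
          1/sqrt(density); the congestion term grows as demand per station rises.
          """
          access = value_of_time * demand * 0.5 / np.sqrt(density)
          congestion = value_of_time * (demand / 24.0) ** 2 / (charger_rate * density)
          return install * density + access + congestion

      res = minimize_scalar(total_cost, bounds=(1e-3, 10.0), method="bounded")
      print(f"optimal density ~ {res.x:.2f} stations/km^2, daily cost ~ {res.fun:.0f}")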

  7. Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat

    2016-01-01

    The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and the rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.

  8. Breath biomarkers for lung cancer detection and assessment of smoking related effects--confounding variables, influence of normalization and statistical algorithms.

    PubMed

    Kischkel, Sabine; Miekisch, Wolfram; Sawacki, Annika; Straker, Eva M; Trefz, Phillip; Amann, Anton; Schubert, Jochen K

    2010-11-11

    Up to now, none of the breath biomarkers or marker sets proposed for cancer recognition has reached clinical relevance. Possible reasons are the lack of standardized methods of sampling, analysis and data processing and effects of environmental contaminants. Concentration profiles of endogenous and exogenous breath markers were determined in exhaled breath of 31 lung cancer patients, 31 smokers and 31 healthy controls by means of SPME-GC-MS. Different correcting and normalization algorithms and a principal component analysis were applied to the data. Differences of exhalation profiles in cancer and non-cancer patients did not persist if physiology and confounding variables were taken into account. Smoking history, inspired substance concentrations, age and gender were recognized as the most important confounding variables. Normalization onto PCO2 or BSA or correction for inspired concentrations only partially solved the problem. In contrast, previous smoking behaviour could be recognized unequivocally. Exhaled substance concentrations may depend on a variety of parameters other than the disease under investigation. Normalization and correcting parameters have to be chosen with care as compensating effects may be different from one substance to the other. Only well-founded biomarker identification, normalization and data processing will provide clinically relevant information from breath analysis. 2010 Elsevier B.V. All rights reserved.

  9. SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.

    PubMed

    Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman

    2017-03-01

    We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).

  10. Simultaneous Co-Clustering and Classification in Customers Insight

    NASA Astrophysics Data System (ADS)

    Anggistia, M.; Saefuddin, A.; Sartono, B.

    2017-04-01

    Building a predictive model from a heterogeneous dataset may cause many problems, such as imprecise parameter estimates and poor prediction accuracy. Such problems can be solved by segmenting the data into relatively homogeneous groups and then building a predictive model for each cluster. This strategy usually yields simpler models that are more interpretable and more actionable, without any loss in accuracy or reliability. This work concerns a marketing dataset that records customer behaviour across products, with several variables describing customer and product attributes. The basic idea of this approach is to combine co-clustering and classification simultaneously. The objective of this research is to analyse customer characteristics across products so that the marketing strategy can be implemented precisely.

  11. Mathematical characterization of mechanical behavior of porous frictional granular media

    NASA Technical Reports Server (NTRS)

    Chung, T. J.; Lee, J. K.

    1972-01-01

    A new definition of loading and unloading along the yield surface of Roscoe and Burland is introduced. This is achieved by noting that the strain-hardening parameter in the plastic potential function is deduced from the yield locus equation of Roscoe and Burland. The analytical results are compared with the experimental results for plate-bearing and cone-penetrometer problems and close agreements are demonstrated. The wheel-soil interaction is studied under dynamic loading. The rate-dependent plasticity or viscoelastoplastic behavior is considered. This is accomplished by the internal (hidden) variables associated with time-dependent viscous properties directly superimposed with inelastic behavior governed by the yield criteria of Roscoe and Burland. Effects of inertia and energy dissipation are properly accounted for. Example problems are presented.

  12. [Changes and differences of heart rate variability of patients in a psychiatric rehabilitation clinic].

    PubMed

    Riffer, Friedrich; Streibl, Lore; Sprung, Manuel; Kaiser, Elmar; Riffer, Lena

    2016-12-01

    Reduced heart rate variability (HRV) has been associated with various pathological physical and psychological conditions and illnesses. The present study focuses on investigating HRV with respect to psychological disorders (depressive disorders, anxiety disorders, Burn-out-Syndrome). Results from an investigation of patients at a psychiatric rehabilitation clinic following a six-week in-patient treatment are presented. The results show relevant changes in HRV over the course of the rehabilitative treatment for patients with depressive disorders, anxiety disorders or Burn-out-Syndrome. Simultaneously, changes in HRV were linked with improvements in patients' psychological symptoms. Changes in HRV (i.e. an increase of relevant HRV parameters) were accompanied by a reduction of psychological strain as well as of the psychological and physical health problems that typically occur in Burn-out-Syndrome. Furthermore, changes in relevant HRV parameters were predictive of changes in psychological symptoms (depression, anxiety, phobia, burnout symptoms). With respect to the relationship between HRV and subjective data, the present study showed that primarily those HRV parameters that are based on parasympathetic activity are important (in terms of significant results). These results are interesting in the context of theories that view vagally mediated HRV as positively connected with self-regulation, adaptability and positive interpersonal interaction of individuals.

  13. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
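
    For the constrained least absolute deviation application mentioned above, a conventional baseline (not the recurrent neural network itself) is to pose the problem as a linear program with auxiliary variables bounding each residual; the data, the coefficient bounds, and the solver choice below are illustrative assumptions.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(5)
      n, p = 80, 3
      X = rng.normal(size=(n, p))
      y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=3, size=n)  # heavy tails

      # Least absolute deviation with bound constraints on the coefficients, written
      # as a linear program in (beta, t) with |y_i - x_i.beta| <= t_i.
      c = np.concatenate([np.zeros(p), np.ones(n)])
      A_ub = np.block([[X, -np.eye(n)],
                       [-X, -np.eye(n)]])
      b_ub = np.concatenate([y, -y])
      bounds = [(-5.0, 5.0)] * p + [(0.0, None)] * n

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
      print("constrained LAD coefficient estimates:", res.x[:p])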

  14. Equicontrollability and the model following problem

    NASA Technical Reports Server (NTRS)

    Curran, R. T.

    1971-01-01

    Equicontrollability and its application to the linear time-invariant model-following problem are discussed. The problem is presented in the form of two systems, the plant and the model. The requirement is to find a controller to apply to the plant so that the resultant compensated plant behaves, in an input-output sense, the same as the model. All systems are assumed to be linear and time-invariant. The basic approach is to find suitable equicontrollable realizations of the plant and model and to utilize feedback so as to produce a controller of minimal state dimension. The concept of equicontrollability is a generalization of control canonical (phase variable) form applied to multivariable systems. It allows one to visualize clearly the effects of feedback and to pinpoint the parameters of a multivariable system which are invariant under feedback. The basic contributions are the development of equicontrollable form; solution of the model-following problem in an entirely algorithmic way, suitable for computer programming; and resolution of questions on system decoupling.

  15. Methods of Fitting a Straight Line to Data: Examples in Water Resources

    USGS Publications Warehouse

    Hirsch, Robert M.; Gilroy, Edward J.

    1984-01-01

    Three methods of fitting straight lines to data are described and their purposes are discussed and contrasted in terms of their applicability in various water resources contexts. The three methods are ordinary least squares (OLS), least normal squares (LNS), and the line of organic correlation (OC). In all three methods the parameters are based on moment statistics of the data. When estimation of an individual value is the objective, OLS is the most appropriate. When estimation of many values is the objective and one wants the set of estimates to have the appropriate variance, then OC is most appropriate. When one wishes to describe the relationship between two variables and measurement error is unimportant, then OC is most appropriate. Where the error is important in descriptive problems or in calibration problems, then structural analysis techniques may be most appropriate. Finally, if the problem is one of describing some geographic trajectory, then LNS is most appropriate.
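
    Assuming the standard moment-based formulas for the three lines, a short sketch that computes all of them from the same sample statistics is given below; the synthetic data are only for demonstration.

      import numpy as np

      def line_fits(x, y):
          """Slope and intercept for OLS, the line of organic correlation (OC) and
          least normal squares (LNS, i.e. orthogonal regression), all built from
          moment statistics of the data."""
          xm, ym = x.mean(), y.mean()
          sxx, syy = x.var(), y.var()
          sxy = np.mean((x - xm) * (y - ym))
          r = sxy / np.sqrt(sxx * syy)

          b_ols = sxy / sxx
          b_oc = np.sign(r) * np.sqrt(syy / sxx)
          b_lns = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4.0 * sxy ** 2)) / (2.0 * sxy)
          return {name: (b, ym - b * xm)
                  for name, b in [("OLS", b_ols), ("OC", b_oc), ("LNS", b_lns)]}

      rng = np.random.default_rng(6)
      x = rng.normal(10.0, 3.0, 300)
      y = 1.8 * x + 4.0 + rng.normal(0.0, 5.0, 300)
      for name, (slope, intercept) in line_fits(x, y).items():
          print(f"{name}: slope {slope:.3f}, intercept {intercept:.2f}")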

  16. Nonlinear problems of the theory of heterogeneous slightly curved shells

    NASA Technical Reports Server (NTRS)

    Kantor, B. Y.

    1973-01-01

    An account is given of the variational method for the solution of physically and geometrically nonlinear problems of the theory of heterogeneous slightly curved shells. Examined are the bending and supercritical behavior of plates and conical and spherical cupolas of variable thickness in a temperature field, taking into account the dependence of the elastic parameters on temperature. Also considered are the bending, overall stability, and load-bearing capacity of flexible isotropic elastic-plastic shells with different criteria of plasticity, taking into account compressibility and hardening. The effect of the plastic heterogeneity caused by heat treatment, surface work hardening and irradiation by fast neutron flux is investigated. Some problems of the dynamic behavior of flexible shells are solved. Calculations are performed in high approximations. Considerable attention is given to the construction of a machine algorithm and to the checking of the convergence of iterative processes.

  17. Rapid Preliminary Design of Interplanetary Trajectories Using the Evolutionary Mission Trajectory Generator

    NASA Technical Reports Server (NTRS)

    Englander, Jacob

    2016-01-01

    Preliminary design of interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys, the bodies at which those flybys are performed, and in some cases the final destination. In addition, a time-history of control variables must be chosen that defines the trajectory. There are often many thousands, if not millions, of possible trajectories to be evaluated. This can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the mission design problem as a hybrid optimal control problem. The method is demonstrated on notional high-thrust chemical and low-thrust electric propulsion missions. In the low-thrust case, the hybrid optimal control problem is augmented to include systems design optimization.

  18. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    PubMed

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Contribution of LFP dynamics to single-neuron spiking variability in motor cortex during movement execution

    PubMed Central

    Rule, Michael E.; Vargas-Irwin, Carlos; Donoghue, John P.; Truccolo, Wilson

    2015-01-01

    Understanding the sources of variability in single-neuron spiking responses is an important open problem for the theory of neural coding. This variability is thought to result primarily from spontaneous collective dynamics in neuronal networks. Here, we investigate how well collective dynamics reflected in motor cortex local field potentials (LFPs) can account for spiking variability during motor behavior. Neural activity was recorded via microelectrode arrays implanted in ventral and dorsal premotor and primary motor cortices of non-human primates performing naturalistic 3-D reaching and grasping actions. Point process models were used to quantify how well LFP features accounted for spiking variability not explained by the measured 3-D reach and grasp kinematics. LFP features included the instantaneous magnitude, phase and analytic-signal components of narrow band-pass filtered (δ,θ,α,β) LFPs, and analytic signal and amplitude envelope features in higher-frequency bands. Multiband LFP features predicted single-neuron spiking (1ms resolution) with substantial accuracy as assessed via ROC analysis. Notably, however, models including both LFP and kinematics features displayed marginal improvement over kinematics-only models. Furthermore, the small predictive information added by LFP features to kinematic models was redundant to information available in fast-timescale (<100 ms) spiking history. Overall, information in multiband LFP features, although predictive of single-neuron spiking during movement execution, was redundant to information available in movement parameters and spiking history. Our findings suggest that, during movement execution, collective dynamics reflected in motor cortex LFPs primarily relate to sensorimotor processes directly controlling movement output, adding little explanatory power to variability not accounted by movement parameters. PMID:26157365

  20. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for the nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates considerably the optimization process because the linear parameters are not the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones and it does not apply to the model functions which are multi-linear combinations of nonlinear functions.
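
    The central idea (nonlinear parameters searched globally, linear amplitudes eliminated by linear least squares at every trial point) can be sketched for a biexponential decay. Here scipy's differential evolution stands in for the genetic algorithm, and the decay times, noise level, and bounds are illustrative assumptions.

      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(7)
      t = np.linspace(0.0, 10.0, 200)
      y = 3.0 * np.exp(-t / 0.7) + 1.0 * np.exp(-t / 3.5) + rng.normal(0.0, 0.02, t.size)

      def projected_residual(taus):
          """Chi-square with the linear amplitudes eliminated: for trial decay times,
          the best amplitudes follow from linear least squares (the MLR step), so the
          global optimiser searches only the nonlinear parameters."""
          basis = np.exp(-t[:, None] / np.asarray(taus)[None, :])
          amps, *_ = np.linalg.lstsq(basis, y, rcond=None)
          return np.sum((y - basis @ amps) ** 2)

      # differential_evolution is used here as a stand-in for the genetic algorithm.
      res = differential_evolution(projected_residual,
                                   bounds=[(0.1, 2.0), (2.0, 8.0)], seed=0)
      taus = res.x
      amps, *_ = np.linalg.lstsq(np.exp(-t[:, None] / taus[None, :]), y, rcond=None)
      print("decay times:", taus, "amplitudes:", amps)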

  1. GrammarViz 3.0: Interactive Discovery of Variable-Length Time Series Patterns

    DOE PAGES

    Senin, Pavel; Lin, Jessica; Wang, Xing; ...

    2018-02-23

    The problems of recurrent and anomalous pattern discovery in time series, e.g., motifs and discords, respectively, have received a lot of attention from researchers in the past decade. However, since the pattern search space is usually intractable, most existing detection algorithms require that the patterns have discriminative characteristics and have their lengths known in advance and provided as input, which is an unreasonable requirement for many real-world problems. In addition, patterns of similar structure, but of different lengths may co-exist in a time series. In order to address these issues, we have developed algorithms for variable-length time series pattern discovery that are based on symbolic discretization and grammar inference—two techniques whose combination enables the structured reduction of the search space and discovery of the candidate patterns in linear time. In this work, we present GrammarViz 3.0—a software package that provides implementations of the proposed algorithms and a graphical user interface for interactive variable-length time series pattern discovery. The current version of the software provides an alternative grammar inference algorithm that improves the time series motif discovery workflow, and introduces an experimental procedure for automated discretization parameter selection that builds upon the minimum cardinality maximum cover principle and aids the time series recurrent and anomalous pattern discovery.
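
    The symbolic discretization step of the pipeline can be sketched as a minimal SAX-style transform (z-normalisation, piecewise aggregate approximation, Gaussian breakpoints); the grammar inference stage is not shown, and the segment and alphabet sizes are illustrative assumptions.

      import numpy as np
      from scipy.stats import norm

      def sax_word(series, n_segments=16, alphabet_size=4):
          """Minimal SAX-style discretisation: z-normalise, reduce with piecewise
          aggregate approximation (PAA), then map segment means to letters using
          equiprobable Gaussian breakpoints."""
          x = (series - series.mean()) / (series.std() + 1e-12)
          paa = np.array([seg.mean() for seg in np.array_split(x, n_segments)])
          breakpoints = norm.ppf(np.linspace(0.0, 1.0, alphabet_size + 1)[1:-1])
          return "".join(chr(ord("a") + int(s)) for s in np.searchsorted(breakpoints, paa))

      rng = np.random.default_rng(8)
      ts = np.sin(np.linspace(0.0, 6.0 * np.pi, 512)) + 0.2 * rng.normal(size=512)
      print("SAX word:", sax_word(ts))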

  2. GrammarViz 3.0: Interactive Discovery of Variable-Length Time Series Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senin, Pavel; Lin, Jessica; Wang, Xing

    The problems of recurrent and anomalous pattern discovery in time series, e.g., motifs and discords, respectively, have received a lot of attention from researchers in the past decade. However, since the pattern search space is usually intractable, most existing detection algorithms require that the patterns have discriminative characteristics and have their lengths known in advance and provided as input, which is an unreasonable requirement for many real-world problems. In addition, patterns of similar structure, but of different lengths may co-exist in a time series. In order to address these issues, we have developed algorithms for variable-length time series pattern discovery that are based on symbolic discretization and grammar inference—two techniques whose combination enables the structured reduction of the search space and discovery of the candidate patterns in linear time. In this work, we present GrammarViz 3.0—a software package that provides implementations of the proposed algorithms and a graphical user interface for interactive variable-length time series pattern discovery. The current version of the software provides an alternative grammar inference algorithm that improves the time series motif discovery workflow, and introduces an experimental procedure for automated discretization parameter selection that builds upon the minimum cardinality maximum cover principle and aids the time series recurrent and anomalous pattern discovery.

  3. Modeling of polymer photodegradation for solar cell modules

    NASA Technical Reports Server (NTRS)

    Somersall, A. C.; Guillet, J. E.

    1982-01-01

    It was shown that many of the experimental observations in the photooxidation of hydrocarbon polymers can be accounted for with a computer simulation using an elementary mechanistic model with corresponding rate constants for each reaction. For outdoor applications, however, such as in photovoltaics, the variation of temperature must have important effects on the useful lifetimes of such materials. A search was made for the data bank necessary to replace the isothermal rate constant values with the Arrhenius parameters A (the pre-exponential factor) and E (the activation energy). The best collection of data assembled to date is summarized. Note, however, that the problem is now considerably enlarged from a theoretical point of view, with the 51 input variables replaced by 102 parameters. The sensitivity of the overall scheme is such that even after many computer simulations, a successful photooxidation simulation with the expanded variable set was not completed. Many of the species in the complex process undergo a number of competitive pathways, the relative importance of each often being sensitive to small changes in the calculated rate constant values.
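
    The replacement described here amounts to swapping each isothermal rate constant k for the pair (A, E) in the Arrhenius law k(T) = A exp(-E / (R T)), which is why 51 variables become 102 parameters. The sketch below shows the conversion, with the two measured rate constants chosen purely for illustration.

      import numpy as np

      R = 8.314  # gas constant, J / (mol K)

      def arrhenius_k(A, E, T):
          """Rate constant from the Arrhenius parameters: k(T) = A * exp(-E / (R T))."""
          return A * np.exp(-E / (R * T))

      def arrhenius_from_two_points(k1, T1, k2, T2):
          """Recover (A, E) from rate constants measured at two temperatures, as needed
          when replacing one isothermal value by the two Arrhenius parameters."""
          E = R * np.log(k1 / k2) / (1.0 / T2 - 1.0 / T1)
          A = k1 * np.exp(E / (R * T1))
          return A, E

      A, E = arrhenius_from_two_points(k1=1.0e-3, T1=298.0, k2=5.0e-3, T2=323.0)
      print(f"A = {A:.3e} s^-1, E = {E / 1000.0:.1f} kJ/mol, "
            f"k(310 K) = {arrhenius_k(A, E, 310.0):.2e} s^-1")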

  4. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 2. Case study

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Neff, Christina R.

    1994-05-01

    The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1 is applied to the Upper Floridan Aquifer in NE Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated, when the effects of random recharge are neglected.

  5. Ideal, nonideal, and no-marker variables: The confirmatory factor analysis (CFA) marker technique works when it matters.

    PubMed

    Williams, Larry J; O'Boyle, Ernest H

    2015-09-01

    A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than that found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved).

  6. A modified hybrid uncertain analysis method for dynamic response field of the LSOAAC with random and interval parameters

    NASA Astrophysics Data System (ADS)

    Zi, Bin; Zhou, Bin

    2016-07-01

    For the prediction of the dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, the parameters with certain probability distribution are modeled as random variables, whereas the parameters with lower and upper bounds are modeled as interval variables instead of given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in input and output terms, is constructed. Then a modified hybrid uncertain analysis method (MHUAM) is proposed. In the MHUAM, based on the random interval perturbation method, the first-order Taylor series expansion and the first-order Neumann series, the dynamic response expression of the LSOAAC is developed. Moreover, the mathematical characteristics of extrema of bounds of dynamic response are determined by the random interval moment method and the monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving the hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated deeply, and numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder F is larger than that made by the gravity of the weight in suspension Q. In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder a is larger than that made by the length of the lifting arm L.

  7. Pediatric sleep problems and social-emotional problems. A population-based study.

    PubMed

    Hysing, Mari; Sivertsen, Børge; Garthus-Niegel, Susan; Eberhard-Gran, Malin

    2016-02-01

    To examine the association between sleep and social-emotional development in two-year-old toddlers. The study is part of a longitudinal cohort study, the Akershus Birth Cohort Study, which targeted all women giving birth at Akershus University Hospital in Norway. The current study is from the fourth round of the study, including 2014 women two years after delivery. The Brief Infant Sleep Questionnaire (BISQ) and the Ages and Stages Questionnaire: Social Emotional (ASQ:SE) were filled out by the mothers and were used to assess toddler sleep, and social-emotional development, respectively. Other domains of development (communication problems, gross motor problems, and fine motor problems) were assessed with the Ages and Stages Questionnaire (ASQ). Confirmatory factor analysis was conducted on the ASQ:SE, and logistic regression analyses were used to examine both crude associations between sleep variables and social-emotional problems, and adjusting for potential confounders. The mean sleep duration of the toddlers was 12h and 27 min; the majority of the children (54%) had 1-2 awakenings per night, while 10% of the children had a sleep onset latency of more than 30 min. All sleep parameters, including short sleep duration, nocturnal awakenings and sleep onset problems, were significantly associated with social-emotional problems in a dose-response manner. For example, sleeping less than 11h per night was associated with a five-fold increase in the odds of social-emotional problems, compared to sleeping 13-14 h per night. Adjusting for potential confounders, including maternal age, maternal education, marital status, parity, gestational age, child birth-weight and other developmental problems, did not, or only slightly, attenuate the associations between any of the sleep variables and social-emotional problems. Short sleep duration, nocturnal awakenings and sleep onset problems were all associated with higher odds of social-emotional problems, even after accounting for developmental problems and demographic factors. Thus, a broad assessment of sleep and social-emotional problems when toddlers present with either can be useful. Copyright © 2016. Published by Elsevier Inc.

  8. Influences of system uncertainties on the numerical transfer path analysis of engine systems

    NASA Astrophysics Data System (ADS)

    Acri, A.; Nijman, E.; Acri, A.; Offner, G.

    2017-10-01

    Practical mechanical systems operate with some degree of uncertainty. In numerical models uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs or from rapidly changing forcing that can be best described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In particular in this paper, Wishart random matrix theory is applied on a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool largely implemented during the design of new engines. In this paper the influence of model parameter variability on the results obtained from the multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and the assessment of the different engine vibration sources. Examples of the effects of different levels of uncertainties are illustrated by means of examples using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and statistical distribution of results. The derived statistical information can be used to advance the knowledge of the multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.
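
    The Wishart construction can be sketched directly with scipy: a nominal (mean) system matrix is perturbed by drawing samples whose expectation equals the nominal matrix, with the degrees of freedom controlling the scatter. The nominal matrix and the degrees of freedom below are illustrative assumptions, not values from the powertrain model.

      import numpy as np
      from scipy.stats import wishart

      # Nominal (mean) stiffness-like matrix of a small subsystem, illustrative only.
      K_nominal = np.array([[4.0, -1.0, 0.0],
                            [-1.0, 3.0, -1.0],
                            [0.0, -1.0, 2.0]])

      df = 50                        # larger df means less scatter about the nominal
      scale = K_nominal / df         # Wishart mean is df * scale = K_nominal

      samples = wishart(df=df, scale=scale).rvs(size=200, random_state=42)

      # Each sample is a symmetric positive-definite perturbation of K_nominal that
      # could be fed to a multi-body / transfer path model in a Monte Carlo loop.
      print("mean of sampled matrices:\n", samples.mean(axis=0).round(2))
      print("std of the (0, 0) entry:", samples[:, 0, 0].std().round(3))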

  9. SPECT System Optimization Against A Discrete Parameter Space

    PubMed Central

    Meng, L. J.; Li, N.

    2013-01-01

    In this paper, we present an analytical approach for optimizing the design of a static SPECT system or optimizing the sampling strategy with a variable/adaptive SPECT imaging hardware against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced an artificial concept of virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of the virtual detectors, one can convert the task of system optimization into a process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Thirdly, the optimization problem (finding the optimum ITD) could be solved with a block-iterative approach or other non-linear optimization algorithms. In essence, the resultant optimum ITD could provide a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and help to identify the system configuration or sampling strategy that leads to an optimum imaging performance. Although we are using SPECT imaging as a platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609

  10. Event-Based Variance-Constrained $\mathcal{H}_\infty$ Filtering for Stochastic Parameter Systems Over Sensor Networks With Successive Missing Measurements.

    PubMed

    Wang, Licheng; Wang, Zidong; Han, Qing-Long; Wei, Guoliang

    2018-03-01

    This paper is concerned with the distributed filtering problem for a class of discrete time-varying stochastic parameter systems with error variance constraints over a sensor network where the sensor outputs are subject to successive missing measurements. The phenomenon of the successive missing measurements for each sensor is modeled via a sequence of mutually independent random variables obeying the Bernoulli binary distribution law. To reduce the frequency of unnecessary data transmission and alleviate the communication burden, an event-triggered mechanism is introduced for the sensor node such that only some vitally important data is transmitted to its neighboring sensors when specific events occur. The objective of the problem addressed is to design a time-varying filter such that both the requirements and the variance constraints are guaranteed over a given finite-horizon against the random parameter matrices, successive missing measurements, and stochastic noises. By recurring to stochastic analysis techniques, sufficient conditions are established to ensure the existence of the time-varying filters whose gain matrices are then explicitly characterized in term of the solutions to a series of recursive matrix inequalities. A numerical simulation example is provided to illustrate the effectiveness of the developed event-triggered distributed filter design strategy.

  11. A rugged landscape model for self-organization and emergent leadership in creative problem solving and production groups.

    PubMed

    Guastello, Stephen J; Craven, Joanna; Zygowicz, Karen M; Bock, Benjamin R

    2005-07-01

    The process by which an initially leaderless group differentiates into one containing leadership and secondary role structures was examined using the swallowtail catastrophe model and principles of self-organization. The objectives were to identify the control variables in the process of leadership emergence in creative problem solving groups and production groups. In the first of two experiments, groups of university students (total N = 114) played a creative problem solving game. Participants later rated each other on leadership behavior, styles, and variables related to the process of conversation. A performance quality measure was also included. Control parameters in the swallowtail catastrophe model were identified through a combination of factor analysis and nonlinear regression. Leaders displayed a broad spectrum of behaviors in the general categories of Controlling the Conversation and Creativity in their role-play. In the second experiment, groups of university students (total N = 197) engaged in a laboratory work experiment that had a substantial production goal component. The same system of ratings and modeling strategy was used along with a work production measure. Leaders in the production task emerged to the extent that they exhibited control over both the creative and production aspects of the task, they could keep tension low, and the externally imposed production goals were realistic.

  12. High performance GPU processing for inversion using uniform grid searches

    NASA Astrophysics Data System (ADS)

    Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios

    2017-04-01

    Many geophysical problems are described by redundant, highly non-linear systems of ordinary equations with constant terms deriving from measurements and hence representing stochastic variables. Solution (inversion) of such problems relies on numerical optimization methods, based on Monte Carlo sampling or on exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid search-based technique in the R^n space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and through repeated "scans" of n-dimensional search grids for decreasing values of k to identify the optimal clusters of gridpoints which satisfy the observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time consuming for common computers based on a CPU. An alternative is to use a computing platform based on a GPU, which is nowadays affordable to the research community and provides much higher computing performance. Using the CUDA programming language to implement TOPINV allows the investigation of the attained speedup in execution time on such a high performance platform. Based on synthetic data we compared the execution time required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms. The same problems for several different sizes of search grids (up to 10^12 gridpoints) and numbers of unknown variables were solved on both platforms, and execution time as a function of the grid dimension for each problem was recorded. Results indicate an average speedup in calculations by a factor of 100 on the GPU platform; for example, problems with 10^12 grid-points require less than two hours instead of several days on conventional desktop computers. Such a speedup encourages the application of TOPINV on high performance platforms, such as a GPU, in cases where nearly real time decisions are necessary, for example finite fault modeling to identify possible tsunami sources.
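
    The inequality-based grid scan at the heart of TOPINV can be sketched for a toy problem as follows (CPU-only, with an invented one-dimensional forward model; the real applications use magma-source and fault models on GPUs). Grid points satisfying |f_i(m) - d_i| <= k*sigma_i for all observations form the cluster whose first and second moments give the stochastic solution.

```python
import numpy as np

def topinv_like_scan(forward, data, sigma, grids, k):
    """Exhaustive grid scan in the spirit of TOPINV (simplified CPU sketch).

    Keeps every grid point m for which |forward(m) - data| <= k*sigma holds
    for all observations, then returns the mean and covariance of that cluster.
    """
    mesh = np.meshgrid(*grids, indexing="ij")
    points = np.stack([g.ravel() for g in mesh], axis=1)       # (N, n_params)
    preds = np.array([forward(m) for m in points])              # (N, n_obs)
    ok = np.all(np.abs(preds - data) <= k * sigma, axis=1)      # observation inequalities
    cluster = points[ok]
    if len(cluster) == 0:
        return None, None
    return cluster.mean(axis=0), np.cov(cluster.T)

# Toy forward model: displacements from a 1-D "source" with position x0 and strength s.
stations = np.linspace(-10.0, 10.0, 8)
def forward(m):
    x0, s = m
    return s / (1.0 + (stations - x0) ** 2)

true_m = np.array([2.0, 5.0])
sigma = 0.05 * np.ones(len(stations))
data = forward(true_m) + 0.02 * np.random.default_rng(0).standard_normal(len(stations))

grids = (np.linspace(-5, 5, 201), np.linspace(0, 10, 201))
mean, cov = topinv_like_scan(forward, data, sigma, grids, k=2.0)
print("estimated source parameters:", mean)
```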

  13. Pore structures in an implantable sol gel titania ceramic device used in controlled drug release applications: A modeling study

    NASA Astrophysics Data System (ADS)

    Peterson, Aaron; Lopez, Tessy; Islas, Emma Ortiz; Gonzalez, Richard D.

    2007-04-01

    Several process variables that may be helpful in optimizing the rate at which drugs are released from implantable sol-gel titania devices have been identified in this study. The controlled rate of drug release is compared for two different anticonvulsant drugs, valproic acid and sodic phenytoin. Contrary to what one might expect, when the concentration is increased in the titania reservoir the rate of initial drug delivery decreases. This is a desirable result, because it may reduce the danger of a high initial discharge, which may harm the epileptic rat. The porous structure within the titania network has been studied using a generalized form of the BET equation which considers only n layers. In general, following an initial discharge, the rate at which the drug is released will increase with increasing concentration. Pore mouth blocking can present a problem. However, this problem tends to disappear following the initial discharge. The extent of drug loading is a useful variable parameter, which can be adjusted in order to deliver the amount of drug required in a given application.
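
    The n-layer form of the BET isotherm referred to above is commonly written as shown in the sketch below; the constant c and the layer number n are illustrative inputs, and the expression is the textbook finite-layer BET equation rather than anything specific to the titania devices studied here.

```python
import numpy as np

def bet_n_layers(x, c, n):
    """Generalized BET isotherm restricted to n adsorbed layers.

    x : relative pressure p/p0 (0 <= x < 1)
    c : BET energy constant
    n : maximum number of layers
    Returns v/v_m, the coverage relative to the monolayer capacity.
    """
    x = np.asarray(x, dtype=float)
    num = c * x * (1.0 - (n + 1.0) * x**n + n * x**(n + 1.0))
    den = (1.0 - x) * (1.0 + (c - 1.0) * x - c * x**(n + 1.0))
    return num / den

# Example: coverage at a few relative pressures for c = 50 and n = 3 layers.
for x in (0.05, 0.2, 0.5, 0.8):
    print(f"p/p0 = {x:4.2f}  v/vm = {bet_n_layers(x, c=50.0, n=3):6.3f}")
```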

  14. Taguchi's off line method and Multivariate loss function approach for quality management and optimization of process parameters -A review

    NASA Astrophysics Data System (ADS)

    Bharti, P. K.; Khan, M. I.; Singh, Harbinder

    2010-10-01

    Off-line quality control is considered to be an effective approach to improve product quality at a relatively low cost. The Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response can be reduced and the mean is close to the desired target. The traditional Taguchi method was focused on ensuring good performance at the parameter design stage with one quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design minimizes the total quality loss for multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics. Most of these papers were concerned with finding the parameter combination that maximizes the signal-to-noise (SN) ratios. The results reveal two advantages of this approach: the optimal parameter design coincides with that of the traditional Taguchi method for a single quality characteristic, and it maximizes the reduction of total quality loss for multiple quality characteristics. This paper presents a literature review on solving multi-response problems in the Taguchi method and its successful implementation in various industries.
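
    The signal-to-noise ratios and the multivariate quadratic loss that appear throughout the Taguchi literature can be computed directly; the sketch below uses the standard textbook S/N definitions and an assumed weighted loss with invented replicate data.

```python
import numpy as np

def sn_smaller_the_better(y):
    """S/N ratio when the ideal value of the characteristic is zero."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

def sn_larger_the_better(y):
    """S/N ratio when larger responses are better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_nominal_the_best(y):
    """S/N ratio when the response should sit on a nominal target."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean()**2 / y.var(ddof=1))

def total_quality_loss(responses, targets, weights):
    """Weighted multivariate quadratic loss: sum_j w_j * E[(y_j - t_j)^2]."""
    loss = 0.0
    for y, t, w in zip(responses, targets, weights):
        y = np.asarray(y, dtype=float)
        loss += w * np.mean((y - t)**2)
    return loss

# Replicated measurements of two quality characteristics at one parameter setting.
surface_roughness = [1.8, 2.1, 1.9, 2.0]      # smaller is better
tensile_strength  = [510., 495., 505., 500.]  # nominal-the-best, target 500
print("SN (roughness):", sn_smaller_the_better(surface_roughness))
print("SN (strength) :", sn_nominal_the_best(tensile_strength))
print("total loss    :", total_quality_loss(
    [surface_roughness, tensile_strength], targets=[0.0, 500.0], weights=[1.0, 0.01]))
```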

  15. Possible influences of exercise-intensity-dependent increases in non-cortical hemodynamic variables on NIRS-based neuroimaging analysis during cognitive tasks: Technical note

    PubMed Central

    Byun, Kyeongho; Hyodo, Kazuki; Suwabe, Kazuya; Kujach, Sylwester; Kato, Morimasa; Soya, Hideaki

    2014-01-01

    [Purpose] Functional near-infrared spectroscopy (fNIRS) provides functional imaging of cortical activations by measuring regional oxy- and deoxy-hemoglobin (Hb) changes in the forehead during a cognitive task. There are, however, potential problems regarding NIRS signal contamination by non-cortical hemodynamic (NCH) variables such as skin blood flow, middle cerebral artery blood flow, and heart rate (HR), which are further complicated during acute exercise. It is thus necessary to determine the appropriate post-exercise timing that allows for valid NIRS assessment during a task without any increase in NCH variables. Here, we monitored post-exercise changes in NCH parameters with different intensities of exercise. [Methods] Fourteen healthy young participants cycled at 30, 50 and 70% of their peak oxygen uptake (Vo2peak) for 10 min per intensity, each on different days. Changes in skin blood flow velocity (SBFv), middle cerebral artery mean blood velocity (MCA Vmean) and HR were monitored before, during, and after the exercise. [Results] Post-exercise levels of both SBFv and HR, in contrast to MCA Vmean, remained high compared to basal levels, and the times taken to return to baseline levels for both parameters were delayed (2-8 min after exercise), depending upon exercise intensity. [Conclusion] These results indicate that the delayed clearance of NCH variables of up to 8 min into the post-exercise phase may contaminate NIRS measurements, and could be a limitation of NIRS-based neuroimaging studies. PMID:25671198

  16. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    NASA Astrophysics Data System (ADS)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function network, generalized regression neural network, functional networks, support vector regression and adaptive network fuzzy inference system. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in recent reservoir characterization workflows ensures consistency between micro- and macro-scale information represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts with 80% of data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were produced. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
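
    A hedged sketch of the feed-forward-network workflow described above (log-transformed permeability target, 80/20 split) is given below. The data are synthetic stand-ins generated for illustration, not the Arab D dataset, and the network size and hyper-parameters are arbitrary choices.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)

# Synthetic stand-in for the real dataset: porosity, grain density and three
# Thomeer-style parameters mapped to a log-permeability with noise.
n = 500
X = np.column_stack([
    rng.uniform(0.05, 0.30, n),     # air porosity (fraction)
    rng.uniform(2.60, 2.80, n),     # grain density (g/cc)
    rng.uniform(0.1, 2.0, n),       # Thomeer-like pore-geometry factor
    rng.uniform(10., 1000., n),     # Thomeer-like entry pressure (psi)
    rng.uniform(0.02, 0.25, n),     # Thomeer-like bulk volume fraction
])
log_k = 4.0 * X[:, 0] - 0.002 * X[:, 3] + 3.0 * X[:, 4] + 0.1 * rng.standard_normal(n)

# Log-transformed target, 80/20 split, simple feed-forward network.
X_tr, X_te, y_tr, y_te = train_test_split(X, log_k, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0))
model.fit(X_tr, y_tr)
k_pred = 10.0 ** model.predict(X_te)
k_true = 10.0 ** y_te
print("mean absolute percentage error on permeability:",
      mean_absolute_percentage_error(k_true, k_pred))
```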

  17. Control of minimum member size in parameter-free structural shape optimization by a medial axis approximation

    NASA Astrophysics Data System (ADS)

    Schmitt, Oliver; Steinmann, Paul

    2018-06-01

    We introduce a manufacturing constraint for controlling the minimum member size in structural shape optimization problems, which is, for example, of interest for components fabricated in a molding process. In a parameter-free approach, whereby the coordinates of the FE boundary nodes are used as design variables, the challenging task is to find a generally valid definition for the thickness of non-parametric geometries in terms of their boundary nodes. Therefore we use the medial axis, which is the union of all points with at least two closest points on the boundary of the domain. Since the effort for the exact computation of the medial axis of geometries given by their FE discretization increases sharply with the number of surface elements, we use the distance function instead to approximate the medial axis by a cloud of points. The approximation is demonstrated on three 2D examples. Moreover, the formulation of a minimum thickness constraint is applied to a sensitivity-based shape optimization problem of one 2D and one 3D model.
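
    The distance-function approximation of the medial axis can be illustrated with a simple point-cloud detector: the medial axis is where the nearest-boundary-point map jumps. This is only an illustration of the idea on a rectangle, not the authors' algorithm; the jump criterion and all names are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def approximate_medial_axis(boundary_pts, grid_x, grid_y, jump_factor=10.0):
    """Approximate the medial axis of a 2-D domain from its boundary nodes.

    A grid point (assumed to lie inside the domain) is flagged as a
    medial-axis candidate when the nearest boundary node changes abruptly
    between neighbouring grid points, i.e. where the nearest-point map of
    the distance function is discontinuous.
    """
    tree = cKDTree(boundary_pts)
    X, Y = np.meshgrid(grid_x, grid_y, indexing="ij")
    pts = np.column_stack([X.ravel(), Y.ravel()])
    _, idx = tree.query(pts)
    nearest = boundary_pts[idx].reshape(X.shape + (2,))
    h = max(grid_x[1] - grid_x[0], grid_y[1] - grid_y[0])
    jump_x = np.linalg.norm(nearest[1:, :] - nearest[:-1, :], axis=-1)
    jump_y = np.linalg.norm(nearest[:, 1:] - nearest[:, :-1], axis=-1)
    flag = np.zeros(X.shape, dtype=bool)
    flag[:-1, :] |= jump_x > jump_factor * h
    flag[:, :-1] |= jump_y > jump_factor * h
    return np.column_stack([X[flag], Y[flag]])

# Example: 2 x 1 rectangle; the medial axis lies mainly on the horizontal mid-line.
t = np.linspace(0.0, 1.0, 200, endpoint=False)
boundary = np.vstack([
    np.column_stack([2 * t, np.zeros_like(t)]),   # bottom edge
    np.column_stack([2 * t, np.ones_like(t)]),    # top edge
    np.column_stack([np.zeros_like(t), t]),       # left edge
    np.column_stack([2 * np.ones_like(t), t]),    # right edge
])
gx = np.linspace(0.02, 1.98, 120)
gy = np.linspace(0.02, 0.98, 60)
axis_pts = approximate_medial_axis(boundary, gx, gy)
print(axis_pts.shape[0], "medial-axis candidate points, mean y =", axis_pts[:, 1].mean().round(3))
```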

  18. Hard Constraints in Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2008-01-01

    This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.

  19. Automated Generation of Finite-Element Meshes for Aircraft Conceptual Design

    NASA Technical Reports Server (NTRS)

    Li, Wu; Robinson, Jay

    2016-01-01

    This paper presents a novel approach for automated generation of fully connected finite-element meshes for all internal structural components and skins of a given wing-body geometry model, controlled by a few conceptual-level structural layout parameters. Internal structural components include spars, ribs, frames, and bulkheads. Structural layout parameters include spar/rib locations in wing chordwise/spanwise direction and frame/bulkhead locations in longitudinal direction. A simple shell thickness optimization problem with two load conditions is used to verify versatility and robustness of the automated meshing process. The automation process is implemented in ModelCenter starting from an OpenVSP geometry and ending with a NASTRAN 200 solution. One subsonic configuration and one supersonic configuration are used for numerical verification. Two different structural layouts are constructed for each configuration and five finite-element meshes of different sizes are generated for each layout. The paper includes various comparisons of solutions of 20 thickness optimization problems, as well as discussions on how the optimal solutions are affected by the stress constraint bound and the initial guess of design variables.

  20. Control of minimum member size in parameter-free structural shape optimization by a medial axis approximation

    NASA Astrophysics Data System (ADS)

    Schmitt, Oliver; Steinmann, Paul

    2017-09-01

    We introduce a manufacturing constraint for controlling the minimum member size in structural shape optimization problems, which is, for example, of interest for components fabricated in a molding process. In a parameter-free approach, whereby the coordinates of the FE boundary nodes are used as design variables, the challenging task is to find a generally valid definition for the thickness of non-parametric geometries in terms of their boundary nodes. Therefore we use the medial axis, which is the union of all points with at least two closest points on the boundary of the domain. Since the effort for the exact computation of the medial axis of geometries given by their FE discretization increases sharply with the number of surface elements, we use the distance function instead to approximate the medial axis by a cloud of points. The approximation is demonstrated on three 2D examples. Moreover, the formulation of a minimum thickness constraint is applied to a sensitivity-based shape optimization problem of one 2D and one 3D model.

  1. Vertical Variability of Rain Drop Size Distribution from Micro Rain Radar Measurements during IFloodS

    NASA Astrophysics Data System (ADS)

    Adirosi, Elisa; Tokay, Ali; Roberto, Nicoletta; Gorgucci, Eugenio; Montopoli, Mario; Baldini, Luca

    2017-04-01

    Ground-based weather radars are widely used to generate rainfall products for meteorological and hydrological applications. However, weather radar quantitative rainfall estimation is obtained at a certain altitude that depends mainly on the radar elevation angle and on the distance from the radar. Therefore, depending on the vertical variability of rainfall, a time-height ambiguity between radar measurement and rainfall at the ground can affect the rainfall products. Vertically pointing radars (such as the Micro Rain Radar, MRR) are a great tool to investigate the vertical variability of rainfall and its characteristics and, ultimately, to fill the gap between the ground level and the first available radar elevation. Furthermore, the knowledge of rain Drop Size Distribution (DSD) variability is linked to the well-known problem of non-uniform beam filling, which is one of the main uncertainties of the Global Precipitation Measurement (GPM) mission Dual-frequency Precipitation Radar (DPR). During the GPM Ground Validation Iowa Flood Studies (IFloodS) field experiment, data collected with 2D video disdrometers (2DVD), Autonomous OTT Parsivel2 Units (APU), and MRR profilers at different sites were available. At three different sites, co-located APU, 2DVD and MRR instruments are available and are covered by the S-band Dual Polarimetric Doppler radar (NPOL). The first elevation height of the radar beam varies, among the three sites, between 70 m and 1100 m. The IFloodS set-up has been used to compare disdrometer, MRR and NPOL data and to evaluate the uncertainties of those measurements. First, the performance of the disdrometers and MRR in determining different rainfall parameters at the ground has been evaluated, and then the MRR-based parameters have been compared with the ones obtained from NPOL data at the lowest elevations. Furthermore, the vertical variability of DSD and integral rainfall parameters within the MRR bins (from the ground to 1085 m, every 35 m) has been investigated in order to provide some insight into the variability of the rainfall microphysical characteristics within about 1 km above the ground.

  2. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

    Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decision making. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters, and the predictions that depend on them, arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available with the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as optimization of data acquisition for reducing parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.

  3. Mathematics of tsunami: modelling and identification

    NASA Astrophysics Data System (ADS)

    Krivorotko, Olga; Kabanikhin, Sergey

    2015-04-01

    Tsunami (long waves in deep water) motion caused by underwater earthquakes is described by the shallow water equations $\eta_{tt} = \operatorname{div}(gH(x,y)\,\nabla\eta)$, $(x,y) \in \Omega$, $t \in (0,T)$; $\eta|_{t=0} = q(x,y)$, $\eta_t|_{t=0} = 0$, $(x,y) \in \Omega$ (1). Bottom relief characteristics H(x,y) and the initial perturbation data (a tsunami source q(x,y)) are required for the direct simulation of tsunamis. The main difficulty of tsunami modelling is the very large size of the computational domain (Ω = 500 × 1000 kilometres in space and about one hour of computational time T for one metre of initial perturbation amplitude max|q|). The calculation of the function η(x,y,t) of three variables in Ω × (0,T) requires large computing resources. We construct a new algorithm for numerically determining the moving tsunami wave height S(x,y), based on a kinematic-type approach and an analytical representation of the fundamental solution. The proposed algorithm for determining the function of two variables S(x,y) reduces the number of operations by a factor of 1.5 compared with solving problem (1). If all functions do not depend on the variable y (the one-dimensional case), then the moving tsunami wave height satisfies the well-known Airy-Green formula $S(x) = S(0)\sqrt[4]{H(0)/H(x)}$. The problem of identifying the parameters of a tsunami source using additional measurements of a passing wave is called the inverse tsunami problem. We investigate two different inverse problems of determining a tsunami source q(x,y) using two different types of additional data: Deep-ocean Assessment and Reporting of Tsunamis (DART) measurements and satellite altimeter wave-form images. These problems are severely ill-posed. The main idea consists of combining the two types of measured data to reconstruct the source parameters. We apply regularization techniques to control the degree of ill-posedness, such as Fourier expansion, truncated singular value decomposition and numerical regularization. The algorithm for selecting the truncation number of singular values of the inverse problem operator, consistent with the error level in the measured data, is described and analysed. In the numerical experiments we used the conjugate gradient method for solving the inverse tsunami problems. Gradient methods are based on minimizing the corresponding misfit function; to calculate the gradient of the misfit function, the adjoint problem is solved. Conservative finite-difference schemes for solving the direct and adjoint problems in the shallow water approximation are constructed. Results of numerical experiments on tsunami source reconstruction are presented and discussed. We show that using a combination of the two types of data allows one to increase the stability and efficiency of the tsunami source reconstruction. The non-profit organization WAPMERR (World Agency of Planetary Monitoring and Earthquake Risk Reduction), in collaboration with the Institute of Computational Mathematics and Mathematical Geophysics of SB RAS, developed the Integrated Tsunami Research and Information System (ITRIS) to simulate tsunami waves and earthquakes, river course changes, coastal zone floods, and risk estimates for coastal constructions under wave run-ups and earthquakes. Special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. We demonstrate the tsunami simulation plug-in for historical tsunami events (the 2004 Indian Ocean tsunami, the 2006 Simushir tsunami and others).
This work was supported by the Ministry of Education and Science of the Russian Federation.
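
    The truncated-SVD regularization with a discrepancy-type choice of the truncation number, mentioned above, can be sketched as follows. This is a generic illustration on a synthetic smoothing operator, not the actual tsunami source operator; the kernel, noise level and function names are assumptions.

```python
import numpy as np

def tsvd_solve(A, d, delta):
    """Truncated-SVD solution of an ill-posed linear system A q = d.

    The truncation number is chosen by a discrepancy-type rule: keep the
    smallest number of singular values for which the residual drops to the
    noise level delta (if it never does, all singular values are kept).
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = U.T @ d
    best = None
    for r in range(1, len(s) + 1):
        q = Vt[:r].T @ (coeffs[:r] / s[:r])
        if np.linalg.norm(A @ q - d) <= delta:
            return q, r
        best = (q, r)
    return best

# Ill-posed toy problem: a severely smoothing kernel plus noisy data.
rng = np.random.default_rng(0)
n = 60
x = np.linspace(0, 1, n)
A = np.exp(-80.0 * (x[:, None] - x[None, :]) ** 2) / n     # smoothing operator
q_true = np.sin(2 * np.pi * x)                             # "source" to recover
noise = 1e-3 * rng.standard_normal(n)
d = A @ q_true + noise
q_rec, rank = tsvd_solve(A, d, delta=np.linalg.norm(noise))
print("truncation number:", rank, " relative error:",
      np.linalg.norm(q_rec - q_true) / np.linalg.norm(q_true))
```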

  4. Variable magnetic field (VMF) effect on the heat transfer of a half-annulus cavity filled by Fe3O4-water nanofluid under constant heat flux

    NASA Astrophysics Data System (ADS)

    Hatami, M.; Zhou, J.; Geng, J.; Jing, D.

    2018-04-01

    In this paper, the effect of a variable magnetic field (VMF) on the natural convection heat transfer of Fe3O4-water nanofluid in a half-annulus cavity is studied by the finite element method using the FlexPDE commercial code. After deriving the governing equations and solving the problem with the defined boundary conditions, the effects of three main parameters (Hartmann number (Ha), nanoparticle volume fraction (φ) and Rayleigh number (Ra)) on the local and average Nusselt numbers of the inner wall are investigated. As a main outcome, the results confirm that at low Eckert numbers, increasing the Hartmann number decreases the Nusselt number due to the Lorentz force resulting from the presence of a stronger magnetic field.

  5. Axisymmetric deformation in a micropolar thermoelastic medium under fractional order theory of thermoelasticity

    NASA Astrophysics Data System (ADS)

    Kumar, Rajneesh; Singh, Kulwinder; Pathania, Devinder Singh

    2017-07-01

    The purpose of this paper is to study the variations in temperature, radial and normal displacement, normal stress, shear stress and couple stress in a micropolar thermoelastic solid in the context of the fractional order theory of thermoelasticity. An eigenvalue approach together with the Laplace and Hankel transforms is employed to obtain the general solution of the problem. The field variables corresponding to different fractional order theories of thermoelasticity have been obtained in the transformed domain. The general solution is applied to an infinite space subjected to a concentrated load at the origin. To obtain the solution in the physical domain, a numerical inversion technique has been applied, and the numerically computed results are depicted graphically to analyze the effects of the fractional order parameter on the field variables.

  6. Modern aspects of homogeneous-heterogeneous reactions and variable thickness in nanofluids through carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Hayat, Tasawar; Ahmed, Sohail; Muhammad, Taseer; Alsaedi, Ahmed

    2017-10-01

    This article examines homogeneous-heterogeneous reactions and internal heat generation in Darcy-Forchheimer flow of nanofluids with different base fluids. Flow is generated due to a nonlinear stretchable surface of variable thickness. The characteristics of the nanofluid are explored using CNTs (single- and multi-walled carbon nanotubes). Equal diffusion coefficients are considered for both reactants and the autocatalyst. The conversion of partial differential equations (PDEs) to ordinary differential equations (ODEs) is done via appropriate transformations. The optimal homotopy approach is implemented to develop solutions of the governing problems. Averaged square residual errors are computed. The optimal solution expressions of velocity, temperature and concentration are explored through plots by using several values of physical parameters. Further, the skin friction coefficient and local Nusselt number are examined through graphs.

  7. On chemical reaction and porous medium effect in the MHD flow due to a rotating disk with variable thickness

    NASA Astrophysics Data System (ADS)

    Hayat, Tasawar; Nazar, Hira; Imtiaz, Maria; Alsaedi, Ahmed

    2017-06-01

    The present analysis describes the magnetohydrodynamic (MHD) axisymmetric flow of a viscous fluid due to a rotating disk with variable thickness. An electrically conducting fluid fills the porous space. The first-order chemical reaction is considered. The equations of the present problem representing the flow of a fluid are reduced to nonlinear ordinary differential equations. Convergent series solutions are obtained. The impacts of the various involved dimensionless parameters on fluid flow, temperature, concentration, skin friction coefficient and Nusselt number are examined. The radial, tangential and axial components of velocity are affected in a similar manner on changing the thickness coefficient of the disk. Similar effects of the disk thickness coefficient are observed for both the temperature and concentration profiles.

  8. PREDICTING TWO-DIMENSIONAL STEADY-STATE SOIL FREEZING FRONTS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1986-01-01

    The complex variable boundary element method (CVBEM) is used instead of a real variable boundary element method due to the available modeling error evaluation techniques developed. The modeling accuracy is evaluated by the model-user in the determination of an approximative boundary upon which the CVBEM provides an exact solution. Although inhomogeneity (and anisotropy) can be included in the CVBEM model, the resulting fully populated matrix system quickly becomes large. Therefore in this paper, the domain is assumed homogeneous and isotropic except for differences in frozen and thawed conduction parameters on either side of the freezing front. The example problems presented were obtained by use of a popular 64K microcomputer (the current version of the program used in this study has the capacity to accommodate 30 nodal points).

  9. Automated trajectory planning for multiple-flyby interplanetary missions

    NASA Astrophysics Data System (ADS)

    Englander, Jacob

    Many space mission planning problems may be formulated as hybrid optimal control problems (HOCP), i.e. problems that include both real-valued variables and categorical variables. In interplanetary trajectory design problems the categorical variables will typically specify the sequence of planets at which to perform flybys, and the real-valued variables will represent the launch date, flight times between planets, magnitudes and directions of thrust, flyby altitudes, etc. The contribution of this work is a framework for the autonomous optimization of multiple-flyby interplanetary trajectories. The trajectory design problem is converted into a HOCP with two nested loops: an "outer-loop" that finds the sequence of flybys and an "inner-loop" that optimizes the trajectory for each candidate flyby sequence. The problem of choosing a sequence of flybys is posed as an integer programming problem and solved using a genetic algorithm (GA). This is an especially difficult problem to solve because GAs normally operate on a fixed-length set of decision variables. Since in interplanetary trajectory design the number of flyby maneuvers is not known a priori, it was necessary to devise a method of parameterizing the problem such that the GA can evolve a variable-length sequence of flybys. A novel "null gene" transcription was developed to meet this need. Then, for each candidate sequence of flybys, a trajectory must be found that visits each of the flyby targets and arrives at the final destination while optimizing some cost metric, such as minimizing Δv or maximizing the final mass of the spacecraft. Three different classes of trajectory are described in this work, each of which required a different physical model and optimization method. The choice of a trajectory model and optimization method is especially challenging because of the nature of the hybrid optimal control problem. Because the trajectory optimization problem is generated in real time by the outer-loop, the inner-loop optimization algorithm cannot require any a priori information and must always return a solution. In addition, the upper and lower bounds on each decision variable cannot be chosen a priori by the user because the user has no way to know what problem will be solved. Instead a method of choosing upper and lower bounds via a set of simple rules was developed and used for all three types of trajectory optimization problem. Many optimization algorithms were tested and discarded until suitable algorithms were found for each type of trajectory. The first class of trajectories uses chemical propulsion and may only apply a Δv at the periapse of each flyby. These Multiple Gravity Assist (MGA) trajectories are optimized using a cooperative algorithm of Differential Evolution (DE) and Particle Swarm Optimization (PSO). The second class of trajectories, known as Multiple Gravity Assist with one Deep Space Maneuver (MGA-DSM), also uses chemical propulsion but instead of maneuvering at the periapse of each flyby as in the MGA case a maneuver is applied at a free point along each planet-to-planet arc, i.e. there is one maneuver for each pair of flybys. MGA-DSM trajectories are parameterized by more variables than MGA trajectories, and so the cooperative algorithm of DE and PSO that was used to optimize MGA trajectories was found to be less effective when applied to MGA-DSM. Instead, either PSO or DE alone was found to be more effective. The third class of trajectories addressed in this work comprises those using continuous-thrust propulsion.
Continuous-thrust trajectory optimization problems are more challenging than impulsive-thrust problems because the control variables are a continuous time series rather than a small set of parameters and because the spacecraft does not follow a conic section trajectory, leading to a large number of nonlinear constraints that must be satisfied to ensure that the spacecraft obeys the equations of motion. Many models and optimization algorithms were applied, including direct transcription with nonlinear programming (DTNLP), the inverse-polynomial shape-based method, and feasible region analysis. However, the only physical model and optimization method that proved reliable enough were the Sims-Flanagan transcription coupled with a nonlinear programming solver and the monotonic basin hopping (MBH) global search heuristic. The methods developed here are demonstrated to optimize a set of example trajectories, including a recreation of the Cassini mission, a Galileo-like mission, and conceptual continuous-thrust missions to Jupiter, Mercury, and Uranus.
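
    The "null gene" transcription for evolving a variable-length flyby sequence with a fixed-length GA chromosome can be sketched as follows; the planet list, chromosome length and operators are invented for illustration, and the fitness evaluation (the inner-loop trajectory optimization) is omitted.

```python
import random

PLANETS = ["Venus", "Earth", "Mars", "Jupiter", "Saturn"]
NULL = "null"          # the "null gene": a flyby slot that is simply skipped
GENE_POOL = PLANETS + [NULL]
MAX_FLYBYS = 4         # fixed chromosome length seen by the GA

def random_chromosome():
    """Fixed-length chromosome; null genes make the decoded sequence variable-length."""
    return [random.choice(GENE_POOL) for _ in range(MAX_FLYBYS)]

def decode(chromosome):
    """Drop null genes to obtain the actual (variable-length) flyby sequence."""
    return [g for g in chromosome if g != NULL]

def crossover(a, b):
    """One-point crossover on the fixed-length representation."""
    cut = random.randint(1, MAX_FLYBYS - 1)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(chromosome, rate=0.2):
    return [random.choice(GENE_POOL) if random.random() < rate else g
            for g in chromosome]

random.seed(3)
parent1, parent2 = random_chromosome(), random_chromosome()
child, _ = crossover(parent1, parent2)
child = mutate(child)
print("chromosome:            ", child)
print("decoded flyby sequence:", decode(child))
```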

  10. Multi-hole pressure probes to wind tunnel experiments and air data systems

    NASA Astrophysics Data System (ADS)

    Shevchenko, A. M.; Shmakov, A. S.

    2017-10-01

    The problems of developing a multi-hole pressure system to measure flow angularity, Mach number and dynamic head for wind tunnel experiments or air data systems are discussed. A simple analytical model with separation of variables is derived for the multi-hole spherical pressure probe. The proposed model is uniform for small subsonic and supersonic speeds. An error analysis was performed. The error functions are obtained, allowing the influence of the Mach number, the pitch angle and the location of the pressure ports on the uncertainty of determining the flow parameters to be estimated.

  11. Modeling and design optimization of adhesion between surfaces at the microscale.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sylves, Kevin T.

    2008-08-01

    This research applies design optimization techniques to structures in adhesive contact where the dominant adhesive mechanism is the van der Waals force. Interface finite elements are developed for domains discretized by beam elements, quadrilateral elements or triangular shell elements. Example analysis problems comparing finite element results to analytical solutions are presented. These examples are then optimized, where the objective is matching a force-displacement relationship and the optimization variables are the interface element energy of adhesion or the width of beam elements in the structure. Several parameter studies are conducted and discussed.

  12. Group analysis for natural convection from a vertical plate

    NASA Astrophysics Data System (ADS)

    Rashed, A. S.; Kassem, M. M.

    2008-12-01

    The steady laminar natural convection of a fluid having a chemical reaction of order n past a semi-infinite vertical plate is considered. The solution of the problem by means of the one-parameter group method reduces the number of independent variables by one, leading to a system of nonlinear ordinary differential equations. Two different similarity transformations are found. In each case the set of differential equations is solved numerically using the Runge-Kutta and shooting methods. For each transformation different Schmidt numbers and chemical reaction orders are tested.

  13. Probability Distribution Estimated From the Minimum, Maximum, and Most Likely Values: Applied to Turbine Inlet Temperature Uncertainty

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    2004-01-01

    Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref.1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
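
    For comparison, the classical PERT-style assumption (mean = (min + 4*mode + max)/6) yields beta shape parameters directly from the three pieces of information; this is the textbook approach, not the in-house NASA Glenn method or the conventional range-fraction method described above, and the temperature values are purely illustrative.

```python
from scipy.stats import beta

def pert_beta_parameters(low, mode, high, lam=4.0):
    """Classical PERT-style estimate of beta shape parameters from the
    minimum, most likely and maximum values of a design parameter.

    Assumes mean = (low + lam*mode + high) / (lam + 2); with lam = 4 this is
    the familiar (a + 4m + b)/6 rule.
    """
    alpha = 1.0 + lam * (mode - low) / (high - low)
    beta_shape = 1.0 + lam * (high - mode) / (high - low)
    return alpha, beta_shape

# Turbine-inlet-temperature-style example (illustrative numbers, in K).
low, mode, high = 1400.0, 1500.0, 1650.0
a, b = pert_beta_parameters(low, mode, high)
dist = beta(a, b, loc=low, scale=high - low)
print(f"alpha = {a:.2f}, beta = {b:.2f}")
print(f"mean  = {dist.mean():.1f} K, std = {dist.std():.1f} K")
```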

  14. Psychosocial predictors of natural killer cell mobilization during marital conflict.

    PubMed

    Miller, G E; Dopp, J M; Myers, H F; Stevens, S Y; Fahey, J L

    1999-05-01

    This study examined how specific emotions relate to autonomic nervous and immune system parameters and whether cynical hostility moderates this relationship. Forty-one married couples participated in a 15-min discussion about a marital problem. Observers recorded spouses' emotional expressions during the discussion, and cardiovascular, neuroendocrine, and immunologic parameters were assessed throughout the laboratory session. Among men high in cynical hostility, anger displayed during the conflict was associated with greater elevations in systolic and diastolic blood pressure, cortisol, and increases in natural killer cell numbers and cytotoxicity. Among men low in cynical hostility, anger was associated with smaller increases in heart rate and natural killer cell cytotoxicity. These findings suggest that models describing the impact of stress on physiology should be refined to reflect the joint contribution of situational and dispositional variables.

  15. Homogenization limit for a multiband effective mass model in heterostructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morandi, O., E-mail: morandi@ipcms.unistra.fr

    We study the homogenization limit of a multiband model that describes the quantum mechanical motion of an electron in a quasi-periodic crystal. In this approach, the distance among the atoms that constitute the material (lattice parameter) is considered a small quantity. Our model includes the description of materials with variable chemical composition, intergrowth compounds, and heterostructures. We derive the effective multiband evolution system in the framework of the kp approach. We study the well posedness of the mathematical problem. We compare the effective mass model with the standard kp models for uniform and non-uniform crystals. We show that in the limit of vanishing lattice parameter, the particle density obtained by the effective mass model converges to the exact probability density of the particle.

  16. Dynamics of nonautonomous discrete rogue wave solutions for an Ablowitz-Musslimani equation with PT-symmetric potential.

    PubMed

    Yu, Fajun

    2017-02-01

    Starting from a discrete spectral problem, we derive a hierarchy of nonlinear discrete equations which include the Ablowitz-Ladik (AL) equation. We analytically study the discrete rogue-wave (DRW) solutions of the AL equation with three free parameters. The trajectories of peaks and depressions of the profiles for the first- and second-order DRWs are produced by means of analytical and numerical methods. In particular, we study the solutions with dispersion in a parity-time (PT) symmetric potential for the Ablowitz-Musslimani equation. We also consider the non-autonomous DRW solutions, their controlling parameters and their interactions with variable coefficients, and predict long-living rogue wave solutions. Our results might provide useful information for potential applications of synthetic PT symmetric systems in nonlinear optics and condensed matter physics.

  17. Cold denaturation as a tool to measure protein stability

    PubMed Central

    Sanfelice, Domenico; Temussi, Piero Andrea

    2016-01-01

    Protein stability is an important issue for the interpretation of a wide variety of biological problems but its assessment is at times difficult. The most common parameter employed to describe protein stability is the temperature of melting, at which the populations of folded and unfolded species are identical. This parameter may yield ambiguous results. It would always be preferable to measure the whole stability curve. The calculation of this curve is greatly facilitated whenever it is possible to observe cold denaturation. Using Yfh1, one of the few proteins whose cold denaturation occurs at neutral pH and low ionic strength, we could measure the variation of its full stability curve under several environmental conditions. Here we show the advantages of gauging stability as a function of external variables using stability curves. PMID:26026885
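
    A stability curve of the kind discussed above is usually described by the Gibbs-Helmholtz expression with a constant heat-capacity change; the short sketch below evaluates it and locates both the heat- and cold-denaturation temperatures for illustrative (not Yfh1-specific) parameter values.

```python
import numpy as np

def delta_g(T, Tm, dHm, dCp):
    """Gibbs-Helmholtz protein stability curve (kJ/mol).

    T, Tm in kelvin; dHm is the unfolding enthalpy at Tm; dCp is the heat
    capacity change of unfolding, both assumed temperature independent.
    """
    return dHm * (1.0 - T / Tm) + dCp * (T - Tm) - dCp * T * np.log(T / Tm)

# Illustrative parameters loosely typical of a small, marginally stable protein.
Tm, dHm, dCp = 308.0, 200.0, 8.0          # K, kJ/mol, kJ/(mol K)
T = np.linspace(260.0, 330.0, 2000)
dG = delta_g(T, Tm, dHm, dCp)
mask = T < Tm - 5.0                       # look for the low-temperature zero crossing
T_cold = T[mask][np.argmin(np.abs(dG[mask]))]
print(f"heat denaturation at {Tm:.1f} K, cold denaturation near {T_cold:.1f} K, "
      f"maximum stability {dG.max():.1f} kJ/mol at {T[np.argmax(dG)]:.1f} K")
```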

  18. The role of impulse parameters in force variability

    NASA Technical Reports Server (NTRS)

    Carlton, L. G.; Newell, K. M.

    1986-01-01

    One of the principal limitations of the human motor system is the ability to produce consistent motor responses. When asked to repeatedly make the same movement, performance outcomes are characterized by a considerable amount of variability. This occurs whether variability is expressed in terms of kinetics or kinematics. Variability in performance is of considerable importance because for tasks requiring accuracy it is a critical variable in determining the skill of the performer. What has long been sought is a description of the parameter or parameters that determine the degree of variability. Two general experimental protocols were used. One protocol is to use dynamic actions and record variability in kinematic parameters such as spatial or temporal error. A second strategy was to use isometric actions and record kinetic variables such as peak force produced. The force-related factors that might be important in affecting variability are examined, and an experimental approach to examine the influence of each of these variables is provided.

  19. Discrete-continuous variable structural synthesis using dual methods

    NASA Technical Reports Server (NTRS)

    Schmit, L. A.; Fleury, C.

    1980-01-01

    Approximation concepts and dual methods are extended to solve structural synthesis problems involving a mix of discrete and continuous sizing type of design variables. Pure discrete and pure continuous variable problems can be handled as special cases. The basic mathematical programming statement of the structural synthesis problem is converted into a sequence of explicit approximate primal problems of separable form. These problems are solved by constructing continuous explicit dual functions, which are maximized subject to simple nonnegativity constraints on the dual variables. A newly devised gradient projection type of algorithm called DUAL 1, which includes special features for handling dual function gradient discontinuities that arise from the discrete primal variables, is used to find the solution of each dual problem. Computational implementation is accomplished by incorporating the DUAL 1 algorithm into the ACCESS 3 program as a new optimizer option. The power of the method set forth is demonstrated by presenting numerical results for several example problems, including a pure discrete variable treatment of a metallic swept wing and a mixed discrete-continuous variable solution for a thin delta wing with fiber composite skins.

  20. Possibility-based robust design optimization for the structural-acoustic system with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2018-03-01

    The conventional engineering optimization problems considering uncertainties are based on the probabilistic model. However, the probabilistic model may be unavailable because of the lack of sufficient objective information to construct the precise probability distribution of uncertainties. This paper proposes a possibility-based robust design optimization (PBRDO) framework for the uncertain structural-acoustic system based on the fuzzy set model, which can be constructed by expert opinions. The objective of robust design is to optimize the expectation and variability of system performance with respect to uncertainties simultaneously. In the proposed PBRDO, the entropy of the fuzzy system response is used as the variability index; the weighted sum of the entropy and expectation of the fuzzy response is used as the objective function, and the constraints are established in the possibility context. The computations for the constraints and objective function of PBRDO are a triple-loop and a double-loop nested problem, respectively, whose computational costs are considerable. To improve the computational efficiency, the target performance approach is introduced to transform the calculation of the constraints into a double-loop nested problem. To further improve the computational efficiency, a Chebyshev fuzzy method (CFM) based on the Chebyshev polynomials is proposed to estimate the objective function, and the Chebyshev interval method (CIM) is introduced to estimate the constraints, thereby the optimization problem is transformed into a single-loop one. Numerical results on a shell structural-acoustic system verify the effectiveness and feasibility of the proposed methods.

  1. Analysis of heat transfer for unsteady MHD free convection flow of rotating Jeffrey nanofluid saturated in a porous medium

    NASA Astrophysics Data System (ADS)

    Mohd Zin, Nor Athirah; Khan, Ilyas; Shafie, Sharidan; Alshomrani, Ali Saleh

    In this article, the influence of thermal radiation on unsteady magnetohydrodynamic (MHD) free convection flow of rotating Jeffrey nanofluid passing through a porous medium is studied. The silver nanoparticles (AgNPs) are dispersed in Kerosene Oil (KO), which is chosen as the conventional base fluid. Appropriate dimensionless variables are used and the system of equations is transformed into dimensionless form. The resulting problem is solved using the Laplace transform technique. The impact of pertinent parameters, including volume fraction φ, material parameters of Jeffrey fluid λ1, λ, rotation parameter r, Hartmann number Ha, permeability parameter K, Grashof number Gr, Prandtl number Pr, radiation parameter Rd and dimensionless time t, on the velocity and temperature profiles is presented graphically with comprehensive discussions. It is observed that the rotation parameter, due to the Coriolis force, tends to decrease the primary velocity, but the reverse effect is observed in the secondary velocity. It is also observed that the Lorentz force retards the fluid flow for both primary and secondary velocities. The expressions for skin friction and Nusselt number are also evaluated for different values of emerging parameters. A comparative study with the existing published work is provided in order to verify the present results. An excellent agreement is found.

  2. Numerical optimization methods for controlled systems with parameters

    NASA Astrophysics Data System (ADS)

    Tyatyushkin, A. I.

    2017-10-01

    First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and parameters involved on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.

  3. Reinforcement learning state estimator.

    PubMed

    Morimoto, Jun; Doya, Kenji

    2007-03-01

    In this study, we propose a novel use of reinforcement learning for estimating hidden variables and parameters of nonlinear dynamical systems. A critical issue in hidden-state estimation is that we cannot directly observe estimation errors. However, by defining errors of observable variables as a delayed penalty, we can apply a reinforcement learning framework to state estimation problems. Specifically, we derive a method to construct a nonlinear state estimator by finding an appropriate feedback input gain using the policy gradient method. We tested the proposed method on single pendulum dynamics and show that the joint angle variable could be successfully estimated by observing only the angular velocity, and vice versa. In addition, we show that we could acquire a state estimator for the pendulum swing-up task in which a swing-up controller is also acquired by reinforcement learning simultaneously. Furthermore, we demonstrate that it is possible to estimate the dynamics of the pendulum itself while the hidden variables are estimated in the pendulum swing-up task. Application of the proposed method to a two-linked biped model is also presented.

  4. Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control

    NASA Astrophysics Data System (ADS)

    Kamyar, Reza

    In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting-up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to efficiently utilize hundreds and potentially thousands of processors, and analyze systems with 100+ dimensional state-space. Furthermore, we extend our algorithms to analyze robust stability over more complicated geometries such as hypercubes and arbitrary convex polytopes. Our algorithms can be readily extended to address a wide variety of problems in control such as Hinfinity synthesis for systems with parametric uncertainty and computing control Lyapunov functions.

  5. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  6. Mixed convection and heat generation/absorption aspects in MHD flow of tangent-hyperbolic nanoliquid with Newtonian heat/mass transfer

    NASA Astrophysics Data System (ADS)

    Qayyum, Sajid; Hayat, Tasawar; Shehzad, Sabir Ali; Alsaedi, Ahmed

    2018-03-01

    This article concentrates on the magnetohydrodynamic (MHD) stagnation point flow of a tangent hyperbolic nanofluid in the presence of buoyancy forces. The flow is caused by a stretching surface. Characteristics of heat transfer are examined under the influence of thermal radiation and heat generation/absorption. Newtonian conditions for heat and mass transfer are employed. The nanofluid model includes Brownian motion and thermophoresis. The governing nonlinear partial differential system of the problem is transformed into a system of nonlinear ordinary differential equations through appropriate variables. The impact of the embedded parameters on the velocity, temperature and nanoparticle concentration fields is presented graphically. Numerical computations are made to obtain the values of the skin friction coefficient and the local Nusselt and Sherwood numbers. It is concluded that the velocity field is enhanced by the mixed convection parameter, while the reverse behaviour is observed for the power-law index. The effects of the Brownian motion parameter on the temperature and on the heat transfer rate are opposite. Moreover, the impact of the solutal conjugate parameter on the concentration and on the local Sherwood number is similar.
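
    The shooting technique used for boundary-layer systems of this kind can be illustrated on the classical Blasius problem rather than the full tangent-hyperbolic nanofluid system (which is not reproduced here); the domain truncation and the bracket for the wall value are illustrative choices.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    # Blasius boundary layer: f''' + 0.5 * f * f'' = 0, f(0) = 0, f'(0) = 0, f'(inf) = 1.
    def rhs(eta, y):
        f, fp, fpp = y
        return [fp, fpp, -0.5 * f * fpp]

    eta_max = 10.0       # truncated "infinity"

    def residual(s):
        """Integrate with guessed wall value f''(0) = s and return f'(eta_max) - 1."""
        sol = solve_ivp(rhs, [0.0, eta_max], [0.0, 0.0, s], rtol=1e-8, atol=1e-10)
        return sol.y[1, -1] - 1.0

    s_star = brentq(residual, 0.1, 1.0)    # shoot on f''(0) until the far-field condition holds
    print("f''(0) =", round(s_star, 4))    # approximately 0.332 for this classical problem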

  7. Leader-follower formation control of underactuated surface vehicles based on sliding mode control and parameter estimation.

    PubMed

    Sun, Zhijian; Zhang, Guoqing; Lu, Yu; Zhang, Weidong

    2018-01-01

    This paper studies the leader-follower formation control of underactuated surface vehicles with model uncertainties and environmental disturbances. A sliding mode control scheme based on parameter estimation and upper-bound estimation is proposed to handle the unknown plant parameters and environmental disturbances. For each of these leader-follower formation systems, the dynamic equations of position and attitude are analyzed using coordinate transformation with the aid of the backstepping technique. All the variables are guaranteed to be uniformly ultimately bounded stable in the closed-loop system, which is proven by the distribution design Lyapunov function synthesis. The main advantages of this approach are that: first, parameter estimation based sliding mode control can enhance the robustness of the closed-loop system in the presence of model uncertainties and environmental disturbances; second, a continuous function is developed to replace the signum function in the design of the sliding mode scheme, which helps to reduce the chattering of the control system. Finally, numerical simulations are given to demonstrate the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
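
    The chattering-reduction idea (replacing the signum function with a continuous switching function) can be sketched on a toy first-order uncertain system; this is not the vehicle formation controller of the paper, and the gains, disturbance and boundary-layer width are illustrative assumptions.

    import numpy as np

    # Toy first-order uncertain system x' = a*x + u + d(t); sliding variable s = x.
    a_true, a_hat, K, dt, N = 1.0, 0.8, 3.0, 0.001, 5000

    def simulate(switch):
        x, u_hist = 1.0, []
        for i in range(N):
            d = 0.5 * np.sin(0.01 * i)            # bounded disturbance
            s = x
            u = -a_hat * x - K * switch(s)        # sliding mode control law
            u_hist.append(u)
            x += dt * (a_true * x + u + d)
        return x, np.std(np.diff(u_hist))         # final error and a control-chatter proxy

    for name, sw in [("signum", np.sign),
                     ("continuous (tanh)", lambda s: np.tanh(s / 0.05))]:
        xf, chatter = simulate(sw)
        print(f"{name}: |x_final| = {abs(xf):.4f}, control variation = {chatter:.4f}")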

  8. A Bayesian network for modelling blood glucose concentration and exercise in type 1 diabetes.

    PubMed

    Ewings, Sean M; Sahu, Sujit K; Valletta, John J; Byrne, Christopher D; Chipperfield, Andrew J

    2015-06-01

    This article presents a new statistical approach to analysing the effects of everyday physical activity on blood glucose concentration in people with type 1 diabetes. A physiologically based model of blood glucose dynamics is developed to cope with frequently sampled data on food, insulin and habitual physical activity; the model is then converted to a Bayesian network to account for measurement error and variability in the physiological processes. A simulation study is conducted to determine the feasibility of using Markov chain Monte Carlo methods for simultaneous estimation of all model parameters and prediction of blood glucose concentration. Although there are problems with parameter identification in a minority of cases, most parameters can be estimated without bias. Predictive performance is unaffected by parameter misspecification and is insensitive to misleading prior distributions. This article highlights important practical and theoretical issues not previously addressed in the quest for an artificial pancreas as treatment for type 1 diabetes. The proposed methods represent a new paradigm for analysis of deterministic mathematical models of blood glucose concentration. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  9. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry

    2017-07-01

    Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies throughout spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller-Miller scaling in the relaxed form by Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information of sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach, we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF of Schaap et al. (2001) applied to the SoilGrids1km data set of Hengl et al. (2014). The example data set is provided at a global resolution of 0.25° at https://doi.org/10.1594/PANGAEA.870605.
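
    For readers unfamiliar with the parameterization, the Mualem-van Genuchten water retention and hydraulic conductivity functions referred to above can be written down directly; the sketch below uses illustrative loam-like parameter values, not values from the data set.

    import numpy as np

    def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
        """Water retention curve theta(h) for suction head h (> 0), with m = 1 - 1/n."""
        m = 1.0 - 1.0 / n
        Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)      # effective saturation
        return theta_r + (theta_s - theta_r) * Se

    def mualem_K(h, Ks, alpha, n, l=0.5):
        """Unsaturated hydraulic conductivity from the same parameter set (Mualem model)."""
        m = 1.0 - 1.0 / n
        Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)
        return Ks * Se ** l * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

    h = np.logspace(-1, 4, 6)                               # suction head [cm]
    print(van_genuchten_theta(h, theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56))
    print(mualem_K(h, Ks=25.0, alpha=0.036, n=1.56))        # Ks in cm/day (illustrative)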

  10. Comparative analysis of semantic localization accuracies between adult and pediatric DICOM CT images

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan; Pathak, Sayan D.; Criminisi, Antonio; White, Steve; Haynor, David; Chen, Oliver; Siddiqui, Khan

    2012-02-01

    Existing literature describes a variety of techniques for semantic annotation of DICOM CT images, i.e. the automatic detection and localization of anatomical structures. Semantic annotation facilitates enhanced image navigation, linkage of DICOM image content and non-image clinical data, content-based image retrieval, and image registration. A key challenge for semantic annotation algorithms is inter-patient variability. However, while the algorithms described in published literature have been shown to cope adequately with the variability in test sets comprising adult CT scans, the problem presented by the even greater variability in pediatric anatomy has received very little attention. Most existing semantic annotation algorithms can only be extended to work on scans of both adult and pediatric patients by adapting parameters heuristically in light of patient size. In contrast, our approach, which uses random regression forests ('RRF'), learns an implicit model of scale variation automatically using training data. In consequence, anatomical structures can be localized accurately in both adult and pediatric CT studies without the need for parameter adaptation or additional information about patient scale. We show how the RRF algorithm is able to learn scale invariance from a combined training set containing a mixture of pediatric and adult scans. Resulting localization accuracy for both adult and pediatric data remains comparable with that obtained using RRFs trained and tested using only adult data.

  11. Are Middle School Mathematics Teachers Able to Solve Word Problems without Using Variable?

    ERIC Educational Resources Information Center

    Gökkurt Özdemir, Burçin; Erdem, Emrullah; Örnek, Tugba; Soylu, Yasin

    2018-01-01

    Many people consider problem solving to be a complex process in which variables such as "x" and "y" are used. However, problems do not have to be solved only by using variables; problem solving can be rationalized and made easier using practical strategies. Especially when the development of children at younger ages is considered, it is…

  12. Heat transfer analysis on peristaltically induced motion of particle-fluid suspension with variable viscosity: Clot blood model.

    PubMed

    Bhatti, M M; Zeeshan, A; Ellahi, R

    2016-12-01

    In this article, heat transfer analysis on a clot blood model of particle-fluid suspension through a non-uniform annulus has been investigated. Blood propagation along the whole length of the annulus is induced by peristaltic motion. The effects of variable viscosity and slip condition are also taken into account. The governing flow problem is modeled using the lubrication approach under the assumptions of long wavelength and a creeping flow regime. The resulting equations for the fluid and particle phases are solved analytically, and closed-form solutions are obtained. The physical impact of all the emerging parameters is discussed mathematically and graphically. Particularly, we considered the effects of particle volume fraction, slip parameter, the maximum height of clot, viscosity parameter, average volume flow rate, Prandtl number, Eckert number and fluid parameter on temperature profile, pressure rise and friction forces for the outer and inner tubes. Numerical computations have been used to determine the behavior of pressure rise and friction along the whole length of the annulus. Results are also presented for an endoscope as a special case of our study. It is observed that a greater influence of the clot tends to raise the pressure rise significantly. It is also found that the temperature profile increases due to the enhancement in Prandtl number, Eckert number, and fluid parameter. The present study reveals that friction forces for the outer tube have a higher magnitude than those for the inner tube. In fact, the results of the present study can also be reduced to the Newtonian fluid case by taking ζ → ∞. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. The addition of entropy-based regularity parameters improves sleep stage classification based on heart rate variability.

    PubMed

    Aktaruzzaman, M; Migliorini, M; Tenhunen, M; Himanen, S L; Bianchi, A M; Sassi, R

    2015-05-01

    The work considers automatic sleep stage classification, based on heart rate variability (HRV) analysis, with a focus on the distinction of wakefulness (WAKE) from sleep and rapid eye movement (REM) from non-REM (NREM) sleep. A set of 20 automatically annotated one-night polysomnographic recordings was considered, and artificial neural networks were selected for classification. For each inter-heartbeat (RR) series, besides features previously presented in the literature, we introduced a set of four parameters related to signal regularity. RR series of three different lengths were considered (corresponding to 2, 6, and 10 successive epochs, 30 s each, in the same sleep stage). Two sets of only four features captured 99 % of the data variance in each classification problem, and both of them contained one of the new regularity features proposed. The accuracy of classification for REM versus NREM (68.4 %, 2 epochs; 83.8 %, 10 epochs) was higher than when distinguishing WAKE versus SLEEP (67.6 %, 2 epochs; 71.3 %, 10 epochs). Also, the reliability parameter (Cohen's Kappa) was higher (0.68 and 0.45, respectively). Sleep stage classification based on HRV was still less precise than other staging methods, which employ a larger variety of signals collected during polysomnographic studies. However, cheap and unobtrusive HRV-only sleep classification proved sufficiently precise for a wide range of applications.
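
    One widely used entropy-based regularity measure for RR series is sample entropy; the sketch below is a generic implementation of that index (the paper's exact feature set and parameter choices may differ), applied to a synthetic RR-like series.

    import numpy as np

    def sample_entropy(x, m=2, r_frac=0.2):
        """SampEn(m, r) of a 1-D series with tolerance r = r_frac * std(x)."""
        x = np.asarray(x, dtype=float)
        r = r_frac * np.std(x)

        def count_matches(mm):
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            count = 0
            for i in range(len(templates) - 1):
                d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(d <= r)
            return count

        B, A = count_matches(m), count_matches(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    rng = np.random.default_rng(1)
    rr = 0.8 + 0.05 * np.sin(np.linspace(0, 20, 300)) + 0.02 * rng.standard_normal(300)
    print("SampEn(2, 0.2*SD) =", round(sample_entropy(rr), 3))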

  14. Effects of rainfalls variability and physical-chemical parameters on enteroviruses in sewage and lagoon in Yopougon, Côte d'Ivoire

    NASA Astrophysics Data System (ADS)

    Momou, Kouassi Julien; Akoua-Koffi, Chantal; Traoré, Karim Sory; Akré, Djako Sosthène; Dosso, Mireille

    2017-07-01

    The aim of this study was to assess the effect of rainfall on the variability of the content of nutrients, oxidizable organic matter and particulate matter in raw sewage and the lagoon, and then to evaluate the impact of these changes on the concentration of enteroviruses (EVs) in the waters. Sewage samples were collected at nine sampling points along the channel, which flows into a tropical lagoon in Yopougon. Physical-chemical parameters (5-day Biochemical Oxygen Demand, Chemical Oxygen Demand, Suspended Particulate Matter, Total Phosphorus, Orthophosphate, Total Kjeldahl Nitrogen and Nitrate) as well as the concentration of EV in these waters were determined. The average number of EV isolated from the outlet of the channel was 9.06 × 10⁴ PFU 100 ml⁻¹. EV was present in 55.55 % and 33.33 % of the samples at the two brackish lagoon collection sites. The effect of rainfall on viral load in both the sewage and brackish lagoon environments was significantly correlated (two-way ANOVA, P < 0.05). Furthermore, in the lagoon environment, nutrients (Orthophosphate, Total Phosphorus), 5-day Biochemical Oxygen Demand, Chemical Oxygen Demand and Suspended Particulate Matter were significantly correlated with EV loads (P < 0.05 by Pearson test). The overall results highlight the problem of sewage discharge into the lagoon and the correlation between viral loads and water quality parameters in the sewage and lagoon.

  15. A stochastic atmospheric model for remote sensing applications

    NASA Technical Reports Server (NTRS)

    Turner, R. E.

    1983-01-01

    There are many factors which reduce the accuracy of classification of objects in the satellite remote sensing of Earth's surface. One important factor is the variability in the scattering and absorptive properties of the atmospheric components such as particulates and the variable gases. For multispectral remote sensing of the Earth's surface in the visible and infrared parts of the spectrum, the atmospheric particulates are a major source of variability in the received signal. It is difficult to design a sensor which will determine the unknown atmospheric components by remote sensing methods, at least to the accuracy needed for multispectral classification. The problem of spatial and temporal variations in the atmospheric quantities which can affect the measured radiances is examined. A method based upon the stochastic nature of the atmospheric components was developed, and, using actual data, the statistical parameters needed for inclusion in a radiometric model were generated. Methods are then described for an improved correction of radiances. These algorithms will then result in a more accurate and consistent classification procedure.

  16. Nonlinear Waves In A Stenosed Elastic Tube Filled With Viscous Fluid: Forced Perturbed Korteweg-De Vries Equation

    NASA Astrophysics Data System (ADS)

    Gaik*, Tay Kim; Demiray, Hilmi; Tiong, Ong Chee

    In the present work, treating the artery as a prestressed thin-walled and long circularly cylindrical elastic tube with a mild symmetrical stenosis and the blood as an incompressible Newtonian fluid, we have studied the propagation of weakly nonlinear waves in such a composite medium, in the long wave approximation, by use of the reductive perturbation method. By introducing a set of stretched coordinates suitable for the boundary value type of problems and expanding the field variables into asymptotic series of the smallness parameter of nonlinearity and dispersion, we obtained a set of nonlinear differential equations governing the terms at various orders. By solving these nonlinear differential equations, we obtained the forced perturbed Korteweg-de Vries equation with a variable coefficient as the nonlinear evolution equation. By use of a coordinate transformation, it is shown that this type of nonlinear evolution equation admits a progressive wave solution with variable wave speed.

  17. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which provides convenience for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.

  18. Parameter estimation procedure for complex non-linear systems: calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch.

    PubMed

    Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K

    2001-01-01

    When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and often hampered by local minima problems. In this paper a new straightforward and automatic procedure, which is based on the response surface method (RSM) for selecting the best identifiable parameters, is proposed. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameters sensitivity in a predefined region. Good results obtained in calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.
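
    A stripped-down illustration of the underlying idea (fitting a response surface over a predefined parameter region and ranking parameters by their fitted effects) is sketched below for a toy stand-in model; the parameter names and the model are hypothetical, and the actual ASM No. 1 calibration is far more involved.

    import numpy as np

    rng = np.random.default_rng(4)

    def toy_model(theta):
        """Stand-in process model: only some parameters strongly affect the output."""
        k1, k2, k3, k4 = theta
        return 5.0 * k1 + 0.2 * k2 + 3.0 * k1 * k3 + 0.05 * k4

    names = ["mu_max", "K_S", "b_H", "k_a"]            # hypothetical parameter names
    n_samples = 200
    X = rng.uniform(-1.0, 1.0, size=(n_samples, 4))    # coded levels in the predefined region
    y = np.apply_along_axis(toy_model, 1, X)

    # First-order response surface y ~ b0 + sum_i b_i * x_i, fitted by least squares.
    A = np.column_stack([np.ones(n_samples), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Rank parameters by the magnitude of their first-order effects.
    for i in np.argsort(-np.abs(coef[1:])):
        print(f"{names[i]:8s} effect = {coef[1 + i]: .3f}")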

  19. Evaluation of standardized and applied variables in predicting treatment outcomes of polytrauma patients.

    PubMed

    Aksamija, Goran; Mulabdic, Adi; Rasic, Ismar; Muhovic, Samir; Gavric, Igor

    2011-01-01

    Polytrauma is defined as an injury in which at least two different organ systems or body regions are affected, with at least one life-threatening injury. Given the multilevel model of care for polytrauma patients within KCUS, weaknesses in the management of this category of patients are inevitable. The aims were to determine the dynamics of the existing procedures in the treatment of polytrauma patients on admission to KCUS and, based on statistical analysis of the applied variables, to determine and define the factors that influence the final outcome of treatment and their mutual relationships, which may help in eliminating the flaws in the approach to the problem. The study was based on 263 polytrauma patients. Parametric and non-parametric statistical methods were used. Basic statistics were calculated and, based on the calculated parameters, multicorrelation analysis, image analysis, discriminant analysis and multifactorial analysis were used to achieve the research objectives. From the universe of variables for this study we selected a sample of n = 25 variables, of which the first two are modular, while the others belong to the common measurement space (n = 23) and are defined in this paper as the system of variables describing methods, procedures and assessments of polytrauma patients. After the multicorrelation analysis, since the image analysis gave reliable measurement results, we proceeded to the analysis of eigenvalues, that is, to defining the factors that provide information on how the existing model addresses the problem and on its correlation with treatment outcome. The study singled out the essential factors that determine the current organizational model of care, which may affect the treatment and contribute to a better outcome of polytrauma patients. This analysis revealed the strongest correlative relationships between these practices and contributed to the development of guidelines defined by the isolated factors.

  20. A constraint-based evolutionary learning approach to the expectation maximization for optimal estimation of the hidden Markov model for speech signal modeling.

    PubMed

    Huda, Shamsul; Yearwood, John; Togneri, Roberto

    2009-02-01

    This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of our hybrid algorithm (CEL-EM) is that it is applicable to the estimation of constraint-based models, such as the HMM, that have many constraints and large numbers of parameters and that use EM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed using a constraint-based EA and the EM for better estimation of the HMM in ASR. The first one uses a traditional constraint-handling mechanism of EA. The other version transforms a constrained optimization problem into an unconstrained problem using Lagrange multipliers. Fusion strategies for the CEL-EM use a staged-fusion approach where EM is coupled with the EA periodically, after the EA has executed for a specific period of time, to maintain the global sampling capabilities of the EA in the hybrid algorithm. A variable initialization approach (VIA) has been proposed using a variable segmentation to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM).

  1. Application of the gravity search algorithm to multi-reservoir operation optimization

    NASA Astrophysics Data System (ADS)

    Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.

    2016-12-01

    Complexities in river discharge, variable rainfall regime, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy for solving benchmark functions, single-reservoir, and four-reservoir operation optimization problems. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) in three optimization problems. The results show that the GSA's results are closer to the optimal solutions than the GA's results in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem. The global solution equals 1.213 for this same problem. The GSA converged to 99.97% of the global solution in its average-performing history, while the GA converged to 97% of the global solution of the four-reservoir problem. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in a smaller number of function evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate a superior performance of the GSA for optimizing general mathematical problems and the operation of reservoir systems.
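
    For reference, the basic GSA update rules (fitness-derived masses, gravitational acceleration, and stochastic velocity and position updates) can be written compactly as below; this is a generic sketch on a benchmark sphere function with simplified details (for example, no Kbest schedule), not the reservoir-operation formulation of the paper.

    import numpy as np

    def gsa(fobj, lo, hi, n_agents=30, iters=200, G0=100.0, alpha=20.0, seed=0):
        """Minimal gravitational search algorithm for minimization (simplified)."""
        rng = np.random.default_rng(seed)
        dim = lo.size
        X = rng.uniform(lo, hi, size=(n_agents, dim))
        V = np.zeros_like(X)
        best_x, best_f = None, np.inf
        for t in range(iters):
            fit = np.apply_along_axis(fobj, 1, X)
            if fit.min() < best_f:
                best_f, best_x = fit.min(), X[fit.argmin()].copy()
            G = G0 * np.exp(-alpha * t / iters)              # decaying gravitational constant
            m = (fit - fit.max()) / (fit.min() - fit.max() + 1e-12)
            M = m / (m.sum() + 1e-12)                        # normalized masses
            acc = np.zeros_like(X)
            for i in range(n_agents):
                diff = X - X[i]
                dist = np.linalg.norm(diff, axis=1) + 1e-12
                acc[i] = np.sum(rng.random((n_agents, 1)) * G *
                                M[:, None] * diff / dist[:, None], axis=0)
            V = rng.random(X.shape) * V + acc                # stochastic velocity update
            X = np.clip(X + V, lo, hi)
        return best_x, best_f

    sphere = lambda x: np.sum(x ** 2)
    x_best, f_best = gsa(sphere, np.full(5, -10.0), np.full(5, 10.0))
    print("best objective value:", f_best)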

  2. Training and overtraining markers in selected sport events.

    PubMed

    Hartmann, U; Mester, J

    2000-01-01

    A variety of symptoms is supposed to indicate overtraining (OT). Besides the problems of diagnosis and analysis in elite athletes, a daily monitoring of training status takes place with measurement of the parameters serum urea (SU) and serum creatine kinase (CK); therefore, their meaningfulness is examined, with particular attention to inter- and intra-individual variation. Data were obtained from determinations made during training from rowing athletes and athletes of international level. For 6981 SU determinations (male, N = 717; female, N = 285), a slightly asymmetric normal distribution was found (male, 80%, 5-7 mmol x L(-1); female, 75%, 4-6 mmol x L(-1)). Values for women were approximately 1.5 mmol x L(-1) lower. Individual variability was enormous; there seems little point in setting a fixed value such as 8.3 mmol x L(-1) for men and 7.0 mmol x L(-1) for women as a critical limit for OT. CK has also been measured and evaluated in sports as an essential parameter for determination of muscular stress. Frequency distributions of CK in 2790 samples (male, N = 497; female, N = 350) presented an asymmetric normal distribution, with a distinct trend toward higher values being evident for the range between 100 and 250 U x L(-1). Conspicuously elevated values occurred in the ranges 250-350 U x L(-1) and 1000-2000 U x L(-1). Maximal values were 3000 U x L(-1) for men and 1150 U x L(-1) for women. Individual variability was enormous. Athletes with chronically low CK exhibited mainly low variability; those with chronically higher values exhibited considerable variability. For both parameters, it should be useful to establish individual baselines from a large number of samples. Determinations should be made at least every 3 d in standardized conditions. If a large increase is observed in combination with reduced exercise tolerance after a phase of exertion (2-4 d), then the possibility of a catabolic/metabolic activity or insufficient exercise tolerance becomes much more likely.

  3. Iontophoretic delivery of lisinopril: Optimization of process variables by Box-Behnken statistical design.

    PubMed

    Gannu, Ramesh; Yamsani, Vamshi Vishnu; Palem, Chinna Reddy; Yamsani, Shravan Kumar; Yamsani, Madhusudan Rao

    2010-01-01

    The objective of the investigation was to optimize the iontophoresis process parameters of lisinopril (LSP) by a three-factor, three-level Box-Behnken statistical design. LSP is an ideal candidate for iontophoretic delivery to avoid the incomplete absorption problem associated with its oral administration. Independent variables selected were current (X(1)), salt (sodium chloride) concentration (X(2)) and medium/pH (X(3)). The dependent variables studied were amount of LSP permeated in 4 h (Y(1): Q(4)), 24 h (Y(2): Q(24)) and lag time (Y(3)). Mathematical equations and response surface plots were used to relate the dependent and independent variables. The regression equations generated for the iontophoretic permeation were Y(1) = 1.98 + 1.23X(1) - 0.49X(2) + 0.025X(3) - 0.49X(1)X(2) + 0.040X(1)X(3) - 0.010X(2)X(3) + 0.58X(1)(2) - 0.17X(2)(2) - 0.18X(3)(2); Y(2) = 7.28 + 3.32X(1) - 1.52X(2) + 0.22X(3) - 1.30X(1)X(2) + 0.49X(1)X(3) - 0.090X(2)X(3) + 0.79X(1)(2) - 0.62X(2)(2) - 0.33X(3)(2) and Y(3) = 0.60 + 0.0038X(1) + 0.12X(2) - 0.011X(3) + 0.005X(1)X(2) - 0.018X(1)X(3) - 0.015X(2)X(3) - 0.00075X(1)(2) + 0.017X(2)(2) - 0.11X(3)(2). The statistical validity of the polynomials was established and optimized process parameters were selected by feasibility and grid search. Validation of the optimization study with 8 confirmatory runs indicated a high degree of prognostic ability of response surface methodology. The use of the Box-Behnken design approach helped in identifying the critical process parameters in the iontophoretic delivery of lisinopril.
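
    Since the fitted second-order polynomials are quoted in full above, they can be evaluated directly for any combination of coded factor levels; the sketch below assumes the usual coded units of a Box-Behnken design (factor levels in [-1, 1]).

    def lsp_responses(x1, x2, x3):
        """Evaluate the reported response surfaces: Q4 (Y1), Q24 (Y2) and lag time (Y3)."""
        y1 = (1.98 + 1.23 * x1 - 0.49 * x2 + 0.025 * x3 - 0.49 * x1 * x2 + 0.040 * x1 * x3
              - 0.010 * x2 * x3 + 0.58 * x1 ** 2 - 0.17 * x2 ** 2 - 0.18 * x3 ** 2)
        y2 = (7.28 + 3.32 * x1 - 1.52 * x2 + 0.22 * x3 - 1.30 * x1 * x2 + 0.49 * x1 * x3
              - 0.090 * x2 * x3 + 0.79 * x1 ** 2 - 0.62 * x2 ** 2 - 0.33 * x3 ** 2)
        y3 = (0.60 + 0.0038 * x1 + 0.12 * x2 - 0.011 * x3 + 0.005 * x1 * x2 - 0.018 * x1 * x3
              - 0.015 * x2 * x3 - 0.00075 * x1 ** 2 + 0.017 * x2 ** 2 - 0.11 * x3 ** 2)
        return y1, y2, y3

    # Centre point of the design and a high-current, low-salt corner of the coded region.
    print(lsp_responses(0, 0, 0))
    print(lsp_responses(1, -1, 0))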

  4. Genetic algorithm parameters tuning for resource-constrained project scheduling problem

    NASA Astrophysics Data System (ADS)

    Tian, Xingke; Yuan, Shengrui

    2018-04-01

    The Resource-Constrained Project Scheduling Problem (RCPSP) is an important kind of scheduling problem. To achieve a certain optimal goal such as the shortest duration, the smallest cost, the resource balance and so on, it is required to arrange the start and finish of all tasks under the condition of satisfying project timing constraints and resource constraints. In theory, the problem is NP-hard, and its model variants are abundant. Many combinatorial optimization problems are special cases of RCPSP, such as job shop scheduling, flow shop scheduling and so on. At present, the genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results. Many scholars have also studied improved genetic algorithms for the RCPSP, which solve the problem more efficiently and accurately. However, these studies do not optimize the selection of the main parameters of the genetic algorithm; generally, an empirical method is used, but it cannot ensure that the optimal parameters are found. In this paper, we address the problem of blind parameter selection in the process of solving the RCPSP: we performed a sampling analysis, established a proxy model, and ultimately solved for the optimal parameters.

  5. [Survival strategy of photosynthetic organisms. 1. Variability of the extent of light-harvesting pigment aggregation as a structural factor optimizing the function of oligomeric photosynthetic antenna. Model calculations].

    PubMed

    Fetisova, Z G

    2004-01-01

    In accordance with our concept of rigorous optimization of photosynthetic machinery by a functional criterion, this series of papers continues the purposeful search in natural photosynthetic units (PSUs) for the basic principles of their organization that we predicted theoretically for optimal model light-harvesting systems. This approach allowed us to determine the basic principles for the organization of a PSU of any fixed size. This series of papers deals with the problem of structural optimization of a light-harvesting antenna of variable size controlled in vivo by the light intensity during the growth of organisms, which accentuates the problem of antenna structure optimization because optimization requirements become more stringent as the PSU increases in size. In this work, using mathematical modeling of the functioning of natural PSUs, we have shown that the aggregation of pigments in a model light-harvesting antenna, being one of the universal optimizing factors, additionally allows the antenna efficiency to be controlled if the extent of pigment aggregation is a variable parameter. In this case, the efficiency of the antenna increases with the size of the elementary antenna aggregate, thus ensuring the high efficiency of the PSU irrespective of its size; i.e., variation in the extent of pigment aggregation controlled by the size of the light-harvesting antenna is biologically expedient.

  6. Coping with Uncertainty: Woodpecker Finches (Cactospiza pallida) from an Unpredictable Habitat Are More Flexible than Birds from a Stable Habitat

    PubMed Central

    Tebbich, Sabine; Teschke, Irmgard

    2014-01-01

    Behavioural flexibility is thought to be a major factor in evolution. It may facilitate the discovery and exploitation of new resources, which in turn may expose populations to novel selective forces and facilitate adaptive radiation. Darwin's finches are a textbook example of adaptive radiation. They are fast learners and show a range of unusual foraging techniques, probably as a result of their flexibility. In this study we aimed to test whether variability of the environment is correlated with flexibility. We compared woodpecker finches from a dry area (hereafter, Arid Zone), where food availability is variable, with individuals from a cloud forest (hereafter, Scalesia zone) where food abundance is stable. As parameters for flexibility, we measured neophilia and neophobia, which are two aspects of reaction to novelty, reversal learning and problem-solving. We found no differences in performance on a problem-solving task but, in line with our prediction, individuals from the Arid Zone were significantly faster reversal learners and more neophilic than their conspecifics from the Scalesia zone. The latter result supports the notion that environmental variability drives flexibility. In contrast to our prediction, Arid Zone birds were even more neophobic than birds from the Scalesia Zone. The latter result could be the consequence of differences in predation pressure between the two vegetation zones. PMID:24638107

  7. Model-Driven Approach for Body Area Network Application Development.

    PubMed

    Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata

    2016-05-12

    This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This makes it possible to obtain an adequate measure of QoS efficiently through interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application.

  8. Model-Driven Approach for Body Area Network Application Development

    PubMed Central

    Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata

    2016-01-01

    This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This makes it possible to obtain an adequate measure of QoS efficiently through interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application. PMID:27187394

  9. Assessing geotechnical centrifuge modelling in addressing variably saturated flow in soil and fractured rock.

    PubMed

    Jones, Brendon R; Brouwers, Luke B; Van Tonder, Warren D; Dippenaar, Matthys A

    2017-05-01

    The vadose zone typically comprises soil underlain by fractured rock. Often, surface water and groundwater parameters are readily available, but variably saturated flow through soil and rock is often oversimplified or estimated as input for hydrological models. In this paper, a series of geotechnical centrifuge experiments is conducted to address the knowledge gaps in: (i) variably saturated flow and dispersion in soil and (ii) variably saturated flow in discrete vertical and horizontal fractures. Findings from the research show that the hydraulic gradient, and not the hydraulic conductivity, is scaled for seepage flow in the geotechnical centrifuge. Furthermore, geotechnical centrifuge modelling has been shown to be a viable experimental tool for the modelling of hydrodynamic dispersion as well as the replication of similar flow mechanisms for unsaturated fracture flow, as previously observed in the literature. Despite the inherent challenges of modelling variable saturation in the vadose zone, the geotechnical centrifuge offers a powerful experimental tool to physically model and observe variably saturated flow. This can be used to give valuable insight into mechanisms associated with solid-fluid interaction problems under these conditions. Findings from future research can be used to validate current numerical modelling techniques and address the subsequent influence on aquifer recharge and vulnerability, contaminant transport, waste disposal, dam construction, slope stability and seepage into subsurface excavations.

  10. Parameter optimization of the QUAL2K model for a multiple-reach river using an influence coefficient algorithm.

    PubMed

    Cho, Jae Heon; Ha, Sung Ryong

    2010-03-15

    An influence coefficient algorithm and a genetic algorithm (GA) were introduced to develop an automatic calibration model for QUAL2K, the latest version of the QUAL2E river and stream water-quality model. The influence coefficient algorithm was used for the parameter optimization in unsteady state, open channel flow. The GA, used in solving the optimization problem, is very simple and comprehensible yet still applicable to any complicated mathematical problem, where it can find the global-optimum solution quickly and effectively. The previously established model QUAL2Kw was used for the automatic calibration of the QUAL2K. The parameter-optimization method using the influence coefficient and genetic algorithm (POMIG) developed in this study and QUAL2Kw were each applied to the Gangneung Namdaecheon River, which has multiple reaches, and the results of the two models were compared. In the modeling, the river reach was divided into two parts based on considerations of the water quality and hydraulic characteristics. The calibration results by POMIG showed a good correspondence between the calculated and observed values for most of water-quality variables. In the application of POMIG and QUAL2Kw, relatively large errors were generated between the observed and predicted values in the case of the dissolved oxygen (DO) and chlorophyll-a (Chl-a) in the lowest part of the river; therefore, two weighting factors (1 and 5) were applied for DO and Chl-a in the lower river. The sums of the errors for DO and Chl-a with a weighting factor of 5 were slightly lower compared with the application of a factor of 1. However, with a weighting factor of 5 the sums of errors for other water-quality variables were slightly increased in comparison to the case with a factor of 1. Generally, the results of the POMIG were slightly better than those of the QUAL2Kw.

  11. A statistical-based approach for acoustic tomography of the atmosphere.

    PubMed

    Kolouri, Soheil; Azimi-Sadjadi, Mahmood R; Ziemann, Astrid

    2014-01-01

    Acoustic travel-time tomography of the atmosphere is a nonlinear inverse problem which attempts to reconstruct temperature and wind velocity fields in the atmospheric surface layer using the dependence of sound speed on temperature and wind velocity fields along the propagation path. This paper presents a statistical-based acoustic travel-time tomography algorithm based on dual state-parameter unscented Kalman filter (UKF) which is capable of reconstructing and tracking, in time, temperature, and wind velocity fields (state variables) as well as the dynamic model parameters within a specified investigation area. An adaptive 3-D spatial-temporal autoregressive model is used to capture the state evolution in the UKF. The observations used in the dual state-parameter UKF process consist of the acoustic time of arrivals measured for every pair of transmitter/receiver nodes deployed in the investigation area. The proposed method is then applied to the data set collected at the Meteorological Observatory Lindenberg, Germany, as part of the STINHO experiment, and the reconstruction results are presented.

  12. Uncertainties in extracted parameters of a Gaussian emission line profile with continuum background.

    PubMed

    Minin, Serge; Kamalabadi, Farzad

    2009-12-20

    We derive analytical equations for uncertainties in parameters extracted by nonlinear least-squares fitting of a Gaussian emission function with an unknown continuum background component in the presence of additive white Gaussian noise. The derivation is based on the inversion of the full curvature matrix (equivalent to Fisher information matrix) of the least-squares error, chi(2), in a four-variable fitting parameter space. The derived uncertainty formulas (equivalent to Cramer-Rao error bounds) are found to be in good agreement with the numerically computed uncertainties from a large ensemble of simulated measurements. The derived formulas can be used for estimating minimum achievable errors for a given signal-to-noise ratio and for investigating some aspects of measurement setup trade-offs and optimization. While the intended application is Fabry-Perot spectroscopy for wind and temperature measurements in the upper atmosphere, the derivation is generic and applicable to other spectroscopy problems with a Gaussian line shape.
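
    The idea of obtaining error bounds from the curvature (Fisher information) matrix can be checked numerically; the sketch below builds the four-parameter Fisher matrix for a Gaussian line with a constant background under additive white Gaussian noise and inverts it for Cramer-Rao style bounds, using numerical derivatives rather than the closed-form expressions derived in the paper.

    import numpy as np

    def model(x, A, x0, sigma, b):
        return A * np.exp(-(x - x0) ** 2 / (2.0 * sigma ** 2)) + b

    def crlb(x, theta, noise_sigma, eps=1e-6):
        """1-sigma lower bounds for (A, x0, sigma, b) via the Fisher information matrix."""
        J = np.zeros((x.size, 4))                     # Jacobian d(model)/d(theta)
        for k in range(4):
            tp, tm = list(theta), list(theta)
            tp[k] += eps
            tm[k] -= eps
            J[:, k] = (model(x, *tp) - model(x, *tm)) / (2.0 * eps)
        fisher = J.T @ J / noise_sigma ** 2           # white Gaussian noise assumption
        return np.sqrt(np.diag(np.linalg.inv(fisher)))

    x = np.linspace(-5.0, 5.0, 200)
    theta = (10.0, 0.0, 1.0, 2.0)                     # amplitude, centre, width, background
    print(dict(zip(["A", "x0", "sigma", "b"], crlb(x, theta, noise_sigma=0.5))))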

  13. Gated Sensor Fusion: A way to Improve the Precision of Ambulatory Human Body Motion Estimation.

    PubMed

    Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, Gonzalo

    2014-01-01

    Human body motion is usually variable in terms of intensity and, therefore, any Inertial Measurement Unit attached to a subject will measure both low and high angular rates and accelerations. This can be a problem for the accuracy of orientation estimation algorithms based on adaptive filters such as the Kalman filter, since both the variances of the process noise and the measurement noise are set at the beginning of the algorithm and remain constant during its execution. Setting fixed noise parameters burdens the adaptation capability of the filter if the intensity of the motion changes rapidly. In this work we present a novel conjoint algorithm which uses a motion intensity detector to dynamically vary the noise statistical parameters of different Kalman filter approaches. Results show that the precision of the estimated orientation in terms of the RMSE can be improved by up to 29% with respect to the standard fixed-parameter approaches.
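
    A one-dimensional toy version of the gating idea is sketched below: a simple motion-intensity detector (variance of recent measurements) switches the process-noise variance of a scalar Kalman filter between a quiet and an agile setting; thresholds, noise levels and the signal are illustrative assumptions, not the authors' orientation estimator.

    import numpy as np

    rng = np.random.default_rng(2)
    N, dt = 2000, 0.01
    t = np.arange(N) * dt
    truth = np.where(t < N * dt / 2, 0.0, 0.5 * np.sin(8 * np.pi * t))   # still, then moving
    z = truth + 0.05 * rng.standard_normal(N)                            # noisy measurements

    def kalman(z, q_static=1e-6, q_dynamic=1e-2, r=0.05 ** 2, gate=None, win=50):
        x, p, out = 0.0, 1.0, []
        for k in range(len(z)):
            q = q_static
            if gate is not None and k >= win and np.var(z[k - win:k]) > gate:
                q = q_dynamic                 # motion detected: loosen the process noise
            p += q                            # predict (random-walk state model)
            K = p / (p + r)                   # update
            x += K * (z[k] - x)
            p *= (1.0 - K)
            out.append(x)
        return np.array(out)

    for label, est in [("fixed noise", kalman(z)), ("gated noise", kalman(z, gate=0.01))]:
        print(label, "RMSE =", round(float(np.sqrt(np.mean((est - truth) ** 2))), 4))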

  14. An issue encountered in solving problems in electricity and magnetism: curvilinear coordinates

    NASA Astrophysics Data System (ADS)

    Gülçiçek, Çağlar; Damlı, Volkan

    2016-11-01

    In physics lectures on electromagnetic theory and mathematical methods, physics teacher candidates have some difficulties with curvilinear coordinate systems. According to our experience, based on both in-class interactions and teacher candidates’ answers in test papers, they do not seem to have understood the variables in curvilinear coordinate systems very well. For this reason, the problems that physics teacher candidates have with variables in curvilinear coordinate systems have been selected as a study subject. The aim of this study is to find the physics teacher candidates’ problems with determining the variables of drawn shapes, and problems with drawing shapes based on given variables in curvilinear coordinate systems. Two different assessment tests were used in the study to achieve this aim. The curvilinear coordinates drawing test (CCDrT) was used to discover their problems related to drawing shapes, and the curvilinear coordinates detection test (CCDeT) was used to find out about problems related to determining variables. According to the findings obtained from both tests, most physics teacher candidates have problems with the ϕ variable, while they have limited problems with the r variable. Questions that were mostly answered incorrectly share some common properties, such as value. According to inferential statistics, there is no significant difference between the means of the CCDeT and CCDrT scores. The mean of the CCDeT scores is only 4.63 and the mean of the CCDrT is only 4.66. Briefly, we can say that most physics teacher candidates have problems with drawing a shape using the variables of curvilinear coordinate systems or with determining the variables of drawn shapes. Part of this study was presented at the XI. National Science and Mathematics Education Congress (UFBMEK) in 2014.

  15. Soil variability in engineering applications

    NASA Astrophysics Data System (ADS)

    Vessia, Giovanna

    2014-05-01

    Natural geomaterials, such as soils and rocks, show spatial variability and heterogeneity of physical and mechanical properties. These properties can be measured by in-field and laboratory testing. The heterogeneity concerns different values of litho-technical parameters pertaining to similar lithological units placed close to each other. On the contrary, the variability is inherent to the formation and evolution processes experienced by each geological unit (homogeneous geomaterials on average) and is captured as a spatial structure of fluctuation of physical property values about their mean trend, e.g. the unit weight, the hydraulic permeability, the friction angle, the cohesion, among others. The preceding spatial variations must be managed by engineering models to accomplish reliable design of structures and infrastructures. Matheron (1962) introduced geostatistics as the most comprehensive tool to manage the spatial correlation of parameter measures used in a wide range of earth science applications. In the field of engineering geology, Vanmarcke (1977) developed the first pioneering attempts to describe and manage the inherent variability in geomaterials, although Terzaghi (1943) had already highlighted that spatial fluctuations of physical and mechanical parameters used in geotechnical design cannot be neglected. A few years later, Mandelbrot (1983) and Turcotte (1986) interpreted the internal arrangement of geomaterials according to Fractal Theory. In the same years, Vanmarcke (1983) proposed Random Field Theory, providing mathematical tools to deal with the inherent variability of each geological unit or stratigraphic succession that can be treated as one material. In this approach, measurement fluctuations of physical parameters are interpreted through the spatial variability structure, consisting of the correlation function and the scale of fluctuation. Fenton and Griffiths (1992) combined random field simulation with the finite element method to produce the Random Finite Element Method (RFEM). This method has been used to investigate the random behavior of soils in the context of a variety of classical geotechnical problems. Afterwards, subsequent studies collected worldwide variability values of many technical parameters of soils (Phoon and Kulhawy 1999a) and their spatial correlation functions (Phoon and Kulhawy 1999b). In Italy, Cherubini et al. (2007) calculated the spatial variability structure of sandy and clayey soils from standard cone penetration test readings. The large extent of the measured spatial variability of soils and rocks worldwide heavily affects the reliability of geotechnical design, as do other uncertainties introduced by testing devices and engineering models. So far, several methods have been provided to deal with the preceding sources of uncertainty in engineering design models (e.g. First Order Reliability Method, Second Order Reliability Method, Response Surface Method, High Dimensional Model Representation, etc.). Nowadays, the efforts in this field have been focusing on (1) measuring the spatial variability of different rocks and soils and (2) developing numerical models that take into account the spatial variability as an additional physical variable. References: Cherubini C., Vessia G. and Pula W. 2007. Statistical soil characterization of Italian sites for reliability analyses. Proc. 2nd Int. Workshop on Characterization and Engineering Properties of Natural Soils, 3-4: 2681-2706. Griffiths D.V. and Fenton G.A. 1993. 
Seepage beneath water retaining structures founded on spatially random soil, Géotechnique, 43(6): 577-587. Mandelbrot B.B. 1983. The Fractal Geometry of Nature. San Francisco: W H Freeman. Matheron G. 1962. Traité de Géostatistique appliquée. Tome 1, Editions Technip, Paris, 334 p. Phoon K.K. and Kulhawy F.H. 1999a. Characterization of geotechnical variability. Can Geotech J, 36(4): 612-624. Phoon K.K. and Kulhawy F.H. 1999b. Evaluation of geotechnical property variability. Can Geotech J, 36(4): 625-639. Terzaghi K. 1943. Theoretical Soil Mechanics. New York: John Wiley and Sons. Turcotte D.L. 1986. Fractals and fragmentation. J Geophys Res, 91: 1921-1926. Vanmarcke E.H. 1977. Probabilistic modeling of soil profiles. J Geotech Eng Div, ASCE, 103: 1227-1246. Vanmarcke E.H. 1983. Random fields: analysis and synthesis. MIT Press, Cambridge.
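
    A small sketch of the random-field idea discussed above (in the spirit of Vanmarcke) is to generate a depth profile of a soil property with an exponential autocorrelation structure characterized by a scale of fluctuation; the property, its statistics and the scale of fluctuation below are illustrative values only.

    import numpy as np

    rng = np.random.default_rng(5)

    z = np.linspace(0.0, 20.0, 201)          # depth [m]
    mean_su, sd_su = 50.0, 10.0              # illustrative mean and std dev of s_u [kPa]
    theta = 2.0                              # scale of fluctuation [m]

    # Exponential (Markov) autocorrelation: rho(tau) = exp(-2 * |tau| / theta)
    tau = np.abs(z[:, None] - z[None, :])
    C = sd_su ** 2 * np.exp(-2.0 * tau / theta)

    # One realization via Cholesky factorization of the covariance matrix
    # (small jitter added for numerical stability).
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(z)))
    su = mean_su + L @ rng.standard_normal(len(z))
    print("realization: mean =", round(su.mean(), 1), "kPa, std =", round(su.std(), 1), "kPa")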

  16. Improving predictive power of physically based rainfall-induced shallow landslide models: a probabilistic approach

    USGS Publications Warehouse

    Raia, S.; Alvioli, M.; Rossi, M.; Baum, R.L.; Godt, J.W.; Guzzetti, F.

    2013-01-01

    Distributed models to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides are deterministic. These models extend spatially the static stability models adopted in geotechnical engineering and adopt an infinite-slope geometry to balance the resisting and the driving forces acting on the sliding mass. An infiltration model is used to determine how rainfall changes pore-water conditions, modulating the local stability/instability conditions. A problem with the existing models is the difficulty in obtaining accurate values for the several variables that describe the material properties of the slopes. The problem is particularly severe when the models are applied over large areas, for which sufficient information on the geotechnical and hydrological conditions of the slopes is not generally available. To help solve the problem, we propose a probabilistic Monte Carlo approach to the distributed modeling of shallow rainfall-induced landslides. For the purpose, we have modified the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability Analysis (TRIGRS) code. The new code (TRIGRS-P) adopts a stochastic approach to compute, on a cell-by-cell basis, transient pore-pressure changes and related changes in the factor of safety due to rainfall infiltration. Infiltration is modeled using analytical solutions of partial differential equations describing one-dimensional vertical flow in isotropic, homogeneous materials. Both saturated and unsaturated soil conditions can be considered. TRIGRS-P copes with the natural variability inherent to the mechanical and hydrological properties of the slope materials by allowing values of the TRIGRS model input parameters to be sampled randomly from a given probability distribution. The range of variation and the mean value of the parameters can be determined by the usual methods used for preparing the TRIGRS input parameters. The outputs of several model runs obtained by varying the input parameters are analyzed statistically, and compared to the original (deterministic) model output. The comparison suggests an improvement of the predictive power of the model of about 10% and 16% in two small test areas, i.e. the Frontignano (Italy) and the Mukilteo (USA) areas, respectively. We discuss the computational requirements of TRIGRS-P to determine the potential use of the numerical model to forecast the spatial and temporal occurrence of rainfall-induced shallow landslides in very large areas, extending for several hundreds or thousands of square kilometers. Parallel execution of the code using a simple process distribution and the Message Passing Interface (MPI) on multi-processor machines was successful, opening the possibility of testing the use of TRIGRS-P for the operational forecasting of rainfall-induced shallow landslides over large regions.
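
    A much-reduced sketch of the stochastic approach is shown below: soil strength parameters are sampled from assumed probability distributions and the static infinite-slope factor of safety is evaluated for each draw; TRIGRS-P additionally couples this with transient infiltration and pore-pressure changes, which are not modelled here, and all numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000

    beta = np.radians(32.0)              # slope angle
    z = 2.0                              # depth of the potential slip surface [m]
    gamma, gamma_w = 19.0, 9.81          # soil and water unit weights [kN/m^3]
    m_w = 0.5                            # water table ratio (kept fixed here)

    # Uncertain soil parameters sampled from assumed distributions.
    phi = np.radians(rng.normal(33.0, 2.0, n))       # friction angle
    c = np.maximum(rng.normal(4.0, 1.5, n), 0.0)     # effective cohesion [kPa]

    # Infinite-slope factor of safety with slope-parallel seepage.
    fs = (c + (gamma - m_w * gamma_w) * z * np.cos(beta) ** 2 * np.tan(phi)) \
         / (gamma * z * np.sin(beta) * np.cos(beta))
    print("probability of failure, P(FS < 1):", round(float(np.mean(fs < 1.0)), 3))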

  17. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop limit is reached, or no further design improvement is possible due to active design variable bounds and/or constraints. The resulting shape parameters are then used by the grid generation code to define a new wing surface and computational grid. The lift-to-drag ratio and its gradient are computed for the new design by the automatically-generated adjoint codes. Several optimization iterations may be required to find an optimum wing shape. Results from two sample cases will be discussed. The reader should note that this work primarily represents a demonstration of use of automatically-generated adjoint code within an aerodynamic shape optimization. As such, little significance is placed upon the actual optimization results, relative to the method for obtaining the results.

  18. Investigation of various factors influencing Raman spectra interpretation with the use of likelihood ratio approach.

    PubMed

    Michalska, Aleksandra; Martyna, Agnieszka; Zadora, Grzegorz

    2018-01-01

    The main aim of this study was to verify whether selected analytical parameters may affect solving the comparison problem of Raman spectra with the use of the likelihood ratio (LR) approach. Firstly, the LR methodologies developed for Raman spectra of blue automotive paints obtained with a 785 nm laser source (results published by the authors previously) were implemented for good-quality spectra recorded for these paints with a 514.5 nm laser source. For LR model construction, two types of variables were used, i.e., areas under selected pigment bands and coefficients derived from the discrete wavelet transform (DWT) procedure. A few experiments were designed for the 785 nm and 514.5 nm Raman spectra databases after constructing well-performing LR models (low rates of false positive and false negative answers and acceptable results of the empirical cross entropy approach). In order to verify whether the objective magnification, described by its numerical aperture, affects spectra interpretation, three objective magnifications - 20× (N.A.=0.4), 50× (N.A.=0.75) and 100× (N.A.=0.85) - within each of the applied laser sources (514.5 nm and 785 nm) were tested for a group of blue solid and metallic automotive paints having the same sets of pigments depending on the applied laser source. The findings obtained by the two types of LR models indicate the importance of this parameter for solving the comparison problem of both solid and metallic automotive paints, regardless of the laser source used for measuring the Raman signal. Hence, the same objective magnification, preferably 50× (established based on the analysis of within- and between-sample variability and the F-factor value), should be used when focusing the laser on samples during Raman measurements. Then the influence of the parameters (laser power and time of irradiation) of one of the recommended fluorescence suppression techniques, namely photobleaching, was investigated. Analysis performed on a group of solid automotive paint samples showed that, for an established laser power, the time of irradiation does not affect solving the comparison problem with the use of the LR test. Likewise, for an established time of irradiation, 5% or 10% laser power could be used interchangeably without changing conclusions within this problem. However, for an established time of irradiation, changes in laser power between the control and recovered sample from 5% or 10% to 50% may cause erroneous conclusions. Additionally, it was also shown that prolonged irradiation of paint does not quantitatively affect the pigment band areas revealed after such a pre-treatment. Copyright © 2017 Elsevier B.V. All rights reserved.
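
    The LR computation itself can be illustrated with a generic two-level univariate normal model for one feature (for instance, the area under one pigment band). This is only a sketch of the comparison-problem logic, not the multivariate or DWT-based models used in the study, and the within-source and between-source variances are assumed to have been estimated from a background database.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal, norm

    def two_level_normal_lr(control, recovered, within_var, between_var, pop_mean):
        """Univariate two-level normal LR: H_same (one shared source mean drawn from
        the population) versus H_diff (two independent source means)."""
        n1, n2 = len(control), len(recovered)
        m1, m2 = np.mean(control), np.mean(recovered)
        v1, v2 = within_var / n1, within_var / n2

        # Numerator: joint density of the two sample means under a common source.
        cov_same = np.array([[between_var + v1, between_var],
                             [between_var, between_var + v2]])
        num = multivariate_normal(mean=[pop_mean, pop_mean], cov=cov_same).pdf([m1, m2])

        # Denominator: independent sources, each sample mean marginalised separately.
        den = (norm(pop_mean, np.sqrt(between_var + v1)).pdf(m1) *
               norm(pop_mean, np.sqrt(between_var + v2)).pdf(m2))
        return num / den

    lr = two_level_normal_lr(control=[10.2, 10.4, 10.3], recovered=[10.5, 10.1],
                             within_var=0.04, between_var=1.5, pop_mean=9.0)
    print("LR =", lr)   # LR > 1 supports the same-source hypothesis
    ```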

  19. Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar.

    PubMed

    Lomp, Oliver; Richter, Mathis; Zibner, Stephan K U; Schöner, Gregor

    2016-01-01

    Embodied artificial cognitive systems, such as autonomous robots or intelligent observers, connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It enables changing dynamic parameters online and visualizing the activation states of any component while the agent is receiving sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs.
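
    A minimal sketch of the kind of dynamics cedar lets one tune, assuming a one-dimensional Amari-type field with hypothetical resting level, interaction kernel and stimulus parameters; tuning these parameters moves the field between dynamic regimes (sub-threshold response versus a self-stabilized peak).

    ```python
    import numpy as np

    x = np.linspace(-10, 10, 201)
    dx = x[1] - x[0]
    tau, h = 20.0, -5.0                                  # time scale, resting level
    kernel = 8.0 * np.exp(-x**2 / (2 * 1.5**2)) - 3.0    # local excitation, broad inhibition

    def sigmoid(u, beta=4.0):
        return 1.0 / (1.0 + np.exp(-beta * u))

    u = np.full_like(x, h)                               # field activation
    stimulus = 7.0 * np.exp(-(x - 2.0)**2 / (2 * 1.0**2))
    for _ in range(500):                                 # Euler steps, dt = 1 (arbitrary units)
        interaction = np.convolve(sigmoid(u), kernel, mode="same") * dx
        u += (-u + h + stimulus + interaction) / tau
    print("peak location:", x[np.argmax(u)])             # a self-stabilized peak near x = 2
    ```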

  20. Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar

    PubMed Central

    Lomp, Oliver; Richter, Mathis; Zibner, Stephan K. U.; Schöner, Gregor

    2016-01-01

    Embodied artificial cognitive systems, such as autonomous robots or intelligent observers, connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It enables changing dynamic parameters online and visualizing the activation states of any component while the agent is receiving sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs. PMID:27853431

  1. Characterization of complex systems using the design of experiments approach: transient protein expression in tobacco as a case study.

    PubMed

    Buyel, Johannes Felix; Fischer, Rainer

    2014-01-31

    Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.

  2. Fuzzy Mixed Assembly Line Sequencing and Scheduling Optimization Model Using Multiobjective Dynamic Fuzzy GA

    PubMed Central

    Tahriri, Farzad; Dawal, Siti Zawiah Md; Taha, Zahari

    2014-01-01

    A new multiobjective dynamic fuzzy genetic algorithm is applied to solve a fuzzy mixed-model assembly line sequencing problem in which the primary goals are to minimize the total makespan and the setup number simultaneously. Trapezoidal fuzzy numbers are implemented for variables such as operation and travelling time in order to generate results with higher accuracy that are representative of real-case data. An improved genetic algorithm called the fuzzy adaptive genetic algorithm (FAGA) is proposed in order to solve this optimization model. In establishing the FAGA, five dynamic fuzzy parameter controllers are devised, in which a fuzzy expert experience controller (FEEC) is integrated with an automatic learning dynamic fuzzy controller (ALDFC) technique. The enhanced algorithm dynamically adjusts the population size, number of generations, tournament candidates, crossover rate, and mutation rate, compared with using fixed control parameters. The main idea is to improve the performance and effectiveness of existing GAs by dynamic adjustment and control of these five parameters. Verification and validation of the dynamic fuzzy GA are carried out by developing test-beds and testing on a multiobjective fuzzy mixed production assembly line sequencing optimization problem. The simulation results highlight that the proposed novel optimization algorithm is more efficient and effective than the standard genetic algorithm in the mixed assembly line sequencing model. PMID:24982962

  3. Modeling Governance KB with CATPCA to Overcome Multicollinearity in the Logistic Regression

    NASA Astrophysics Data System (ADS)

    Khikmah, L.; Wijayanto, H.; Syafitri, U. D.

    2017-04-01

    A problem often encountered in logistic regression modeling is multicollinearity. Multicollinearity between explanatory variables results in biased parameter estimates and also causes errors in the classification. Stepwise regression is generally used to overcome multicollinearity in regression. There is also another method that overcomes multicollinearity while keeping all variables for prediction: Principal Component Analysis (PCA). However, classical PCA is only suitable for numeric data; when the data are categorical, one method to solve the problem is Categorical Principal Component Analysis (CATPCA). The data used in this research were part of the Indonesia Demographic and Health Survey (IDHS) 2012. This research focuses on the characteristics of women using contraceptive methods. Classification results were evaluated using Area Under Curve (AUC) values; the higher the AUC value, the better. Based on AUC values, the classification of the contraceptive method using the stepwise method (58.66%) is better than the logistic regression model (57.39%) and CATPCA (57.39%). Evaluation of the logistic regression results using sensitivity shows the opposite: the CATPCA method (99.79%) is better than the logistic regression method (92.43%) and stepwise (92.05%). Because this study focuses on the classification of the major class (women using a contraceptive method), the selected model is CATPCA, as it raises the accuracy of the model for the major class.
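
    The general remedy, projecting collinear predictors onto a few components before fitting the logistic regression, can be sketched as follows; plain PCA on synthetic numeric data is used here as a stand-in for CATPCA, so the data and settings are illustrative only.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 500
    z = rng.normal(size=n)
    # Four highly collinear predictors built from one latent variable.
    X = np.column_stack([z + 0.05 * rng.normal(size=n) for _ in range(4)])
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-z))).astype(int)

    # Scale, project onto two components, then fit the logistic regression.
    model = make_pipeline(StandardScaler(), PCA(n_components=2), LogisticRegression())
    model.fit(X, y)
    print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
    ```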

  4. Thermal runaway and microwave heating in thin cylindrical domains

    NASA Astrophysics Data System (ADS)

    Ward, Michael J.

    2002-04-01

    The behaviour of the solution to two nonlinear heating problems in a thin cylinder of revolution of variable cross-sectional area is analysed using asymptotic and numerical methods. The first problem is to calculate the fold point, corresponding to the onset of thermal runaway, for a steady-state nonlinear elliptic equation that arises in combustion theory. In the limit of thin cylindrical domains, it is shown that the onset of thermal runaway can be delayed when a circular cylindrical domain is perturbed into a dumbbell shape. Numerical values for the fold point for different domain shapes are obtained asymptotically and numerically. The second problem that is analysed is a nonlinear parabolic equation modelling the microwave heating of a ceramic cylinder by a known electric field. The basic model in a thin circular cylindrical domain was analysed in Booty & Kriegsmann (Meth. Appl. Anal. 4 (1994) p. 403). Their analysis is extended to treat thin cylindrical domains of variable cross-section. It is shown that the steady-state and dynamic behaviours of localized regions of high temperature, called hot-spots, depend on a competition between the maxima of the electric field and the maximum deformation of the circular cylinder. For a dumbbell-shaped region it is shown that two disconnected hot-spot regions can occur. Depending on the parameters in the model, these regions, ultimately, either merge as time increases or else remain as disconnected regions for all time.

  5. Dynamic optimization of open-loop input signals for ramp-up current profiles in tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Ren, Zhigang; Xu, Chao; Lin, Qun; Loxton, Ryan; Teo, Kok Lay

    2016-03-01

    Establishing a good current spatial profile in tokamak fusion reactors is crucial to effective steady-state operation. The evolution of the current spatial profile is related to the evolution of the poloidal magnetic flux, which can be modeled in the normalized cylindrical coordinates using a parabolic partial differential equation (PDE) called the magnetic diffusion equation. In this paper, we consider the dynamic optimization problem of attaining the best possible current spatial profile during the ramp-up phase of the tokamak. We first use the Galerkin method to obtain a finite-dimensional ordinary differential equation (ODE) model based on the original magnetic diffusion PDE. Then, we combine the control parameterization method with a novel time-scaling transformation to obtain an approximate optimal parameter selection problem, which can be solved using gradient-based optimization techniques such as sequential quadratic programming (SQP). This control parameterization approach involves approximating the tokamak input signals by piecewise-linear functions whose slopes and break-points are decision variables to be optimized. We show that the gradient of the objective function with respect to the decision variables can be computed by solving an auxiliary dynamic system governing the state sensitivity matrix. Finally, we conclude the paper with simulation results for an example problem based on experimental data from the DIII-D tokamak in San Diego, California.
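
    The control parameterization idea can be sketched on a toy scalar system: the input is a piecewise-linear function of time whose node values are the decision variables, and SQP (here SLSQP from SciPy) adjusts them to drive the terminal state toward a target. The paper also optimizes the break points through a time-scaling transformation and works with the magnetic diffusion model; both are omitted in this hypothetical example.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    t_nodes = np.linspace(0.0, 1.0, 6)           # fixed break points of the input signal
    target = 1.0                                 # desired terminal state

    def simulate(u_nodes):
        u = lambda t: np.interp(t, t_nodes, u_nodes)   # piecewise-linear input signal
        rhs = lambda t, x: -x + u(t)                   # toy scalar dynamics
        sol = solve_ivp(rhs, (0.0, 1.0), [0.0])
        return sol.y[0, -1]                            # terminal state

    def objective(u_nodes):
        return (simulate(u_nodes) - target) ** 2

    res = minimize(objective, x0=np.zeros(6), method="SLSQP",
                   bounds=[(0.0, 3.0)] * 6)            # actuator limits on the node values
    print("optimal nodes:", np.round(res.x, 3), "terminal state:", simulate(res.x))
    ```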

  6. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. SEnKF has the following characteristics: (1) it estimates the model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and controls the narrowing of parameter variance, which results in filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses various sources of uncertainty stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm in evaluating and developing ecosystem models and in improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
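
    The core of the joint state-parameter idea can be sketched with a toy scalar model: each ensemble member carries a state and a parameter, and a single Kalman update corrects both through their sampled cross-covariance. The kernel smoothing step that SEnKF adds to control parameter-variance collapse is omitted, and all numbers are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_ens, obs_std, true_k, forcing = 100, 0.05, 0.3, 0.3

    # Each member is [state x, parameter k]; the parameter guess is deliberately biased.
    ens = np.column_stack([rng.normal(1.0, 0.2, n_ens),
                           rng.normal(0.6, 0.2, n_ens)])
    x_true = 1.0

    for _ in range(50):
        # Forecast with a toy decay model: x <- x * exp(-k) + forcing
        ens[:, 0] = ens[:, 0] * np.exp(-ens[:, 1]) + forcing
        x_true = x_true * np.exp(-true_k) + forcing
        obs = x_true + rng.normal(0, obs_std)

        # Analysis: observation operator H = [1, 0]; gain from the ensemble covariance.
        P = np.cov(ens, rowvar=False)
        gain = P[:, 0] / (P[0, 0] + obs_std**2)          # updates x and k jointly
        innov = obs + rng.normal(0, obs_std, n_ens) - ens[:, 0]   # perturbed observations
        ens += np.outer(innov, gain)

    print("estimated k:", ens[:, 1].mean(), "true k:", true_k)
    ```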

  7. Edge delamination in angle-ply composite laminates, part 5

    NASA Technical Reports Server (NTRS)

    Wang, S. S.

    1981-01-01

    A theoretical method was developed for describing the edge delamination stress intensity characteristics in angle-ply composite laminates. The method is based on the theory of anisotropic elasticity. The edge delamination problem is formulated using Lekhnitskii's complex-variable stress potentials and an especially developed eigenfunction expansion method. The method predicts exact orders of the three-dimensional stress singularity in a delamination crack tip region. With the aid of boundary collocation, the method predicts the complete stress and displacement fields in a finite-dimensional, delaminated composite. Fracture mechanics parameters such as the mixed-mode stress intensity factors and associated energy release rates for edge delamination can be calculated explicitly. Solutions are obtained for edge delaminated (theta/-theta theta/-theta) angle-ply composites under uniform axial extension. Effects of delamination lengths, fiber orientations, lamination and geometric variables are studied.

  8. Practical security analysis of continuous-variable quantum key distribution with jitter in clock synchronization

    NASA Astrophysics Data System (ADS)

    Xie, Cailang; Guo, Ying; Liao, Qin; Zhao, Wei; Huang, Duan; Zhang, Ling; Zeng, Guihua

    2018-03-01

    How to narrow the gap of security between theory and practice has been a notoriously urgent problem in quantum cryptography. Here, we analyze and provide experimental evidence of the clock jitter effect on the practical continuous-variable quantum key distribution (CV-QKD) system. Clock jitter is a random noise which exists permanently in the clock synchronization of the practical CV-QKD system; it may compromise the system security because of its impact on data sampling and parameter estimation. In particular, the practical security of CV-QKD with different clock jitter against collective attack is analyzed theoretically based on different repetition frequencies; the numerical simulations indicate that the clock jitter has more impact on a high-speed scenario. Furthermore, a simplified experiment is designed to investigate the influence of the clock jitter.

  9. Enhancing Important Fluctuations: Rare Events and Metadynamics from a Conceptual Viewpoint

    NASA Astrophysics Data System (ADS)

    Valsson, Omar; Tiwary, Pratyush; Parrinello, Michele

    2016-05-01

    Atomistic simulations play a central role in many fields of science. However, their usefulness is often limited by the fact that many systems are characterized by several metastable states separated by high barriers, leading to kinetic bottlenecks. Transitions between metastable states are thus rare events that occur on significantly longer timescales than one can simulate in practice. Numerous enhanced sampling methods have been introduced to alleviate this timescale problem, including methods based on identifying a few crucial order parameters or collective variables and enhancing the sampling of these variables. Metadynamics is one such method that has proven successful in a great variety of fields. Here we review the conceptual and theoretical foundations of metadynamics. As demonstrated, metadynamics is not just a practical tool but can also be considered an important development in the theory of statistical mechanics.
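
    A conceptual sketch of the bias-deposition loop on a one-dimensional collective variable (a toy double-well free energy, overdamped Langevin dynamics, and arbitrary Gaussian height, width and deposition stride; not a production metadynamics setup):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    height, width, dt, kT = 0.3, 0.2, 1e-3, 1.0
    centers = []                                   # deposited Gaussian centers

    def free_energy_grad(s):                       # toy double well F(s) = (s^2 - 1)^2
        return 4.0 * s * (s**2 - 1.0)

    def bias_grad(s):
        if not centers:
            return 0.0
        c = np.array(centers)
        return np.sum(-height * (s - c) / width**2 * np.exp(-(s - c)**2 / (2 * width**2)))

    s = -1.0
    for step in range(200_000):
        force = -free_energy_grad(s) - bias_grad(s)
        s += force * dt + np.sqrt(2 * kT * dt) * rng.normal()
        if step % 500 == 0:
            centers.append(s)                      # deposit a new Gaussian at the current s

    # The negative of the accumulated bias approximates F(s) up to a constant.
    grid = np.linspace(-2, 2, 101)
    c = np.array(centers)
    bias = height * np.exp(-(grid[:, None] - c) ** 2 / (2 * width**2)).sum(axis=1)
    f_est = -bias
    print("estimated barrier:", f_est[np.abs(grid).argmin()] - f_est.min())
    ```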

  10. Matrix superpotentials

    NASA Astrophysics Data System (ADS)

    Nikitin, Anatoly G.; Karadzhov, Yuri

    2011-07-01

    We present a collection of matrix-valued shape invariant potentials which give rise to new exactly solvable problems of SUSY quantum mechanics. It includes all irreducible matrix superpotentials of the generic form W = kQ + (1/k)R + P, where k is a variable parameter, Q is the unit matrix multiplied by a real-valued function of independent variable x, and P and R are the Hermitian matrices depending on x. In particular, we recover the Pron'ko-Stroganov 'matrix Coulomb potential' and all known scalar shape invariant potentials of SUSY quantum mechanics. In addition, five new shape invariant potentials are presented. Three of them admit a dual shape invariance, i.e. the related Hamiltonians can be factorized using two non-equivalent superpotentials. We find discrete spectrum and eigenvectors for the corresponding Schrödinger equations and prove that these eigenvectors are normalizable.

  11. On-line implementation of nonlinear parameter estimation for the Space Shuttle main engine

    NASA Technical Reports Server (NTRS)

    Buckland, Julia H.; Musgrave, Jeffrey L.; Walker, Bruce K.

    1992-01-01

    We investigate the performance of a nonlinear estimation scheme applied to the estimation of several parameters in a performance model of the Space Shuttle Main Engine. The nonlinear estimator is based upon the extended Kalman filter which has been augmented to provide estimates of several key performance variables. The estimated parameters are directly related to the efficiency of both the low pressure and high pressure fuel turbopumps. Decreases in the parameter estimates may be interpreted as degradations in turbine and/or pump efficiencies which can be useful measures for an online health monitoring algorithm. This paper extends previous work which has focused on off-line parameter estimation by investigating the filter's on-line potential from a computational standpoint. In addition, we examine the robustness of the algorithm to unmodeled dynamics. The filter uses a reduced-order model of the engine that includes only fuel-side dynamics. The on-line results produced during this study are comparable to off-line results generated previously. The results show that the parameter estimates are sensitive to dynamics not included in the filter model. Off-line results using an extended Kalman filter with a full order engine model to address the robustness problems of the reduced-order model are also presented.

  12. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  13. Experimental parameter identification of a multi-scale musculoskeletal model controlled by electrical stimulation: application to patients with spinal cord injury.

    PubMed

    Benoussaad, Mourad; Poignet, Philippe; Hayashibe, Mitsuhiro; Azevedo-Coste, Christine; Fattal, Charles; Guiraud, David

    2013-06-01

    We investigated the parameter identification of a multi-scale physiological model of skeletal muscle, based on Huxley's formulation. We focused particularly on the knee joint controlled by quadriceps muscles under electrical stimulation (ES) in subjects with a complete spinal cord injury. A noninvasive and in vivo identification protocol was thus applied through surface stimulation in nine subjects and through neural stimulation in one ES-implanted subject. The identification protocol included initial identification steps, which are adaptations of existing identification techniques to estimate most of the parameters of our model. Then we applied an original and safer identification protocol in dynamic conditions, which required resolution of a nonlinear programming (NLP) problem to identify the serial element stiffness of quadriceps. Each identification step and cross validation of the estimated model in dynamic condition were evaluated through a quadratic error criterion. The results highlighted good accuracy, the efficiency of the identification protocol and the ability of the estimated model to predict the subject-specific behavior of the musculoskeletal system. From the comparison of parameter values between subjects, we discussed and explored the inter-subject variability of parameters in order to select parameters that have to be identified in each patient.

  14. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    PubMed

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
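
    The two-step idea, smoothing the noisy states first and then plugging them into a discretized estimating equation, can be sketched for a one-parameter linear ODE using the trapezoidal rule; the model, noise level and smoothing factor below are illustrative.

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(0)
    theta_true, dt = 0.8, 0.1
    t = np.arange(0, 10, dt)
    x_obs = np.exp(-theta_true * t) + rng.normal(0, 0.02, t.size)   # noisy observations of x' = -theta*x

    # Step 1: smooth the state with a (penalized) spline.
    x_hat = UnivariateSpline(t, x_obs, s=t.size * 0.02**2)(t)

    # Step 2: trapezoidal estimating equation,
    # x_{i+1} - x_i = -theta * (x_i + x_{i+1}) * dt / 2, solved by least squares.
    dx = np.diff(x_hat)
    z = -(x_hat[:-1] + x_hat[1:]) * dt / 2.0
    theta_est = np.sum(z * dx) / np.sum(z * z)
    print("theta estimate:", theta_est, "true:", theta_true)
    ```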

  15. Data-driven strategies for robust forecast of continuous glucose monitoring time-series.

    PubMed

    Fiorini, Samuele; Martini, Chiara; Malpassi, Davide; Cordera, Renzo; Maggi, Davide; Verri, Alessandro; Barla, Annalisa

    2017-07-01

    Over the past decade, continuous glucose monitoring (CGM) has proven to be a very resourceful tool for diabetes management. To date, CGM devices are employed for both retrospective and online applications. Their use allows a better description of the patients' pathology as well as a better control of the patients' level of glycemia. The analysis of CGM sensor data makes it possible to observe a wide range of metrics, such as the glycemic variability during the day or the amount of time spent below or above certain glycemic thresholds. However, due to the high variability of the glycemic signals among sensors and individuals, CGM data analysis is a non-trivial task. Standard signal filtering solutions fall short when an appropriate model personalization is not applied. State-of-the-art data-driven strategies for online CGM forecasting rely upon the use of recursive filters. Each time a new sample is collected, such models need to adjust their parameters in order to predict the next glycemic level. In this paper we aim at demonstrating that the problem of online CGM forecasting can be successfully tackled by personalized machine learning models that do not need to recursively update their parameters.

  16. A simulation study on Bayesian Ridge regression models for several collinearity levels

    NASA Astrophysics Data System (ADS)

    Efendi, Achmad; Effrihan

    2017-12-01

    When analyzing data with a multiple regression model, if there are collinearities, one or several predictor variables are usually omitted from the model. However, there are sometimes reasons, for instance medical or economic ones, why all the predictors are important and should be included in the model. Ridge regression is not uncommonly used in research to cope with collinearity. In this modeling, weights for the predictor variables are used when estimating parameters, and the estimation can follow the concept of likelihood. Furthermore, the Bayesian version of the estimation is nowadays an alternative. This estimation method has not matched the likelihood approach in popularity because of some difficulties, computation among them. Nevertheless, with the recent improvement of computational methodology, this caveat should no longer be a problem. This paper discusses a simulation process for evaluating the characteristics of Bayesian Ridge regression parameter estimates. There are several simulation settings based on a variety of collinearity levels and sample sizes. The results show that the Bayesian method gives better performance for relatively small sample sizes, while for the other settings the method performs similarly to the likelihood method.
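
    A minimal version of such a simulation, comparing ordinary least squares with scikit-learn's BayesianRidge on strongly collinear predictors and a small sample, might look as follows (the settings are illustrative, not those of the study):

    ```python
    import numpy as np
    from sklearn.linear_model import BayesianRidge, LinearRegression

    rng = np.random.default_rng(0)
    n, beta = 30, np.array([1.0, 2.0, -1.0])
    z = rng.normal(size=n)
    # Three predictors that are nearly copies of one latent variable (high collinearity).
    X = np.column_stack([z,
                         z + 0.05 * rng.normal(size=n),
                         z + 0.05 * rng.normal(size=n)])
    y = X @ beta + rng.normal(0, 0.5, n)

    for model in (LinearRegression(), BayesianRidge()):
        model.fit(X, y)
        print(type(model).__name__, np.round(model.coef_, 2))   # compare with beta
    ```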

  17. Continuous-variable phase estimation with unitary and random linear disturbance

    NASA Astrophysics Data System (ADS)

    Delgado de Souza, Douglas; Genoni, Marco G.; Kim, M. S.

    2014-10-01

    We address the problem of continuous-variable quantum phase estimation in the presence of linear disturbance at the Hamiltonian level by means of Gaussian probe states. In particular we discuss both unitary and random disturbance by considering the parameter which characterizes the unwanted linear term present in the Hamiltonian as fixed (unitary disturbance) or random with a given probability distribution (random disturbance). We derive the optimal input Gaussian states at fixed energy, maximizing the quantum Fisher information over the squeezing angle and the squeezing energy fraction, and we discuss the scaling of the quantum Fisher information in terms of the output number of photons, n_out. We observe that, in the case of unitary disturbance, the optimal state is a squeezed vacuum state and the quadratic scaling is conserved. As regards the random disturbance, we observe that the optimal squeezing fraction may not be equal to one and, for any nonzero value of the noise parameter, the quantum Fisher information scales linearly with the average number of photons. Finally, we discuss the performance of homodyne measurement by comparing the achievable precision with the ultimate limit imposed by the quantum Cramér-Rao bound.

  18. Fuzzy – PI controller to control the velocity parameter of Induction Motor

    NASA Astrophysics Data System (ADS)

    Malathy, R.; Balaji, V.

    2018-04-01

    The major applications of the induction motor include its use in industry because of its high robustness, reliability, low cost, high efficiency and good self-starting capability. Even though it has the above mentioned advantages, it also has some limitations: (1) the standard motor is not a true constant-speed machine, its full-load slip varies less than 1% (in high-horsepower motors); and (2) it is not inherently capable of providing variable-speed operation. In order to solve the above mentioned problems, smart motor controls and variable speed controllers are used. Motor applications involve nonlinearity features, which can be controlled by a fuzzy logic controller as it is capable of handling those features with high efficiency and it acts similarly to a human operator. This paper presents the individuality of the plant modelling. The fuzzy logic controller (FLC) relies on a set of linguistic if-then rules, a rule-based Mamdani, for a closed-loop induction motor model. The motor model is designed and membership functions are chosen according to the parameters of the motor model. Simulation results contain the nonlinearity in the induction motor model. A conventional PI controller is compared practically to the fuzzy logic controller using Simulink.

  19. Hands-on parameter search for neural simulations by a MIDI-controller.

    PubMed

    Eichner, Hubert; Borst, Alexander

    2011-01-01

    Computational neuroscientists frequently encounter the challenge of parameter fitting--exploring a usually high dimensional variable space to find a parameter set that reproduces an experimental data set. One common approach is using automated search algorithms such as gradient descent or genetic algorithms. However, these approaches suffer several shortcomings related to their lack of understanding of the underlying question, such as defining a suitable error function or getting stuck in local minima. Another widespread approach is manual parameter fitting using a keyboard or a mouse, evaluating different parameter sets following the user's intuition. However, this process is often cumbersome and time-intensive. Here, we present a new method for manual parameter fitting. A MIDI controller provides input to the simulation software, where model parameters are then tuned according to the knob and slider positions on the device. The model is immediately updated on every parameter change, continuously plotting the latest results. Given reasonably short simulation times of less than one second, we find this method to be highly efficient in quickly determining good parameter sets. Our approach bears a close resemblance to tuning the sound of an analog synthesizer, giving the user a very good intuition of the problem at hand, such as immediate feedback on if and how results are affected by specific parameter changes. In addition to being used in research, our approach should be an ideal teaching tool, allowing students to interactively explore complex models such as Hodgkin-Huxley or dynamical systems.
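
    A hedged sketch of the basic loop using the mido library (the CC-to-parameter mapping, the parameter names and ranges are hypothetical, and run_simulation stands in for the actual model; a MIDI backend such as python-rtmidi must be installed):

    ```python
    import mido

    # Map MIDI control-change numbers to (parameter name, lower bound, upper bound).
    PARAM_MAP = {1: ("g_na", 0.0, 240.0),     # knob 1 -> sodium conductance (mS/cm^2)
                 2: ("g_k", 0.0, 80.0)}       # knob 2 -> potassium conductance

    params = {"g_na": 120.0, "g_k": 36.0}

    def run_simulation(p):                    # placeholder for the actual model + plotting
        print("re-running model with", p)

    with mido.open_input() as port:           # default MIDI input port
        for msg in port:                      # blocks until the next message arrives
            if msg.type == "control_change" and msg.control in PARAM_MAP:
                name, lo, hi = PARAM_MAP[msg.control]
                params[name] = lo + (hi - lo) * msg.value / 127.0   # rescale 7-bit CC value
                run_simulation(params)
    ```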

  20. Hands-On Parameter Search for Neural Simulations by a MIDI-Controller

    PubMed Central

    Eichner, Hubert; Borst, Alexander

    2011-01-01

    Computational neuroscientists frequently encounter the challenge of parameter fitting – exploring a usually high dimensional variable space to find a parameter set that reproduces an experimental data set. One common approach is using automated search algorithms such as gradient descent or genetic algorithms. However, these approaches suffer several shortcomings related to their lack of understanding of the underlying question, such as defining a suitable error function or getting stuck in local minima. Another widespread approach is manual parameter fitting using a keyboard or a mouse, evaluating different parameter sets following the user's intuition. However, this process is often cumbersome and time-intensive. Here, we present a new method for manual parameter fitting. A MIDI controller provides input to the simulation software, where model parameters are then tuned according to the knob and slider positions on the device. The model is immediately updated on every parameter change, continuously plotting the latest results. Given reasonably short simulation times of less than one second, we find this method to be highly efficient in quickly determining good parameter sets. Our approach bears a close resemblance to tuning the sound of an analog synthesizer, giving the user a very good intuition of the problem at hand, such as immediate feedback on if and how results are affected by specific parameter changes. In addition to being used in research, our approach should be an ideal teaching tool, allowing students to interactively explore complex models such as Hodgkin-Huxley or dynamical systems. PMID:22066027

  1. Finite‐fault Bayesian inversion of teleseismic body waves

    USGS Publications Warehouse

    Clayton, Brandon; Hartzell, Stephen; Moschetti, Morgan P.; Minson, Sarah E.

    2017-01-01

    Inverting geophysical data has provided fundamental information about the behavior of earthquake rupture. However, inferring kinematic source model parameters for finite‐fault ruptures is an intrinsically underdetermined problem (the problem of nonuniqueness), because we are restricted to finite noisy observations. Although many studies use least‐squares techniques to make the finite‐fault problem tractable, these methods generally lack the ability to apply non‐Gaussian error analysis and the imposition of nonlinear constraints. However, the Bayesian approach can be employed to find a Gaussian or non‐Gaussian distribution of all probable model parameters, while utilizing nonlinear constraints. We present case studies to quantify the resolving power and associated uncertainties using only teleseismic body waves in a Bayesian framework to infer the slip history for a synthetic case and two earthquakes: the 2011 Mw 7.1 Van, east Turkey, earthquake and the 2010 Mw 7.2 El Mayor–Cucapah, Baja California, earthquake. In implementing the Bayesian method, we further present two distinct solutions to investigate the uncertainties by performing the inversion with and without velocity structure perturbations. We find that the posterior ensemble becomes broader when including velocity structure variability and introduces a spatial smearing of slip. Using the Bayesian framework solely on teleseismic body waves, we find rake is poorly constrained by the observations and rise time is poorly resolved when slip amplitude is low.

  2. Towards conformal loop quantum gravity

    NASA Astrophysics Data System (ADS)

    H-T Wang, Charles

    2006-03-01

    A discussion is given of recent developments in canonical gravity that assimilates the conformal analysis of gravitational degrees of freedom. The work is motivated by the problem of time in quantum gravity and is carried out at the metric and the triad levels. At the metric level, it is shown that by extending the Arnowitt-Deser-Misner (ADM) phase space of general relativity (GR), a conformal form of geometrodynamics can be constructed. In addition to the Hamiltonian and Diffeomorphism constraints, an extra first class constraint is introduced to generate conformal transformations. This phase space consists of York's mean extrinsic curvature time, conformal three-metric and their momenta. At the triad level, the phase space of GR is further enlarged by incorporating spin-gauge as well as conformal symmetries. This leads to a canonical formulation of GR using a new set of real spin connection variables. The resulting gravitational constraints are first class, consisting of the Hamiltonian constraint and the canonical generators for spin-gauge and conformorphism transformations. The formulation has a remarkable feature of being parameter-free. Indeed, it is shown that a conformal parameter of the Barbero-Immirzi type can be absorbed by the conformal symmetry of the extended phase space. This gives rise to an alternative approach to loop quantum gravity that addresses both the conceptual problem of time and the technical problem of functional calculus in quantum gravity.

  3. 3D+T motion analysis with nanosensors

    NASA Astrophysics Data System (ADS)

    Leduc, Jean-Pierre

    2017-09-01

    This paper addresses the problem of motion analysis performed in a signal sampled on an irregular grid spread in 3-dimensional space and time (3D+T). Nanosensors can be randomly scattered in the field to form a "sensor network". Once released, each nanosensor transmits at its own fixed pace information which corresponds to some physical variable measured in the field. Each nanosensor is supposed to have a limited lifetime given by a Poisson-exponential distribution after release. The motion analysis is supported by a model based on a Lie group called the Galilei group that refers to the actual mechanics that takes place on some given geometry. The Galilei group has representations in the Hilbert space of the captured signals. Those representations have the properties of being unitary, irreducible and square-integrable, and they enable the existence of admissible continuous wavelets fit for motion analysis. The motion analysis can be considered as a so-called "inverse problem" where the physical model is inferred to estimate the kinematical parameters of interest. The estimation of the kinematical parameters is performed by a gradient algorithm. The gradient algorithm extends to the trajectory determination. Trajectory computation is related to a Lagrangian-Hamiltonian formulation and fits into a neuro-dynamic programming approach that can be implemented in the form of a Q-learning algorithm. Applications relevant for this problem can be found in medical imaging, Earth science, the military, and neurophysiology.

  4. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    NASA Technical Reports Server (NTRS)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters that attempts to minimize execution time, while staying within resource constraints. The flexibility of using a custom reconfigurable implementation is exploited in a unique manner to leverage the lessons learned in vector supercomputer development. The vector processing framework is tailored to the application, with variable parameters that are fixed in traditional vector processing. Benchmark data that demonstrates the functionality and utility of the approach is presented. The benchmark data includes an identified bottleneck in a real case study example vector code, the NASA Langley Terminal Area Simulation System (TASS) application.

  5. GWM-a ground-water management process for the U.S. Geological Survey modular ground-water model (MODFLOW-2000)

    USGS Publications Warehouse

    Ahlfeld, David P.; Barlow, Paul M.; Mulligan, Anne E.

    2005-01-01

    GWM is a Ground-Water Management Process for the U.S. Geological Survey modular three-dimensional ground-water model, MODFLOW-2000. GWM uses a response-matrix approach to solve several types of linear, nonlinear, and mixed-binary linear ground-water management formulations. Each management formulation consists of a set of decision variables, an objective function, and a set of constraints. Three types of decision variables are supported by GWM: flow-rate decision variables, which are withdrawal or injection rates at well sites; external decision variables, which are sources or sinks of water that are external to the flow model and do not directly affect the state variables of the simulated ground-water system (heads, streamflows, and so forth); and binary variables, which have values of 0 or 1 and are used to define the status of flow-rate or external decision variables. Flow-rate decision variables can represent wells that extend over one or more model cells and be active during one or more model stress periods; external variables also can be active during one or more stress periods. A single objective function is supported by GWM, which can be specified to either minimize or maximize the weighted sum of the three types of decision variables. Four types of constraints can be specified in a GWM formulation: upper and lower bounds on the flow-rate and external decision variables; linear summations of the three types of decision variables; hydraulic-head based constraints, including drawdowns, head differences, and head gradients; and streamflow and streamflow-depletion constraints. The Response Matrix Solution (RMS) Package of GWM uses the Ground-Water Flow Process of MODFLOW to calculate the change in head at each constraint location that results from a perturbation of a flow-rate variable; these changes are used to calculate the response coefficients. For linear management formulations, the resulting matrix of response coefficients is then combined with other components of the linear management formulation to form a complete linear formulation; the formulation is then solved by use of the simplex algorithm, which is incorporated into the RMS Package. Nonlinear formulations arise for simulated conditions that include water-table (unconfined) aquifers or head-dependent boundary conditions (such as streams, drains, or evapotranspiration from the water table). Nonlinear formulations are solved by sequential linear programming; that is, repeated linearization of the nonlinear features of the management problem. In this approach, response coefficients are recalculated for each iteration of the solution process. Mixed-binary linear (or mildly nonlinear) formulations are solved by use of the branch and bound algorithm, which is also incorporated into the RMS Package. Three sample problems are provided to demonstrate the use of GWM for typical ground-water flow management problems. These sample problems provide examples of how GWM input files are constructed to specify the decision variables, objective function, constraints, and solution process for a GWM run. The GWM Process runs with the MODFLOW-2000 Global and Ground-Water Flow Processes, but in its current form GWM cannot be used with the Observation, Sensitivity, Parameter-Estimation, or Ground-Water Transport Processes. The GWM Process is written with a modular structure so that new objective functions, constraint types, and solution algorithms can be added.
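
    For a linear formulation, the response-matrix idea reduces to a small linear program once the response coefficients are known; the sketch below uses made-up coefficients (not a MODFLOW run) and SciPy's linprog in place of the simplex code built into the RMS Package.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Drawdown response coefficients at one constraint cell (m of drawdown per m^3/d
    # of withdrawal from each of two wells), obtained here by assumption rather than
    # by perturbing a flow model.
    response = np.array([[0.004, 0.007]])
    max_drawdown = np.array([3.0])               # m, head-based constraint
    bounds = [(0.0, 500.0), (0.0, 500.0)]        # per-well capacity (m^3/d)

    # linprog minimizes, so negate the objective to maximize total withdrawal q1 + q2.
    res = linprog(c=[-1.0, -1.0], A_ub=response, b_ub=max_drawdown, bounds=bounds)
    print("optimal rates:", res.x, "total withdrawal:", -res.fun)
    ```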

  6. An optimal autonomous microgrid cluster based on distributed generation droop parameter optimization and renewable energy sources using an improved grey wolf optimizer

    NASA Astrophysics Data System (ADS)

    Moazami Goodarzi, Hamed; Kazemi, Mohammad Hosein

    2018-05-01

    Microgrid (MG) clustering is regarded as an important driver in improving the robustness of MGs. However, little research has been conducted on providing appropriate MG clustering. This article addresses this shortfall. It proposes a novel multi-objective optimization approach for finding optimal clustering of autonomous MGs by focusing on variables such as distributed generation (DG) droop parameters, the location and capacity of DG units, renewable energy sources, capacitors and powerline transmission. Power losses are minimized and voltage stability is improved while virtual cut-set lines with minimum power transmission for clustering MGs are obtained. A novel chaotic grey wolf optimizer (CGWO) algorithm is applied to solve the proposed multi-objective problem. The performance of the approach is evaluated by utilizing a 69-bus MG in several scenarios.

  7. The mixing length parameter alpha. [in stellar structure calculations

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.

    1990-01-01

    The standard mixing length theory, MLT, treats turbulent eddies as if they were isotropic, while the largest eddies that carry most of the flux are highly anisotropic. Recently, an anisotropic MLT was constructed, and the relevant equations derived. It is shown that these new equations can actually be cast in a form that is formally identical to that of the standard isotropic MLT, provided the mixing length parameter, derived from stellar structure calculations, is interpreted as an intermediate, auxiliary function alpha(x), where x, the degree of anisotropy, is given as a function of the thermodynamic variables of the problem. The relation between alpha(x) and the physically relevant alpha(l = Hp) is also given. Once the value of alpha is deduced, it is found to be a function of the local thermodynamic quantities, as expected.

  8. Quantitative analysis of spatial variability of geotechnical parameters

    NASA Astrophysics Data System (ADS)

    Fang, Xing

    2018-04-01

    Geotechnical parameters are the basic parameters of geotechnical engineering design, and they have strong regional characteristics. The spatial variability of geotechnical parameters has also been recognized and is gradually being introduced into the reliability analysis of geotechnical engineering. Based on the statistical theory of geostatistical spatial information, the spatial variability of geotechnical parameters is quantitatively analyzed, and the evaluation of the geotechnical parameters and the correlation coefficients between them are calculated. A residential district from the Tianjin Survey Institute was selected as the research object. There are 68 boreholes in this area and 9 layers of mechanical stratification. The parameters are water content, natural gravity, void ratio, liquid limit, plasticity index, liquidity index, compressibility coefficient, compressive modulus, internal friction angle, cohesion and SP index. According to the principle of statistical correlation, the correlation coefficients of the geotechnical parameters are calculated, and from these coefficients the correlation law of the geotechnical parameters is obtained.

  9. Probabilistic Aeroelastic Analysis Developed for Turbomachinery Components

    NASA Technical Reports Server (NTRS)

    Reddy, T. S. R.; Mital, Subodh K.; Stefko, George L.; Pai, Shantaram S.

    2003-01-01

    Aeroelastic analyses for advanced turbomachines are being developed for use at the NASA Glenn Research Center and industry. However, these analyses at present are used for turbomachinery design with uncertainties accounted for by using safety factors. This approach may lead to overly conservative designs, thereby reducing the potential of designing higher efficiency engines. An integration of the deterministic aeroelastic analysis methods with probabilistic analysis methods offers the potential to design efficient engines with fewer aeroelastic problems and to make a quantum leap toward designing safe reliable engines. In this research, probabilistic analysis is integrated with aeroelastic analysis: (1) to determine the parameters that most affect the aeroelastic characteristics (forced response and stability) of a turbomachine component such as a fan, compressor, or turbine and (2) to give the acceptable standard deviation on the design parameters for an aeroelastically stable system. The approach taken is to combine the aeroelastic analysis of the MISER (MIStuned Engine Response) code with the FPI (fast probability integration) code. The role of MISER is to provide the functional relationships that tie the structural and aerodynamic parameters (the primitive variables) to the forced response amplitudes and stability eigenvalues (the response properties). The role of FPI is to perform probabilistic analyses by utilizing the response properties generated by MISER. The results are a probability density function for the response properties. The probabilistic sensitivities of the response variables to uncertainty in primitive variables are obtained as a byproduct of the FPI technique. The combined analysis of aeroelastic and probabilistic analysis is applied to a 12-bladed cascade vibrating in bending and torsion. Out of the total 11 design parameters, 6 are considered as having probabilistic variation. The six parameters are space-to-chord ratio (SBYC), stagger angle (GAMA), elastic axis (ELAXS), Mach number (MACH), mass ratio (MASSR), and frequency ratio (WHWB). The cascade is considered to be in subsonic flow with Mach 0.7. The results of the probabilistic aeroelastic analysis are the probability density function of predicted aerodynamic damping and frequency for flutter and the response amplitudes for forced response.

  10. A coupled electro-thermal Discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Homsi, L.; Geuzaine, C.; Noels, L.

    2017-11-01

    This paper presents a Discontinuous Galerkin scheme in order to solve the nonlinear elliptic partial differential equations of coupled electro-thermal problems. In this paper we discuss the fundamental equations for the transport of electricity and heat, in terms of macroscopic variables such as temperature and electric potential. A fully coupled nonlinear weak formulation for electro-thermal problems is developed based on continuum mechanics equations expressed in terms of energetically conjugated pair of fluxes and fields gradients. The weak form can thus be formulated as a Discontinuous Galerkin method. The existence and uniqueness of the weak form solution are proved. The numerical properties of the nonlinear elliptic problems i.e., consistency and stability, are demonstrated under specific conditions, i.e. use of high enough stabilization parameter and at least quadratic polynomial approximations. Moreover the prior error estimates in the H1-norm and in the L2-norm are shown to be optimal in the mesh size with the polynomial approximation degree.

  11. Optimizing Constrained Single Period Problem under Random Fuzzy Demand

    NASA Astrophysics Data System (ADS)

    Taleizadeh, Ata Allah; Shavandi, Hassan; Riazi, Afshin

    2008-09-01

    In this paper, we consider the multi-product multi-constraint newsboy problem with random fuzzy demands and total discount. The demand of the products is often stochastic in the real word but the estimation of the parameters of distribution function may be done by fuzzy manner. So an appropriate option to modeling the demand of products is using the random fuzzy variable. The objective function of proposed model is to maximize the expected profit of newsboy. We consider the constraints such as warehouse space and restriction on quantity order for products, and restriction on budget. We also consider the batch size for products order. Finally we introduce a random fuzzy multi-product multi-constraint newsboy problem (RFM-PM-CNP) and it is changed to a multi-objective mixed integer nonlinear programming model. Furthermore, a hybrid intelligent algorithm based on genetic algorithm, Pareto and TOPSIS is presented for the developed model. Finally an illustrative example is presented to show the performance of the developed model and algorithm.

  12. A prediction model to forecast the cost impact from a break in the production schedule

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1977-01-01

    The losses which are experienced after a break or stoppage in sequence of a production cycle portends an extremely complex situation and involves numerous variables, some of uncertain quantity and quality. There are no discrete formulas to define the losses during a gap in production. The techniques which are employed are therefore related to a prediction or forecast of the losses that take place, based on the conditions which exist in the production environment. Such parameters as learning curve slope, number of predecessor units, and length of time the production sequence is halted are utilized in formulating a prediction model. The pertinent current publications related to this subject are few in number, but are reviewed to provide an understanding of the problem. Example problems are illustrated together with appropriate trend curves to show the approach. Solved problems are also given to show the application of the models to actual cases or production breaks in the real world.

  13. Integration of the Response Surface Methodology with the Compromise Decision Support Problem in Developing a General Robust Design Procedure

    NASA Technical Reports Server (NTRS)

    Chen, Wei; Tsui, Kwok-Leung; Allen, Janet K.; Mistree, Farrokh

    1994-01-01

    In this paper we introduce a comprehensive and rigorous robust design procedure to overcome some limitations of the current approaches. A comprehensive approach is general enough to model the two major types of robust design applications, namely, robust design associated with the minimization of the deviation of performance caused by the deviation of noise factors (uncontrollable parameters), and robust design due to the minimization of the deviation of performance caused by the deviation of control factors (design variables). We achieve mathematical rigor by using, as a foundation, principles from the design of experiments and optimization. Specifically, we integrate the Response Surface Method (RSM) with the compromise Decision Support Problem (DSP). Our approach is especially useful for design problems where there are no closed-form solutions and system performance is computationally expensive to evaluate. The design of a solar powered irrigation system is used as an example. Our focus in this paper is on illustrating our approach rather than on the results per se.

  14. Multi-Objective Hybrid Optimal Control for Multiple-Flyby Interplanetary Mission Design Using Chemical Propulsion

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.

    2015-01-01

    Preliminary design of high-thrust interplanetary missions is a highly complex process. The mission designer must choose discrete parameters such as the number of flybys and the bodies at which those flybys are performed. For some missions, such as surveys of small bodies, the mission designer also contributes to target selection. In addition, real-valued decision variables, such as launch epoch, flight times, maneuver and flyby epochs, and flyby altitudes must be chosen. There are often many thousands of possible trajectories to be evaluated. The customer who commissions a trajectory design is not usually interested in a point solution, but rather the exploration of the trade space of trajectories between several different objective functions. This can be a very expensive process in terms of the number of human analyst hours required. An automated approach is therefore very desirable. This work presents such an approach by posing the impulsive mission design problem as a multiobjective hybrid optimal control problem. The method is demonstrated on several real-world problems.

  15. Advanced development of the boundary element method for elastic and inelastic thermal stress analysis. Ph.D. Thesis, 1987 Final Report

    NASA Technical Reports Server (NTRS)

    Henry, Donald P., Jr.

    1991-01-01

    The focus of this dissertation is on advanced development of the boundary element method for elastic and inelastic thermal stress analysis. New formulations for the treatment of body forces and nonlinear effects are derived. These formulations, which are based on particular integral theory, eliminate the need for volume integrals or extra surface integrals to account for these effects. The formulations are presented for axisymmetric, two and three dimensional analysis. Also in this dissertation, two dimensional and axisymmetric formulations for elastic and inelastic, inhomogeneous stress analysis are introduced. The derivatives account for inhomogeneities due to spatially dependent material parameters, and thermally induced inhomogeneities. The nonlinear formulation of the present work are based on an incremental initial stress approach. Two inelastic solutions algorithms are implemented: an iterative; and a variable stiffness type approach. The Von Mises yield criterion with variable hardening and the associated flow rules are adopted in these algorithms. All formulations are implemented in a general purpose, multi-region computer code with the capability of local definition of boundary conditions. Quadratic, isoparametric shape functions are used to model the geometry and field variables of the boundary (and domain) of the problem. The multi-region implementation permits a body to be modeled in substructured parts, thus dramatically reducing the cost of analysis. Furthermore, it allows a body consisting of regions of different (homogeneous) material to be studied. To test the program, results obtained for simple test cases are checked against their analytic solutions. Thereafter, a range of problems of practical interest are analyzed. In addition to displacement and traction loads, problems with body forces due to self-weight, centrifugal, and thermal loads are considered.

  16. Occupational Stress Among Male Employees of Esfahan Steel Company, Iran: Prevalence and Associated Factors

    PubMed Central

    Lotfizadeh, Masoud; Moazen, Babak; Habibi, Ehsan; Hassim, Noor

    2013-01-01

    Background: Lack of data on occupational stress among Iranian industrial employees persuaded us to design and conduct this study to evaluate the prevalence and associated parameters of occupational stress among male employees of the Esfahan Steel Company (ESCO), one of the biggest industrial units in Iran. Methods: In this cross-sectional study, 400 male employees were sampled from the operational divisions of the company. Socio-demographic data and stress-related variables were entered into a logistic regression to determine significant associated factors of occupational stress among the participants. Results: From all samples, 53% were found as stressful. A monthly salary of less than $600 (OR = 1.88, 95% confidence interval [CI] = 1.21-2.94), family-related problems (OR = 2.75, 95% CI = 1.22-6.21), work environment (OR = 3.09, 95% CI = 1.78-5.33) and having a second job (OR = 2.68, 95% CI = 1.78-6.78) were significantly associated with the outcome. Conclusions: Attention to some variables, especially economic problems and the work environment of employees, might play a protective role against the prevalence of occupational stress, not only among the employees of ESCO but also among all industrial employees in Iran. PMID:24049599

  17. Parallel methodology to capture cyclic variability in motored engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin M.; Yang, Xiaofeng; Kuo, Tang-Wei

    2016-07-28

    Numerical prediction of of cycle-to-cycle variability (CCV) in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are require to accurately capture the in-cylinder turbulent flowfield, and (ii) CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. In this study, a new methodology is proposed to dissociate this long time-scale problem into several shorter time-scale problems, which can considerably reduce the computational time without sacrificing the fidelity of the simulations. The strategy is to perform multiple single-cycle simulations in parallel bymore » effectively perturbing the simulation parameters such as the initial and boundary conditions. It is shown that by perturbing the initial velocity field effectively based on the intensity of the in-cylinder turbulence, the mean and variance of the in-cylinder flowfield is captured reasonably well. Adding perturbations in the initial pressure field and the boundary pressure improves the predictions. It is shown that this new approach is able to give accurate predictions of the flowfield statistics in less than one-tenth of time required for the conventional approach of simulating consecutive engine cycles.« less

  18. Smoothed Biasing Forces Yield Unbiased Free Energies with the Extended-System Adaptive Biasing Force Method

    PubMed Central

    2016-01-01

    We report a theoretical description and numerical tests of the extended-system adaptive biasing force method (eABF), together with an unbiased estimator of the free energy surface from eABF dynamics. Whereas the original ABF approach uses its running estimate of the free energy gradient as the adaptive biasing force, eABF is built on the idea that the exact free energy gradient is not necessary for efficient exploration, and that it is still possible to recover the exact free energy separately with an appropriate estimator. eABF does not directly bias the collective coordinates of interest, but rather fictitious variables that are harmonically coupled to them; therefore is does not require second derivative estimates, making it easily applicable to a wider range of problems than ABF. Furthermore, the extended variables present a smoother, coarse-grain-like sampling problem on a mollified free energy surface, leading to faster exploration and convergence. We also introduce CZAR, a simple, unbiased free energy estimator from eABF trajectories. eABF/CZAR converges to the physical free energy surface faster than standard ABF for a wide range of parameters. PMID:27959559

  19. Dynamical Behavior of a Malaria Model with Discrete Delay and Optimal Insecticide Control

    NASA Astrophysics Data System (ADS)

    Kar, Tuhin Kumar; Jana, Soovoojeet

    In this paper we have proposed and analyzed a simple three-dimensional mathematical model related to malaria disease. We consider three state variables associated with susceptible human population, infected human population and infected mosquitoes, respectively. A discrete delay parameter has been incorporated to take account of the time of incubation period with infected mosquitoes. We consider the effect of insecticide control, which is applied to the mosquitoes. Basic reproduction number is figured out for the proposed model and it is shown that when this threshold is less than unity then the system moves to the disease-free state whereas for higher values other than unity, the system would tend to an endemic state. On the other hand if we consider the system with delay, then there may exist some cases where the endemic equilibrium would be unstable although the numerical value of basic reproduction number may be greater than one. We formulate and solve the optimal control problem by considering insecticide as the control variable. Optimal control problem assures to obtain better result than the noncontrol situation. Numerical illustrations are provided in support of the theoretical results.

  20. An exponential decay model for mediation.

    PubMed

    Fritz, Matthew S

    2014-10-01

    Mediation analysis is often used to investigate mechanisms of change in prevention research. Results finding mediation are strengthened when longitudinal data are used because of the need for temporal precedence. Current longitudinal mediation models have focused mainly on linear change, but many variables in prevention change nonlinearly across time. The most common solution to nonlinearity is to add a quadratic term to the linear model, but this can lead to the use of the quadratic function to explain all nonlinearity, regardless of theory and the characteristics of the variables in the model. The current study describes the problems that arise when quadratic functions are used to describe all nonlinearity and how the use of nonlinear functions, such as exponential decay, address many of these problems. In addition, nonlinear models provide several advantages over polynomial models including usefulness of parameters, parsimony, and generalizability. The effects of using nonlinear functions for mediation analysis are then discussed and a nonlinear growth curve model for mediation is presented. An empirical example using data from a randomized intervention study is then provided to illustrate the estimation and interpretation of the model. Implications, limitations, and future directions are also discussed.

Top