System and method for motor parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters of the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
Hu, Jin; Zeng, Chunna
2017-02-01
The complex-valued Cohen-Grossberg neural network is a special kind of complex-valued neural network. In this paper, the synchronization problem of a class of complex-valued Cohen-Grossberg neural networks with known and unknown parameters is investigated. By using Lyapunov functionals and the adaptive control method based on parameter identification, some adaptive feedback schemes are proposed to achieve synchronization exponentially between the drive and response systems. The results obtained in this paper have extended and improved some previous works on adaptive synchronization of Cohen-Grossberg neural networks. Finally, two numerical examples are given to demonstrate the effectiveness of the theoretical results.
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2006-01-01
The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups, where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications, one per group, are run, treating the other unknown parameters appearing in each regression equation as if they were known perfectly, with their values provided by the recursive least squares estimates from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.
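The group-wise scheme above can be illustrated with a toy bilinear model, y = a·u + a·b·v, in which each parameter group is linear once the other is held fixed. The sketch below is a batch alternating least-squares illustration of the idea with invented data, not the patented recursive algorithm itself:

```python
# Toy bilinear model: y = a*u + a*b*v. Given b, the model is linear in a;
# given a, the residual is linear in b. Solve each group in turn, treating
# the other group's current estimate as if it were known perfectly.
u = [1.0, 1.0, 1.0, 1.0]
v = [1.0, -1.0, 1.0, -1.0]           # chosen orthogonal to u for fast convergence
a_true, b_true = 2.0, 0.5
y = [a_true * ui + a_true * b_true * vi for ui, vi in zip(u, v)]

def dot(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

a_hat, b_hat = 1.0, 0.0               # arbitrary initial guesses
for _ in range(10):
    # Group 1: regress y on (u + b_hat*v), which is linear in a
    r = [ui + b_hat * vi for ui, vi in zip(u, v)]
    a_hat = dot(y, r) / dot(r, r)
    # Group 2: regress the residual (y - a_hat*u) on (a_hat*v), linear in b
    s = [a_hat * vi for vi in v]
    resid = [yi - a_hat * ui for yi, ui in zip(y, u)]
    b_hat = dot(resid, s) / dot(s, s)
```

With noiseless data the alternation settles on the true values (a, b) = (2, 0.5); a recursive formulation would update the same per-group solves sample by sample.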
NASA Astrophysics Data System (ADS)
Xu, Peiliang
2018-06-01
The numerical integration method has been routinely used by major institutions worldwide, for example, NASA Goddard Space Flight Center and the German Research Center for Geosciences (GFZ), to produce global gravitational models from satellite tracking measurements of CHAMP and/or GRACE types. Such Earth gravitational products have found the widest possible multidisciplinary applications in the Earth sciences. The method is essentially implemented by solving the differential equations of the partial derivatives of the orbit of a satellite with respect to the unknown harmonic coefficients under the condition of zero initial values. From the mathematical and statistical point of view, satellite gravimetry from satellite tracking is essentially the problem of estimating unknown parameters in Newton's nonlinear differential equations from satellite tracking measurements. We prove that zero initial values for the partial derivatives are incorrect mathematically and not permitted physically. The numerical integration method, as currently implemented and used in mathematics and statistics, chemistry and physics, and satellite gravimetry, is groundless, mathematically and physically. Given Newton's nonlinear governing differential equations of satellite motion with unknown equation parameters and unknown initial conditions, we develop three methods to derive new local solutions around a nominal reference orbit, which are linked to measurements to estimate the unknown corrections to approximate values of the unknown parameters and the unknown initial conditions. Bearing in mind that satellite orbits can now be tracked almost continuously at unprecedented accuracy, we propose the measurement-based perturbation theory and derive globally uniformly convergent solutions to Newton's nonlinear governing differential equations of satellite motion for the next generation of global gravitational models.
Since the solutions are globally uniformly convergent, theoretically speaking, they are able to extract the smallest possible gravitational signals from modern and future satellite tracking measurements, leading to the production of global high-precision, high-resolution gravitational models. By directly turning the nonlinear differential equations of satellite motion into nonlinear integral equations, and recognizing the fact that satellite orbits are measured with random errors, we further reformulate the links between satellite tracking measurements and the globally uniformly convergent solutions to Newton's governing differential equations as a condition adjustment model with unknown parameters or, equivalently, the weighted least squares estimation of unknown differential equation parameters with equality constraints, for the reconstruction of global high-precision, high-resolution gravitational models from modern (and future) satellite tracking measurements.
Su, Fei; Wang, Jiang; Deng, Bin; Wei, Xi-Le; Chen, Ying-Yuan; Liu, Chen; Li, Hui-Yan
2015-02-01
The objective here is to explore the use of an adaptive input-output feedback linearization method to achieve an improved deep brain stimulation (DBS) algorithm for closed-loop control of the Parkinsonian state. The control law is based on a highly nonlinear computational model of Parkinson's disease (PD) with unknown parameters. The restoration of thalamic relay reliability is formulated as the desired outcome of the adaptive control methodology, and the DBS waveform is the control input. The control input is adjusted in real time according to estimates of the unknown parameters as well as the feedback signal. Simulation results show that the proposed adaptive control algorithm succeeds in restoring the relay reliability of the thalamus, and at the same time achieves accurate estimation of the unknown parameters. Our findings point to the potential value of the adaptive control approach for regulating the DBS waveform in more effective treatment of PD.
Bayesian statistics and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Koch, K. R.
2018-03-01
The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived in which the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable number of derivatives that would otherwise have to be computed, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
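The nonlinear error-propagation idea above fits in a few lines: draw variates of the random input, push them through the nonlinear function, and read the expectation and variance off the samples, with no derivatives or linearization. A minimal sketch with an invented example f(x) = x², whose exact expectation under x ~ N(μ, σ²) is μ² + σ²:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.1
x = rng.normal(mu, sigma, size=200_000)   # variates from the given distribution
y = x ** 2                                # nonlinear transformation, sample by sample
mc_mean, mc_var = y.mean(), y.var()       # Monte Carlo expectation and variance

exact_mean = mu**2 + sigma**2             # analytic E[x^2], for comparison only
```

The Monte Carlo mean agrees with the analytic value to within sampling error, and the same recipe extends unchanged to vectors and full covariance matrices.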
Parameter identification of thermophilic anaerobic degradation of valerate.
Flotats, Xavier; Ahring, Birgitte K; Angelidaki, Irini
2003-01-01
The considered mathematical model of the decomposition of valerate presents three unknown kinetic parameters, two unknown stoichiometric coefficients, and three unknown initial biomass concentrations. Applying a structural identifiability study, we concluded that it is necessary to perform simultaneous batch experiments with different initial conditions to estimate these parameters. Four simultaneous batch experiments were conducted at 55 °C, characterized by four different initial acetate concentrations. Product inhibition of valerate degradation by acetate was considered. Practical identification was done by optimizing the sum of the multiple determination coefficients over all measured state variables and all experiments simultaneously. The estimated values of the kinetic parameters and stoichiometric coefficients were characterized by the parameter correlation matrix, the confidence interval, and Student's t-test at the 5% significance level, with positive results except for the saturation constant, for which further experiments should be conducted to improve its identifiability. In this article, we discuss kinetic parameter estimation methods.
The Use of One-Sample Prediction Intervals for Estimating CO2 Scrubber Canister Durations
2012-10-01
…Grade and 812 D-Grade Sofnolime. Definitions: According to Devore, a CI (confidence interval) refers to a parameter, or population characteristic, whose value is fixed but unknown to us. In contrast, a future value of Y is not a parameter but instead a random variable; for this …
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.
2013-12-01
In this study, an efficient full Bayesian approach is developed for optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength, and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and in the identification of unknown contaminant sources. [Figure captions: contours of the expected information gain, with the optimal observing location at the maximum; posterior marginal probability densities of the unknown parameters, comparing the designed location (thick solid black lines) with seven randomly chosen locations, true values marked by vertical lines, showing that the unknown parameters are estimated better at the designed location.]
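The design criterion, pick the location with the largest expected information gain, can be made concrete on a deliberately tiny stand-in problem (not the transport model of the paper): a binary source parameter θ ∈ {0, 1} and candidate sampling locations that differ only in how strongly the measurement separates the two hypotheses. The expected gain is then the mutual information between θ and the noisy measurement, computed here by numerical integration; the hypothetical separations 2.0 and 0.5 are invented for illustration:

```python
import math

def expected_info_gain(delta, noise_sd=1.0):
    """Mutual information I(theta; y) in nats for y | theta ~ N(theta*delta, noise_sd^2),
    theta uniform on {0, 1}, by trapezoid-free Riemann integration on a fine grid."""
    def pdf(y, m):
        return math.exp(-0.5 * ((y - m) / noise_sd) ** 2) / (noise_sd * math.sqrt(2 * math.pi))
    n, lo, hi = 4000, -8.0, 8.0
    dy = (hi - lo) / n
    mi = 0.0
    for i in range(n + 1):
        y = lo + i * dy
        marg = 0.5 * (pdf(y, 0.0) + pdf(y, delta))     # prior-predictive density
        for theta in (0.0, 1.0):
            lik = pdf(y, theta * delta)
            if lik > 0.0 and marg > 0.0:
                mi += 0.5 * lik * math.log(lik / marg) * dy
    return mi

gain_far = expected_info_gain(2.0)    # candidate location with strong separation
gain_near = expected_info_gain(0.5)   # candidate location with weak separation
```

The location with the stronger separation yields the larger expected gain (bounded above by the prior entropy ln 2), so the design rule would select it; the paper's contribution is evaluating the same criterion for a nonlinear transport model via a sparse-grid surrogate.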
Tune-stabilized, non-scaling, fixed-field, alternating gradient accelerator
Johnstone, Carol J [Warrenville, IL
2011-02-01
An FFAG is a particle accelerator having turning magnets with a linear field gradient for confinement and a large edge angle to compensate for acceleration. FODO cells contain focus magnets and defocus magnets that are specified by a number of parameters. A set of seven equations, called the FFAG equations, relates the parameters to one another. A set of constraints, called the FFAG constraints, constrains the FFAG equations. Selecting a few parameters, such as injection momentum, extraction momentum, and drift distance, reduces the number of unknown parameters to seven. Seven equations with seven unknowns can be solved to yield the values of all the parameters and thereby fully specify an FFAG.
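The closing claim, that seven equations in seven unknowns determine all the parameters, is a square nonlinear system. The FFAG equations themselves are not reproduced in this abstract, so the sketch below shows the pattern on a hypothetical stand-in 2×2 system (x² + y² = 4, xy = 1) solved by Newton iteration; a 7×7 system is handled identically with a 7×7 Jacobian:

```python
import numpy as np

def F(p):
    # Stand-in square nonlinear system, not the actual FFAG equations
    x, y = p
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def J(p):
    # Jacobian of F with respect to the unknowns
    x, y = p
    return np.array([[2 * x, 2 * y], [y, x]])

p = np.array([2.0, 0.5])                   # rough initial guess
for _ in range(20):
    p = p - np.linalg.solve(J(p), F(p))    # Newton step

x, y = p
```

From a reasonable starting point the iteration converges quadratically; in practice the chosen parameters (injection momentum, extraction momentum, drift distance) fix the constants inside F.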
Method and apparatus for sensor fusion
NASA Technical Reports Server (NTRS)
Krishen, Kumar (Inventor); Shaw, Scott (Inventor); Defigueiredo, Rui J. P. (Inventor)
1991-01-01
A method and apparatus for the fusion of data from optical and radar sensors by an error minimization procedure is presented. The method was applied to the problem of shape reconstruction of an unknown surface at a distance. It involves deriving an incomplete surface model from an optical sensor. The unknown characteristics of the surface are represented by some parameter. The correct value of the parameter is computed by iteratively generating theoretical predictions of the radar cross sections (RCS) of the surface, comparing the predicted and observed values of the RCS, and improving the surface model from the results of the comparison. The theoretical RCS may be computed from the surface model in several ways. One RCS prediction technique is the method of moments, which can be applied to an unknown surface only if some shape information is available from an independent source. The optical image provides that independent information.
NASA Astrophysics Data System (ADS)
Roozegar, Mehdi; Mahjoob, Mohammad J.; Ayati, Moosa
2017-05-01
This paper deals with adaptive estimation of the unknown parameters and states of a pendulum-driven spherical robot (PDSR), which is a nonlinear-in-parameters (NLP) chaotic system with parametric uncertainties. Firstly, the mathematical model of the robot is deduced by applying the Newton-Euler methodology for a system of rigid bodies. Then, based on the speed gradient (SG) algorithm, the states and unknown parameters of the robot are estimated online for different step length gains and initial conditions. The estimated parameters are updated adaptively according to the error between the estimated and true state values. Since the errors of the estimated states and parameters, as well as the convergence rates, depend significantly on the value of the step length gain, this gain should be chosen optimally. Hence, a heuristic fuzzy logic controller is employed to adjust the gain adaptively. Simulation results indicate that the proposed approach is highly encouraging for identification of this NLP chaotic system even if the initial conditions change and the uncertainties increase; therefore, it is reliable enough to be implemented on a real robot.
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.
2008-05-15
We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC-based stochastic method with an iterative Gauss-Newton-based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton-based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values, and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC-based inversion method provides extensive global information on the unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC-based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using these means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
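The MCMC side of the comparison can be sketched on a one-parameter toy posterior rather than the Cole-Cole model: random-walk Metropolis sampling of the mean of Gaussian data under a flat prior, for which the posterior mean equals the sample mean, so the chain can be checked against a known answer. The data, noise level, and proposal width below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.0, size=50)          # synthetic measurements, unit noise sd

def log_post(mu):
    # Flat prior, Gaussian likelihood with known unit variance
    return -0.5 * np.sum((data - mu) ** 2)

chain, mu = [], 0.0                            # arbitrary starting value
lp = log_post(mu)
for _ in range(20_000):
    prop = mu + rng.normal(0.0, 0.3)           # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject step
        mu, lp = prop, lp_prop
    chain.append(mu)

posterior_mean = float(np.mean(chain[2000:]))  # discard burn-in, average the rest
```

The chain delivers not just a point estimate but the whole marginal distribution (histogram of `chain`), which is exactly the "extensive global information" the abstract credits to the stochastic method; the result does not depend on the starting value.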
Unknown loads affect force production capacity in early phases of bench press throws.
Hernández Davó, J L; Sabido Solana, R; Sarabia Marínm, J M; Sánchez Martos, Á; Moya Ramón, M
2015-10-01
Explosive strength training aims to improve force generation in the early phases of movement because of its importance in sport performance. The present study examined the influence of a lack of knowledge of the load lifted on explosive parameters during bench press throws. Thirteen healthy young men (22.8±2.0 years) participated in the study. Participants performed bench press throws with three different loads (30, 50 and 70% of 1 repetition maximum) in two different conditions (known and unknown loads). In the unknown condition, the load was changed within sets in each repetition and participants did not know the load, whereas in the known condition the load did not change within sets and participants knew the load lifted. Results of repeated-measures ANOVA revealed that the unknown condition involved higher power in the first 30, 50, 100 and 150 ms with all three loads, higher values of the rate of force development in those first instants, and differences in the time to reach the maximal rate of force development with 50 and 70% of 1 repetition maximum. This study showed that unknown conditions elicit higher values of explosive parameters in the early phases of bench press throws; therefore, this kind of methodology could be considered in explosive strength training.
The Routine Fitting of Kinetic Data to Models
Berman, Mones; Shahn, Ezra; Weiss, Marjory F.
1962-01-01
A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible.
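The iterative least-squares adjustment such a program performs can be sketched with a modern stand-in: Gauss-Newton refinement of the parameters of a single exponential model y = A·e^(−kt), a common kinetic building block. The data, starting values, and iteration count below are invented for illustration:

```python
import numpy as np

t = np.linspace(0.0, 4.0, 20)
A_true, k_true = 2.0, 0.5
y = A_true * np.exp(-k_true * t)          # noiseless synthetic kinetic data

A, k = 1.0, 0.2                           # initial estimates of the parameters
for _ in range(30):
    model = A * np.exp(-k * t)
    resid = y - model
    # Jacobian of the model with respect to (A, k)
    Jac = np.column_stack([np.exp(-k * t), -A * t * np.exp(-k * t)])
    step, *_ = np.linalg.lstsq(Jac, resid, rcond=None)
    A, k = A + step[0], k + step[1]       # Gauss-Newton parameter update
```

Each pass linearizes the model around the current estimates and solves a linear least-squares problem for the correction, which is exactly the "iterative adjustment of the values of the parameters" the formalism automates.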
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kato, Go
We consider the situation where s replicas of a qubit in an unknown state and k replicas of its orthogonal state are given as input, and we try to make c clones of the qubit in the unknown state. As a function of s, k, and c, we obtain the optimal fidelity between the qubit in the unknown state and a clone by explicitly giving a completely positive trace-preserving (CPTP) map that represents the cloning machine. We discuss the dependency of the fidelity on the values of the parameters s, k, and c.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Jaromy; Sun Zaijing; Wells, Doug
2009-03-10
Photon activation analysis detected elements in two NIST standards that did not have reported concentration values. A method is currently being developed to infer these concentrations by using scaling parameters and the appropriate known quantities within the NIST standard itself. Scaling parameters include: threshold, peak, and endpoint energies; photo-nuclear cross sections for specific isotopes; the Bremsstrahlung spectrum; target thickness; and photon flux. Photo-nuclear cross sections and energies for the unknown elements must also be known. With these quantities, the same integral was performed for both the known and unknown elements, resulting in an inference of the concentration of the unreported element based on the reported value. Since Rb and Mn were elements that were reported in the standards, and because they had well-identified peaks, they were used as the standards of inference to determine concentrations of the unreported elements As, I, Nb, Y, and Zr. This method was tested by choosing other known elements within the standards and inferring a value based on the stated procedure. The reported value of Mn in the first NIST standard was 403±15 ppm and the reported value of Ca in the second NIST standard was 87000 ppm (no reported uncertainty). The inferred concentrations were 370±23 ppm and 80200±8700 ppm, respectively.
Improved mapping of radio sources from VLBI data by least-squares fit
NASA Technical Reports Server (NTRS)
Rodemich, E. R.
1985-01-01
A method is described for producing improved maps of radio sources from Very Long Baseline Interferometry (VLBI) data. The method is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data are modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. Using the rms deviation to measure the closeness of this fit to the observed values leads to the problem of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods, which are shown to converge automatically to the minimum with no user intervention. The resulting brightness distribution furnishes the best fit to the data among all brightness distributions of the given resolution.
McNeilly, Clyde E.
1977-01-04
A device is provided for automatically selecting from a plurality of ranges of a scale of values to which a meter may be made responsive, that range which encompasses the value of an unknown parameter. A meter relay indicates whether the unknown is of greater or lesser value than the range to which the meter is then responsive. The rotatable part of a stepping relay is rotated in one direction or the other in response to the indication from the meter relay. Various positions of the rotatable part are associated with particular scales. Switching means are sensitive to the position of the rotatable part to couple the associated range to the meter.
Adaptive control of stochastic linear systems with unknown parameters. M.S. Thesis
NASA Technical Reports Server (NTRS)
Ku, R. T.
1972-01-01
The problem of optimal control of a linear discrete-time stochastic dynamical system with unknown and possibly stochastically varying parameters is considered on the basis of noisy measurements. It is desired to minimize the expected value of a quadratic cost functional. Since the simultaneous estimation of the state and plant parameters is a nonlinear filtering problem, the extended Kalman filter algorithm is used. Several qualitative and asymptotic properties of the open-loop feedback optimal control and the enforced separation scheme are discussed. Simulation results via the Monte Carlo method show that, in terms of the performance measure, for stable systems the open-loop feedback optimal control system is slightly better than the enforced separation scheme, while for unstable systems the latter scheme is far better.
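The simultaneous state-and-parameter estimation step can be sketched by augmenting the state vector with the unknown parameter and running an extended Kalman filter on the augmented (now nonlinear) system. The scalar plant x_{k+1} = a·x_k + w_k with measurements y_k = x_k + v_k and unknown a, and all the noise levels, are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, q_sd, r_sd = 0.9, 0.1, 0.1
n = 1000

# Simulate the true plant and its noisy measurements
x, ys = 1.0, []
for _ in range(n):
    x = a_true * x + rng.normal(0.0, q_sd)
    ys.append(x + rng.normal(0.0, r_sd))

# Extended Kalman filter on the augmented state s = [x, a]
s = np.array([0.0, 0.5])                     # deliberately poor initial guess for a
P = np.eye(2)
Q = np.diag([q_sd**2, 1e-6])                 # tiny process noise on a keeps it adaptive
R = r_sd**2
H = np.array([[1.0, 0.0]])                   # only x is measured
for y in ys:
    x_hat, a_hat = s
    s_pred = np.array([a_hat * x_hat, a_hat])
    F = np.array([[a_hat, x_hat], [0.0, 1.0]])   # Jacobian of the augmented dynamics
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = (P @ H.T) / S                        # Kalman gain
    s = s_pred + (K * (y - s_pred[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

a_est = float(s[1])
```

Because the product a·x makes the augmented dynamics nonlinear, the filter must relinearize at each step, which is precisely why the abstract calls joint estimation a nonlinear filtering problem.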
Reconstructing high-dimensional two-photon entangled states via compressive sensing
Tonolini, Francesco; Chan, Susan; Agnew, Megan; Lindsay, Alan; Leach, Jonathan
2014-01-01
Accurately establishing the state of large-scale quantum systems is an important tool in quantum information science; however, the large number of unknown parameters hinders the rapid characterisation of such states, and reconstruction procedures can become prohibitively time-consuming. Compressive sensing, a procedure for solving inverse problems by incorporating prior knowledge about the form of the solution, provides an attractive alternative to the problem of high-dimensional quantum state characterisation. Using a modified version of compressive sensing that incorporates the principles of singular value thresholding, we reconstruct the density matrix of a high-dimensional two-photon entangled system. The dimension of each photon is equal to d = 17, corresponding to a system of 83521 unknown real parameters. Accurate reconstruction is achieved with approximately 2500 measurements, only 3% of the total number of unknown parameters in the state. The algorithm we develop is fast, computationally inexpensive, and applicable to a wide range of quantum states, thus demonstrating compressive sensing as an effective technique for measuring the state of large-scale quantum systems.
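The singular value thresholding step at the heart of the modified algorithm can be shown in isolation on an invented low-rank matrix: soft-threshold the singular values, which zeroes the small noise-driven ones and leaves a low-rank estimate, the prior knowledge compressive sensing exploits. This is a sketch of the thresholding operator only, not the full tomography pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.normal(size=5)
u /= np.linalg.norm(u)
M_true = 10.0 * np.outer(u, u)               # rank-1 "density-matrix-like" truth
A = M_true + 0.1 * rng.normal(size=(5, 5))   # noisy full-rank observation

tau = 1.0                                    # threshold level (assumed, tuned to noise)
U, svals, Vt = np.linalg.svd(A)
svals_shrunk = np.maximum(svals - tau, 0.0)  # soft-thresholding of the singular values
X = U @ np.diag(svals_shrunk) @ Vt           # low-rank reconstruction
rank = int(np.linalg.matrix_rank(X))
err = float(np.linalg.norm(X - M_true))
```

The noise spreads small singular values across all five modes; shrinking by τ kills them exactly, so the estimate recovers the rank-1 structure from a full-rank noisy matrix.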
Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.
Ullah, Azmat; Malik, Suheel Abdullah; Alimgeer, Khurram Saleem
2018-01-01
In this paper, a hybrid heuristic scheme based on two different basis functions, i.e. the log-sigmoid and the Bernstein polynomial, with unknown parameters is used for solving nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is adopted to solve the minimization problem and to obtain the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared with, and found in sharp agreement with, both the exact solution and the solution obtained by the Haar wavelet-quasilinearization technique, which demonstrates the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is also conducted to investigate the stability and reliability of the presented scheme.
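The transform-to-minimization idea can be shown with a far simpler trial solution than the hybrid basis of the paper: for y' = −y, y(0) = 1, take the trial solution y(x) = e^(wx), which satisfies the initial condition for any w, and minimize the squared ODE residual over collocation points. A plain grid search stands in here for the GA/IPA combination:

```python
import math

def residual_sse(w, xs):
    # Squared ODE residual of the trial solution y = exp(w*x) for y' + y = 0,
    # summed over the collocation points: residual(x) = (w + 1) * exp(w*x)
    return sum((w * math.exp(w * x) + math.exp(w * x)) ** 2 for x in xs)

xs = [i / 10 for i in range(11)]                           # collocation points on [0, 1]
candidates = [-2.0 + 2.0 * i / 2000 for i in range(2001)]  # coarse stand-in optimizer
w_best = min(candidates, key=lambda w: residual_sse(w, xs))
```

The minimizing parameter w ≈ −1 reproduces the exact solution e^(−x); with richer basis functions the same residual-minimization objective has many unknown parameters, which is where a GA for global search plus an interior point method for refinement earns its keep.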
Optimal hemodynamic response model for functional near-infrared spectroscopy
Kamran, Muhammad A.; Jeong, Myung Yung; Mannan, Malik M. N.
2015-01-01
Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique that measures brain activity by means of near-infrared light of 650–950 nm wavelengths. The cortical hemodynamic response (HR) differs in its attributes at different brain regions and on repetition of trials, even if the experimental paradigm is kept exactly the same. Therefore, an HR model that can estimate such variations in the response is the objective of this research. The canonical hemodynamic response function (cHRF) is modeled by two Gamma functions with six unknown parameters (four to model the shape and the other two for scale and baseline, respectively). The HRF model is assumed to be a linear combination of the HRF, a baseline, and physiological noises (whose amplitudes and frequencies are assumed unknown). An objective function is developed as the square of the residuals with constraints on the 12 free parameters. The formulated problem is solved by using an iterative optimization algorithm to estimate the unknown parameters in the model. Inter-subject variations in the HRF and physiological noises have been estimated for better cortical functional maps. The accuracy of the algorithm has been verified using 10 real and 15 simulated data sets. Ten healthy subjects participated in the experiment and their HRFs for finger-tapping tasks have been estimated and analyzed. The statistical significance of the estimated activity strength parameters has been verified by statistical analysis (i.e., t-value > t_critical and p-value < 0.05).
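The two-Gamma cHRF shape can be sketched with the widely used default shape parameters (peak Gamma with shape 6, undershoot Gamma with shape 16, undershoot ratio 1/6; these particular values are assumptions for illustration, not taken from the paper): the response peaks near 5 s and dips below zero afterwards, and the paper's six unknowns parameterize exactly this kind of curve.

```python
import math

def chrf(t, a1=6.0, a2=16.0, ratio=1 / 6):
    """Canonical two-Gamma HRF: peak Gamma density minus a scaled undershoot Gamma."""
    if t <= 0:
        return 0.0
    g1 = t ** (a1 - 1) * math.exp(-t) / math.gamma(a1)   # peak component
    g2 = t ** (a2 - 1) * math.exp(-t) / math.gamma(a2)   # undershoot component
    return g1 - ratio * g2

ts = [i / 100 for i in range(1, 3001)]    # 0.01 s grid out to 30 s
h = [chrf(t) for t in ts]
t_peak = ts[h.index(max(h))]              # time of the response peak
undershoot = chrf(15.0)                   # sample inside the undershoot window
```

Fitting the shape, scale, and baseline parameters of this curve (plus sinusoidal physiological-noise terms) to measured fNIRS time series is the 12-parameter constrained problem the abstract describes.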
State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications
NASA Astrophysics Data System (ADS)
Phanomchoeng, Gridsada
A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely tire-road friction coefficient, slip angle, roll angle, and rollover index, can be known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, unknown and changing plant parameters, and the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time-varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz assumption based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs.
An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is presented. The developed theory is used to estimate vertical tire forces and predict tripped rollovers in situations involving road bumps, potholes, and lateral unknown force inputs. To estimate the tire-road friction coefficients at each individual tire of the vehicle, algorithms to estimate longitudinal forces and slip ratios at each tire are proposed. Subsequently, tire-road friction coefficients are obtained using recursive least squares parameter estimators that exploit the relationship between longitudinal force and slip ratio at each tire. The developed approaches are evaluated through simulations with industry standard software, CARSIM, with experimental tests on a Volvo XC90 sport utility vehicle and with experimental tests on a 1/8th scaled vehicle. The simulation and experimental results show that the developed approaches can reliably estimate the vehicle parameters and state variables needed for effective ESC and rollover prevention applications.
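The recursive least squares step used above for the tire-road friction estimates can be sketched for a scalar model; the linear force/slip relationship and all numbers below are illustrative assumptions, not the dissertation's actual tire model:

```python
import random

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least squares step for the scalar model y ~ theta * x."""
    k = P * x / (lam + x * P * x)        # gain
    theta = theta + k * (y - theta * x)  # parameter update
    P = (P - k * x * P) / lam            # covariance update
    return theta, P

random.seed(0)
true_slope = 0.9                         # hypothetical normalized friction slope
theta, P = 0.0, 1000.0
for _ in range(500):
    x = random.uniform(0.0, 0.1)                  # slip ratio sample
    y = true_slope * x + random.gauss(0, 0.001)   # noisy longitudinal force (normalized)
    theta, P = rls_update(theta, P, x, y)
```

The estimate `theta` converges to the assumed slope as force/slip pairs accumulate, which is the mechanism the dissertation exploits per tire.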
NASA Astrophysics Data System (ADS)
Cui, Jie; Li, Zhiying; Krems, Roman V.
2015-10-01
We consider a problem of extrapolating the collision properties of a large polyatomic molecule A-H to make predictions of the dynamical properties for another molecule related to A-H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A-X. We assume that the effect of the -H → -X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He-C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
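A minimal one-dimensional Gaussian Process regression sketch shows the mechanics of predicting an observable (such as a cross section) as a function of an unknown potential parameter; the RBF kernel, length scale, and training values below are illustrative assumptions, not the paper's setup:

```python
import math

def solve(A, b):
    """Tiny Gauss-Jordan elimination for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * x for a, x in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_predict(X, y, xs, ell=1.0, noise=1e-6):
    """GP posterior mean and variance at xs with an RBF kernel."""
    k = lambda a, b: math.exp(-0.5 * (a - b) ** 2 / ell ** 2)
    K = [[k(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(X)] for i, a in enumerate(X)]
    alpha = solve(K, y)                       # K^{-1} y
    ks = [k(x, xs) for x in X]
    mean = sum(a * b for a, b in zip(ks, alpha))
    v = solve(K, ks)                          # K^{-1} k*
    var = k(xs, xs) - sum(a * b for a, b in zip(ks, v))
    return mean, var

X = [0.0, 1.0, 2.0]    # sampled values of an unknown potential parameter
y = [1.0, 0.6, 0.3]    # corresponding hypothetical cross sections
m, v = gp_predict(X, y, 1.0)
```

Evaluating the posterior over a physically reasonable parameter range yields the kind of prediction interval the paper reports.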
1981-12-01
preventing the generation of negative location estimators. Because of the invariant property of the EDF statistics, this transformation will ... likelihood. If the parameter estimation method developed by Harter and Moore is used, care must be taken to prevent the location estimators from being ... vs A² critical values, level = .01, n = 30 ... Appendix E: Computer Programs ... Program to Calculate the Cramer-von Mises Critical Values
Islam, Md Hamidul; Khan, Kamruzzaman; Akbar, M Ali; Salam, Md Abdus
2014-01-01
Mathematical modeling of many physical systems leads to nonlinear evolution equations because most physical systems are inherently nonlinear in nature. The investigation of traveling wave solutions of nonlinear partial differential equations (NPDEs) plays a significant role in the study of nonlinear physical phenomena. In this article, we construct traveling wave solutions of the modified KdV-ZK equation and the viscous Burgers equation by using an enhanced (G′/G)-expansion method. A number of traveling wave solutions in terms of unknown parameters are obtained. The derived traveling wave solutions exhibit solitary waves when special values are given to the unknown parameters. MSC: 35C07; 35C08; 35P99.
Pet-Armacost, J J; Sepulveda, J; Sakude, M
1999-12-01
The US Department of Transportation was interested in the risks associated with transporting Hydrazine in tanks with and without relief devices. Hydrazine is both highly toxic and flammable, as well as corrosive. Consequently, there was a conflict as to whether a relief device should be used or not. Data were not available on the impact of relief devices on release probabilities or the impact of Hydrazine on the likelihood of fires and explosions. In this paper, a Monte Carlo sensitivity analysis of the unknown parameters was used to assess the risks associated with highway transport of Hydrazine. To help determine whether or not relief devices should be used, fault trees and event trees were used to model the sequences of events that could lead to adverse consequences during transport of Hydrazine. The event probabilities in the event trees were derived as functions of the parameters whose effects were not known. The impacts of these parameters on the risk of toxic exposures, fires, and explosions were analyzed through a Monte Carlo sensitivity analysis and analyzed statistically through an analysis of variance. The analysis allowed the determination of which of the unknown parameters had a significant impact on the risks. It also provided the necessary support to a critical transportation decision even though the values of several key parameters were not known.
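The event-tree flavor of this analysis can be sketched as a Monte Carlo loop over uncertain branch probabilities; the distributions and the simple release-then-ignition sequence below are hypothetical stand-ins for the paper's fault and event trees:

```python
import random
random.seed(1)

N = 100_000
fires = 0
for _ in range(N):
    # Unknown parameters drawn from assumed ranges (hypothetical values)
    p_release = random.uniform(0.001, 0.01)   # tank release per shipment
    p_ignite = random.uniform(0.01, 0.1)      # ignition given a release
    # Event-tree sequence: release AND ignition -> fire
    if random.random() < p_release and random.random() < p_ignite:
        fires += 1
rate = fires / N
```

Repeating such runs while holding each uncertain parameter at its extremes is one simple way to see which parameters the risk estimate is sensitive to.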
Noise parameter estimation for poisson corrupted images using variance stabilization transforms.
Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo
2014-03-01
Noise is present in all images captured by real-world image sensors. The Poisson distribution models the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson-corrupted images using properties of variance stabilization. With significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to state-of-the-art methods.
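The stabilization property such methods rely on can be seen with the classical Anscombe transform, which maps Poisson counts to approximately unit variance regardless of the mean level; the mean chosen below is arbitrary:

```python
import math
import random
import statistics

random.seed(2)

def poisson(lam):
    """Knuth's multiplication method; adequate for moderate lam."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

samples = [poisson(20.0) for _ in range(20000)]
vst = [2.0 * math.sqrt(x + 3.0 / 8.0) for x in samples]  # Anscombe transform
v = statistics.pvariance(vst)  # close to 1, independent of the mean level
```

Deviations of the stabilized variance from 1 as a function of an assumed gain are what a VST-based noise-parameter estimator can exploit.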
Numerical solution of system of boundary value problems using B-spline with free parameter
NASA Astrophysics Data System (ADS)
Gupta, Yogesh
2017-01-01
This paper deals with a B-spline method for the solution of a system of boundary value problems. Differential equations are useful in various fields of science and engineering. Some interesting real-life problems involve more than one unknown function, resulting in systems of simultaneous differential equations. Such systems arise in many problems of mathematics, physics, and engineering. In the present paper, B-spline and B-spline-with-free-parameter methods for the solution of a linear system of second-order boundary value problems are presented. The methods utilize the values of the cubic B-spline and its derivatives at nodal points, together with the equations of the given system and the boundary conditions, resulting in a linear matrix equation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Jie; Krems, Roman V.; Li, Zhiying
2015-10-21
We consider a problem of extrapolating the collision properties of a large polyatomic molecule A–H to make predictions of the dynamical properties for another molecule related to A–H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A–X. We assume that the effect of the -H → -X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He–C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
Inverse and forward modeling under uncertainty using MRE-based Bayesian approach
NASA Astrophysics Data System (ADS)
Hou, Z.; Rubin, Y.
2004-12-01
A stochastic inverse approach for subsurface characterization is proposed and applied to the shallow vadose zone at a winery field site in northern California and to a gas reservoir at the Ormen Lange field site in the North Sea. The approach is formulated in a Bayesian-stochastic framework, whereby the unknown parameters are identified in terms of their statistical moments or their probabilities. Instead of the traditional single-valued estimation/prediction provided by deterministic methods, the approach gives a probability distribution for an unknown parameter. This allows calculating the mean, the mode, and the confidence interval, which is useful for a rational treatment of uncertainty and its consequences. The approach also allows incorporating data of various types and different error levels, including measurements of state variables as well as information such as bounds on or statistical moments of the unknown parameters, which may represent prior information. To obtain the minimally subjective prior probabilities required for the Bayesian approach, the principle of Minimum Relative Entropy (MRE) is employed. The approach is tested at field sites for flow parameter identification and soil moisture estimation in the vadose zone and for gas saturation estimation at great depth below the ocean floor. Results indicate the potential of coupling various types of field data within an MRE-based Bayesian formalism for improving the estimation of the parameters of interest.
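The kind of probabilistic answer described above (mean, mode, and interval rather than a single value) can be illustrated with a conjugate normal update; the prior moments and measurements below are invented for illustration and are not the paper's MRE prior:

```python
import math

# Prior moments on an unknown parameter, e.g. porosity (hypothetical values)
mu0, s0 = 0.30, 0.10
# Noisy measurements of the parameter and their assumed error std
obs, se = [0.24, 0.27, 0.22], 0.05

n = len(obs)
prec = 1 / s0**2 + n / se**2                          # posterior precision
mu_post = (mu0 / s0**2 + sum(obs) / se**2) / prec     # posterior mean (= mode here)
s_post = math.sqrt(1 / prec)                          # posterior std
lo, hi = mu_post - 1.96 * s_post, mu_post + 1.96 * s_post  # ~95% interval
```

The data tighten the prior: the posterior std is smaller than the prior std, and the interval quantifies the residual uncertainty that a deterministic inversion would hide.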
Rheological constraints on ridge formation on Icy Satellites
NASA Astrophysics Data System (ADS)
Rudolph, M. L.; Manga, M.
2010-12-01
The processes responsible for forming ridges on Europa remain poorly understood. We use a continuum damage mechanics approach to model ridge formation. The main objectives of this contribution are to constrain (1) the choice of rheological parameters and (2) the maximum ridge size and rate of formation. The key rheological parameters to constrain appear in the evolution equation for a damage variable D, Ḋ = B⟨σ⟩^r (1-D)^(-k) - αDp/μ, and in the equation relating damage accumulation to volumetric changes, Jρ0 = δ(1-D). Similar damage evolution laws have been applied to terrestrial glaciers and to the analysis of rock mechanics experiments. However, it is reasonable to expect that, like viscosity, the rheological constants B, α, and δ depend strongly on temperature, composition, and ice grain size. In order to determine whether the damage model is appropriate for Europa's ridges, we must find values of the unknown damage parameters that reproduce ridge topography. We perform a suite of numerical experiments to identify the region of parameter space conducive to ridge production and show the sensitivity to changes in each unknown parameter.
A modified Leslie-Gower predator-prey interaction model and parameter identifiability
NASA Astrophysics Data System (ADS)
Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed
2018-01-01
In this work, bifurcation and a systematic approach for estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is discussed by applying the fluctuation lemma. The system undergoes Hopf bifurcation with respect to the parameters s (intrinsic growth rate of predators) and m (prey refuge). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the considered model system with respect to all variables is performed, which also supports our theoretical study. To estimate the unknown parameters from the data, an optimization procedure (a pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are also compared with true noise-free data. It is found that the system dynamics with the true set of parameter values is similar to that with the estimated values. Numerical simulations are presented to substantiate the analytical findings.
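A pseudo-random search of the kind mentioned above can be miniaturized to one unknown growth rate in a logistic toy model (not the actual Crowley-Martin system); candidates are sampled at random and the one with the smallest sum of squared errors against the observed trajectory is kept:

```python
import random
random.seed(3)

def simulate(r, x0=0.1, dt=0.1, steps=50):
    """Euler integration of logistic growth dx/dt = r*x*(1-x)."""
    xs, x = [], x0
    for _ in range(steps):
        x += dt * r * x * (1 - x)
        xs.append(x)
    return xs

observed = simulate(1.5)   # 'observed' noise-free data with true r = 1.5

def sse(r):
    return sum((a - b) ** 2 for a, b in zip(simulate(r), observed))

# Pseudo-random search: sample candidates, keep the best
best_r, best_e = None, float("inf")
for _ in range(2000):
    r = random.uniform(0.0, 3.0)
    e = sse(r)
    if e < best_e:
        best_r, best_e = r, e
```

With enough samples the best candidate lands close to the generating value, mirroring the paper's finding that estimated and true parameter sets produce similar dynamics.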
Bayesian methods for characterizing unknown parameters of material models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.
2016-02-04
A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). The Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
Systems identification using a modified Newton-Raphson method: A FORTRAN program
NASA Technical Reports Server (NTRS)
Taylor, L. W., Jr.; Iliff, K. W.
1972-01-01
A FORTRAN program is offered which computes a maximum likelihood estimate of the parameters of any linear, constant coefficient, state space model. For the case considered, the maximum likelihood estimate can be identical to that which minimizes simultaneously the weighted mean square difference between the computed and measured response of a system and the weighted square of the difference between the estimated and a priori parameter values. A modified Newton-Raphson or quasilinearization method is used to perform the minimization, which typically requires several iterations. A starting technique is used which ensures convergence for any initial values of the unknown parameters. The program and its operation are described in sufficient detail to enable the user to apply the program to his particular problem with a minimum of difficulty.
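The modified Newton-Raphson idea can be reduced to a single unknown parameter; this Python sketch (the original program is FORTRAN) fits a decay rate by iterating on the gradient of the squared-error cost, with a crude step limit standing in for the program's starting technique:

```python
import math

# Fit decay rate a in y(t) = exp(-a*t) to synthetic, noise-free data
ts = [0.5 * i for i in range(10)]
data = [math.exp(-0.8 * t) for t in ts]   # generated with a = 0.8

def dJ(a):
    """Derivative of J(a) = sum (exp(-a*t) - y)^2 with respect to a."""
    return sum(2 * (math.exp(-a * t) - y) * (-t) * math.exp(-a * t)
               for t, y in zip(ts, data))

def d2J(a, h=1e-5):
    """Numerical second derivative, the 'modified' part of the Newton step."""
    return (dJ(a + h) - dJ(a - h)) / (2 * h)

a = 0.3                                   # deliberately poor starting value
for _ in range(50):
    step = dJ(a) / d2J(a)
    step = max(-0.2, min(0.2, step))      # crude step limit for robustness
    a -= step
```

Near the minimum the limited step reduces to the plain Newton-Raphson step and convergence is rapid, as the abstract's "several iterations" suggests.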
Groebner Basis Solutions to Satellite Trajectory Control by Pole Placement
NASA Astrophysics Data System (ADS)
Kukelova, Z.; Krsek, P.; Smutny, V.; Pajdla, T.
2013-09-01
Satellites play an important role, e.g., in telecommunication, navigation and weather monitoring. Controlling their trajectories is an important problem. In [1], an approach to the pole placement for the synthesis of a linear controller has been presented. It leads to solving five polynomial equations in nine unknown elements of the state space matrices of a compensator. This is an underconstrained system and therefore four of the unknown elements need to be considered as free parameters and set to some prior values to obtain a system of five equations in five unknowns. In [1], this system was solved for one chosen set of free parameters with the help of Dixon resultants. In this work, we study and present Groebner basis solutions to this problem of computation of a dynamic compensator for the satellite for different combinations of input free parameters. We show that the Groebner basis method for solving systems of polynomial equations leads to very simple solutions for all combinations of free parameters. These solutions require to perform only the Gauss-Jordan elimination of a small matrix and computation of roots of a single variable polynomial. The maximum degree of this polynomial is not greater than six in general but for most combinations of the input free parameters its degree is even lower. [1] B. Palancz. Application of Dixon resultant to satellite trajectory control by pole placement. Journal of Symbolic Computation, Volume 50, March 2013, Pages 79-99, Elsevier.
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential of mitigating the curse of dimensionality by reducing the total number of unknowns while describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as 'Method 1'), (2) PCA with MCMC sampling (referred to as 'Method 2'), and (3) PCA with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generate synthetic data with added noise and invert them under two different situations: (1) the noisy data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient.
In terms of uncertainty quantification, Method 1 performs poorly because of relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, and both Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA analysis is inconsistent with true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
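The dimensionality-reduction step itself can be sketched on two correlated parameters, where the leading eigenpair of a 2x2 covariance matrix has a closed form; the synthetic correlation below is an assumption for illustration, not the study's convolution model:

```python
import math
import random
random.seed(4)

# Correlated 2-D 'model parameters': x2 is roughly 2*x1, so one component suffices
pts = []
for _ in range(1000):
    z = random.gauss(0, 1)
    pts.append((z, 2 * z + random.gauss(0, 0.1)))

mx = sum(p[0] for p in pts) / len(pts)
my = sum(p[1] for p in pts) / len(pts)
sxx = sum((p[0] - mx) ** 2 for p in pts) / len(pts)
syy = sum((p[1] - my) ** 2 for p in pts) / len(pts)
sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / len(pts)

# Leading eigenpair of the 2x2 covariance matrix (closed form)
tr, det = sxx + syy, sxx * syy - sxy ** 2
lam1 = tr / 2 + math.sqrt(tr * tr / 4 - det)
v = (sxy, lam1 - sxx)                 # unnormalized leading eigenvector
n = math.hypot(*v)
v = (v[0] / n, v[1] / n)

explained = lam1 / tr                 # variance fraction kept with one component
```

When the assumed covariance matches the data (the unbiased case above), one component captures nearly all the variance; a mismatched covariance would make this reduced basis miss part of the true model, which is the failure mode the synthetic study demonstrates.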
NASA Technical Reports Server (NTRS)
Deshpande, Manohar D.; Dudley, Kenneth
2003-01-01
A simple method is presented to estimate the complex dielectric constants of individual layers of a multilayer composite material. Using MATLAB optimization tools, simple MATLAB scripts are written to search for the electric properties of individual layers so as to match the measured and calculated S-parameters. Single-layer composite materials formed from materials such as Bakelite, Nomex Felt, Fiber Glass, Woven Composite B and G, Nano Material #0, Cork, and Garlock, of different thicknesses, are tested using the present approach. Assuming the thicknesses of the samples are unknown, the present approach is shown to work well in estimating the dielectric constants and the thicknesses. A number of two-layer composite materials formed by various combinations of the above individual materials are tested using the present approach. However, the present approach could not provide estimates close to the true values when the thicknesses of the individual layers were assumed to be unknown. This is attributed to the difficulty of modelling the presence of airgaps between the layers while measuring the S-parameters. A few examples of three-layer composites are also presented.
Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J
2014-01-01
Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing parallel CUDA-based implementation for parameter synthesis in this model.
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.
2017-09-01
A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach jointly estimates the unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating the unknown FE model parameters and unknown input excitations.
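At the core of the estimator is the unscented transform's deterministic sampling; a one-dimensional version with common default settings (α = 1, κ = 2, which are assumptions here, not the paper's tuning) shows it recovering the exact mean of a quadratic nonlinearity:

```python
import math

def unscented_mean(mu, var, f, alpha=1.0, kappa=2.0):
    """Sigma-point estimate of E[f(x)] for x ~ N(mu, var), n = 1."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    s = math.sqrt((n + lam) * var)
    pts = [mu, mu + s, mu - s]                        # sigma points
    wm = [lam / (n + lam), 1 / (2 * (n + lam)), 1 / (2 * (n + lam))]
    return sum(w * f(p) for w, p in zip(wm, pts))

# For f(x) = x^2 the exact mean is mu^2 + var = 1.25; the transform matches it
m = unscented_mean(1.0, 0.25, lambda x: x * x)
```

This is why the filter needs no response sensitivities: only forward evaluations of the nonlinear model at the sigma points.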
Choi, Young Jun; Lee, Jeong Hyun; Kim, Hye Ok; Kim, Dae Yoon; Yoon, Ra Gyoung; Cho, So Hyun; Koh, Myeong Ju; Kim, Namkug; Kim, Sang Yoon; Baek, Jung Hwan
2016-01-01
To explore the added value of histogram analysis of apparent diffusion coefficient (ADC) values over magnetic resonance (MR) imaging and fluorine 18 ((18)F) fluorodeoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) for the detection of occult palatine tonsil squamous cell carcinoma (SCC) in patients with cervical nodal metastasis from a cancer of an unknown primary site. The institutional review board approved this retrospective study, and the requirement for informed consent was waived. Differences in the bimodal histogram parameters of the ADC values were assessed among occult palatine tonsil SCC (n = 19), overt palatine tonsil SCC (n = 20), and normal palatine tonsils (n = 20). One-way analysis of variance was used to analyze differences among the three groups. Receiver operating characteristic curve analysis was used to determine the best differentiating parameters. The increased sensitivity of histogram analysis over MR imaging and (18)F-FDG PET/CT for the detection of occult palatine tonsil SCC was evaluated as added value. Histogram analysis showed statistically significant differences in the mean, standard deviation, and 50th and 90th percentile ADC values among the three groups (P < .0045). Occult palatine tonsil SCC had a significantly higher standard deviation for the overall curves, mean and standard deviation of the higher curves, and 90th percentile ADC value, compared with normal palatine tonsils (P < .0167). Receiver operating characteristic curve analysis showed that the standard deviation of the overall curve best delineated occult palatine tonsil SCC from normal palatine tonsils, with a sensitivity of 78.9% (15 of 19 patients) and a specificity of 60% (12 of 20 patients). The added value of ADC histogram analysis was 52.6% over MR imaging alone and 15.8% over combined conventional MR imaging and (18)F-FDG PET/CT. 
Adding ADC histogram analysis to conventional MR imaging can improve the detection sensitivity for occult palatine tonsil SCC in patients with a cervical nodal metastasis originating from a cancer of an unknown primary site. © RSNA, 2015.
NASA Astrophysics Data System (ADS)
Grombein, T.; Seitz, K.; Heck, B.
2013-12-01
In general, national height reference systems are related to individual vertical datums defined by specific tide gauges. The discrepancy between these vertical datums causes height system biases on the order of 1-2 m at a global scale. Continental height systems can be connected by spirit leveling and gravity measurements along the leveling lines, as performed for the definition of the European Vertical Reference Frame. In order to unify intercontinental height systems, an indirect connection is needed. For this purpose, global geopotential models derived from recent satellite missions like GOCE provide an important contribution. However, to achieve a highly precise solution, a combination with local terrestrial gravity data is indispensable. Such combinations result in the solution of a Geodetic Boundary Value Problem (GBVP). In contrast to previous studies, mostly related to the traditional (scalar) free GBVP, the present paper discusses the use of the fixed GBVP for height system unification, where gravity disturbances instead of gravity anomalies are applied as boundary values. The basic idea of our approach is a conversion of measured gravity anomalies to gravity disturbances, in which unknown datum parameters occur that can be associated with height system biases. In this way, the fixed GBVP can be extended by datum parameters for each datum zone. By evaluating the GBVP at GNSS/leveling benchmarks, the unknown datum parameters can be estimated in a least squares adjustment. Besides the developed theory, we present numerical results of a case study based on the spherical fixed GBVP and boundary values simulated with the global geopotential model EGM2008. In a further step, the impact of approximations like linearization as well as topographic and ellipsoidal effects is taken into account by suitable reduction and correction terms.
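When each datum zone contributes one bias parameter and the GNSS/leveling benchmarks provide direct residuals, the least squares adjustment collapses to per-zone means; the zones and numbers below are invented purely to show the structure:

```python
# Height datum unification sketch: per-zone biases from benchmark residuals.
# Model: residual at a benchmark = bias[zone] + noise (illustrative values)
obs = [
    ("A", 0.52), ("A", 0.48), ("A", 0.50),
    ("B", -0.31), ("B", -0.29),
]
zones = sorted({z for z, _ in obs})

# Least squares with one bias per datum zone reduces to the zone mean
bias = {
    z: sum(v for zz, v in obs if zz == z) / sum(1 for zz, _ in obs if zz == z)
    for z in zones
}
```

With correlated observations or additional unknowns the same estimates come from the full normal equations, but the zone-mean structure is the core of the datum-parameter adjustment.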
An improved state-parameter analysis of ecosystem models using data assimilation
Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.
2008-01-01
Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and parameter evolution process, and controls the narrowing of parameter variance (which results in filter divergence) by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output, and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality.
Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm in evaluating and developing ecosystem models and in improving the understanding and quantification of carbon cycle parameters and processes. Copyright © 2008 Elsevier B.V.
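The joint state-parameter scheme described above can be sketched on a toy scalar model; the kernel smoothing step follows the usual shrink-and-jitter idea, and all model details below (the AR(1) dynamics, noise levels, function names) are illustrative assumptions rather than the paper's partition eddy flux model.

```python
# Sketch of a smoothed ensemble Kalman filter (SEnKF) for joint
# state-parameter estimation on a toy model x_{t+1} = theta * x_t + noise.
import numpy as np

def run_senkf(y_obs, n_ens=200, h=0.1, obs_var=0.01, proc_var=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(1.0, 0.5, n_ens)       # state ensemble
    theta = rng.normal(0.5, 0.3, n_ens)   # ensemble of the unknown parameter
    a = np.sqrt(1.0 - h ** 2)             # kernel-smoothing shrinkage factor
    estimates = []
    for y in y_obs:
        # Forecast: propagate each member's state with its own parameter.
        x = theta * x + rng.normal(0.0, np.sqrt(proc_var), n_ens)
        # Kernel smoothing: shrink parameters toward the ensemble mean and
        # re-jitter, so the parameter spread neither explodes nor collapses.
        theta = a * theta + (1.0 - a) * theta.mean() \
            + rng.normal(0.0, h * theta.std() + 1e-9, n_ens)
        # Analysis on the joint (state, parameter) vector.
        z = np.vstack([x, theta])                  # 2 x n_ens joint ensemble
        A = z - z.mean(axis=1, keepdims=True)
        hx = z[0] - z[0].mean()                    # observed part (H z = x)
        gain = (A @ hx) / (hx @ hx + (n_ens - 1) * obs_var)
        y_pert = y + rng.normal(0.0, np.sqrt(obs_var), n_ens)
        z = z + np.outer(gain, y_pert - z[0])
        x, theta = z[0], z[1]
        estimates.append(theta.mean())
    return np.array(estimates)

# Synthetic truth: x_{t+1} = 0.9 x_t + noise, observed with small error.
rng = np.random.default_rng(1)
x_true, ys = 1.0, []
for _ in range(150):
    x_true = 0.9 * x_true + rng.normal(0.0, 0.1)
    ys.append(x_true + rng.normal(0.0, 0.1))
theta_hat = run_senkf(ys)
```

Because the parameter is updated only through its sampled correlation with the observed state, the jitter term is what keeps the parameter ensemble from collapsing before the data have constrained it.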
NASA Technical Reports Server (NTRS)
Martin, William G.; Cairns, Brian; Bal, Guillaume
2014-01-01
This paper derives an efficient procedure for using the three-dimensional (3D) vector radiative transfer equation (VRTE) to adjust atmosphere and surface properties and improve their fit with multi-angle/multi-pixel radiometric and polarimetric measurements of scattered sunlight. The proposed adjoint method uses the 3D VRTE to compute the measurement misfit function and the adjoint 3D VRTE to compute its gradient with respect to all unknown parameters. In the remote sensing problems of interest, the scalar-valued misfit function quantifies agreement with data as a function of atmosphere and surface properties, and its gradient guides the search through this parameter space. Remote sensing of the atmosphere and surface in a three-dimensional region may require thousands of unknown parameters and millions of data points. Many approaches would require calls to the 3D VRTE solver in proportion to the number of unknown parameters or measurements. To avoid this issue of scale, we focus on computing the gradient of the misfit function as an alternative to the Jacobian of the measurement operator. The resulting adjoint method provides a way to adjust 3D atmosphere and surface properties with only two calls to the 3D VRTE solver for each spectral channel, regardless of the number of retrieval parameters, measurement view angles or pixels. This gives a procedure for adjusting atmosphere and surface parameters that will scale to the large problems of 3D remote sensing. For certain types of multi-angle/multi-pixel polarimetric measurements, this encourages the development of a new class of three-dimensional retrieval algorithms with more flexible parametrizations of spatial heterogeneity, less reliance on data screening procedures, and improved coverage in terms of the resolved physical processes in the Earth's atmosphere.
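The two-solve property described above can be illustrated on a toy linear problem: if the forward model is A u = S p, one forward solve and one adjoint solve yield the gradient of the misfit with respect to all parameters at once, which a finite-difference check confirms. This is a minimal sketch under those assumptions, not the 3D VRTE.

```python
# Adjoint gradient of a misfit J(p) = 0.5 * ||u(p) - d||^2 with A u = S p:
# the gradient over all m parameters costs one forward + one adjoint solve.
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 5                        # state size, number of unknown parameters
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
S = rng.standard_normal((n, m))    # source depends linearly on parameters
d = rng.standard_normal(n)         # "measurements"

def misfit(p):
    u = np.linalg.solve(A, S @ p)            # forward solve
    return 0.5 * np.sum((u - d) ** 2)

def misfit_grad_adjoint(p):
    u = np.linalg.solve(A, S @ p)            # forward solve
    lam = np.linalg.solve(A.T, u - d)        # single adjoint solve
    return S.T @ lam                         # gradient w.r.t. all m parameters

p0 = rng.standard_normal(m)
g_adj = misfit_grad_adjoint(p0)
# Finite-difference check: this route costs one solve PER parameter.
eps = 1e-6
g_fd = np.array([(misfit(p0 + eps * e) - misfit(p0 - eps * e)) / (2 * eps)
                 for e in np.eye(m)])
```

The contrast between the two gradient routes is exactly the scaling argument made in the abstract: the adjoint cost is independent of m.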
Image Restoration for Fluorescence Planar Imaging with Diffusion Model
Gong, Yuzhu; Li, Yang
2017-01-01
Fluorescence planar imaging (FPI) fails to capture high-resolution images of deep fluorochromes due to photon diffusion. This paper presents an image restoration method to deal with this kind of blurring. The scheme of this method is conceived based on a reconstruction method in fluorescence molecular tomography (FMT) with a diffusion model. A new unknown parameter is defined by introducing the first mean value theorem for definite integrals. A system matrix converting this unknown parameter to the blurry image is constructed from the elements of depth conversion matrices related to a chosen plane named the focal plane. Results of phantom and mouse experiments show that the proposed method is capable of reducing the blurring of FPI images caused by photon diffusion when the depth of the focal plane is chosen within a proper interval around the true depth of the fluorochrome. This method will be helpful in estimating the size of deep fluorochromes. PMID:29279843
Dhayat, Nasser A; Gradwell, Michael W; Pathare, Ganesh; Anderegg, Manuel; Schneider, Lisa; Luethi, David; Mattmann, Cedric; Moe, Orson W; Vogt, Bruno; Fuster, Daniel G
2017-09-07
Incomplete distal renal tubular acidosis is a well-known cause of calcareous nephrolithiasis, but its prevalence is unknown, mostly due to the lack of accepted diagnostic tests and criteria. The ammonium chloride test is considered the gold standard for the diagnosis of incomplete distal renal tubular acidosis, but the furosemide/fludrocortisone test was recently proposed as an alternative. Because of the lack of rigorous comparative studies, the validity of the furosemide/fludrocortisone test in stone formers remains unknown. In addition, the performance of conventional, nonprovocative parameters in predicting incomplete distal renal tubular acidosis has not been studied. We conducted a prospective study in an unselected cohort of 170 stone formers who underwent sequential ammonium chloride and furosemide/fludrocortisone testing. Using the ammonium chloride test as the gold standard, the prevalence of incomplete distal renal tubular acidosis was 8%. Sensitivity and specificity of the furosemide/fludrocortisone test were 77% and 85%, respectively, yielding a positive predictive value of 30% and a negative predictive value of 98%. Testing of several nonprovocative clinical parameters in the prediction of incomplete distal renal tubular acidosis revealed fasting morning urinary pH and plasma potassium as the most discriminative parameters. The combination of a fasting morning urinary threshold pH <5.3 with a plasma potassium threshold >3.8 mEq/L yielded a negative predictive value of 98% with a sensitivity of 85% and a specificity of 77% for the diagnosis of incomplete distal renal tubular acidosis. The furosemide/fludrocortisone test can be used for incomplete distal renal tubular acidosis screening in stone formers, but an abnormal furosemide/fludrocortisone test result needs confirmation by ammonium chloride testing.
Our data furthermore indicate that incomplete distal renal tubular acidosis can reliably be excluded in stone formers by use of nonprovocative clinical parameters. Copyright © 2017 by the American Society of Nephrology.
NWP model forecast skill optimization via closure parameter variations
NASA Astrophysics Data System (ADS)
Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.
2012-04-01
We present results of a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid-scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of the NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.
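The two-step inference loop of EPPES — (i) draw member-specific parameter values from a proposal distribution, (ii) feed back their relative merits — can be sketched with a toy one-parameter forecast model; the importance-weighted Gaussian update below is an illustrative simplification of the published method, and all names and constants are assumptions.

```python
# Toy ensemble parameter estimation in the spirit of EPPES: each ensemble
# member runs the forecast with its own parameter draw, members are scored
# against a verifying observation, and the proposal distribution is updated.
import numpy as np

rng = np.random.default_rng(0)
true_theta = 2.0

def forecast(theta, x0):
    """Toy stand-in for an NWP forecast model."""
    return theta * x0

mu, var = 0.0, 4.0                       # Gaussian proposal for the parameter
for window in range(40):                 # successive forecast windows
    x0 = rng.uniform(0.5, 1.5)           # initial state for this window
    y = forecast(true_theta, x0) + rng.normal(0.0, 0.1)  # verifying observation
    thetas = rng.normal(mu, np.sqrt(var), 50)  # one parameter draw per member
    # Likelihood of each member's forecast against the observation.
    w = np.exp(-0.5 * ((forecast(thetas, x0) - y) / 0.1) ** 2)
    if w.sum() == 0.0:                   # guard against weight underflow
        continue
    w /= w.sum()
    # Feed the relative merits back into the proposal distribution.
    mu = float(np.sum(w * thetas))
    var = float(max(np.sum(w * (thetas - mu) ** 2), 1e-4))
```

The variance floor plays the same practical role as keeping the proposal from collapsing, so later windows can still correct the estimate.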
Numerical weather prediction model tuning via ensemble prediction system
NASA Astrophysics Data System (ADS)
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid-scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and is very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
Systematic wavelength selection for improved multivariate spectral analysis
Thomas, Edward V.; Robinson, Mark R.; Haaland, David M.
1995-01-01
Methods and apparatus for determining in a biological material one or more unknown values of at least one known characteristic (e.g., the concentration of an analyte such as glucose in blood or the concentration of one or more blood gas parameters) with a model based on a set of samples with known values of the known characteristics and a multivariate algorithm using several wavelength subsets. The method includes selecting multiple wavelength subsets, from the electromagnetic spectral region appropriate for determining the known characteristic, for use by an algorithm wherein the selection of wavelength subsets improves the model's fitness of the determination for the unknown values of the known characteristic. The selection process utilizes multivariate search methods that select both predictive and synergistic wavelengths within the range of wavelengths utilized. The fitness of the wavelength subsets is determined by the fitness function F = f(cost, performance). The method includes the steps of: (1) using one or more applications of a genetic algorithm to produce one or more count spectra, with multiple count spectra then combined to produce a combined count spectrum; (2) smoothing the count spectrum; (3) selecting a threshold count from a count spectrum to select those wavelength subsets which optimize the fitness function; and (4) eliminating a portion of the selected wavelength subsets. The determination of the unknown values can be made: (1) noninvasively and in vivo; (2) invasively and in vivo; or (3) in vitro.
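A genetic-algorithm wavelength-subset search of the general kind outlined above can be sketched on synthetic spectra; the fitness F = f(cost, performance) is modeled here as a least-squares calibration error plus a cost penalty per selected wavelength, and all data, constants and names are hypothetical rather than taken from the patent.

```python
# Toy genetic-algorithm search over wavelength subsets: only a few synthetic
# wavelengths carry information about the "analyte" signal y.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_wl = 60, 30
X = rng.standard_normal((n_samples, n_wl))        # synthetic spectra
true_wl = [3, 11, 24]                             # informative wavelengths
y = X[:, true_wl].sum(axis=1) + 0.05 * rng.standard_normal(n_samples)

def fitness(mask):
    """F = f(cost, performance): calibration SSE + cost per wavelength."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return 1e9
    coef, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
    err = np.sum((X[:, idx] @ coef - y) ** 2)     # performance term
    return err + 0.5 * idx.size                   # cost term

pop = (rng.random((40, n_wl)) < 0.3).astype(int)  # initial population of masks
for gen in range(60):
    scores = np.array([fitness(m) for m in pop])
    pop = pop[np.argsort(scores)]                 # best (lowest F) first
    children = []
    for _ in range(len(pop) // 2):
        a, b = pop[rng.integers(0, 10, 2)]        # parents from the elite
        cut = rng.integers(1, n_wl)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        flip = rng.random(n_wl) < 0.02              # mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.vstack([pop[:len(pop) - len(children)], children])

best = pop[np.argmin([fitness(m) for m in pop])]
selected = sorted(np.flatnonzero(best).tolist())
```

With the cost term active, the search prefers small subsets that still predict well, which is the trade-off the fitness function above encodes.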
Flexible engineering designs for urban water management in Lusaka, Zambia.
Tembo, Lucy; Pathirana, Assela; van der Steen, Peter; Zevenbergen, Chris
2015-01-01
Urban water systems are often designed using deterministic single values as design parameters. Subsequently the different design alternatives are compared using a discounted cash flow analysis that assumes that all parameters remain as-predicted for the entire project period. In reality the future is unknown and at best a possible range of values for design parameters can be estimated. A Monte Carlo simulation could then be used to calculate the expected Net Present Value of project alternatives, as well as so-called target curves (cumulative frequency distribution of possible Net Present Values). The same analysis could be done after flexibilities were incorporated in the design, either by using decision rules to decide about the moment of capacity increase, or by buying Real Options (in this case land) to cater for potential capacity increases in the future. This procedure was applied to a sanitation and wastewater treatment case in Lusaka, Zambia. It included various combinations of on-site anaerobic baffled reactors and off-site waste stabilisation ponds. For the case study, it was found that the expected net value of wastewater treatment systems can be increased by 35-60% by designing a small flexible system with Real Options, rather than a large inflexible system.
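The Monte Carlo comparison described above can be sketched as follows; the demand process, cost figures and the expansion decision rule are hypothetical stand-ins for the Lusaka case numbers.

```python
# Monte Carlo NPV comparison: a large inflexible design versus a small design
# with a decision rule to expand capacity if demand grows.
import numpy as np

rng = np.random.default_rng(0)
n_sim, years, rate = 10000, 20, 0.08
disc = 1.0 / (1.0 + rate) ** np.arange(1, years + 1)   # discount factors

def npv_paths(capacity, capex, expandable=False):
    npvs = np.empty(n_sim)
    for i in range(n_sim):
        # Uncertain demand path (geometric random walk).
        demand = 50.0 * np.exp(np.cumsum(rng.normal(0.02, 0.15, years)))
        cap = capacity
        cash = np.empty(years)
        for t in range(years):
            cash[t] = 3.0 * min(demand[t], cap)        # revenue per unit served
            # Decision rule: expand once if demand exceeds capacity by 20%.
            if expandable and cap == capacity and demand[t] > 1.2 * cap:
                cash[t] -= 40.0                        # expansion cost
                cap *= 2
        npvs[i] = cash @ disc - capex
    return npvs

npv_big = npv_paths(capacity=120, capex=150.0)                  # large, inflexible
npv_flex = npv_paths(capacity=60, capex=55.0, expandable=True)  # small, flexible
```

Plotting the cumulative frequency of each NPV array gives exactly the "target curves" mentioned above; the flexible design trades some upside for much better protection in low-demand scenarios.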
NASA Technical Reports Server (NTRS)
Bakhshiyan, B. T.; Nazirov, R. R.; Elyasberg, P. E.
1980-01-01
The problem of selecting the optimal filtering algorithm and the optimal composition of the measurements is examined, assuming that the precise values of the mathematical expectation and the covariance matrix of the errors are unknown. It is demonstrated that the optimal filtering algorithm may be used to refine certain parameters (for example, the parameters of the gravitational field) after a preliminary determination of the elements of the orbit by a simpler processing method (for example, the method of least squares).
Reliable noninvasive measurement of blood gases
Thomas, Edward V.; Robinson, Mark R.; Haaland, David M.; Alam, Mary K.
1994-01-01
Methods and apparatus for, preferably, determining noninvasively and in vivo at least two of the five blood gas parameters (i.e., pH, PCO₂, [HCO₃⁻], PO₂, and O₂ sat.) in a human. The non-invasive method includes the steps of: generating light at three or more different wavelengths in the range of 500 nm to 2500 nm; irradiating blood-containing tissue; measuring the intensities of the wavelengths emerging from the blood-containing tissue to obtain a set of at least three spectral intensities v. wavelengths; and determining the unknown values of at least two of pH, [HCO₃⁻], PCO₂ and a measure of oxygen concentration. The determined values are within the physiological ranges observed in blood-containing tissue. The method also includes the steps of providing calibration samples, determining if the spectral intensities v. wavelengths from the tissue represent an outlier, and determining if any of the calibration samples represents an outlier. The determination of the unknown values is performed by at least one multivariate algorithm using two or more variables and at least one calibration model. Preferably, there is a separate calibration for each blood gas parameter being determined. The method can be utilized in a pulse mode and can also be used invasively. The apparatus includes a tissue positioning device, a source, at least one detector, electronics, a microprocessor, memory, and apparatus for indicating the determined values.
Lança, L; Silva, A; Alves, E; Serranheira, F; Correia, M
2008-01-01
The typical distribution of exposure parameters in plain radiography is unknown in Portugal. This study aims to identify exposure parameters that are being used in plain radiography in the Lisbon area and to compare the collected data with European references [Commission of European Communities (CEC) guidelines]. The results show that in four examinations (skull, chest, lumbar spine and pelvis) there is a strong tendency to use exposure times above the European recommendation. The X-ray tube potential values (in kV) are below the values recommended in the CEC guidelines. This study shows that at a local level (Lisbon region), radiographic practice does not comply with CEC guidelines concerning exposure techniques. Further national/local studies are recommended with the objective of improving exposure optimisation and technical procedures in plain radiography. This study also suggests the need to establish national/local diagnostic reference levels and to proceed to effective measurements for exposure optimisation.
State and Parameter Estimation for a Coupled Ocean--Atmosphere Model
NASA Astrophysics Data System (ADS)
Ghil, M.; Kondrashov, D.; Sun, C.
2006-12-01
The El Niño/Southern Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean--atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and the assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward-propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean--atmosphere GCMs will be discussed.
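Augmented-state EKF estimation of this kind can be sketched on a toy nonlinear model: the unknown parameter is appended to the state vector, given near-constant dynamics, and updated from observations of the state alone. The logistic model and all noise levels below are illustrative stand-ins for the coupled ENSO model.

```python
# Joint state-parameter estimation with an Extended Kalman Filter on a toy
# logistic model x_{k+1} = x_k + dt * theta * x_k * (1 - x_k).
import numpy as np

dt, q_x, q_th, r = 0.1, 1e-5, 1e-6, 1e-4

def f(z):                        # augmented dynamics, z = [x, theta]
    x, th = z
    return np.array([x + dt * th * x * (1.0 - x), th])

def F_jac(z):                    # Jacobian of f at z
    x, th = z
    return np.array([[1.0 + dt * th * (1.0 - 2.0 * x), dt * x * (1.0 - x)],
                     [0.0, 1.0]])

H = np.array([[1.0, 0.0]])       # only the state x is observed
Q = np.diag([q_x, q_th])

rng = np.random.default_rng(0)
true_theta, x_true = 1.5, 0.1
z = np.array([0.1, 0.5])         # initial guess with a wrong parameter value
P = np.diag([0.1, 1.0])
for k in range(300):
    x_true = x_true + dt * true_theta * x_true * (1.0 - x_true)
    y = x_true + rng.normal(0.0, np.sqrt(r))
    Fk = F_jac(z)                # forecast step
    z = f(z)
    P = Fk @ P @ Fk.T + Q
    S = (H @ P @ H.T).item() + r  # analysis step
    K = (P @ H.T) / S             # Kalman gain, shape (2, 1)
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
theta_hat = z[1]
```

The parameter is identifiable only while the state is evolving (here, during the growth transient); once the trajectory settles on the fixed point, the parameter estimate simply freezes, which mirrors the abstract's point that parameter information arrives episodically.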
1994-03-01
labels of α, which are called significance levels. The hypothesis tests are done based on the α levels. The maximum probabilities of making type II error...critical values at specific α levels. This procedure is done for each of the 50,000 samples. The number of the samples passing each test at those specific... α levels is counted. The ratio of the number of accepted samples to 50,000 gives the percentage point. Then, subtracting that value from one would
Determination of Stark parameters by cross-calibration in a multi-element laser-induced plasma
NASA Astrophysics Data System (ADS)
Liu, Hao; Truscott, Benjamin S.; Ashfold, Michael N. R.
2016-05-01
We illustrate a Stark broadening analysis of the electron density Ne and temperature Te in a laser-induced plasma (LIP), using a model free of assumptions regarding local thermodynamic equilibrium (LTE). The method relies on Stark parameters determined also without assuming LTE, which are often unknown and unavailable in the literature. Here, we demonstrate that the necessary values can be obtained in situ by cross-calibration between the spectral lines of different charge states, and even different elements, given determinations of Ne and Te based on appropriate parameters for at least one observed transition. This approach enables essentially free choice between species on which to base the analysis, extending the range over which these properties can be measured and giving improved access to low-density plasmas out of LTE. Because of the availability of suitable tabulated values for several charge states of both Si and C, the example of a SiC LIP is taken to illustrate the consistency and accuracy of the procedure. The cross-calibrated Stark parameters are at least as reliable as values obtained by other means, offering a straightforward route to extending the literature in this area.
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence a metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution of this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent-dimensions space rather than in the original parameter space; the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in the [0, 1] interval and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
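A minimal sketch of the ED transformation, with a plain empirical CDF standing in for the paper's kernel estimator and a synthetic three-parameter catalogue (the parameter choices are illustrative):

```python
# Equivalent-dimension (ED) transform: map each parameter through an estimate
# of its CDF, so all coordinates live on (0, 1) and Euclidean distance applies.
import numpy as np

rng = np.random.default_rng(0)
n = 500
catalogue = np.column_stack([
    rng.exponential(2.0, n),          # e.g. inter-event time (heavy-tailed)
    rng.normal(3.5, 0.8, n),          # e.g. magnitude
    rng.uniform(0.0, 30.0, n),        # e.g. depth in km
])

def to_equivalent_dimensions(X):
    """Map each column through its empirical CDF (rank-based estimate)."""
    ranks = X.argsort(axis=0).argsort(axis=0)
    return (ranks + 1) / (X.shape[0] + 1)

E = to_equivalent_dimensions(catalogue)

def ed_distance(i, j):
    """Distances between events are plain Euclidean in ED space."""
    return np.linalg.norm(E[i] - E[j])

nearest_to_0 = min(range(1, n), key=lambda j: ed_distance(0, j))
```

After the transform, an interval of length 0.1 on any axis contains 10% of the events by construction, which is exactly the probabilistic equivalence the paper defines.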
Neural correlates of value, risk, and risk aversion contributing to decision making under risk.
Christopoulos, George I; Tobler, Philippe N; Bossaerts, Peter; Dolan, Raymond J; Schultz, Wolfram
2009-10-07
Decision making under risk is central to human behavior. Economic decision theory suggests that value, risk, and risk aversion influence choice behavior. Although previous studies identified neural correlates of decision parameters, the contribution of these correlates to actual choices is unknown. In two different experiments, participants chose between risky and safe options. We identified discrete blood oxygen level-dependent (BOLD) correlates of value and risk in the ventral striatum and anterior cingulate, respectively. Notably, increasing inferior frontal gyrus activity to low risk and safe options correlated with higher risk aversion. Importantly, the combination of these BOLD responses effectively decoded the behavioral choice. Striatal value and cingulate risk responses increased the probability of a risky choice, whereas inferior frontal gyrus responses showed the inverse relationship. These findings suggest that the BOLD correlates of decision factors are appropriate for an ideal observer to detect behavioral choices. More generally, these biological data contribute to the validity of the theoretical decision parameters for actual decisions under risk.
Qin, Qin; Huang, Alan J; Hua, Jun; Desmond, John E; Stevens, Robert D; van Zijl, Peter C M
2014-02-01
Measurement of the cerebral blood flow (CBF) with whole-brain coverage is challenging in terms of both acquisition and quantitative analysis. In order to fit arterial spin labeling-based perfusion kinetic curves, an empirical three-parameter model which characterizes the effective impulse response function (IRF) is introduced, which allows the determination of CBF, the arterial transit time (ATT) and T(1,eff). The accuracy and precision of the proposed model were compared with those of more complicated models with four or five parameters through Monte Carlo simulations. Pseudo-continuous arterial spin labeling images were acquired on a clinical 3-T scanner in 10 normal volunteers using a three-dimensional multi-shot gradient and spin echo scheme at multiple post-labeling delays to sample the kinetic curves. Voxel-wise fitting was performed using the three-parameter model and other models that contain two, four or five unknown parameters. For the two-parameter model, T(1,eff) values close to tissue and blood were assumed separately. Standard statistical analysis was conducted to compare these fitting models in various brain regions. The fitted results indicated that: (i) the estimated CBF values using the two-parameter model show appreciable dependence on the assumed T(1,eff) values; (ii) the proposed three-parameter model achieves the optimal balance between the goodness of fit and model complexity when compared among the models with explicit IRF fitting; (iii) both the two-parameter model using fixed blood T1 values for T(1,eff) and the three-parameter model provide reasonable fitting results. Using the proposed three-parameter model, the estimated CBF (46 ± 14 mL/100 g/min) and ATT (1.4 ± 0.3 s) values averaged from different brain regions are close to the literature reports; the estimated T(1,eff) values (1.9 ± 0.4 s) are higher than the tissue T1 values, possibly reflecting a contribution from the microvascular arterial blood compartment. 
Copyright © 2013 John Wiley & Sons, Ltd.
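Voxel-wise fitting of a three-parameter kinetic model can be sketched as below. The exact IRF parameterization is specific to the paper, so a generic delayed exponential-recovery curve stands in, with the amplitude playing the role of CBF, the onset the role of ATT, and the time constant the role of T(1,eff); the grid-search fit keeps the example dependency-free.

```python
# Illustrative three-parameter kinetic fit at multiple post-labeling delays:
#   s(t) = A * (1 - exp(-(t - att) / t1eff))  for t > att, else 0.
import numpy as np

def model(t, A, att, t1eff):
    s = A * (1.0 - np.exp(-(t - att) / t1eff))
    return np.where(t > att, s, 0.0)

rng = np.random.default_rng(0)
t = np.linspace(0.5, 4.0, 8)               # post-labeling delays (s)
y = model(t, 2.0, 1.4, 1.9) + rng.normal(0.0, 0.02, t.size)  # noisy "voxel"

# Grid search over the two nonlinear parameters; the amplitude A is solved
# linearly at each grid point.
best = (np.inf, None)
for att in np.linspace(0.5, 2.5, 81):
    for t1eff in np.linspace(0.5, 3.5, 121):
        basis = model(t, 1.0, att, t1eff)
        denom = basis @ basis
        if denom == 0.0:
            continue
        A = (basis @ y) / denom            # linear least-squares amplitude
        sse = np.sum((A * basis - y) ** 2)
        if sse < best[0]:
            best = (sse, (A, att, t1eff))
A_hat, att_hat, t1eff_hat = best[1]
```

The same trade-off discussed in the abstract appears here: with only a handful of delays, the onset and time-constant parameters are partially degenerate, so model complexity has to be balanced against the available sampling.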
Calibration of two complex ecosystem models with different likelihood functions
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used, which is referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data).
In our research, different likelihood function formulations were used in order to examine the effect of the different model goodness metrics on calibration. The different likelihoods are different functions of the RMSE (root mean squared error) weighted by measurement uncertainty: exponential / linear / quadratic / linear normalized by correlation. As a first calibration step, sensitivity analysis was performed in order to select the influential parameters which have a strong effect on the output data. In the second calibration step, only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In the case of PaSim, more parameters were found to be responsible for 95% of the output data variance than in the case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). The cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
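The four likelihood formulations can be sketched as simple decreasing functions of the uncertainty-weighted RMSE; the exact functional forms used in the study may differ, so these are illustrative.

```python
# Candidate goodness-of-fit metrics built from the RMSE weighted by
# measurement uncertainty, compared for a good and a poor model fit.
import numpy as np

def rmse(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

def likelihoods(sim, obs, sigma):
    """Four metrics, all decreasing in the weighted RMSE."""
    e = rmse(sim, obs) / sigma          # RMSE weighted by obs uncertainty
    r = np.corrcoef(sim, obs)[0, 1]     # used only by the normalized variant
    return {
        "exponential": float(np.exp(-e)),
        "linear": max(0.0, 1.0 - e),
        "quadratic": max(0.0, 1.0 - e ** 2),
        "linear_corr_norm": max(0.0, 1.0 - e) * max(0.0, r),
    }

rng = np.random.default_rng(0)
t = np.linspace(0.0, 6.0, 50)
obs = np.sin(t) + rng.normal(0.0, 0.1, 50)            # "measured" series
L_good = likelihoods(np.sin(t), obs, sigma=1.0)       # well-matched model
L_bad = likelihoods(0.5 * np.cos(t), obs, sigma=1.0)  # poorly-matched model
```

The choice among such metrics matters because they penalize large errors differently (the exponential form decays smoothly, the linear and quadratic forms hit zero), which is the behavior the calibration comparison above probes.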
Neutrino oscillations and Non-Standard Interactions
NASA Astrophysics Data System (ADS)
Farzan, Yasaman; Tórtola, Mariam
2018-02-01
Current neutrino experiments are measuring the neutrino mixing parameters with an unprecedented accuracy. The upcoming generation of neutrino experiments will be sensitive to subdominant oscillation effects that can give information on the yet-unknown neutrino parameters: the Dirac CP-violating phase, the mass ordering and the octant of θ_{23}. Determining the exact values of neutrino mass and mixing parameters is crucial to test neutrino models and flavor symmetries designed to predict these neutrino parameters. In the first part of this review, we summarize the current status of the neutrino oscillation parameter determination. We consider the most recent data from all solar experiments and the atmospheric data from Super-Kamiokande, IceCube and ANTARES. We also implement the data from the reactor neutrino experiments KamLAND, Daya Bay, RENO and Double Chooz as well as the long baseline neutrino data from MINOS, T2K and NOvA. If, in addition to the standard interactions, neutrinos have subdominant yet-unknown Non-Standard Interactions (NSI) with matter fields, extracting the values of these parameters will suffer from new degeneracies and ambiguities. We review such effects and formulate the conditions on the NSI parameters under which the precision measurement of neutrino oscillation parameters can be distorted. Like the standard weak interactions, the non-standard interactions can be categorized into two groups: Charged Current (CC) NSI and Neutral Current (NC) NSI. Our focus will be mainly on neutral current NSI because it is possible to build a class of models that give rise to sizeable NC NSI with discernible effects on neutrino oscillation. These models are based on a new U(1) gauge symmetry with a gauge boson of mass ≲ 10 MeV. The UV-complete model should, of course, be electroweak invariant, which in general implies that, along with neutrinos, charged fermions also acquire new interactions, on which there are strong bounds.
We enumerate the bounds that already exist on the electroweak symmetric models and demonstrate that it is possible to build viable models avoiding all these bounds. In the end, we review methods to test these models and suggest approaches to break the degeneracies in deriving neutrino mass parameters caused by NSI.
Probabilistic and deterministic aspects of linear estimation in geodesy. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Dermanis, A.
1976-01-01
Recent advances in observational techniques related to geodetic work (VLBI, laser ranging) make it imperative that more consideration be given to modeling problems. Uncertainties in the effect of atmospheric refraction, polar motion and precession-nutation parameters cannot be dispensed with in the context of centimeter-level geodesy. Even physical processes that have generally been neglected altogether (station motions) must now be taken into consideration. The problem of modeling functions of time or space, or at least their values at observation points (epochs), is explored. When the nature of the function to be modeled is unknown, the need to include only a limited number of terms and to decide a priori upon a specific form may result in a representation which fails to sufficiently approximate the unknown function. An alternative approach of increasing application is the modeling of unknown functions as stochastic processes.
An adaptive control scheme for a flexible manipulator
NASA Technical Reports Server (NTRS)
Yang, T. C.; Yang, J. C. S.; Kudva, P.
1987-01-01
The problem of controlling a single link flexible manipulator is considered. A self-tuning adaptive control scheme is proposed which consists of a least squares on-line parameter identification of an equivalent linear model followed by a tuning of the gains of a pole placement controller using the parameter estimates. Since the initial parameter values for this model are assumed unknown, the use of arbitrarily chosen initial parameter estimates in the adaptive controller would result in undesirable transient effects. Hence, the initial stage control is carried out with a PID controller. Once the identified parameters have converged, control is transferred to the adaptive controller. Naturally, the relevant issues in this scheme are tests for parameter convergence and minimization of overshoots during control switch-over. To demonstrate the effectiveness of the proposed scheme, simulation results are presented with an analytical nonlinear dynamic model of a single link flexible manipulator.
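The on-line least squares identification stage described above can be sketched as a recursive least squares (RLS) loop. The first-order plant, gains and noise levels below are illustrative assumptions for a runnable toy, not the paper's flexible-manipulator model.

```python
import random

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step with forgetting factor `lam`:
    theta -> parameter estimate, P -> covariance, phi -> regressor."""
    n = len(phi)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    k = [v / denom for v in Pphi]                       # gain vector
    err = y - sum(theta[i] * phi[i] for i in range(n))  # prediction error
    theta = [theta[i] + k[i] * err for i in range(n)]
    P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

# Toy first-order plant y_k = a*y_{k-1} + b*u_{k-1} + noise; a, b unknown.
random.seed(0)
a_true, b_true = 0.8, 0.5
theta = [0.0, 0.0]                      # arbitrary initial estimates
P = [[100.0, 0.0], [0.0, 100.0]]        # large initial uncertainty
y_prev = 0.0
for _ in range(200):
    u = random.uniform(-1, 1)           # persistently exciting input
    y = a_true * y_prev + b_true * u + random.gauss(0, 0.01)
    theta, P = rls_update(theta, P, [y_prev, u], y)
    y_prev = y
print(theta)  # estimates should approach [0.8, 0.5]
```

Once such estimates have converged, they could be handed to a pole-placement gain computation, mirroring the switch-over logic described in the abstract.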
Statistical inference involving binomial and negative binomial parameters.
García-Pérez, Miguel A; Núñez-Antón, Vicente
2009-05-01
Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
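A hedged sketch of the setting: the success probability before the first success is estimated from a geometric (negative binomial) observation, the one after from binomial sampling, and a generic likelihood-ratio statistic compares the two. The statistic and the numbers below are illustrative assumptions, not the paper's exact test procedures.

```python
import math

def loglik(p, succ, trials):
    # Bernoulli log-likelihood (constants dropped) for `succ` successes in `trials` trials
    return succ * math.log(p) + (trials - succ) * math.log(1 - p)

def lr_test_stat(n_first, x, n):
    """Likelihood-ratio statistic for H0: p_before == p_after.
    n_first: trials up to and including the first success (geometric phase);
    x successes in n subsequent trials (binomial phase)."""
    p0 = (1 + x) / (n_first + n)   # pooled MLE under H0
    p1 = 1.0 / n_first             # MLE from the geometric phase
    p2 = x / n                     # MLE from the binomial phase
    l0 = loglik(p0, 1, n_first) + loglik(p0, x, n)
    l1 = loglik(p1, 1, n_first) + loglik(p2, x, n)
    return 2 * (l1 - l0)           # approx. chi-square(1) in large samples

# Same underlying rate in both phases: statistic should be near zero.
stat_null = lr_test_stat(n_first=5, x=20, n=100)   # p ≈ 0.2 in both phases
# Very different rates: statistic should be large.
stat_alt = lr_test_stat(n_first=50, x=40, n=100)
print(stat_null, stat_alt)
```

The ordering constant of the geometric likelihood cancels in the ratio, so dropping it does not affect the statistic.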
Slope Estimation in Noisy Piecewise Linear Functions
Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy
2014-01-01
This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
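The dynamic programming step over a discretized slope grid can be sketched as follows; the grid, jump penalty and toy signal are illustrative assumptions rather than the MAPSlope implementation.

```python
import random

def map_slopes(y, slope_grid, jump_penalty):
    """MAP-style slope sequence by dynamic programming (Viterbi) over a
    discretized slope grid; a simplified sketch of the MAPSlope idea.
    Unit sample spacing is assumed, so increments approximate slopes."""
    d = [y[i + 1] - y[i] for i in range(len(y) - 1)]   # noisy local slopes
    S = len(slope_grid)
    cost = [(d[0] - s) ** 2 for s in slope_grid]
    back = []
    for t in range(1, len(d)):
        new_cost, ptr = [], []
        for j, s in enumerate(slope_grid):
            # best predecessor: keep the same slope or pay a jump penalty
            best = min(range(S), key=lambda i: cost[i] + (jump_penalty if i != j else 0.0))
            new_cost.append(cost[best] + (jump_penalty if best != j else 0.0) + (d[t] - s) ** 2)
            ptr.append(best)
        cost, back = new_cost, back + [ptr]
    j = min(range(S), key=lambda i: cost[i])           # backtrack best path
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    path.reverse()
    return [slope_grid[j] for j in path]

# Piecewise linear signal: slope 1 for 50 samples, then slope -2.
random.seed(1)
y, v = [0.0], 0.0
for t in range(1, 100):
    v += 1.0 if t <= 50 else -2.0
    y.append(v + random.gauss(0, 0.1))
est = map_slopes(y, slope_grid=[-2.0, -1.0, 0.0, 1.0, 2.0], jump_penalty=5.0)
print(est[10], est[80])
```

The cost is the squared data residual plus a penalty per slope change, so the run time is O(T·S²) for T samples and S quantization levels.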
Identification of vehicle suspension parameters by design optimization
NASA Astrophysics Data System (ADS)
Tey, J. Y.; Ramli, R.; Kheng, C. W.; Chong, S. Y.; Abidin, M. A. Z.
2014-05-01
The design of a vehicle suspension system through simulation requires accurate representation of the design parameters. These parameters are usually difficult to measure or sometimes unavailable. This article proposes an efficient approach to identify the unknown parameters through optimization based on experimental results, where the covariance matrix adaptation evolution strategy (CMA-ES) is utilized to improve the correlation between simulation and experimental results in the kinematic and compliance tests. This speeds up the design and development cycle by recovering all the unknown data with respect to a set of kinematic measurements through a single optimization process. A case study employing a MacPherson strut suspension system is modelled in a multi-body dynamic system. Three kinematic and compliance tests are examined, namely, vertical parallel wheel travel, opposite wheel travel and single wheel travel. The problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design parameters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number of objectives to 30 by grouping correlated objectives together. Then, a dynamic summation of rank value is used as a pseudo-objective function to reformulate the multi-objective optimization as a single-objective optimization problem. The optimized results show a significant improvement in the correlation between the simulated model and the experimental model. Once an accurate representation of the vehicle suspension model is achieved, further analysis, such as ride and handling performance, can be implemented for further optimization.
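The calibration step can be illustrated with a much-simplified (1+1) evolution strategy standing in for CMA-ES (the real method additionally adapts a full covariance matrix of the search distribution). The two-parameter misfit below is a toy stand-in for the kinematic and compliance objectives, not the article's suspension model.

```python
import random

def es_minimize(objective, x0, sigma=0.5, iters=400, seed=0):
    """Simplified (1+1) evolution strategy with a 1/5th-rule-style step
    adaptation; a sketch of derivative-free calibration, not full CMA-ES."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    for _ in range(iters):
        cand = [xi + rng.gauss(0, sigma) for xi in x]   # mutate all parameters
        fc = objective(cand)
        if fc < fx:
            x, fx = cand, fc
            sigma *= 1.1          # success: widen the search
        else:
            sigma *= 0.98         # failure: narrow it
    return x, fx

# Toy identification: recover a spring stiffness k and damping c from
# "measured" responses (illustrative assumption, not the article's tests).
k_true, c_true = 2.0, 0.7
def misfit(p):
    k, c = p
    return (k - k_true) ** 2 + (c - c_true) ** 2

best, err = es_minimize(misfit, [0.0, 0.0])
print(best, err)  # best should approach [2.0, 0.7]
```

In the article's setting the objective would instead be the discrepancy between simulated and measured kinematic and compliance curves.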
NASA Astrophysics Data System (ADS)
Jacquin, A. P.
2012-04-01
This study is intended to quantify the impact of uncertainty about precipitation spatial distribution on the predictive uncertainty of a snowmelt runoff model. This problem is especially relevant in mountain catchments with a sparse precipitation observation network and relatively short precipitation records. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment's glaciers. Precipitation amounts at each elevation zone i are estimated as the product between observed precipitation at a station and a precipitation factor FPi. If other precipitation data are not available, these precipitation factors must be adjusted during the calibration process and are thus seen as parameters of the model. In the case of the fifth zone, glaciers are seen as an inexhaustible source of water that melts when the snow cover is depleted. The catchment case study is the Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. The model's predictive uncertainty is measured in terms of the output variance of the mean squared error of the Box-Cox transformed discharge, the relative volumetric error, and the weighted average of snow water equivalent in the elevation zones at the end of the simulation period. Sobol's variance decomposition (SVD) method is used for assessing the impact of precipitation spatial distribution, represented by the precipitation factors FPi, on the model's predictive uncertainty. In the SVD method, the first order effect of a parameter (or group of parameters) indicates the fraction of predictive uncertainty that could be reduced if the true value of this parameter (or group) was known. Similarly, the total effect of a parameter (or group) measures the fraction of predictive uncertainty that would remain if the true value of this parameter (or group) was unknown, but all the remaining model parameters could be fixed.
In this study, first order and total effects of the group of precipitation factors FP1-FP4, and of the precipitation factor FP5, are calculated separately. First order and total effects of the group FP1-FP4 are much higher than those of the factor FP5, which are negligible. This is because the actual value taken by FP5 has little influence on the contribution of the glacier zone to the catchment's output discharge, which is mainly limited by incident solar radiation. In addition, first order effects indicate that, on average, nearly 25% of predictive uncertainty could be reduced if the true values of the precipitation factors FPi were known, even if no information was available on the appropriate values for the remaining model parameters. Finally, the total effects of the precipitation factors FP1-FP4 are close to 41% on average, implying that even if the appropriate values for the remaining model parameters could be fixed, predictive uncertainty would still be quite high if the spatial distribution of precipitation remains unknown. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279.
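The first order and total effects can be estimated with Saltelli-style Monte-Carlo sampling. The two-input toy model below, in which the first factor dominates the output, is an illustrative assumption, not the snowmelt model.

```python
import random

def sobol_indices(model, dim, n=20000, seed=0):
    """Monte-Carlo (Saltelli-style) estimates of first-order and total
    Sobol effects for a model with independent U(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [model(x) for x in A]
    fB = [model(x) for x in B]
    mean = sum(fA) / n
    var = sum((v - mean) ** 2 for v in fA) / n
    first, total = [], []
    for i in range(dim):
        # AB_i: matrix A with column i replaced by column i of B
        fAB = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        first.append(sum(fB[j] * (fAB[j] - fA[j]) for j in range(n)) / n / var)
        total.append(sum((fA[j] - fAB[j]) ** 2 for j in range(n)) / (2 * n) / var)
    return first, total

# Toy "precipitation factor" model: the first input dominates the output.
model = lambda x: 4.0 * x[0] + 1.0 * x[1]
S, T = sobol_indices(model, dim=2)
print(S, T)  # analytically S = T = [16/17, 1/17] ≈ [0.941, 0.059]
```

For this additive model first order and total effects coincide; interactions between inputs would make the total effects exceed the first order ones.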
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
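The Kalman filter estimation underlying the approach can be illustrated with a scalar toy filter tracking one slowly drifting "tuner" state through a single noisy sensor. All numbers and the model structure are illustrative assumptions, not the paper's engine model.

```python
import random

def kalman_track(z_seq, a, h, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for state x_k = a*x_{k-1} + w (variance q)
    and measurement z_k = h*x_k + v (variance r). Returns estimates."""
    x, p, out = x0, p0, []
    for z in z_seq:
        x, p = a * x, a * a * p + q          # predict
        k = p * h / (h * h * p + r)          # Kalman gain
        x = x + k * (z - h * x)              # update with innovation
        p = (1 - k * h) * p
        out.append(x)
    return out

# Toy engine-health tuner: a drifting efficiency offset observed through
# one noisy sensor (illustrative, not the paper's turbofan model).
random.seed(2)
truth, zs, x = [], [], 1.0
for _ in range(500):
    x = 0.999 * x + random.gauss(0, 0.01)
    truth.append(x)
    zs.append(2.0 * x + random.gauss(0, 0.5))
est = kalman_track(zs, a=0.999, h=2.0, q=0.01 ** 2, r=0.5 ** 2)
mse = sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)
print(mse)  # filtered error is far below the raw-sensor error r/h^2 = 0.0625
```

Tuner selection in the paper amounts to choosing which such states to carry so that this kind of mean-squared error is minimized when sensors are scarce.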
[What is the prognostic significance of histomorphology in small cell lung carcinoma?].
Facilone, F; Cimmino, A; Assennato, G; Sardelli, P; Colucci, G A; Resta, L
1993-01-01
What is the prognostic significance of histomorphology in small cell carcinomas of the lung? After the WHO classification of lung cancer (1981), several studies criticized the subdivision of small cell carcinoma into three subtypes (oat-cell, intermediate cell and combined types), and the role of histology in prognostic prediction has been devalued. In order to verify the prognostic value of the morphology of the small cell types of lung cancer, we performed a multivariate analysis in 62 patients. The survival rate was analytically compared with the following parameters: maximum nuclear diameter, nuclear form, nuclear chromatism, chromatin distribution, presence of nucleolus, and evidence of cytoplasm. The results showed that none of these parameters is able to express a prognostic value. In agreement with recent studies, we think that small cell carcinoma of the lung is a neoplasia with a multiform histologic pattern. Differences observed in clinical management do not correlate with morphology, but with other, still unknown, biological parameters.
Impacts of different types of measurements on estimating unsaturated flow parameters
NASA Astrophysics Data System (ADS)
Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru
2015-05-01
This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data and groundwater level data. This study investigates the performance of EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level is one type of valuable information for inferring the soil hydraulic parameters. However, based on the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. Combined use of two or more types of data is helpful to improve the parameter estimation.
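The joint state-parameter assimilation can be sketched with a stochastic (perturbed-observation) EnKF acting on an augmented state vector. The scalar "head/conductivity" toy model below is an illustrative assumption, not the paper's unsaturated flow model.

```python
import random

def enkf_step(ens, obs, obs_std, hfun, rng):
    """One EnKF analysis step. `ens` is a list of augmented state vectors
    [state, parameter]; `hfun` maps a member to its predicted observation."""
    n = len(ens)
    preds = [hfun(m) for m in ens]
    pbar = sum(preds) / n
    means = [sum(m[i] for m in ens) / n for i in range(len(ens[0]))]
    # sample cross-covariance (state, predicted obs) and obs variance
    cov_xy = [sum((m[i] - means[i]) * (p - pbar) for m, p in zip(ens, preds)) / (n - 1)
              for i in range(len(ens[0]))]
    var_y = sum((p - pbar) ** 2 for p in preds) / (n - 1) + obs_std ** 2
    out = []
    for m, p in zip(ens, preds):
        d = obs + rng.gauss(0, obs_std) - p          # perturbed observation
        out.append([m[i] + cov_xy[i] / var_y * d for i in range(len(m))])
    return out

# Joint state-parameter estimation: recover a conductivity-like parameter K
# from noisy head observations (toy sketch, not the paper's model).
rng = random.Random(3)
K_true = 2.5
ens = [[0.0, rng.gauss(1.0, 1.0)] for _ in range(100)]   # [head, K] members
for t in range(40):
    u = 1.0 + 0.5 * (t % 3)                              # known forcing
    for m in ens:
        m[0] = m[1] * u + rng.gauss(0, 0.01)             # forecast step
    obs = K_true * u + rng.gauss(0, 0.05)                # noisy head datum
    ens = enkf_step(ens, obs, 0.05, lambda m: m[0], rng)
K_est = sum(m[1] for m in ens) / len(ens)
print(K_est)  # ensemble mean of K should approach 2.5
```

Because the parameter sits in the augmented state, the same update that corrects the head also corrects K through their sample cross-covariance.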
An easy-to-use tool for the evaluation of leachate production at landfill sites.
Grugnaletti, Matteo; Pantini, Sara; Verginelli, Iason; Lombardi, Francesco
2016-09-01
A simulation program for the evaluation of leachate generation at landfill sites is herein presented. The developed tool is based on a water balance model that accounts for all the key processes influencing leachate generation through analytical and empirical equations. After a short description of the tool, different simulations on four Italian landfill sites are shown. The obtained results revealed that when literature values were assumed for the unknown input parameters, the model provided a rough estimation of the leachate production measured in the field. In this case, indeed, the deviations between observed and predicted data appeared, in some cases, significant. Conversely, by performing a preliminary calibration for some of the unknown input parameters (e.g. initial moisture content of wastes, compression index), in nearly all cases the model performances significantly improved. These results, although showing the potential capability of a water balance model to estimate the leachate production at landfill sites, also highlighted the intrinsic limitation of a deterministic approach in accurately forecasting the leachate production over time. Indeed, parameters such as the initial water content of incoming waste and the compression index, which have a great influence on the leachate production, may exhibit temporal variation due to seasonal changes in weather conditions (e.g. rainfall, air humidity) as well as to seasonal variability in the amount and type of specific waste fractions produced (e.g. yard waste, food, plastics), all of which make their prediction quite complicated. In this sense, we believe that a tool such as the one proposed in this work, which requires a limited number of unknown parameters, can be more easily used to quantify the uncertainties. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Basin, M.; Maldonado, J. J.; Zendejo, O.
2016-07-01
This paper proposes a new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where the unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes the parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows the effectiveness of the proposed mean-square filter and parameter estimator.
Determination of power system component parameters using nonlinear dead beat estimation method
NASA Astrophysics Data System (ADS)
Kolluru, Lakshmi
Power systems are considered the most complex man-made wonders in existence today. In order to effectively supply the ever-increasing demands of consumers, power systems are required to remain stable at all times. Stability and monitoring of these complex systems are achieved by strategically placed computerized control centers. State and parameter estimation is an integral part of these facilities, as they deal with identifying the unknown states and/or parameters of the systems. Advancements in measurement technologies and the introduction of phasor measurement units (PMUs) provide detailed and dynamic information on all measurements. The accurate availability of dynamic measurements provides engineers the opportunity to expand and explore various possibilities in power system dynamic analysis/control. This thesis discusses the development of a parameter determination algorithm for nonlinear power systems, using dynamic data obtained from local measurements. The proposed algorithm was developed by observing the dead beat estimator used in state space estimation of linear systems. The dead beat estimator is considered to be very effective as it is capable of obtaining the required results in a fixed number of steps. The number of steps required is related to the order of the system and the number of parameters to be estimated. The proposed algorithm uses the idea of the dead beat estimator and nonlinear finite difference methods to create an algorithm that is user-friendly and can determine the parameters fairly accurately and effectively. The proposed algorithm is based on a deterministic approach, which uses dynamic data and mathematical models of power system components to determine the unknown parameters. The effectiveness of the algorithm is tested by implementing it to identify the unknown parameters of a synchronous machine. The MATLAB environment is used to create three test cases for dynamic analysis of the system with assumed known parameters.
Faults are introduced in the virtual test systems and the dynamic data obtained in each case are analyzed and recorded. Ideally, actual measurements would be provided to the algorithm; as such measurements are not readily available, the data obtained from simulations are fed into the determination algorithm as inputs. The obtained results are then compared to the original (or assumed) values of the parameters. The results obtained suggest that the algorithm is able to determine the parameters of a synchronous machine when crisp data are available.
NASA Astrophysics Data System (ADS)
Ding, Liang; Gao, Haibo; Liu, Zhen; Deng, Zongquan; Liu, Guangjun
2015-12-01
Identifying the mechanical property parameters of planetary soil based on terramechanics models, using in-situ data obtained from autonomous planetary exploration rovers, is both an important scientific goal and essential for control strategy optimization and high-fidelity simulations of rovers. However, identifying all the terrain parameters is a challenging task because of the nonlinear and coupled nature of the involved functions. Three parameter identification methods are presented in this paper to serve different purposes, based on an improved terramechanics model that takes into account the effects of slip, wheel lugs, etc. Parameter sensitivity and coupling of the equations are analyzed, and the parameters are grouped according to their sensitivity to the normal force, resistance moment and drawbar pull. An iterative identification method using the original integral model is developed first. In order to realize real-time identification, the model is then simplified by linearizing the normal and shearing stresses to derive decoupled closed-form analytical equations. Each equation contains one or two groups of soil parameters, making step-by-step identification of all the unknowns feasible. Experiments were performed using six different types of single wheels as well as a four-wheeled rover moving on planetary soil simulant. All the unknown model parameters were identified using the measured data and compared with the values obtained by conventional experiments. It is verified that the proposed iterative identification method provides improved accuracy, making it suitable for scientific studies of soil properties, whereas the step-by-step identification methods based on simplified models require less calculation time, making them more suitable for real-time applications. The models have a margin of error of less than 10% compared with the measured results when predicting the interaction forces and moments using the corresponding identified parameters.
Nagasaki, Masao; Yamaguchi, Rui; Yoshida, Ryo; Imoto, Seiya; Doi, Atsushi; Tamada, Yoshinori; Matsuno, Hiroshi; Miyano, Satoru; Higuchi, Tomoyuki
2006-01-01
We propose an automatic construction method for the hybrid functional Petri net as a simulation model of biological pathways. The problems we consider are how to choose the values of parameters and how to set the network structure. Usually, these unknown factors are tuned empirically so that the simulation results are consistent with biological knowledge. Obviously, this approach is limited by the size of the network of interest. To extend the capability of the simulation model, we propose the use of the data assimilation approach that was originally established in the field of geophysical simulation science. We provide a genomic data assimilation framework that establishes a link between our simulation model and observed data, such as microarray gene expression data, by using a nonlinear state space model. A key idea of our genomic data assimilation is that the unknown parameters in the simulation model are converted into parameters of the state space model, and the estimates are obtained as the maximum a posteriori estimators. In the parameter estimation process, the simulation model is used to generate the system model in the state space model. Such a formulation enables us to handle both the model construction and the parameter tuning within a framework of Bayesian statistical inference. In particular, the Bayesian approach provides us a way of controlling overfitting during the parameter estimation, which is essential for constructing a reliable biological pathway. We demonstrate the effectiveness of our approach using synthetic data. As a result, parameter estimation using genomic data assimilation works very well and the network structure is suitably selected.
Zouari, Farouk; Ibeas, Asier; Boulkroune, Abdesselem; Cao, Jinde; Mehdi Arefi, Mohammad
2018-06-01
This study addresses the issue of the adaptive output tracking control for a category of uncertain nonstrict-feedback delayed incommensurate fractional-order systems in the presence of nonaffine structures, unmeasured pseudo-states, unknown control directions, unknown actuator nonlinearities and output constraints. Firstly, the mean value theorem and the Gaussian error function are introduced to eliminate the difficulties that arise from the nonaffine structures and the unknown actuator nonlinearities, respectively. Secondly, the immeasurable tracking error variables are suitably estimated by constructing a fractional-order linear observer. Thirdly, the neural network, the Razumikhin Lemma, the variable separation approach, and the smooth Nussbaum-type function are used to deal with the uncertain nonlinear dynamics, the unknown time-varying delays, the nonstrict feedback and the unknown control directions, respectively. Fourthly, asymmetric barrier Lyapunov functions are employed to overcome the violation of the output constraints and to tune online the parameters of the adaptive neural controller. Through rigorous analysis, it is proved that the boundedness of all variables in the closed-loop system and the semiglobal asymptotic tracking are ensured without transgression of the constraints. The principal contributions of this study can be summarized as follows: (1) based on Caputo's definitions and new lemmas, methods concerning the controllability, observability and stability analysis of integer-order systems are extended to fractional-order ones, (2) the output tracking objective for a relatively large class of uncertain systems is achieved with a simple controller and fewer tuning parameters. Finally, computer-simulation studies from the robotics field are given to demonstrate the effectiveness of the proposed controller. Copyright © 2018 Elsevier Ltd. All rights reserved.
Neighboring extremals of dynamic optimization problems with path equality constraints
NASA Technical Reports Server (NTRS)
Lee, A. Y.
1988-01-01
Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.
Highly adaptive tests for group differences in brain functional connectivity.
Kim, Junghi; Pan, Wei
2015-01-01
Resting-state functional magnetic resonance imaging (rs-fMRI) and other technologies have been offering evidence and insights showing that altered brain functional networks are associated with neurological illnesses such as Alzheimer's disease. Exploring brain networks of clinical populations compared to those of controls would be a key inquiry to reveal underlying neurological processes related to such illnesses. For such a purpose, group-level inference is a necessary first step in order to establish whether there are any genuinely disrupted brain subnetworks. Such an analysis is also challenging due to the high dimensionality of the parameters in a network model and high noise levels in neuroimaging data. We are still in the early stage of method development, as highlighted by Varoquaux and Craddock (2013): "there is currently no unique solution, but a spectrum of related methods and analytical strategies" to learn and compare brain connectivity. In practice, the important issue of how to choose several critical parameters in estimating a network, such as what association measure to use and what the sparsity of the estimated network should be, has not been carefully addressed, largely because the answers are not yet known. For example, even though the choice of tuning parameters in model estimation has been extensively discussed in the literature, as will be shown here, an optimal choice of a parameter for network estimation may not be optimal in the current context of hypothesis testing. Arbitrarily choosing or mis-specifying such parameters may lead to extremely low-powered tests. Here we develop highly adaptive tests to detect group differences in brain connectivity while accounting for unknown optimal choices of some tuning parameters. The proposed tests combine statistical evidence against a null hypothesis from multiple sources across a range of plausible tuning parameter values reflecting uncertainty about the unknown truth.
These highly adaptive tests are not only easy to use, but also robustly high-powered across various scenarios. The usage and advantages of these novel tests are demonstrated on an Alzheimer's disease dataset and on simulated data.
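The adaptive-test idea, evaluating a statistic under several candidate tuning choices, taking the most extreme, and calibrating it by permutation, can be sketched as follows. The two candidate summaries and the toy data are illustrative assumptions, not the paper's connectivity statistics.

```python
import random

def adapt_test(xs, ys, stats, n_perm=500, seed=0):
    """Adaptive two-sample test: compute several candidate statistics
    (e.g. one per tuning-parameter value), keep the most extreme, and
    calibrate it by permutation so no single tuning choice is assumed."""
    rng = random.Random(seed)
    def best_stat(a, b):
        return max(abs(s(a) - s(b)) for s in stats)  # extreme over choices
    observed = best_stat(xs, ys)
    pooled = xs + ys
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                          # permute group labels
        if best_stat(pooled[:len(xs)], pooled[len(xs):]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)                # permutation p-value

# Two candidate summaries standing in for different tuning choices:
# a plain mean and a thresholded proportion (illustrative only).
mean = lambda v: sum(v) / len(v)
thresh = lambda v: sum(1.0 for x in v if x > 0.5) / len(v)

rng = random.Random(1)
controls = [rng.gauss(0.0, 1.0) for _ in range(40)]
patients = [rng.gauss(0.8, 1.0) for _ in range(40)]
p = adapt_test(controls, patients, stats=[mean, thresh])
print(p)  # small p-value: a clear group difference is detected
```

Taking the maximum over candidate statistics and recomputing that maximum in every permutation is what keeps the Type-I error controlled despite the multiple tuning choices.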
NASA Astrophysics Data System (ADS)
Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric
2014-08-01
In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability, and makes an online learned model of the system available. Most MRAC methods, however, require persistent excitation (PE) of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing online recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.
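The concurrent-learning update, an instantaneous error term plus a memory of informative recorded data, can be sketched in a scalar toy. The regressor values, gain and linear-in-parameters model are illustrative assumptions, not the paper's MRAC law.

```python
# Concurrent-learning sketch: the parameter estimate is driven by the
# instantaneous error AND by recorded (regressor, output) pairs, so it
# keeps converging even after excitation stops (toy scalar version).
def cl_update(theta, phi, y, memory, gain=0.3):
    grad = phi * (y - theta * phi)                            # instantaneous term
    grad += sum(pj * (yj - theta * pj) for pj, yj in memory)  # recorded-data term
    return theta + gain * grad

theta_true, theta = 1.7, 0.0
memory = []
# Phase 1: briefly exciting signal; record informative (phi, y) pairs.
for phi in [1.0, -0.8, 0.6]:
    y = theta_true * phi
    memory.append((phi, y))
    theta = cl_update(theta, phi, y, memory)
# Phase 2: no excitation at all (phi = 0); the recorded data alone keep
# driving the estimate toward the true parameter.
for _ in range(50):
    theta = cl_update(theta, 0.0, 0.0, memory)
print(theta)  # should approach 1.7
```

With phi = 0 a purely instantaneous gradient law would freeze, which is exactly the PE limitation the recorded-data term removes; the convergence rate depends on how informative the stored regressors are, mirroring the minimum-singular-value result in the abstract.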
A new zonation algorithm with parameter estimation using hydraulic head and subsidence observations.
Zhang, Meijing; Burbey, Thomas J; Nunes, Vitor Dos Santos; Borggaard, Jeff
2014-01-01
Parameter estimation codes such as UCODE_2005 are becoming well-known tools in groundwater modeling investigations. These programs estimate important parameter values such as transmissivity (T) and aquifer storage values (Sa) from known observations of hydraulic head, flow, or other physical quantities. One drawback inherent in these codes is that the parameter zones must be specified by the user. However, such zonal information is often unknown even when a detailed hydrogeological description is available. To overcome this deficiency, we present a discrete adjoint algorithm for identifying suitable zonations from hydraulic head and subsidence measurements, which are highly sensitive to both elastic (Sske) and inelastic (Sskv) skeletal specific storage coefficients. With the advent of interferometric synthetic aperture radar (InSAR), distributed spatial and temporal subsidence measurements can be obtained. A synthetic conceptual model containing seven transmissivity zones, one aquifer storage zone and three interbed zones for elastic and inelastic storage coefficients was developed to simulate drawdown and subsidence in an aquifer interbedded with clay that exhibits delayed drainage. Simulated delayed land subsidence and groundwater head data are assumed to be the observed measurements, to which the discrete adjoint algorithm is applied to create approximate spatial zonations of T, Sske, and Sskv. UCODE_2005 is then used to obtain the final optimal parameter values. Calibration results indicate that the estimated zonations calculated from the discrete adjoint algorithm closely approximate the true parameter zonations. This automation algorithm reduces the bias established by the initial distribution of zones and provides a robust parameter zonation distribution. © 2013, National Ground Water Association.
NASA Astrophysics Data System (ADS)
Montzka, S. A.; Butler, J. H.; Dutton, G.; Thompson, T. M.; Hall, B.; Mondeel, D. J.; Elkins, J. W.
2005-05-01
The El Niño/Southern Oscillation (ENSO) dominates interannual climate variability and plays, therefore, a key role in seasonal-to-interannual prediction. Much is known by now about the main physical mechanisms that give rise to and modulate ENSO, but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. The coupled model consists of an upper-ocean, reduced-gravity model of the Tropical Pacific and a steady-state atmospheric response to the sea surface temperature (SST). The model errors are assumed to be mainly in the atmospheric wind stress, and assimilated data are equatorial Pacific SSTs. Model behavior is very sensitive to two key parameters: (i) μ, the ocean-atmosphere coupling coefficient between SST and wind stress anomalies; and (ii) δs, the surface-layer coefficient. Previous work has shown that δs determines the period of the model's self-sustained oscillation, while μ measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode. Estimation of these parameters is tested first on synthetic data and allows us to recover the delayed-oscillator mode starting from model parameter values that correspond to the westward-propagating case. Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed.
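The joint state/parameter EKF idea can be illustrated on a toy scalar model by augmenting the state vector with the unknown parameter; the model, gains, and noise levels below are assumptions chosen for illustration, not the paper's coupled ENSO model.

```python
# Sketch: joint state/parameter estimation by augmenting the EKF state,
# on a toy model x[k+1] = mu*x[k] + u[k] with mu unknown (illustrative).
import math

mu_true = 0.8                         # parameter to be identified

# Augmented state z = [x, mu]; propagation f(z) = [mu*x + u, mu],
# with Jacobian F = [[mu, x], [0, 1]].
x_hat, mu_hat = 1.0, 0.3              # state known at start, poor mu guess
P = [[0.01, 0.0], [0.0, 1.0]]
Q = [[1e-8, 0.0], [0.0, 1e-8]]        # tiny process noise keeps P positive
R = 1e-4                              # assumed measurement noise variance

x_true = 1.0
for k in range(300):
    u = math.sin(0.5 * k)             # known, exciting input
    x_true = mu_true * x_true + u     # truth step (noise-free for clarity)
    y = x_true                        # measurement y = x + v, here v = 0
    # EKF predict
    xp = mu_hat * x_hat + u
    F = [[mu_hat, x_hat], [0.0, 1.0]]
    FP = [[F[i][0] * P[0][j] + F[i][1] * P[1][j] for j in range(2)]
          for i in range(2)]
    P = [[FP[i][0] * F[j][0] + FP[i][1] * F[j][1] + Q[i][j]
          for j in range(2)] for i in range(2)]
    # EKF update with H = [1, 0]
    S = P[0][0] + R
    K = [P[0][0] / S, P[1][0] / S]
    innov = y - xp
    x_hat = xp + K[0] * innov
    mu_hat += K[1] * innov
    P = [[P[i][j] - K[i] * P[0][j] for j in range(2)] for i in range(2)]

print(round(mu_hat, 3))               # converges close to 0.8
```

The cross-covariance term P[1][0] is what routes innovations into the parameter estimate; without an exciting input the parameter would be unobservable once the state decays.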
Parameter Estimation of Partial Differential Equation Models.
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab
2013-01-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
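The core idea of avoiding repeated PDE solves can be sketched by fitting the PDE parameter directly to derivative estimates of the observed field; here plain finite differences stand in for the paper's basis-function expansion, and the equation and numbers are illustrative assumptions.

```python
# Sketch: estimating the diffusivity theta in u_t = theta * u_xx without
# repeatedly solving the PDE, by least squares on derivative estimates.
# (The paper uses spline basis expansions; finite differences are
# substituted here for brevity.)
import math

theta_true = 0.1
def u(x, t):                     # exact solution used as "data"
    return math.exp(-theta_true * math.pi**2 * t) * math.sin(math.pi * x)

dx, dt = 0.01, 0.001
num = den = 0.0
for i in range(1, 100):          # interior grid points in x
    for k in range(1, 100):      # interior grid points in t
        x, t = i * dx, k * dt
        u_t  = (u(x, t + dt) - u(x, t - dt)) / (2 * dt)
        u_xx = (u(x + dx, t) - 2 * u(x, t) + u(x - dx, t)) / dx**2
        num += u_t * u_xx        # least-squares fit of u_t ~ theta * u_xx
        den += u_xx ** 2

theta_hat = num / den
print(round(theta_hat, 4))       # close to 0.1
```

With noisy data the raw finite differences would amplify noise, which is precisely why the paper smooths the field with basis functions before extracting derivatives.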
A genetic analysis of post-weaning feedlot performance and profitability in Bonsmara cattle.
van der Westhuizen, R R; van der Westhuizen, J; Schoeman, S J
2009-02-25
The aim of this study was to identify factors influencing profitability in a feedlot environment and to estimate genetic parameters for and between a feedlot profit function and productive traits measured in growth tests. The heritability estimate of 0.36 for feedlot profitability shows that this trait is genetically inherited and that it can be selected for. The genetic correlations between feedlot profitability and production and efficiency traits varied from negligible to high. The genetic correlation estimate of -0.92 between feed conversion ratio and feedlot profitability is largely due to the part-whole relationship between these two traits. Consequently, a multiple regression equation was developed to estimate a feed intake value for all performance-tested Bonsmara bulls, which were group fed and whose feed intakes were unknown. These predicted feed intake values enabled the calculation of a post-weaning growth or feedlot profitability value for all tested bulls, even where individual feed intakes were unknown. Subsequently, a feedlot profitability value for each bull was calculated in a favorable economic environment, an average economic environment and an unfavorable economic environment. The high Pearson and Spearman correlations between the estimated breeding values based on the average economic environment and the other two environments suggested that the average economic environment could be used to calculate estimated breeding values for feedlot profitability. It is therefore not necessary to change the carcass, weaned calf or feed price on a regular basis to allow for possible re-rankings based on estimated breeding values.
Implementation of Hybrid V-Cycle Multilevel Methods for Mixed Finite Element Systems with Penalty
NASA Technical Reports Server (NTRS)
Lai, Chen-Yao G.
1996-01-01
The goal of this paper is the implementation of hybrid V-cycle hierarchical multilevel methods for the indefinite discrete systems which arise when a mixed finite element approximation is used to solve elliptic boundary value problems. By introducing a penalty parameter, the perturbed indefinite system can be reduced to a symmetric positive definite system containing the small penalty parameter for the velocity unknown alone. We stabilize the hierarchical spatial decomposition approach proposed by Cai, Goldstein, and Pasciak for the reduced system. We demonstrate that the relative condition number of the preconditioner is bounded uniformly with respect to the penalty parameter, the number of levels and possible jumps of the coefficients as long as they occur only across the edges of the coarsest elements.
Determination of Phobos' rotational parameters by an inertial frame bundle block adjustment
NASA Astrophysics Data System (ADS)
Burmeister, Steffi; Willner, Konrad; Schmidt, Valentina; Oberst, Jürgen
2018-01-01
A functional model for a bundle block adjustment in the inertial reference frame was developed, implemented and tested. This approach enables the determination of rotation parameters of planetary bodies on the basis of photogrammetric observations. Tests with a self-consistent synthetic data set showed that the implementation converges reliably toward the expected values of the introduced unknown parameters of the adjustment, e.g., spin pole orientation, and that it can cope with typical observational errors in the data. We applied the model to a data set of Phobos using images from the Mars Express and the Viking mission. With Phobos being in a locked rotation, we computed a forced libration amplitude of 1.14° ± 0.03° together with a control point network of 685 points.
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. With the primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
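The soft-thresholding operation mentioned above is the proximal map of the absolute-value penalty; it shrinks every wavelet coefficient toward zero by the threshold. A minimal sketch with illustrative coefficients (not real wavelet data):

```python
# Soft-thresholding: the proximal map of lam*|w|, used inside ISTA-type
# iterations to enforce wavelet-domain sparsity.
def soft(w, lam):
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

coeffs = [3.0, -0.5, 1.5, -2.5, 0.0]         # toy coefficient vector
shrunk = [soft(w, 1.0) for w in coeffs]
print(shrunk)                                 # [2.0, 0.0, 0.5, -1.5, 0.0]
```

Small coefficients are set exactly to zero, which is why the iteration produces genuinely sparse reconstructions; the threshold plays the role of the regularization parameter being tuned in the paper.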
Numerical computations on one-dimensional inverse scattering problems
NASA Technical Reports Server (NTRS)
Dunn, M. H.; Hariharan, S. I.
1983-01-01
An approximate method to determine the index of refraction of a dielectric obstacle is presented. For simplicity, one-dimensional models of electromagnetic scattering are treated. The governing equations yield a second order boundary value problem, in which the index of refraction appears as a functional parameter. The availability of reflection coefficients yields two additional boundary conditions. The index of refraction is approximated by a k-th order spline, which can be written as a linear combination of B-splines. For N distinct reflection coefficients, the resulting N boundary value problems yield a system of N nonlinear equations in N unknowns, which are the coefficients of the B-splines.
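Solving N nonlinear equations in N unknowns, as the abstract describes, is commonly done with Newton's method. The toy 2×2 system below is an illustrative stand-in for the spline-coefficient equations, with an analytic Jacobian and Cramer's rule for the linear solve.

```python
# Newton's method on a toy 2x2 nonlinear system (not the paper's equations):
#   x^2 + y^2 = 5,   x*y = 2        -> one root is (2, 1)
def F(x, y):
    return (x * x + y * y - 5.0, x * y - 2.0)

def J(x, y):                        # analytic Jacobian
    return ((2 * x, 2 * y), (y, x))

x, y = 1.8, 0.9                     # starting guess near the root
for _ in range(20):
    f1, f2 = F(x, y)
    (a, b), (c, d) = J(x, y)
    det = a * d - b * c
    # solve J * delta = -F by Cramer's rule
    dx = (-f1 * d + f2 * b) / det
    dy = (-f2 * a + f1 * c) / det
    x, y = x + dx, y + dy

print(round(x, 6), round(y, 6))     # 2.0 1.0
```

For the actual inverse-scattering problem, each residual evaluation requires solving one boundary value problem, so the Jacobian would typically be formed by finite differences or sensitivity equations.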
NASA Astrophysics Data System (ADS)
Wang, L. M.
2017-09-01
A novel model-free adaptive sliding mode strategy is proposed for a generalized projective synchronization (GPS) between two entirely unknown fractional-order chaotic systems subject to the external disturbances. To cope with the limited knowledge of the master-slave system and to overcome the adverse effects of the external disturbances on the generalized projective synchronization, radial basis function neural networks are used to approximate the packaged unknown master system and the packaged unknown slave system (including the external disturbances). Consequently, based on sliding mode techniques and neural network theory, a model-free adaptive sliding mode controller is designed to guarantee asymptotic stability of the generalized projective synchronization error. The main contribution of this paper is that a control strategy is provided for the generalized projective synchronization between two entirely unknown fractional-order chaotic systems subject to the unknown external disturbances, and the proposed control strategy only requires that the master system has the same fractional orders as the slave system. Moreover, the proposed method allows us to achieve all kinds of generalized projective chaos synchronizations by tuning the user-defined parameters to the desired values. Simulation results show the effectiveness of the proposed method and the robustness of the controlled system.
Wang, Min; Ge, Shuzhi Sam; Hong, Keum-Shik
2010-11-01
This paper presents adaptive neural tracking control for a class of non-affine pure-feedback systems with multiple unknown state time-varying delays. To overcome the design difficulty arising from the non-affine structure of the pure-feedback system, the mean value theorem is exploited to deduce an affine appearance of the state variables x(i) as virtual controls α(i), and of the actual control u. The separation technique is introduced to decompose the unknown functions of all time-varying delayed states into a series of continuous functions of each delayed state. Novel Lyapunov-Krasovskii functionals are employed to compensate for the unknown functions of the current delayed state, which is effectively free from any restriction on the unknown time-delay functions and overcomes the circular construction of the controller caused by the neural approximation of a function of u and [Formula: see text]. Novel continuous functions are introduced to overcome the design difficulty arising from the use of one adaptive parameter. To achieve uniform ultimate boundedness of all the signals in the closed-loop system and tracking performance, the control gains are effectively modified into a dynamic form using a class of even functions, which allows stability analysis to be carried out in the presence of multiple time-varying delays. Simulation studies are provided to demonstrate the effectiveness of the proposed scheme.
Simulation-based Extraction of Key Material Parameters from Atomic Force Microscopy
NASA Astrophysics Data System (ADS)
Alsafi, Huseen; Peninngton, Gray
Models for the atomic force microscopy (AFM) tip and sample interaction contain numerous material parameters that are often poorly known. This is especially true when dealing with novel material systems or when imaging samples that are exposed to complicated interactions with the local environment. In this work we use Monte Carlo methods to extract sample material parameters from the experimental AFM analysis of a test sample. The parameterized theoretical model that we use is based on the Virtual Environment for Dynamic AFM (VEDA) [1]. The extracted material parameters are then compared with the accepted values for our test sample. Using this procedure, we suggest a method that can be used to successfully determine unknown material properties in novel and complicated material systems. We acknowledge Fisher Endowment Grant support from the Jess and Mildred Fisher College of Science and Mathematics, Towson University.
Time‐dependent renewal‐model probabilities when date of last earthquake is unknown
Field, Edward H.; Jordan, Thomas H.
2015-01-01
We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
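The unknown-last-event probability can be checked numerically: averaging the renewal forecast over the stationary distribution of elapsed time reduces to P = (1/μ)∫₀ᵀ S(u) du, where S is the recurrence-interval survival function and μ its mean. The sketch below uses a Gamma(2, 1) recurrence distribution as an illustrative assumption, not one of the paper's renewal models.

```python
# Renewal probability of an event within duration T when the date of the
# last event is unknown, versus the Poisson approximation 1 - exp(-T/mu).
# Identity used: P = (1/mu) * integral_0^T S(u) du (stationary elapsed time).
import math

def S(u):                              # survival function of Gamma(2, 1)
    return math.exp(-u) * (1.0 + u)

mu = 2.0                               # mean recurrence interval
T = 1.0                                # forecast duration = 50% of mu
n = 100000
h = T / n                              # trapezoid rule for the integral
integral = 0.5 * h * (S(0.0) + S(T)) + h * sum(S(i * h) for i in range(1, n))
p_renewal = integral / mu
p_poisson = 1.0 - math.exp(-T / mu)    # time-independent approximation

print(round(p_renewal, 4), round(p_poisson, 4))   # 0.4482 0.3935
```

For this distribution the renewal probability exceeds the Poisson value by about 14%, consistent with the abstract's observation that the excess passes 10% once the duration is a sizeable fraction of the mean recurrence interval.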
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-06-01
We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
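The conditioning functionality described above rests on the standard formula for conditioning a Gaussian on a subset of its variables. A minimal one-component, two-variable sketch with illustrative numbers (not real supernova/host quantities):

```python
# Conditioning a bivariate Gaussian component on its first ("known")
# variable: the second ("unknown") variable is again Gaussian with
#   mean = mu2 + s12/s11 * (v - mu1),  var = s22 - s12^2/s11.
mu1, mu2 = 0.0, 0.0            # means of known (1) and unknown (2) variables
s11, s12, s22 = 1.0, 0.8, 1.0  # joint covariance entries

v = 1.0                        # observed value of the known variable
mu_cond  = mu2 + s12 / s11 * (v - mu1)   # conditional mean, about 0.8
var_cond = s22 - s12 * s12 / s11         # conditional variance, about 0.36
print(mu_cond, var_cond)
```

In an XDGMM-style mixture the same formula is applied per component, with the mixture weights re-weighted by each component's likelihood of the observed values.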
The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions
NASA Astrophysics Data System (ADS)
Loaiciga, Hugo A.; Mariño, Miguel A.
1987-01-01
The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors as compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and perform hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
NASA Astrophysics Data System (ADS)
Lambert, M.; Lesselier, D.; Kooij, B. J.
1998-10-01
The retrieval of an unknown, possibly inhomogeneous, penetrable cylindrical obstacle buried entirely in a known homogeneous half-space - the constitutive material parameters of the obstacle and of its embedding obey a Maxwell model - is considered from single- or multiple-frequency aspect-limited data collected by ideal sensors located in air above the embedding half-space, when a small number of time-harmonic transverse electric (TE)-polarized line sources - the magnetic field H is directed along the axis of the cylinder - is also placed in air. The wavefield is modelled from a rigorous H-field domain integral-differential formulation which involves the dot product of the gradients of the single component of H and of the Green function of the stratified environment times a scalar-valued contrast function which contains the obstacle parameters (the frequency-independent, position-dependent relative permittivity and conductivity). A modified gradient method is developed in order to reconstruct the maps of such parameters within a prescribed search domain from the iterative minimization of a cost functional which incorporates both the error in reproducing the data and the error on the field built inside this domain. Non-physical values are excluded and convergence reached by incorporating in the solution algorithm, from a proper choice of unknowns, the condition that the relative permittivity be larger than or equal to 1, and the conductivity be non-negative. The efficiency of the constrained method is illustrated from noiseless and noisy synthetic data acquired independently. The importance of the choice of the initial values of the sought quantities, the need for a periodic refreshment of the constitutive parameters to avoid the algorithm providing inconsistent results, and the interest of a frequency-hopping strategy to obtain finer and finer features of the obstacle when the frequency is raised, are underlined. 
It is also shown that though either the permittivity map or the conductivity map can be obtained for a fair variety of cases, retrieving both of them may be difficult unless further information is made available.
NASA Astrophysics Data System (ADS)
Yang, Chao; Wu, Wei; Wu, Shu-Cheng; Liu, Hong-Bin; Peng, Qing
2014-02-01
Aroma types of flue-cured tobacco (FCT) are classified into light, medium, and heavy in China. However, the spatial distribution of FCT aroma types and the relationships among aroma types, chemical parameters, and climatic variables were still unknown at the national scale. In the current study, multi-year averaged chemical parameters (total sugars, reducing sugars, nicotine, total nitrogen, chloride, and K2O) of FCT samples with grade of C3F and climatic variables (mean, minimum and maximum temperatures, rainfall, relative humidity, and sunshine hours) during the growth periods were collected from main planting areas across China. Significant relationships were found between chemical parameters and climatic variables (p < 0.05). A spatial distribution map of FCT aroma types was produced using support vector machine algorithms and chemical parameters. Significant differences in chemical parameters and climatic variables were observed among the three aroma types based on one-way analysis of variance (p < 0.05). Areas with light aroma type had significantly lower values of mean, maximum, and minimum temperatures than regions with medium and heavy aroma types (p < 0.05). Areas with heavy aroma type had significantly lower values of rainfall and relative humidity and higher values of sunshine hours than regions with light and medium aroma types (p < 0.05). The output produced by classification and regression trees showed that sunshine hours, rainfall, and maximum temperature were the most important factors affecting FCT aroma types at the national scale.
Dynamic Modeling from Flight Data with Unknown Time Skews
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2016-01-01
A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.
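A simplified version of the time-skew problem: a pure sample delay between two channels can be recovered by maximizing their cross-correlation. This is only a sketch of the underlying idea; the paper estimates relative skews jointly with dynamic model parameters in the frequency domain.

```python
# Recovering a relative time skew between two measurement channels by
# cross-correlation (toy version; signal and skew are illustrative).
import math

N, true_skew = 500, 7
x = [math.sin(0.002 * n * n) for n in range(N)]      # chirp test signal
y = [0.0] * true_skew + x[:N - true_skew]            # skewed channel

def xcorr(lag):
    # correlate x delayed by `lag` against y, over a safe index window
    return sum(x[n - lag] * y[n] for n in range(20, N))

skew_hat = max(range(0, 20), key=xcorr)
print(skew_hat)                                       # 7
```

A chirp is used because its autocorrelation decays quickly away from zero lag, so the correlation peak at the true skew is unambiguous; in the frequency domain the same skew appears as a linear phase slope.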
Identifying Bearing Rotodynamic Coefficients Using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Miller, Brad A.; Howard, Samuel A.
2008-01-01
An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
Parameter identifiability of linear dynamical systems
NASA Technical Reports Server (NTRS)
Glover, K.; Willems, J. C.
1974-01-01
It is assumed that the system matrices of a stationary linear dynamical system are parametrized by a set of unknown parameters. The question considered here is: when can such a set of unknown parameters be identified from the observed data? Conditions for the local identifiability of a parametrization are derived in three situations: (1) when input/output observations are made, (2) when there exists an unknown feedback matrix in the system and (3) when the system is assumed to be driven by white noise and only output observations are made. Also a sufficient condition for global identifiability is derived.
Refractive Index of Alkali Halides and Its Wavelength and Temperature Derivatives.
1975-05-01
... Comparison of dispersion equations proposed for CsBr ... Recommended values of the refractive index and its ... discovery of empirical relationships which enable us to calculate dn/dT data at 293 K for some materials on which no data are available. In the data ... or in handbooks. In the present work, however, this problem was solved by our empirical discoveries, by which the unknown parameters of Eq. (19) for
Decentralized adaptive control
NASA Technical Reports Server (NTRS)
Oh, B. J.; Jamshidi, M.; Seraji, H.
1988-01-01
A decentralized adaptive control is proposed to stabilize and track the nonlinear, interconnected subsystems with unknown parameters. The adaptation of the controller gain is derived by using model reference adaptive control theory based on Lyapunov's direct method. The adaptive gains consist of sigma, proportional, and integral combination of the measured and reference values of the corresponding subsystem. The proposed control is applied to the joint control of a two-link robot manipulator, and the performance in computer simulation corresponds with what is expected in theoretical development.
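A single-subsystem version of the Lyapunov-based gain adaptation can be sketched as follows; the scalar plant, reference model, and gains are illustrative assumptions, not the two-link manipulator example of the paper.

```python
# Minimal scalar MRAC sketch (one subsystem, Lyapunov-derived gain updates).
# Plant: dx = a*x + b*u with a, b unknown (b > 0). Reference: dxm = -2*xm + 2*r.
import math

a, b = -1.0, 2.0                   # true plant (unknown to the controller)
kx, kr = 0.0, 0.0                  # adaptive feedback and feedforward gains
x = xm = 0.0
gamma, dt = 5.0, 0.001
for i in range(200000):            # 200 s of simulated time
    t = i * dt
    r = math.sin(t) + math.sin(0.5 * t)   # exciting reference input
    e = x - xm                            # tracking error
    u = kx * x + kr * r
    # Lyapunov-derived gain updates (sign(b) = +1 assumed known)
    kx -= gamma * e * x * dt
    kr -= gamma * e * r * dt
    x  += (a * x + b * u) * dt            # Euler integration
    xm += (-2.0 * xm + 2.0 * r) * dt

print(round(kx, 2), round(kr, 2))  # near the ideal gains -0.5 and 1.0
```

The ideal gains follow from matching: a + b·kx = -2 gives kx = -0.5, and b·kr = 2 gives kr = 1; with a sufficiently rich reference the adaptive gains approach these values while the tracking error decays.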
Forcing Regression through a Given Point Using Any Familiar Computational Routine.
1983-03-01
a linear model, Y = α + βX + ε (Model I), then adopt the principle of least squares and use sample data to estimate the unknown parameters, α and β ... has an expected value of zero indicates that the "average" response is considered linear. If ε varies widely, Model I, though conceptually correct, may ... relationship is linear from the maximum observed x to x = a, then Model II should be used. To proceed with the customary evaluation of Model I would be
NASA Astrophysics Data System (ADS)
Tirandaz, Hamed
2018-03-01
Chaos control and synchronization of chaotic systems is a challenging problem that has received considerable attention in recent years due to its numerous applications in science and industry. This paper concentrates on the control and synchronization problem of the three-dimensional (3D) Zhang chaotic system. At first, an adaptive control law and a parameter estimation law are derived for controlling the behavior of the Zhang chaotic system. Then, non-identical synchronization of the Zhang chaotic system is provided, considering the Lü chaotic system as the follower system. The synchronization problem and parameter identification are achieved by introducing an adaptive control law and a parameter estimation law. Stability of the proposed method is proved by the Lyapunov stability theorem. In addition, the convergence of the estimated parameters to their true unknown values is evaluated. Finally, some numerical simulations are carried out to illustrate and validate the effectiveness of the suggested method.
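The adaptive-control-plus-parameter-estimation pattern can be sketched on a deliberately simple scalar drive/response pair (not the Zhang or Lü systems of the paper); the update law follows from a standard Lyapunov argument, and all values are illustrative.

```python
# Sketch of synchronization with a parameter estimation law:
# drive:    dx = -x + theta*sin(t)          (theta unknown to the response)
# response: dy = -y + theta_hat*sin(t) + u,  u = -k*e,  e = y - x
# update:   d(theta_hat)/dt = -gamma * e * sin(t)
import math

theta = 1.5                              # true drive parameter
theta_hat, x, y = 0.0, 1.0, -1.0
k, gamma, dt = 2.0, 5.0, 0.001
for i in range(100000):                  # 100 s, Euler integration
    t = i * dt
    e = y - x
    u = -k * e
    x += (-x + theta * math.sin(t)) * dt
    y += (-y + theta_hat * math.sin(t) + u) * dt
    theta_hat += -gamma * e * math.sin(t) * dt

print(round(theta_hat, 2), round(abs(y - x), 4))   # estimate near 1.5, e near 0
```

Synchronization (e → 0) follows from the Lyapunov function V = e²/2 + (θ̂ − θ)²/(2γ); convergence of θ̂ to the true value additionally requires a persistently exciting regressor, which sin(t) provides here.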
Bringing metabolic networks to life: convenience rate law and thermodynamic constraints
Liebermeister, Wolfram; Klipp, Edda
2006-01-01
Background Translating a known metabolic network into a dynamic model requires rate laws for all chemical reactions. The mathematical expressions depend on the underlying enzymatic mechanism; they can become quite involved and may contain a large number of parameters. Rate laws and enzyme parameters are still unknown for most enzymes. Results We introduce a simple and general rate law called "convenience kinetics". It can be derived from a simple random-order enzyme mechanism. Thermodynamic laws can impose dependencies on the kinetic parameters. Hence, to facilitate model fitting and parameter optimisation for large networks, we introduce thermodynamically independent system parameters: their values can be varied independently, without violating thermodynamical constraints. We achieve this by expressing the equilibrium constants either by Gibbs free energies of formation or by a set of independent equilibrium constants. The remaining system parameters are mean turnover rates, generalised Michaelis-Menten constants, and constants for inhibition and activation. All parameters correspond to molecular energies, for instance, binding energies between reactants and enzyme. Conclusion Convenience kinetics can be used to translate a biochemical network – manually or automatically - into a dynamical model with plausible biological properties. It implements enzyme saturation and regulation by activators and inhibitors, covers all possible reaction stoichiometries, and can be specified by a small number of parameters. Its mathematical form makes it especially suitable for parameter estimation and optimisation. Parameter estimates can be easily computed from a least-squares fit to Michaelis-Menten values, turnover rates, equilibrium constants, and other quantities that are routinely measured in enzyme assays and stored in kinetic databases. PMID:17173669
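For unit stoichiometries and no activation or inhibition factors, the convenience rate law takes a compact form; the sketch below uses that simplified form with illustrative parameter values (see the paper for the general expression).

```python
# Simplified convenience rate law (unit stoichiometries, no regulation):
#   v = E * (kcat_f * prod(s_i/KM_i) - kcat_r * prod(p_j/KM_j))
#       / ( prod(1 + s_i/KM_i) + prod(1 + p_j/KM_j) - 1 )
def convenience_rate(E, kcat_f, kcat_r, substrates, products):
    # substrates/products: lists of (concentration, KM) pairs
    def prod(terms):
        out = 1.0
        for c, km in terms:
            out *= c / km
        return out
    def sat(terms):
        out = 1.0
        for c, km in terms:
            out *= 1.0 + c / km
        return out
    num = kcat_f * prod(substrates) - kcat_r * prod(products)
    den = sat(substrates) + sat(products) - 1.0
    return E * num / den

# With one substrate and an irreversible reaction (kcat_r = 0) this reduces
# to Michaelis-Menten: at s = KM the rate is half of Vmax = E*kcat_f.
v = convenience_rate(E=1.0, kcat_f=10.0, kcat_r=0.0,
                     substrates=[(2.0, 2.0)], products=[])
print(v)   # 5.0
```

The reduction to Michaelis-Menten in the one-substrate case is a useful sanity check, and the reversible numerator vanishes at equilibrium concentrations, consistent with the thermodynamic constraints discussed in the abstract.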
Liu, Jian; Liu, Kexin; Liu, Shutang
2017-01-01
In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we have developed a unified framework for designing an adaptive complex scalar controller that renders this class of CVCSs asymptotically stable, and for selecting complex update laws to estimate the unknown complex parameters. In particular, by combining Lyapunov functions dependent on complex-valued vectors with the back-stepping technique, sufficient criteria for the stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, numerical simulation is presented to validate our theoretical results. PMID:28467431
Oxidative stress parameters in localized scleroderma patients.
Kilinc, F; Sener, S; Akbaş, A; Metin, A; Kirbaş, S; Neselioglu, S; Erel, O
2016-11-01
Localized scleroderma (LS, morphea) is a chronic inflammatory skin disease of unknown cause that progresses with sclerosis of the skin and/or subcutaneous tissues. Its pathogenesis is not completely understood, but oxidative stress is suggested to play a role. We aimed to determine the relationship of morphea lesions with oxidative stress. Serum total oxidant capacity (TOC), total antioxidant capacity (TAC), and the paraoxonase (PON) and arylesterase (ARES) activities of the PON 1 enzyme were investigated in 13 LS patients (generalized and plaque type) and 13 healthy controls. TOC values of the patient group were higher than those of the control group (p < 0.01). ARES values of the patient group were also higher than those of the controls (p < 0.0001). The oxidative stress index (OSI) was significantly higher in the patient group than in the controls (p < 0.005). Oxidative stress therefore appears to be involved in the pathogenesis; the elevated ARES levels in morphea patients are consistent with a response to oxidative stress and its reduction. Further controlled studies in larger series are required.
A function approximation approach to anomaly detection in propulsion system test data
NASA Technical Reports Server (NTRS)
Whitehead, Bruce A.; Hoyt, W. A.
1993-01-01
Ground test data from propulsion systems such as the Space Shuttle Main Engine (SSME) can be automatically screened for anomalies by a neural network trained on nominal data only. Given the values of 14 measurements reflecting external influences on the SSME at a given time, the neural network predicts the expected nominal value of a desired engine parameter at that time. We compared the ability of three function-approximation techniques to perform this nominal-value prediction: a novel neural network architecture based on Gaussian bar basis functions, a conventional backpropagation neural network, and linear regression. These three techniques were tested with real data from six SSME ground tests containing two anomalies. The basis function network trained more rapidly than backpropagation, and it yielded nominal predictions with a confidence interval tight enough to distinguish anomalous deviations from nominal fluctuations in an engine parameter. Since the function-approximation approach requires nominal training data only, it is capable of detecting unknown classes of anomalies for which training data are not available.
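The nominal-only screening idea can be sketched in a few lines: fit a predictor of the engine parameter from the external measurements on nominal data, then flag test samples whose residual exceeds a confidence band. Here plain linear regression stands in for the Gaussian-bar basis-function network, and all data are synthetic.

```python
import numpy as np

# Sketch of nominal-only anomaly screening. Linear regression replaces the
# basis-function network of the paper; the data are synthetic stand-ins for
# the 14 external-influence measurements and one engine parameter.
rng = np.random.default_rng(0)

n_inputs = 14
w_true = rng.normal(size=n_inputs)

# Nominal training data: parameter is a noisy linear function of the inputs
X_train = rng.normal(size=(500, n_inputs))
y_train = X_train @ w_true + 0.05 * rng.normal(size=500)

w_fit, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
sigma = np.std(y_train - X_train @ w_fit)   # nominal residual scatter

# Test data: mostly nominal, with one anomalous sample injected
X_test = rng.normal(size=(50, n_inputs))
y_test = X_test @ w_true + 0.05 * rng.normal(size=50)
y_test[25] += 10 * sigma                    # deviation from nominal behaviour

flags = np.abs(y_test - X_test @ w_fit) > 3 * sigma
```

Because the band is estimated from nominal data alone, no examples of any anomaly class are needed, which is the point made in the abstract.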
NASA Astrophysics Data System (ADS)
Yu, Miao; Huang, Deqing; Yang, Wanqiu
2018-06-01
In this paper, we address the problem of unknown periodicity for a class of discrete-time nonlinear parametric systems without assuming any growth conditions on the nonlinearities. The unknown periodicity hides in the parametric uncertainties and is difficult to estimate with existing techniques. By incorporating a logic-based switching mechanism, we identify the period and the bound of the unknown parameter simultaneously. Lyapunov-based analysis demonstrates that a finite number of switchings can guarantee asymptotic tracking for the nonlinear parametric systems. Simulation results also show the efficacy of the proposed switching periodic adaptive control approach.
Ellis, John; Evans, Jason L.; Mustafayev, Azar; ...
2016-10-28
Here, we revisit minimal supersymmetric SU(5) grand unification (GUT) models in which the soft supersymmetry-breaking parameters of the minimal supersymmetric Standard Model (MSSM) are universal at some input scale, M_in, above the supersymmetric gauge-coupling unification scale, M_GUT. As in the constrained MSSM (CMSSM), we assume that the scalar masses and gaugino masses have common values, m_0 and m_1/2, respectively, at M_in, as do the trilinear soft supersymmetry-breaking parameters A_0. Going beyond previous studies of such a super-GUT CMSSM scenario, we explore the constraints imposed by the lower limit on the proton lifetime and the LHC measurement of the Higgs mass, m_h. We find regions of m_0, m_1/2, A_0 and the parameters of the SU(5) superpotential that are compatible with these and other phenomenological constraints, such as the density of cold dark matter, which we assume to be provided by the lightest neutralino. Typically, these allowed regions appear for m_0 and m_1/2 in the multi-TeV region, for suitable values of the unknown SU(5) GUT-scale phases and superpotential couplings, and with the ratio of supersymmetric Higgs vacuum expectation values tan β ≲ 6.
A reverse KAM method to estimate unknown mutual inclinations in exoplanetary systems
NASA Astrophysics Data System (ADS)
Volpi, Mara; Locatelli, Ugo; Sansottera, Marco
2018-05-01
The inclinations of exoplanets detected via the radial velocity method are essentially unknown. We aim to provide estimations of the ranges of mutual inclinations that are compatible with the long-term stability of the system. Focusing on the skeleton of an extrasolar system, i.e. considering only the two most massive planets, we study the Hamiltonian of the three-body problem after the reduction of the angular momentum. Such a Hamiltonian is expanded both in Poincaré canonical variables and in the small parameter D_2, which represents the normalised angular momentum deficit. The value of the mutual inclination is deduced from D_2 and, thanks to the use of interval arithmetic, we are able to consider open sets of initial conditions instead of single values. Looking at the convergence radius of the Kolmogorov normal form, we develop a reverse KAM approach in order to estimate the ranges of mutual inclinations that are compatible with long-term stability in a KAM sense. Our method is successfully applied to the extrasolar systems HD 141399, HD 143761 and HD 40307.
NASA Astrophysics Data System (ADS)
Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu
2016-01-01
An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.
NASA Astrophysics Data System (ADS)
Chevalier, Pascal; Oukaci, Abdelkader; Delmas, Jean-Pierre
2011-12-01
The detection of a known signal with unknown parameters in the presence of noise plus interference (called total noise) whose covariance matrix is unknown is an important problem that has received much attention in recent decades, for applications such as radar, satellite localization and time acquisition in radio communications. However, most of the available receivers assume a second-order (SO) circular (or proper) total noise and become suboptimal in the presence of SO noncircular (or improper) interference, potentially present in these applications. The few available receivers that take the potential SO noncircularity of the total noise into account have been developed under the restrictive condition of a known signal with known parameters, or under the assumption of a random signal. For this reason, following a generalized likelihood ratio test (GLRT) approach, this paper introduces and analyzes the performance of different array receivers for the detection of a known signal, with different sets of unknown parameters, corrupted by unknown noncircular total noise. To simplify the study, we limit the analysis to rectilinear known useful signals, for which the baseband signal is real, which covers many applications.
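To make the GLRT idea concrete, the sketch below shows the textbook special case: a known real signal shape with unknown amplitude in Gaussian noise of known covariance. Maximising the likelihood ratio over the amplitude gives the familiar normalised matched-filter statistic. This is only the circular-noise baseline, not the noncircular array receivers developed in the paper.

```python
import numpy as np

# Simplified GLRT sketch: detect a known real (rectilinear) signal s with
# unknown amplitude in Gaussian noise of known covariance R. A textbook
# special case used here for illustration only.
rng = np.random.default_rng(1)

n = 64
s = np.cos(2 * np.pi * 0.1 * np.arange(n))       # known signal shape
R = np.eye(n)                                    # known noise covariance
R_inv = np.linalg.inv(R)

def glrt_statistic(x, s, R_inv):
    # Amplitude maximised out of the likelihood ratio:
    # T = (s' R^-1 x)^2 / (s' R^-1 s)
    return (s @ R_inv @ x) ** 2 / (s @ R_inv @ s)

noise_only = rng.normal(size=n)
with_signal = 0.8 * s + rng.normal(size=n)

t0 = glrt_statistic(noise_only, s, R_inv)        # compare against a threshold
t1 = glrt_statistic(with_signal, s, R_inv)
```

A detection threshold on T sets the false-alarm rate; the paper's contribution is extending this construction to unknown, noncircular total noise.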
On the predictiveness of single-field inflationary models
NASA Astrophysics Data System (ADS)
Burgess, C. P.; Patil, Subodh P.; Trott, Michael
2014-06-01
We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflation scenario, which is arguably the most predictive single-field model on the market, because its predictions for A_s, r and n_s are made using only one new free parameter beyond those measured in particle physics experiments and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the renormalization group allows Higgs Inflation to occur (in principle) for a slightly larger range of Higgs masses. We comment on the origin of the various UV scales that arise at large field values for the SM Higgs, clarifying cutoff-scale arguments by further developing the formalism of a non-linear realization of SU(2)_L × U(1) in curved space. We discuss the interesting fact that, outside of Higgs Inflation, the effect of a non-minimal coupling to gravity, even in the SM, results in a non-linear EFT for the Higgs sector. Finally, we briefly comment on post-BICEP2 attempts to modify the Higgs Inflation scenario.
Blood gases, biochemistry and haematology of Galápagos hawksbill turtles (Eretmochelys imbricata)
Muñoz-Pérez, Juan Pablo; Lewbart, Gregory A.; Hirschfeld, Maximilian; Alarcón-Ruales, Daniela; Denkinger, Judith; Castañeda, Jason Guillermo; García, Juan; Lohmann, Kenneth J.
2017-01-01
The hawksbill turtle, Eretmochelys imbricata, is a marine chelonian with a circum-global distribution, but the species is critically endangered and has nearly vanished from the eastern Pacific. Although reference blood parameter intervals have been published for many chelonian species and populations, including nesting Atlantic hawksbills, no such baseline biochemical and blood gas values have been reported for wild Pacific hawksbill turtles. Blood samples were drawn from eight hawksbill turtles captured in near shore foraging locations within the Galápagos archipelago over a period of four sequential years; three of these turtles were recaptured and sampled on multiple occasions. Of the eight sea turtles sampled, five were immature and of unknown sex, and the other three were females. A portable blood analyzer was used to obtain near immediate field results for a suite of blood gas and chemistry parameters. Values affected by temperature were corrected in two ways: (i) with standard formulas and (ii) with auto-corrections made by the portable analyzer. A bench top blood chemistry analyzer was used to measure a series of biochemistry parameters from plasma. Standard laboratory haematology techniques were employed for red and white blood cell counts and to determine haematocrit manually, which was compared to the haematocrit values generated by the portable analyzer. The values reported in this study provide reference data that may be useful in comparisons among populations and in detecting changes in health status among Galápagos sea turtles. The findings might also be helpful in future efforts to demonstrate associations between specific biochemical parameters and disease or environmental disasters. PMID:28496982
NASA Astrophysics Data System (ADS)
Marichev, V. A.
2005-08-01
In DFT calculation of the charge transfer (ΔN), anions pose a special problem since their electron affinities are unknown. There is no method for calculating reasonable values of the absolute electronegativity (χ_A) and chemical hardness (η_A) for ions from data on the species themselves. We propose a new approach to the experimental measurement of χ_A under the condition ΔN = 0, at which η values may be neglected and χ_A = χ_Me. Electrochemical parameters corresponding to this condition may be obtained by the contact electric resistance method during in situ investigation of anion adsorption in a particular anion-metal system.
Parrish, Rudolph S.; Smith, Charles N.
1990-01-01
A quantitative method is described for testing whether model predictions fall within a specified factor of true values. The technique is based on classical theory for confidence regions on unknown population parameters and can be related to hypothesis testing in both univariate and multivariate situations. A capability index is defined that can be used as a measure of predictive capability of a model, and its properties are discussed. The testing approach and the capability index should facilitate model validation efforts and permit comparisons among competing models. An example is given for a pesticide leaching model that predicts chemical concentrations in the soil profile.
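An illustrative sketch of such a factor-of-f test (not the paper's exact procedure) works on the log-ratios of predictions to true values: form a confidence interval for their mean and check that it lies inside (−ln f, ln f), and compute a crude capability index comparing ln f with the spread. The function name, the index definition and the z-approximation are assumptions for illustration.

```python
import math
import statistics

# Illustrative factor-of-f predictive capability check on log-ratios.
# The index definition below is a hypothetical stand-in for the paper's
# capability index; a normal quantile approximates the t critical value.

def capability_check(predicted, observed, factor=2.0, z=1.96):
    d = [math.log(p / o) for p, o in zip(predicted, observed)]
    mean_d = statistics.mean(d)
    sd_d = statistics.stdev(d)
    half = z * sd_d / math.sqrt(len(d))
    ci = (mean_d - half, mean_d + half)          # CI for the mean log-ratio
    limit = math.log(factor)
    within = -limit < ci[0] and ci[1] < limit    # predictions within factor f?
    index = limit / (abs(mean_d) + 2 * sd_d)     # >1 suggests adequate capability
    return within, index

obs = [1.0, 2.0, 4.0, 8.0, 16.0]
pred = [1.1, 1.9, 4.3, 7.6, 17.0]                # all within ~10% of truth
ok, cp = capability_check(pred, obs)
```

Larger indices indicate that the model's prediction errors are comfortably inside the allowed factor, enabling comparisons among competing models as the abstract suggests.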
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmad, Israr; Saaban, Azizan Bin; Ibrahim, Adyda Binti
This paper addresses a comparative computational study of the synchronization quality, cost and convergence speed for two pairs of identical chaotic and hyperchaotic systems with unknown time-varying parameters, which are assumed to be bounded. Based on Lyapunov stability theory and using the adaptive control method, a single proportional controller is proposed to achieve complete synchronization. Accordingly, appropriate adaptive laws are designed to identify the unknown time-varying parameters. The designed control strategy is easy to implement in practice. Numerical simulation results are provided to verify the effectiveness of the proposed synchronization scheme.
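The flavour of adaptive synchronization can be sketched with two identical Lorenz systems: the response is driven by a proportional term u = −k·e whose gain adapts as k' = γ‖e‖². This simple adaptive-gain scheme is an illustration of the general idea, not the paper's controller with time-varying parameter identification; all gains and initial conditions are illustrative.

```python
import numpy as np

# Adaptive-gain synchronization of two identical Lorenz systems (sketch).
# Drive and response are integrated with forward Euler; the feedback gain k
# grows until the synchronization error decays.

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, steps, gamma = 1e-3, 20000, 1.0
drive = np.array([1.0, 1.0, 1.0])
resp = np.array([-5.0, 7.0, 20.0])
k = 0.0

for _ in range(steps):
    e = resp - drive
    k += dt * gamma * float(e @ e)              # adaptive gain update
    drive = drive + dt * lorenz(drive)
    resp = resp + dt * (lorenz(resp) - k * e)   # proportional coupling u = -k e

final_error = float(np.linalg.norm(resp - drive))
```

Once k exceeds the local expansion rate of the chaotic flow, the error contracts exponentially and the gain stops growing, which is the usual Lyapunov argument behind such schemes.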
Chaotic dynamics in the (47171) Lempo triple system
NASA Astrophysics Data System (ADS)
Correia, Alexandre C. M.
2018-05-01
We investigate the dynamics of the (47171) Lempo triple system, also known as 1999 TC36. We derive a full 3D N-body model that takes into account the orbital and spin evolution of all bodies, which are assumed to be triaxial ellipsoids. We show that, for reasonable values of the shapes and rotational periods, the present best-fitted orbital solution for the Lempo system is chaotic and unstable on short time-scales. The formation mechanism of this system is unknown, but the orbits can be stabilised when tidal dissipation is taken into account. The dynamics of the Lempo system is very rich, but depends on many parameters that are presently unknown. A better understanding of this system thus requires more observations, which also need to be fitted with a complete model like the one presented here.
Reference-free error estimation for multiple measurement methods.
Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga
2018-01-01
We present a computational framework for selecting the most accurate and precise method of measurement of a certain quantity when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in the true values of the measurand, and the precision by random error, modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error-model parameters is estimated from samples obtained by Markov chain Monte Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic datasets and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in good agreement with the corresponding least-squares regression estimates against a reference.
NASA Astrophysics Data System (ADS)
Kim, W.; Hahm, I.; Ahn, S. J.; Lim, D. H.
2005-12-01
This paper introduces a powerful method for determining hypocentral parameters for local earthquakes in 1-D using a genetic algorithm (GA) and two-point ray tracing. Determining hypocentral parameters with existing algorithms is difficult, because the results can vary with the initial velocity model. We developed a new method to solve this problem by applying a GA to an existing algorithm, HYPO-71 (Lee and Lahr, 1975). The original HYPO-71 algorithm was modified by applying two-point ray tracing and a weighting factor with respect to the takeoff angle at the source, to reduce errors from the ray path and hypocenter depth. Artificial data, without error, were generated by computer using two-point ray tracing in a true model in which the velocity structure and hypocentral parameters were known. The accuracy of the calculated results was easily determined by comparing calculated and actual values. We examined the accuracy of this method for several cases by changing the true and modeled layer numbers and thicknesses. The computational results show that this method determines nearly exact hypocentral parameters without depending on initial velocity models. Furthermore, accurate and nearly unique hypocentral parameters were obtained even when the number and thicknesses of the modeled layers differed from those in the true model. Therefore, this method can be a useful tool for determining hypocentral parameters in regions where reliable local velocity values are unknown, and it also provides basic a priori information for 3-D studies. Keywords: hypocentral parameters, genetic algorithm (GA), two-point ray tracing
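A toy version of the GA search can be sketched in a homogeneous half-space, where two-point ray tracing reduces to straight rays; the unknowns are the epicentre (x, y), depth z and origin time t0. The velocity, station layout and GA settings below are illustrative assumptions, not the paper's layered model or its modified HYPO-71.

```python
import numpy as np

# Toy GA hypocentre search in a homogeneous half-space (straight rays),
# standing in for the paper's layered-model two-point ray tracing.
rng = np.random.default_rng(2)

v = 6.0                                            # km/s, assumed known
stations = rng.uniform(0.0, 50.0, size=(8, 2))     # surface station coordinates
truth = np.array([25.0, 30.0, 8.0, 1.0])           # x, y, z (km), t0 (s)

def arrivals(h):
    d = np.sqrt(((stations - h[:2]) ** 2).sum(axis=1) + h[2] ** 2)
    return h[3] + d / v

t_obs = arrivals(truth)                            # noise-free synthetic picks

def misfit(h):
    return float(np.sqrt(np.mean((arrivals(h) - t_obs) ** 2)))

lo = np.array([0.0, 0.0, 0.0, 0.0])
hi = np.array([50.0, 50.0, 30.0, 5.0])
pop = rng.uniform(lo, hi, size=(80, 4))

for _ in range(200):
    scores = np.array([misfit(p) for p in pop])
    elite = pop[np.argsort(scores)[:20]]           # selection with elitism
    parents = elite[rng.integers(0, 20, size=(60, 2))]
    alpha = rng.uniform(size=(60, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
    children += rng.normal(scale=0.5, size=children.shape)          # mutation
    pop = np.vstack([elite, np.clip(children, lo, hi)])

best = pop[np.argmin([misfit(p) for p in pop])]
```

Because the GA searches the misfit surface directly, no initial hypocentre guess is needed, which mirrors the paper's point about independence from starting models.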
Optimization of ISOCS Parameters for Quantitative Non-Destructive Analysis of Uranium in Bulk Form
NASA Astrophysics Data System (ADS)
Kutniy, D.; Vanzha, S.; Mikhaylov, V.; Belkin, F.
2011-12-01
Quantitative calculation of the isotopic masses of fissionable U and Pu is important for forensic analysis of nuclear materials. γ-spectrometry is the most commonly applied tool for qualitative detection and analysis of key radionuclides in nuclear materials. Relative isotopic measurements of U and Pu may be obtained from γ-spectra through application of special software such as MGAU (Multi-Group Analysis for Uranium, LLNL) or FRAM (Fixed-Energy Response Function Analysis with Multiple Efficiency, LANL). If the concentration of U/Pu in the matrix is unknown, however, isotopic masses cannot be calculated. At present, active neutron interrogation is the only practical alternative for non-destructive quantification of fissionable isotopes of U and Pu. An active well coincidence counter (AWCC), an alternative for analyses of uranium materials, has the following disadvantages: 1) the detection of small quantities (≤100 g) of 235U is not possible in many models; 2) representative standards that capture the geometry, density and chemical composition of the analyzed unknown are required for precise analysis; and 3) specimen size is severely restricted by the size of the measuring chamber. These problems may be addressed using modified γ-spectrometry techniques based on a coaxial HPGe detector and ISOCS software (In Situ Object Counting System, Canberra). We present data testing a new γ-spectrometry method uniting actinide detection with commonly utilized software, modified for application in determining the masses of the fissionable isotopes in unknown samples of nuclear materials. The ISOCS software, widely used in radiation monitoring, calculates the detector efficiency curve in a specified geometry and range of photon energies.
In describing the source-detector geometry, it is necessary to specify the distance between the source and the detector, the material and thickness of the container walls, and the material, density and chemical composition of the specimen matrix. Obviously, not all parameters can be characterized when measuring samples of unknown composition or uranium in bulk form. Because of this, and especially for uranium materials, the IAEA developed an ISOCS optimization procedure. The target values for the optimization are M_matrix^fixed, the matrix mass determined by weighing with a container of known mass, and E^fixed, the 235U enrichment determined by MGAU. Target values are fitted by varying the matrix density (ρ) and the concentration of uranium in the matrix of the unknown (w). For each (ρ_i, w_i), an efficiency curve is generated, and the masses of the uranium isotopes, M_235U^i and M_238U^i, are determined using spectral activity data and the known specific activities of U. Finally, fitted parameters are obtained for M_matrix^i = M_matrix^fixed ± 1σ and E^i = E^fixed ± 1σ, together with the associated parameters (ρ_i, w_i, M_235U^i, M_238U^i, M_U^i). We examined multiple forms of uranium (powdered, pressed, and scrap UO2 and U3O8) to test this method for its utility in accurately identifying the mass and enrichment of uranium materials, and will present the results of this research.
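The fitting loop just described can be sketched as a grid search over (ρ, w). The efficiency model below is a crude exponential self-attenuation placeholder, and the attenuation coefficients, thickness, geometric factor and specific activities are all toy assumptions, not ISOCS physics; the point is only the structure of the optimization.

```python
import numpy as np

# Toy sketch of the ISOCS-style (rho, w) fit: recover matrix density and
# uranium weight fraction by matching the weighed matrix mass and the MGAU
# enrichment. eff() is a placeholder efficiency model, not ISOCS.

mu_186, mu_1001 = 1.5, 0.07          # toy attenuation values at the two lines (cm^2/g)
thickness = 2.0                      # cm, toy effective sample thickness
a235, a238 = 8.0e4, 1.24e4           # Bq/g, approximate specific activities

def eff(mu, rho):
    return 0.01 * np.exp(-mu * rho * thickness)   # toy self-attenuation model

# Synthetic "measured" count rates generated from a known truth
rho_true, w_true = 2.4, 0.60
m_matrix_fixed, e_fixed = 1000.0, 0.045           # g (weighed); enrichment (MGAU)
m_u_true = m_matrix_fixed * w_true
c235 = eff(mu_186, rho_true) * a235 * (m_u_true * e_fixed)
c238 = eff(mu_1001, rho_true) * a238 * (m_u_true * (1 - e_fixed))

best, best_score = None, float("inf")
for rho in np.arange(1.0, 4.001, 0.01):
    m235_i = c235 / (eff(mu_186, rho) * a235)     # candidate isotope masses
    m238_i = c238 / (eff(mu_1001, rho) * a238)
    m_u_i = m235_i + m238_i
    for w in np.arange(0.10, 1.001, 0.01):
        score = (abs(m_u_i / w - m_matrix_fixed) / m_matrix_fixed
                 + abs(m235_i / m_u_i - e_fixed) / e_fixed)
        if score < best_score:
            best, best_score = (float(rho), float(w)), score
```

Because the two γ lines attenuate differently, the enrichment constraint pins down ρ and the matrix-mass constraint then pins down w, mirroring the role of the two fixed targets in the procedure above.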
NASA Astrophysics Data System (ADS)
Tirandaz, Hamed; Karami-Mollaee, Ali
2018-06-01
Chaotic systems exhibit complex behaviour in their state variables and parameters, which poses challenges for control and synchronisation. This paper presents a new synchronisation scheme based on the integral sliding mode control (ISMC) method for a class of complex chaotic systems with complex unknown parameters. Synchronisation between corresponding states of such systems, and convergence of the parameter errors to zero, are studied. The designed feedback control vector and the complex unknown-parameter vector are derived analytically based on Lyapunov stability theory. Moreover, the effectiveness of the proposed methodology is verified by synchronisation of the Chen complex system and the Lorenz complex system as the leader and follower chaotic systems, respectively. Finally, numerical simulations are given to illustrate the effectiveness of the theoretical discussion.
Completed Beltrami-Michell formulation for analyzing mixed boundary value problems in elasticity
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Kaljevic, Igor; Hopkins, Dale A.; Saigal, Sunil
1995-01-01
In elasticity, the method of forces, wherein stress parameters are considered as the primary unknowns, is known as the Beltrami-Michell formulation (BMF). The existing BMF can only solve stress boundary value problems; it cannot handle the more prevalent displacement or mixed boundary value problems of elasticity. Therefore, this formulation, which has restricted application, could not become a true alternative to Navier's displacement method, which can solve all three types of boundary value problems. The restrictions in the BMF have been alleviated by augmenting the classical formulation with a novel set of conditions identified as the boundary compatibility conditions. This new method, which completes the classical force formulation, has been termed the completed Beltrami-Michell formulation (CBMF). The CBMF can solve general elasticity problems with stress, displacement, and mixed boundary conditions in terms of stresses as the primary unknowns. It is derived from the stationary condition of the variational functional of the integrated force method. In the CBMF, stresses for kinematically stable structures can be obtained without any reference to displacements, either in the field or on the boundary. This paper presents the CBMF and its derivation from the variational functional of the integrated force method. Several examples demonstrate the applicability of the completed formulation for analyzing mixed boundary value problems under thermomechanical loads, including a cylindrical shell wherein membrane and bending responses are coupled, and a composite circular plate.
NASA Astrophysics Data System (ADS)
Hassanabadi, Amir Hossein; Shafiee, Masoud; Puig, Vicenc
2018-01-01
In this paper, sensor fault diagnosis of a singular delayed linear parameter varying (LPV) system is considered. In the considered system, the model matrices depend on parameters that are measurable in real time. The case of inexact parameter measurements is considered, which is close to real situations. Fault diagnosis in this system is achieved via fault estimation. For this purpose, an augmented system is created by including the sensor faults as additional system states. Then, an unknown input observer (UIO) is designed which estimates both the system states and the faults in the presence of measurement noise, disturbances and the uncertainty induced by inexactly measured parameters. The error dynamics and the original system constitute an uncertain system due to inconsistencies between the real and measured values of the parameters. The robust estimation of the system states and the faults is then achieved with H∞ performance and formulated as a set of linear matrix inequalities (LMIs). The designed UIO is also applicable to fault diagnosis of singular delayed LPV systems with unmeasurable scheduling variables. The efficiency of the proposed approach is illustrated with an example.
Adaptive control based on retrospective cost optimization
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S. (Inventor); Santillo, Mario A. (Inventor)
2012-01-01
A discrete-time adaptive control law is presented for stabilization, command following, and disturbance rejection that is effective for systems that are unstable, MIMO, and/or nonminimum phase. The adaptive control algorithm includes guidelines concerning the modeling information needed for implementation. This information includes the relative degree, the first nonzero Markov parameter, and the nonminimum-phase zeros. Except when the plant has nonminimum-phase zeros whose absolute value is less than the plant's spectral radius, the required zero information can be approximated by a sufficient number of Markov parameters. No additional information about the poles or zeros need be known. Numerical examples are presented to illustrate the algorithm's effectiveness in handling systems with errors in the required modeling data, unknown latency, sensor noise, and saturation.
Interacting Winds in Eclipsing Symbiotic Systems - The Case Study of EG Andromedae
NASA Astrophysics Data System (ADS)
Calabrò, Emanuele
2014-03-01
We report the mathematical representation of the so-called eccentric eclipse model, whose numerical solutions can be used to obtain the physical parameters of a quiescent eclipsing symbiotic system. The nebular region produced by the collision of the stellar winds should be shifted relative to the orbital axis because of the orbital motion of the system. This mechanism is not negligible, and it led us to modify the classical concept of an eclipse. The orbital elements obtained from spectroscopy and photometry of the symbiotic star EG Andromedae were used to test the eccentric eclipse model. Consistent values for the unknown orbital elements of this symbiotic were obtained, and the derived physical parameters agree with those obtained from other simulations of this system.
Hierarchical mark-recapture models: a framework for inference about demographic processes
Link, W.A.; Barker, R.J.
2004-01-01
The development of sophisticated mark-recapture models over the last four decades has provided fundamental tools for the study of wildlife populations, allowing reliable inference about population sizes and demographic rates based on clearly formulated models for the sampling processes. Mark-recapture models are now routinely described by large numbers of parameters. These large models provide the next challenge to wildlife modelers: the extraction of signal from noise in large collections of parameters. Pattern among parameters can be described by strong, deterministic relations (as in ultrastructural models) but is more flexibly and credibly modeled using weaker, stochastic relations. Trend in survival rates is not likely to be manifest by a sequence of values falling precisely on a given parametric curve; rather, if we could somehow know the true values, we might anticipate a regression relation between parameters and explanatory variables, in which true value equals signal plus noise. Hierarchical models provide a useful framework for inference about collections of related parameters. Instead of regarding parameters as fixed but unknown quantities, we regard them as realizations of stochastic processes governed by hyperparameters. Inference about demographic processes is based on investigation of these hyperparameters. We advocate the Bayesian paradigm as a natural, mathematically and scientifically sound basis for inference about hierarchical models. We describe analysis of capture-recapture data from an open population based on hierarchical extensions of the Cormack-Jolly-Seber model. In addition to recaptures of marked animals, we model first captures of animals and losses on capture, and are thus able to estimate survival probabilities w (i.e., the complement of death or permanent emigration) and per capita growth rates f (i.e., the sum of recruitment and immigration rates). 
Covariation in these rates, a feature of demographic interest, is explicitly described in the model.
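The hierarchical idea above, with parameters treated as realizations of a stochastic process governed by hyperparameters rather than fixed unknowns, can be illustrated with a toy normal-normal model. All numbers below are invented; the actual analysis is a Bayesian Cormack-Jolly-Seber extension, not this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: yearly survival-rate effects (on a transformed scale)
# are draws from a common hyperdistribution, observed with sampling noise.
n_years, sigma_obs = 20, 0.4
mu_hyper, tau_hyper = 0.0, 0.25                  # assumed hyperparameters
theta = rng.normal(mu_hyper, tau_hyper, n_years)  # true year effects
y = rng.normal(theta, sigma_obs)                  # noisy raw estimates

# Posterior mean under the hierarchical (normal-normal) model shrinks each
# raw estimate toward the hypermean by a factor set by tau and sigma.
shrink = tau_hyper**2 / (tau_hyper**2 + sigma_obs**2)
theta_hat = mu_hyper + shrink * (y - mu_hyper)

raw_err = np.mean((y - theta)**2)
hier_err = np.mean((theta_hat - theta)**2)
# Partial pooling typically reduces mean squared error versus raw estimates.
```

This "borrowing of strength" across years is exactly the signal-from-noise extraction the abstract describes.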
M-MRAC Backstepping for Systems with Unknown Virtual Control Coefficients
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje
2015-01-01
The paper presents an over-parametrization free certainty equivalence state feedback backstepping adaptive control design method for systems of any relative degree with unmatched uncertainties and unknown virtual control coefficients. It uses a fast prediction model to estimate the unknown parameters, which is independent of the control design. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters. The benefits of the approach are demonstrated in numerical simulations.
Cosmic-ray antiprotons, positrons, and gamma rays from halo dark matter annihilation
NASA Technical Reports Server (NTRS)
Rudaz, S.; Stecker, F. W.
1988-01-01
The subject of cosmic ray antiproton production is reexamined by considering choices for the nature of the Majorana fermion chi other than the photino considered in a previous article. The calculations are extended to include cosmic-ray positrons and cosmic gamma rays as annihilation products. Taking chi to be a generic higgsino or simply a heavy Majorana neutrino with standard couplings to the Z-zero boson allows the previous interpretation of the cosmic antiproton data to be maintained. In this case also, the annihilation cross section can be calculated independently of unknown particle physics parameters. Whereas the relic density of photinos with the choice of parameters in the previous paper turned out to be only a few percent of the closure density, the corresponding value for Omega in the generic higgsino or Majorana case is about 0.2, in excellent agreement with the value associated with galaxies and one which is sufficient to give the halo mass.
Parameter estimation of qubit states with unknown phase parameter
NASA Astrophysics Data System (ADS)
Suzuki, Jun
2015-02-01
We discuss a problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for the mean square errors (MSEs) incurred when estimating the relevant parameters with separable measurements, based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement that attains the HGM bound and discuss its properties. We show that the HGM bound for the relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.
NASA Astrophysics Data System (ADS)
Barbarossa, S.; Farina, A.
A novel scheme for detecting moving targets with synthetic aperture radar (SAR) is presented. The proposed approach is based on the use of the Wigner-Ville distribution (WVD) for simultaneously detecting moving targets and estimating their kinematic motion parameters. The estimation plays a key role in focusing the target and correctly locating it with respect to the stationary background. The method has a number of advantages: (i) the detection is performed efficiently on the samples in the time-frequency domain provided by the WVD, without resorting to a bank of filters, each one matched to possible values of the unknown target motion parameters; (ii) the estimation of the target motion parameters can be done in the same time-frequency domain by locating the line along which the maximum energy of the WVD is concentrated. A validation of the approach is given by both analytical and simulation means. In addition, the estimation of the target kinematic parameters and the corresponding image focusing are also demonstrated.
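The time-frequency idea can be sketched in miniature: for a linear chirp (a stand-in for a moving-target echo), the WVD concentrates energy along a line whose position tracks the instantaneous frequency, so the motion parameters appear as the slope and intercept of that line. All signal parameters below are invented.

```python
import numpy as np

def wvd_column(x, n, half):
    """One time slice of a discrete pseudo-Wigner-Ville distribution:
    an FFT over the lag product x[n+m] * conj(x[n-m])."""
    m = np.arange(-half, half)
    r = x[n + m] * np.conj(x[n - m])
    return np.abs(np.fft.fft(r))

# Synthetic echo: a unit-amplitude linear chirp (illustrative parameters).
N, f0, a = 256, 0.05, 0.0005
n = np.arange(N)
x = np.exp(2j * np.pi * (f0 * n + 0.5 * a * n**2))

half, n0 = 32, 128
col = wvd_column(x, n0, half)
k_peak = int(np.argmax(col))
# The lag product of a chirp is a pure tone at twice the instantaneous
# frequency f0 + a*n0, so the peak bin should be near this value (mod FFT size).
expected = (2 * (f0 + a * n0) * 2 * half) % (2 * half)
```

Repeating this over n traces out the energy ridge; fitting a line to the ridge recovers the chirp (i.e., motion) parameters without a matched-filter bank.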
Identifying Bearing Rotordynamic Coefficients using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Miller, Brad A.; Howard, Samuel A.
2008-01-01
An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor-dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor-bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
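A scaled-down illustration of the technique: an EKF estimating a single unknown stiffness from noisy displacement measurements of a one-degree-of-freedom oscillator, with the unknown coefficient appended to the state vector as a random-walk (Gauss-Markov) state. The paper's eight-coefficient bearing model is much richer; every value below is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

m_, c_, k_true, dt = 1.0, 0.4, 200.0, 1e-3   # illustrative oscillator
steps = 4000

def f(z):                      # Euler-discretized dynamics, state z = [x, v, k]
    x, v, k = z
    return np.array([x + v * dt, v + (-(k / m_) * x - (c_ / m_) * v) * dt, k])

def F(z):                      # Jacobian of f
    x, v, k = z
    return np.array([[1.0, dt, 0.0],
                     [-(k / m_) * dt, 1.0 - (c_ / m_) * dt, -(x / m_) * dt],
                     [0.0, 0.0, 1.0]])

H = np.array([[1.0, 0.0, 0.0]])       # measure displacement only
R = np.array([[1e-6]])                # measurement noise covariance
Q = np.diag([0.0, 0.0, 1e-2])        # lets the stiffness estimate wander

# Simulate the "true" response with measurement noise.
z_true, ys = np.array([1.0, 0.0, k_true]), []
for _ in range(steps):
    z_true = f(z_true)
    ys.append(z_true[0] + rng.normal(0.0, 1e-3))

# EKF with a deliberately wrong initial stiffness guess of 120.
z = np.array([1.0, 0.0, 120.0])
P = np.diag([1e-4, 1e-4, 1e4])
for y in ys:
    Fk = F(z)
    z, P = f(z), Fk @ P @ Fk.T + Q               # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    z = z + (K @ (np.array([y]) - H @ z)).ravel() # update
    P = (np.eye(3) - K @ H) @ P

k_est = z[2]
# The stiffness estimate should move from the 120 guess toward 200.
```

As the abstract notes, the tunable Q and R entries and the initial covariance P strongly shape how fast and how reliably the parameter states converge.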
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Tingley, M.
2016-12-01
Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy and biased, with gaps of missing values, and they reflect land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. 
Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
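The empirical-versus-full-Bayes effect can be reproduced in a toy conjugate model: plugging in a point estimate of an unknown variance yields a narrower (overconfident) credible interval than marginalizing over it. This sketch is not the paper's hierarchical tide-gauge algorithm; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy analog: infer a mean "sea level" from a few noisy observations,
# with the noise variance itself an unknown parameter.
y = rng.normal(0.0, 1.0, size=5)
n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)

draws = 100_000
# Full Bayes (Jeffreys prior): draw sigma^2 from its marginal posterior,
# then mu given sigma^2, so mu is marginally t-distributed.
sig2 = (n - 1) * s2 / rng.chisquare(n - 1, draws)
mu_full = rng.normal(ybar, np.sqrt(sig2 / n))

# Empirical Bayes: plug in the point estimate s2 as if it were known.
mu_emp = rng.normal(ybar, np.sqrt(s2 / n), draws)

width = lambda d: np.quantile(d, 0.975) - np.quantile(d, 0.025)
# Ignoring parameter uncertainty makes the empirical-Bayes credible
# interval narrower than the full-Bayes one.
```

The same mechanism, scaled up to a spatiotemporal field with many parameters, underlies the too-narrow error bars diagnosed with the rank histograms above.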
NASA Astrophysics Data System (ADS)
Hoppe, C. J. M.; Langer, G.; Rokitta, S. D.; Wolf-Gladrow, D. A.; Rost, B.
2012-07-01
The growing field of ocean acidification research is concerned with the investigation of organism responses to increasing pCO2 values. One important approach in this context is culture work using seawater with adjusted CO2 levels. As aqueous pCO2 is difficult to measure directly in small-scale experiments, it is generally calculated from two other measured parameters of the carbonate system (often AT, CT or pH). Unfortunately, the overall uncertainties of measured and subsequently calculated values are often unknown. Especially under high pCO2, this can become a severe problem with respect to the interpretation of physiological and ecological data. In the few datasets from ocean acidification research where all three of these parameters were measured, pCO2 values calculated from AT and CT are typically about 30% lower (i.e. ~300 μatm at a target pCO2 of 1000 μatm) than those calculated from AT and pH or CT and pH. This study presents and discusses these discrepancies as well as likely consequences for the ocean acidification community. Until this problem is solved, one has to consider that calculated parameters of the carbonate system (e.g. pCO2, calcite saturation state) may not be comparable between studies, and that this may have important implications for the interpretation of CO2 perturbation experiments.
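For orientation, here is a minimal sketch of how one such calculated quantity is obtained from a measured pair (CT and pH). The equilibrium constants below are rough illustrative values for about 25 °C and S = 35; real work uses standard routines (e.g. CO2SYS) with carefully chosen constants and pH scales.

```python
# Illustrative constants: Henry's law K0 and the first and second
# dissociation constants of carbonic acid (approximate values only).
K0, K1, K2 = 2.84e-2, 10**-5.86, 10**-8.92

def pco2_from_ct_ph(ct, ph):
    """ct in mol/kg; returns pCO2 in microatm."""
    h = 10.0**(-ph)
    # Fraction of CT present as dissolved CO2 at this [H+]:
    co2_star = ct * h**2 / (h**2 + K1 * h + K1 * K2)
    return co2_star / K0 * 1e6

p = pco2_from_ct_ph(2.0e-3, 8.1)
# For CT = 2000 umol/kg and pH 8.1 this lands in the few-hundred-microatm
# range typical of surface seawater.
```

Because pCO2 is never measured but always propagated through such equations, a bias in any measured input pair (or in the constants) carries straight through to the calculated value, which is exactly the ~30% discrepancy the abstract reports.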
NASA Astrophysics Data System (ADS)
Capozzi, Francesco; Lisi, Eligio; Marrone, Antonio
2016-04-01
Within the standard 3ν oscillation framework, we illustrate the status of currently unknown oscillation parameters: the θ23 octant, the mass hierarchy (normal or inverted), and the possible CP-violating phase δ, as derived by a (preliminary) global analysis of oscillation data available in 2015. We then discuss some challenges that will be faced by future, high-statistics analyses of spectral data, starting with one-dimensional energy spectra in reactor experiments, and concluding with two-dimensional energy-angle spectra in large-volume atmospheric experiments. It is shown that systematic uncertainties in the spectral shapes can noticeably affect the prospective sensitivities to unknown oscillation parameters, in particular to the mass hierarchy.
Herrera Lara, Susana; Fernández-Fabrellas, Estrella; Juan Samper, Gustavo; Marco Buades, Josefa; Andreu Lapiedra, Rafael; Pinilla Moreno, Amparo; Morales Suárez-Varela, María
2017-10-01
The usefulness of clinical, radiological and pleural fluid analytical parameters for diagnosing malignant and paramalignant pleural effusion is not clearly established. This study therefore aimed to identify possible predictor variables for diagnosing malignancy in pleural effusion of unknown aetiology. Clinical, radiological and pleural fluid analytical parameters were obtained from consecutive patients who had suffered pleural effusion of unknown aetiology. They were classified into three groups according to their final diagnosis: malignant, paramalignant and benign pleural effusion. The CHAID (Chi-square automatic interaction detector) methodology was used to estimate the implication of the clinical, radiological and analytical variables in daily practice through decision trees. Among the 71 patients (31 malignant, 15 paramalignant and 25 benign effusions), smoking habit, dyspnoea, weight loss, radiological characteristics (mass, node, adenopathies and pleural thickening) and pleural fluid analytical parameters (pH and glucose) distinguished malignant and paramalignant pleural effusions (all with p < 0.05). Decision tree 1 classified 77.8% of malignant and paramalignant pleural effusions in step 2. Decision tree 2 classified 83.3% of malignant pleural effusions in step 2, 73.3% of paramalignant pleural effusions and 91.7% of benign ones. These data suggest that the identified predictor values, applied to tree diagrams requiring no extraordinary measures, correctly identify malignant, paramalignant and benign effusions at a higher rate than currently available techniques and are most useful in routine clinical practice. Future studies are still needed to further improve the classification of patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Sen; Zhang, Wei; Lian, Jianming
This two-part paper considers the coordination of a population of Thermostatically Controlled Loads (TCLs) with unknown parameters to achieve group objectives. The problem involves designing the bidding and market clearing strategy to motivate self-interested users to realize efficient energy allocation subject to a peak power constraint. The companion paper (Part I) formulates the problem and proposes a load coordination framework using the mechanism design approach. To address the unknown parameters, Part II of this paper presents a joint state and parameter estimation framework based on the expectation maximization algorithm. The overall framework is then validated using real-world weather data and price data, and is compared with other approaches in terms of aggregated power response. Simulation results indicate that our coordination framework can effectively improve the efficiency of the power grid operations and reduce power congestion at key times.
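The EM-based joint state/parameter idea can be sketched on a scalar analog: a hidden AR(1) "load state" observed through noise, whose coefficient is unknown. The E-step runs a Kalman filter and RTS smoother under the current parameter; the M-step updates the parameter in closed form. The paper's TCL model and data are far richer; every value below is invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate x_k = a*x_{k-1} + w_k observed as y_k = x_k + v_k.
a_true, q, r, n = 0.9, 0.1, 0.1, 2000
x = np.zeros(n)
for k in range(1, n):
    x[k] = a_true * x[k - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), n)

a = 0.5                                    # deliberately wrong initial guess
for _ in range(50):
    # E-step: Kalman filter ...
    xf = np.zeros(n); Pf = np.zeros(n); xp = np.zeros(n); Pp = np.zeros(n)
    xf[0], Pf[0] = y[0], r
    for k in range(1, n):
        xp[k], Pp[k] = a * xf[k - 1], a * a * Pf[k - 1] + q
        K = Pp[k] / (Pp[k] + r)
        xf[k], Pf[k] = xp[k] + K * (y[k] - xp[k]), (1 - K) * Pp[k]
    # ... and RTS smoother under the current a.
    xs, Ps, J = xf.copy(), Pf.copy(), np.zeros(n)
    for k in range(n - 2, -1, -1):
        J[k] = Pf[k] * a / Pp[k + 1]
        xs[k] = xf[k] + J[k] * (xs[k + 1] - xp[k + 1])
        Ps[k] = Pf[k] + J[k]**2 * (Ps[k + 1] - Pp[k + 1])
    # M-step: closed-form update of the unknown AR coefficient, using the
    # smoothed second moments E[x_k x_{k-1}] = xs_k*xs_{k-1} + J_{k-1}*Ps_k.
    num = np.sum(xs[1:] * xs[:-1] + J[:-1] * Ps[1:])
    den = np.sum(xs[:-1]**2 + Ps[:-1])
    a = num / den

# a should have moved from 0.5 to near the true value 0.9.
```

Each iteration alternates "estimate the hidden states given the parameter" with "re-estimate the parameter given the states," which is the structure the abstract describes for the unknown TCL parameters.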
Kaklamanos, James; Baise, Laurie G.; Boore, David M.
2011-01-01
The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
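One of the simplest of these geometric relations can be stated in a few lines: for a vertical fault, the closest distance to the rupture plane (Rrup) follows from the Joyner-Boore distance (Rjb) and the depth to the top of rupture (Ztor). The paper's equations also cover dipping faults; this special case is shown only as a sketch.

```python
import math

def rrup_vertical_fault(rjb_km, ztor_km):
    """Rrup for a vertical fault: hypotenuse of Rjb and Ztor."""
    return math.hypot(rjb_km, ztor_km)

# Illustrative values: a site 10 km (Rjb) from a rupture buried 3 km deep.
r = rrup_vertical_fault(10.0, 3.0)   # sqrt(10^2 + 3^2) ~ 10.44 km
```

Relations like this let a practitioner fill in a required distance measure when only one of the three NGA distances is available.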
The application of the pilot points in groundwater numerical inversion model
NASA Astrophysics Data System (ADS)
Hu, Bin; Teng, Yanguo; Cheng, Lirong
2015-04-01
Numerical inverse modeling has been widely applied in groundwater simulation. Compared to traditional forward modeling, inverse modeling offers more room for investigation. Zonation and cell-by-cell inversion are the conventional methods; the pilot-point method lies between them. Traditional inverse modeling often uses software to divide the model into several zones, with only a few parameters to be inverted. However, such a distribution is usually too simple, and the simulation results deviate. Cell-by-cell inversion would in theory recover the most realistic parameter distribution, but it greatly increases computational complexity and requires large quantities of survey data for geostatistical simulation of the area. In contrast, the pilot-point method distributes a set of points throughout the model domains for parameter estimation. Property values are assigned to model cells by kriging, so that the heterogeneity of parameters within geological units is preserved. This reduces the geostatistical data requirements of the simulation area and bridges the gap between the methods above. Pilot points can save calculation time and improve the goodness of fit, and they also reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, we use pilot points in a field whose structural heterogeneity and hydraulic parameters were unknown, compare the inverse-modeling results of the zonation and pilot-point methods, and through comparative analysis explore the characteristics of pilot points in groundwater inverse models. First, the modeler generates an initial spatially correlated field for a given geostatistical model from the description of the case site, using the software Groundwater Vistas 6.
Second, kriging is defined to obtain the values of the field functions over the model domain from their values at measurement and pilot-point locations (hydraulic conductivity); we then assign pilot points to the interpolated field, which has been divided into four zones, and add a range of disturbance values to the inversion targets to calculate the hydraulic conductivity. Third, after inverse calculation (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. After the inverse modeling, the following major conclusions can be drawn: (1) In a field whose structural formation is heterogeneous, the results of the pilot-point method are more realistic: the parameters fit better and the numerical simulation is more stable (stable residual distribution). Compared to zonation, the method better reflects the heterogeneity of the study field. (2) The pilot-point method ensures that each parameter is sensitive and not entirely dependent on other parameters, which guarantees the relative independence and authenticity of the parameter estimation results. However, it costs more computation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
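The kriging step at the heart of the pilot-point method can be sketched in one dimension: spread log-conductivity values given at a few pilot points onto model cells. The covariance model, locations and values below are all invented for illustration.

```python
import numpy as np

def cov(d, sill=1.0, length=50.0):
    """Exponential covariance model (illustrative sill and range)."""
    return sill * np.exp(-d / length)

xp = np.array([10.0, 40.0, 80.0])   # pilot-point locations
zp = np.array([-4.0, -3.0, -5.0])   # log10 hydraulic conductivity there

def ordinary_kriging(x0):
    """Ordinary-kriging estimate at x0 (weights constrained to sum to 1)."""
    n = len(xp)
    A = np.ones((n + 1, n + 1)); A[n, n] = 0.0
    A[:n, :n] = cov(np.abs(xp[:, None] - xp[None, :]))
    b = np.ones(n + 1)
    b[:n] = cov(np.abs(xp - x0))
    w = np.linalg.solve(A, b)[:n]
    return w @ zp

cells = np.array([ordinary_kriging(x) for x in np.linspace(0.0, 100.0, 11)])
# The interpolated field honors the pilot-point values exactly at their
# locations, while PEST is free to adjust zp during inversion.
```

In the actual workflow, PEST perturbs the pilot-point values and re-krigs the field each iteration, so heterogeneity is represented without one parameter per cell.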
Dopamine Receptor-Specific Contributions to the Computation of Value.
Burke, Christopher J; Soutschek, Alexander; Weber, Susanna; Raja Beharelle, Anjali; Fehr, Ernst; Haker, Helene; Tobler, Philippe N
2018-05-01
Dopamine is thought to play a crucial role in value-based decision making. However, the specific contributions of different dopamine receptor subtypes to the computation of subjective value remain unknown. Here we demonstrate how the balance between D1 and D2 dopamine receptor subtypes shapes subjective value computation during risky decision making. We administered the D2 receptor antagonist amisulpride or placebo before participants made choices between risky options. Compared with placebo, D2 receptor blockade resulted in more frequent choice of higher risk and higher expected value options. Using a novel model fitting procedure, we concurrently estimated the three parameters that define individual risk attitude according to an influential theoretical account of risky decision making (prospect theory). This analysis revealed that the observed reduction in risk aversion under amisulpride was driven by increased sensitivity to reward magnitude and decreased distortion of outcome probability, resulting in more linear value coding. Our data suggest that different components that govern individual risk attitude are under dopaminergic control, such that D2 receptor blockade facilitates risk taking and expected value processing.
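The three-parameter prospect-theory machinery referred to above can be sketched as follows. The functional forms are the standard Tversky-Kahneman ones (a power value function and a one-parameter probability-weighting function); the parameter values are illustrative, not the study's estimates.

```python
def weight(p, gamma):
    """Tversky-Kahneman probability-weighting function; gamma = 1 is linear."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

def subjective_value(x, p, alpha, gamma):
    """Weighted value of a gamble paying x with probability p."""
    return weight(p, gamma) * x**alpha

# Lower probability distortion (gamma -> 1) and higher magnitude sensitivity
# (alpha -> 1) make value coding more linear, which favors higher-expected-
# value gambles -- the direction of the reported amisulpride effect.
v_low_linear  = subjective_value(40.0, 0.5, alpha=0.7, gamma=0.6)
v_more_linear = subjective_value(40.0, 0.5, alpha=0.9, gamma=0.9)
```

Fitting alpha, gamma (and a loss-aversion parameter) per participant is what lets the study attribute the drug effect to specific components of risk attitude.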
Half-blind remote sensing image restoration with partly unknown degradation
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
The problem of image restoration has been extensively studied for its practical importance and theoretical interest. This paper mainly discusses the problem of image restoration with a partly unknown kernel: the form of the degradation kernel is known, but its parameters are not. With this model, the parameters of the Gaussian kernel and the real image must be estimated simultaneously. For this new problem, a total variation restoration model is proposed and an intersect-direction iteration algorithm is designed. Peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) are used to assess the performance of the method. Numerical results show that the parameters of the kernel can be estimated accurately, and the new method achieves both much higher PSNR and much higher SSIM than the expectation maximization (EM) method in many cases. In addition, the accuracy of the estimation is not sensitive to noise. Furthermore, even when the support of the kernel is unknown, the method can still produce accurate estimates.
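The "half-blind" setting can be illustrated in one dimension. Unlike the paper's harder joint problem, this sketch assumes a clean test scene is available, so the unknown Gaussian width can be recovered by a simple grid search; it is meant only to show why the kernel parameter is identifiable.

```python
import numpy as np

rng = np.random.default_rng(4)

def gauss_kernel(sigma, radius=8):
    """Normalized 1-D Gaussian blur kernel of the assumed parametric form."""
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    return k / k.sum()

x = (np.sin(np.linspace(0, 20, 256)) > 0).astype(float)   # piecewise scene
y = np.convolve(x, gauss_kernel(2.0), mode="same")         # true sigma = 2
y += rng.normal(0, 1e-3, y.size)                           # mild noise

sigmas = np.linspace(0.5, 4.0, 36)
errs = [np.sum((np.convolve(x, gauss_kernel(s), mode="same") - y)**2)
        for s in sigmas]
sigma_hat = sigmas[int(np.argmax(np.negative(errs)))]
# sigma_hat should land at (or next to) the true value 2.0.
```

In the paper both the image and sigma are unknown, so the two estimates are alternated (the intersect-direction iteration) rather than read off a grid search.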
Identification of linear system models and state estimators for controls
NASA Technical Reports Server (NTRS)
Chen, Chung-Wen
1992-01-01
The following paper is presented in viewgraph format and covers topics including: (1) linear state feedback control system; (2) Kalman filter state estimation; (3) relation between residual and stochastic part of output; (4) obtaining Kalman filter gain; (5) state estimation under unknown system model and unknown noises; and (6) relationship between filter Markov parameters and system Markov parameters.
Krajbich, Ian; Rangel, Antonio
2011-08-16
How do we make decisions when confronted with several alternatives (e.g., on a supermarket shelf)? Previous work has shown that accumulator models, such as the drift-diffusion model, can provide accurate descriptions of the psychometric data for binary value-based choices, and that the choice process is guided by visual attention. However, the computational processes used to make choices in more complicated situations involving three or more options are unknown. We propose a model of trinary value-based choice that generalizes what is known about binary choice, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixation data using the same parameters that were estimated in previous work on binary choice. Our findings suggest that the brain uses similar computational processes to make binary and trinary choices.
Coelho, Hélio José; Sampaio, Ricardo Aurélio Carvalho; Gonçalvez, Ivan de Oliveira; Aguiar, Samuel da Silva; Palmeira, Rafael; Oliveira, José Fernando de; Asano, Ricardo Yukio; Sampaio, Priscila Yukari Sewo; Uchida, Marco Carlos
2016-01-01
In elderly people, measurement of several anthropometric parameters may present complications. Although neck circumference measurements seem to avoid these issues, the cutoffs and cardiovascular risk factors associated with this parameter among elderly people remain unknown. This study was developed to identify the cutoff values and cardiovascular risk factors associated with neck circumference measurements among elderly people. Cross-sectional study conducted in two community centers for elderly people. 435 elderly adults (371 women and 64 men) were recruited. These volunteers underwent morphological evaluations (body mass index and waist, hip, and neck circumferences) and hemodynamic evaluations (blood pressure values and heart rate). Receiver operating characteristic curve analyses were used to determine the predictive validity of cutoff values for neck circumference for identifying overweight/obesity. Multivariate analysis was used to identify cardiovascular risk factors associated with large neck circumference. Cutoff values for neck circumference (men = 40.5 cm and women = 35.7 cm), for detection of obese older adults according to body mass index, were identified. In a second analysis, large neck circumference was shown to be associated with elevated body mass index in men, and with elevated body mass index, blood pressure values, and prevalence of type 2 diabetes and hypertension in women. The data indicate that neck circumference can be used as a screening tool to identify overweight/obesity in older people. Moreover, large neck circumference values may be associated with cardiovascular risk factors.
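The ROC cutoff-selection step can be sketched with simulated data and the Youden index. The two groups below are invented, not the study's sample (which reported cutoffs of 40.5 cm for men and 35.7 cm for women).

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical neck circumferences (cm) for obese and non-obese groups.
neck_obese = rng.normal(38.0, 2.0, 200)
neck_lean  = rng.normal(34.0, 2.0, 200)

def youden(c):
    """Youden index J = sensitivity + specificity - 1 at cutoff c."""
    sens = np.mean(neck_obese >= c)
    spec = np.mean(neck_lean < c)
    return sens + spec - 1.0

cuts = np.linspace(30.0, 42.0, 121)
best = cuts[int(np.argmax([youden(c) for c in cuts]))]
# With equal-variance normal groups the optimal cutoff sits near the
# midpoint of the two group means (36 here).
```

Sweeping the cutoff and maximizing J is one standard way of turning an ROC curve into a single screening threshold like those reported above.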
Castro Sanchez, Amparo Yovanna; Aerts, Marc; Shkedy, Ziv; Vickerman, Peter; Faggiano, Fabrizio; Salamina, Guiseppe; Hens, Niel
2013-03-01
The hepatitis C virus (HCV) and the human immunodeficiency virus (HIV) are a clear threat for public health, with high prevalences especially in high risk groups such as injecting drug users. People with HIV infection who are also infected by HCV suffer from a more rapid progression to HCV-related liver disease and have an increased risk for cirrhosis and liver cancer. Quantifying the impact of HIV and HCV co-infection is therefore of great importance. We propose a new joint mathematical model accounting for co-infection with the two viruses in the context of injecting drug users (IDUs). Statistical concepts and methods are used to assess the model from a statistical perspective, in order to get further insights in: (i) the comparison and selection of optional model components, (ii) the unknown values of the numerous model parameters, (iii) the parameters to which the model is most 'sensitive' and (iv) the combinations or patterns of values in the high-dimensional parameter space which are most supported by the data. Data from a longitudinal study of heroin users in Italy are used to illustrate the application of the proposed joint model and its statistical assessment. The parameters associated with contact rates (sharing syringes) and the transmission rates per syringe-sharing event are shown to play a major role. Copyright © 2013 Elsevier B.V. All rights reserved.
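A heavily simplified compartmental sketch of joint HCV/HIV transmission among IDUs: proportions susceptible (S), HCV-only (H), HIV-only (V) and co-infected (C). The paper's joint model has many more components and data-driven parameters; every rate below is invented for illustration.

```python
# Transmission rates (per unit time) and Euler integration settings,
# all illustrative: HCV spreads faster than HIV via shared syringes.
beta_h, beta_v, dt, T = 0.30, 0.05, 0.01, 3000
S, H, V, C = 0.99, 0.005, 0.005, 0.0
for _ in range(T):
    lam_h = beta_h * (H + C)          # force of infection, HCV
    lam_v = beta_v * (V + C)          # force of infection, HIV
    dS = -(lam_h + lam_v) * S
    dH = lam_h * S - lam_v * H        # HCV-only gains S, loses to co-infection
    dV = lam_v * S - lam_h * V        # HIV-only gains S, loses to co-infection
    dC = lam_v * H + lam_h * V
    S, H, V, C = S + dS * dt, H + dH * dt, V + dV * dt, C + dC * dt

total = S + H + V + C
# The scheme conserves the population, and with beta_h > beta_v the
# HCV-only prevalence outgrows the HIV-only prevalence.
```

Sensitivity of such trajectories to the contact and per-event transmission rates is precisely why the statistical assessment in the abstract singles those parameters out.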
Blood gases, biochemistry, and hematology of Galapagos green turtles (Chelonia mydas).
Lewbart, Gregory A; Hirschfeld, Maximilian; Denkinger, Judith; Vasco, Karla; Guevara, Nataly; García, Juan; Muñoz, Juanpablo; Lohmann, Kenneth J
2014-01-01
The green turtle, Chelonia mydas, is an endangered marine chelonian with a circum-global distribution. Reference blood parameter intervals have been published for some chelonian species, but baseline hematology, biochemical, and blood gas values are lacking from the Galapagos sea turtles. Analyses were done on blood samples drawn from 28 green turtles captured in two foraging locations on San Cristóbal Island (14 from each site). Of these turtles, 20 were immature and of unknown sex; the other eight were males (five mature, three immature). A portable blood analyzer (iSTAT) was used to obtain near immediate field results for pH, lactate, pO2, pCO2, HCO3-, Hct, Hb, Na, K, iCa, and Glu. Parameter values affected by temperature were corrected in two ways: (1) with standard formulas; and (2) with auto-corrections made by the iSTAT. The two methods yielded clinically equivalent results. Standard laboratory hematology techniques were employed for the red and white blood cell counts and the hematocrit determination, which was also compared to the hematocrit values generated by the iSTAT. Of all blood analytes, only lactate concentrations were positively correlated with body size. All other values showed no significant difference between the two sample locations nor were they correlated with body size or internal temperature. For hematocrit count, the iSTAT blood analyzer yielded results indistinguishable from those obtained with high-speed centrifugation. The values reported in this study provide baseline data that may be useful in comparisons among populations and in detecting changes in health status among Galapagos sea turtles. The findings might also be helpful in future efforts to demonstrate associations between specific biochemical parameters and disease.
Selection of entropy-measure parameters for knowledge discovery in heart rate variability data
2014-01-01
Background Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. Methods This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. Results The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. 
Conclusions Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary. PMID:25078574
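As an illustration of the entropy measures and threshold choices discussed above, here is a minimal sample entropy sketch using the common tolerance r = 0.2σ (the function name and test signals are ours, not the study's code; the study also covers approximate, fuzzy, and fuzzy measure entropies, which are not shown):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r, N) with tolerance r = r_factor * std(x).

    Counts template matches of length m and m+1 (Chebyshev distance,
    self-matches excluded) and returns -log(A / B).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * x.std()

    def count_matches(mm):
        # All overlapping templates of length mm.
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (no self-matches).
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))   # highly regular signal
noisy = rng.standard_normal(400)                    # irregular signal
print(sample_entropy(regular), sample_entropy(noisy))
```

The regular signal scores far lower than the irregular one; as the tests above caution, though, such comparisons depend on the chosen r and data length N.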
Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.
Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried
2014-01-01
Analytical Incorporation of Velocity Parameters into Ice Sheet Elevation Change Rate Computations
NASA Astrophysics Data System (ADS)
Nagarajan, S.; Ahn, Y.; Teegavarapu, R. S. V.
2014-12-01
NASA, ESA, and various other agencies have been collecting laser, optical, and RADAR altimetry data through various missions to study elevation changes of the cryosphere. Laser altimetry collected by airborne and spaceborne missions has provided multi-temporal coverage of Greenland and Antarctica from 1993 to the present. Although these missions have increased data coverage, it is still sparse both spatially and temporally for accurate elevation change detection studies, given the dynamic nature of the ice surface. The temporal and spatial gaps are usually filled by interpolation techniques. This presentation will demonstrate a method to improve the temporal interpolation. Considering their accuracy, repeat coverage, and spatial distribution, laser scanning data have been widely used to compute elevation change rates of the Greenland and Antarctic ice sheets. A major shortcoming of these approaches is that ice sheet velocity dynamics are not considered in the change rate computations. Although the correlation between velocity and elevation change rate was noted by Hurkmans et al. (2012), corrections for velocity changes were applied after computing elevation change rates, by assuming a linear or higher-order polynomial relationship. This research will discuss the possibility of parameterizing ice sheet dynamics as unknowns (dX and dY) in the adjustment mathematical model that computes elevation change (dZ) rates, i.e., a simultaneous computation of changes in all three directions of the ice surface. Moreover, the laser points between two time epochs in a crossover area differ in distribution and count. Therefore, a registration method that does not require point-to-point correspondence is needed to recover the unknown elevation and velocity parameters.
This research will explore the possibility of registering multi-temporal datasets using a volume minimization algorithm, which determines the unknown dX, dY, and dZ that minimize the volume between two or more time-epoch point clouds. To make use of other existing data and to constrain the adjustment, InSAR velocities will be used as initial values for the parameters dX and dY. The presentation will discuss the results of the analytical incorporation of parameters and the volume-based registration method for a test site in Greenland.
NASA Astrophysics Data System (ADS)
Farhadi, L.; Abdolghafoorian, A.
2015-12-01
The land surface is a key component of the climate system. It controls the partitioning of available energy at the surface between sensible and latent heat, and the partitioning of available water between evaporation and runoff. The water and energy cycles are intrinsically coupled through evaporation, which represents a heat exchange as latent heat flux. Accurate estimation of fluxes of heat and moisture is of significant importance in many fields, such as hydrology, climatology, and meteorology. In this study we develop and apply a Bayesian framework for estimating the key unknown parameters of the terrestrial water and energy balance equations (i.e., moisture and heat diffusion) and their uncertainty in land surface models. These equations are coupled through the flux of evaporation. The estimation system is based on the adjoint method for solving a least-squares optimization problem. The cost function consists of aggregated errors on the states (i.e., moisture and temperature) with respect to observations and on the parameter estimates with respect to prior values over the entire assimilation period. This cost function is minimized with respect to the parameters to identify models of sensible heat, latent heat/evaporation, and drainage and runoff. The inverse of the Hessian of the cost function approximates the posterior uncertainty of the parameter estimates. The uncertainty of the estimated fluxes is obtained by propagating the parameter uncertainty through linear and nonlinear functions of the key parameters using the First Order Second Moment (FOSM) method. Uncertainty analysis is used in this method to guide the formulation of a well-posed estimation problem. The accuracy of the method is assessed at point scale using surface energy and water fluxes generated by the Simultaneous Heat and Water (SHAW) model at selected AmeriFlux stations. This method can be applied to diverse climates and land surface conditions at different spatial scales, using remotely sensed measurements of surface moisture and temperature states.
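The inverse-Hessian and FOSM steps can be sketched on a toy problem. The flux model, parameter names, and values below are illustrative stand-ins, not the paper's balance equations; the sketch shows only how a Gauss-Newton Hessian approximation yields a parameter covariance, and how that covariance is propagated through a derived flux:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the water/energy balance: a flux q(t) = k * exp(-c * t)
# with unknown parameters theta = (k, c).
t = np.linspace(0.0, 5.0, 50)
theta_true = np.array([2.0, 0.7])
rng = np.random.default_rng(1)
sigma_obs = 0.02
obs = theta_true[0] * np.exp(-theta_true[1] * t) + sigma_obs * rng.standard_normal(t.size)

def residuals(theta):
    return (theta[0] * np.exp(-theta[1] * t) - obs) / sigma_obs

fit = least_squares(residuals, x0=[1.0, 1.0])

# Gauss-Newton approximation of the Hessian of the least-squares cost,
# H ~ J^T J; its inverse approximates the posterior covariance of theta.
J = fit.jac
cov_theta = np.linalg.inv(J.T @ J)

# FOSM: propagate parameter uncertainty through a derived flux f(theta),
# var(f) ~ g^T cov_theta g, with g the gradient of f at the estimate.
def flux(theta):  # e.g. integrated flux over the window
    return theta[0] / theta[1] * (1.0 - np.exp(-theta[1] * t[-1]))

eps = 1e-6
g = np.array([(flux(fit.x + eps * e) - flux(fit.x - eps * e)) / (2 * eps)
              for e in np.eye(2)])
var_flux = float(g @ cov_theta @ g)
print(fit.x, np.sqrt(var_flux))
```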
Variations on Bayesian Prediction and Inference
2016-05-09
There are a number of statistical inference problems that are not generally formulated via a full probability model. For the problem of inference about an unknown parameter, the Bayesian approach requires a full probability model/likelihood, which can be an obstacle.
NASA Technical Reports Server (NTRS)
Tao, Gang; Joshi, Suresh M.
2008-01-01
In this paper, the problem of controlling systems with failures and faults is introduced, and an overview of recent work on direct adaptive control for compensation of uncertain actuator failures is presented. Actuator failures may be characterized by some unknown system inputs being stuck at some unknown (fixed or varying) values at unknown time instants, that cannot be influenced by the control signals. The key task of adaptive compensation is to design the control signals in such a manner that the remaining actuators can automatically and seamlessly take over for the failed ones, and achieve desired stability and asymptotic tracking. A certain degree of redundancy is necessary to accomplish failure compensation. The objective of adaptive control design is to effectively use the available actuation redundancy to handle failures without the knowledge of the failure patterns, parameters, and time of occurrence. This is a challenging problem because failures introduce large uncertainties in the dynamic structure of the system, in addition to parametric uncertainties and unknown disturbances. The paper addresses some theoretical issues in adaptive actuator failure compensation: actuator failure modeling, redundant actuation requirements, plant-model matching, error system dynamics, adaptation laws, and stability, tracking, and performance analysis. Adaptive control designs can be shown to effectively handle uncertain actuator failures without explicit failure detection. Some open technical challenges and research problems in this important research area are discussed.
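The stuck-actuator compensation idea can be illustrated on a deliberately simple scalar plant. This is our sketch, not the paper's general multi-actuator framework: one actuator is stuck at an unknown constant, and the healthy actuator's adaptive term learns to cancel it via a standard Lyapunov-based adaptation law:

```python
import numpy as np

# Scalar plant xdot = -x + u1 + u2, where actuator 2 is stuck at an unknown
# constant.  The healthy actuator applies u1 = -theta_hat with adaptation
# theta_hat_dot = gamma * x; a Lyapunov argument (V = x^2/2 + err^2/(2*gamma))
# gives Vdot = -x^2, so x -> 0 without knowing the stuck value.
dt, gamma = 1e-3, 5.0
u_stuck = 0.8            # unknown to the controller
x, theta_hat = 1.0, 0.0
for _ in range(int(30 / dt)):
    u1 = -theta_hat               # healthy actuator takes over
    x += dt * (-x + u1 + u_stuck)
    theta_hat += dt * gamma * x   # adaptation law
print(x, theta_hat)
```

Note that the state converges and the adaptive term converges to the stuck value itself, i.e. the failure is compensated without explicit failure detection, which is the point made in the abstract.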
Estimation of nonlinear pilot model parameters including time delay.
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Roland, V. R.; Wells, W. R.
1972-01-01
Investigation of the feasibility of using a Kalman filter estimator for the identification of unknown parameters in nonlinear dynamic systems with a time delay. The problem considered is the application of estimation theory to determine the parameters of a family of pilot models containing delayed states. In particular, the pilot-plant dynamics are described by differential-difference equations of the retarded type. The pilot delay, included as one of the unknown parameters to be determined, is kept in pure form as opposed to the Pade approximations generally used for these systems. Problem areas associated with processing real pilot response data are included in the discussion.
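The parameter-as-state device behind Kalman-filter parameter identification can be sketched as follows. The time-delay aspect of the paper is omitted; this illustrative example (our own, with made-up values) only shows an unknown parameter appended to the state of an extended Kalman filter:

```python
import numpy as np

# Estimate the unknown coefficient a of x_{k+1} = a*x_k + w_k from noisy
# measurements y_k = x_k + v_k, by augmenting the state to z = [x, a].
rng = np.random.default_rng(2)
a_true, q, r = 0.9, 1e-2, 1e-2
xs = [1.0]
for _ in range(400):
    xs.append(a_true * xs[-1] + np.sqrt(q) * rng.standard_normal())
ys = np.array(xs[1:]) + np.sqrt(r) * rng.standard_normal(400)

z = np.array([0.5, 0.5])               # augmented state [x, a], rough guess
P = np.eye(2)
Q = np.diag([q, 1e-8])                 # slow random walk on the parameter
for y in ys:
    F = np.array([[z[1], z[0]], [0.0, 1.0]])   # Jacobian of f(x, a) = (a*x, a)
    z = np.array([z[1] * z[0], z[1]])          # predict
    P = F @ P @ F.T + Q
    H = np.array([1.0, 0.0])                   # measurement y = x + v
    S = H @ P @ H + r
    K = P @ H / S
    z = z + K * (y - z[0])                     # update
    P = P - np.outer(K, H @ P)
print(z[1])   # estimate of a_true
```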
Deng, Zhimin; Tian, Tianhai
2014-07-29
Advances in systems biology have produced a large number of sophisticated mathematical models describing the dynamic properties of complex biological systems. One of the major steps in developing mathematical models is to estimate the unknown parameters of the model based on experimentally measured quantities. However, experimental conditions limit the amount of data that is available for mathematical modelling, and the number of unknown parameters in a model may be larger than the number of observed data points. This imbalance between the amount of experimental data and the number of unknown parameters makes reverse-engineering problems particularly challenging. To address the issue of inadequate experimental data, we propose a continuous optimization approach for making reliable inference of model parameters. The approach first uses spline interpolation to generate continuous functions of the system dynamics as well as the first and second order derivatives of these functions. The expanded dataset forms the basis for inferring the unknown model parameters using various continuous optimization criteria, including the error of simulation only; the error of both simulation and the first derivative; or the error of simulation as well as the first and second derivatives. We use three case studies to demonstrate the accuracy and reliability of the proposed approach. Compared with the corresponding discrete criteria using experimental data at the measurement time points only, numerical results for the ERK kinase activation module show that the continuous absolute-error criteria using both the function and higher order derivatives generate estimates with better accuracy. This result is also supported by the second and third case studies, for the G1/S transition network and the MAP kinase pathway, respectively, suggesting that the continuous absolute-error criteria lead to more accurate estimates than the corresponding discrete criteria.
We also study the robustness property of these three models to examine the reliability of estimates. Simulation results show that the models with estimated parameters using continuous fitness functions have better robustness properties than those using the corresponding discrete fitness functions. The inference studies and robustness analysis suggest that the proposed continuous optimization criteria are effective and robust for estimating unknown parameters in mathematical models.
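The spline-expansion idea can be sketched on a minimal system. The decay model, values, and cost weighting below are ours for illustration, not the paper's case studies; the sketch shows sparse observations being interpolated, and a criterion that penalises both the simulation error and the first-derivative error supplied by the spline:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

# Sparse noisy observations of xdot = -k*x; estimate the unknown k.
k_true = 0.5
t_obs = np.linspace(0.0, 4.0, 6)                 # few measurement points
rng = np.random.default_rng(3)
x_obs = np.exp(-k_true * t_obs) * (1 + 0.01 * rng.standard_normal(6))

spline = CubicSpline(t_obs, x_obs)
t_dense = np.linspace(0.0, 4.0, 80)              # expanded dataset
x_dense = spline(t_dense)
dx_dense = spline(t_dense, 1)                    # spline first derivative

def cost(theta):
    k = theta[0]
    x_sim = x_obs[0] * np.exp(-k * t_dense)
    # continuous criterion: simulation error + first-derivative error
    return (np.abs(x_sim - x_dense).sum()
            + np.abs(-k * x_sim - dx_dense).sum())

k_hat = minimize(cost, x0=[0.1], method="Nelder-Mead").x[0]
print(k_hat)
```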
Iqbal, Muhammad; Rehan, Muhammad; Khaliq, Abdul; Saeed-ur-Rehman; Hong, Keum-Shik
2014-01-01
This paper investigates the chaotic behavior and synchronization of two different coupled chaotic FitzHugh-Nagumo (FHN) neurons with unknown parameters under external electrical stimulation (EES). The coupled FHN neurons of different parameters admit unidirectional and bidirectional gap junctions in the medium between them. Dynamical properties, such as the increase in synchronization error as a consequence of the deviation of neuronal parameters for unlike neurons, the effect of difference in coupling strengths caused by the unidirectional gap junctions, and the impact of large time-delay due to separation of neurons, are studied in exploring the behavior of the coupled system. A novel integral-based nonlinear adaptive control scheme, to cope with the infeasibility of the recovery variable, for synchronization of two coupled delayed chaotic FHN neurons of different and unknown parameters under uncertain EES is derived. Further, to guarantee robust synchronization of different neurons against disturbances, the proposed control methodology is modified to achieve the uniformly ultimately bounded synchronization. The parametric estimation errors can be reduced by selecting suitable control parameters. The effectiveness of the proposed control scheme is illustrated via numerical simulations.
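For orientation, a plain simulation of two gap-junction-coupled FitzHugh-Nagumo neurons is sketched below. This is not the paper's adaptive controller or its delayed, EES-driven model; it only illustrates, with generic parameter values of our choosing, that the synchronization error vanishes under sufficiently strong bidirectional coupling and persists without it:

```python
import numpy as np

def simulate(g, steps=200_000, dt=5e-4):
    """Two FitzHugh-Nagumo neurons with bidirectional gap-junction strength g;
    returns the mean |v1 - v2| over the final stretch of the run."""
    eps, a, b, I = 0.08, 0.7, 0.8, 0.5        # generic oscillatory regime
    v1, w1, v2, w2 = -1.0, 0.0, 1.0, 0.2      # distinct initial states
    errs = []
    for k in range(steps):
        dv1 = v1 - v1**3 / 3 - w1 + I + g * (v2 - v1)
        dv2 = v2 - v2**3 / 3 - w2 + I + g * (v1 - v2)
        w1 += dt * eps * (v1 + a - b * w1)
        w2 += dt * eps * (v2 + a - b * w2)
        v1 += dt * dv1
        v2 += dt * dv2
        if k >= steps - 60_000:
            errs.append(abs(v1 - v2))
    return float(np.mean(errs))

print(simulate(0.0), simulate(1.0))   # uncoupled vs. coupled error
```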
NASA Astrophysics Data System (ADS)
Kryanev, A. V.; Ivanov, V. V.; Romanova, A. O.; Sevastyanov, L. A.; Udumyan, D. K.
2018-03-01
This paper considers the problem of separating the trend and the chaotic component of a chaotic time series in the absence of information on the characteristics of the chaotic component. Such a problem arises in nuclear physics, biomedicine, and many other applied fields. The scheme has two stages. At the first stage, smoothing linear splines with different values of the smoothing parameter are used to separate the "trend component." At the second stage, the method of least squares is used to find the unknown variance σ² of the noise component.
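The two stages can be sketched as follows. The series, the smoothing-parameter choice, and the variance estimator below are our illustration (in practice the smoothing parameter must be chosen without knowing σ, e.g. by cross-validation), not the paper's exact scheme:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 500)
sigma_true = 0.3
y = np.sin(0.5 * t) + 0.1 * t + sigma_true * rng.standard_normal(t.size)

# Stage 1: smoothing linear spline (k=1) separates the trend; the smoothing
# parameter s bounds the residual sum of squares.
trend = UnivariateSpline(t, y, k=1, s=45.0)(t)

# Stage 2: least-squares estimate of the noise variance from the residuals.
resid = y - trend
sigma2_hat = float(np.mean(resid**2))
print(np.sqrt(sigma2_hat))
```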
NASA Astrophysics Data System (ADS)
Lubey, D.; Ko, H.; Scheeres, D.
The classical orbit determination (OD) method of dealing with unknown maneuvers is to restart the OD process with post-maneuver observations. However, it is also possible to continue the OD process through such maneuvers by representing them with an appropriate event representation. It has been shown in previous work (Ko & Scheeres, JGCD 2014) that any maneuver performed by a satellite transitioning between two arbitrary orbital states can be represented as an equivalent maneuver connecting those two states using Thrust-Fourier-Coefficients (TFCs). Event representation using TFCs rigorously provides a unique control law that can generate the desired secular behavior for a given unknown maneuver. This paper presents applications of this representation approach to the orbit prediction and maneuver detection problems across unknown maneuvers. The TFCs are appended to a sequential filter as an adjoint state to compensate for unknown perturbing accelerations, and the modified filter estimates the satellite state and thrust coefficients by processing OD across the time of an unknown maneuver. This modified sequential filter with TFCs is capable of fitting tracking data and maintaining an OD solution in the presence of unknown maneuvers. The modified filter is also effective in detecting a sudden change in TFC values, which indicates a maneuver. In order to illustrate that the event representation approach with TFCs is robust and sufficiently general to be easily adjustable, different types of measurement data are processed with the filter in a realistic LEO setting. Further, cases with mis-modeling of non-gravitational forces are included in our study to verify the versatility and efficiency of the presented algorithm. Simulation results show that the modified sequential filter with TFCs can detect maneuvers and estimate the orbit and thrust parameters in the presence of unknown maneuvers, with or without measurement data during the maneuvers.
With no measurement data during maneuvers, the modified filter with TFCs uses an existing pre-maneuver orbit solution to compute a post-maneuver orbit solution by forcing the TFCs to compensate for the unknown maneuver. With observation data available during maneuvers, the maneuver start and stop times are also determined.
Park, Soo Hyun; Talebi, Mohammad; Amos, Ruth I J; Tyteca, Eva; Haddad, Paul R; Szucs, Roman; Pohl, Christopher A; Dolan, John W
2017-11-10
Quantitative Structure-Retention Relationships (QSRR) are used to predict retention times of compounds based only on their chemical structures encoded by molecular descriptors. The main concern in QSRR modelling is to build models with high predictive power, allowing reliable retention prediction for unknown compounds across the chromatographic space. With the aim of enhancing the prediction power of the models, in this work our previously proposed QSRR modelling approach, called "federation of local models", is extended to ion chromatography to predict retention times of unknown ions, where a local model for each target ion (unknown) is created using only structurally similar ions from the dataset. A Tanimoto similarity (TS) score was utilised as a measure of structural similarity, and training sets were developed by including ions that were similar to the target ion, as defined by a threshold value. The prediction of the retention parameters (a- and b-values) in the linear solvent strength (LSS) model in ion chromatography, log k = a - b log[eluent], allows the prediction of retention times under all eluent concentrations. The QSRR models for a- and b-values were developed by a genetic algorithm-partial least squares method using the retention data of inorganic and small organic anions and larger organic cations (molecular mass up to 507) on four Thermo Fisher Scientific columns (AS20, AS19, AS11HC and CS17). The corresponding predicted retention times were calculated by fitting the predicted a- and b-values of the models into the LSS model equation. The predicted retention times were also plotted against the experimental values to evaluate the goodness of fit and the predictive power of the models. The application of a TS threshold of 0.6 was found to successfully produce predictive and reliable QSRR models (Q²(F2) > 0.8 and Mean Absolute Error < 0.1), and hence accurate retention time predictions with an average Mean Absolute Error of 0.2 min.
Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
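The LSS step above is a short calculation: once a and b are predicted for an ion, the retention factor and retention time follow at any eluent concentration. The parameter values and dead time below are made up for illustration:

```python
import numpy as np

def retention_time(a, b, eluent_conc, t0=1.5):
    """Retention time t_R = t0 * (1 + k), with log10 k = a - b*log10(c)
    (the LSS relationship log k = a - b*log[eluent])."""
    k = 10.0 ** (a - b * np.log10(eluent_conc))
    return t0 * (1.0 + k)

# A hypothetical anion with predicted a = 2.1, b = 1.4 on a 30 mM eluent:
print(retention_time(2.1, 1.4, 30.0))
```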
Multistationarity in mass action networks with applications to ERK activation.
Conradi, Carsten; Flockerzi, Dietrich
2012-07-01
Ordinary Differential Equations (ODEs) are an important tool in many areas of Quantitative Biology. For many ODE systems multistationarity (i.e. the existence of at least two positive steady states) is a desired feature. In general, establishing multistationarity is a difficult task, as realistic biological models are large in terms of states and (unknown) parameters and in most cases poorly parameterized (because of noisy measurement data on few components, a very small number of data points, and only a limited number of repetitions). For mass action networks, establishing multistationarity is hence equivalent to establishing the existence of at least two positive solutions of a large polynomial system with unknown coefficients. For mass action networks with certain structural properties, expressed in terms of the stoichiometric matrix and the reaction rate-exponent matrix, we present necessary and sufficient conditions for multistationarity that take the form of linear inequality systems. Solutions of these inequality systems define pairs of steady states and parameter values. We also present a sufficient condition to identify networks where the aforementioned conditions hold. To show the applicability of our results we analyse an ODE system defined by the mass action network describing the extracellular signal-regulated kinase (ERK) cascade (i.e. ERK activation).
An adaptive technique for a redundant-sensor navigation system.
NASA Technical Reports Server (NTRS)
Chien, T.-T.
1972-01-01
An on-line adaptive technique is developed to provide a self-contained redundant-sensor navigation system with a capability to utilize its full potentiality in reliability and performance. This adaptive system is structured as a multistage stochastic process of detection, identification, and compensation. It is shown that the detection system can be effectively constructed on the basis of a design value, specified by mission requirements, of the unknown parameter in the actual system, and of a degradation mode in the form of a constant bias jump. A suboptimal detection system on the basis of Wald's sequential analysis is developed using the concept of information value and information feedback. The developed system is easily implemented, and demonstrates a performance remarkably close to that of the optimal nonlinear detection system. An invariant transformation is derived to eliminate the effect of nuisance parameters such that the ambiguous identification system can be reduced to a set of disjoint simple hypotheses tests. By application of a technique of decoupled bias estimation in the compensation system the adaptive system can be operated without any complicated reorganization.
Dual Rate Adaptive Control for an Industrial Heat Supply Process Using Signal Compensation Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Tianyou; Jia, Yao; Wang, Hong
The industrial heat supply process (HSP) is a highly nonlinear cascaded process which uses a steam valve opening as its control input, the steam flow-rate as its inner-loop output, and the supply water temperature as its outer-loop output. The relationships between the heat exchange rate and the model parameters, such as steam density, entropy, fouling correction factor, and heat exchange efficiency, are unknown and nonlinear. Moreover, these model parameters vary with steam pressure, ambient temperature, and the residuals caused by quality variations of the circulation water. When the steam pressure and the ambient temperature are high and subjected to frequent external random disturbances, the supply water temperature and the steam flow-rate interact with each other and fluctuate significantly. This is also true when the process exhibits unknown variations of its dynamics caused by unexpected changes of the heat exchange residuals. As a result, it is difficult to keep the supply water temperature and the rate of change of the steam flow-rate well inside their targeted ranges. In this paper, a novel compensation-signal-based dual rate adaptive controller is developed by representing the unknown variations of dynamics as unmodeled dynamics. In the proposed controller design, a compensation signal is constructed and added onto the control signal obtained from the linear deterministic model based feedback control design. This compensation signal aims at eliminating the unmodeled dynamics and the rate of change of the currently sampled unmodeled dynamics. A successful industrial application is carried out, in which it is shown that both the supply water temperature and the rate of change of the steam flow-rate can be controlled well inside their targeted ranges when the process is subjected to unknown variations of its dynamics.
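The compensation-signal idea can be reduced to a one-line mechanism on a toy plant. This is our sketch, not the paper's dual-rate design: the plant contains unmodeled dynamics d_k, the nominal control comes from the linear model, and a compensation signal equal to the most recent one-step model mismatch is added to cancel d:

```python
import numpy as np

a, b, setpoint = 0.8, 1.0, 1.0      # nominal linear model y_{k+1} = a*y_k + b*u_k
def d(k, y):                        # unmodeled dynamics, unknown to the controller
    return 0.4 * np.sin(0.05 * k) + 0.1 * np.tanh(y)

results = {}
for compensate in (False, True):
    y, d_hat = 0.0, 0.0
    errs = []
    for k in range(500):
        u = (setpoint - a * y) / b           # model-based feedback
        if compensate:
            u -= d_hat / b                   # compensation signal
        y_next = a * y + b * u + d(k, y)
        d_hat = y_next - (a * y + b * u)     # one-step estimate of d_k
        y = y_next
        if k > 100:
            errs.append(abs(y - setpoint))
    results[compensate] = float(np.mean(errs))
print(results[False], results[True])
```

With compensation the tracking error is reduced to roughly the increment of d between samples, which is why the scheme targets slowly varying unmodeled dynamics.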
NASA Astrophysics Data System (ADS)
Soulis, K. X.; Valiantzas, J. D.
2012-03-01
The Soil Conservation Service Curve Number (SCS-CN) approach is widely used as a simple method for predicting direct runoff volume for a given rainfall event. The CN parameter values corresponding to various soil, land cover, and land management conditions can be selected from tables, but it is preferable to estimate the CN value from measured rainfall-runoff data if available. However, previous researchers indicated that the CN values calculated from measured rainfall-runoff data vary systematically with the rainfall depth. Hence, they suggested the determination of a single asymptotic CN value observed for very high rainfall depths to characterize the watershed's runoff response. In this paper, the hypothesis that the observed correlation between the calculated CN value and the rainfall depth in a watershed reflects the effect of soil and land cover spatial variability on its hydrologic response is tested. Based on this hypothesis, the simplified concept of a two-CN heterogeneous system is introduced to model the observed CN-rainfall variation by reducing the CN spatial variability into two classes. The behaviour of the CN-rainfall function produced by the simplified two-CN system is approached theoretically and analysed systematically, and it is found to be similar to the variation observed in natural watersheds. Synthetic data tests, natural watershed examples, and a detailed study of two natural experimental watersheds with known spatial heterogeneity characteristics were used to evaluate the method. The results indicate that the determination of CN values from rainfall-runoff data using the proposed two-CN system approach provides reasonable accuracy, and it outperforms previous methods based on the determination of a single asymptotic CN value. Although the suggested method increases the number of unknown parameters to three (instead of one), a clear physical reasoning for them is presented.
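The two-CN mechanism can be demonstrated with the standard SCS-CN equations. The curve numbers, areal fraction, and rainfall depths below are made up for illustration; the point is that the apparent CN back-calculated from the composite runoff declines systematically with rainfall depth, as described above:

```python
import numpy as np

def runoff(P, CN):
    """SCS-CN direct runoff (mm): Q = (P - 0.2S)^2 / (P + 0.8S) for P > 0.2S."""
    S = 25400.0 / CN - 254.0            # potential maximum retention (mm)
    Ia = 0.2 * S                        # initial abstraction
    return np.where(P > Ia, (P - Ia) ** 2 / (P + 0.8 * S), 0.0)

def apparent_cn(P, Q):
    """Invert the SCS-CN equation for S given an observed (P, Q) pair."""
    S = 5.0 * (P + 2.0 * Q - np.sqrt(4.0 * Q**2 + 5.0 * P * Q))
    return 25400.0 / (S + 254.0)

P = np.array([20.0, 50.0, 100.0, 200.0, 400.0])   # rainfall depths, mm
frac = 0.3                                        # 30% of area has the high CN
Q_mix = frac * runoff(P, 90.0) + (1 - frac) * runoff(P, 55.0)
print(apparent_cn(P, Q_mix))   # declines toward an asymptote as P grows
```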
Back analysis of geomechanical parameters in underground engineering using artificial bee colony.
Zhu, Changxing; Zhao, Hongbo; Zhao, Ming
2014-01-01
Accurate geomechanical parameters are critical in tunnel excavation, design, and support. In this paper, a displacement back analysis based on the artificial bee colony (ABC) algorithm is proposed to identify geomechanical parameters from monitored displacements. For problems with an analytical solution, ABC is used as a global optimization algorithm to search for the unknown geomechanical parameters directly. For problems without an analytical solution, optimal back analysis is time-consuming, so a least squares support vector machine (LSSVM) is used to build the relationship between the unknown geomechanical parameters and displacement and to improve the efficiency of the back analysis. The proposed method was applied to a tunnel with an analytical solution and a tunnel without one. The results show the proposed method is feasible.
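A compact ABC sketch is given below for the global-search role it plays in the back analysis. The objective here is a simple stand-in (the sphere function) rather than a geomechanical misfit, and the colony sizes and limits are generic choices of ours:

```python
import numpy as np

rng = np.random.default_rng(5)

def abc_minimize(f, lb, ub, n_food=20, limit=30, iters=200):
    """Minimal artificial bee colony: employed/onlooker bees make greedy
    local moves; scout bees abandon food sources stuck past `limit` trials."""
    dim = len(lb)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    foods = rng.uniform(lb, ub, (n_food, dim))
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, int)

    def neighbour(i):
        k, j = rng.integers(n_food), rng.integers(dim)
        x = foods[i].copy()
        x[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        return np.clip(x, lb, ub)

    for _ in range(iters):
        probs = fit.max() - fit + 1e-12       # better sources -> higher weight
        probs /= probs.sum()
        for i in list(range(n_food)) + list(rng.choice(n_food, n_food, p=probs)):
            cand = neighbour(i)
            fc = f(cand)
            if fc < fit[i]:
                foods[i], fit[i], trials[i] = cand, fc, 0
            else:
                trials[i] += 1
        for i in np.where(trials > limit)[0]:  # scouts restart stale sources
            foods[i] = rng.uniform(lb, ub)
            fit[i] = f(foods[i])
            trials[i] = 0
    best = fit.argmin()
    return foods[best], fit[best]

x_best, f_best = abc_minimize(lambda x: np.sum(x**2), [-5, -5, -5], [5, 5, 5])
print(x_best, f_best)
```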
Grosse, Constantino
2014-04-01
The description and interpretation of dielectric spectroscopy data usually require the use of analytical functions, which include unknown parameters that must be determined iteratively by means of a fitting procedure. This is not a trivial task, and much effort has been spent to find the best way to accomplish it. While the theoretical approach based on the Levenberg-Marquardt algorithm is well known, no freely available program specifically adapted to the dielectric spectroscopy problem exists to the best of our knowledge. Moreover, even the more general commercial packages usually fail on the following aspects: (1) allowing some of the parameters to be kept temporarily fixed, (2) allowing the uncertainty values for each data point to be freely specified, (3) checking that parameter values fall within prescribed bounds during the fitting process, and (4) allowing either the real part, the imaginary part, or both parts of the complex permittivity to be fitted simultaneously. A program that satisfies all these requirements and allows fitting any superposition of the Debye, Cole-Cole, Cole-Davidson, and Havriliak-Negami dispersions plus a conductivity term to measured dielectric spectroscopy data is presented. It is available on request from the author. Copyright © 2013 Elsevier Inc. All rights reserved.
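The kind of fit involved can be sketched for a single Havriliak-Negami dispersion, eps*(w) = eps_inf + d_eps / (1 + (i w tau)^alpha)^beta, fitted to real and imaginary parts simultaneously. This is our illustration on synthetic data, not the author's program (which uses Levenberg-Marquardt; SciPy's bounded least squares uses a trust-region variant):

```python
import numpy as np
from scipy.optimize import least_squares

def hn(w, eps_inf, d_eps, tau, alpha, beta):
    """Havriliak-Negami complex permittivity."""
    return eps_inf + d_eps / (1.0 + (1j * w * tau) ** alpha) ** beta

w = np.logspace(-2, 4, 60)                 # angular frequencies
true = (3.0, 20.0, 0.05, 0.8, 0.6)
rng = np.random.default_rng(6)
data = hn(w, *true) + 0.05 * (rng.standard_normal(w.size)
                              + 1j * rng.standard_normal(w.size))

def residuals(p):
    r = hn(w, *p) - data
    # stack real and imaginary residuals so both parts are fitted at once
    return np.concatenate([r.real, r.imag])

fit = least_squares(residuals, x0=[2.0, 10.0, 0.1, 0.9, 0.9],
                    bounds=([1.0, 0.0, 1e-4, 0.1, 0.1],
                            [10.0, 50.0, 10.0, 1.0, 1.0]))
print(np.round(fit.x, 2))
```

The bounds address point (3) above, and fitting the stacked real/imaginary residuals addresses point (4).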
NASA Astrophysics Data System (ADS)
de Lorenzo, Salvatore; Bianco, Francesca; Del Pezzo, Edoardo
2013-06-01
The coda normalization method is one of the most widely used methods for inferring the attenuation parameters Qα and Qβ. Since, in this method, the geometrical spreading exponent γ is an unknown model parameter, most studies assume a fixed γ, generally equal to 1. However, γ and Q can also be jointly inferred from the non-linear inversion of coda-normalized logarithms of amplitudes, although the trade-off between γ and Q may give rise to unreasonable values of these parameters. To minimize this trade-off, an inversion method based on a parabolic expression of the coda-normalization equation has been developed. The method has been applied to the waveforms recorded during the 1997 Umbria-Marche seismic crisis. The Akaike criterion has been used to compare the results of the parabolic model with those of the linear model, corresponding to γ = 1. A small deviation from spherical geometrical spreading has been inferred, but it is accompanied by a significant variation of the Qα and Qβ values. For almost all the considered stations, Qα smaller than Qβ has been inferred, confirming that seismic attenuation in the Umbria-Marche region is controlled by crustal pore fluids.
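The γ-Q trade-off can be seen in the standard coda-normalization model, ln(A_norm) = c - γ ln(r) - (π f / (Q v)) r, where both unknowns enter a joint regression on [1, ln r, r]. The sketch below is ours on synthetic data (illustrative f, v, and parameter values), not the paper's parabolic inversion:

```python
import numpy as np

rng = np.random.default_rng(7)
f, v = 6.0, 3.5                       # frequency (Hz) and velocity (km/s)
gamma_true, Q_true = 1.1, 300.0
r = rng.uniform(20.0, 150.0, 200)     # hypocentral distances, km
y = (2.0 - gamma_true * np.log(r) - np.pi * f * r / (Q_true * v)
     + 0.02 * rng.standard_normal(r.size))

# Joint linear regression for the intercept, gamma, and the attenuation slope.
X = np.column_stack([np.ones_like(r), np.log(r), r])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
gamma_hat = -coef[1]
Q_hat = -np.pi * f / (coef[2] * v)
print(gamma_hat, Q_hat)
```

With little noise the joint estimates are stable; with realistic scatter the collinearity of ln r and r over a limited distance range is what drives the trade-off the paper addresses.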
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of a pollution source, in terms of its source characteristics, is essential for adopting an effective remediation strategy. A groundwater pollution source is said to be identified completely when its characteristics - location, strength, and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at an observation well and the time at which the source becomes active is not known. We developed a linked ANN-optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations and is then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is therefore trained using the Levenberg-Marquardt algorithm to find the lag time: different combinations of source locations and release periods are used as inputs, and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data.
Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting the source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one, and hence the complexity of the optimization model is reduced. The results show that our proposed linked ANN-optimization model is able to predict the source parameters accurately for error-free data. The proposed model was run several times to obtain the mean, standard deviation, and interval estimate of the predicted parameters for observations with random measurement errors. The mean values predicted by the model were quite close to the exact values, and an increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
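The core inverse problem can be sketched in a strongly simplified form: a known 1-D analytical transport model plays the role of the simulator, and source location and strength are recovered by least-squares minimization of the concentration misfit. This stand-in omits the ANN lag-time surrogate entirely and assumes the release time is known; all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def concentration(x, t, x0, mass, v=1.0, D=0.5):
    """1-D advection-dispersion solution for an instantaneous point source
    of strength `mass` released at location x0 at t = 0 (unit porosity)."""
    return (mass / np.sqrt(4 * np.pi * D * t)
            * np.exp(-(x - x0 - v * t) ** 2 / (4 * D * t)))

# synthetic observations at one well at several times
# (assumed true source: location x0 = 2, strength = 5)
x_obs = 10.0
t_obs = np.linspace(4.0, 12.0, 20)
c_obs = concentration(x_obs, t_obs, 2.0, 5.0)

def misfit(p):
    x0, mass = p
    return concentration(x_obs, t_obs, x0, mass) - c_obs

fit = least_squares(misfit, x0=[0.0, 1.0])
print(fit.x)  # ≈ [2.0, 5.0]
```

In the paper's framework the residual would instead come from a numerical flow-and-transport simulator, and the unknown lag time would be supplied by the trained ANN, which is what removes one decision variable from the optimization.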
Semiparametric modeling: Correcting low-dimensional model error in parametric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013
2016-03-01
In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters, and it is crucial to choose accurate input parameters that also preserve the corresponding physics being simulated in the model. In order to simulate real-world processes effectively, the model's output must be close to the observed measurements. To achieve this, input parameters are tuned until the objective function, defined as the error between the simulation model outputs and the observed measurements, is minimized. We developed an auxiliary package that serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with a single minimum; otherwise, we employ more advanced Dakota methods, such as genetic optimization and mesh-based convergence, to find the optimal input parameters. We were able to recover six initially unknown thermal conductivity parameters to within 2% of their known values.
Our initial tests indicate that the developed interface to the Dakota toolbox can be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using Dakota alone.
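The local-versus-global distinction the abstract draws can be sketched with a toy black-box calibration: a contrived smooth response stands in for the heat-flow simulation, a gradient-based method is used when a single minimum is expected, and differential evolution stands in for Dakota's genetic optimization when it is not. The model, the two "thermal conductivity" parameters, and their values are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

# Toy "black-box" model: a response predicted from two parameters (k1, k2)
# standing in for layer thermal conductivities; true values are illustrative.
k_true = np.array([1.2, 2.4])
t_grid = np.linspace(0.0, 1.0, 11)

def model(k):
    k1, k2 = k
    # contrived smooth response standing in for a heat-flow simulation
    return np.sin(k1 * t_grid) + np.exp(-k2 * t_grid)

t_obs = model(k_true)

def objective(k):
    # misfit between simulated output and "observed" measurements
    return np.sum((model(k) - t_obs) ** 2)

# gradient-based search works well when the objective has a single minimum
res_grad = minimize(objective, x0=[1.0, 1.0], method="L-BFGS-B",
                    bounds=[(0.1, 5.0), (0.1, 5.0)])

# a global method is safer when multiple minima are suspected
res_glob = differential_evolution(objective, bounds=[(0.1, 5.0), (0.1, 5.0)],
                                  seed=1)
print(res_grad.x, res_glob.x)
```

Both searches recover the assumed true parameters here; in a case with several minima, only the global method would do so reliably, at the cost of many more objective evaluations.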
Nonlinear adaptive control system design with asymptotically stable parameter estimation error
NASA Astrophysics Data System (ADS)
Mishkov, Rumen; Darmonski, Stanislav
2018-01-01
The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and the capability to directly control the estimates' transient response time. The proposed method modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concept. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation; it relies on the system property of parametric identifiability introduced in the paper. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied to nonlinear adaptive speed tracking vector control of a three-phase induction motor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Sen; Zhang, Wei; Lian, Jianming
This paper focuses on the coordination of a population of Thermostatically Controlled Loads (TCLs) with unknown parameters to achieve group objectives. The problem involves designing the bidding and market clearing strategy to motivate self-interested users to realize efficient energy allocation subject to a peak power constraint. Using the mechanism design approach, we propose a market-based coordination framework, which can effectively incorporate heterogeneous load dynamics, systematically deal with user preferences, account for the unknown load model parameters, and enable the real-world implementation with limited communication resources. This paper is divided into two parts. Part I presents a mathematical formulation of the problem and develops a coordination framework using the mechanism design approach. Part II presents a learning scheme to account for the unknown load model parameters, and evaluates the proposed framework through realistic simulations.
NASA Astrophysics Data System (ADS)
He, Jia; Xu, You-Lin; Zhan, Sheng; Huang, Qin
2017-03-01
When a health monitoring system and a vibration control system are both required for a building structure, it is beneficial and cost-effective to integrate the two systems to create a smart building structure. Recently, on the basis of the extended Kalman filter (EKF), a time-domain integrated approach was proposed for identifying the structural parameters of controlled buildings under unknown ground excitations. The identified physical parameters and structural state vectors were then utilized to determine the control force for vibration suppression. In this paper, the possibility of establishing such a smart building structure, with the function of simultaneous damage detection and vibration suppression, was explored experimentally. A five-story shear building structure equipped with three magneto-rheological (MR) dampers was built. Four additional columns were added to the building model, and several damage scenarios were then simulated by symmetrically cutting off these columns in certain stories. Two sets of earthquakes, the Kobe earthquake and the Northridge earthquake, were used as seismic input and assumed to be unknown during the tests. The structural parameters and the unknown ground excitations were identified during the tests by using the proposed identification method with the measured control forces. Based on the identified structural parameters and system states, a switching control law was employed to adjust the current applied to the MR dampers for the purpose of vibration attenuation. The experimental results show that the presented approach is capable of satisfactorily identifying structural damage and unknown excitations on the one hand, and significantly mitigating the structural vibration on the other.
Batstone, D J; Torrijos, M; Ruiz, C; Schmidt, J E
2004-01-01
The model structure in anaerobic digestion has been clarified following publication of the IWA Anaerobic Digestion Model No. 1 (ADM1). However, parameter values are not well known, and the uncertainty and variability in the given parameter values are largely unknown. Additionally, the platforms for parameter identification, namely continuous-flow laboratory digesters and batch tests, suffer from disadvantages such as long run times and difficulty in defining initial conditions, respectively. Anaerobic sequencing batch reactors (ASBRs) are sequenced into fill-react-settle-decant phases and offer promising possibilities for parameter estimation, as they are dynamic in behaviour by nature and allow repeatable behaviour for establishing initial conditions and evaluating parameters. In this study, we estimated parameters describing winery wastewater (most COD as ethanol) degradation using data from sequencing operation, and validated these parameters using unsequenced pulses of ethanol and acetate. The model used was the ADM1, with an extension for ethanol degradation. Parameter confidence spaces were found by nonlinear, correlated analysis of the two main Monod parameters: the maximum uptake rate (k_m) and the half-saturation concentration (K_S). These parameters could be estimated together using only the measured acetate concentration (20 points per cycle). By interpolating the single-cycle acetate data to multiple cycles, we estimate that a practical "optimal" identifiability could be achieved after two cycles for the acetate parameters and three cycles for the ethanol parameters. The parameters found performed well in the short term and represented the pulses of acetate and ethanol (within 4 days of the winery-fed cycles) very well. The main discrepancy was poor prediction of pH dynamics, which could be due to an unidentified buffer with an overall influence the same as that of a weak base (possibly CaCO3).
Based on this work, ASBR systems are effective for parameter estimation, especially for comparative wastewater characterisation. The main disadvantages are the heavy computational requirements for multiple cycles and the difficulty of establishing the correct biomass concentration in the reactor, though the latter is also a disadvantage for continuous fixed-film reactors and, especially, batch tests.
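The joint estimation of the two Monod parameters from a measured substrate profile can be sketched as follows. A single Monod uptake term with fixed biomass stands in for the full ADM1; the kinetic values, biomass level, and sampling scheme are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def substrate_curve(km, Ks, s0=1.0, x_bio=0.5, dt=0.01, n=2000):
    """Batch substrate (e.g. acetate) profile under Monod uptake,
    dS/dt = -km * X * S / (Ks + S), with fixed biomass X (forward Euler)."""
    s = np.empty(n)
    s[0] = s0
    for i in range(1, n):
        s[i] = s[i - 1] - dt * km * x_bio * s[i - 1] / (Ks + s[i - 1])
        s[i] = max(s[i], 0.0)
    return s[::100]  # 20 "measured" points per cycle, as in the abstract

km_true, Ks_true = 0.25, 0.2          # illustrative kinetic values
s_meas = substrate_curve(km_true, Ks_true)

def resid(p):
    return substrate_curve(p[0], p[1]) - s_meas

fit = least_squares(resid, x0=[0.1, 0.5],
                    bounds=([0.01, 0.01], [5.0, 5.0]))
print(fit.x)  # ≈ [0.25, 0.2]
```

Because k_m and K_S jointly shape the decay curve, their estimates are strongly correlated, which is why the paper reports joint (correlated) confidence spaces rather than independent intervals for the two parameters.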
Optical diagnosis of malaria infection in human plasma using Raman spectroscopy
NASA Astrophysics Data System (ADS)
Bilal, Muhammad; Saleem, Muhammad; Amanat, Samina Tufail; Shakoor, Huma Abdul; Rashid, Rashad; Mahmood, Arshad; Ahmed, Mushtaq
2015-01-01
We present the prediction of malaria infection in human plasma using Raman spectroscopy. Raman spectra of malaria-infected samples are compared with those of healthy and dengue-infected ones for disease recognition. Raman spectra were acquired using a 532 nm laser as the excitation source, and 10 distinct spectral signatures that statistically differentiated malaria from healthy and dengue-infected cases were found. A multivariate regression model was developed that utilized Raman spectra of 20 malaria-infected, 10 non-malarial febrile, 10 healthy, and 6 dengue-infected samples to optically predict malaria infection. The model yields a correlation coefficient (r2) of 0.981 between the predicted values and the clinically known results of the training samples, and the root mean square error in cross-validation was found to be 0.09; both parameters validate the model. The model was further blindly tested on 30 unknown suspected samples and found to be 86% accurate compared with the clinical results, with the inaccuracy due to three samples that fell in the gray region. The standard deviation and root mean square error in prediction for the unknown samples were found to be 0.150 and 0.149, which is acceptable for clinical validation of the model.
K→(ππ)(I=2) decay amplitude from lattice QCD.
Blum, T; Boyle, P A; Christ, N H; Garron, N; Goode, E; Izubuchi, T; Jung, C; Kelly, C; Lehner, C; Lightman, M; Liu, Q; Lytle, A T; Mawhinney, R D; Sachrajda, C T; Soni, A; Sturm, C
2012-04-06
We report on the first realistic ab initio calculation of a hadronic weak decay, that of the amplitude A(2) for a kaon to decay into two π mesons with isospin 2. We find ReA(2)=(1.436±0.063(stat)±0.258(syst))10(-8) GeV in good agreement with the experimental result and for the hitherto unknown imaginary part we find ImA(2)=-(6.83±0.51(stat)±1.30(syst))10(-13) GeV. Moreover, combining our result for ImA(2) with experimental values of ReA(2), ReA(0), and ε'/ε, we obtain the following value for the unknown ratio ImA(0)/ReA(0) within the standard model: ImA(0)/ReA(0)=-1.63(19)(stat)(20)(syst)×10(-4). One consequence of these results is that the contribution from ImA(2) to the direct CP violation parameter ε' (the so-called Electroweak Penguin contribution) is Re(ε'/ε)(EWP)=-(6.52±0.49(stat)±1.24(syst))×10(-4). We explain why this calculation of A(2) represents a major milestone for lattice QCD and discuss the exciting prospects for a full quantitative understanding of CP violation in kaon decays. © 2012 American Physical Society
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe
2017-09-01
We propose a sparse Bayesian learning algorithm for improved estimation of white matter fiber parameters from compressed (under-sampled q-space) multi-shell diffusion MRI data. The multi-shell data are represented in dictionary form using a non-monoexponential decay model of diffusion, based on a continuous gamma distribution of diffusivities. The fiber volume fractions with predefined orientations, which are the unknown parameters, form the dictionary weights. These unknown parameters are estimated within a linear un-mixing framework, using a sparse Bayesian learning algorithm. Localized learning of hyperparameters at each voxel and for each possible fiber orientation improves the parameter estimation. Our experiments using synthetic data from the ISBI 2012 HARDI reconstruction challenge and in-vivo data from the Human Connectome Project demonstrate the improvements.
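The dictionary un-mixing idea can be sketched with a much simpler solver: non-negative least squares standing in for sparse Bayesian learning, and mono-exponential decay atoms standing in for the gamma-distribution decay model. The b-values, candidate diffusivities, and mixture below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

# Dictionary: each column is the predicted signal decay for one candidate
# compartment (mono-exponential atoms standing in for the gamma-distribution
# model of the paper; b-values and diffusivities are illustrative).
bvals = np.linspace(0, 3000, 30)            # s/mm^2
rates = np.linspace(0.2e-3, 3.0e-3, 40)     # candidate diffusivities, mm^2/s
A = np.exp(-np.outer(bvals, rates))

# "measured" signal: a sparse mixture of two atoms (volume fractions)
w_true = np.zeros(rates.size)
w_true[[5, 30]] = [0.7, 0.3]
y = A @ w_true

# non-negative least squares yields a sparse, physically valid un-mixing
w_est, rnorm = nnls(A, y)
print(np.nonzero(w_est > 1e-6)[0])
```

Because exponential-decay atoms are highly mutually coherent, the recovered support need not coincide exactly with the true atoms even at zero noise; the voxel-wise hyperparameter learning in the paper is one way to stabilize exactly this kind of ill-posed un-mixing.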
NASA Astrophysics Data System (ADS)
Stergiopoulos, Ch.; Stavrakas, I.; Triantis, D.; Vallianatos, F.; Stonham, J.
2015-02-01
Weak electric signals, termed 'pressure stimulated currents' (PSC), are generated and detected while cement-based materials are under mechanical load; they are related to the creation of cracks and the consequent evolution of the crack network in the bulk of the specimen. In the experiment, a set of cement mortar beams of rectangular cross-section were subjected to three-point bending (3PB). For each specimen, an abrupt mechanical load step was applied, increasing the load from a low level (Lo) to a high final value (Lh), where Lh differed for each specimen and was then maintained constant for a long time. The temporal behavior of the recorded PSC shows a spike-like PSC emission during the load increase, followed by a relaxation of the PSC after it reaches its final value. The relaxation process of the PSC was studied using non-extensive statistical physics (NESP) based on Tsallis entropy. The behavior of the Tsallis q parameter was studied in the relaxation PSCs in order to investigate its potential use as an index for monitoring the crack evolution process, with a potential application in non-destructive laboratory testing of cement-based specimens of unknown internal damage level. The dependence of the q-parameter on Lh (for Lh < 0.8Lf, where Lf represents the 3PB strength of the specimen) shows an increase in the q value as the specimens are subjected to gradually higher bending loads, reaching a maximum close to 1.4 when the applied Lh exceeds 0.8Lf. When the applied Lh exceeds 0.9Lf, the value of the q-parameter gradually decreases. This analysis of the experimental data shows that the entropic index q exhibits a characteristic decrease as the load approaches the ultimate strength of the specimen, and could thus be used as a forerunner of the expected failure.
NASA Astrophysics Data System (ADS)
Estakhr, Ahmad Reza
2013-03-01
In linear algebra, Cramer's rule [1] is an explicit formula for the solution of a system of linear equations with as many equations as unknowns. The charge-counting equations 2u + 1d = +1 and 1u + 2d = 0 have the general form a_1 u + b_1 d = c_1, a_2 u + b_2 d = c_2, with solution u = (c_1 b_2 - c_2 b_1)/(a_1 b_2 - a_2 b_1) and d = (a_1 c_2 - a_2 c_1)/(a_1 b_2 - a_2 b_1), giving u = +2/3 and d = -1/3. Now I think the up quark has no electric charge and that in fact it is the down quark that carries electric charge (+1, -1); then fractional electric charge breaks down completely: 2u(0) + 1d(+1) = +1 and 1u(0) + 1d(-1) + 1d(+1) = 0, which means probabilities are associated with unknown parameters. Thus, the fractional electric charge values of quarks are possible charges of quarks, "not" exact values. This is also consistent with neutron decay: while bound neutrons in stable nuclei are stable, free neutrons are unstable; they undergo beta decay with a mean lifetime of just under 15 minutes (881.5 ± 1.5 s). Free neutrons decay by emission of an electron and an electron antineutrino to become a proton, a process known as beta decay: n^0 → p^{+1} + e^{-1} + overline ν_e. Ref. 1: http://en.wikipedia.org/wiki/Cramer's_rule
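The standard charge-counting step can be verified directly. This sketch applies Cramer's rule to the two equations 2u + d = 1 (proton, uud) and u + 2d = 0 (neutron, udd), using exact rational arithmetic:

```python
from fractions import Fraction as F

def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1, a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("singular system")
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# charge-counting equations: 2u + 1d = +1 (proton), 1u + 2d = 0 (neutron)
u, d = cramer_2x2(F(2), F(1), F(1), F(1), F(2), F(0))
print(u, d)  # 2/3 -1/3
```

This reproduces the conventional fractional charges u = +2/3 and d = -1/3 that the abstract goes on to dispute.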
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Chuanfei; Lingam, Manasvi; Ma, Yingjuan
We address the important question of whether the newly discovered exoplanet, Proxima Centauri b (PCb), is capable of retaining an atmosphere over long periods of time. This is done by adapting a sophisticated multi-species MHD model originally developed for Venus and Mars and computing the ion escape losses from PCb. The results suggest that the ion escape rates are about two orders of magnitude higher than the terrestrial planets of our Solar system if PCb is unmagnetized. In contrast, if the planet does have an intrinsic dipole magnetic field, the rates are lowered for certain values of the stellar wind dynamic pressure, but they are still higher than the observed values for our solar system's terrestrial planets. These results must be interpreted with due caution since most of the relevant parameters for PCb remain partly or wholly unknown.
Bohling, Geoffrey C.; Butler, J.J.
2001-01-01
We have developed a program for inverse analysis of two-dimensional linear or radial groundwater flow problems. The program, 1r2dinv, uses standard finite difference techniques to solve the groundwater flow equation for a horizontal or vertical plane with heterogeneous properties. In radial mode, the program simulates flow to a well in a vertical plane, transforming the radial flow equation into an equivalent problem in Cartesian coordinates. The physical parameters in the model are horizontal or x-direction hydraulic conductivity, anisotropy ratio (vertical to horizontal conductivity in a vertical model, y-direction to x-direction in a horizontal model), and specific storage. The program allows the user to specify arbitrary and independent zonations of these three parameters and also to specify which zonal parameter values are known and which are unknown. The Levenberg-Marquardt algorithm is used to estimate parameters from observed head values. Particularly powerful features of the program are the ability to perform simultaneous analysis of heads from different tests and the inclusion of the wellbore in the radial mode. These capabilities allow the program to be used for analysis of suites of well tests, such as multilevel slug tests or pumping tests in a tomographic format. The combination of information from tests stressing different vertical levels in an aquifer provides the means for accurately estimating vertical variations in conductivity, a factor profoundly influencing contaminant transport in the subsurface. © 2001 Elsevier Science Ltd. All rights reserved.
Li, Zhenyu; Wang, Bin; Liu, Hong
2016-08-30
Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme.
NASA Astrophysics Data System (ADS)
Singh, R.; Verma, H. K.
2013-12-01
This paper presents a teaching-learning-based optimization (TLBO) algorithm for solving parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO is free of algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used to implement the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and converges faster than the PSO algorithm. TLBO is preferred where accuracy is more essential than convergence speed.
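A minimal TLBO can be sketched in a few dozen lines: a teacher phase that moves the population toward the current best solution and a learner phase of pairwise interactions, with no algorithm-specific tuning parameters beyond population size and iteration count. The "unknown plant" is replaced here by a toy quadratic output-error surface whose minimum sits at illustrative coefficients; this is an assumption, not the paper's IIR benchmark.

```python
import numpy as np

def tlbo(objective, bounds, pop_size=20, iters=100, seed=0):
    """Minimal teaching-learning-based optimization (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(iters):
        # teacher phase: move everyone toward the best solution
        teacher = pop[fit.argmin()]
        tf = rng.integers(1, 3)                 # teaching factor, 1 or 2
        new = pop + rng.random((pop_size, dim)) * (teacher - tf * pop.mean(0))
        new = np.clip(new, lo, hi)
        new_fit = np.array([objective(x) for x in new])
        better = new_fit < fit
        pop[better], fit[better] = new[better], new_fit[better]
        # learner phase: each learner interacts with a random partner
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            step = (pop[i] - pop[j]) if fit[i] < fit[j] else (pop[j] - pop[i])
            cand = np.clip(pop[i] + rng.random(dim) * step, lo, hi)
            cf = objective(cand)
            if cf < fit[i]:
                pop[i], fit[i] = cand, cf
    return pop[fit.argmin()], fit.min()

# toy plant identification: minimize the output error of candidate filter
# coefficients; the "true" coefficients [0.5, -0.3] are illustrative
target = np.array([0.5, -0.3])
best, err = tlbo(lambda c: float(np.sum((c - target) ** 2)),
                 bounds=[(-1, 1), (-1, 1)])
print(best, err)
```

In the actual filter-design setting, the objective would be the mean squared difference between the unknown plant's output and the candidate IIR filter's output for a common excitation signal.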
DOE Office of Scientific and Technical Information (OSTI.GOV)
Selle, J.E.
A modification was made to the Kaufman method of calculating binary phase diagrams to permit calculation of intra-rare-earth diagrams. Atomic volumes for all phases, real or hypothetical, are necessary to determine the interaction parameters for calculating complete diagrams. The procedures used to determine unknown atomic volumes are described, as are procedures for determining lattice stability parameters for unknown transformations. Results are presented on the calculation of intra-rare-earth diagrams between both trivalent and divalent rare earths. 13 refs., 36 figs., 11 tabs.
NASA Astrophysics Data System (ADS)
Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.
2015-07-01
For stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross-section of tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method with an equilibrium calculation for JT-60SA, whose plasma shape has high elongation and triangularity. The CCS, on which both the Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction in JT-60SA is sensitive to the CCS free parameters, such as the number of unknown parameters and the shape of the surface. It is found that the optimum number of unknown parameters and the size of the CCS that minimize errors in the reconstructed plasma shape are proportional to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved by using the optimum number of unknown parameters and shape of the CCS, and that the achievable reconstruction errors in plasma shape and strike-point locations are within the target ranges for JT-60SA.
Potocki, J K; Tharp, H S
1993-01-01
The success of treating cancerous tissue with heat depends on the temperature elevation, the amount of tissue elevated to that temperature, and the length of time that the tissue temperature is elevated. In clinical situations the temperature of most of the treated tissue volume is unknown, because only a small number of temperature sensors can be inserted into the tissue. A state space model based on a finite difference approximation of the bioheat transfer equation (BHTE) is developed for identification purposes. A full-order extended Kalman filter (EKF) is designed to estimate both the unknown blood perfusion parameters and the temperature at unmeasured locations. Two reduced-order estimators are designed as computationally less intensive alternatives to the full-order EKF. Simulation results show that the success of the estimation scheme depends strongly on the number and location of the temperature sensors. Superior results occur when a temperature sensor exists in each unknown blood perfusion zone, and the number of sensors is at least as large as the number of unknown perfusion zones. Unacceptable results occur when there are more unknown perfusion parameters than temperature sensors, or when the sensors are placed in locations that do not sample the unknown perfusion information.
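The joint estimation of temperature and perfusion can be sketched with a one-node toy version of the problem: the state is augmented with the unknown perfusion parameter, and an EKF estimates both from a single noisy temperature sensor. The one-node model, all numerical values, and the noise levels are illustrative assumptions, not the paper's BHTE discretization.

```python
import numpy as np

# One-node bioheat toy model: dT/dt = P - w*(T - Ta), with the blood
# perfusion parameter w unknown. Augmenting the state with w lets an EKF
# estimate temperature and perfusion jointly.
dt, P, Ta = 0.1, 2.0, 37.0
w_true = 0.5
R = 0.05 ** 2                       # measurement noise variance

def step(T, w):
    return T + dt * (P - w * (T - Ta))

# simulate the "true" tissue and noisy sensor readings
rng = np.random.default_rng(3)
T, zs = 37.0, []
for _ in range(300):
    T = step(T, w_true)
    zs.append(T + rng.normal(0.0, 0.05))

# EKF over the augmented state x = [T, w]
x = np.array([37.0, 0.1])           # deliberately wrong initial perfusion
Pcov = np.diag([1.0, 1.0])
Q = np.diag([1e-6, 1e-6])           # small process noise
H = np.array([[1.0, 0.0]])          # only temperature is measured
for z in zs:
    # predict: propagate state and covariance with the local Jacobian
    F = np.array([[1.0 - dt * x[1], -dt * (x[0] - Ta)],
                  [0.0, 1.0]])
    x = np.array([step(x[0], x[1]), x[1]])
    Pcov = F @ Pcov @ F.T + Q
    # update with the temperature sensor reading
    S = H @ Pcov @ H.T + R
    K = Pcov @ H.T / S
    x = x + (K * (z - x[0])).ravel()
    Pcov = (np.eye(2) - K @ H) @ Pcov
print(x)
```

The coupling term -dt*(T - Ta) in the Jacobian is what makes perfusion observable from temperature alone; as the abstract notes, with more unknown perfusion zones than sensors this observability is lost and the estimates degrade.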
Li, Zhongyu; Wu, Junjie; Huang, Yulin; Yang, Haiguang; Yang, Jianyu
2017-01-23
Bistatic forward-looking SAR (BFSAR) is a kind of bistatic synthetic aperture radar (SAR) system that can image forward-looking terrain in the flight direction of an aircraft. Until now, BFSAR imaging theories and methods for stationary scenes have been researched thoroughly. However, for moving-target imaging with BFSAR, the non-cooperative movement of the target induces some new issues: (I) large and unknown range cell migration (RCM), including range walk and high-order RCM; (II) the spatial variances of the Doppler parameters (including the Doppler centroid and high-order Doppler) are not only unknown but also nonlinear for different point scatterers. In this paper, we put forward an adaptive moving-target imaging method for BFSAR. First, the large and unknown range walk is corrected by applying a keystone transform over the whole received echo; then, the relationships among the unknown high-order RCM, the nonlinear spatial variances of the Doppler parameters, and the speed of the mover are established. After that, using an optimization nonlinear chirp scaling (NLCS) technique, not only can the unknown high-order RCM be accurately corrected, but the nonlinear spatial variances of the Doppler parameters can also be balanced. Finally, a high-order polynomial filter is applied to compress the whole azimuth data of the moving target. Numerical simulations verify the effectiveness of the proposed method.
Optimal Decision Making in a Class of Uncertain Systems Based on Uncertain Variables
NASA Astrophysics Data System (ADS)
Bubnicki, Z.
2006-06-01
The paper is concerned with a class of uncertain systems described by relational knowledge representations with unknown parameters, which are assumed to be values of uncertain variables characterized by a user in the form of certainty distributions. The first part presents the basic optimization problem, which consists of finding the decision that maximizes the certainty index that a user-given requirement is satisfied. The main part is devoted to the optimization problem with a given certainty threshold. It is shown how the approach presented in the paper may be applied to some problems for anticipatory systems.
The continuum fusion theory of signal detection applied to a bi-modal fusion problem
NASA Astrophysics Data System (ADS)
Schaum, A.
2011-05-01
A new formalism has been developed that produces detection algorithms for model-based problems in which one or more parameter values are unknown. Continuum fusion can be used to generate different flavors of algorithm for any composite hypothesis testing problem. The methodology is defined by a fusion logic that can be translated into max/min conditions. Here it is applied to a simple sensor fusion model, but one for which the generalized likelihood ratio (GLR) test is intractable. By contrast, a fusion-based response to the same problem can be devised that is solvable in closed form and represents a good approximation to the GLR test.
Genuine binding energy of the hydrated electron
Luckhaus, David; Yamamoto, Yo-ichi; Suzuki, Toshinori; Signorell, Ruth
2017-01-01
The unknown influence of inelastic and elastic scattering of slow electrons in water has made it difficult to clarify the role of the solvated electron in radiation chemistry and biology. We combine accurate scattering simulations with experimental photoemission spectroscopy of the hydrated electron in a liquid water microjet, with the aim of resolving ambiguities regarding the influence of electron scattering on binding energy spectra, photoelectron angular distributions, and probing depths. The scattering parameters used in the simulations are retrieved from independent photoemission experiments of water droplets. For the ground-state hydrated electron, we report genuine values devoid of scattering contributions for the vertical binding energy and the anisotropy parameter of 3.7 ± 0.1 eV and 0.6 ± 0.2, respectively. Our probing depths suggest that even vacuum ultraviolet probing is not particularly surface-selective. Our work demonstrates the importance of quantitative scattering simulations for a detailed analysis of key properties of the hydrated electron. PMID:28508051
NASA Technical Reports Server (NTRS)
Bahrami, Parviz A.
1996-01-01
Theoretical analysis and numerical computations are performed to set forth a new model of film condensation on a horizontal cylinder. The model is more general than the well-known Nusselt model of film condensation and is designed to encompass all essential features of the Nusselt model. It is shown that a single parameter, constructed explicitly and without specification of the cylinder wall temperature, determines the degree of departure from the Nusselt model, which assumes a known and uniform wall temperature. It is also shown that the Nusselt model is recovered for very small, as well as very large, values of this parameter. In both limiting cases the cylinder wall temperature assumes a uniform distribution and the Nusselt model is approached. The maximum deviations between the two models are rather small for cases representative of cylinder dimensions, materials, and conditions encountered in practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gewering-Peine, A.; Horns, D.; Schmitt, J.H.M.M., E-mail: alexander.gewering-peine@desy.de, E-mail: dieter.horns@desy.de, E-mail: jschmitt@hs.uni-hamburg.de
The Standard Model of particle physics can be extended to include sterile (right-handed) neutrinos or axions to solve the dark matter problem. Depending upon the mixing angle between active and sterile neutrinos, the latter can decay into monoenergetic active neutrinos and photons in the keV range, while axions can couple to two photons. We have used data taken with the X-ray telescope XMM-Newton to search for line emissions. We used pointings with high exposures and high expected dark matter column densities with respect to the dark matter halo of the Milky Way. A posterior predictive p-value analysis has been applied to locate parameter-space regions which favour additional emission lines. In addition, upper limits on the parameter space of the models have been generated such that the preexisting limits are significantly improved.
Jiang, Ye; Hu, Qinglei; Ma, Guangfu
2010-01-01
In this paper, a robust adaptive fault-tolerant control approach to attitude tracking of flexible spacecraft is proposed for situations with reaction wheel/actuator failures, persistent bounded disturbances, and unknown inertia parameter uncertainties. The controller is designed based on an adaptive backstepping sliding mode control scheme, and a sufficient condition is provided under which this control law renders the system semi-globally input-to-state stable, so that the closed-loop system is robust with respect to any disturbance within a quantifiable restriction on the amplitude, as well as to the set of initial conditions, if the control gains are designed appropriately. Moreover, the control law does not need a fault detection and isolation mechanism even if the failure time instants, patterns, and values of the actuator failures are unknown to the designers, as motivated by practical spacecraft control applications. In addition to detailed derivations of the new controller design and a rigorous sketch of the associated stability and attitude error convergence proofs, illustrative simulation results show that highly precise attitude control and vibration suppression are successfully achieved under various scenarios of actuator failures. © 2009 Published by Elsevier Ltd.
Baigzadehnoe, Barmak; Rahmani, Zahra; Khosravi, Alireza; Rezaie, Behrooz
2017-09-01
In this paper, the position and force tracking control problem of a cooperative robot manipulator system handling a common rigid object with unknown dynamical models and unknown external disturbances is investigated. The universal approximation properties of fuzzy logic systems are employed to estimate the unknown system dynamics. On the other hand, by defining new state variables based on the integral and differential of the position and orientation errors of the grasped object, the error system of the coordinated robot manipulators is constructed. Subsequently, by defining an appropriate change of coordinates and using the backstepping design strategy, an adaptive fuzzy backstepping position tracking control scheme is proposed for multi-robot manipulator systems. By utilizing the properties of internal forces, extra terms are added to the control signals to address the force tracking problem. Moreover, it is shown that the proposed adaptive fuzzy backstepping position/force control approach ensures that all the signals of the closed-loop system are uniformly ultimately bounded and that the tracking errors of both positions and forces converge to small desired values through proper selection of the design parameters. Finally, the theoretical achievements are tested on two three-link planar robot manipulators cooperatively handling a common object to illustrate the effectiveness of the proposed approach. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Du, Jialu; Hu, Xin; Liu, Hongbo; Chen, C L Philip
2015-11-01
This paper develops an adaptive robust output feedback control scheme for dynamically positioned ships with unavailable velocities and unknown dynamic parameters in an unknown time-variant disturbance environment. The controller is designed by incorporating the high-gain observer and radial basis function (RBF) neural networks in vectorial backstepping method. The high-gain observer provides the estimations of the ship position and heading as well as velocities. The RBF neural networks are employed to compensate for the uncertainties of ship dynamics. The adaptive laws incorporating a leakage term are designed to estimate the weights of RBF neural networks and the bounds of unknown time-variant environmental disturbances. In contrast to the existing results of dynamic positioning (DP) controllers, the proposed control scheme relies only on the ship position and heading measurements and does not require a priori knowledge of the ship dynamics and external disturbances. By means of Lyapunov functions, it is theoretically proved that our output feedback controller can control a ship's position and heading to the arbitrarily small neighborhood of the desired target values while guaranteeing that all signals in the closed-loop DP control system are uniformly ultimately bounded. Finally, simulations involving two ships are carried out, and simulation results demonstrate the effectiveness of the proposed control scheme.
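A sketch of the kind of Gaussian RBF approximator used to compensate unknown dynamics may clarify the mechanism. The centers, widths, and the simple gradient-descent training below are illustrative assumptions, not the paper's Lyapunov-derived adaptive laws:

```python
import math

# Illustrative Gaussian RBF network: fixed centers on a grid, only the
# output weights are adapted, here by LMS-style gradient descent on samples
# of an "unknown" scalar nonlinearity f.  All constants are invented.
centers = [i * 0.5 for i in range(-6, 7)]     # 13 centers on [-3, 3]
width = 0.5

def phi(x):
    # vector of Gaussian basis-function activations at x
    return [math.exp(-((x - c) / width) ** 2) for c in centers]

def rbf(x, w):
    return sum(wi * pi for wi, pi in zip(w, phi(x)))

def train(f, lr=0.2, epochs=200):
    xs = [i * 0.1 for i in range(-30, 31)]
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for x in xs:
            err = f(x) - rbf(x, w)
            w = [wi + lr * err * pi for wi, pi in zip(w, phi(x))]
    return w

w = train(math.sin)   # rbf(., w) now approximates sin on [-3, 3]
```

In the adaptive-control setting the weight update is driven by the tracking error and a Lyapunov argument rather than by batch training, but the approximator itself has this form.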
Ik Han, Seong; Lee, Jangmyung
2016-11-01
This paper presents finite-time sliding mode control (FSMC) with predefined constraints for the tracking error and sliding surface in order to obtain robust positioning of a robot manipulator with input nonlinearity due to an unknown deadzone and external disturbance. An assumed model feedforward FSMC was designed to avoid tedious identification procedures for the manipulator parameters and to obtain a fast response time. Two constraint switching control functions based on the tracking error and finite-time sliding surface were added to the FSMC to guarantee the predefined tracking performance despite the presence of an unknown deadzone and disturbance. The tracking error due to the deadzone and disturbance can be suppressed within the predefined error boundary simply by tuning the gain value of the constraint switching function and without the addition of an extra compensator. Therefore, the designed constraint controller has a simpler structure than conventional transformed error constraint methods and the sliding surface constraint scheme can also indirectly guarantee the tracking error constraint while being more stable than the tracking error constraint control. A simulation and experiment were performed on an articulated robot manipulator to validate the proposed control schemes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Soulis, K. X.; Valiantzas, J. D.
2011-10-01
The Soil Conservation Service Curve Number (SCS-CN) approach is widely used as a simple method for predicting direct runoff volume for a given rainfall event. The CN values can be selected from tables, but it is more accurate to estimate the CN value from measured rainfall-runoff data (when available) in a watershed. Previous researchers indicated that the CN values calculated from measured rainfall-runoff data vary systematically with the rainfall depth, and they suggested determining a single asymptotic CN value, observed for very high rainfall depths, to characterize a watershed's runoff response. In this paper, the novel hypothesis that the observed correlation between the calculated CN value and the rainfall depth in a watershed reflects the effect of the inevitable presence of soil-cover complex spatial variability along watersheds is tested. Based on this hypothesis, the simplified concept of a two-CN heterogeneous system is introduced to model the observed CN-rainfall variation by reducing the CN spatial variability into two classes. The behavior of the CN-rainfall function produced by the proposed two-CN system concept is approached theoretically, analyzed systematically, and found to be similar to the variation observed in natural watersheds. Synthetic data tests, natural watershed examples, and a detailed study of two natural experimental watersheds with known spatial heterogeneity characteristics were used to evaluate the method. The results indicate that the determination of CN values from rainfall-runoff data using the proposed two-CN system approach provides reasonable accuracy and outperforms the previous method based on a single asymptotic CN value. Although the suggested method increases the number of unknown parameters to three (instead of one), a clear physical reasoning for them is presented.
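The underlying relations are compact enough to state directly. Below is a sketch of the standard SCS-CN runoff equation and of the two-CN composite idea; the CN values and area weight in the demo are invented:

```python
# Standard SCS-CN direct-runoff relation (SI units, mm) and an
# area-weighted two-CN composite as in the paper's heterogeneous-system
# concept.  Demo CN values and weights are invented.
def scs_runoff(P, CN, lam=0.2):
    """Direct runoff Q (mm) for event rainfall P (mm) and curve number CN."""
    S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
    Ia = lam * S                      # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

def two_cn_runoff(P, cn1, cn2, w1):
    """Runoff of a watershed split into two CN classes with area weight w1."""
    return w1 * scs_runoff(P, cn1) + (1.0 - w1) * scs_runoff(P, cn2)
```

Back-calculating a single CN from the composite runoff at different rainfall depths P reproduces the systematic CN-rainfall variation the paper models: the high-CN fraction dominates small events, and the effective CN drifts as P grows.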
Dynamic parameter identification of robot arms with servo-controlled electrical motors
NASA Astrophysics Data System (ADS)
Jiang, Zhao-Hui; Senda, Hiroshi
2005-12-01
This paper addresses the issue of dynamic parameter identification for robot manipulators with servo-controlled electrical motors. It is assumed that all kinematic parameters, such as link lengths, are known, and that only dynamic parameters involving mass, moment of inertia, and their functions need to be identified. First, we derive the dynamics of the robot arm in a form linear in the unknown dynamic parameters, taking the dynamic characteristics of the motor and servo unit into consideration. Then, we implement the parameter identification approach to identify the unknown parameters for each link separately. A pseudo-inverse matrix is used in the formulation of the parameter identification, and the optimal solution is guaranteed in the least-squares sense. A direct-drive (DD) SCARA-type industrial robot arm, AdeptOne, is used as an application example of the parameter identification. Simulations and experiments for both open-loop and closed-loop control are carried out. Comparison of the results confirms the correctness and usefulness of the parameter identification and the derived dynamic model.
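Because the dynamics are linear in the unknown parameters, identification reduces to ordinary least squares. A toy sketch with a hypothetical one-link model tau = I*qddot + b*qdot, solved through the normal equations (equivalent to the pseudo-inverse solution for a full-rank regressor):

```python
import math

# Linear-in-parameters identification sketch (hypothetical one-link model):
# tau = I*qddot + b*qdot, unknown theta = [I, b], estimated from the
# normal equations (Phi^T Phi) theta = Phi^T tau.
def identify(rows, taus):
    # rows[k] = [qddot_k, qdot_k]
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * t for r, t in zip(rows, taus))
    b2 = sum(r[1] * t for r, t in zip(rows, taus))
    det = a11 * a22 - a12 * a12       # requires persistently exciting data
    return [(a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det]

I_true, b_true = 0.8, 0.3             # invented "true" inertia and friction
rows = [[math.sin(k), math.cos(2 * k)] for k in range(50)]   # excitation
taus = [I_true * r[0] + b_true * r[1] for r in rows]
theta = identify(rows, taus)          # recovers [0.8, 0.3]
```

With noisy torque measurements the same solution minimizes the mean squared residual, which is the optimality property the abstract refers to.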
An almost-parameter-free harmony search algorithm for groundwater pollution source identification.
Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui
2013-01-01
The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the identified results indicate that the proposed almost-parameter-free harmony search algorithm-based optimization model can give satisfactory estimations, even when the irregular geometry, erroneous monitoring data, and prior information shortage of potential locations are considered.
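For reference, the standard harmony search loop can be sketched as follows, minimizing a test objective. This is the classic algorithm with its usual parameters (HMS, HMCR, PAR, bandwidth), not the paper's almost-parameter-free variant, and the objective is a placeholder rather than a transport-model misfit:

```python
import random

# Standard harmony search sketch: harmony memory of candidate vectors,
# memory consideration with probability hmcr, pitch adjustment with
# probability par, otherwise random restart of a component.
def harmony_search(f, lo, hi, dim=2, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=1):
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    cost = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                 # memory consideration
                v = hm[rng.randrange(hms)][d]
                if rng.random() < par:              # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                                   # random consideration
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        c = f(new)
        worst = max(range(hms), key=lambda i: cost[i])
        if c < cost[worst]:                         # replace worst harmony
            hm[worst], cost[worst] = new, c
    best = min(range(hms), key=lambda i: cost[i])
    return hm[best], cost[best]

sol, val = harmony_search(lambda x: sum(xi * xi for xi in x), -5.0, 5.0)
```

In the simulation-optimization setting, f would run the contaminant transport model for a candidate source vector and return the misfit against monitoring data; the paper's contribution is making hmcr, par, and bw effectively self-tuning.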
Karabagias, Ioannis K; Karabournioti, Sofia
2018-05-03
Twenty-two honey samples, namely clover and citrus honeys, were collected from the greater Cairo area during the harvesting year 2014–2015. The main purpose of the present study was to characterize the aforementioned honey types and to investigate whether the use of easily assessable physicochemical parameters, including color attributes in combination with chemometrics, could differentiate honey floral origin. Parameters taken into account were: pH, electrical conductivity, ash, free acidity, lactonic acidity, total acidity, moisture content, total sugars (degrees Brix-°Bx), total dissolved solids and their ratio to total acidity, salinity, CIELAB color parameters, along with browning index values. Results showed that all honey samples analyzed met the European quality standards set for honey and had variations in the aforementioned physicochemical parameters depending on floral origin. Application of linear discriminant analysis showed that eight physicochemical parameters, including color, could classify Egyptian honeys according to floral origin (p < 0.05). Correct classification rate was 95.5% using the original method and 90.9% using the cross validation method. The discriminatory ability of the developed model was further validated using unknown honey samples. The overall correct classification rate was not affected. Specific physicochemical parameter analysis in combination with chemometrics has the potential to enhance the differences in floral honeys produced in a given geographical zone.
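The classification step can be illustrated with a hand-rolled two-class Fisher linear discriminant on two features. The feature values below are invented stand-ins for measured physicochemical parameters (e.g. electrical conductivity and a color coordinate), not the study's data:

```python
# Two-class Fisher linear discriminant sketch: project onto
# w = Sw^{-1} (m1 - m0) and threshold at the midpoint of the class means.
# All sample values are invented for the demo.
def fisher_lda(class0, class1):
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]
    m0, m1 = mean(class0), mean(class1)
    # pooled within-class scatter matrix (2x2)
    S = [[0.0, 0.0], [0.0, 0.0]]
    for pts, m in ((class0, m0), (class1, m1)):
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    S[i][j] += d[i] * d[j]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [Sinv[0][0] * dm[0] + Sinv[0][1] * dm[1],
         Sinv[1][0] * dm[0] + Sinv[1][1] * dm[1]]
    mid = [(m0[0] + m1[0]) / 2, (m0[1] + m1[1]) / 2]
    c = w[0] * mid[0] + w[1] * mid[1]
    # 1 = class1 side of the separating hyperplane, 0 = class0 side
    return lambda p: 1 if w[0] * p[0] + w[1] * p[1] > c else 0

clover = [(0.20, 10.0), (0.25, 12.0), (0.22, 11.0), (0.28, 13.0)]
citrus = [(0.45, 22.0), (0.50, 25.0), (0.48, 24.0), (0.55, 27.0)]
classify = fisher_lda(clover, citrus)
```

The study's model uses eight such parameters and cross-validation, but the geometry is the same: class means, pooled scatter, and a linear decision boundary.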
NASA Technical Reports Server (NTRS)
Rapp, R. H.
1974-01-01
The equations needed for the incorporation of gravity anomalies as unknown parameters in an orbit determination program are described. These equations were implemented in the Geodyn computer program, which was used to process optical satellite observations. The unknowns considered were the arc-dependent parameters, 184 unknown 15 deg anomalies, and the coordinates of 7 tracking stations. Up to 39 arcs (5 to 7 days each), involving 10 different satellites, were processed. An anomaly solution from the satellite data and a combination solution with 15 deg terrestrial anomalies were made. The limited data samples indicate that the method works. The 15 deg anomalies from the various solutions and the potential coefficients implied by the different solutions are reported.
Indirect estimation of emission factors for phosphate surface mining using air dispersion modeling.
Tartakovsky, Dmitry; Stern, Eli; Broday, David M
2016-06-15
To date, phosphate surface mining suffers from a lack of reliable emission factors. In the absence of data from which to derive emission factors directly, we developed a methodology for estimating them indirectly, by studying a range of possible emission factors for surface phosphate mining operations and comparing AERMOD-calculated concentrations to concentrations measured around the mine. We applied this approach to the Khneifiss phosphate mine, Syria, and the Al-Hassa and Al-Abyad phosphate mines, Jordan. The work accounts for numerous model unknowns and parameter uncertainties by applying prudent assumptions concerning the parameter values. Our results suggest that the net mining operations (bulldozing, grading and dragline) contribute rather little to ambient TSP concentrations in comparison to phosphate processing and transport. Based on our results, the common practice of deriving the emission rates for phosphate mining operations from the US EPA emission factors for surface coal mining, or from the default emission factor of the EEA, seems reasonable. Yet, since multiple factors affect dispersion from surface phosphate mines, a range of emission factors, rather than a single value, was found to satisfy the model performance. Copyright © 2016 Elsevier B.V. All rights reserved.
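The inversion logic rests on the fact that dispersion-model concentrations are linear in the emission rate. A hedged single-source Gaussian-plume illustration follows; AERMOD's dispersion treatment is far more detailed, and every number here is invented:

```python
import math

# Because a Gaussian-plume concentration is linear in the emission rate Q,
# a concentration measured downwind fixes Q by a single division.
# sigma_y, sigma_z, wind speed u and stack height H are invented inputs.
def plume_conc(Q, u, sy, sz, H, z=0.0):
    """Ground-level centreline concentration of an elevated point source."""
    return (Q / (2.0 * math.pi * u * sy * sz) *
            (math.exp(-(z - H) ** 2 / (2 * sz ** 2)) +
             math.exp(-(z + H) ** 2 / (2 * sz ** 2))))  # ground reflection

def invert_emission(c_meas, u, sy, sz, H):
    """Emission rate that reproduces a measured concentration."""
    return c_meas / plume_conc(1.0, u, sy, sz, H)

Q_true = 12.0                         # g/s, hypothetical
c = plume_conc(Q_true, u=3.0, sy=40.0, sz=20.0, H=10.0)
Q_est = invert_emission(c, u=3.0, sy=40.0, sz=20.0, H=10.0)
```

With several receptors and several co-located source activities, the same idea becomes a least-squares fit of the activity-specific emission factors, which is the multi-parameter version the study performs with AERMOD.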
NASA Astrophysics Data System (ADS)
Hodam, Sanayanbi; Sarkar, Sajal; Marak, Areor G. R.; Bandyopadhyay, A.; Bhadra, A.
2017-12-01
In the present study, to understand the spatial distribution characteristics of ETo over India, spatial interpolation was performed on the means of 32 years (1971-2002) of monthly data from 131 India Meteorological Department stations uniformly distributed over the country by two methods, namely inverse distance weighted (IDW) interpolation and kriging. Kriging was found to be better for developing the monthly surfaces during cross-validation. However, in station-wise validation, IDW performed better than kriging in almost all cases, and hence is recommended for spatial interpolation of ETo and its governing meteorological parameters. This study also checked whether direct kriging of FAO-56 Penman-Monteith (PM) (Allen et al. in Crop evapotranspiration—guidelines for computing crop water requirements, Irrigation and drainage paper 56, Food and Agriculture Organization of the United Nations (FAO), Rome, 1998) point ETo produced results comparable to ETo estimated with individually kriged weather parameters (indirect kriging). Indirect kriging performed marginally better than direct kriging. Point ETo values were extended to areal ETo values by IDW, and FAO-56 PM mean ETo maps for India were developed to obtain sufficiently accurate ETo estimates at unknown locations.
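The recommended IDW interpolator is simple to state. A minimal sketch with an assumed power of 2 and invented station values (the study's stations and ETo values are not reproduced here):

```python
# Inverse-distance-weighted interpolation: the value at (x, y) is the
# weighted mean of station values, with weights 1/d^p (p = 2 assumed).
def idw(x, y, stations, p=2.0):
    """stations: list of (xi, yi, value). Returns the value at (x, y)."""
    num = den = 0.0
    for xi, yi, v in stations:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v                  # exact at a station location
        w = d2 ** (-p / 2.0)          # = 1 / d**p
        num += w * v
        den += w
    return num / den

# Invented demo stations: (x, y, monthly-mean ETo in mm/day, say)
pts = [(0.0, 0.0, 5.0), (1.0, 0.0, 7.0), (0.0, 1.0, 6.0)]
```

IDW honors the station values exactly and always stays within the range of the data, which is one reason it validates well station-wise; kriging additionally models spatial covariance, which helped it in surface-wide cross-validation.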
Tufto, Jarle
2010-01-01
Domesticated species frequently spread their genes into populations of wild relatives through interbreeding. The domestication process often involves artificial selection for economically desirable traits. This can lead to an indirect response in unknown correlated traits and a reduction in the fitness of domesticated individuals in the wild. Previous models for the effect of gene flow from domesticated species to wild relatives have assumed that evolution occurs in one dimension. Here, I develop a quantitative genetic model for the balance between migration and multivariate stabilizing selection. Different forms of correlational selection consistent with a given observed ratio between the average fitness of domesticated and wild individuals offset the phenotypic means at migration-selection balance away from predictions based on simpler one-dimensional models. For almost all parameter values, correlational selection leads to a reduction in the migration load. For ridge selection, this reduction arises because the distance by which the immigrants deviate from the local optimum is, in effect, reduced. For realistic parameter values, however, the effect of correlational selection on the load is small, suggesting that simpler one-dimensional models may still be adequate for predicting mean population fitness and viability.
Spectrophotometric properties of Moon's and Mars's surfaces exploration by shadow mechanism
NASA Astrophysics Data System (ADS)
Morozhenko, Alexandr; Vidmachenko, Anatolij; Kostogryz, Nadiia
2015-03-01
Typically, the phase dependence of brightness of atmosphereless celestial bodies is analyzed with some modification of the shadow mechanism combined with the coherent mechanism. Several modifications of the Hapke model [2] exist, divided into two groups by the number of unknown parameters: the first with 4 parameters [3,4] and the second with up to 10 unknown parameters [1], providing good agreement between observations and calculations at several wavelengths. However, they are complicated by analysis of the color-index dependence C(α) and of the photometric contrast of details with phase, K(α), and across the disk (μ0 = cos i). We obtained good agreement between observed and calculated values of C(α) = U(α) - I(α), K(α), and K(μ0) for the Moon and Mars with a minimum number of unknown parameters [4]. We used an empirical dependence of the particle semi-transparency (æ) on the single-scattering albedo (ω): æ = (1 - ω)^n. Assuming that χ(0°)/χ(5°) = χ(5°)/χ(10°), where χ(α) is the scattering function, and using the phase dependence of brightness and the opposition effect at a single wavelength, we determined ω, χ(α), g (the particle packing factor), and x1, the first-term coefficient of the expansion of χ(α) in a series of Legendre polynomials. Good agreement between calculated and observed values of C(α) = U(α) - I(α) for the light and dark parts of the lunar surface and for the integral disk is reached at n ≈ 0.25, g = 0.4 (porosity 0.91), x1 = -0.93, ω = 0.137 at λ = 359 nm and 0.394 at λ = 1064 nm; for Mars, at n ≈ 0.25, g = 0.6 (porosity 0.84), x1 ≈ 0, ω = 0.210 at λ = 359 nm and ω = 0.784 at λ = 730 nm.
1. Bowell E., Hapke B., Domingue D., Lumme K., et al., Applications of photometric models to asteroids, in Asteroids II, Tucson: Univ. Arizona Press, p. 524-556 (1989). 2. Hapke B., A theoretical photometric function for the lunar surface, J. Geophys. Res. 68, No. 15, 4571-4586 (1963). 3. Irvine W. M., The shadowing effect in diffuse reflection, J. Geophys. Res. 71, No. 12, 2931-2937 (1966). 4. Morozhenko A. V., Yanovitskij E. G., An optical model of the Martian surface in the visible region of spectrum, Astronomy Reports 48, No. 4, 795-809 (1971).
An Enhanced Box-Wing Solar Radiation pressure model for BDS and initial results
NASA Astrophysics Data System (ADS)
Zhao, Qunhe; Wang, Xiaoya; Hu, Xiaogong; Guo, Rui; Shang, Lin; Tang, Chengpan; Shao, Fan
2016-04-01
Solar radiation pressure is the largest non-gravitational perturbation acting on GNSS satellites, and it is difficult to model accurately due to the complicated, changing satellite attitude and unknown surface material characteristics. By the end of 2015, more than 50 stations of the Multi-GNSS Experiment (MGEX) had been set up by the IGS. The simple box-plate model relies on coarse assumptions about the dimensions and optical properties of the satellite for lack of more detailed information. We therefore developed a more sophisticated physical model based on the BOX-WING model, which takes more of the detailed physical structure into account and calculates pressure forces from the geometric relations between light rays and surfaces. All the MGEX stations and IGS core stations were processed in precise orbit determination tests with GPS and BDS observations. The calculations cover both eclipsing and non-eclipsing periods in 2015, and we adopted the undifferenced observation mode and more accurate values of the satellite phase centers. We first tried a nine-parameter model and then eliminated the strongly correlated parameters, arriving at a five-parameter model. The five estimated parameters were a solar scale factor, a y-bias, and three material coefficients (for the solar panels and the x-axis and z-axis panels). Initial results showed that, in the period of yaw-steering mode, use of the enhanced ADBOXW model gives a small improvement for IGSO and MEO satellites, with the root-mean-square (RMS) error of the one-day arc orbits decreasing by about 10-30%, except for C08 and C14. The new model mainly improved the along-track component, by up to 30%, while the improvement in the radial component was not obvious.
Satellite Laser Ranging (SLR) validation showed, however, that this model had higher prediction accuracy in the period of orbit-normal mode, compared to the GFZ multi-GNSS orbit products as well as to the relative post-processing results. Because of system biases and unknown causes, the GEO satellites gave poor results; after adding some Chinese regional stations, the orbit precision improved appreciably. This model can be used as an a priori model to help build empirical models in later work.
NASA Technical Reports Server (NTRS)
Stepner, D. E.; Mehra, R. K.
1973-01-01
A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and that it includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing, and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. A method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are given first, and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.
NASA Astrophysics Data System (ADS)
Liu, X. Y.; Alfi, S.; Bruni, S.
2016-06-01
A model-based condition monitoring strategy for railway vehicle suspensions is proposed in this paper. The approach is based on a recursive least squares (RLS) algorithm focusing on a deterministic 'input-output' model. RLS has a Kalman filtering feature and is able to identify unknown parameters from a noisy dynamic system by memorizing the correlation properties of the variables. The identification of the suspension parameters is achieved by machine learning of the relationship between excitation and response in the vehicle dynamic system. A fault detection method for the vertical primary suspension is illustrated as an instance of this condition monitoring scheme. Simulation results from the rail vehicle dynamics software 'ADTreS' are utilized as 'virtual measurements', considering a trailer car of the Italian ETR500 high-speed train. Field test data from an E464 locomotive are also employed to validate the feasibility of this strategy for real applications. The results of the parameter identification indicate that the estimated suspension parameters are consistent with, or close to, the reference values. These results provide supporting evidence that this fault diagnosis technique can pave the way for future vehicle condition monitoring systems.
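The RLS recursion at the core of such a scheme can be sketched for a two-parameter regression y = phi^T theta. The damping/stiffness values and the excitation below are invented placeholders, not the ETR500 or E464 data:

```python
import math

# Recursive least squares for y_k = phi_k^T theta + noise, theta unknown.
# Two-parameter case with the covariance P kept as an explicit 2x2 list;
# lam is the forgetting factor (1.0 = no forgetting).
def rls_run(phis, ys, lam=1.0):
    theta = [0.0, 0.0]
    P = [[1000.0, 0.0], [0.0, 1000.0]]      # large initial covariance
    for phi, y in zip(phis, ys):
        # gain k = P phi / (lam + phi^T P phi)
        Pp = [P[0][0] * phi[0] + P[0][1] * phi[1],
              P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pp[0] + phi[1] * Pp[1]
        k = [Pp[0] / denom, Pp[1] / denom]
        err = y - (phi[0] * theta[0] + phi[1] * theta[1])
        theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
        # P = (P - k (P phi)^T) / lam
        P = [[(P[i][j] - k[i] * Pp[j]) / lam for j in range(2)]
             for i in range(2)]
    return theta

c_true, k_true = 1200.0, 30000.0            # invented damping / stiffness
phis = [[math.sin(0.1 * t), math.cos(0.3 * t)] for t in range(200)]
ys = [c_true * p[0] + k_true * p[1] for p in phis]
theta = rls_run(phis, ys)
```

In the condition-monitoring application, a drift of the recursively estimated parameters away from their nominal values is the fault indicator; a forgetting factor lam < 1 lets the estimate track such slow changes.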
Human arm stiffness and equilibrium-point trajectory during multi-joint movement.
Gomi, H; Kawato, M
1997-03-01
By using a newly designed high-performance manipulandum and a new estimation algorithm, we measured human multi-joint arm stiffness parameters during multi-joint point-to-point movements on a horizontal plane. This manipulandum allows us to apply a sufficient perturbation to the subject's arm within a brief period during movement. Arm stiffness parameters were reliably estimated using a new algorithm in which all unknown structural parameters could be estimated independent of arm posture (i.e., as constant values under any arm posture). Arm stiffness during transverse movement was considerably greater than that during the corresponding posture, but this was not the case during longitudinal movement. Although the ratios of elbow, shoulder, and double-joint stiffness varied in time, the orientation of the stiffness ellipses did not change much during movement. Equilibrium-point trajectories predicted from the measured stiffness parameters and actual trajectories were slightly sinusoidally curved in Cartesian space, and their velocity profiles were quite different from those of the actual hand trajectories. This result contradicts the hypothesis that the brain does not take dynamics into account in movement control, relying instead on the neuromuscular servo mechanism; rather, it implies that the brain needs to acquire internal models of controlled objects.
A hybrid Pade-Galerkin technique for differential equations
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1993-01-01
A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Pade expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Pade approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Pade approximation are replaced by new (unknown) parameters delta{sub j}. These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation be orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Pade approximations fail to do so. The method is discussed and topics for future investigations are indicated.
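Steps one and two of the hybrid technique can be illustrated compactly: a truncated power series is converted into a rational approximant. The series below is the Taylor series of exp, used purely as a stand-in for a perturbation series, and SciPy's `pade` helper is assumed to be available; step three (the Galerkin replacement of powers of epsilon by parameters delta_j) is problem-specific and not shown.

```python
import numpy as np
from scipy.interpolate import pade

# Stand-in for step one: coefficients of a truncated series in epsilon
# (here exp(eps) = 1 + eps + eps^2/2 + eps^3/6, purely for illustration).
series = [1.0, 1.0, 1.0 / 2.0, 1.0 / 6.0]

# Step two: convert the truncated series into a rational (Pade) approximant.
p, q = pade(series, 1)   # numerator and denominator as numpy poly1d objects

eps = 0.1
approx = p(eps) / q(eps)  # rational approximation, typically more accurate
                          # than the truncated series it was built from
```

The Pade form often remains accurate where the raw series diverges, which is exactly why the hybrid method uses it as the starting point for the Galerkin correction.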
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Ahmad, S; Alsbou, N
Purpose: To develop a 4D cone-beam CT (4D-CBCT) algorithm based on motion modeling that extracts the actual length, CT number level and motion amplitude of a mobile target retrospective to image reconstruction. Methods: The algorithm used three measurable parameters, the apparent length and the blurred CT number distribution of a mobile target obtained from CBCT images, to determine the actual length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm were tested with mobile targets of different well-known sizes made from tissue-equivalent gel inserted into a thorax phantom. The phantom moved sinusoidally in one direction to simulate respiratory motion using eight amplitudes ranging from 0 to 20 mm. Results: Using this 4D-CBCT algorithm, three unknown parameters were extracted retrospective to image reconstruction: the length of the target, the CT number level, and the speed or motion amplitude of the mobile targets. The motion algorithm solved for the three unknown parameters using the measurable apparent length, CT number level and gradient of a well-defined mobile target obtained from CBCT images. The motion model agreed with the measured apparent lengths, which depended on the actual target length and the motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, the actual target length and the motion amplitude. Motion frequency and phase did not affect the elongation or the CT number distribution of the mobile target and could not be determined. Conclusion: A 4D-CBCT motion algorithm was developed to extract three parameters, the actual length, CT number level and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters.
This algorithm provides an alternative to 4D-CBCT that does not require motion tracking or sorting of the images into different breathing phases, and it has potential applications in diagnostic CT imaging and radiotherapy.
Alsbou, Nesreen; Ahmad, Salahuddin; Ali, Imad
2016-05-17
A motion algorithm has been developed to extract the length, CT number level and motion amplitude of a mobile target from cone-beam CT (CBCT) images. The algorithm uses three measurable parameters, the apparent length and the blurred CT number distribution of a mobile target obtained from CBCT images, to determine the length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm are tested with mobile targets of different well-known sizes made from tissue-equivalent gel inserted into a thorax phantom. The phantom moves sinusoidally in one direction to simulate respiratory motion using eight amplitudes ranging from 0 to 20 mm. Using this motion algorithm, three unknown parameters are extracted from the CBCT images: the length of the target, the CT number level, and the speed or motion amplitude of the mobile targets. The motion algorithm solves for the three unknown parameters using the measured length, CT number level and gradient of a well-defined mobile target obtained from CBCT images. The motion model agrees with the measured lengths, which depend on the target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, the target length and the motion amplitude. Motion frequency and phase do not affect the elongation or the CT number distribution of the mobile target and could not be determined. The algorithm extracts these three parameters directly from reconstructed CBCT images without prior knowledge of the stationary target parameters, and provides an alternative to 4D-CBCT that does not require motion tracking or sorting of the images into different breathing phases.
The motion model developed here works well for tumors that have simple shapes, high contrast relative to the surrounding tissues, and a nearly regular motion pattern that can be approximated by a simple sinusoidal function. The algorithm has potential applications in diagnostic CT imaging and radiotherapy in terms of motion management.
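A toy inversion in the spirit of this algorithm can be written for a one-dimensional rectangular target blurred by a uniform motion kernel. The trapezoidal relations below are a deliberate simplification for illustration (and require the target to be longer than the blur width); they are not the authors' sinusoidal-motion model, and all numbers are made up.

```python
def recover_target(base_width, plateau, gradient):
    """Invert a simple trapezoidal blur model (illustrative assumptions only).

    A stationary rectangular target (length L, CT level C0) convolved with a
    uniform motion kernel of width 2A gives a trapezoid with base width
    L + 2A, plateau C0 (valid while L > 2A) and edge gradient C0 / (2A).
    """
    c0 = plateau                        # plateau directly gives the CT level
    amplitude = c0 / (2.0 * gradient)   # the edge slope encodes the blur width
    length = base_width - 2.0 * amplitude
    return length, c0, amplitude

# Self-consistency check: forward-simulate a known target, then invert it.
L, C0, A = 30.0, 100.0, 5.0   # mm, HU, mm (hypothetical values)
length, ct_level, amplitude = recover_target(L + 2 * A, C0, C0 / (2 * A))
```

The point of the sketch is the counting argument from the abstract: three measurables (apparent width, plateau level, edge gradient) suffice to solve for three unknowns (length, stationary CT level, motion amplitude).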
Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul
2015-01-01
In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on hybridizing the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) is converted, through substitution, into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem and thereby obtain the unknown parameters. The proposed scheme is successfully applied to the generalized Burgers'-Fisher equation. Comparison of the numerical results with the exact solutions, and with solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), the homotopy perturbation method (HPM), and the optimal homotopy asymptotic method (OHAM), shows that the suggested scheme is fairly accurate and viable for solving such problems.
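The core idea, an exponential ansatz whose unknown parameters are found by evolutionary minimization of the equation residual, can be sketched on a simpler equation. Here a logistic ODE stands in for the Burgers'-Fisher NODE, and SciPy's differential evolution stands in for the GA; the ansatz and bounds are illustrative choices.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Collocation points at which the residual (fitness) is evaluated.
x = np.linspace(-2.0, 2.0, 41)

def fitness(params):
    # Exp-function-style ansatz u(x) = 1 / (1 + exp(a*x + b)) for the
    # logistic equation u' = -u(1 - u) with u(0) = 1/2 (exact at a=1, b=0).
    a, b = params
    e = np.exp(a * x + b)
    u = 1.0 / (1.0 + e)
    du = -a * e / (1.0 + e) ** 2          # analytic derivative of the ansatz
    residual = du + u * (1.0 - u)         # ODE residual at collocation points
    bc = 1.0 / (1.0 + np.exp(b)) - 0.5    # boundary-condition mismatch at x=0
    return float(np.sum(residual ** 2) + bc ** 2)

# Evolutionary global minimizer plays the role of the GA in the paper.
result = differential_evolution(fitness, bounds=[(-3, 3), (-3, 3)], seed=0)
a_hat, b_hat = result.x
```

The global minimum of the fitness is zero at the exact parameters, so the evolutionary search recovers them without any gradient information about the NODE itself.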
Determining H {sub 0} with Bayesian hyper-parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardona, Wilmar; Kunz, Martin; Pettorino, Valeria, E-mail: wilmar.cardona@unige.ch, E-mail: Martin.Kunz@unige.ch, E-mail: valeria.pettorino@thphys.uni-heidelberg.de
We re-analyse recent Cepheid data to estimate the Hubble parameter H {sub 0} using Bayesian hyper-parameters (HPs). We consider the two data sets from Riess et al. 2011 and 2016 (labelled R11 and R16, with R11 containing less than half the data of R16), include the available anchor distances (the megamaser system NGC4258, detached eclipsing binary distances to the LMC and M31, and MW Cepheids with parallaxes), use a weak metallicity prior, and apply no period cut for Cepheids. We find that part of the R11 data is down-weighted by the HPs but that R16 is mostly consistent with expectations for a Gaussian distribution, meaning that there is no need to down-weight the R16 data set. For R16, we find a value of H {sub 0} = 73.75 ± 2.11 km s{sup −1} Mpc{sup −1} if we use HPs for all data points (including Cepheid stars, type Ia supernovae, and the available anchor distances), which is about 2.6 σ larger than the Planck 2015 value of H {sub 0} = 67.81 ± 0.92 km s{sup −1} Mpc{sup −1} and about 3.1 σ larger than the updated Planck 2016 value of 66.93 ± 0.62 km s{sup −1} Mpc{sup −1}. If we perform a standard χ{sup 2} analysis as in R16, we find H {sub 0} = 73.46 ± 1.40 (stat) km s{sup −1} Mpc{sup −1}. We test the effect of different assumptions and find that the choice of anchor distances affects the final value significantly. If we exclude the Milky Way from the anchors, the value of H {sub 0} decreases. We find, however, no evident reason to exclude the MW data. The HP method used here avoids subjective rejection criteria for outliers and offers a way to test data sets for unknown systematics.
Park, Haejun; Rangwala, Ali S; Dembsey, Nicholas A
2009-08-30
A method to estimate the thermal and kinetic parameters of Pittsburgh seam coal subject to thermal runaway is presented using the standard ASTM E 2021 hot surface ignition test apparatus. The parameters include the thermal conductivity (k), the activation energy (E), and the coupled term (QA) of the heat of reaction (Q) and pre-exponential factor (A), which are required, but rarely known, input values for determining the thermal runaway propensity of a dust material. Four different dust layer thicknesses, 6.4, 12.7, 19.1 and 25.4 mm, are tested; among them, a single steady-state temperature profile of the 12.7 mm thick dust layer is used to estimate k, E and QA. k is calculated by equating the heat flux from the hot surface into the layer with the heat loss rate at the boundary, assuming negligible heat generation in the coal dust layer at a low hot surface temperature. E and QA are calculated by fitting a numerically estimated steady-state dust layer temperature distribution to the experimentally obtained temperature profile of the 12.7 mm thick dust layer. The two unknowns, E and QA, are reduced to one using the correlation between E and QA obtained at the criticality of thermal runaway. The estimated k is 0.1 W/mK, matching the previously reported value. E ranges from 61.7 to 83.1 kJ/mol, and the corresponding QA ranges from 1.7 x 10(9) to 4.8 x 10(11) J/kg s. The mean values of E (72.4 kJ/mol) and QA (2.8 x 10(10) J/kg s) are used to predict the critical hot surface temperatures for the other thicknesses, and good agreement is observed between the predicted and experimental values. The estimated E and QA ranges also match the corresponding ranges calculated from the multiple-tests method and values reported in previous research.
NASA Astrophysics Data System (ADS)
Gaál, Ladislav; Szolgay, Ján; Kohnová, Silvia; Hlavčová, Kamila; Viglione, Alberto
2010-01-01
The paper deals with at-site flood frequency estimation in the case when information on hydrological events of extraordinary magnitude from the past is also available. For the joint frequency analysis of systematic observations and historical data, the Bayesian framework is chosen, which, through adequately defined likelihood functions, allows different sources of hydrological information to be incorporated, e.g., maximum annual flood peaks, historical events, and measurement errors. The distribution of the parameters of the fitted distribution function and the confidence intervals of the flood quantiles are derived by means of the Markov chain Monte Carlo (MCMC) simulation technique. The paper presents a sensitivity analysis related to the choice of the most influential parameters of the statistical model, which are the length of the historical period
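The MCMC machinery used to derive such parameter distributions can be shown in miniature with a Metropolis random walk over a single parameter. A normal likelihood with known scale stands in for the flood-frequency model (a GEV likelihood with historical-flood terms would simply replace `log_post`); all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 200 'annual maxima' drawn from a normal model with known
# scale; only the location parameter mu is treated as unknown.
data = rng.normal(3.0, 1.0, size=200)

def log_post(mu):
    # Flat prior, sigma = 1 known, so the log-posterior is just the log-likelihood.
    return -0.5 * np.sum((data - mu) ** 2)

# Metropolis random walk over the unknown parameter.
chain = np.empty(5000)
mu, lp = 0.0, log_post(0.0)
for t in range(chain.size):
    prop = mu + 0.5 * rng.normal()             # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        mu, lp = prop, lp_prop
    chain[t] = mu

posterior_mean = chain[1000:].mean()           # discard burn-in samples
```

Quantile confidence intervals fall out of the same chain for free, e.g. `np.percentile(chain[1000:], [2.5, 97.5])`, which is exactly how the flood-quantile intervals in the paper are obtained.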
Inverse modeling with RZWQM2 to predict water quality
USDA-ARS?s Scientific Manuscript database
Agricultural systems models such as RZWQM2 are complex and have numerous parameters that are unknown and difficult to estimate. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyata, Y.; Suzuki, T.; Takechi, M.
2015-07-15
For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method with an equilibrium calculation in JT-60SA for a plasma shape with high elongation and triangularity. The CCS, on which both the Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction in JT-60SA is sensitive to the CCS free parameters, such as the number of unknown parameters and the shape of the surface. It is found that the optimum number of unknown parameters and the size of the CCS that minimize errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in the plasma shape and the locations of the strike points are within the target ranges in JT-60SA.
The Kaon B-parameter in mixed action chiral perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aubin, C.; /Columbia U.; Laiho, Jack
2006-09-01
We calculate the kaon B-parameter, B{sub K}, in chiral perturbation theory for a partially quenched, mixed action theory with Ginsparg-Wilson valence quarks and staggered sea quarks. We find that the resulting expression is similar to that in the continuum, and in fact has only two additional unknown parameters. At one-loop order, taste-symmetry violations in the staggered sea sector only contribute to flavor-disconnected diagrams by generating an O(a{sup 2}) shift to the masses of taste-singlet sea-sea mesons. Lattice discretization errors also give rise to an analytic term which shifts the tree-level value of B{sub K} by an amount of O(a{sup 2}). This term, however, is not strictly due to taste-breaking, and is therefore also present in the expression for B{sub K} for pure Ginsparg-Wilson lattice fermions. We also present a numerical study of the mixed B{sub K} expression in order to demonstrate that both discretization errors and finite volume effects are small and under control on the MILC improved staggered lattices.
Kaon B-parameter in mixed action chiral perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aubin, C.; Laiho, Jack; Water, Ruth S. van de
2007-02-01
We calculate the kaon B-parameter, B{sub K}, in chiral perturbation theory for a partially quenched, mixed-action theory with Ginsparg-Wilson valence quarks and staggered sea quarks. We find that the resulting expression is similar to that in the continuum, and in fact has only two additional unknown parameters. At 1-loop order, taste-symmetry violations in the staggered sea sector only contribute to flavor-disconnected diagrams by generating an O(a{sup 2}) shift to the masses of taste-singlet sea-sea mesons. Lattice discretization errors also give rise to an analytic term which shifts the tree-level value of B{sub K} by an amount of O(a{sup 2}). This term, however, is not strictly due to taste breaking, and is therefore also present in the expression for B{sub K} for pure Ginsparg-Wilson lattice fermions. We also present a numerical study of the mixed B{sub K} expression in order to demonstrate that both discretization errors and finite volume effects are small and under control on the MILC improved staggered lattices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langrish, T.A.G.; Harvey, A.C.
2000-01-01
A model of a well-mixed fluidized-bed dryer within a process flowsheeting package (SPEEDUP{trademark}) has been developed and applied to a parameter sensitivity study, a steady-state controllability analysis and an optimization study. This approach is more general, and would be more easily applied to a complex flowsheet, than one relying on stand-alone dryer modeling packages. The simulation has shown that industrial data may be fitted to the model outputs with sensible values of the unknown parameters. For this case study, the parameter sensitivity study found that the heat loss from the dryer and the critical moisture content of the material have the greatest impact on dryer operation at the current operating point. An optimization study demonstrated the dominant effect of the heat loss from the dryer on the current operating cost and operating conditions; substantial cost savings (around 50%) could be achieved with a well-insulated and airtight dryer for the specific case studied here.
NASA Technical Reports Server (NTRS)
Abbott, Terence S.; Nataupsky, Mark; Steinmetz, George G.
1987-01-01
A ground-based aircraft simulation study was conducted to determine the effects on pilot preference and performance of integrating airspeed and altitude information into an advanced electronic primary flight display via moving-tape (linear moving scale) formats. Several key issues relating to the implementation of moving-tape formats were examined in this study: tape centering, tape orientation, and trend information. The factor of centering refers to whether the tape was centered about the actual airspeed or altitude or about some other defined reference value. Tape orientation refers to whether the represented values are arranged in descending or ascending order. Two pilots participated in this study, with each performing 32 runs along seemingly random, previously unknown flight profiles. The data taken, analyzed, and presented consisted of path performance parameters, pilot-control inputs, and electrical brain response measurements.
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
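The problem setup Gm = d with bound constraints can be illustrated with a bound-constrained least-squares solve. This is only a stand-in for the MRE method itself (which instead derives a probability density for m, constrained by the bounds, a prior expected value, and the data, and reports its expected value); the forward matrix and source history below are invented.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)

# Illustrative forward model: G maps the unknown source history m to data d
# (e.g. a groundwater transport kernel in the paper's application).
G = rng.uniform(size=(20, 5))
m_true = np.array([0.1, 0.4, 0.9, 0.4, 0.1])
d = G @ m_true

# Bound-constrained least squares, standing in for the MRE expected value.
# Both approaches use the same ingredients: G, d, and lower/upper bounds on m.
res = lsq_linear(G, d, bounds=(0.0, 1.0))
m_hat = res.x
```

With noise-free data and a well-conditioned G the constrained least-squares solution recovers m exactly; the MRE formulation becomes valuable precisely when the problem is ill-posed and the prior expected value must regularize the answer.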
Selection of latent variables for multiple mixed-outcome models
ZHOU, LING; LIN, HUAZHEN; SONG, XINYUAN; LI, YI
2014-01-01
Latent variable models have been widely used for modeling the dependence structure of multiple-outcome data. However, the formulation of a latent variable model is often unknown a priori, and misspecification will distort the dependence structure and lead to unreliable model inference. Moreover, multiple outcomes of varying types present enormous analytical challenges. In this paper, we present a class of general latent variable models that can accommodate mixed types of outcomes. We propose a novel selection approach that simultaneously selects latent variables and estimates parameters. We show that the proposed estimator is consistent, asymptotically normal and has the oracle property. The practical utility of the methods is confirmed via simulations as well as an application to the analysis of the World Values Survey, a global research project that explores people's values and beliefs and the social and personal characteristics that might influence them. PMID:27642219
NASA Astrophysics Data System (ADS)
Dudaryonok, A. S.; Lavrentieva, N. N.; Buldyreva, J.
2018-06-01
(J, K)-line broadening and shift coefficients, with their temperature-dependence characteristics, are computed for the perpendicular (ΔK = ±1) ν6 band of the 12CH3D-N2 system. The computations are based on a semi-empirical approach which consists in the use of analytical Anderson-type expressions multiplied by a few-parameter correction factor to account for various deviations from the approximations of Anderson's theory. A mathematically convenient form of the correction factor is chosen on the basis of experimental rotational dependences of line widths, and its parameters are fitted to some experimental line widths at 296 K. To obtain the unknown CH3D polarizability in the excited vibrational state v6 for line-shift calculations, a parametric vibration-state-dependent expression is suggested, with two parameters adjusted to some room-temperature experimental values of line shifts. Having been validated by comparison with the experimental values available in the literature for various sub-branches of the band, this approach is used to generate massive data sets of line-shape parameters for the extended ranges of rotational quantum numbers (J up to 70 and K up to 20) typically requested for spectroscopic databases. To obtain the temperature-dependence characteristics of line widths and line shifts, computations are performed for various temperatures in the range 200-400 K recommended for HITRAN, and least-squares fit procedures are applied. For the line widths, a strong sub-branch dependence with increasing K is observed in the R- and P-branches; for the line shifts, such a dependence is found in the Q-branch.
Nelson, Stacy; English, Shawn; Briggs, Timothy
2016-05-06
Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as of methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this paper deal with demonstrating the potential value of sensitivity and uncertainty quantification techniques during the failure analysis of loaded composite structures; the proposed methods are applied to the simulation of the four-point flexural characterization of a carbon fiber composite material. Utilizing a recently implemented, phenomenological orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior, as well as to determine the value of an unknown material property. This process demonstrates the ability to formulate accurate predictions in the absence of a rigorous material characterization effort. Finally, the presented results indicate that a sensitivity analysis and parameter study can be used to streamline the material definition process, as the described flexural characterization was used for model validation.
Distributed parameter estimation in unreliable sensor networks via broadcast gossip algorithms.
Wang, Huiwei; Liao, Xiaofeng; Wang, Zidong; Huang, Tingwen; Chen, Guo
2016-01-01
In this paper, we present an asynchronous algorithm to estimate an unknown parameter over an unreliable network that allows new sensors to join and old sensors to leave, and can tolerate link failures. Each sensor has access to partially informative measurements when it is awakened. In addition, the proposed algorithm avoids interference among messages and effectively reduces the accumulated measurement and quantization errors. Based on the theory of stochastic approximation, we prove that the proposed algorithm almost surely converges to the unknown parameter. Finally, we present a numerical example to assess the performance and the communication cost of the algorithm. Copyright © 2015 Elsevier Ltd. All rights reserved.
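The broadcast-gossip flavor of such algorithms fits in a few lines: at each asynchronous wake-up one node broadcasts its value and its neighbours mix it into their own. The complete-graph topology, mixing weight, and iteration count below are illustrative, not the paper's setup, and no link failures or quantization are modelled.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10
x = rng.uniform(0.0, 10.0, size=n)   # each sensor's noisy local estimate

gamma = 0.5                          # mixing weight (assumed value)
for _ in range(2000):
    i = rng.integers(n)              # one node wakes up and broadcasts x[i]
    for j in range(n):
        if j != i:                   # neighbours (complete graph) mix it in
            x[j] = (1.0 - gamma) * x[j] + gamma * x[i]

spread = float(x.max() - x.min())    # consensus: the spread shrinks toward 0
```

Note that, unlike pairwise gossip, broadcast gossip does not conserve the running sum, so the consensus value is only the initial average in expectation; handling that bias (along with joins, leaves, and link failures) is part of what the paper's analysis addresses.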
Analysis of multinomial models with unknown index using data augmentation
Royle, J. Andrew; Dorazio, R.M.; Link, W.A.
2007-01-01
Multinomial models with unknown index ('sample size') arise in many practical settings. In practice, Bayesian analysis of such models has proved difficult because the dimension of the parameter space is not fixed, being in some cases a function of the unknown index. We describe a data augmentation approach to the analysis of this class of models that provides for a generic and efficient Bayesian implementation. Under this approach, the data are augmented with all-zero detection histories. The resulting augmented dataset is modeled as a zero-inflated version of the complete-data model where an estimable zero-inflation parameter takes the place of the unknown multinomial index. Interestingly, data augmentation can be justified as being equivalent to imposing a discrete uniform prior on the multinomial index. We provide three examples involving estimating the size of an animal population, estimating the number of diabetes cases in a population using the Rasch model, and the motivating example of estimating the number of species in an animal community with latent probabilities of species occurrence and detection.
Jasper, Niklas; Däbritz, Jan; Frosch, Michael; Loeffler, Markus; Weckesser, Matthias; Foell, Dirk
2010-01-01
Fever of unknown origin (FUO) and unexplained signs of inflammation are challenging medical problems, especially in children, and are predominantly caused by infections, malignancies or noninfectious inflammatory diseases. The aim of this study was to assess the diagnostic value of (18)F-FDG PET and PET/CT in the diagnostic work-up of paediatric patients. In this retrospective study, 47 FDG PET and 30 PET/CT scans from 69 children (median age 8.1 years, range 0.2-18.1 years, 36 male, 33 female) were analysed. The diagnostic value of PET investigations was analysed in paediatric patients presenting with FUO (44 scans) or unexplained signs of inflammation without fever (33 scans). A diagnosis could be established in 32 patients (54%). Of all scans, 63 (82%) were abnormal, and of the total of 77 PET and PET/CT scans, 35 (45%) were clinically helpful. In patients with a final diagnosis, the scans were found to have contributed to the diagnosis in 73%. Laboratory, demographic or clinical parameters of the children did not predict the usefulness of the FDG PET scans. This is the first larger study demonstrating that FDG PET and PET/CT may be valuable diagnostic tools for the evaluation of children with FUO and unexplained signs of inflammation. Because it depicts inflammation in the whole body without being traumatic, FDG PET is attractive for use especially in children. The combination of PET with CT seems to be superior, since the site of inflammation can be localized more accurately.
Initial report on the photometric study of Vestoids from Modra
NASA Astrophysics Data System (ADS)
Galád, A.; Gajdoš, Š.; Világi, J.
2014-07-01
Our new survey with a 0.6-m f/5.5 telescope, started in August 2012, is intended to enlarge the sample of V-type asteroids studied photometrically. It is focused on objects with unknown rotation periods. Due to some limitations of the facility, exposure times are usually only 60 s and only a clear filter is used. About 12 vestoids with previously unknown rotation periods can be studied in detail during one season (from August to May) in Modra (though in some cases the period still cannot be determined). The list of targets studied during the first two seasons is available at http://www.fmph.uniba.sk/index.php?id=3161. Lightcurves are roughly linked using the Carlsberg Meridian Catalogue 14 (CMC14) stars in the field of view to about 0.05 mag accuracy. The slope parameter G is assumed to be as high as 0.3-0.4. When the observations cover a wide range of phase angles and the rotation period can be determined (though not in the case of tumblers), the G value is roughly determined. In some cases, even higher values provide a better match to the lightcurve data; in one case, the best nominal value is formally lower, but the uncertainty is large. To date, we have detected two binary candidates with attenuation(s) in their lightcurves, and the lightcurves of a few targets indicate tumbling. The study of the rotational properties of vestoids is a long-term process. To speed it up, we would appreciate collaboration with other research groups and/or volunteers.
Doloc-Mihu, Anca; Calabrese, Ronald L
2016-01-01
The underlying mechanisms that support robustness in neuronal networks are as yet unknown. However, recent studies provide evidence that neuronal networks are robust to natural variations, modulation, and environmental perturbations of parameters, such as maximal conductances of intrinsic membrane and synaptic currents. Here we sought a method for assessing robustness, which might easily be applied to large brute-force databases of model instances. Starting with groups of instances with appropriate activity (e.g., tonic spiking), our method classifies instances into much smaller subgroups, called families, in which all members vary only by the one parameter that defines the family. By analyzing the structures of families, we developed measures of robustness for activity type. Then, we applied these measures to our previously developed model database, HCO-db, of a two-neuron half-center oscillator (HCO), a neuronal microcircuit from the leech heartbeat central pattern generator where the appropriate activity type is alternating bursting. In HCO-db, the maximal conductances of five intrinsic and two synaptic currents were varied over eight values (leak reversal potential also varied, five values). We focused on how variations of particular conductance parameters maintain normal alternating bursting activity while still allowing for functional modulation of period and spike frequency. We explored the trade-off between robustness of activity type and desirable change in activity characteristics when intrinsic conductances are altered and identified the hyperpolarization-activated (h) current as an ideal target for modulation. We also identified ensembles of model instances that closely approximate physiological activity and can be used in future modeling studies.
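The "family" construct described above can be computed directly from a brute-force database: grouping instances by all parameters except the family-defining one collects, in each group, the members that vary only in that parameter. The tiny database and conductance names below are illustrative, not HCO-db itself.

```python
from collections import defaultdict

def families(instances, param_names, family_param):
    """Group model instances into families whose members share all
    parameter values except the one that defines the family."""
    groups = defaultdict(list)
    for inst in instances:
        key = tuple(inst[p] for p in param_names if p != family_param)
        groups[key].append(inst[family_param])
    return groups

# Toy database over two conductances; 'g_h' defines the families.
db = [{"g_h": gh, "g_leak": gl} for gh in (1, 2, 3) for gl in (10, 20)]
fams = families(db, ["g_h", "g_leak"], "g_h")
# two families (one per g_leak value), each varying only in g_h
```

The structure of each family (e.g., how many consecutive parameter values preserve the activity type) then serves as a robustness measure.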
NASA Astrophysics Data System (ADS)
Xiong, Wei; Skalský, Rastislav; Porter, Cheryl H.; Balkovič, Juraj; Jones, James W.; Yang, Di
2016-09-01
Understanding the interactions between agricultural production and climate is necessary for sound decision-making in climate policy. Gridded, high-resolution crop simulation has emerged as a useful tool for building this understanding, but large uncertainty exists in its use, obstructing its capacity as a tool to devise adaptation strategies. Increasing attention has been given to sources of uncertainty in climate scenarios, input data, and model structure, but uncertainties due to model parameters or calibration remain largely unknown. Here, we use publicly available geographical data sets as input to the Environmental Policy Integrated Climate model (EPIC) to simulate global-gridded maize yield. Impacts of climate change are assessed up to the year 2099 under a climate scenario generated by HadGEM2-ES under RCP 8.5. We apply five calibration strategies, shifting one specific parameter in each simulation, to understand the effects of calibration. Regionalizing crop phenology or harvest index appears effective for calibrating the model globally, but using different phenology values generates pronounced differences in the estimated climate impact. However, the projected impacts of climate change on global maize production are consistently negative regardless of the parameter being adjusted. Different parameter values result in modest uncertainty at the global level, with differences in the projected global yield change of less than 30% by the 2080s. This uncertainty decreases when model calibration or input-data quality control is applied. Calibration has a larger effect at local scales, implying possible types and locations for adaptation.
Janisse, Kevyn; Doucet, Stéphanie M.
2017-01-01
Perceptual models of animal vision have greatly contributed to our understanding of animal-animal and plant-animal communication. The receptor-noise model of color contrasts has been central to this research as it quantifies the difference between two colors for any visual system of interest. However, if the properties of the visual system are unknown, assumptions regarding parameter values must be made, generally with unknown consequences. In this study, we conduct a sensitivity analysis of the receptor-noise model using avian visual system parameters to systematically investigate the influence of variation in light environment, photoreceptor sensitivities, photoreceptor densities, and light transmission properties of the ocular media and the oil droplets. We calculated the chromatic contrast of 15 plumage patches to quantify a dichromatism score for 70 species of Galliformes, a group of birds that display a wide range of sexual dimorphism. We found that the photoreceptor densities and the wavelength of maximum sensitivity of the short-wavelength-sensitive photoreceptor 1 (SWS1) can change dichromatism scores by 50% to 100%. In contrast, the light environment, transmission properties of the oil droplets, transmission properties of the ocular media, and the peak sensitivities of the cone photoreceptors had a smaller impact on the scores. By investigating the effect of varying two or more parameters simultaneously, we further demonstrate that improper parameterization could lead to differences between calculated and actual contrasts of more than 650%. Our findings demonstrate that improper parameterization of tetrachromatic visual models can have very large effects on measures of dichromatism scores, potentially leading to erroneous inferences. 
We urge more complete characterization of avian retinal properties and recommend that researchers either determine whether their species of interest possess an ultraviolet or near-ultraviolet sensitive SWS1 photoreceptor, or present models for both. PMID:28076391
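The receptor-noise model underlying these sensitivity analyses (Vorobyev & Osorio 1998) can be sketched for an eye with any number of receptor classes: the chromatic contrast between two colors follows from each receptor's quantum catches for both stimuli and a per-receptor noise value. The inputs below are illustrative; real analyses derive quantum catches from reflectance spectra, the light environment, and ocular-media and oil-droplet transmission.

```python
import numpy as np
from itertools import combinations

def delta_s(q_a, q_b, e):
    """Receptor-noise chromatic contrast between colors A and B for an
    n-receptor eye; q_a, q_b are per-receptor quantum catches, e the
    per-receptor noise (Weber fraction scaled by receptor density)."""
    df = np.log(np.asarray(q_a, float) / np.asarray(q_b, float))
    n = len(df)
    idx = range(n)
    num = 0.0
    for rest in combinations(idx, n - 2):
        # the two receptors NOT in 'rest' supply the (df_i - df_j) term
        i, j = (k for k in idx if k not in rest)
        num += np.prod([e[k] for k in rest]) ** 2 * (df[i] - df[j]) ** 2
    den = sum(np.prod([e[k] for k in c]) ** 2
              for c in combinations(idx, n - 1))
    return float(np.sqrt(num / den))
```

Re-running such a calculation while varying noise values or peak sensitivities is exactly the kind of sensitivity sweep the study performs.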
NASA Astrophysics Data System (ADS)
Arnaud, Patrick; Cantet, Philippe; Odry, Jean
2017-11-01
Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model in an attempt to estimate the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. 
Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with the use of a statistical law with two parameters (here generalised extreme value Type I distribution) and clearly lower than those associated with the use of a three-parameter law (here generalised extreme value Type II distribution). For extreme flood quantiles, the uncertainties are mostly due to the rainfall generator because of the progressive saturation of the hydrological model.
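The bootstrap step used above for the hydrological parameter, whose theoretical distribution is unknown, can be sketched generically: resample the calibration observations with replacement, re-estimate the parameter each time, and read confidence bounds off the percentiles. The synthetic "flows" and the mean estimator are toy stand-ins for the SHYREG calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(data, estimator, n_boot=2000, alpha=0.10):
    """Nonparametric bootstrap percentile interval for a calibrated
    quantity when its theoretical distribution is unknown."""
    stats = np.array([estimator(rng.choice(data, size=data.size))
                      for _ in range(n_boot)])
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

flows = rng.normal(loc=5.0, scale=1.0, size=200)   # synthetic observations
lo, hi = bootstrap_ci(flows, np.mean)              # 90% interval
```

Propagating such intervals through the rainfall generator and the hydrological model then yields the flow-quantile uncertainties discussed above.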
Theoretical Advances in Sequential Data Assimilation for the Atmosphere and Oceans
NASA Astrophysics Data System (ADS)
Ghil, M.
2007-05-01
We concentrate here on two aspects of advanced Kalman-filter-related methods: (i) the stability of the forecast-assimilation cycle, and (ii) parameter estimation for the coupled ocean-atmosphere system. The nonlinear stability of a prediction-assimilation system guarantees the uniqueness of the sequentially estimated solutions in the presence of partial and inaccurate observations, distributed in space and time; this stability is shown to be a necessary condition for the convergence of the state estimates to the true evolution of the turbulent flow. The stability properties of the governing nonlinear equations and of several data assimilation systems are studied by computing the spectrum of the associated Lyapunov exponents. These ideas are applied to a simple and an intermediate model of atmospheric variability, and we show that the degree of stabilization depends on the type and distribution of the observations, as well as on the data assimilation method. These results represent joint work with A. Carrassi, A. Trevisan and F. Uboldi. Much is known by now about the main physical mechanisms that give rise to and modulate the El Niño/Southern Oscillation (ENSO), but the values of several parameters that enter these mechanisms are an important unknown. We apply Extended Kalman Filtering (EKF) for both model state and parameter estimation in an intermediate, nonlinear, coupled ocean-atmosphere model of ENSO. Model behavior is very sensitive to two key parameters: (a) "mu", the ocean-atmosphere coupling coefficient between the sea-surface temperature (SST) and wind stress anomalies; and (b) "delta-s", the surface-layer coefficient. Previous work has shown that "delta-s" determines the period of the model's self-sustained oscillation, while "mu" measures the degree of nonlinearity. Depending on the values of these parameters, the spatio-temporal pattern of model solutions is either that of a delayed oscillator or of a westward propagating mode.
Assimilation of SST data from the NCEP-NCAR Reanalysis-2 shows that the parameters can vary on fairly short time scales and switch between values that approximate the two distinct modes of ENSO behavior. Rapid adjustments of these parameters occur, in particular, during strong ENSO events. Ways to apply EKF parameter estimation efficiently to state-of-the-art coupled ocean-atmosphere GCMs will be discussed. These results arise from joint work with D. Kondrashov and C.-j. Sun.
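The joint state-and-parameter EKF described above works by appending the unknown parameter to the state vector and filtering both through the model Jacobian. A minimal sketch on a hypothetical scalar stand-in for the coupled model, x[k+1] = mu*x[k] + 0.05 + w with noisy observations of x only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate truth and observations for an unknown coupling parameter mu.
mu_true, obs = 0.95, []
x = 1.0
for _ in range(400):
    x = mu_true * x + 0.05 + rng.normal(0, 0.01)
    obs.append(x + rng.normal(0, 0.02))

z = np.array([1.0, 0.5])                 # augmented state [x, mu], poor mu guess
P = np.eye(2)
Q = np.diag([1e-4, 1e-6]); R = 4e-4
H = np.array([[1.0, 0.0]])               # only x is observed
for y in obs:
    F = np.array([[z[1], z[0]], [0.0, 1.0]])   # Jacobian of the bilinear model
    z = np.array([z[1] * z[0] + 0.05, z[1]])   # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                        # innovation variance
    K = (P @ H.T) / S                          # Kalman gain, shape (2, 1)
    z = z + (K * (y - z[0])).ravel()           # update both x and mu
    P = (np.eye(2) - K @ H) @ P
```

The cross-covariance between x and mu, built up through the Jacobian's off-diagonal term, is what lets observations of the state alone correct the parameter.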
Probabilistic Open Set Recognition
NASA Astrophysics Data System (ADS)
Jain, Lalit Prithviraj
Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds that existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize that the cause is weak ad hoc assumptions combined with the closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. To this end, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms.
Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary support vector machines. Building from the success of statistical EVT based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between positive training sample and closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test time. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases. We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
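The EVT calibration machinery common to these methods can be sketched as fitting a Weibull distribution to the tail of distance-to-boundary scores (here by the standard maximum-likelihood fixed-point iteration) and reading off how far a new score lies into the tail. This only illustrates the calibration step; the actual PI-SVM and W-SVM formulations differ in detail.

```python
import numpy as np

def weibull_mle(x, iters=100):
    """Maximum-likelihood Weibull fit (shape k, scale lam) via the
    standard fixed-point iteration; x are positive tail scores."""
    x = np.asarray(x, dtype=float)
    lx = np.log(x)
    k = 1.0
    for _ in range(iters):
        xk = x ** k
        k = 1.0 / (np.dot(xk, lx) / xk.sum() - lx.mean())
    lam = np.mean(x ** k) ** (1.0 / k)
    return k, lam

def p_beyond(d, k, lam):
    """Weibull CDF at d: values near 1 mean the score is far into the
    tail of the known data, suggesting an unknown class."""
    return 1.0 - np.exp(-(d / lam) ** k)

rng = np.random.default_rng(0)
tail = rng.weibull(2.0, 5000) * 3.0        # synthetic tail scores
k_hat, lam_hat = weibull_mle(tail)
```

Thresholding p_beyond then implements the abating-probability rejection of samples drifting into open space.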
NASA Astrophysics Data System (ADS)
Møller, Klaes; Suliga, Anna M.; Tamborra, Irene; Denton, Peter B.
2018-05-01
The detection of the diffuse supernova neutrino background (DSNB) will provide valuable constraints on the properties of the core-collapse supernova population. We estimate the DSNB event rate in the next-generation neutrino detectors Hyper-Kamiokande enriched with gadolinium, JUNO, and DUNE. The determination of the supernova unknowns through the DSNB will be driven largely by Hyper-Kamiokande, given its higher expected event rate, and complemented by DUNE, which will help reduce the parameter uncertainties. Meanwhile, JUNO will be sensitive to the DSNB signal over the largest energy range. A joint statistical analysis of the expected rates in 20 years of data taking from the above detectors suggests that we will be sensitive to the local supernova rate at most at the 20-33% level. A non-zero fraction of supernovae forming black holes will be confirmed at the 90% CL if the true value of that fraction is ≳20%. On the other hand, the DSNB events show extremely poor statistical sensitivity to the nuclear equation of state and to the mass accretion rate of the progenitors forming black holes.
NASA Astrophysics Data System (ADS)
Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby
2013-12-01
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
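The MCMC-Bayesian inversion approach described above can be sketched with a Metropolis-Hastings sampler: infer a single hypothetical "hydrologic" parameter of a toy exponential recession model from noisy runoff-like observations. CLM4 itself is vastly more complex; this only illustrates the sampling machinery and how the posterior narrows as data constrain the parameter.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic truth and noisy observations of a toy recession curve.
theta_true, sigma = 2.5, 0.1
t = np.linspace(0.0, 1.0, 200)
model = lambda th: np.exp(-th * t)         # stand-in forward model
obs = model(theta_true) + rng.normal(0, sigma, t.size)

def log_post(th):
    if not 0.0 < th < 10.0:                # flat prior on (0, 10)
        return -np.inf
    return -0.5 * np.sum((obs - model(th)) ** 2) / sigma**2

chain, th = [], 1.0
for _ in range(5000):
    prop = th + rng.normal(0, 0.2)         # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(th):
        th = prop
    chain.append(th)
posterior = np.array(chain[1000:])         # discard burn-in
```

The spread of `posterior` plays the role of the predictive interval discussed above: more or better observations shrink it.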
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
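The key computational trick of the mixed linear/non-linear formulation can be sketched simply: when the data depend linearly on amplitudes m (the analogue of slip) but non-linearly on a parameter theta (the analogue of locking depth or geometry), the linear part is solved analytically by least squares for each candidate theta, so only theta needs to be searched or sampled. The forward model and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: d = G(theta) @ m + noise.
x = np.linspace(0.0, 10.0, 60)
design = lambda th: np.column_stack([np.exp(-x / th), np.ones_like(x)])
theta_true, m_true = 3.0, np.array([2.0, 0.5])
d = design(theta_true) @ m_true + rng.normal(0, 0.01, x.size)

def misfit(th):
    G = design(th)
    m, *_ = np.linalg.lstsq(G, d, rcond=None)  # analytic linear solve
    return np.sum((d - G @ m) ** 2)

# Only the non-linear parameter is searched (or, in the paper, sampled).
thetas = np.linspace(1.0, 6.0, 101)
theta_best = min(thetas, key=misfit)
```

In the full method this grid search is replaced by Monte Carlo sampling over theta (and the data-weight hyperparameters), with the linear solve embedded in each sample.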
Handling the unknown soil hydraulic parameters in data assimilation for unsaturated flow problems
NASA Astrophysics Data System (ADS)
Lange, Natascha; Erdal, Daniel; Neuweiler, Insa
2017-04-01
Model predictions of flow in the unsaturated zone require the soil hydraulic parameters. However, these parameters cannot be determined easily in applications, in particular if observations are indirect and cover only a small range of possible states. Correlation of parameters, or their correlation over the range of states that are observed, is a problem, as different parameter combinations may reproduce approximately the same measured water content. In field campaigns this problem can be mitigated by adding more measurement devices. Often, observation networks are designed to feed models for long-term prediction purposes (e.g., for weather forecasting). A popular way of making predictions with such observations is to use data assimilation methods, like the ensemble Kalman filter (Evensen, 1994). These methods can be used for parameter estimation if the unknown parameters are included in the state vector and updated along with the model states. Given the difficulties related to estimating soil hydraulic parameters in general, it is questionable, though, whether these methods can really be used for parameter estimation under natural conditions. Therefore, we investigate the ability of the ensemble Kalman filter to estimate the soil hydraulic parameters. We use synthetic identical-twin experiments to guarantee full knowledge of the model and the true parameters. We use the van Genuchten model to describe the soil water retention and relative permeability functions. This model is unfortunately prone to the above-mentioned pseudo-correlations of parameters. Therefore, we also test the simpler Russo-Gardner model, which is less affected by that problem, in our experiments. The total number of unknown parameters is varied by considering different layers of soil. In addition, we study the influence of the parameter updates on the water content predictions. We test different iterative filter approaches and compare different observation strategies for parameter identification.
Considering heterogeneous soils, we discuss the representativeness of different observation types to be used for the assimilation. G. Evensen. Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. Journal of Geophysical Research: Oceans, 99(C5):10143-10162, 1994
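The augmented-state ensemble Kalman filter described above can be sketched on a hypothetical scalar "soil" parameter a governing a toy retention-like observation y = exp(-a*t): each ensemble member carries a candidate parameter value, and assimilating observations pulls the ensemble toward the truth. All model details are illustrative stand-ins for the van Genuchten setting.

```python
import numpy as np

rng = np.random.default_rng(7)

a_true, obs_err = 0.8, 0.02
ens = rng.normal(1.5, 0.5, 100)                 # prior parameter ensemble
for t in np.linspace(0.5, 5.0, 10):             # ten assimilation times
    y_obs = np.exp(-a_true * t) + rng.normal(0, obs_err)
    h = np.exp(-ens * t)                        # predicted observations
    gain = np.cov(ens, h)[0, 1] / (h.var(ddof=1) + obs_err**2)
    # stochastic EnKF: perturb the observation for each member
    ens = ens + gain * (y_obs + rng.normal(0, obs_err, ens.size) - h)
```

The parameter-observation covariance estimated from the ensemble replaces the explicit Jacobian of an EKF; pseudo-correlated parameters (as in the van Genuchten model) show up as ensembles that collapse onto a curve rather than a point.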
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor
2013-04-01
Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation results depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes.
The simulation results show that for the artificially generated data with normally distributed errors, the null hypothesis of normally distributed residuals can be accepted, but for the tests on real data the residual value distribution is often mixed or unknown. The residual values are found to depend mainly on two input parameters (the standard deviation and the maximum point-plane distance, both defining distance thresholds for assigning points to a segment), and the curvature of the surface affected the distributions most. The results of the analysis helped to decide which parameter set is best for further modelling and provides the highest accuracy. With these results in mind, quasi-automatic modelling of planar (for example plateau-like) features became more reliable and often more accurate. These studies were carried out partly in the framework of the TMIS.ascrea project (Nr. 2001978) financed by the Austrian Research Promotion Agency (FFG); the contribution of ZsK was partly funded by Campus Hungary Internship TÁMOP-424B1.
pK(A) in proteins solving the Poisson-Boltzmann equation with finite elements.
Sakalli, Ilkay; Knapp, Ernst-Walter
2015-11-05
Knowledge of pK(A) values is an eminent factor in understanding the function of proteins in living systems. We present a novel approach demonstrating that the finite element (FE) method of solving the linearized Poisson-Boltzmann equation (lPBE) can successfully be used to compute pK(A) values in proteins with high accuracy, as a possible replacement for the finite difference (FD) method. For this purpose, we implemented the software molecular Finite Element Solver (mFES) in the framework of the Karlsberg+ program to compute pK(A) values. This work focuses on a comparison between pK(A) computations obtained with the well-established FD method and with the newly developed FE method mFES, solving the lPBE using protein crystal structures without conformational changes. Accurate and coarse model systems are set up with mFES using a similar number of unknowns compared with the FD method. Our FE method delivers results for computations of pK(A) values and interaction energies of titratable groups that are comparable in accuracy. We introduce different thermodynamic cycles to evaluate pK(A) values, and we show for the FE method how different parameters influence the accuracy of the computed pK(A) values. © 2015 Wiley Periodicals, Inc.
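The thermodynamic-cycle step common to such workflows can be sketched in one line: the pK(A) of a titratable group in the protein equals the model-compound pK(A) shifted by the difference in deprotonation free energies (obtained, e.g., from lPBE electrostatics). This is an illustrative sketch, not the Karlsberg+ workflow itself.

```python
import math

def pka_protein(pka_model, ddG_kcal_per_mol, T=300.0):
    """Shift a model-compound pKa by the protein-vs-model difference in
    deprotonation free energy: pKa += ddG / (ln(10) * R * T)."""
    R = 1.98720425e-3                   # gas constant, kcal/(mol*K)
    return pka_model + ddG_kcal_per_mol / (math.log(10) * R * T)

# A penalty of about +1.37 kcal/mol at 300 K raises the pKa by one unit.
```

The accuracy comparison in the abstract amounts to asking how sensitive this ddG, and hence the computed pK(A), is to the FE versus FD electrostatics.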
Mass properties measurement system dynamics
NASA Technical Reports Server (NTRS)
Doty, Keith L.
1993-01-01
The MPMS mechanism possesses two revolute degrees of freedom and allows the user to measure the mass, center of gravity, and inertia tensor of an unknown mass. The dynamics of the Mass Properties Measurement System (MPMS) are derived from the Lagrangian approach to illustrate the dependency of the motion on the unknown parameters.
Weak-value amplification and optimal parameter estimation in the presence of correlated noise
NASA Astrophysics Data System (ADS)
Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.
2017-11-01
We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. 
We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.
Wang, Xinghu; Hong, Yiguang; Yi, Peng; Ji, Haibo; Kang, Yu
2017-05-24
In this paper, a distributed optimization problem is studied for continuous-time multiagent systems with unknown-frequency disturbances. A distributed gradient-based control is proposed for the agents to achieve the optimal consensus, in the semi-global sense, while estimating the unknown frequencies and rejecting the bounded disturbances. Based on convex optimization analysis and an adaptive internal model approach, the exact optimal solution can be obtained for the multiagent system disturbed by exogenous disturbances with uncertain parameters.
Stochastic Inversion of 2D Magnetotelluric Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong
2010-07-01
The algorithm is developed to invert 2D magnetotelluric (MT) data based on a sharp-boundary parametrization within a Bayesian framework. The algorithm treats the locations of the interfaces and the resistivities of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward-simulate frequency-domain MT responses of the 2D conductivity structure. The unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, supercomputer, multi-platform, workstation. Software requirements: C and Fortran. Operating systems: Linux/Unix or Windows.
NASA Astrophysics Data System (ADS)
Fukahata, Y.; Wright, T. J.
2006-12-01
We developed a method of geodetic data inversion for slip distribution on a fault with an unknown dip angle. When fault geometry is unknown, the problem of geodetic data inversion is non-linear. A common strategy for obtaining slip distribution is to first determine the fault geometry by minimizing the squared misfit under the assumption of uniform slip on a rectangular fault, and then apply the usual linear inversion technique to estimate a slip distribution on the determined fault. It is not guaranteed, however, that the fault determined under the assumption of uniform slip gives the best fault geometry for a spatially variable slip distribution. In addition, in obtaining a uniform-slip fault model, we have to simultaneously determine the values of nine mutually dependent parameters, which is a highly non-linear, complicated process. Although the inverse problem is non-linear for cases with unknown fault geometries, the non-linearity of the problem is actually weak when we can assume the fault surface to be flat. In particular, when a clear fault trace is observed on the Earth's surface after an earthquake, we can precisely estimate the strike and the location of the fault. In this case only the dip angle has large ambiguity. In geodetic data inversion we usually need to introduce smoothness constraints in order to compromise between the reciprocal requirements of model resolution and estimation errors in a natural way. Strictly speaking, the inverse problem with smoothness constraints is also non-linear, even if the fault geometry is known. This non-linearity has been resolved by introducing Akaike's Bayesian Information Criterion (ABIC), with which the optimal relative weight of observed data to smoothness constraints is objectively determined. In this study, using ABIC also to determine the optimal dip angle, we resolve the non-linearity of the inverse problem.
We applied the method to the InSAR data of the 1995 Dinar, Turkey earthquake and obtained a much shallower dip angle than before.
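Because the dip angle is the only strongly nonlinear unknown, the strategy above reduces to scanning it on a grid while solving a linear slip inversion at each trial value. The sketch below illustrates this with an invented toy design matrix and a plain least-squares misfit standing in for the full ABIC computation.

```python
import numpy as np

def design_matrix(dip, n_obs=20, n_slip=3):
    # Toy "Green's functions": surface response of n_slip slip patches for a
    # trial dip angle (entirely invented for illustration).
    x = np.linspace(0.0, 1.0, n_obs)[:, None]
    k = np.arange(1, n_slip + 1)[None, :]
    return np.sin(np.deg2rad(dip) * k * x * 5.0)

rng = np.random.default_rng(1)
true_slip = np.array([1.0, 0.5, 0.2])
d_obs = design_matrix(40.0) @ true_slip + 0.01 * rng.standard_normal(20)

def misfit(dip):
    # Linear slip inversion for a fixed trial dip, then squared data misfit.
    G = design_matrix(dip)
    m = np.linalg.lstsq(G, d_obs, rcond=None)[0]
    return float(np.sum((d_obs - G @ m) ** 2))

best = min(range(20, 61), key=misfit)   # grid scan over the one nonlinear unknown
print(best)                             # recovers a dip near the true 40 degrees
```

In the paper the selection criterion is ABIC rather than raw misfit, which additionally fixes the smoothness weight; the grid-scan structure is the same.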
Three-dimensional cinematography with control object of unknown shape.
Dapena, J; Harman, E A; Miller, J A
1982-01-01
A technique for reconstruction of three-dimensional (3D) motion which involves a simple filming procedure but allows the deduction of coordinates in large object volumes was developed. Internal camera parameters are calculated from measurements of the film images of two calibrated crosses, while external camera parameters are calculated from the film images of points in a control object of unknown shape but with at least one known length. The control object, which encloses the volume in which the activity is to take place, is formed by a series of poles placed at unknown locations, each carrying two targets. From the internal and external camera parameters, and from the locations of the images of a point in the films of the two cameras, the 3D coordinates of the point can be calculated. Root mean square errors of the three coordinates of points in a large object volume (5 m x 5 m x 1.5 m) were 15 mm, 13 mm, 13 mm and 6 mm, and relative errors in lengths averaged 0.5%, 0.7% and 0.5%, respectively.
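The final step, computing 3D coordinates of a point from its two film images once the camera parameters are known, is essentially linear triangulation. Below is a minimal DLT-style sketch with two toy projection matrices (assumed geometry, not the paper's camera setup).

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    # Linear (DLT) triangulation of one 3D point from two camera views:
    # each image coordinate contributes one homogeneous linear equation.
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Two toy 3x4 projection matrices (assumed geometry, for illustration only).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

X_true = np.array([0.3, -0.2, 2.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_est, 6))      # recovers [0.3, -0.2, 2.0]
```

With noisy image measurements the same least-squares machinery yields the residual errors quoted above rather than an exact recovery.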
Mansoor, S E; McHaourab, H S; Farrens, D L
1999-12-07
We report an investigation of how much protein structural information could be obtained using a site-directed fluorescence labeling (SDFL) strategy. In our experiments, we used 21 consecutive single-cysteine substitution mutants in T4 lysozyme (residues T115-K135), located in a helix-turn-helix motif. The mutants were labeled with the fluorescent probe monobromobimane and subjected to an array of fluorescence measurements. Thermal stability measurements show that introduction of the label is substantially perturbing only when it is located at buried residue sites. At buried sites (solvent surface accessibility of <40 Å²), the destabilizations are between 3 and 5.5 kcal/mol, whereas at more exposed sites, ΔΔG values of ≤1.5 kcal/mol are obtained. Of all the fluorescence parameters that were explored (excitation λmax, emission λmax, fluorescence lifetime, quantum yield, and steady-state anisotropy), the emission λmax and the steady-state anisotropy values most accurately reflect the solvent surface accessibility at each site as calculated from the crystal structure of cysteine-less T4 lysozyme. The parameters we identify allow the classification of each site as buried, partially buried, or exposed. We find that the variations in these parameters as a function of residue number reflect the sequence-specific secondary structure, the determination of which is a key step for modeling a protein of unknown structure.
Advective transport in heterogeneous aquifers: Are proxy models predictive?
NASA Astrophysics Data System (ADS)
Fiori, A.; Zarlenga, A.; Gotovac, H.; Jankovic, I.; Volpi, E.; Cvetkovic, V.; Dagan, G.
2015-12-01
We examine the prediction capability of two approximate models (Multi-Rate Mass Transfer (MRMT) and Continuous Time Random Walk (CTRW)) of non-Fickian transport, by comparison with accurate 2-D and 3-D numerical simulations. Both nonlocal-in-time approaches circumvent the need to solve the flow and transport equations by using proxy models of advection, providing the breakthrough curves (BTC) at control planes at any x, depending on a vector of five unknown parameters. Although underlain by different mechanisms, the two models have an identical structure in the Laplace transform domain and have the Markovian property of independent transitions. We show that the numerical BTCs also enjoy the Markovian property. Following the procedure recommended in the literature, from a practitioner's perspective, we first calibrate the parameter values by a best fit with the numerical BTC at a control plane at x1, close to the injection plane, and subsequently use them for prediction at further control planes for a few values of σY² ≤ 8. Owing to their similar structure and Markovian property, the two methods perform equally well in matching the numerical BTC. The identified parameters are generally not unique, making their identification somewhat arbitrary. The inverse Gaussian model and the recently developed Multi-Indicator Model (MIM), which does not require any fitting as it relates the BTC to the permeability structure, are also discussed. The application of the proxy models for prediction requires carrying out transport field tests of large plumes for a long duration.
Bayesian aerosol retrieval algorithm for MODIS AOD retrieval over land
NASA Astrophysics Data System (ADS)
Lipponen, Antti; Mielonen, Tero; Pitkänen, Mikko R. A.; Levy, Robert C.; Sawyer, Virginia R.; Romakkaniemi, Sami; Kolehmainen, Ville; Arola, Antti
2018-03-01
We have developed a Bayesian aerosol retrieval (BAR) algorithm for the retrieval of aerosol optical depth (AOD) over land from the Moderate Resolution Imaging Spectroradiometer (MODIS). In the BAR algorithm, we simultaneously retrieve all dark land pixels in a granule, utilize spatial correlation models for the unknown aerosol parameters, use a statistical prior model for the surface reflectance, and take into account the uncertainties due to fixed aerosol models. The retrieved parameters are total AOD at 0.55 µm, fine-mode fraction (FMF), and surface reflectances at four wavelengths (0.47, 0.55, 0.64, and 2.1 µm). The accuracy of the new algorithm is evaluated by comparing the AOD retrievals to Aerosol Robotic Network (AERONET) AOD. The results show that the BAR significantly improves the accuracy of AOD retrievals over the operational Dark Target (DT) algorithm. A reduction of about 29% in the AOD root mean square error and a decrease of about 80% in the median AOD bias were found globally when the BAR was used instead of the DT algorithm. Furthermore, the fraction of AOD retrievals inside the ±(0.05 + 15%) expected error envelope increased from 55% to 76%. In addition to retrieving the values of AOD, FMF, and surface reflectance, the BAR also gives pixel-level posterior uncertainty estimates for the retrieved parameters. The BAR algorithm always results in physical, non-negative AOD values, and the average computation time for a single granule was less than a minute on a modern personal computer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Ahmad, S; Alsbou, N
Purpose: A motion algorithm was developed to extract the actual length, CT numbers, and motion amplitude of a mobile target imaged with cone-beam CT (CBCT), retrospectively to image reconstruction. Methods: The motion model considered a mobile target moving sinusoidally and employed three measurable parameters obtained from CBCT images — the apparent length, CT number level, and CT number gradient of the mobile target — to extract the actual length and CT number of the stationary target and the motion amplitude. The algorithm was verified experimentally with a mobile phantom setup comprising three targets of different sizes, manufactured from homogeneous tissue-equivalent gel and embedded in a thorax phantom. The phantom moved sinusoidally in one direction using eight amplitudes (0-20 mm) and a frequency of 15 cycles per minute. The model required imaging parameters such as slice thickness and imaging time. Results: The motion algorithm extracted three unknown parameters (target length, CT number level, and motion amplitude) for a mobile target retrospectively to CBCT image reconstruction, relating them to the measurable apparent length, CT number level, and gradient of well-defined mobile targets in CBCT images. The motion model agreed with the measured apparent lengths, which depended on the actual target length and the motion amplitude. The cumulative CT number of a mobile target depended on the CT number level of the stationary target and the motion amplitude. The gradient of the CT number distribution of a mobile target depended on the stationary CT number level, the actual target length along the direction of motion, and the motion amplitude. Motion frequency and phase did not affect the elongation and CT number distributions of mobile targets when the imaging time included several motion cycles.
Conclusion: The motion algorithm developed in this study has potential applications in diagnostic CT imaging and radiotherapy, where it can extract the actual length, size, and CT numbers of targets distorted by motion in CBCT imaging. The model also provides further information about target motion.
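The elongation effect the model exploits can be reproduced with a toy simulation: a 1D target of length L moving sinusoidally with amplitude A, time-averaged over many motion cycles, spreads to roughly L + 2A while its plateau value drops. This is an illustrative stand-in, not the authors' closed-form model.

```python
import numpy as np

def time_averaged_profile(L=20.0, A=10.0, n_t=2000):
    # Occupancy of a 1D target of length L whose center moves as A*sin(phase),
    # averaged over one full motion cycle on a grid with 0.1 mm spacing.
    grid = np.linspace(-40.0, 40.0, 801)
    phases = np.linspace(0.0, 2.0 * np.pi, n_t, endpoint=False)
    img = np.zeros_like(grid)
    for ph in phases:
        c = A * np.sin(ph)
        img += ((grid > c - L / 2) & (grid < c + L / 2)).astype(float)
    return grid, img / n_t

grid, prof = time_averaged_profile()
apparent = (prof > 0.01).sum() * 0.1    # extent above a 1% occupancy threshold
print(round(apparent, 1))               # near L + 2A = 40
```

Inverting measured apparent length, plateau level, and edge gradient for the three unknowns (actual length, stationary CT number, amplitude) is the step the paper's algorithm performs analytically.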
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Conte, Joel P.
2015-03-01
This paper describes a novel framework that combines advanced mechanics-based nonlinear (hysteretic) finite element (FE) models and stochastic filtering techniques to estimate unknown time-invariant parameters of nonlinear inelastic material models used in the FE model. Using input-output data recorded during earthquake events, the proposed framework updates the nonlinear FE model of the structure. The updated FE model can be directly used for damage identification and further used for damage prognosis. To update the unknown time-invariant parameters of the FE model, two alternative stochastic filtering methods are used: the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). A three-dimensional, 5-story, 2-by-1 bay reinforced concrete (RC) frame is used to verify the proposed framework. The RC frame is modeled using fiber-section displacement-based beam-column elements with distributed plasticity and is subjected to the ground motion recorded at the Sylmar station during the 1994 Northridge earthquake. The results indicate that the proposed framework accurately estimates the unknown material parameters of the nonlinear FE model. The UKF outperforms the EKF when the relative root-mean-square errors of the recorded responses are compared. In addition, the results suggest that the convergence of the estimates of the modeling parameters is smoother and faster when the UKF is utilized.
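The filtering idea — augmenting the state with the unknown time-invariant parameters — can be shown with a minimal joint EKF on a scalar system (a generic sketch, not the paper's fiber-section FE model): estimate the unknown decay rate k in x' = -kx from noisy measurements of x.

```python
import numpy as np

def ekf_estimate(y, dt=0.01, r=0.05**2):
    # Joint state/parameter EKF: state z = [x, k], model x' = -k*x, k constant.
    z = np.array([y[0], 0.5])                  # initial guess: k = 0.5
    P = np.diag([0.1, 1.0])
    Q = np.diag([1e-8, 1e-6])                  # small process noise keeps k adaptable
    H = np.array([[1.0, 0.0]])                 # only x is measured
    for yk in y[1:]:
        x, k = z
        z = np.array([x - k * x * dt, k])      # predict (Euler step)
        F = np.array([[1.0 - k * dt, -x * dt], [0.0, 1.0]])   # Jacobian
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                    # innovation covariance
        K = P @ H.T / S                        # Kalman gain
        z = z + (K * (yk - z[0])).ravel()      # measurement update
        P = (np.eye(2) - K @ H) @ P
    return z[1]

rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 0.01)
y = np.exp(-0.8 * t) + 0.05 * rng.standard_normal(t.size)   # true k = 0.8
print(round(ekf_estimate(y), 2))                            # estimate near 0.8
```

The UKF variant replaces the Jacobian linearization with sigma-point propagation, which is what the paper finds gives smoother, faster parameter convergence.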
Zaikin, Alexey; Míguez, Joaquín
2017-01-01
We compare three state-of-the-art Bayesian inference methods for the estimation of the unknown parameters in a stochastic model of a genetic network. In particular, we introduce a stochastic version of the paradigmatic synthetic multicellular clock model proposed by Ullner et al. (2007). By introducing dynamical noise in the model and assuming that the partial observations of the system are contaminated by additive noise, we enable a principled mechanism to represent experimental uncertainties in the synthesis of the multicellular system and pave the way for the design of probabilistic methods for the estimation of any unknowns in the model. Within this setup, we tackle the Bayesian estimation of a subset of the model parameters. Specifically, we compare three Monte Carlo based numerical methods for the approximation of the posterior probability density function of the unknown parameters given a set of partial and noisy observations of the system. The schemes we assess are the particle Metropolis-Hastings (PMH) algorithm, the nonlinear population Monte Carlo (NPMC) method and the approximate Bayesian computation sequential Monte Carlo (ABC-SMC) scheme. We present an extensive numerical simulation study, which shows that, while the three techniques can effectively solve the problem, there are significant differences in both estimation accuracy and computational efficiency. PMID:28797087
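The simplest relative of the ABC-SMC scheme compared above is plain ABC rejection, sketched here on a toy model y ~ Normal(θ, 1) with the sample mean as summary statistic (all values illustrative, not the clock-model setup):

```python
import numpy as np

rng = np.random.default_rng(42)
y_obs = rng.normal(2.0, 1.0, size=100)        # data from true theta = 2.0

def simulate(theta, n=100):
    # Toy simulator standing in for the stochastic network model.
    return rng.normal(theta, 1.0, size=n)

accepted = []
while len(accepted) < 500:
    theta = rng.uniform(-5.0, 5.0)                            # prior draw
    if abs(simulate(theta).mean() - y_obs.mean()) < 0.1:      # tolerance eps = 0.1
        accepted.append(theta)
print(round(float(np.mean(accepted)), 1))     # posterior mean near 2.0
```

ABC-SMC improves on this by shrinking the tolerance over a sequence of weighted populations instead of rejecting from the prior directly, which is where the efficiency differences measured in the paper arise.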
Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.
Rideout, Brendan P; Dosso, Stan E; Hannay, David E
2013-09-01
This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially separated receivers, which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum-entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival-time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
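The core of such a localization is iteratively linearized least squares on the arrival-time equations. The sketch below solves a simplified 2-D version with known receiver positions and sound speed; the full method additionally treats receiver and environmental parameters as prior-constrained unknowns and propagates their uncertainties.

```python
import numpy as np

c = 1500.0                                          # sound speed, m/s (assumed known here)
rx = np.array([[0.0, 0.0], [600.0, 0.0], [0.0, 600.0], [600.0, 600.0]])
src_true, t0_true = np.array([220.0, 340.0]), 0.2
t_obs = t0_true + np.linalg.norm(rx - src_true, axis=1) / c   # noiseless arrivals

m = np.array([300.0, 300.0, 0.0])                   # initial guess [x, y, t0]
for _ in range(10):                                 # Gauss-Newton iterations
    d = np.linalg.norm(rx - m[:2], axis=1)
    # Jacobian of t_pred = t0 + |rx - s|/c w.r.t. [x, y, t0]
    J = np.column_stack([(m[0] - rx[:, 0]) / (c * d),
                         (m[1] - rx[:, 1]) / (c * d),
                         np.ones(rx.shape[0])])
    m = m + np.linalg.lstsq(J, t_obs - (m[2] + d / c), rcond=None)[0]
print(np.round(m, 3))                               # converges to (220, 340, 0.2)
```

With noisy data, the posterior covariance follows from the same Jacobian, (JᵀC⁻¹J)⁻¹ augmented by the prior terms, which is the linearized uncertainty analysis the paper performs.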
NASA Technical Reports Server (NTRS)
Duong, N.; Winn, C. B.; Johnson, G. R.
1975-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part of the inputs and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results of two previous studies: the first used numerical integration of the model equation along with a trial-and-error procedure, and the second used a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.
A new chaotic communication scheme based on adaptive synchronization.
Xiang-Jun, Wu
2006-12-01
A new chaotic communication scheme using the adaptive synchronization technique of two unified chaotic systems is proposed. Unlike existing secure communication methods, the transmitted signal is modulated into the parameter of the chaotic systems. The adaptive synchronization technique is used to synchronize two identical chaotic systems embedded in the transmitter and the receiver. It is assumed that the parameter of the receiver system is unknown. Based on Lyapunov stability theory, an adaptive control law is derived to make the states of the two identical unified chaotic systems with unknown system parameters asymptotically synchronized; the parameter of the receiver system is thereby identified. The recovery of the original information signal in the receiver is then successfully achieved on the basis of the estimated parameter. It is noted that the time required for recovering the information signal and the accuracy of the recovered signal depend very sensitively on the frequency of the information signal. Numerical results have verified the effectiveness of the proposed scheme.
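A scalar stand-in shows the mechanism: the receiver applies a stabilizing control u = -ke and a Lyapunov-derived update law for the unknown parameter, so both the synchronization error and the parameter error vanish. The system below is illustrative, not the unified chaotic system of the paper.

```python
import numpy as np

# Drive: x' = -theta*x**3 + sin(t), with theta unknown to the receiver.
# Response: y' = -theta_hat*x**3 + sin(t) - k*e, where e = y - x.
# The Lyapunov function V = e**2/2 + (theta_hat - theta)**2/(2*gamma) yields
# the update law theta_hat' = gamma*e*x**3, giving V' = -k*e**2 <= 0.
theta, k, gamma, dt = 1.5, 5.0, 2.0, 1e-3
x, y, theta_hat = 0.5, -0.5, 0.0
for i in range(int(100.0 / dt)):                    # forward-Euler integration
    t = i * dt
    e = y - x
    dx = -theta * x**3 + np.sin(t)
    dy = -theta_hat * x**3 + np.sin(t) - k * e
    dth = gamma * e * x**3                          # adaptive parameter update
    x, y, theta_hat = x + dt * dx, y + dt * dy, theta_hat + dt * dth
print(round(theta_hat, 2), round(abs(y - x), 4))    # theta_hat near 1.5, error near 0
```

In the communication scheme the identified parameter carries the message, so the recovery speed is set by how fast `theta_hat` converges, consistent with the frequency sensitivity noted above.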
NASA Technical Reports Server (NTRS)
Murphy, K. A.
1988-01-01
A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.
NASA Technical Reports Server (NTRS)
Murphy, K. A.
1990-01-01
A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.
Bodgi, Larry; Canet, Aurélien; Pujo-Menjouet, Laurent; Lesne, Annick; Victor, Jean-Marc; Foray, Nicolas
2016-04-07
Cell survival is conventionally defined as the capability of irradiated cells to produce colonies. It is quantified by the clonogenic assays that consist in determining the number of colonies resulting from a known number of irradiated cells. Several mathematical models were proposed to describe the survival curves, notably from the target theory. The Linear-Quadratic (LQ) model, which is to date the most frequently used model in radiobiology and radiotherapy, dominates all the other models by its robustness and simplicity. Its usefulness is particularly important because the ratio of the values of the adjustable parameters, α and β, on which it is based, predicts the occurrence of post-irradiation tissue reactions. However, the biological interpretation of these parameters is still unknown. Throughout this review, we revisit and discuss historically, mathematically and biologically, the different models of the radiation action by providing clues for resolving the enigma of the LQ model. Copyright © 2016 Elsevier Ltd. All rights reserved.
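The LQ model itself is simple to state and fit: the surviving fraction is S(D) = exp(-αD - βD²), so -ln S is linear in α and β and ordinary least squares recovers them from clonogenic-assay data (synthetic below, with an illustrative α/β of 10 Gy).

```python
import numpy as np

alpha_true, beta_true = 0.3, 0.03          # illustrative values (alpha/beta = 10 Gy)
D = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])   # doses, Gy
rng = np.random.default_rng(3)
# Synthetic surviving fractions with small multiplicative measurement noise.
S = np.exp(-alpha_true * D - beta_true * D**2) \
    * np.exp(0.02 * rng.standard_normal(D.size))

A = np.column_stack([D, D**2])             # -ln S = alpha*D + beta*D**2
alpha, beta = np.linalg.lstsq(A, -np.log(S), rcond=None)[0]
print(round(alpha, 2), round(beta, 3))     # near 0.3 and 0.03
```

The fitted α/β ratio is the quantity the review notes as predictive of post-irradiation tissue reactions, even though the biological meaning of α and β individually remains debated.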
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality-aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
Arefin, Md Shamsul
2012-01-01
This work presents a technique for the chirality (n, m) assignment of semiconducting single-wall carbon nanotubes by solving a set of empirical equations of the tight-binding model parameters. The empirical equations of the nearest-neighbor hopping parameters, relating the term (2n − m) to the first and second optical transition energies of the semiconducting single-wall carbon nanotubes, are also proposed. They provide almost the same level of accuracy for lower- and higher-diameter nanotubes. An algorithm is presented to determine the chiral index (n, m) of any unknown semiconducting tube by solving these empirical equations using values of the radial breathing mode frequency and the first or second optical transition energy from resonant Raman spectroscopy. In this paper, the chirality of 55 semiconducting nanotubes is assigned using the first and second optical transition energies. Unlike existing methods of chirality assignment, this technique does not require graphical comparison or pattern recognition between existing experimental and theoretical Kataura plots. PMID:28348319
Distributed database kriging for adaptive sampling (D²KAS)
Roehm, Dominic; Pavel, Robert S.; Barros, Kipton; ...
2015-03-18
We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality-aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5 to 25, while retaining high accuracy for various choices of the algorithm parameters.
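The kriging predictor at the heart of the scheme can be written in a few lines: a covariance-weighted average of neighboring samples that returns both an estimate and its variance. The sketch below uses a Gaussian kernel and a zero-mean (simple-kriging) assumption; it is a stand-in for, not a copy of, the D²KAS prediction code.

```python
import numpy as np

def kriging(x_obs, y_obs, x_star, length=1.0, sigma2=1.0, nugget=1e-8):
    # Simple kriging: weights w = K^{-1} k give the BLUP at x_star, and
    # sigma2 - k.w is the prediction variance (Gaussian kernel, zero mean).
    cov = lambda a, b: sigma2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = cov(x_obs, x_obs) + nugget * np.eye(x_obs.size)
    k = cov(x_obs, np.atleast_1d(x_star))
    w = np.linalg.solve(K, k).ravel()        # kriging weights
    return w @ y_obs, sigma2 - k.ravel() @ w

x_obs = np.array([0.0, 1.0, 2.0, 3.0])       # sampled micro-scale results
mean, var = kriging(x_obs, np.sin(x_obs), 1.5)
print(round(mean, 2), var > 0.0)             # interpolates sin(1.5) with an uncertainty
```

The returned variance is what lets the adaptive scheme decide whether the prediction is trustworthy or whether a fresh micro-scale MD simulation must be launched.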
A physiologically based model for tramadol pharmacokinetics in horses.
Abbiati, Roberto Andrea; Cagnardi, Petra; Ravasio, Giuliano; Villa, Roberto; Manca, Davide
2017-09-21
This work proposes an application of a minimal-complexity physiologically based pharmacokinetic model to predict tramadol concentration vs. time profiles in horses. Tramadol is an opioid analgesic also used in veterinary treatments. Researchers and medical doctors can profit from the application of mathematical models as supporting tools to optimize the pharmacological treatment of animal species. The proposed model is based on physiology but adopts the minimal compartmental architecture necessary to describe the experimental data. The model features a system of ordinary differential equations, where most of the model parameters are either assigned or individualized for a given horse using literature data and correlations. Conversely, residual parameters, whose values are unknown, are regressed from experimental data. The model proved capable of simulating pharmacokinetic profiles with accuracy. In addition, it provides further insight into unobservable tramadol data, such as tramadol concentration in the liver or the extent of hepatic metabolism and renal excretion. Copyright © 2017 Elsevier Ltd. All rights reserved.
Perrault, Justin R; Miller, Debra L; Eads, Erica; Johnson, Chris; Merrill, Anita; Thompson, Larry J; Wyneken, Jeanette
2012-01-01
Of the seven sea turtle species, the critically endangered leatherback sea turtle (Dermochelys coriacea) exhibits the lowest and most variable nest success (i.e., hatching success and emergence success) for reasons that remain largely unknown. In an attempt to identify or rule out causes of low reproductive success in this species, we established the largest sample size (n = 60-70 for most values) of baseline blood parameters (protein electrophoresis, hematology, plasma biochemistry) for this species to date. Hematologic, protein electrophoretic and biochemical values are important tools that can provide information regarding the physiological condition of an individual and population health as a whole. It has been proposed that the health of nesting individuals affects their reproductive output. In order to establish correlations with low reproductive success in leatherback sea turtles from Florida, we compared maternal health indices to hatching success and emergence success of their nests. As expected, hatching success (median = 57.4%) and emergence success (median = 49.1%) in Floridian leatherbacks were low during the study period (2007-2008 nesting seasons), a trend common in most nesting leatherback populations (average global hatching success = ∼50%). One protein electrophoretic value (gamma globulin protein) and one hematologic value (red blood cell count) significantly correlated with hatching success and emergence success. Several maternal biochemical parameters correlated with hatching success and/or emergence success including alkaline phosphatase activity, blood urea nitrogen, calcium, calcium:phosphorus ratio, carbon dioxide, cholesterol, creatinine, and phosphorus. Our results suggest that in leatherbacks, physiological parameters correlate with hatching success and emergence success of their nests. 
We conclude that long-term and comparative studies are needed to determine if certain individuals produce nests with lower hatching success and emergence success than others, and if those individuals with evidence of chronic suboptimal health have lower reproductive success.
Perrault, Justin R.; Miller, Debra L.; Eads, Erica; Johnson, Chris; Merrill, Anita; Thompson, Larry J.; Wyneken, Jeanette
2012-01-01
Of the seven sea turtle species, the critically endangered leatherback sea turtle (Dermochelys coriacea) exhibits the lowest and most variable nest success (i.e., hatching success and emergence success) for reasons that remain largely unknown. In an attempt to identify or rule out causes of low reproductive success in this species, we established the largest sample size (n = 60–70 for most values) of baseline blood parameters (protein electrophoresis, hematology, plasma biochemistry) for this species to date. Hematologic, protein electrophoretic and biochemical values are important tools that can provide information regarding the physiological condition of an individual and population health as a whole. It has been proposed that the health of nesting individuals affects their reproductive output. In order to establish correlations with low reproductive success in leatherback sea turtles from Florida, we compared maternal health indices to hatching success and emergence success of their nests. As expected, hatching success (median = 57.4%) and emergence success (median = 49.1%) in Floridian leatherbacks were low during the study period (2007–2008 nesting seasons), a trend common in most nesting leatherback populations (average global hatching success = ∼50%). One protein electrophoretic value (gamma globulin protein) and one hematologic value (red blood cell count) significantly correlated with hatching success and emergence success. Several maternal biochemical parameters correlated with hatching success and/or emergence success including alkaline phosphatase activity, blood urea nitrogen, calcium, calcium∶phosphorus ratio, carbon dioxide, cholesterol, creatinine, and phosphorus. Our results suggest that in leatherbacks, physiological parameters correlate with hatching success and emergence success of their nests. 
We conclude that long-term and comparative studies are needed to determine if certain individuals produce nests with lower hatching success and emergence success than others, and if those individuals with evidence of chronic suboptimal health have lower reproductive success. PMID:22359635
Kimura, Yuki; Aoki, Takahiro; Chiba, Akiko; Nambo, Yasuo
2017-01-01
Dystocia is often lethal for neonatal foals; however, its clinicopathological features remain largely unknown. We investigated the effect of dystocia on the foal blood profile. Venous blood samples were collected from 35 foals (5 Percheron and 30 crossbreds between Percheron, Belgian, and Breton heavy draft horses) at 0 hr, 1 hr, 12 hr and 1 day after birth. Dystocia was defined as prolonged labor >30 min with strong fetal traction with or without fetal displacement. The dystocia group (n=13) showed lower mean values for pH (P<0.01), bicarbonate (P<0.01), total carbon dioxide (P<0.05), and base excess (P<0.01) and higher mean values for anion gap (P<0.05) and lactate (P<0.01) immediately after birth than the normal group (n=22). Remarkably high pCO2 values (>90 mmHg) were observed in three foals in the dystocia group but in none of the foals in the normal birth group immediately after birth. These results suggest that dystocia results in lactic acidosis and may be related to respiratory distress.
Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas
2018-03-06
High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because the untargeted acquisition is often followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated (crucial or noncrucial). Second, crucial parameters are optimized. The aim in this study was to reduce the number of hits without missing analytes. The parameter settings obtained from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction of workload (12.3 to 1.1% and 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (68.2 to 86.4% and 68.8 to 88.1%, respectively). This proof of concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
Computational methods for estimation of parameters in hyperbolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.; Murphy, K. A.
1983-01-01
Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined, and numerical findings for use of the resulting schemes in model 'one-dimensional seismic inversion' problems are summarized.
Polarimetric LIDAR with FRI sampling for target characterization
NASA Astrophysics Data System (ADS)
Wijerathna, Erandi; Creusere, Charles D.; Voelz, David; Castorena, Juan
2017-09-01
Polarimetric LIDAR is a significant tool for current remote sensing applications. In addition, measurement of the full waveform of the LIDAR echo provides improved ranging and target discrimination, although data storage volume can be problematic in this approach. In the work presented here, we investigated the practical issues related to the implementation of a full-waveform LIDAR system to identify polarization characteristics of multiple targets within the footprint of the illumination beam. This work was carried out on a laboratory LIDAR testbed that features a flexible arrangement of targets and the ability to change the target polarization characteristics. Targets with different retardance characteristics were illuminated with a linearly polarized laser beam, and the return pulse intensities were analyzed by rotating a linear analyzer polarizer in front of a high-speed detector. Additionally, we explored the applicability and limitations of applying a sparse sampling approach based on Finite Rate of Innovation (FRI) to compress and recover the characteristic parameters of the pulses reflected from the targets. The pulse parameter values extracted by the FRI analysis were accurate, and we successfully distinguished the polarimetric characteristics and the range of multiple targets at different depths within the same beam footprint. We also demonstrated the recovery of an unknown target retardance value from the echoes by applying a Mueller matrix system model.
Muir, W M; Howard, R D
2001-07-01
Any release of transgenic organisms into nature is a concern because ecological relationships between genetically engineered organisms and other organisms (including their wild-type conspecifics) are unknown. To address this concern, we developed a method to evaluate risk in which we input estimates of fitness parameters from a founder population into a recurrence model to predict changes in transgene frequency after a simulated transgenic release. With this method, we grouped various aspects of an organism's life cycle into six net fitness components: juvenile viability, adult viability, age at sexual maturity, female fecundity, male fertility, and mating advantage. We estimated these components for wild-type and transgenic individuals using the fish Japanese medaka (Oryzias latipes). We generalized our model's predictions using various combinations of fitness component values in addition to our experimentally derived estimates. Our model predicted that, for a wide range of parameter values, transgenes could spread in populations despite high juvenile viability costs if transgenes also have sufficiently high positive effects on other fitness components. Sensitivity analyses indicated that transgene effects on age at sexual maturity should have the greatest impact on transgene frequency, followed by juvenile viability, mating advantage, female fecundity, and male fertility, with changes in adult viability resulting in the least impact.
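The spread-despite-viability-cost prediction can be sketched with a one-locus, haploid-style recurrence. This is a deliberate simplification: the paper's model tracks six fitness components and diploid medaka genetics, and all numbers below are illustrative assumptions.

```python
# Haploid-style recurrence for transgene frequency under a net fitness
# difference (a simplified sketch, not the paper's six-component model).
def next_freq(p, w_trans, w_wild):
    mean_w = p * w_trans + (1 - p) * w_wild
    return p * w_trans / mean_w

# A transgene with a juvenile-viability cost but a large mating advantage
# can still have net fitness > 1 and spread (illustrative numbers).
w_trans = 0.7 * 1.8      # viability cost x mating advantage = 1.26
w_wild = 1.0
p = 0.01                 # initial frequency after a small release
for gen in range(50):
    p = next_freq(p, w_trans, w_wild)
print(f"transgene frequency after 50 generations: {p:.3f}")
```

With these numbers the transgene approaches fixation within 50 generations, illustrating why a single deleterious component need not prevent spread.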
Barratclough, Ashley; Conner, Bobbi J; Brooks, Marjory B; Pontes Stablein, Alyssa; Gerlach, Trevor J; Reep, Roger L; Ball, Ray L; Floyd, Ruth Francis
2017-08-09
Cold stress syndrome (CSS) in the Florida manatee Trichechus manatus latirostris has been defined as morbidity and mortality resulting from prolonged exposure to water temperatures <20°C. The pathophysiology is described as multifactorial, involving nutritional, immunological and metabolic disturbances; however, the exact mechanisms are unknown. We hypothesized that thromboembolic complications contribute to the pathophysiology of CSS in addition to the previously described factors. During the winter of 2014-2015, 10 Florida manatees with clinical signs of CSS were presented to Lowry Park Zoo, Tampa, FL, USA. Thromboelastography (TEG) and coagulation panels were performed at admission. In addition, coagulation panel data from 23 retrospective CSS cases were included in the analyses. There were numerous differences between mean values of TEG and coagulation parameters for healthy manatees and those for CSS cases. Among TEG parameters, reaction time (R), clot formation time (K) and percentage of clot lysed after 30 min (LY30) values were significantly different (p < 0.05) between the 2 groups. CSS cases also had significantly higher mean D-dimer concentration and coagulation factor XI activity, prolonged mean activated partial thromboplastin time (aPTT) and significantly decreased mean antithrombin activity. These combined abnormalities include clinicopathologic criteria of disseminated intravascular coagulation, indicating an increased risk of thromboembolic disease associated with manatee CSS.
Characterizing unknown systematics in large scale structure surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Nishant; Ho, Shirley; Myers, Adam D.
Photometric large scale structure (LSS) surveys probe the largest volumes in the Universe, but are inevitably limited by systematic uncertainties. Imperfect photometric calibration leads to biases in our measurements of the density fields of LSS tracers such as galaxies and quasars, and as a result in cosmological parameter estimation. Earlier studies have proposed using cross-correlations between different redshift slices or cross-correlations between different surveys to reduce the effects of such systematics. In this paper we develop a method to characterize unknown systematics. We demonstrate that while we do not have sufficient information to correct for unknown systematics in the data, we can obtain an estimate of their magnitude. We define a parameter to estimate contamination from unknown systematics using cross-correlations between different redshift slices and propose discarding bins in the angular power spectrum that lie outside a certain contamination tolerance level. We show that this method improves estimates of the bias using simulated data and further apply it to photometric luminous red galaxies in the Sloan Digital Sky Survey as a case study.
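The core idea, that widely separated redshift slices share no cosmological signal, so any residual cross-correlation measures the shared systematic, can be sketched with a toy pixel-level simulation. The shared-template setup and contamination amplitudes are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two redshift slices: independent "cosmological" fluctuations plus a
# SHARED systematic template (e.g. a calibration pattern on the sky).
npix = 100_000
sys_template = rng.normal(size=npix)
eps1, eps2 = 0.3, 0.2                       # assumed contamination amplitudes
slice1 = rng.normal(size=npix) + eps1 * sys_template
slice2 = rng.normal(size=npix) + eps2 * sys_template

# Widely separated slices share no signal, so the cross-correlation
# estimates the systematic contamination eps1 * eps2.
cross = np.mean(slice1 * slice2)
print(f"measured cross-power = {cross:.3f}, expected = {eps1 * eps2:.3f}")
```

The measured cross-power recovers the product of the contamination amplitudes even though neither slice reveals the systematic on its own.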
A dynamical approach in exploring the unknown mass in the Solar system using pulsar timing arrays
NASA Astrophysics Data System (ADS)
Guo, Y. J.; Lee, K. J.; Caballero, R. N.
2018-04-01
The error in the Solar system ephemeris will lead to dipolar correlations in the residuals of pulsar timing arrays for widely separated pulsars. In this paper, we utilize such correlated signals, and construct a Bayesian data-analysis framework to detect the unknown mass in the Solar system and to measure the orbital parameters. The algorithm is designed to calculate the waveform of the induced pulsar-timing residuals due to the unmodelled objects following Keplerian orbits in the Solar system. The algorithm incorporates a Bayesian-analysis suite used to simultaneously analyse the pulsar-timing data of multiple pulsars to search for coherent waveforms, evaluate the detection significance of unknown objects, and measure their parameters. When the object is not detectable, our algorithm can be used to place upper limits on the mass. The algorithm is verified using simulated data sets, and cross-checked with analytical calculations. We also investigate the capability of future pulsar-timing-array experiments in detecting unknown objects. We expect that future pulsar-timing data can limit unknown massive objects in the Solar system to be lighter than 10⁻¹¹-10⁻¹² M⊙, or measure the mass of the Jovian system to a fractional precision of 10⁻⁸-10⁻⁹.
Ding, A Adam; Wu, Hulin
2014-10-01
We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.
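The baseline the paper improves on, the smoothing-based two-stage pseudo-least squares estimator, can be sketched on a toy exponential-decay ODE. The model, noise level, and window size below are assumptions for illustration, not the paper's constrained estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ODE: dx/dt = -k * x with true k = 0.5, observed with noise.
k_true = 0.5
t = np.linspace(0.0, 4.0, 200)
y = np.exp(-k_true * t) + rng.normal(scale=0.01, size=t.size)

# Stage 1: local polynomial smoothing of the trajectory.
def local_poly_smooth(t, y, window=21, degree=2):
    half = window // 2
    fitted = np.empty_like(y)
    for i in range(len(t)):
        lo, hi = max(0, i - half), min(len(t), i + half + 1)
        coef = np.polyfit(t[lo:hi] - t[i], y[lo:hi], degree)
        fitted[i] = coef[-1]          # local polynomial value at t[i]
    return fitted

x_hat = local_poly_smooth(t, y)
dx_hat = np.gradient(x_hat, t)

# Stage 2: pseudo-least squares -- regress dx/dt on -x to estimate k.
k_hat = -np.sum(dx_hat * x_hat) / np.sum(x_hat ** 2)
print(f"estimated k = {k_hat:.3f} (true {k_true})")
```

The paper's contribution is to constrain stage 1 with the differential equation itself rather than smoothing the data unconditionally as done here.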
Calibration process of highly parameterized semi-distributed hydrological model
NASA Astrophysics Data System (ADS)
Vidmar, Andrej; Brilly, Mitja
2017-04-01
Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that has not been researched enough. Calibration is a procedure for determining those parameters of a model that are not known well enough. Input and output variables and the mathematical model expressions are known, while some parameters are unknown and are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that give the modeller no possibility to manage the process, and the results are not the best. We developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command line interface, and couple it with PEST, a parameter estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by the expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure were left to the selected optimization algorithm alone. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas. This step requires the geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set initial parameter values to their preferred values based on expert knowledge. In this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group.
The third step is to set appropriate bounds on the parameters within their range of realistic values. The fourth step is to use singular value decomposition (SVD), which ensures that PEST maintains numerical stability regardless of how ill-posed the inverse problem is. The fifth step is to run PWTADJ1, which creates a new PEST control file in which weights are adjusted so that each observation group contributes equally to the total objective function; this prevents the information content of any group from being invisible to the inversion process. The sixth step is to add Tikhonov regularization to the PEST control file by running the ADDREG1 utility (Doherty, 2013). In adding regularization, ADDREG1 automatically provides a prior information equation for each parameter in which the preferred value of that parameter is equated to its initial value. The last step is to run PEST. We run BeoPEST, a parallel version of PEST that can run on multiple computers simultaneously over TCP communications, which speeds up the calibration process. The case study with the results of model calibration and validation will be presented.
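The SVD and Tikhonov steps above have simple linear-algebra analogues. The sketch below is not PEST itself; it illustrates, on an assumed toy problem, how truncated SVD stabilizes an ill-posed inversion and how Tikhonov regularization pulls poorly constrained parameters toward their preferred (initial) values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-posed toy inverse problem: J @ p = d with two almost-collinear
# parameters, so their difference is unresolvable from the data.
J = rng.normal(size=(30, 6))
J[:, 5] = J[:, 4] + 1e-8 * rng.normal(size=30)
p_true = np.array([1.0, -0.5, 2.0, 0.3, 1.5, 1.5])
d = J @ p_true + rng.normal(scale=0.01, size=30)

# Step-4 analogue: truncated SVD keeps only well-constrained directions.
U, s, Vt = np.linalg.svd(J, full_matrices=False)
keep = s > 1e-6 * s[0]
p_svd = Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])

# Step-6 analogue: Tikhonov regularization pulls parameters toward
# preferred values p0 where the data cannot resolve them.
p0 = np.full(6, 1.0)
lam = 0.1
A = np.vstack([J, lam * np.eye(6)])
b = np.concatenate([d, lam * p0])
p_tik, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p_svd.round(2), p_tik.round(2))
```

In both cases the well-determined combination of the collinear pair (their sum) is recovered even though the individual values are not identifiable.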
Quantification of type I error probabilities for heterogeneity LOD scores.
Abreu, Paula C; Hodge, Susan E; Greenberg, David A
2002-02-01
Locus heterogeneity is a major confounding factor in linkage analysis. When no prior knowledge of linkage exists, and one aims to detect linkage and heterogeneity simultaneously, classical distribution theory of log-likelihood ratios does not hold. Despite some theoretical work on this problem, no generally accepted practical guidelines exist. Nor has anyone rigorously examined the combined effect of testing for linkage and heterogeneity and simultaneously maximizing over two genetic models (dominant, recessive). The effect of linkage phase represents another uninvestigated issue. Using computer simulation, we investigated the type I error (P value) of the "admixture" heterogeneity LOD (HLOD) score, i.e., the LOD score maximized over both the recombination fraction θ and the admixture parameter α, and we compared this with the P values when one maximizes only with respect to θ (i.e., the standard LOD score). We generated datasets of phase-known and phase-unknown nuclear families, of sizes k = 2, 4, and 6 children, under fully penetrant autosomal dominant inheritance. We analyzed these datasets (1) assuming a single genetic model, and maximizing the HLOD over θ and α; and (2) maximizing the HLOD additionally over two dominance models (dominant vs. recessive), then subtracting a 0.3 correction. For both (1) and (2), P values increased with family size k; rose less for phase-unknown families than for phase-known ones, with the former approaching the latter as k increased; and did not exceed the one-sided mixture distribution ξ = (1/2)χ²₁ + (1/2)χ²₂. Thus, maximizing the HLOD over θ and α appears to add considerably less than an additional degree of freedom to the associated χ²₁ distribution. We conclude with practical guidelines for linkage investigators. Copyright 2002 Wiley-Liss, Inc.
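The admixture HLOD maximization over θ and α can be sketched as a grid search. The per-family LOD curves below are stylized stand-ins, not simulated genetic data, and the admixture formula is the standard one: a fraction α of families is assumed linked.

```python
import numpy as np

# Stylized per-family LOD curves on a grid of recombination fractions
# theta (in practice these come from a linkage program).
thetas = np.linspace(0.0, 0.5, 51)
curves = np.array([
    2.0 - 20.0 * (thetas - 0.05) ** 2,   # linked family, peak LOD 2.0
    1.5 - 20.0 * (thetas - 0.10) ** 2,   # linked family, peak LOD 1.5
    -3.0 * (0.5 - thetas),               # unlinked family
])

# Admixture HLOD: fraction alpha of families is linked.
alphas = np.linspace(0.0, 1.0, 101)
def hlod(alpha, lods):
    return np.log10(alpha * 10.0 ** lods + (1.0 - alpha)).sum(axis=0)

grid = np.array([hlod(a, curves) for a in alphas])   # shape (alpha, theta)
i, j = np.unravel_index(np.argmax(grid), grid.shape)
print(f"max HLOD = {grid[i, j]:.2f} at alpha = {alphas[i]:.2f}, "
      f"theta = {thetas[j]:.3f}")
```

With one unlinked family in the mix, the maximizing α is strictly between 0 and 1 and the HLOD exceeds the standard LOD (the α = 1 row of the grid), which is exactly the extra maximization whose type I error cost the paper quantifies.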
Efficient hierarchical trans-dimensional Bayesian inversion of magnetotelluric data
NASA Astrophysics Data System (ADS)
Xiang, Enming; Guo, Rongwen; Dosso, Stan E.; Liu, Jianxin; Dong, Hao; Ren, Zhengyong
2018-06-01
This paper develops an efficient hierarchical trans-dimensional (trans-D) Bayesian algorithm to invert magnetotelluric (MT) data for subsurface geoelectrical structure, with unknown geophysical model parameterization (the number of conductivity-layer interfaces) and data-error models parameterized by an auto-regressive (AR) process to account for potential error correlations. The reversible-jump Markov-chain Monte Carlo algorithm, which adds/removes interfaces and AR parameters in birth/death steps, is applied to sample the trans-D posterior probability density for model parameterization, model parameters, error variance and AR parameters, accounting for the uncertainties of model dimension and data-error statistics in the uncertainty estimates of the conductivity profile. To provide efficient sampling over the multiple subspaces of different dimensions, advanced proposal schemes are applied. Parameter perturbations are carried out in principal-component space, defined by eigen-decomposition of the unit-lag model covariance matrix, to minimize the effect of inter-parameter correlations and provide effective perturbation directions and length scales. Parameters of new layers in birth steps are proposed from the prior, instead of focused distributions centred at existing values, to improve birth acceptance rates. Parallel tempering, based on a series of parallel interacting Markov chains with successively relaxed likelihoods, is applied to improve chain mixing over model dimensions. The trans-D inversion is applied in a simulation study to examine the resolution of model structure according to the data information content. The inversion is also applied to a measured MT data set from south-central Australia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosking, Jonathan R. M.; Natarajan, Ramesh
The computer creates a utility demand forecast model for weather parameters by receiving a plurality of utility parameter values, wherein each received utility parameter value corresponds to a weather parameter value; determining that a range of weather parameter values lacks a sufficient number of corresponding received utility parameter values; determining one or more utility parameter values that correspond to that range of weather parameter values; and creating a model that correlates the received and the determined utility parameter values with the corresponding weather parameter values.
NASA Astrophysics Data System (ADS)
Frazer, Gordon J.; Anderson, Stuart J.
1997-10-01
The radar returns from some classes of time-varying point targets can be represented by the discrete-time signal-plus-noise model x_t = s_t + [v_t + η_t] = Σ_{i=0}^{P−1} A_i e^{j2π(f_i/f_s)t} + v_t + η_t, t ∈ {0, …, N−1}, with f_i = k f_I + f_0, where the received signal x_t corresponds to the radar return from the target of interest in one azimuth-range cell. The signal has an unknown number of components P, unknown complex amplitudes A_i and frequencies f_i. The frequency parameters f_0 and f_I are unknown, although constrained such that f_0 < f_I/2, and the parameter k ∈ {−u, …, −2, −1, 0, 1, 2, …, v} is constrained such that the component frequencies f_i lie within (−f_s/2, f_s/2). The noise term v_t is typically colored, and represents clutter, interference and various noise sources. It is unknown, except that Σ_t v_t² < ∞; in general, v_t is not well modelled as an auto-regressive process of known order. The additional noise term η_t represents time-invariant point targets in the same azimuth-range cell. An important characteristic of the target is the unknown parameter f_I, representing the frequency interval between harmonic lines. It is desired to determine an estimate of f_I from N samples of x_t. We propose an algorithm to estimate f_I based on Thomson's harmonic line F-test, which is part of the multi-window spectrum estimation method, and demonstrate the proposed estimator applied to target echo time series collected using an experimental HF skywave radar.
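Estimating the line spacing f_I can be sketched with a simple periodogram-based approach (not Thomson's F-test). The signal parameters below are assumptions chosen so that every line falls exactly on an FFT bin.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic echo with harmonic lines at f_i = k*f_I + f_0 plus noise.
fs, N = 1024.0, 1024
f_I, f_0 = 37.0, 11.0                      # hypothetical spacing and offset
t = np.arange(N) / fs
x = sum(np.exp(2j * np.pi * (k * f_I + f_0) * t) for k in range(-3, 4))
x = x + 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Periodogram; with this fs and N each unit-amplitude line occupies one bin.
spec = np.abs(np.fft.fft(x)) / N
freqs = np.fft.fftfreq(N, 1.0 / fs)
order = np.argsort(freqs)
freqs, spec = freqs[order], spec[order]

# Detected lines and their median spacing as the f_I estimate.
lines = freqs[spec > 0.5]
f_I_hat = float(np.median(np.diff(lines)))
print(f"estimated line spacing f_I = {f_I_hat:.2f} Hz")
```

Off-bin lines would leak across neighbouring bins and defeat this naive peak picking, which is one motivation for the more robust multi-window F-test approach the abstract proposes.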
Demixing-stimulated lane formation in binary complex plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, C.-R.; Jiang, K.; Suetterlin, K. R.
2011-11-29
Recently lane formation and phase separation have been reported for experiments with binary complex plasmas in the PK3-Plus laboratory onboard the International Space Station (ISS). Positive non-additivity of particle interactions is known to stimulate phase separation (demixing), but its effect on lane formation is unknown. In this work, we used Langevin dynamics (LD) simulations to probe the role of non-additive interactions in lane formation. The competition between laning and demixing leads to thicker lanes. Analysis based on anisotropic scaling indices reveals a crossover from a normal laning mode to a demixing-stimulated laning mode. Extensive numerical simulations enabled us to identify a critical value of the non-additivity parameter Δ for the crossover.
Estimation of TOA based MUSIC algorithm and cross correlation algorithm of appropriate interval
NASA Astrophysics Data System (ADS)
Lin, Wei; Liu, Jun; Zhou, Yineng; Huang, Jiyan
2017-03-01
Localization of a mobile station (MS) has now gained considerable attention due to its wide applications in military, environmental, health and commercial systems. The phase angle and encoded data of the MSK system model are two critical parameters in the time-of-arrival (TOA) localization technique; nevertheless, precise values of the phase angle and encoded data are not easy to achieve in general. To reflect the actual situation, we should consider the condition that the phase angle and encoded data are unknown. In this paper, a novel TOA localization method, which combines the MUSIC algorithm and the cross correlation algorithm in an appropriate interval, is proposed. Simulations show that the proposed method has better performance than the MUSIC algorithm and the cross correlation algorithm over the whole interval.
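The cross-correlation ingredient of TOA estimation can be sketched as follows. The chirp pulse, sampling rate, and noise level are assumptions for illustration, not the paper's MSK setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Transmitted pulse: a linear FM chirp (sharp autocorrelation peak).
fs, n = 1.0e6, 1024
t = np.arange(n) / fs
f0, f1 = 10e3, 200e3
template = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t ** 2))

# Received signal: a delayed copy of the pulse buried in noise.
delay_true = 137                          # samples
received = np.zeros(4096)
received[delay_true:delay_true + n] = template
received += 0.2 * rng.normal(size=received.size)

# TOA estimate: peak of the cross-correlation with the template.
corr = np.correlate(received, template, mode="valid")
delay_hat = int(np.argmax(corr))
print(f"estimated delay = {delay_hat} samples "
      f"({delay_hat / fs * 1e6:.1f} microseconds)")
```

The correlation peak recovers the delay to within a sample; the paper's contribution is handling the case where the waveform itself (phase angle, encoded data) is only partially known.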
Rational solitons in deep nonlinear optical Bragg grating.
Alatas, H; Iskandar, A A; Tjia, M O; Valkering, T P
2006-06-01
We have examined the rational solitons in the Generalized Coupled Mode model for a deep nonlinear Bragg grating. These solitons are the degenerate forms of the ordinary solitons and appear at the transition lines in the parameter plane. A simple formulation is presented for the investigation of the bifurcations induced by detuning the carrier wave frequency. The analysis yields, among others, the appearance of in-gap dark and antidark rational solitons unknown in the nonlinear shallow grating. The exact expressions for the corresponding rational solitons, which are characterized by rational algebraic functions, are also derived in the process. It is further demonstrated that certain effects in the soliton energy variations are to be expected when the frequency is varied across the values where the rational solitons appear.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ensslin, Torsten A.; Frommert, Mona
2011-05-15
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power-spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power-spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
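The common core of all five approaches, a Wiener-filter operation with an assumed signal spectrum, can be sketched in Fourier space. The spectra and sizes below are toy assumptions; here the signal spectrum is taken as known rather than derived from the data.

```python
import numpy as np

rng = np.random.default_rng(5)

# One-dimensional toy field with a power-law signal spectrum plus
# white noise; the Wiener filter weights each mode by S/(S + N).
n = 256
k = np.fft.rfftfreq(n, d=1.0)
P_signal = 1.0 / (1e-3 + k ** 2)           # assumed signal power spectrum
P_noise = np.full_like(k, 5.0)             # white-noise power per mode

# Draw a Gaussian signal realization with this spectrum, then add noise.
s = np.fft.irfft(np.sqrt(P_signal * n / 2) *
                 (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)), n)
d = s + rng.normal(scale=np.sqrt(P_noise[0]), size=n)

# Wiener-filter reconstruction, applied mode by mode.
s_hat = np.fft.irfft(np.fft.rfft(d) * P_signal / (P_signal + P_noise), n)

err_raw = np.mean((d - s) ** 2)
err_wf = np.mean((s_hat - s) ** 2)
print(f"raw MSE = {err_raw:.2f}, Wiener-filtered MSE = {err_wf:.2f}")
```

The paper's five filters differ precisely in how they replace the assumed P_signal above with a spectrum inferred from the same data.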
NASA Astrophysics Data System (ADS)
Salama, Paul
2008-02-01
Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep into tissues. Unfortunately deep tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images it is sometimes necessary to initially enhance the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel comprising an image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In arriving at the distribution it is assumed that the noise is Poisson distributed, the true but unknown pixel values assume a probability mass function over a finite set of non-negative values, and since the observed data also assumes finite values because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the unknown probability mass function with these assumptions.
The framed Standard Model (II) — A first test against experiment
NASA Astrophysics Data System (ADS)
Chan, Hong-Mo; Tsou, Sheung Tsun
2015-10-01
Apart from the qualitative features described in Paper I (Ref. 1), the renormalization group equation derived for the rotation of the fermion mass matrices is amenable to quantitative study. The equation depends on a coupling and a fudge factor and, on integration, on 3 integration constants. Its application to data analysis, however, requires the input from experiment of the heaviest generation masses mt, mb, mτ, mν3, all of which are known except for mν3. Together with the theta-angle in the QCD action, there are in all 7 real unknown parameters. Determining these 7 parameters by fitting to the experimental values of the masses mc, mμ, me, the CKM elements |Vus|, |Vub|, and the neutrino oscillation angle sin²θ₁₃, one can then calculate and compare with experiment the following 12 other quantities: ms, mu/md, |Vud|, |Vcs|, |Vtb|, |Vcd|, |Vcb|, |Vts|, |Vtd|, J, sin²2θ₁₂, sin²2θ₂₃. The results all agree reasonably well with data, often to within the stringent experimental error now achieved. Counting the predictions not yet measured by experiment, this means that 17 independent parameters of the standard model are now replaced by 7 in the FSM.
Theoretical relationship between elastic wave velocity and electrical resistivity
NASA Astrophysics Data System (ADS)
Lee, Jong-Sub; Yoon, Hyung-Koo
2015-05-01
Elastic wave velocity and electrical resistivity have been commonly applied to estimate stratum structures and obtain subsurface soil design parameters. Both elastic wave velocity and electrical resistivity are related to the void ratio; the objective of this study is therefore to suggest a theoretical relationship between the two physical parameters. Gassmann theory and Archie's equation are applied to propose a new theoretical equation, which relates the compressional wave velocity to shear wave velocity and electrical resistivity. The piezo disk element (PDE) and bender element (BE) are used to measure the compressional and shear wave velocities, respectively. In addition, the electrical resistivity is obtained by using the electrical resistivity probe (ERP). The elastic wave velocity and electrical resistivity are recorded in several types of soils including sand, silty sand, silty clay, silt, and clay-sand mixture. The appropriate input parameters are determined based on the error norm in order to increase the reliability of the proposed relationship. The predicted compressional wave velocities from the shear wave velocity and electrical resistivity are similar to the measured compressional velocities. This study demonstrates that the new theoretical relationship may be effectively used to predict the unknown geophysical property from the measured values.
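The resistivity-to-velocity link via porosity can be sketched directly. For brevity the sketch inverts Archie's equation for porosity and predicts Vp with the Wyllie time-average equation rather than the full Gassmann relation used in the paper; all coefficients are assumed values.

```python
# Archie's equation: rho_bulk = a * rho_w * phi**(-m).
# Invert measured resistivity for porosity, then predict Vp with the
# Wyllie time-average equation (a simpler stand-in for Gassmann theory).
a, m = 1.0, 2.0                        # Archie coefficients (assumed)
rho_w = 0.2                            # pore-water resistivity, ohm-m (assumed)
v_fluid, v_matrix = 1500.0, 5500.0     # m/s (assumed water/quartz values)

def porosity_from_resistivity(rho_bulk):
    F = rho_bulk / rho_w               # formation factor
    return (a / F) ** (1.0 / m)

def vp_wyllie(phi):
    # Time-average: 1/Vp = phi/Vf + (1 - phi)/Vm
    return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)

rho_bulk = 5.0                         # measured resistivity, ohm-m
phi = porosity_from_resistivity(rho_bulk)
print(f"porosity = {phi:.3f}, predicted Vp = {vp_wyllie(phi):.0f} m/s")
```

Because both measurements constrain the same void ratio, a resistivity reading determines a porosity and hence a velocity estimate, which is the logic behind the paper's combined Gassmann-Archie relationship.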
NASA Astrophysics Data System (ADS)
Li, Zhengxiang; Gonzalez, J. E.; Yu, Hongwei; Zhu, Zong-Hong; Alcaniz, J. S.
2016-02-01
We apply two methods, i.e., the Gaussian processes and the nonparametric smoothing procedure, to reconstruct the Hubble parameter H(z) as a function of redshift from 15 measurements of the expansion rate obtained from age estimates of passively evolving galaxies. These reconstructions enable us to derive the luminosity distance to a certain redshift z, calibrate the light-curve fitting parameters accounting for the (unknown) intrinsic magnitude of type Ia supernovae (SNe Ia), and construct cosmological model-independent Hubble diagrams of SNe Ia. In order to test the compatibility between the reconstructed functions of H(z), we perform a statistical analysis considering the latest SNe Ia sample, the so-called joint light-curve compilation. We find that, for the Gaussian processes, the reconstructed functions of the Hubble parameter versus redshift, and thus the subsequent analysis of SNe Ia calibrations and cosmological implications, are sensitive to the prior mean functions. For the nonparametric smoothing method, however, the reconstructed functions do not depend on initial guess models and consistently require high values of H0, in excellent agreement with recent measurements of this quantity from Cepheids and other local distance indicators.
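A Gaussian-process reconstruction of H(z) can be sketched with a squared-exponential kernel and a zero prior mean, one of the prior choices the paper shows the result is sensitive to. The data points and hyperparameters below are illustrative assumptions, not the actual 15 measurements.

```python
import numpy as np

# Toy H(z) "measurements" with uncertainties (illustration only).
z_obs = np.array([0.1, 0.4, 0.9, 1.3, 1.75])
H_obs = np.array([69.0, 83.0, 117.0, 168.0, 202.0])   # km/s/Mpc
sigma = np.array([12.0, 8.0, 23.0, 17.0, 40.0])

# Squared-exponential kernel; the zero prior mean is one assumption
# the reconstruction can be sensitive to.
def kernel(a, b, sf=150.0, ell=2.0):
    return sf ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

z_new = np.linspace(0.0, 2.0, 41)
K = kernel(z_obs, z_obs) + np.diag(sigma ** 2)
Ks = kernel(z_new, z_obs)
H_mean = Ks @ np.linalg.solve(K, H_obs)               # GP posterior mean
H_var = kernel(z_new, z_new).diagonal() - np.einsum(
    "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))         # posterior variance
print(H_mean[::10].round(1))
```

With the reconstructed H(z) in hand, luminosity distances follow by numerically integrating 1/H(z) over redshift, which is how the supernova calibration in the paper proceeds.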
Hutchinson, John R; Ng-Thow-Hing, Victor; Anderson, Frank C
2007-06-21
We developed a method based on interactive B-spline solids for estimating and visualizing biomechanically important parameters for animal body segments. Although the method is most useful for assessing the importance of unknowns in extinct animals, such as body contours, muscle bulk, or inertial parameters, it is also useful for non-invasive measurement of segmental dimensions in extant animals. Points measured directly from bodies or skeletons are digitized and visualized on a computer, and then a B-spline solid is fitted to enclose these points, allowing quantification of segment dimensions. The method is computationally fast enough so that software implementations can interactively deform the shape of body segments (by warping the solid) or adjust the shape quantitatively (e.g., expanding the solid boundary by some percentage or a specific distance beyond measured skeletal coordinates). As the shape changes, the resulting changes in segment mass, center of mass (CM), and moments of inertia can be recomputed immediately. Volumes of reduced or increased density can be embedded to represent lungs, bones, or other structures within the body. The method was validated by reconstructing an ostrich body from a fleshed and defleshed carcass and comparing the estimated dimensions to empirically measured values from the original carcass. We then used the method to calculate the segmental masses, centers of mass, and moments of inertia for an adult Tyrannosaurus rex, with measurements taken directly from a complete skeleton. We compare these results to other estimates, using the model to compute the sensitivities of unknown parameter values based upon 30 different combinations of trunk, lung and air sac, and hindlimb dimensions. The conclusion that T. rex was not an exceptionally fast runner remains strongly supported by our models; the main area of ambiguity for estimating running ability seems to be estimating fascicle lengths, not body dimensions.
Additionally, the craniad position of the CM in all of our models reinforces the notion that T. rex did not stand or move with extremely columnar, elephantine limbs. It required some flexion in the limbs to stand still, but how much flexion depends directly on where its CM is assumed to lie. Finally, we used our model to test an unsolved problem in dinosaur biomechanics: how fast a huge biped like T. rex could turn. Depending on the assumptions, our whole-body model integrated with a musculoskeletal model estimates that turning 45 degrees on one leg could be achieved slowly, in about 1-2 s.
An internal pilot design for prospective cancer screening trials with unknown disease prevalence.
Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2015-10-13
For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I inflation. For small samples (N < 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves the goal power and, for most studies with sample sizes greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
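The internal-pilot update can be sketched with a normal-approximation sample-size formula for comparing two test sensitivities. The formula and all numbers are illustrative assumptions, not the paper's exact method; the key point is that the required total inflates by 1/prevalence, so a pilot estimate of prevalence changes the answer substantially.

```python
import math

# Normal-approximation sample size for comparing two sensitivities p1, p2
# (two-sided alpha = 0.05, power = 0.80); names/numbers are assumptions.
z_alpha, z_beta = 1.96, 0.84

def n_total(prevalence, p1, p2):
    # Diseased subjects needed to detect the sensitivity difference,
    # inflated by 1/prevalence since only diseased cases are informative.
    pbar = 0.5 * (p1 + p2)
    n_diseased = ((z_alpha * math.sqrt(2 * pbar * (1 - pbar))
                   + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
                  / (p1 - p2) ** 2)
    return math.ceil(n_diseased / prevalence)

n_planned = n_total(prevalence=0.10, p1=0.80, p2=0.90)  # design-stage guess
n_updated = n_total(prevalence=0.05, p1=0.80, p2=0.90)  # pilot estimate
print(n_planned, n_updated)
```

Halving the assumed prevalence doubles the required accrual, which is why re-estimating it mid-trial, as the internal pilot design does, matters so much for achieving the goal power.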
Adaptive Value Normalization in the Prefrontal Cortex Is Reduced by Memory Load.
Holper, L; Van Brussel, L D; Schmidt, L; Schulthess, S; Burke, C J; Louie, K; Seifritz, E; Tobler, P N
2017-01-01
Adaptation facilitates neural representation of a wide range of diverse inputs, including reward values. Adaptive value coding typically relies on contextual information either obtained from the environment or retrieved from and maintained in memory. However, it is unknown whether having to retrieve and maintain context information modulates the brain's capacity for value adaptation. To address this issue, we measured hemodynamic responses of the prefrontal cortex (PFC) in two studies on risky decision-making. In each trial, healthy human subjects chose between a risky and a safe alternative; half of the participants had to remember the risky alternatives, whereas for the other half they were presented visually. The value of safe alternatives varied across trials. PFC responses adapted to contextual risk information, with steeper coding of safe alternative value in lower-risk contexts. Importantly, this adaptation depended on working memory load, such that response functions relating PFC activity to safe values were steeper with presented versus remembered risk. An independent second study replicated the findings of the first study and showed that similar slope reductions also arose when memory maintenance demands were increased with a secondary working memory task. Formal model comparison showed that a divisive normalization model fitted effects of both risk context and working memory demands on PFC activity better than alternative models of value adaptation, and revealed that reduced suppression of background activity was the critical parameter impairing normalization with increased memory maintenance demand. Our findings suggest that mnemonic processes can constrain normalization of neural value representations.
The Absolute Abundance of Iron in the Solar Corona.
White; Thomas; Brosius; Kundu
2000-05-10
We present a measurement of the abundance of Fe relative to H in the solar corona using a technique that differs from previous spectroscopic and solar wind measurements. Our method combines EUV line data from the Coronal Diagnostic Spectrometer (CDS) on the Solar and Heliospheric Observatory with thermal bremsstrahlung radio data from the VLA. The coronal Fe abundance is derived by equating the thermal bremsstrahlung radio emission calculated from the EUV Fe line data to that observed with the VLA, treating the Fe/H abundance as the sole unknown. We apply this technique to a compact cool active region and find Fe/H = 1.56 × 10^-4, or about 4 times its value in the solar photosphere. Uncertainties in the CDS radiometric calibration, the VLA intensity measurements, the atomic parameters, and the assumptions made in the spectral analysis yield net uncertainties of approximately 20%. This result implies that low first ionization potential elements such as Fe are enhanced in the solar corona relative to photospheric values.
NASA Astrophysics Data System (ADS)
Ou, Meiying; Sun, Haibin; Gu, Shengwei; Zhang, Yangyi
2017-11-01
This paper investigates the distributed finite-time trajectory tracking control for a group of nonholonomic mobile robots with time-varying unknown parameters and external disturbances. At first, the tracking error system is derived for each mobile robot with the aid of a global invertible transformation; it consists of two subsystems, one a first-order subsystem and the other a second-order subsystem. Then, the two subsystems are studied respectively, and finite-time disturbance observers are proposed for each robot to estimate the external disturbances. Meanwhile, distributed finite-time tracking controllers are developed for each mobile robot such that all states of each robot can reach the desired value in finite time, where the desired reference value is assumed to be the trajectory of a virtual leader whose information is available to only a subset of the followers, and the followers are assumed to have only local interaction. The effectiveness of the theoretical results is finally illustrated by numerical simulations.
Oblique wave trapping by vertical permeable membrane barriers located near a wall
NASA Astrophysics Data System (ADS)
Koley, Santanu; Sahoo, Trilochan
2017-12-01
The effectiveness of a vertical partial flexible porous membrane wave barrier located near a rigid vertical impermeable seawall for trapping obliquely incident surface gravity waves is analyzed in water of uniform depth under the assumption of linear water wave theory and small amplitude membrane barrier response. From the general formulation of the submerged membrane barrier, results for bottom-standing and surface-piercing barriers are computed and analyzed in special cases. Using the eigenfunction expansion method, the boundary-value problems are converted into series relations and then the required unknowns are obtained using the least squares approximation method. Various physical quantities of interest like the reflection coefficient, wave energy dissipation, and wave forces acting on the membrane barrier and the seawall are computed and analyzed for different values of the wave and structural parameters. The study will be useful in the design of membrane wave barriers for the creation of a tranquility zone in the lee side of the barrier to protect the seawall.
Eddy, Sean R.
2008-01-01
Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty (“Forward” scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores (“Viterbi” scores) are Gumbel-distributed with constant λ = log 2, and the high scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile-hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments. PMID:18516236
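The conjectured distributions make E-value computation straightforward. A minimal sketch, assuming a Gumbel location parameter mu that has been obtained separately (e.g., by model calibration, which is not shown here):

```python
import math

LAMBDA = math.log(2.0)  # conjectured constant for probabilistic-model bit scores

def gumbel_pvalue(s, mu, lam=LAMBDA):
    """P(S >= s) for a Gumbel-distributed optimal (Viterbi) bit score."""
    return 1.0 - math.exp(-math.exp(-lam * (s - mu)))

def evalue(s, mu, n_comparisons, lam=LAMBDA):
    """Expected number of chance hits scoring >= s in a database search."""
    return n_comparisons * gumbel_pvalue(s, mu, lam)
```

In the high-scoring tail, P(S >= s) is approximately exp(-lam * (s - mu)) = 2^-(s - mu) when lam = log 2, so each additional bit of score halves the E-value; the same exponential tail applies to Forward scores under the conjecture.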
Iqbal, Muhammad; Rehan, Muhammad; Hong, Keum-Shik
2018-01-01
This paper exploits the dynamical modeling, behavior analysis, and synchronization of a network of four different FitzHugh–Nagumo (FHN) neurons with unknown parameters linked in a ring configuration under direction-dependent coupling. The main purpose is to investigate a robust adaptive control law for the synchronization of uncertain and perturbed neurons, communicating in a medium of bidirectional coupling. The neurons are assumed to be different and interconnected in a ring structure. The strength of the gap junctions is taken to be different for each link in the network, owing to the inter-neuronal coupling medium properties. A robust adaptive control mechanism based on Lyapunov stability analysis is employed and theoretical criteria are derived to realize the synchronization of the network of four FHN neurons in a ring form with unknown parameters under direction-dependent coupling and disturbances. To the best of our knowledge, the proposed scheme for synchronization of dissimilar neurons, under external electrical stimuli, coupled in a ring communication topology, having all parameters unknown, and subject to a directional coupling medium and perturbations, is addressed for the first time. To demonstrate the efficacy of the proposed strategy, simulation results are provided. PMID:29535622
NASA Astrophysics Data System (ADS)
Ma, Lin
2017-11-01
This paper develops a method for precisely determining the tension of an inclined cable with unknown boundary conditions. First, the nonlinear motion equation of an inclined cable is derived, and a numerical model of the motion of the cable is proposed using the finite difference method. The proposed numerical model includes the sag-extensibility, flexural stiffness, inclination angle and rotational stiffness at two ends of the cable. Second, the influence of the dynamic parameters of the cable on its frequencies is discussed in detail, and a method for precisely determining the tension of an inclined cable is proposed based on the derivatives of the eigenvalues of the matrices. Finally, a multiparameter identification method is developed that can simultaneously identify multiple parameters, including the rotational stiffness at two ends. This scheme is applicable to inclined cables with varying sag, varying flexural stiffness and unknown boundary conditions. Numerical examples indicate that the method provides good precision. Because the parameters of cables other than tension (e.g., the flexural stiffness and rotational stiffness at the ends) are not accurately known in practical engineering, the multiparameter identification method could further improve the accuracy of cable tension measurements.
NASA Astrophysics Data System (ADS)
Pyt'ev, Yu. P.
2018-01-01
A mathematical formalism for subjective modeling is developed, based on modeling of uncertainty that reflects both the unreliability of subjective information and the fuzziness common to its content. The model of subjective judgments on the values of an unknown parameter x ∈ X of the model M(x) of a research object is defined by the researcher-modeler as a space (X, P(X), Pl^x̃, Bel^x̃) with plausibility measure Pl^x̃ and believability measure Bel^x̃, where x̃ is an uncertain element taking values in X that models the researcher-modeler's uncertain propositions about the unknown x ∈ X. The measures Pl^x̃ and Bel^x̃ model the modalities of the researcher-modeler's subjective judgments on the validity of each x ∈ X: the value of Pl^x̃(x̃ = x) determines how relatively plausible, in his opinion, the equality x̃ = x is, while the value of Bel^x̃(x̃ = x) determines how much the equality x̃ = x should be relatively believed in. Versions of plausibility (Pl) and believability (Bel) measures and of pl- and bel-integrals that inherit some traits of probabilities and psychophysics and take into account the interests of groups of researcher-modelers are considered. It is shown that the mathematical formalism of subjective modeling, unlike "standard" mathematical modeling, enables a researcher-modeler to model both precise formalized knowledge and non-formalized unreliable knowledge, from complete ignorance to precise knowledge of the model of a research object, and to calculate the relative plausibilities and believabilities of any features of a research object specified by its subjective model M(x̃). If observation data on the research object are available, it further enables him to estimate the adequacy of the subjective model to the research objective, to correct the model by combining subjective ideas with the observation data after testing their consistency, and, finally, to empirically recover the model of the research object.
NASA Astrophysics Data System (ADS)
Chakraborty, A.; Goto, H.
2017-12-01
The 2011 off the Pacific coast of Tohoku earthquake caused severe damage in many areas further inside the mainland because of site-amplification. Furukawa district in Miyagi Prefecture, Japan recorded significant spatial differences in ground motion even at sub-kilometer scales. The site responses in the damage zone far exceeded the levels in the hazard maps. One reason for the mismatch is that maps follow only the mean value at the measurement locations, with no regard to the data uncertainties, and thus are not always reliable. Our research objective is to develop a methodology to incorporate data uncertainties in mapping and propose a reliable map. The methodology is based on a hierarchical Bayesian model of normally-distributed site responses in space where the mean (μ), site-specific variance (σ²) and between-sites variance (s²) parameters are treated as unknowns with a prior distribution. The observation data are artificially created site responses with varying means and variances for 150 seismic events across 50 locations in one-dimensional space. Spatially auto-correlated random effects were added to the mean (μ) using a conditionally autoregressive (CAR) prior. The inferences on the unknown parameters are done using Markov Chain Monte Carlo methods from the posterior distribution. The goal is to find reliable estimates of μ sensitive to uncertainties. During initial trials, we observed that the τ (= 1/s²) parameter of the CAR prior controls the μ estimation. Using a constraint, s = 1/(k×σ), five spatial models with varying k-values were created. We define reliability to be measured by the model likelihood and propose the maximum likelihood model to be highly reliable. The model with maximum likelihood was selected using a 5-fold cross-validation technique. The results show that the maximum likelihood model (μ*) follows the site-specific mean at low uncertainties and converges to the model-mean at higher uncertainties (Fig. 1).
This result is highly significant as it successfully incorporates the effect of data uncertainties in mapping. This novel approach can be applied to any research field using mapping techniques. The methodology is now being applied to real records from a very dense seismic network in Furukawa district, Miyagi Prefecture, Japan to generate a reliable map of the site responses.
Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger
NASA Astrophysics Data System (ADS)
Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.
2016-12-01
Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurements and an unknown parameter in a Nm model. We suppose that the yearly number of Nm induced mortality and the total population are known inputs, which can be obtained from data, and the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data of the total population and Nm induced mortality. Then, we use an auxiliary system called an observer whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only uses the known inputs and the model output. This allows us to estimate unmeasured state variables such as the number of carriers that play an important role in the transmission of the infection and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. In order to validate the estimation results, numerical simulations are conducted using real data of Niger.
Improved central confidence intervals for the ratio of Poisson means
NASA Astrophysics Data System (ADS)
Cousins, R. D.
The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
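The standard interval that this construction improves upon can be reproduced directly: conditioning on the total count n1 + n2 turns the problem into a binomial one with success probability p = ρ/(1 + ρ), where ρ is the ratio of means, and a central Clopper-Pearson interval on p transforms back to ρ. A sketch using SciPy:

```python
from scipy.stats import beta

def standard_ratio_ci(n1, n2, cl=0.90):
    """Standard central CI for rho = lambda1/lambda2 from Poisson counts n1, n2.

    Conditions on n = n1 + n2, so n1 ~ Binomial(n, p) with p = rho/(1+rho);
    a central Clopper-Pearson interval on p is mapped back via rho = p/(1-p).
    """
    n = n1 + n2
    a = (1.0 - cl) / 2.0
    p_lo = beta.ppf(a, n1, n - n1 + 1) if n1 > 0 else 0.0
    p_hi = beta.ppf(1 - a, n1 + 1, n - n1) if n1 < n else 1.0
    return p_lo / (1.0 - p_lo), p_hi / (1.0 - p_hi)

lo, hi = standard_ratio_ci(2, 2)  # the standard interval for counts 2 and 2
```

For n1 = n2 = 2 at 90% CL this reproduces approximately (0.108, 9.245), the standard interval quoted in the abstract, which Neyman's construction shortens to (0.169, 5.196).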
Parameter Estimation for a Turbulent Buoyant Jet Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Christopher, Jason D.; Wimer, Nicholas T.; Hayden, Torrey R. S.; Lapointe, Caelan; Grooms, Ian; Rieker, Gregory B.; Hamlington, Peter E.
2016-11-01
Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other "truth" data to be used for the prediction of unknown model parameters in numerical simulations of real-world engineering systems. In this presentation, we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a simulation with known boundary conditions and problem parameters. Using spatially-sparse temperature statistics from the 2D buoyant jet truth simulation, we show that the ABC method provides accurate predictions of the true jet inflow temperature. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for engineering fluid dynamics research.
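The basic rejection form of ABC fits in a few lines. In this sketch the Gaussian forward model and the uniform prior are hypothetical stand-ins for the buoyant-jet simulation and its inflow-temperature prior:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=200):
    """Stand-in forward model: noisy observations whose mean is theta."""
    return rng.normal(theta, 1.0, size=n)

def abc_rejection(observed_stat, prior_draws, tol):
    """Keep prior draws whose simulated summary statistic lands within tol
    of the observed one; accepted draws approximate the posterior."""
    accepted = []
    for theta in prior_draws:
        stat = simulate(theta).mean()  # sparse summary statistic
        if abs(stat - observed_stat) < tol:
            accepted.append(theta)
    return np.array(accepted)

prior = rng.uniform(0.0, 10.0, size=5000)  # vague prior on the unknown parameter
posterior = abc_rejection(observed_stat=4.0, prior_draws=prior, tol=0.2)
```

The accepted sample concentrates near the true value 4.0; tightening the tolerance trades acceptance rate for posterior accuracy, which is the central tuning decision in rejection ABC.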
Lee, Jung Myung; Hong, Geu-Ru; Pak, Hui-Nam; Shim, Chi Young; Houle, Helene; Vannan, Mani A; Kim, Minji; Chung, Namsik
2015-08-01
Recently, left atrial (LA) vortex flow analysis using contrast transesophageal echocardiography (TEE) has been shown to be feasible and has demonstrated significant differences in vortex flow morphology and pulsatility between normal subjects and patients with atrial fibrillation (AF). However, the relationship between LA vortex flow and electrophysiological properties and the clinical significance of LA vortex flow are unknown. The aims of this study were (1) to compare LA vortex flow parameters with LA voltage and (2) to assess the predictive value of LA vortex flow parameters for the recurrence of AF after radiofrequency catheter ablation (RFCA). Thirty-nine patients with symptomatic non-valvular AF underwent contrast TEE before undergoing RFCA for AF. Quantitative LA vortex flow parameters were analyzed by Omega flow (Siemens Medical Solutions, Mountain View, CA, USA). The morphology and pulsatility of LA vortex flow were compared with electrophysiologic parameters that were measured invasively. Hemodynamic, electrophysiological, and vortex flow parameters were compared between patients with and without early recurrence of AF after RFCA. Morphologic parameters, including LA vortex depth, length, width, and sphericity index, were not associated with LA voltage or hemodynamic parameters. The relative strength (RS), which represents the pulsatility power of the LA, was positively correlated with LA voltage (R = 0.53, p = 0.01) and LA appendage flow velocity (R = 0.73, p < 0.001) and negatively correlated with LA volume index (R = -0.56, p < 0.001). Patients with recurrent AF after RFCA showed significantly lower RS (1.7 ± 0.2 vs 1.9 ± 0.4, p = 0.048) and LA voltage (0.9 ± 0.7 vs 1.7 ± 0.8, p = 0.004) than patients without AF recurrence. In the relatively small LA dimension group (LA volume index ≤ 33 ml/m²), RS was significantly lower (2.1 ± 0.3 vs 1.7 ± 0.1, p = 0.029) in patients with recurrent AF.
Quantitative LA vortex flow analysis, especially RS, correlated well with LA voltage. Decreased pulsatility strength in the LA was associated with recurrent AF. LA vortex may have incremental value in predicting the recurrence of AF.
Mountain, James E.; Santer, Peter; O’Neill, David P.; Smith, Nicholas M. J.; Ciaffoni, Luca; Couper, John H.; Ritchie, Grant A. D.; Hancock, Gus; Whiteley, Jonathan P.
2018-01-01
Inhomogeneity in the lung impairs gas exchange and can be an early marker of lung disease. We hypothesized that highly precise measurements of gas exchange contain sufficient information to quantify many aspects of the inhomogeneity noninvasively. Our aim was to explore whether one parameterization of lung inhomogeneity could both fit such data and provide reliable parameter estimates. A mathematical model of gas exchange in an inhomogeneous lung was developed, containing inhomogeneity parameters for compliance, vascular conductance, and dead space, all relative to lung volume. Inputs were respiratory flow, cardiac output, and the inspiratory and pulmonary arterial gas compositions. Outputs were expiratory and pulmonary venous gas compositions. All values were specified every 10 ms. Some parameters were set to physiologically plausible values. To estimate the remaining unknown parameters and inputs, the model was embedded within a nonlinear estimation routine to minimize the deviations between model and data for CO2, O2, and N2 flows during expiration. Three groups, each of six individuals, were studied: young (20–30 yr); old (70–80 yr); and patients with mild to moderate chronic obstructive pulmonary disease (COPD). Each participant undertook a 15-min measurement protocol six times. For all parameters reflecting inhomogeneity, highly significant differences were found between the three participant groups (P < 0.001, ANOVA). Intraclass correlation coefficients were 0.96, 0.99, and 0.94 for the parameters reflecting inhomogeneity in dead space, compliance, and vascular conductance, respectively. We conclude that, for the particular participants selected, highly repeatable estimates for parameters reflecting inhomogeneity could be obtained from noninvasive measurements of respiratory gas exchange. NEW & NOTEWORTHY This study describes a new method, based on highly precise measures of gas exchange, that quantifies three distributions that are intrinsic to the lung.
These distributions represent three fundamentally different types of inhomogeneity that together give rise to ventilation-perfusion mismatch and result in impaired gas exchange. The measurement technique has potentially broad clinical applicability because it is simple for both patient and operator, it does not involve ionizing radiation, and it is completely noninvasive. PMID:29074714
Investigation of flow and transport processes at the MADE site using ensemble Kalman filter
Liu, Gaisheng; Chen, Y.; Zhang, Dongxiao
2008-01-01
In this work the ensemble Kalman filter (EnKF) is applied to investigate the flow and transport processes at the macro-dispersion experiment (MADE) site in Columbus, MS. The EnKF is a sequential data assimilation approach that adjusts the unknown model parameter values based on the observed data with time. The classic advection-dispersion (AD) and the dual-domain mass transfer (DDMT) models are employed to analyze the tritium plume during the second MADE tracer experiment. The hydraulic conductivity (K), longitudinal dispersivity in the AD model, and mass transfer rate coefficient and mobile porosity ratio in the DDMT model, are estimated in this investigation. Because of its sequential feature, the EnKF allows for the temporal scaling of transport parameters during the tritium concentration analysis. Inverse simulation results indicate that for the AD model to reproduce the extensive spatial spreading of the tritium observed in the field, the K in the downgradient area needs to be increased significantly. The estimated K in the AD model becomes an order of magnitude higher than the in situ flowmeter measurements over a large portion of the media. On the other hand, the DDMT model gives an estimation of K that is much more comparable with the flowmeter values. In addition, the simulated concentrations by the DDMT model show a better agreement with the observed values. The root mean square (RMS) between the observed and simulated tritium plumes is 0.77 for the AD model and 0.45 for the DDMT model at 328 days. Unlike the AD model, which gives inconsistent K estimates at different times, the DDMT model is able to invert the K values that consistently reproduce the observed tritium concentrations through all times. © 2008 Elsevier Ltd. All rights reserved.
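A generic EnKF analysis step of the kind used for such sequential parameter adjustment can be sketched as follows. This is a minimal perturbed-observation form with a linear observation operator; in applications like the one above, the state vector is augmented with the uncertain parameters (K, dispersivity, etc.) so that the same update adjusts them:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_var, rng):
    """One EnKF analysis step with perturbed observations.

    ensemble: (n_state, n_ens) array of augmented state/parameter vectors
    obs:      (n_obs,) observed values at this time
    obs_op:   (n_obs, n_state) linear observation operator H
    obs_var:  scalar observation-error variance
    """
    n_obs, n_ens = len(obs), ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    pred = obs_op @ ensemble                       # predicted observations
    pred_anom = obs_op @ anomalies
    cov_xy = anomalies @ pred_anom.T / (n_ens - 1)  # state-obs sample covariance
    cov_yy = pred_anom @ pred_anom.T / (n_ens - 1) + obs_var * np.eye(n_obs)
    gain = cov_xy @ np.linalg.inv(cov_yy)           # Kalman gain from the ensemble
    perturbed = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (n_obs, n_ens))
    return ensemble + gain @ (perturbed - pred)
```

Between analysis steps, each ensemble member is propagated through the (generally nonlinear) forward model, which is what allows the filter to re-estimate parameters as each new concentration snapshot arrives.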
Zenker, Sven
2010-08-01
Combining mechanistic mathematical models of physiology with quantitative observations using probabilistic inference may offer advantages over established approaches to computerized decision support in acute care medicine. Particle filters (PF) can perform such inference successively as data becomes available. The potential of PF for real-time state estimation (SE) for a model of cardiovascular physiology is explored using parallel computers and the ability to achieve joint state and parameter estimation (JSPE) given minimal prior knowledge tested. A parallelized sequential importance sampling/resampling algorithm was implemented and its scalability for the pure SE problem for a non-linear five-dimensional ODE model of the cardiovascular system evaluated on a Cray XT3 using up to 1,024 cores. JSPE was implemented using a state augmentation approach with artificial stochastic evolution of the parameters. Its performance when simultaneously estimating the 5 states and 18 unknown parameters when given observations only of arterial pressure, central venous pressure, heart rate, and, optionally, cardiac output, was evaluated in a simulated bleeding/resuscitation scenario. SE was successful and scaled up to 1,024 cores with appropriate algorithm parametrization, with real-time equivalent performance for up to 10 million particles. JSPE in the described underdetermined scenario achieved excellent reproduction of observables and qualitative tracking of end-diastolic ventricular volumes and sympathetic nervous activity. However, only a subset of the posterior distributions of parameters concentrated around the true values for parts of the estimated trajectories. The performance of the parallelized PF makes its application to complex mathematical models of physiology for the purpose of clinical data interpretation, prediction, and therapy optimization appear promising.
JSPE in the described extremely underdetermined scenario nevertheless extracted information of potential clinical relevance from the data in this simulation setting. However, fully satisfactory resolution of this problem when minimal prior knowledge about parameter values is available will require further methodological improvements, which are discussed.
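The resampling stage of a sequential importance sampling/resampling filter like the one described above is commonly implemented with the systematic scheme, sketched here as one plausible choice (the abstract does not specify which resampler was used):

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: return particle indices, drawn in O(N).

    One uniform draw positions N evenly spaced pointers over the
    cumulative weights, giving low-variance resampling.
    """
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0  # guard against floating-point rounding
    return np.searchsorted(cumsum, positions)
```

After resampling, all particles carry equal weight; for joint state and parameter estimation with artificial parameter dynamics, a small jitter is typically added to the parameter components to preserve diversity.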
Characterizability of metabolic pathway systems from time series data.
Voit, Eberhard O
2013-12-01
Over the past decade, the biomathematical community has devoted substantial effort to the complicated challenge of estimating parameter values for biological systems models. An even more difficult issue is the characterization of functional forms for the processes that govern these systems. Most parameter estimation approaches tacitly assume that these forms are known or can be assumed with some validity. However, this assumption is not always true. The recently proposed method of Dynamic Flux Estimation (DFE) addresses this problem in a genuinely novel fashion for metabolic pathway systems. Specifically, DFE allows the characterization of fluxes within such systems through an analysis of metabolic time series data. Its main drawback is the fact that DFE can only directly be applied if the pathway system contains as many metabolites as unknown fluxes. This situation is unfortunately rare. To overcome this roadblock, earlier work in this field had proposed strategies for augmenting the set of unknown fluxes with independent kinetic information, which however is not always available. Employing Moore-Penrose pseudo-inverse methods of linear algebra, the present article discusses an approach for characterizing fluxes from metabolic time series data that is applicable even if the pathway system is underdetermined and contains more fluxes than metabolites. Intriguingly, this approach is independent of a specific modeling framework and unaffected by noise in the experimental time series data. The results reveal whether any fluxes may be characterized and, if so, which subset is characterizable. They also help with the identification of fluxes that, if they could be determined independently, would allow the application of DFE. Copyright © 2013 Elsevier Inc. All rights reserved.
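The core linear-algebra step can be illustrated on a hypothetical two-metabolite, three-flux pathway. At each time point the stoichiometric balance N v = dX/dt is underdetermined when fluxes outnumber metabolites; the Moore-Penrose pseudo-inverse supplies a minimum-norm solution, and a flux is uniquely characterizable only if it has no component along the null space of N:

```python
import numpy as np

# Hypothetical linear pathway A -> B with influx and efflux: 3 fluxes, 2 metabolites
N = np.array([[1.0, -1.0,  0.0],    # metabolite A: v1 in, v2 out
              [0.0,  1.0, -1.0]])   # metabolite B: v2 in, v3 out
dXdt = np.array([0.2, -0.1])        # slopes estimated from the time series

# One of infinitely many flux solutions: the minimum-norm one
v_min_norm = np.linalg.pinv(N) @ dXdt

# A flux is characterizable iff every null-space basis vector of N
# has a zero entry in that flux's position
rank = np.linalg.matrix_rank(N)
null_basis = np.linalg.svd(N)[2][rank:]          # rows span null(N)
characterizable = np.all(np.abs(null_basis) < 1e-10, axis=0)
```

Here every flux shares the single null direction proportional to (1, 1, 1), so none is uniquely determined without independent kinetic information, which is precisely the situation that the augmentation strategies mentioned above address.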
Parameter estimation for lithium ion batteries
NASA Astrophysics Data System (ADS)
Santhanagopalan, Shriram
With an increase in the demand for lithium based batteries at the rate of about 7% per year, the amount of effort put into improving the performance of these batteries from both experimental and theoretical perspectives is increasing. There exist a number of mathematical models ranging from simple empirical models to complicated physics-based models to describe the processes leading to failure of these cells. The literature is also rife with experimental studies that characterize the various properties of the system in an attempt to improve the performance of lithium ion cells. However, very little has been done to quantify the experimental observations and relate these results to the existing mathematical models. In fact, the best of the physics based models in the literature show as much as 20% discrepancy when compared to experimental data. The reasons for such a big difference include, but are not limited to, numerical complexities involved in extracting parameters from experimental data and inconsistencies in interpreting directly measured values for the parameters. In this work, an attempt has been made to implement simplified models to extract parameter values that accurately characterize the performance of lithium ion cells. The validity of these models under a variety of experimental conditions is verified using a model discrimination procedure. Transport and kinetic properties are estimated using a non-linear estimation procedure. The initial state of charge inside each electrode is also maintained as an unknown parameter, since this value plays a significant role in accurately matching experimental charge/discharge curves with model predictions and is not readily known from experimental data. The second part of the dissertation focuses on parameters that change rapidly with time. 
For example, in the case of lithium ion batteries used in Hybrid Electric Vehicle (HEV) applications, the prediction of the State of Charge (SOC) of the cell under a variety of road conditions is important. An algorithm that predicts the SOC over time intervals as small as 5 ms is in critical demand. In such cases, the conventional non-linear estimation procedure is not time-effective. Methodologies exist in the literature, such as those based on fuzzy logic; however, these techniques require substantial computational storage space. Consequently, it is not possible to implement them on a micro-chip for integration into a real-time device. The Extended Kalman Filter (EKF) based approach presented in this work is a first step towards an efficient method for predicting the State of Charge of a lithium ion cell online, based on an electrochemical model. The final part of the dissertation focuses on incorporating uncertainty in parameter values into electrochemical models using polynomial chaos theory (PCT).
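As an illustration of the EKF idea, here is a minimal sketch on a deliberately simplified battery model: a linear open-circuit-voltage curve and hypothetical constants, not the dissertation's electrochemical model.

```python
import numpy as np

# Toy SOC estimator (illustrative constants and a linear OCV curve; not the
# dissertation's electrochemical model): state soc, measured terminal voltage.
Q_cap, R_int, dt = 3600.0, 0.05, 1.0    # capacity [A s], resistance [ohm], step [s]
a, b = 0.7, 3.3                         # hypothetical linear OCV: V_oc = a*soc + b
I = 1.0                                 # constant discharge current [A]

rng = np.random.default_rng(0)
soc_true, soc_est, P = 0.9, 0.5, 1.0    # deliberately poor initial estimate
Qn, Rn = 1e-7, 1e-4                     # process / measurement noise variances

for _ in range(200):
    soc_true -= I * dt / Q_cap
    v_meas = a * soc_true + b - R_int * I + rng.normal(0.0, 0.01)
    # EKF predict
    soc_est -= I * dt / Q_cap
    P += Qn
    # EKF update (Jacobian of the measurement model is H = a here)
    K = P * a / (a * P * a + Rn)
    soc_est += K * (v_meas - (a * soc_est + b - R_int * I))
    P *= 1.0 - K * a

print(abs(soc_est - soc_true) < 0.05)   # filter recovers the true SOC
```

The per-step cost is a handful of scalar operations, which is what makes EKF-style estimators attractive for millisecond-scale, on-chip SOC prediction.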
Parameter Estimation for a Pulsating Turbulent Buoyant Jet Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Christopher, Jason; Wimer, Nicholas; Lapointe, Caelan; Hayden, Torrey; Grooms, Ian; Rieker, Greg; Hamlington, Peter
2017-11-01
Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other "truth" data to be used for the prediction of unknown parameters, such as flow properties and boundary conditions, in numerical simulations of real-world engineering systems. Here we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a direct numerical simulation (DNS) with known boundary conditions and problem parameters, while the ABC procedure utilizes lower fidelity large eddy simulations. Using spatially-sparse statistics from the 2D buoyant jet DNS, we show that the ABC method provides accurate predictions of true jet inflow parameters. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for predicting flow information, such as boundary conditions, that can be difficult to determine experimentally.
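A minimal rejection-ABC sketch conveys the idea: sample candidate parameters from a prior, simulate, and keep candidates whose summary statistic lands near the truth data. Here a toy Gaussian model stands in for both the DNS "truth" and the lower-fidelity simulator (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Truth" data with a known parameter stand in for the DNS; the same Gaussian
# sampler stands in for the lower-fidelity model.
theta_true = 2.0
s_obs = rng.normal(theta_true, 1.0, size=200).mean()   # sparse summary statistic

# Rejection ABC: keep prior draws whose simulated summary lands near s_obs.
accepted = []
for _ in range(20000):
    theta = rng.uniform(-5.0, 5.0)                     # prior
    if abs(rng.normal(theta, 1.0, size=200).mean() - s_obs) < 0.05:
        accepted.append(theta)

post_mean = float(np.mean(accepted))
print(abs(post_mean - theta_true) < 0.3)               # posterior near truth
```

Tightening the acceptance tolerance trades acceptance rate for posterior accuracy, which is the central tuning decision in rejection ABC.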
Distributed weighted least-squares estimation with fast convergence for large-scale systems.
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.
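The global optimum the distributed algorithm converges to is the standard weighted least-squares estimate. The sketch below computes it directly and then reproduces it with a scaled gradient iteration, whose step size plays the role of a scaling parameter governing the convergence rate (toy data; the paper's contribution is distributing this computation over the network):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stacked measurements from all sub-systems: y = A x + noise, where W holds
# the inverse noise variances (hypothetical sizes; in the paper x is split
# into per-sub-system blocks).
x_true = np.array([1.0, -2.0, 0.5])
A = rng.normal(size=(30, 3))
sigmas = rng.uniform(0.1, 0.5, size=30)
y = A @ x_true + rng.normal(0.0, sigmas)
W = np.diag(1.0 / sigmas**2)

# Globally optimal weighted least-squares estimate (what each sub-system
# should recover for its own block of x).
M = A.T @ W @ A
b = A.T @ W @ y
x_hat = np.linalg.solve(M, b)

# A scaled gradient iteration converging to the same optimum; alpha acts as
# the scaling parameter that controls the convergence rate.
alpha = 1.0 / np.linalg.eigvalsh(M).max()
x_iter = np.zeros(3)
for _ in range(20000):
    x_iter = x_iter - alpha * (M @ x_iter - b)

print(np.allclose(x_iter, x_hat, atol=1e-6))
```

Preconditioning, as proposed in the paper, reshapes M so that the effective condition number (and hence the number of iterations) drops.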
Modern control concepts in hydrology
NASA Technical Reports Server (NTRS)
Duong, N.; Johnson, G. R.; Winn, C. B.
1974-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and are consistent with results from two previous studies: the first used numerical integration of the model equation along with a trial-and-error procedure, and the second used a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are embedded in noise.
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
Lanspa, Michael J.; Grissom, Colin K.; Hirshberg, Eliotte L.; Jones, Jason P.; Brown, Samuel M.
2013-01-01
Background Volume expansion is a mainstay of therapy in septic shock, although its effect is difficult to predict using conventional measurements. Dynamic parameters, which vary with respiratory changes, appear to predict hemodynamic response to fluid challenge in mechanically ventilated, paralyzed patients. Whether they predict response in patients who are free from mechanical ventilation is unknown. We hypothesized that dynamic parameters would be predictive in patients not receiving mechanical ventilation. Methods This is a prospective, observational, pilot study. Patients with early septic shock who were not receiving mechanical ventilation received 10 ml/kg volume expansion (VE) at their treating physician's discretion after initial resuscitation in the emergency department. We used transthoracic echocardiography to measure the vena cava collapsibility index (VCCI) and aortic velocity variation (AoVV) prior to VE. We used a pulse contour analysis device to measure stroke volume variation (SVV). Cardiac index was measured immediately before and after VE using transthoracic echocardiography. Hemodynamic response was defined as an increase in cardiac index ≥ 15%. Results Fourteen patients received VE, five of whom demonstrated a hemodynamic response. VCCI and SVV were predictive (area under curve = 0.83 and 0.92, respectively). Optimal thresholds were calculated: VCCI ≥ 15% (positive predictive value, PPV, 62%; negative predictive value, NPV, 100%; p = 0.03); SVV ≥ 17% (PPV 100%, NPV 82%, p = 0.03). AoVV was not predictive. Conclusions VCCI and SVV predict hemodynamic response to fluid challenge in patients with septic shock who are not mechanically ventilated. Optimal thresholds differ from those described in mechanically ventilated patients. PMID:23324885
Parameter Estimation and Model Selection in Computational Biology
Lillacci, Gabriele; Khammash, Mustafa
2010-01-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds in three steps. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
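EKF-based parameter estimation is commonly implemented by augmenting the state vector with the unknown parameters, which then evolve as a random walk. A minimal sketch on a toy first-order decay model (hypothetical system; not the paper's heat shock or gene regulation models):

```python
import numpy as np

# State augmentation sketch: estimate an unknown rate constant k of a toy
# decay model x' = -k*x with an extended Kalman filter.
dt, k_true = 0.1, 0.5
rng = np.random.default_rng(3)

z = np.array([2.0, 0.1])              # augmented estimate [x, k]; k badly off
P = np.eye(2)
Qn = np.diag([1e-8, 1e-8])            # small process noise keeps P positive
Rn = 0.01**2                          # measurement noise variance

x_true = 2.0
for _ in range(300):
    x_true += dt * (-k_true * x_true)
    y = x_true + rng.normal(0.0, 0.01)
    # predict: f(x, k) = x + dt*(-k*x); the parameter follows a random walk
    x, k = z
    F = np.array([[1.0 - dt * k, -dt * x],
                  [0.0, 1.0]])
    z = np.array([x + dt * (-k * x), k])
    P = F @ P @ F.T + Qn
    # update with H = [1, 0]: only the state is measured
    H = np.array([1.0, 0.0])
    S = H @ P @ H + Rn
    K = P @ H / S
    z = z + K * (y - z[0])
    P = P - np.outer(K, H @ P)

print(abs(z[1] - k_true) < 0.1)        # the unknown rate constant is recovered
```

The filter's posterior covariance P is also the starting point for the kind of a posteriori identifiability check described above: a parameter whose variance never shrinks is effectively unidentifiable from the data.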
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
An evaluation of the predictive capabilities of CTRW and MRMT
NASA Astrophysics Data System (ADS)
Fiori, Aldo; Zarlenga, Antonio; Gotovac, Hrvoje; Jankovic, Igor; Cvetkovic, Vladimir; Dagan, Gedeon
2016-04-01
The prediction capability of two approximate models of non-Fickian transport in highly heterogeneous aquifers is checked by comparison with accurate numerical simulations, for mean uniform flow of velocity U. The two models considered are the MRMT (Multi Rate Mass Transfer) and CTRW (Continuous Time Random Walk) models. Both circumvent the need to solve the flow and transport equations by using proxy models, which provide the BTC μ(x,t) depending on a vector a of 5 unknown parameters. Although underlain by different conceptualisations, the two models have a similar mathematical structure. The proponents of the models suggest using field transport experiments at a small scale to calibrate a, toward predicting transport at larger scales. The strategy was tested with the aid of accurate numerical simulations in two and three dimensions from the literature. First, the 5 parameter values were calibrated by using the simulated μ at a control plane close to the injection one, and subsequently these same parameters were used for predicting μ at 10 further control planes. It is found that the two methods perform equally well, though the parameter identification is nonunique, with a large set of parameters providing similar fits. Also, errors in the determination of the mean Eulerian velocity may lead to significant shifts of the predicted BTC. It is found that the simulated BTCs satisfy Markovianity: they can be found as n-fold convolutions of a "kernel", in line with the models' main assumption.
Optimal lunar soft landing trajectories using taboo evolutionary programming
NASA Astrophysics Data System (ADS)
Mutyalarao, M.; Raj, M. Xavier James
A safe lunar landing is a key factor in undertaking effective lunar exploration. A lunar lander mission consists of four phases: the launch phase, the earth-moon transfer phase, the circumlunar phase and the landing phase. The landing phase can be either a hard landing or a soft landing. Hard landing means the vehicle lands under the influence of gravity without any deceleration measures, whereas soft landing reduces the vertical velocity of the vehicle before touchdown. Therefore, for the safety of the astronauts as well as the vehicle, a lunar soft landing with an acceptable velocity is essential, and it is important to design the optimal lunar soft landing trajectory with minimum fuel consumption. Optimization of lunar soft landing is a complex optimal control problem. In this paper, an analysis of lunar soft landing from a parking orbit around the Moon has been carried out. A two-dimensional trajectory optimization problem is attempted. The problem is complex due to the presence of system constraints. To solve for the time history of the control parameters, the problem is converted into a two-point boundary value problem by using Pontryagin's maximum principle. Taboo Evolutionary Programming (TEP) is a stochastic technique developed in recent years and successfully implemented in several fields of research. It combines the features of taboo search and single-point mutation evolutionary programming. Identifying the best unknown parameters of the problem under consideration is the central idea of many space trajectory optimization problems. The TEP technique is used in the present methodology for the best estimation of the initial unknown parameters by minimizing an objective function in terms of fuel requirements. The optimal estimation subsequently results in an optimal trajectory design of a module for soft landing on the Moon from a lunar parking orbit.
Numerical simulations demonstrate that the proposed approach is highly efficient and reduces fuel consumption. Comparison with results available in the literature shows that the solution of the present algorithm is better than those of some existing algorithms. Keywords: soft landing, trajectory optimization, evolutionary programming, control parameters, Pontryagin principle.
NASA Astrophysics Data System (ADS)
Jia, M.; Panning, M. P.; Lekic, V.; Gao, C.
2017-12-01
The InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) mission will deploy a geophysical station on Mars in 2018. Using seismology to explore the interior structure of Mars is one of the main targets, and as part of the mission, we will use 3-component seismic data to constrain the crust and upper mantle structure, including P and S wave velocities and densities, underneath the station. We will apply a reversible jump Markov chain Monte Carlo algorithm in the transdimensional hierarchical Bayesian inversion framework, in which the number of parameters in the model space and the noise level of the observed data are also treated as unknowns in the inversion process. Bayesian methods produce an ensemble of models which can be analyzed to quantify uncertainties and trade-offs of the model parameters. In order to obtain better resolution, we will simultaneously invert three different types of seismic data: receiver functions, surface wave dispersion (SWD), and ZH ratios. Because the InSight mission will only deliver a single seismic station to Mars, and both the source location and the interior structure will be unknown, we will jointly invert for the ray parameter in our approach. In preparation for this work, we first verify our approach by using a set of synthetic data. We find that SWD can constrain the absolute values of velocities, while receiver functions constrain the discontinuities. By joint inversion, the velocity structure in the crust and upper mantle is well recovered. Then, we apply our approach to real data from the earth-based seismic station BFO, located at the Black Forest Observatory in Germany, as already used in a demonstration study for single-station location methods. From the comparison of the results, our hierarchical treatment shows its advantage over the conventional method, in which the noise level of the observed data is fixed a priori.
PaCO2 measurement in cerebral haemodynamics: face mask or nasal cannulae?
Minhas, J S; Robinson, T; Panerai, R
2017-06-22
PaCO2 affects cerebral blood flow (CBF) and its regulatory mechanisms, but the effects of the CO2 measurement technique on cerebrovascular parameters are unknown. In order to determine whether the two most commonly used approaches, face mask (FM) and nasal cannulae (NC), are interchangeable, we tested the hypothesis that the use of FM versus NC does not lead to significant differences in CO2-related systemic and cerebrovascular parameters. Recordings of CBF velocity (CBFV), blood pressure (BP), heart rate, and end-tidal CO2 (EtCO2) were performed in 42 subjects during normocapnia (FM or NC) and 5% CO2 inhalation (FM) or hyperventilation (NC). Dynamic cerebral autoregulation was assessed with the autoregulation index (ARI), derived by transfer function analysis from the CBFV response to a hypothetical step change in BP. Significant differences in physiological parameters were seen between FM and NC: EtCO2 (37.40 versus 35.26 mmHg, p = 0.001) and heart rate (69.6 versus 66.7 bpm, p = 0.001), respectively. No differences were observed for mean BP, CBFV or the ARI index. Use of FM or NC for measurement of EtCO2 leads to physiological changes and differences in parameter values that need to be taken into consideration when interpreting and/or comparing results in studies of cerebral haemodynamics.
Structural Identifiability of Dynamic Systems Biology Models
Villaverde, Alejandro F.
2016-01-01
A powerful way of gaining insight into biological systems is by creating a nonlinear differential equation model, which usually contains many unknown parameters. Such a model is called structurally identifiable if it is possible to determine the values of its parameters from measurements of the model outputs. Structural identifiability is a prerequisite for parameter estimation, and should be assessed before exploiting a model. However, this analysis is seldom performed due to the high computational cost involved in the necessary symbolic calculations, which quickly becomes prohibitive as the problem size increases. In this paper we show how to analyse the structural identifiability of a very general class of nonlinear models by extending methods originally developed for studying observability. We present results about models whose identifiability had not been previously determined, report unidentifiabilities that had not been found before, and show how to modify those unidentifiable models to make them identifiable. This method helps prevent problems caused by lack of identifiability analysis, which can compromise the success of tasks such as experiment design, parameter estimation, and model-based optimization. The procedure is called STRIKE-GOLDD (STRuctural Identifiability taKen as Extended-Generalized Observability with Lie Derivatives and Decomposition), and it is implemented in a MATLAB toolbox which is available as open source software. The broad applicability of this approach facilitates the analysis of the increasingly complex models used in systems biology and other areas. PMID:27792726
Hiraba, Hisao; Inoue, Motoharu; Gora, Kanako; Sato, Takako; Nishimura, Satoshi; Yamaoka, Masaru; Kumakura, Ayano; Ono, Shinya; Wakasa, Hirotugu; Nakayama, Enri; Abe, Kimiko; Ueda, Koichiro
2014-01-01
We previously found that the greatest salivation response in healthy human subjects is produced by facial vibrotactile stimulation at a frequency of 89 Hz with 1.9 μm amplitude (89 Hz-S), as reported by Hiraba et al. (2012, 2011, and 2008). We assessed relationships between blood flow to the brain, measured via functional near-infrared spectroscopy (fNIRS) in the frontal cortex, and autonomic parameters. We used heart rate (HRV: heart rate variability analysis of RR intervals), the pupil reflex, and salivation as parameters, but the interrelation between each parameter and the fNIRS measures remains unknown. We aimed to investigate this relationship in response to established paradigms using simultaneous parameter-fNIRS recordings in healthy human subjects. The fNIRS analysis compared various values before and after various stimuli (89 Hz-S, 114 Hz-S, listening to classical music, and "Ahh" vocalization). We confirmed that vibrotactile stimulation (89 Hz) of the parotid glands led to the greatest salivation, the greatest increase in heart rate variability, and the most constricted pupils. Furthermore, there were almost no detectable differences between fNIRS during 89 Hz-S and fNIRS during listening to classical music. Thus, vibrotactile stimulation at 89 Hz seems to evoke parasympathetic activity. PMID:24511550
The Effect of Multigrid Parameters in a 3D Heat Diffusion Equation
NASA Astrophysics Data System (ADS)
Oliveira, F. De; Franco, S. R.; Pinto, M. A. Villela
2018-02-01
The aim of this paper is to reduce the CPU time necessary to solve the three-dimensional heat diffusion equation with Dirichlet boundary conditions. The finite difference method (FDM) is used to discretize the differential equations with a second-order accurate central difference scheme (CDS). The algebraic equation systems are solved using the lexicographical and red-black Gauss-Seidel methods, associated with the geometric multigrid method with a correction scheme (CS) and V-cycle. Comparisons are made between two types of restriction: injection and full weighting. The prolongation process used is trilinear interpolation. This work is concerned with the study of the influence of the smoothing value (v), the number of mesh levels (L) and the number of unknowns (N) on the CPU time, as well as the analysis of algorithm complexity.
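As a one-dimensional illustration of the red-black Gauss-Seidel smoother, here is a sketch on the 1D Poisson analogue (the paper's problem is 3D heat diffusion; the colouring idea carries over, and each colour sweep vectorizes because red points depend only on black neighbours and vice versa):

```python
import numpy as np

# Red-black Gauss-Seidel on -u'' = f over (0,1) with u(0) = u(1) = 0,
# discretized by the second-order central difference scheme.
N = 65
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)      # manufactured source, exact u = sin(pi x)
u = np.zeros(N)

for _ in range(20000):
    for start in (1, 2):              # red sweep (odd), then black sweep (even)
        idx = np.arange(start, N - 1, 2)
        u[idx] = 0.5 * (u[idx - 1] + u[idx + 1] + h**2 * f[idx])

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err < 1e-3)                      # converged to within discretization error
```

The many sweeps needed here are exactly the cost that a multigrid V-cycle attacks: the smoother removes high-frequency error cheaply, while coarse grids handle the smooth components.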
A Stochastic Total Least Squares Solution of Adaptive Filtering Problem
Ahmad, Noor Atinah
2014-01-01
An efficient algorithm with linear computational complexity is derived for the total least squares solution of the adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of the adaptive TLS problem by minimizing the instantaneous value of a weighted cost function. A convergence analysis of the algorithm is given to show the global convergence of the proposed algorithm, provided that the step-size parameter is appropriately chosen. The TLMS algorithm is computationally simpler than other TLS algorithms and demonstrates better performance than the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs. PMID:24688412
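The batch TLS solution, which a recursive algorithm like TLMS approximates, comes from the smallest right singular vector of the augmented data matrix. A sketch with synthetic noisy input and output (illustrative only; not the paper's recursive algorithm):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 2-tap unknown system; equal noise on input and output, the
# setting in which total least squares is the natural estimator.
w_true = np.array([0.8, -0.4])
X_clean = rng.normal(size=(2000, 2))
X = X_clean + rng.normal(0.0, 0.5, size=X_clean.shape)   # noisy input
y = X_clean @ w_true + rng.normal(0.0, 0.5, size=2000)   # noisy output

# Batch TLS: smallest right singular vector of the augmented matrix [X | y],
# since [X | y] @ [w, -1] should be (approximately) zero.
Z = np.column_stack([X, y])
v = np.linalg.svd(Z, full_matrices=False)[2][-1]
w_tls = -v[:-1] / v[-1]

# Ordinary LS for comparison; input noise biases it toward zero.
w_ls = np.linalg.lstsq(X, y, rcond=None)[0]

print(np.linalg.norm(w_tls - w_true) < np.linalg.norm(w_ls - w_true))
```

The comparison illustrates the attenuation bias that plain LMS/LS-style estimators suffer under noisy inputs, which is the motivation for the TLS formulation.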
A study of parameter identification
NASA Technical Reports Server (NTRS)
Herget, C. J.; Patterson, R. E., III
1978-01-01
A set of definitions for deterministic parameter identifiability was proposed. Deterministic parameter identifiability properties are presented based on four system characteristics: direct parameter recoverability, properties of the system transfer function, properties of output distinguishability, and uniqueness properties of a quadratic cost functional. Stochastic parameter identifiability was defined in terms of the existence of an estimation sequence for the unknown parameters which is consistent in probability. Stochastic parameter identifiability properties are presented based on the following characteristics: convergence properties of the maximum likelihood estimate, properties of the joint probability density functions of the observations, and properties of the information matrix.
Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space
Chen, Min; Hashimoto, Koichi
2017-01-01
Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system for measuring their motion states. To do this, in this paper, we build a vision system to detect unknown fast moving objects within a given space, calculating their motion parameters represented by positions and poses. We propose a novel method to detect reliable interest points in images of moving objects, which can hardly be detected by general-purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects, according to a careful schedule that considers appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189
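Once matched 3D points between frames are available, position and pose can be recovered with the standard Kabsch/Procrustes procedure, a typical building block for this kind of pipeline (noise-free synthetic points; the paper's system adds interest-point detection and robust estimation for sparse sets):

```python
import numpy as np

rng = np.random.default_rng(6)

# Matched 3D points of a rigid object in two frames, q_i = R p_i + t.
P = rng.normal(size=(10, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true

# Kabsch: center both sets; the SVD of the cross-covariance gives the rotation.
Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
U, _, Vt = np.linalg.svd(Pc.T @ Qc)
d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
R_est = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
t_est = Q.mean(axis=0) - R_est @ P.mean(axis=0)

print(np.allclose(R_est, R_true) and np.allclose(t_est, t_true))
```

With noisy or mismatched correspondences this least-squares fit is typically wrapped in a robust loop (e.g. RANSAC-style reweighting), which is where the sparse-set adaptation discussed above matters.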
Design and analysis of adaptive Super-Twisting sliding mode control for a microgyroscope.
Feng, Zhilin; Fei, Juntao
2018-01-01
This paper proposes a novel adaptive Super-Twisting sliding mode control for a microgyroscope under unknown model uncertainties and external disturbances. In order to improve the convergence rate of reaching the sliding surface and the accuracy of regulation and trajectory tracking, a high-order Super-Twisting sliding mode control strategy is employed, which not only combines the advantages of traditional sliding mode control with those of Super-Twisting sliding mode control, but also guarantees that the designed control system can reach the sliding surface and equilibrium point in a shorter finite time from any initial state while avoiding chattering problems. To handle the unknown parameters of the microgyroscope system, an adaptive algorithm based on Lyapunov stability theory is designed to estimate the unknown parameters and angular velocity of the microgyroscope. Finally, the effectiveness of the proposed scheme is demonstrated by simulation results. A comparative study between adaptive Super-Twisting sliding mode control and conventional sliding mode control demonstrates the superiority of the proposed method.
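The Super-Twisting control law itself is compact. A sketch on a first-order perturbed integrator with hypothetical gains (illustrative only; the paper applies the law, together with adaptive parameter estimation, to a microgyroscope model):

```python
import numpy as np

# Super-Twisting sliding mode control on a first-order perturbed system:
#   s' = u + d(t)   with d unknown but bounded, |d'| bounded
#   u  = -k1*sqrt(|s|)*sign(s) + v,   v' = -k2*sign(s)
k1, k2 = 1.5, 1.1                             # hypothetical gains
dt = 1e-3
s, v = 1.0, 0.0
history = []
for i in range(20000):
    d = 0.5 * np.sin(0.5 * i * dt)            # disturbance, |d'| <= 0.25 < k2
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
    v -= k2 * np.sign(s) * dt
    s += (u + d) * dt
    history.append(s)

# After a finite reaching time, s stays in a small neighborhood of zero
# despite the persistent disturbance; u itself is continuous, which is why
# the Super-Twisting law mitigates chattering.
print(max(abs(x) for x in history[-5000:]) < 1e-2)
```

The discontinuity is pushed into the derivative of the control (the v' term), so the applied control signal u is continuous, the mechanism behind the chattering reduction mentioned above.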
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; Zhang, Jinming
2008-01-01
The method of maximum-likelihood is typically applied to item response theory (IRT) models when the ability parameter is estimated while conditioning on the true item parameters. In practice, the item parameters are unknown and need to be estimated first from a calibration sample. Lewis (1985) and Zhang and Lu (2007) proposed the expected response…
Bayesian approach to the analysis of neutron Brillouin scattering data on liquid metals
NASA Astrophysics Data System (ADS)
De Francesco, A.; Guarini, E.; Bafile, U.; Formisano, F.; Scaccia, L.
2016-08-01
When the dynamics of liquids and disordered systems at the mesoscopic level is investigated by means of inelastic scattering (e.g., neutron or x-ray), spectra are often characterized by a poor definition of the excitation lines and spectroscopic features in general, and one important issue is to establish how many of these lines need to be included in the modeling function and to estimate their parameters. Furthermore, when strongly damped excitations are present, commonly used and widespread fitting algorithms are particularly affected by the choice of initial values of the parameters. An inadequate choice may lead to an inefficient exploration of the parameter space, resulting in the algorithm getting stuck in a local minimum. In this paper, we present a Bayesian approach to the analysis of neutron Brillouin scattering data in which the number of excitation lines is treated as unknown and estimated along with the other model parameters. We propose a joint estimation procedure based on a reversible-jump Markov chain Monte Carlo algorithm, which efficiently explores the parameter space, producing a probabilistic measure to quantify the uncertainty on the number of excitation lines as well as reliable parameter estimates. The method proposed could be of great importance in extracting physical information from experimental data, especially when the detection of spectral features is complicated not only by the properties of the sample, but also by the limited instrumental resolution and count statistics. The approach is tested on a generated data set and then applied to real experimental spectra of neutron Brillouin scattering from a liquid metal, previously analyzed in a more traditional way.
Chambers, D.; Paulden, M.; Paton, F.; Heirs, M.; Duffy, S.; Hunter, J. M.; Sculpher, M.; Woolacott, N.
2010-01-01
Summary Sugammadex 16 mg kg⁻¹ can be used for the immediate reversal of neuromuscular block 3 min after administration of rocuronium and could be used in place of succinylcholine for emergency intubation. We have systematically reviewed the efficacy and cost-effectiveness and made an economic assessment of sugammadex for immediate reversal. The economic assessment investigated whether sugammadex appears cost-effective under various assumptions about the value of any reduction in recovery time with sugammadex, the likelihood of a 'can't intubate, can't ventilate' (CICV) event, the age of the patient, and the length of the procedure. Three trials were included in the efficacy review. Sugammadex administered 3 or 5 min after rocuronium produced markedly faster recovery than placebo or spontaneous recovery from succinylcholine-induced block. No published economic evaluations were found. Our economic analyses showed that sugammadex appears more cost-effective where the value of any reduction in recovery time is greater, where the reduction in mortality compared with succinylcholine is greater, and where the patient is younger, for lower probabilities of a CICV event and for long procedures which do not require profound block throughout. Because of the lack of evidence, the value of some parameters remains unknown, which makes it difficult to provide a definitive assessment of the cost-effectiveness of sugammadex in practice. The use of sugammadex in combination with high-dose rocuronium is efficacious. Further research is needed to clarify key parameters in the analysis and to allow a fuller economic assessment. PMID:20937718
NASA Astrophysics Data System (ADS)
Russo, T. A.; Devineni, N.; Lall, U.
2015-12-01
Lasting success of the Green Revolution in Punjab, India relies on continued availability of local water resources. Supplying primarily rice and wheat for the rest of India, Punjab supports crop irrigation with a canal system and with groundwater, which is vastly over-exploited. The detailed data required to physically model future impacts on water supplies and agricultural production are not readily available for this region; we therefore use Bayesian methods to estimate hydrologic properties and irrigation requirements for an under-constrained mass balance model. Using measured values of historical precipitation, total canal water delivery, crop yield, and water table elevation, we present a method using a Markov chain Monte Carlo (MCMC) algorithm to solve for a distribution of values for each unknown parameter in a conceptual mass balance model. Due to heterogeneity across the state, and the resolution of the input data, we estimate model parameters at the district scale using spatial pooling. The resulting model is used to predict the impact of precipitation change scenarios on groundwater availability under multiple cropping options. Predicted groundwater declines vary across the state, suggesting that crop selection and water management strategies should be determined at a local scale. This computational method can be applied in data-scarce regions across the world, where water resource management is required to resolve competition between food security and available resources in a changing climate.
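As a sketch of the estimation idea, the following toy example uses a random-walk Metropolis sampler (one member of the MCMC family) to recover a distribution for a single unknown parameter, here a specific yield, in a one-bucket water balance. The model form, all coefficients, and the "observed" data are invented for illustration; this is not the district-scale model of the paper.

```python
import math, random

random.seed(0)

# Toy one-bucket water balance (all coefficients and data are invented,
# not the paper's district-scale model): annual water-table change
# dh = (r*P + C - U) / Sy, with recharge fraction r, precipitation P,
# canal delivery C, crop water use U and specific yield Sy (unknown).
r, C, U = 0.15, 0.30, 0.95            # m/yr equivalents (assumed)
P = [0.60, 0.75, 0.50, 0.80, 0.65]    # 'observed' precipitation, m/yr
Sy_true, sigma = 0.12, 0.05

def forward(Sy, p):
    return (r * p + C - U) / Sy

obs = [forward(Sy_true, p) + random.gauss(0, sigma) for p in P]

def log_post(Sy):                      # flat prior on (0.01, 0.5)
    if not 0.01 < Sy < 0.5:
        return -math.inf
    return -sum((o - forward(Sy, p)) ** 2 for o, p in zip(obs, P)) / (2 * sigma**2)

# random-walk Metropolis: solve for a *distribution* of Sy, not a point
chain, cur, lp = [], 0.2, log_post(0.2)
for _ in range(20000):
    prop = cur + random.gauss(0, 0.02)
    lpp = log_post(prop)
    if math.log(random.random()) < lpp - lp:
        cur, lp = prop, lpp
    chain.append(cur)

post = chain[5000:]                    # discard burn-in
mean_Sy = sum(post) / len(post)
print(round(mean_Sy, 2))
```

The retained chain approximates the posterior, so percentiles of `post` give the kind of parameter distribution the abstract describes.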
Prepositioning emergency supplies under uncertainty: a parametric optimization method
NASA Astrophysics Data System (ADS)
Bai, Xuejie; Gao, Jinwu; Liu, Yankui
2018-07-01
Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.
NASA Astrophysics Data System (ADS)
d'Onofrio, Alberto; Caravagna, Giulio; de Franciscis, Sebastiano
2018-02-01
In this work we consider, from a statistical mechanics point of view, the effects of bounded stochastic perturbations of the protein decay rate for a bistable biomolecular network module. Namely, we consider perturbations of the protein decay/binding rate constant (DBRC) in a circuit modeling the positive feedback of a transcription factor (TF) on its own synthesis. The DBRC models both the spontaneous degradation of the TF and its linking to other unknown biomolecular factors or drugs. We show that bounded perturbations of the DBRC preserve the positivity of the parameter value (and also its limited variation) and induce effects of interest. First, the noise amplitude induces a first-order phase transition. This is of interest since the system under study has neither spatial components nor is it composed of multiple interacting networks. In particular, we observe that the system passes from two stochastic attractors to a unique one, and vice versa. This behavior differs from noise-induced transitions (also termed phenomenological bifurcations), where a unique stochastic attractor changes its shape depending on the values of a parameter. Moreover, we observe irreversible jumps as a consequence of the above-mentioned phase transition. We show that the illustrated mechanism holds for general models with the same deterministic hysteresis bifurcation structure. Finally, we illustrate the possible implications of our findings for the intracellular pharmacodynamics of drugs delivered in continuous infusion.
Force Limited Vibration Testing: Computation C2 for Real Load and Probabilistic Source
NASA Astrophysics Data System (ADS)
Wijker, J. J.; de Boer, A.; Ellenbroek, M. H. M.
2014-06-01
To prevent over-testing of the test item during random vibration testing, Scharton proposed and discussed force limited random vibration testing (FLVT) in a number of publications, in which the factor C2 is, besides the random vibration specification, the total mass and the turnover frequency of the load (test item), a very important parameter. A number of computational methods to estimate C2 are described in the literature, i.e. the simple and the complex two degrees of freedom system, STDFS and CTDFS, respectively. Both the STDFS and the CTDFS describe in a very reduced (simplified) manner the load and the source (the adjacent structure to the test item transferring the excitation forces, i.e. a spacecraft supporting an instrument). The motivation of this work is to establish a method for the computation of a realistic value of C2 to perform a representative random vibration test based on force limitation, when the adjacent structure (source) description is more or less unknown. Marchand formulated a conservative estimation of C2 based on the maximum modal effective mass and damping of the test item (load), when no description of the supporting structure (source) is available [13]. Marchand discussed the formal description of obtaining C2, using the maximum PSD of the acceleration and the maximum PSD of the force, both at the interface between load and source, in combination with the apparent mass and total mass of the load. This method is very convenient for computing the factor C2. However, finite element models are needed to compute the PSD spectra of both the acceleration and the force at the interface between load and source. Stevens presented the coupled systems modal approach (CSMA), where simplified asparagus patch models (parallel-oscillator representations) of load and source are connected, consisting of modal effective masses and the spring stiffnesses associated with the natural frequencies.
When the random acceleration vibration specification is given, the CSMA method is suitable for computing the value of the parameter C2. When no mathematical model of the source can be made available, estimates of the value of C2 can be found in the literature. In this paper a probabilistic mathematical representation of the unknown source is proposed, such that the asparagus patch model of the source can be approximated. The computation of the value of C2 can be done in conjunction with the CSMA method, knowing the apparent mass of the load and the random acceleration specification at the interface between load and source, respectively. Strength and stiffness design rules for spacecraft, instrumentation, units, etc. will be practiced, as mentioned in ECSS Standards and Handbooks, Launch Vehicle User's Manuals, papers, books, etc. A probabilistic description of the design parameters is foreseen. As an example, a simple experiment has been worked out.
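For orientation, the role the factor C2 plays in a force limit can be sketched as follows: below the turnover frequency the force PSD is limited to C2 times the total mass squared times the acceleration PSD, with a roll-off above the turnover frequency. The roll-off exponent and all numerical values here are illustrative assumptions, not results of the probabilistic method of the paper.

```python
# Semi-empirical force-limit sketch (after Scharton's FLVT idea): below
# the turnover frequency f0 the force PSD is limited to C2 * M0^2 times
# the acceleration PSD, rolling off as (f0/f)^2 above f0. C2, M0, f0
# and the flat acceleration spectrum are illustrative assumptions.
C2 = 4.0            # the C^2 factor discussed above (assumed)
M0 = 25.0           # total mass of the load, kg (assumed)
f0 = 120.0          # turnover frequency, Hz (assumed)

def accel_spec(f):
    """Random-vibration acceleration specification, g^2/Hz (assumed flat)."""
    return 0.04

def force_limit(f):
    s = C2 * M0**2 * accel_spec(f)
    return s if f <= f0 else s * (f0 / f) ** 2

for f in (50.0, 100.0, 240.0, 480.0):
    print(f, force_limit(f))
```

A smaller C2 lowers the force limit and hence reduces over-testing, which is why a realistic estimate of C2 matters.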
Extensions of Rasch's Multiplicative Poisson Model.
ERIC Educational Resources Information Center
Jansen, Margo G. H.; van Duijn, Marijtje A. J.
1992-01-01
A model developed by G. Rasch that assumes scores on some attainment tests can be realizations of a Poisson process is explained and expanded by assuming a prior distribution, with fixed but unknown parameters, for the subject parameters. How additional between-subject and within-subject factors can be incorporated is discussed. (SLD)
KIMURA, Yuki; AOKI, Takahiro; CHIBA, Akiko; NAMBO, Yasuo
2017-01-01
Dystocia is often lethal for neonatal foals; however, its clinicopathological features remain largely unknown. We investigated the effect of dystocia on the foal blood profile. Venous blood samples were collected from 35 foals (5 Percheron and 30 crossbreds between Percheron, Belgian, and Breton heavy draft horses) at 0 hr, 1 hr, 12 hr and 1 day after birth. Dystocia was defined as prolonged labor >30 min with strong fetal traction with or without fetal displacement. The dystocia group (n=13) showed lower mean values for pH (P<0.01), bicarbonate (P<0.01), total carbon dioxide (P<0.05), and base excess (P<0.01) and higher mean values for anion gap (P<0.05) and lactate (P<0.01) immediately after birth than the normal group (n=22). Remarkably high pCO2 values (>90 mmHg) were observed in three foals in the dystocia group but in none of the foals in the normal birth group immediately after birth. These results suggest that dystocia results in lactic acidosis and may be related to respiratory distress. PMID:28400704
Perm-Fit: a new program to estimate permeability at high P-T conditions
NASA Astrophysics Data System (ADS)
Moulas, Evangelos; Madonna, Claudio
2016-04-01
Several geological processes are controlled by porous fluid flow. The circulation of porous fluids influences many physical phenomena and in turn depends on the rock permeability. The permeability of rocks is a physical property that needs to be measured, since it depends on many factors such as secondary porosity (fractures, etc.). We present a numerical approach to estimate permeability using the transient step method (Brace et al., 1968). When a non-reacting, compressible fluid is considered in a relatively incompressible solid matrix, the only unknown parameter in the equations of porous flow is the permeability. Porosity is assumed to be known, and the physical properties of the fluid (compressibility, density, viscosity) are taken from the NIST database. Forward numerical calculations for different values of permeability are performed and the results are compared to experimental measurements. The extracted permeability value is the one that minimizes the misfit between experimental and numerical results. The uncertainty on the value of permeability is estimated using a Monte Carlo method. REFERENCES Brace, W.F., Walsh, J.B., & Frangos, W.T. 1968: Permeability of Granite under High Pressure, Journal of Geophysical Research, 73, 6, 2225-2236.
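The fit-and-perturb loop described above can be sketched in a few lines, assuming (as a deliberate simplification) that the transient-step pressure decay reduces to a single exponential with a known lumped factor G; G, the true permeability and the noise level are invented for illustration.

```python
import math, random

random.seed(1)

# Transient step (pressure pulse decay) sketch: assume the differential
# pressure decays as dp(t) = dp0 * exp(-G * k * t), where G lumps the
# fluid compressibility, viscosity and sample geometry (assumed known).
G = 5.0e17                                  # lumped factor, 1/(m^2 s)
k_true, dp0 = 2.0e-18, 1.0                  # permeability in m^2
times = [0.2 * i for i in range(1, 21)]     # s

def forward(k):
    return [dp0 * math.exp(-G * k * t) for t in times]

data = [d + random.gauss(0, 0.01) for d in forward(k_true)]

def misfit(k, obs):
    return sum((o - m) ** 2 for o, m in zip(obs, forward(k)))

def best_k(obs):
    # forward runs over a grid of candidate permeabilities
    grid = [(1.0 + 0.01 * i) * 1e-18 for i in range(300)]
    return min(grid, key=lambda k: misfit(k, obs))

k_est = best_k(data)

# Monte Carlo uncertainty: re-estimate k on re-perturbed data
ks = [best_k([d + random.gauss(0, 0.01) for d in data]) for _ in range(100)]
m = sum(ks) / len(ks)
sd_k = (sum((x - m) ** 2 for x in ks) / len(ks)) ** 0.5
print(f"k = {k_est:.2e} m^2, +/- {sd_k:.1e}")
```

The real calculation replaces the exponential with a numerical solution of the porous-flow equations, but the misfit-minimization and Monte Carlo structure is the same.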
Adaptive Value Normalization in the Prefrontal Cortex Is Reduced by Memory Load
Burke, C. J.; Seifritz, E.; Tobler, P. N.
2017-01-01
Adaptation facilitates neural representation of a wide range of diverse inputs, including reward values. Adaptive value coding typically relies on contextual information either obtained from the environment or retrieved from and maintained in memory. However, it is unknown whether having to retrieve and maintain context information modulates the brain’s capacity for value adaptation. To address this issue, we measured hemodynamic responses of the prefrontal cortex (PFC) in two studies on risky decision-making. In each trial, healthy human subjects chose between a risky and a safe alternative; half of the participants had to remember the risky alternatives, whereas for the other half they were presented visually. The value of safe alternatives varied across trials. PFC responses adapted to contextual risk information, with steeper coding of safe alternative value in lower-risk contexts. Importantly, this adaptation depended on working memory load, such that response functions relating PFC activity to safe values were steeper with presented versus remembered risk. An independent second study replicated the findings of the first study and showed that similar slope reductions also arose when memory maintenance demands were increased with a secondary working memory task. Formal model comparison showed that a divisive normalization model fitted effects of both risk context and working memory demands on PFC activity better than alternative models of value adaptation, and revealed that reduced suppression of background activity was the critical parameter impairing normalization with increased memory maintenance demand. Our findings suggest that mnemonic processes can constrain normalization of neural value representations. PMID:28462394
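The winning model family can be illustrated with a minimal divisive-normalization sketch: the response to a safe value is divided by the contextual risky values plus a background term, and a larger background term (weaker suppression of background activity, as under memory load) flattens the response. The functional form is the standard divisive normalization used in this literature; all numbers are illustrative, not fitted parameters from the study.

```python
# Divisive normalization sketch: the response to a safe value v is
# divided by the mean of the contextual risky values plus a background
# term sigma. Larger sigma (weaker suppression of background activity)
# flattens the value response. All numbers are illustrative.
def normalized_response(v, context, sigma):
    return v / (sigma + sum(context) / len(context))

low_risk, high_risk = [2.0, 3.0], [8.0, 9.0]

# steeper coding of the safe value in the lower-risk context:
r_low = normalized_response(5.0, low_risk, sigma=1.0)
r_high = normalized_response(5.0, high_risk, sigma=1.0)

# weaker background suppression (larger sigma) reduces the slope:
r_load = normalized_response(5.0, low_risk, sigma=4.0)
print(r_low, r_high, r_load)
```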
Learning from the Unknown Student
ERIC Educational Resources Information Center
Barlow, Angela T.; Gerstenschlager, Natasha E.; Harmon, Shannon E.
2016-01-01
In this article, three instructional situations demonstrate the value of using an "unknown" student's work to allow the advancement of students' mathematical thinking as well as their engagement in the mathematical practice of critiquing the reasoning of others: (1) introducing alternative solution strategies; (2) critiquing inaccuracies…
Deriving a model for influenza epidemics from historical data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Lefantzi, Sophia
In this report we describe how we create a model for influenza epidemics from historical data collected from both civilian and military societies. We derive the model when the population of the society is unknown but the size of the epidemic is known. Our interest lies in estimating a time-dependent infection rate to within a multiplicative constant. The model form fitted is chosen for its similarity to published models for HIV and plague, enabling application of Bayesian techniques to discriminate among infectious agents during an emerging epidemic. We have developed models for the progression of influenza in human populations. The model is framed as an integral, and predicts the number of people who exhibit symptoms and seek care over a given time period. The start and end of the time period form the limits of integration. The disease progression model, in turn, contains parameterized models for the incubation period and a time-dependent infection rate. The incubation period model is obtained from the literature, and the parameters of the infection rate are fitted from historical data including both military and civilian populations. The calibrated infection rate models display a marked difference in how the 1918 Spanish Influenza pandemic differed from the influenza seasons in the US between 2001 and 2008 and the progression of H1N1 in Catalunya, Spain. The data for the 1918 pandemic were obtained from military populations, while the rest are country-wide or province-wide data from the twenty-first century. We see that the initial growth of infection in all cases was about the same; however, military populations were able to control the epidemic much faster, i.e., the decay of the infection-rate curve is much higher. It is not clear whether this was because of the much higher level of organization present in a military society or the seriousness with which the 1918 pandemic was addressed.
Each outbreak to which the influenza model was fitted yields a separate set of parameter values. We suggest 'consensus' parameter values for military and civilian populations in the form of normal distributions so that they may be further used in other applications. Representing the parameter values as distributions, instead of point values, allows us to capture the uncertainty and scatter in the parameters. Quantifying the uncertainty allows us to use these models further in inverse problems, predictions under uncertainty and various other studies involving risk.
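Using consensus parameters as distributions rather than point values can be sketched as follows: draw parameter samples from the stated normal distributions and propagate each sample through downstream calculations. The (mean, standard deviation) pairs below are invented placeholders, not the fitted values from the report.

```python
import random, statistics

random.seed(2)

# 'Consensus' values for an infection-rate parameter, represented as
# normal distributions rather than point values. The numbers are
# invented placeholders, not the report's fitted values.
consensus = {
    "military": (1.8, 0.3),   # (mean, std) of an infection-rate decay
    "civilian": (0.9, 0.2),
}

def draw(population, n=10000):
    mu, sd = consensus[population]
    return [random.gauss(mu, sd) for _ in range(n)]

# propagate the parameter uncertainty instead of using a point value
mil, civ = draw("military"), draw("civilian")
print(round(statistics.mean(mil), 1), round(statistics.mean(civ), 1))
```

Each sampled value would feed a forward epidemic run, so the scatter in the parameters becomes scatter in the predictions.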
Influence of dense plasma on the energy levels and transition properties in highly charged ions
NASA Astrophysics Data System (ADS)
Chen, Zhan-Bin; Hu, Hong-Wei; Ma, Kun; Liu, Xiao-Bin; Guo, Xue-Ling; Li, Shuang; Zhu, Bo-Hong; Huang, Lian; Wang, Kai
2018-03-01
The studies of the influence of plasma environments on the level structures and transition properties of highly charged ions are presented. For the relativistic treatment, we implemented the multiconfiguration Dirac-Fock method incorporating the ion sphere (IS) model potential, in which the plasma screening is taken into account as a modified interaction potential between the electron and the nucleus. For the nonrelativistic treatment, analytical solutions of the Schrödinger equation with two types of IS screened potential are proposed. The Ritz variational method is used with a hydrogenic wave function as a trial wave function that contains two unknown variational parameters. Bound energies are derived from an energy equation, and the variational parameters are obtained from the minimisation condition on the expectation value of the energy. Numerical results for hydrogen-like ions in dense plasmas are presented as examples. A detailed analysis of the influence of relativistic effects on the energy levels and transition properties is also reported. Our results are compared with available results in the literature, showing good quantitative agreement.
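The Ritz step can be illustrated with the simplest possible case: a 1s-type trial function for a hydrogen-like ion in atomic units, minimised over its single variational parameter. For the bare Coulomb potential the expectation value is E(a) = a^2/2 - Z*a; an ion-sphere or Debye screened potential would modify the potential term, which is omitted here, so this sketch reproduces only the unscreened limit.

```python
# Ritz variational sketch for a hydrogen-like ion (atomic units) with a
# 1s trial function psi ~ exp(-a*r). For the bare Coulomb potential the
# energy expectation value is E(a) = a^2/2 - Z*a. Plasma screening,
# which the paper includes, is deliberately omitted.
def energy(a, Z):
    return 0.5 * a * a - Z * a

def minimize_energy(Z, lo=0.1, hi=10.0, steps=9900):
    # crude grid scan standing in for the condition dE/da = 0
    grid = (lo + i * (hi - lo) / steps for i in range(steps + 1))
    a = min(grid, key=lambda x: energy(x, Z))
    return a, energy(a, Z)

a_opt, E0 = minimize_energy(Z=2)       # He+ as an example
print(a_opt, E0)                       # analytic answer: a = Z, E = -Z^2/2
```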
Box-Cox transformation for QTL mapping.
Yang, Runqing; Yi, Nengjun; Xu, Shizhong
2006-01-01
The maximum likelihood method of QTL mapping assumes that the phenotypic values of a quantitative trait follow a normal distribution. If the assumption is violated, some forms of transformation should be taken to make the assumption approximately true. The Box-Cox transformation is a general transformation method which can be applied to many different types of data. The flexibility of the Box-Cox transformation is due to a variable, called transformation factor, appearing in the Box-Cox formula. We developed a maximum likelihood method that treats the transformation factor as an unknown parameter, which is estimated from the data simultaneously along with the QTL parameters. The method makes an objective choice of data transformation and thus can be applied to QTL analysis for many different types of data. Simulation studies show that (1) Box-Cox transformation can substantially increase the power of QTL detection; (2) Box-Cox transformation can replace some specialized transformation methods that are commonly used in QTL mapping; and (3) applying the Box-Cox transformation to data already normally distributed does not harm the result.
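Treating the transformation factor as an unknown parameter amounts to maximising the Box-Cox profile log-likelihood over lambda. The sketch below does this on a grid for log-normal data (for which the true answer is lambda = 0); the paper's method estimates lambda jointly with the QTL parameters, which this simple normal-sample version does not attempt.

```python
import math, random

random.seed(3)

# Profile log-likelihood for the Box-Cox transformation factor lambda:
# transform, then score normality of the transformed data plus the
# Jacobian term (lam - 1) * sum(log y).
def boxcox(y, lam):
    return [math.log(v) if abs(lam) < 1e-12 else (v**lam - 1) / lam for v in y]

def profile_loglik(y, lam):
    z = boxcox(y, lam)
    n = len(z)
    mu = sum(z) / n
    var = sum((v - mu) ** 2 for v in z) / n
    return -0.5 * n * math.log(var) + (lam - 1) * sum(math.log(v) for v in y)

# log-normal data: the transformation back to normality is lam = 0
y = [math.exp(random.gauss(0.0, 1.0)) for _ in range(2000)]
grid = [i / 100 for i in range(-100, 101)]
lam_hat = max(grid, key=lambda l: profile_loglik(y, l))
print(lam_hat)
```

In a QTL setting the variance term would come from the residuals of the genetic model rather than from a single normal sample.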
Guidelines to electrode positioning for human and animal electrical impedance myography research
NASA Astrophysics Data System (ADS)
Sanchez, Benjamin; Pacheck, Adam; Rutkove, Seward B.
2016-09-01
The positioning of electrodes in electrical impedance myography (EIM) is critical for accurately assessing disease progression and effectiveness of treatment. In human and animal trials for neuromuscular disorders, inconsistent electrode positioning adds errors to the muscle impedance. Despite its importance, how the reproducibility of resistance and reactance, the two parameters that define EIM, is affected by changes in electrode positioning remains unknown. In this paper, we present a novel approach founded on biophysical principles to study the reproducibility of resistance and reactance under electrode misplacements. The analytical framework presented allows the user to quantify a priori the effect on the muscle resistance and reactance using only one parameter: the uncertainty in placing the electrodes. We also provide quantitative data on the precision needed to position the electrodes and the minimum muscle length needed to achieve a pre-specified EIM reproducibility. The results reported here are confirmed with finite element model simulations and measurements on five healthy subjects. Ultimately, our data can serve as normative values to enhance the reliability of EIM as a biomarker and facilitate comparability of future human and animal studies.
NASA Astrophysics Data System (ADS)
Blake, Samantha L.; Walker, S. Hunter; Muddiman, David C.; Hinks, David; Beck, Keith R.
2011-12-01
Color Index Disperse Yellow 42 (DY42), a high-volume disperse dye for polyester, was used to compare the capabilities of the LTQ-Orbitrap XL and the LTQ-FT-ICR with respect to mass measurement accuracy (MMA), spectral accuracy, and sulfur counting. The results of this research will be used in the construction of a dye database for forensic purposes; the additional spectral information will increase the confidence in the identification of unknown dyes found in fibers at crime scenes. Initial LTQ-Orbitrap XL data showed MMAs greater than 3 ppm and poor spectral accuracy. Modification of several Orbitrap installation parameters (e.g., deflector voltage) resulted in a significant improvement of the data. The LTQ-FT-ICR and LTQ-Orbitrap XL (after installation parameters were modified) exhibited MMA ≤ 3 ppm, good spectral accuracy (χ2 values for the isotopic distribution ≤ 2), and were correctly able to ascertain the number of sulfur atoms in the compound at all resolving powers investigated for AGC targets of 5.00 × 105 and 1.00 × 106.
qPIPSA: Relating enzymatic kinetic parameters and interaction fields
Gabdoulline, Razif R; Stein, Matthias; Wade, Rebecca C
2007-01-01
Background The simulation of metabolic networks in quantitative systems biology requires the assignment of enzymatic kinetic parameters. Experimentally determined values are often not available and therefore computational methods to estimate these parameters are needed. It is possible to use the three-dimensional structure of an enzyme to perform simulations of a reaction and derive kinetic parameters. However, this is computationally demanding and requires detailed knowledge of the enzyme mechanism. We have therefore sought to develop a general, simple and computationally efficient procedure to relate protein structural information to enzymatic kinetic parameters that allows consistency between the kinetic and structural information to be checked and estimation of kinetic constants for structurally and mechanistically similar enzymes. Results We describe qPIPSA: quantitative Protein Interaction Property Similarity Analysis. In this analysis, molecular interaction fields, for example, electrostatic potentials, are computed from the enzyme structures. Differences in molecular interaction fields between enzymes are then related to the ratios of their kinetic parameters. This procedure can be used to estimate unknown kinetic parameters when enzyme structural information is available and kinetic parameters have been measured for related enzymes or were obtained under different conditions. The detailed interaction of the enzyme with substrate or cofactors is not modeled and is assumed to be similar for all the proteins compared. The protein structure modeling protocol employed ensures that differences between models reflect genuine differences between the protein sequences, rather than random fluctuations in protein structure. Conclusion Provided that the experimental conditions and the protein structural models refer to the same protein state or conformation, correlations between interaction fields and kinetic parameters can be established for sets of related enzymes. 
Outliers may arise due to variation in the importance of different contributions to the kinetic parameters, such as protein stability and conformational changes. The qPIPSA approach can assist in the validation as well as estimation of kinetic parameters, and provide insights into enzyme mechanism. PMID:17919319
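The field-comparison step of PIPSA is commonly quantified with the Hodgkin similarity index between two interaction fields sampled on a common set of grid points. The sketch below computes it for made-up potential values standing in for fields computed from enzyme structures; relating such similarities (or the corresponding differences) to ratios of kinetic parameters is the qPIPSA step proper.

```python
# Hodgkin similarity index between two molecular interaction fields
# (electrostatic potentials sampled at a common set of grid points).
# The potential values are made-up numbers, not computed fields.
def hodgkin(p1, p2):
    num = 2 * sum(a * b for a, b in zip(p1, p2))
    den = sum(a * a for a in p1) + sum(b * b for b in p2)
    return num / den

pot_a = [1.0, -0.5, 0.2, 0.8, -1.1]       # enzyme A potential (assumed)
pot_b = [0.9, -0.4, 0.1, 0.9, -1.0]       # enzyme B, a similar field
pot_c = [-1.0, 0.5, -0.2, -0.8, 1.1]      # enzyme C, an inverted field

print(round(hodgkin(pot_a, pot_b), 3))    # close to +1 for similar fields
print(hodgkin(pot_a, pot_c))              # -1.0 for anti-correlated fields
```

An index near +1 for a structurally similar enzyme is the regime in which borrowing or scaling its measured kinetic parameters is most defensible.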
Rendezvous with connectivity preservation for multi-robot systems with an unknown leader
NASA Astrophysics Data System (ADS)
Dong, Yi
2018-02-01
This paper studies the leader-following rendezvous problem with connectivity preservation for multi-agent systems composed of uncertain multi-robot systems subject to external disturbances and an unknown leader, both of which are generated by a so-called exosystem with parametric uncertainty. By combining internal model design, a potential function technique and adaptive control, two distributed control strategies are proposed to maintain the connectivity of the communication network, to achieve asymptotic tracking by all the followers of the output of the unknown leader system, and to reject unknown external disturbances. It is also worth mentioning that the uncertain parameters in the multi-robot systems and the exosystem are further allowed to belong to unknown and unbounded sets when applying the second, fully distributed control law containing a dynamic gain inspired by high-gain adaptive control or the self-tuning regulator.
Design of a DNA chip for detection of unknown genetically modified organisms (GMOs).
Nesvold, Håvard; Kristoffersen, Anja Bråthen; Holst-Jensen, Arne; Berdal, Knut G
2005-05-01
Unknown genetically modified organisms (GMOs) have not undergone a risk evaluation, and hence might pose a danger to health and environment. There are, today, no methods for detecting unknown GMOs. In this paper we propose a novel method intended as a first step in an approach for detecting unknown genetically modified (GM) material in a single plant. A model is designed where biological and combinatorial reduction rules are applied to a set of DNA chip probes containing all possible sequences of uniform length n, creating probes capable of detecting unknown GMOs. The model is theoretically tested for Arabidopsis thaliana Columbia, and the probabilities for detecting inserts and receiving false positives are assessed for various parameters for this organism. From a theoretical standpoint, the model looks very promising but should be tested further in the laboratory. The model and algorithms will be available upon request to the corresponding author.
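The combinatorial core of the probe-design idea can be sketched directly: start from all 4^n sequences of length n and discard those already present in the host genome, so the surviving probes can only hybridise to foreign (e.g. transgenic) DNA. The real design adds the biological reduction rules described in the paper; the 'genome' below is a short placeholder string, not Arabidopsis.

```python
from itertools import product

# Toy probe reduction: all 4^n k-mers minus those in the host genome.
n = 4
genome = "ATGGCGTACGTTAGCATCGATCGGATATCCGGAATTCC"   # placeholder only

host_kmers = {genome[i:i + n] for i in range(len(genome) - n + 1)}
all_kmers = {"".join(p) for p in product("ACGT", repeat=n)}
probes = all_kmers - host_kmers

print(len(all_kmers), len(host_kmers), len(probes))

# a k-mer absent from the host genome remains available as a probe:
insert = "TTTT"
print(insert in probes)
```

With realistic n the raw set is far too large to synthesize, which is exactly why the paper's reduction rules are needed.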
Shi, Wuxi; Luo, Rui; Li, Baoquan
2017-01-01
In this study, an adaptive fuzzy prescribed performance control approach is developed for a class of uncertain multi-input and multi-output (MIMO) nonlinear systems with unknown control direction and unknown dead-zone inputs. The properties of symmetric matrices are exploited to design the adaptive fuzzy prescribed performance controller, and a Nussbaum-type function is incorporated in the controller to estimate the unknown control direction. This method has two prominent advantages: it does not require a priori knowledge of the control direction, and only three parameters need to be updated on-line for these MIMO systems. It is proved that all the signals in the resulting closed-loop system are bounded and that the tracking errors converge to a small residual set with the prescribed performance bounds. The effectiveness of the proposed approach is validated by simulation results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
2010-01-01
Background Patient-Reported Outcomes (PRO) are increasingly used in clinical and epidemiological research. Two main types of analytical strategies can be found for these data: classical test theory (CTT), based on the observed scores, and models coming from Item Response Theory (IRT). However, whether IRT or CTT is the more appropriate method to analyse PRO data remains unknown. The statistical properties of CTT and IRT, regarding power and corresponding effect sizes, were compared. Methods Two-group cross-sectional studies were simulated for the comparison of PRO data using IRT or CTT-based analysis. For IRT, different scenarios were investigated according to whether item or person parameters were assumed to be known, known to a certain extent (for item parameters, from good to poor precision), or unknown and therefore had to be estimated. The powers obtained with IRT or CTT were compared and the parameters having the strongest impact on them were identified. Results When person parameters were assumed to be unknown and item parameters to be either known or not, the power achieved using IRT or CTT was similar and always lower than the expected power using the well-known sample size formula for normally distributed endpoints. The number of items had a substantial impact on power for both methods. Conclusion Without any missing data, IRT and CTT seem to provide comparable power. The classical sample size formula for CTT seems to be adequate under some conditions but is not appropriate for IRT. In IRT, it seems important to take account of the number of items to obtain an accurate formula. PMID:20338031
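The 'well-known sample size formula for normally distributed endpoints' used as the benchmark above can be written down directly: the power of a two-sided, two-sample z-test with n subjects per group, true difference delta and common standard deviation sigma. This is the generic normal-endpoint calculation, not an IRT-specific formula.

```python
import math
from statistics import NormalDist

# Power of a two-sided two-sample z-test on normally distributed
# endpoints: n subjects per group, true mean difference delta,
# common standard deviation sigma.
def power(n, delta, sigma, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)
    shift = delta / (sigma * math.sqrt(2.0 / n))
    return NormalDist().cdf(shift - z)

# a 0.5-SD effect with 64 subjects per group gives roughly 80% power
print(round(power(64, 0.5, 1.0), 2))
```

Comparing simulated IRT/CTT power against this curve is what reveals the shortfall the Results describe.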
Zhao, Xuefeng; Raghavan, Madhavan L; Lu, Jia
2011-05-01
Knowledge of the elastic properties of cerebral aneurysms is crucial for understanding the biomechanical behavior of the lesion. However, characterizing tissue properties using in vivo motion data presents a tremendous challenge. Aside from the limitation of data accuracy, a pressing issue is that the in vivo motion does not expose the stress-free geometry. This is compounded by the nonlinearity, anisotropy, and heterogeneity of the tissue behavior. This article introduces a method for identifying the heterogeneous properties of aneurysm wall tissue under an unknown stress-free configuration. In the proposed approach, an accessible configuration is taken as the reference; the unknown stress-free configuration is represented locally by a metric tensor describing the prestrain from the stress-free configuration to the reference configuration. Material parameters are identified together with the metric tensor pointwise. The paradigm is tested numerically using a forward-inverse analysis loop. An image-derived sac is considered. The aneurysm tissue is modeled as an eight-ply laminate whose constitutive behavior is described by an anisotropic hyperelastic strain-energy function containing four material parameters. The parameters are assumed to vary continuously in two assigned patterns to represent two types of material heterogeneity. Nine configurations between the diastolic and systolic pressures are generated by forward quasi-static finite element analyses. These configurations are fed to the inverse analysis to delineate the material parameters and the metric tensor. The recovered and the assigned distributions are in good agreement. A forward verification is conducted by comparing the displacement solutions obtained from the recovered and the assigned material parameters at a different pressure. The nodal displacements are found to be in excellent agreement.
Onset of the convection in a supercritical fluid.
Meyer, H
2006-01-01
A model is proposed that leads to the scaled relation t_p/τ_D = F_tp(Ra - Ra_c) for the development of convection in a pure fluid in a Rayleigh-Bénard cell after the start of the heat current at t = 0. Here t_p is the time of the first maximum of the temperature drop ΔT(t) across the fluid layer, the signature of rapidly growing convection, τ_D is the diffusion relaxation time, and Ra_c is the critical Rayleigh number. Such a relation was first obtained empirically from experimental data. Because of the unknown perturbations in the cell that lead to convection development beyond the point of the fluid instability, the model determines t_p/τ_D within a multiplicative factor Ψ√(Ra_c^HBL), the only fit parameter product. Here Ra_c^HBL, of the order of 10^3, is the critical Rayleigh number of the hot boundary layer and Ψ is a fit parameter. There is then good agreement over more than four decades of Ra - Ra_c between the model and the experiments on supercritical 3He at various heat currents and temperatures. The value of the parameter Ψ, which phenomenologically represents the effectiveness of the perturbations, is discussed in connection with predictions by El Khouri and Carlès of the fluid instability onset time.
How far are extraterrestrial life and intelligence after Kepler?
NASA Astrophysics Data System (ADS)
Wandel, Amri
2017-08-01
The Kepler mission has shown that a significant fraction of all stars may have an Earth-size habitable planet. Dramatic support came from the recent detection of Proxima Centauri b. Using a Drake-equation-like formalism I derive an equation for the abundance of biotic planets as a function of the relatively modest uncertainty in the astronomical data and of the (yet unknown) probability for the evolution of biotic life, Fb. I suggest that Fb may be estimated by future spectral observations of exoplanet biomarkers. It follows that if Fb is not very small, then a biotic planet may be expected within about 10 light years from Earth. Extending this analysis to advanced life, I derive expressions for the distance to putative civilizations in terms of two additional Drake parameters - the probability for the evolution of a civilization, Fc, and its average longevity. Assuming "optimistic" values for the Drake parameters (Fb ≈ Fc ≈ 1) and a broadcasting duration of a few thousand years, the likely distance to the nearest civilizations detectable by SETI is of the order of a few thousand light years. Finally I calculate the distance and probability of detecting intelligent signals with present and future radio telescopes such as Arecibo and SKA, and how this could constrain the Drake parameters.
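The nearest-neighbor distance argument above can be illustrated numerically. The sketch below is not Wandel's formalism; it only assumes a local stellar density of roughly 0.004 stars per cubic light year and a hypothetical habitability fraction, and places the nearest biotic planet at the radius whose enclosed volume contains one such planet on average:

```python
import math

def nearest_biotic_planet_ly(n_stars_per_ly3=0.004, f_habitable=0.1, f_biotic=1.0):
    """Expected distance (light years) to the nearest biotic planet.

    Biotic planets are assumed uniformly distributed with number density
    n = n_stars * f_habitable * f_biotic; the nearest one is then expected
    at roughly the radius r satisfying (4/3)*pi*r**3*n = 1.
    """
    n = n_stars_per_ly3 * f_habitable * f_biotic
    return (3.0 / (4.0 * math.pi * n)) ** (1.0 / 3.0)
```

With f_biotic near 1 this toy estimate falls under 10 light years, consistent with the figure quoted in the abstract; lowering f_biotic pushes the distance out only as f_biotic^(-1/3).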
Jacobsen, Svein; Stauffer, Paul R
2007-02-21
The total thermal dose that can be delivered during hyperthermia treatments is frequently limited by temperature heterogeneities in the heated tissue volume. Reliable temperature information on the heated area is thus vital for the optimization of clinical dosimetry. Microwave radiometry has been proposed as an accurate, quick and painless temperature sensing technique for biological tissue. Advantages include the ability to sense volume-averaged temperatures from subsurface tissue non-invasively, rather than with a limited set of point measurements typical of implanted temperature probes. We present a procedure to estimate the maximum tissue temperature from a single radiometric brightness temperature which is based on a numerical simulation of 3D tissue temperature distributions induced by microwave heating at 915 MHz. The temperature retrieval scheme is evaluated against errors arising from unknown variations in thermal, electromagnetic and design model parameters. Whereas realistic deviations from base values of dielectric and thermal parameters have only marginal impact on performance, pronounced deviations in estimated maximum tissue temperature are observed for unanticipated variations of the temperature or thickness of the bolus compartment. The need to pay particular attention to these latter applicator construction parameters in future clinical implementation of the thermometric method is emphasized.
Tomlinson, Ryan E.; Silva, Matthew J.; Shoghi, Kooresh I.
2013-01-01
Purpose: Blood flow is an important factor in bone production and repair, but its role in osteogenesis induced by mechanical loading is unknown. Here, we present techniques for evaluating blood flow and fluoride metabolism in a pre-clinical stress fracture model of osteogenesis in rats. Procedures: Bone formation was induced by forelimb compression in adult rats. ¹⁵O water and ¹⁸F fluoride PET imaging were used to evaluate blood flow and fluoride kinetics 7 days after loading. ¹⁵O water was modeled using a one-compartment, two-parameter model, while a two-compartment, three-parameter model was used to model ¹⁸F fluoride. Input functions were created from the heart, and a stochastic search algorithm was implemented to provide initial parameter values in conjunction with a Levenberg–Marquardt optimization algorithm. Results: Loaded limbs are shown to have a 26% increase in blood flow rate, a 113% increase in fluoride flow rate, a 133% increase in fluoride flux, and a 13% increase in fluoride incorporation into bone as compared to non-loaded limbs (p < 0.05 for all results). Conclusions: The results shown here are consistent with previous studies, confirming this technique is suitable for evaluating the vascular response and mineral kinetics of osteogenic mechanical loading. PMID:21785919
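A minimal sketch of the one-compartment, two-parameter model used for the ¹⁵O water data. This is illustrative only: the study fitted an arterial input function from the heart with Levenberg–Marquardt optimization, not this toy forward-Euler integrator, and all parameter values here are hypothetical.

```python
import numpy as np

def one_compartment_tissue_curve(t, c_arterial, K1, k2):
    """Tissue activity for the one-compartment, two-parameter model
        dC_t/dt = K1 * Ca(t) - k2 * C_t(t),
    integrated with forward Euler on the sample grid t."""
    ct = np.zeros_like(t, dtype=float)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        ct[i] = ct[i - 1] + dt * (K1 * c_arterial[i - 1] - k2 * ct[i - 1])
    return ct
```

For a constant arterial input the tissue curve relaxes to the equilibrium value K1/k2 times the input level, which is a quick sanity check on any implementation.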
Landslide susceptibility estimations in the Gerecse hills (Hungary).
NASA Astrophysics Data System (ADS)
Gerzsenyi, Dávid; Gáspár, Albert
2017-04-01
Surface movement processes constantly pose a threat to property in populated and agricultural areas in the Gerecse hills (Hungary). The affected geological formations are mainly unconsolidated sediments: Pleistocene loess and alluvial terrace sediments are overwhelmingly present, but fluvio-lacustrine sediments of the latest Miocene and consolidated Eocene and Mesozoic limestones and marls can also be found in the area. Landslides and other surface movement processes have been studied for a long time in the area, but a comprehensive GIS-based geostatistical analysis has not yet been made for the whole area. This was the reason for choosing the Gerecse as the focus area of the study. However, the base data of our study are freely accessible from online servers, so the method used can be applied to other regions in Hungary. Qualitative data were acquired from the landslide-inventory map of the Hungarian Surface Movement Survey and from the Geological Map of Hungary (1 : 100 000). Morphometric parameters derived from the SRTM-1 DEM were used as quantitative variables. Using these parameters, the distributions of elevation, slope gradient, aspect and categorized geological features were computed, both for areas affected and not affected by slope movements. Likelihood values were then computed for each parameter by comparing its distribution in the two areas. By combining the likelihood values of the four parameters, relative hazard values were computed for each cell. This method is the "empirical probability estimation" originally published by Chung (2005). The map created this way shows each cell's place in a ranking of the relative hazard values, expressed as a percentage of the whole study area (787 km2). These values indicate how similar a certain area is to the areas already affected by landslides, based on the four predictor variables.
This map can also serve as a base for more complex landslide vulnerability studies involving economic factors. The landslide-inventory database used in the research provides information on the state of activity of the past surface movements; however, the activity of many sites is recorded as unknown. A complementary field survey has been carried out to categorize these areas - near the villages of Dunaszentmiklós and Neszmély - in one of the most landslide-affected parts of the Gerecse. Reference: Chung, C. (2005). Using likelihood ratio functions for modeling the conditional probability of occurrence of future landslides for risk assessment. Computers & Geosciences, 32, pp. 1052-1068.
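The likelihood-ratio step described above can be sketched as follows. This is an illustration of Chung's (2005) idea, not the authors' GIS workflow; the bin edges, arrays, and function names are hypothetical.

```python
import numpy as np

def likelihood_ratios(values_slide, values_stable, bins):
    """Per-bin likelihood ratio of one predictor (e.g. slope gradient):
    its frequency distribution within landslide-affected cells divided
    by the distribution within unaffected cells."""
    p_slide, _ = np.histogram(values_slide, bins=bins, density=True)
    p_stable, _ = np.histogram(values_stable, bins=bins, density=True)
    eps = 1e-9  # avoid division by zero in empty bins
    return (p_slide + eps) / (p_stable + eps)

def relative_hazard_rank(hazard):
    """Rank each cell's combined hazard value (product of the per-cell
    likelihood ratios of all predictors) as a percentage of the study
    area: 0 = least, 100 = most landslide-like."""
    order = hazard.argsort().argsort()
    return 100.0 * order / (hazard.size - 1)
```

Multiplying the per-cell likelihood ratios of the four predictors (elevation, slope gradient, aspect, geology) and ranking the products as percentiles yields a relative-hazard map of the kind described above.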
Estimating Mass Properties of Dinosaurs Using Laser Imaging and 3D Computer Modelling
Bates, Karl T.; Manning, Phillip L.; Hodgetts, David; Sellers, William I.
2009-01-01
Body mass reconstructions of extinct vertebrates are most robust when complete to near-complete skeletons allow the reconstruction of either physical or digital models. Digital models are most efficient in terms of time and cost, and provide the facility to infinitely modify model properties non-destructively, such that sensitivity analyses can be conducted to quantify the effect of the many unknown parameters involved in reconstructions of extinct animals. In this study we use laser scanning (LiDAR) and computer modelling methods to create a range of 3D mass models of five specimens of non-avian dinosaur; two near-complete specimens of Tyrannosaurus rex, the most complete specimens of Acrocanthosaurus atokensis and Struthiomimus sedens, and a near-complete skeleton of a sub-adult Edmontosaurus annectens. LiDAR scanning allows a fully mounted skeleton to be imaged, resulting in a detailed 3D model in which each bone retains its spatial position and articulation. This provides a high resolution skeletal framework around which the body cavity and internal organs such as lungs and air sacs can be reconstructed. This has allowed calculation of body segment masses, centres of mass and moments of inertia for each animal. However, any soft tissue reconstruction of an extinct taxon inevitably represents a best estimate model with an unknown level of accuracy. We have therefore conducted an extensive sensitivity analysis in which the volumes of body segments and respiratory organs were varied in an attempt to constrain the likely maximum plausible range of mass parameters for each animal. Our results provide wide ranges in actual mass and inertial values, emphasizing the high level of uncertainty inevitable in such reconstructions. However, our sensitivity analysis consistently places the centre of mass well below and in front of the hip joint in each animal, regardless of the chosen combination of body and respiratory structure volumes. 
These results emphasize that future biomechanical assessments of extinct taxa should be preceded by a detailed investigation of the plausible range of mass properties, in which sensitivity analyses are used to identify a suite of possible values to be tested as inputs in analytical models. PMID:19225569
ERIC Educational Resources Information Center
Bar, Karl-Jurgen; Boettger, Silke; Wagner, Gerd; Wilsdorf, Christine; Gerhard, Uwe Jens; Boettger, Michael K.; Blanz, Bernhard; Sauer, Heinrich
2006-01-01
Objectives: The underlying mechanisms of reduced pain perception in anorexia nervosa (AN) are unknown. To gain more insight into the pathology, the authors investigated pain perception, autonomic function, and endocrine parameters before and during successful treatment of adolescent AN patients. Method: Heat pain perception was assessed in 15…
ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling
Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf
2012-01-01
Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB™-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270
Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang
2017-05-18
This paper investigates the time-varying formation robust tracking problems for high-order linear multiagent systems with a leader of unknown control input in the presence of heterogeneous parameter uncertainties and external disturbances. The followers need to accomplish an expected time-varying formation in the state space and track the state trajectory produced by the leader simultaneously. First, a time-varying formation robust tracking protocol with a totally distributed form is proposed utilizing the neighborhood state information. With the adaptive updating mechanism, neither any global knowledge about the communication topology nor the upper bounds of the parameter uncertainties, external disturbances and leader's unknown input are required in the proposed protocol. Then, in order to determine the control parameters, an algorithm with four steps is presented, where feasible conditions for the followers to accomplish the expected time-varying formation tracking are provided. Furthermore, based on the Lyapunov-like analysis theory, it is proved that the formation tracking error can converge to zero asymptotically. Finally, the effectiveness of the theoretical results is verified by simulation examples.
Multiparameter Estimation in Networked Quantum Sensors
NASA Astrophysics Data System (ADS)
Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.
2018-02-01
We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.
NASA Astrophysics Data System (ADS)
Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas
2018-07-01
This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of the error functions represent the uncertainty of the calibrated model in predicting the building's response (modal parameters here). The focus of this paper is to answer whether the model uncertainties quantified using dynamic measurements at the building's reference/calibration state can be used to improve the model prediction accuracies at a different structural state, e.g., the damaged structure. Also, the effects of prediction error bias on the uncertainty of the predicted values are studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before demolishing the building, six of its exterior walls were removed and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. 
Moreover, it is shown that including prediction error bias in the updating process instead of commonly-used zero-mean error function can significantly reduce the prediction uncertainties.
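A toy version of the calibration step can make the idea concrete. This sketch is hypothetical: a single stiffness scaling factor is fit by grid search, with the residual variance standing in for the Gaussian error-function parameter; the paper's framework is a full Hierarchical Bayesian updating, not this simplification.

```python
import numpy as np

def calibrate(identified, model_fn, thetas):
    """Pick the stiffness scaling theta whose model-predicted modal
    frequencies best match the identified ones; the variance of the
    remaining residual quantifies the model's prediction uncertainty."""
    residuals = [identified - model_fn(th) for th in thetas]
    sse = [float(np.sum(r * r)) for r in residuals]
    i = int(np.argmin(sse))
    return thetas[i], float(np.var(residuals[i]))
```

The recovered error variance, not just the best-fit parameter, is what carries over to predictions at other structural states.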
Liang, Hua; Miao, Hongyu; Wu, Hulin
2010-03-01
Modeling viral dynamics in HIV/AIDS studies has resulted in deep understanding of pathogenesis of HIV infection from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying parameters) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only parameter estimates of interest from clinical or experimental data. However, when such prior information is not available, it is desirable to determine all the parameter estimates from data. In this paper, we intend to combine the newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. 
We demonstrate that HIV viral dynamics can be well characterized and quantified for individual patients. As a result, personalized treatment decision based on viral dynamic models is possible.
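The model class referred to above is the standard target-cell-limited system of nonlinear ODEs for uninfected cells, infected cells, and free virus. A self-contained sketch follows (forward Euler; the parameter values are hypothetical, and the time-varying infection rate of the paper is frozen to a constant here):

```python
import numpy as np

def hiv_dynamics(T0, Ts0, V0, lam, rho, eta, delta, N, c, t_end=50.0, dt=0.001):
    """Basic HIV viral dynamics model, integrated with forward Euler:
        dT/dt  = lam - rho*T - eta*T*V   (uninfected target cells)
        dTs/dt = eta*T*V - delta*Ts      (productively infected cells)
        dV/dt  = N*delta*Ts - c*V        (free virus)
    eta is the (possibly treatment-modified) infection rate."""
    T, Ts, V = float(T0), float(Ts0), float(V0)
    for _ in range(int(t_end / dt)):
        dT = lam - rho * T - eta * T * V
        dTs = eta * T * V - delta * Ts
        dV = N * delta * Ts - c * V
        T, Ts, V = T + dt * dT, Ts + dt * dTs, V + dt * dV
    return T, Ts, V
```

A quick sanity check: with the infection rate driven to zero (perfect treatment) and no infected cells, the virus must decay at the clearance rate c while target cells sit at their steady state lam/rho.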
Multisite EPR oximetry from multiple quadrature harmonics.
Ahmad, R; Som, S; Johnson, D H; Zweier, J L; Kuppusamy, P; Potter, L C
2012-01-01
Multisite continuous wave (CW) electron paramagnetic resonance (EPR) oximetry using multiple quadrature field modulation harmonics is presented. First, a recently developed digital receiver is used to extract multiple harmonics of field modulated projection data. Second, a forward model is presented that relates the projection data to unknown parameters, including linewidth at each site. Third, a maximum likelihood estimator of unknown parameters is reported using an iterative algorithm capable of jointly processing multiple quadrature harmonics. The data modeling and processing are applicable for parametric lineshapes under nonsaturating conditions. Joint processing of multiple harmonics leads to 2-3-fold acceleration of EPR data acquisition. For demonstration in two spatial dimensions, both simulations and phantom studies on an L-band system are reported. Copyright © 2011 Elsevier Inc. All rights reserved.
Parametric system identification of catamaran for improving controller design
NASA Astrophysics Data System (ADS)
Timpitak, Surasak; Prempraneerach, Pradya; Pengwang, Eakkachai
2018-01-01
This paper presents the estimation of a simplified dynamic model for only the surge and yaw motions of a catamaran, using system identification (SI) techniques to determine the associated unknown parameters. These methods will enhance the performance of the design process for the motion control system of an Unmanned Surface Vehicle (USV). The simulation results demonstrate an effective way to solve for damping forces and to determine added masses by applying least-squares and AutoRegressive eXogenous (ARX) methods. Both methods are then evaluated according to the parametric errors estimated from the vehicle’s dynamic model. The ARX method, which yields better estimation accuracy, can then be applied to identify unknown parameters as well as to help improve the controller design of a real unmanned catamaran.
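The least-squares/ARX identification step can be sketched in a few lines. This is a generic first-order, noise-free illustration with hypothetical data, not the paper's coupled surge/yaw model:

```python
import numpy as np

def arx_fit(y, u):
    """Fit the first-order ARX model y[k] = a*y[k-1] + b*u[k-1] by
    ordinary least squares on the regressor matrix [y[k-1], u[k-1]]."""
    Phi = np.column_stack([y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta  # estimated [a, b]
```

Because the unknown parameters enter linearly in the regressors, a noise-free simulation is recovered exactly; with measurement noise the same least-squares solve gives the minimum-variance estimate.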
Data Processing Algorithm for Diagnostics of Combustion Using Diode Laser Absorption Spectrometry.
Mironenko, Vladimir R; Kuritsyn, Yuril A; Liger, Vladimir V; Bolshov, Mikhail A
2018-02-01
A new algorithm for the evaluation of the integral line intensity for inferring the correct value of the temperature of a hot zone in the diagnostics of combustion by absorption spectroscopy with diode lasers is proposed. The algorithm is based not on fitting the baseline (BL) but on the expansion of the experimental and simulated spectra in a series of orthogonal polynomials, subtraction of the first three components of the expansion from both the experimental and simulated spectra, and fitting of the spectra thus modified. The algorithm is tested in a numerical experiment by simulating the absorption spectra using a spectroscopic database and adding white noise and a parabolic BL. The absorption spectra so constructed are treated as experimental data in further calculations. The theoretical absorption spectra were simulated with parameters (temperature, total pressure, concentration of water vapor) close to the parameters used for simulation of the experimental data. Then both spectra were expanded in the series of orthogonal polynomials and the first components were subtracted from each. The correct integral line intensities, and hence the correct temperature evaluation, were obtained by fitting the thus-modified experimental and simulated spectra. The dependence of the mean and standard deviation of the integral line intensity estimate on the linewidth and the number of subtracted components (first two or three) was examined. The proposed algorithm provides a correct estimation of temperature with a standard deviation better than 60 K (for T = 1000 K) for line half-widths up to 0.6 cm⁻¹. The proposed algorithm allows the parameters of a hot zone to be obtained without fitting the usually unknown BL.
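The baseline-suppression idea can be sketched with Legendre polynomials (an illustrative choice of orthogonal basis; the abstract does not specify which basis the authors used). Subtracting the low-order components removes any slowly varying baseline without ever fitting it explicitly:

```python
import numpy as np
from numpy.polynomial import legendre

def strip_low_order(spectrum, x, n_terms=3):
    """Remove the first n_terms Legendre components (the slowly varying
    baseline contribution) from a spectrum sampled on x in [-1, 1].
    Applied identically to experimental and simulated spectra, the
    residuals can then be fitted against each other baseline-free."""
    coeffs = legendre.legfit(x, spectrum, deg=n_terms - 1)
    return spectrum - legendre.legval(x, coeffs)
```

A parabolic baseline lies entirely in the first three Legendre components, so it is removed to numerical precision, while a narrow absorption line is only slightly distorted.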
Quantum pattern recognition with multi-neuron interactions
NASA Astrophysics Data System (ADS)
Fard, E. Rezaei; Aghayar, K.; Amniat-Talab, M.
2018-03-01
We present a quantum neural network with multi-neuron interactions for pattern recognition tasks, combining an extended classical Hopfield network with adiabatic quantum computation. This scheme can be used as an associative memory to retrieve partial patterns with any number of unknown bits. We also propose a preprocessing approach that classifies the pattern space S to suppress spurious patterns. The results of pattern clustering show that for pattern association, the number of weights (η) should equal the number of unknown bits in the input pattern (d). It is also remarkable that the associative memory function depends on the location of the unknown bits, apart from d and the load parameter α.
Digital Detection and Processing of Multiple Quadrature Harmonics for EPR Spectroscopy
Ahmad, R.; Som, S.; Kesselring, E.; Kuppusamy, P.; Zweier, J.L.; Potter, L.C.
2010-01-01
A quadrature digital receiver and associated signal estimation procedure are reported for L-band electron paramagnetic resonance (EPR) spectroscopy. The approach provides simultaneous acquisition and joint processing of multiple harmonics in both in-phase and out-of-phase channels. The digital receiver, based on a high-speed dual-channel analog-to-digital converter, allows direct digital down-conversion with heterodyne processing using digital capture of the microwave reference signal. Thus, the receiver avoids noise and nonlinearity associated with analog mixers. Also, the architecture allows for low-Q anti-alias filtering and does not require the sampling frequency to be time-locked to the microwave reference. A noise model applicable for arbitrary contributions of oscillator phase noise is presented, and a corresponding maximum-likelihood estimator of unknown parameters is also reported. The signal processing is applicable for Lorentzian lineshape under nonsaturating conditions. The estimation is carried out using a convergent iterative algorithm capable of jointly processing the in-phase and out-of-phase data in the presence of phase noise and unknown microwave phase. Cramér-Rao bound analysis and simulation results demonstrate a significant reduction in linewidth estimation error using quadrature detection, for both low and high values of phase noise. EPR spectroscopic data are also reported for illustration. PMID:20971667
Application of Control Method on a West Antarctic Glacier
NASA Astrophysics Data System (ADS)
Schmeltz, M.; Rignot, E. J.; Macayeal, D. R.
2002-12-01
We use surface velocity inferred with interferometric synthetic-aperture radar and a control method to estimate unknown basal characteristics of Pine Island Glacier, a fast-moving glacier in West Antarctica. Previous modelling experiments on Pine Island Glacier showed that a coupled ice-stream/ice-shelf flow model used in a forward approach (trial-and-error method) reproduced the surface velocity fairly well. Some discrepancies remained, however, partly due to uncertainties in the thickness map and in our chosen basal stress distribution (because the solution is non-unique). The control method allows us to treat the basal stress (or basal friction, since the two are related through the velocity) as an unknown parameter. Results from the control method should provide more reliable inputs for further modelling experiments. We investigate the sensitivity of the results to the initial value of the basal stress. The inferred ratio of basal drag to driving stress is consistently low 60 to 80 km upstream of the grounding line, as if the ice stream were behaving like an ice shelf. It also reveals a snake-shaped channel of low basal-drag/driving-stress ratio, surrounded by higher values, in the main flow of increasing velocity from 20 to 40 km upstream of the grounding line.
NASA Astrophysics Data System (ADS)
Rafiq Abuturab, Muhammad
2018-01-01
A new asymmetric multiple-information cryptosystem based on a chaotic spiral phase mask (CSPM) and random spectrum decomposition is put forward. In the proposed system, each channel of the secret color image is first modulated with a CSPM and then gyrator transformed. The gyrator spectrum is randomly divided into two complex-valued masks. The same procedure is applied to multiple secret images to obtain their corresponding first and second complex-valued masks. Finally, the first and second masks of each channel are independently added to produce the first and second complex ciphertexts, respectively. The main feature of the proposed method is that different secret images are encrypted by different CSPMs with different parameters, which serve as sensitive decryption/private keys completely unknown to unauthorized users. Consequently, the proposed system is resistant to potential attacks. Moreover, the CSPMs are easier to position in the decoding process owing to their centering mark on the axial focal ring. The retrieved secret images are free from cross-talk noise effects. The decryption process can be implemented by optical experiment. Numerical simulation results demonstrate the viability and security of the proposed method.
NASA Astrophysics Data System (ADS)
Rosandi, Yudi; Grossi, Joás; Bringa, Eduardo M.; Urbassek, Herbert M.
2018-01-01
The incidence of energetic laser pulses on a metal foam may lead to foam ablation. The processes occurring in the foam may differ strongly from those in a bulk metal: The absorption of laser light, energy transfer to the atomic system, heat conduction, and finally, the atomistic processes—such as melting or evaporation—may be different. In addition, novel phenomena take place, such as a reorganization of the ligament network in the foam. We study all these processes in an Au foam of average porosity 79% and an average ligament diameter of 2.5 nm, using molecular dynamics simulation. The coupling of the electronic system to the atomic system is modeled by using the electron-phonon coupling, g, and the electronic heat diffusivity, κe, as model parameters, since their actual values for foams are unknown. We show that the foam coarsens under laser irradiation. While κe governs the homogeneity of the processes, g mainly determines their time scale. The final porosity reached is independent of the value of g.
NASA Astrophysics Data System (ADS)
Chen, Zhen; Chan, Tommy H. T.
2017-08-01
This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement, commonly referred to as "noise". Because the inverse problem is ill-posed, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to perturbations from the noise. The illustrated results show that TGSVD offers advantages over TDM such as higher precision, better adaptability and noise immunity. In addition, choosing a proper regularization matrix L and truncation parameter k is essential for improving the identification accuracy and handling the ill-posedness when identifying moving forces on a bridge.
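The truncation idea can be illustrated in its simplest form, plain truncated SVD (the special case of TGSVD with the identity as regularization matrix L): small singular values, which amplify the noise e in b, are discarded. The system and numbers below are hypothetical stand-ins, not the bridge-deck model:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of the ill-posed system A x = b:
    keep only the k largest singular values, discarding the small
    ones that amplify the noise e in b."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ b)[:k] / s[:k]   # filtered spectral coefficients
    return Vt[:k].T @ coeffs

# Ill-conditioned demo system (hypothetical, for illustration only)
rng = np.random.default_rng(0)
n = 20
A = np.vander(np.linspace(0, 1, n), n, increasing=True)  # badly conditioned
x_true = np.ones(n)
b = A @ x_true + 1e-8 * rng.standard_normal(n)           # noisy right-hand side

x_tsvd = tsvd_solve(A, b, k=8)
rel_res = np.linalg.norm(A @ x_tsvd - b) / np.linalg.norm(b)
print(rel_res)
```

The truncation parameter k plays the role described in the abstract: too small and the solution cannot fit the data, too large and the noise-dominated components re-enter.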
NASA Technical Reports Server (NTRS)
Watson, W. R.
1984-01-01
A method is developed for determining acoustic liner admittance in a rectangular duct with grazing flow. The axial propagation constant, cross-mode order, and mean flow profile are measured. These measured data are then input into an analytical program that determines the unknown admittance value. The program is based on a finite element discretization of the acoustic field and a recasting of the problem as a linear eigenvalue problem in the unknown admittance. Gaussian elimination is employed to solve this eigenvalue problem. The method is extendable to grazing flows with boundary layers in both transverse directions of an impedance tube (or duct). Predicted admittance values are compared both with exact values obtainable for uniform mean flow profiles and with those from a Runge-Kutta integration technique for cases involving a one-dimensional boundary layer.
Pecha, Petr; Šmídl, Václav
2016-11-01
A stepwise sequential assimilation algorithm is proposed based on an optimisation approach for recursive parameter estimation and tracking of radioactive plume propagation in the early stage of a radiation accident. Predictions of the radiological situation in each time step of the plume propagation are driven by an existing short-term meteorological forecast and the assimilation procedure manipulates the model parameters to match the observations incoming concurrently from the terrain. Mathematically, the task is a typical ill-posed inverse problem of estimating the parameters of the release. The proposed method is designated as a stepwise re-estimation of the source term release dynamics and an improvement of several input model parameters. It results in a more precise determination of the adversely affected areas in the terrain. The nonlinear least-squares regression methodology is applied for estimation of the unknowns. The fast and adequately accurate segmented Gaussian plume model (SGPM) is used in the first stage of direct (forward) modelling. The subsequent inverse procedure infers (re-estimates) the values of important model parameters from the actual observations. Accuracy and sensitivity of the proposed method for real-time forecasting of the accident propagation is studied. First, a twin experiment generating noiseless simulated "artificial" observations is studied to verify the minimisation algorithm. Second, the impact of the measurement noise on the re-estimated source release rate is examined. In addition, the presented method can be used as a proposal for more advanced statistical techniques using, e.g., importance sampling. Copyright © 2016 Elsevier Ltd. All rights reserved.
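The forward/inverse split above can be sketched with a single Gaussian plume standing in for the segmented model: because the release rate enters the concentration field linearly, the least-squares re-estimate from terrain observations is closed-form. The dispersion power laws and all numbers below are illustrative assumptions, not the paper's SGPM:

```python
import numpy as np

def plume_unit_conc(x, y, H=50.0, u=5.0):
    """Ground-level concentration of a Gaussian plume for unit release
    rate Q = 1 (elevated source at height H, wind speed u). The sigma
    power laws are illustrative, not a specific stability class."""
    sigma_y = 0.08 * x**0.9
    sigma_z = 0.06 * x**0.85
    return (1.0 / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * np.exp(-H**2 / (2 * sigma_z**2)))

# Receptors on the plume centreline downwind of the source
x = np.linspace(500, 5000, 30)
f = plume_unit_conc(x, np.zeros_like(x))   # unit-rate predictions

rng = np.random.default_rng(1)
Q_true = 2.0e9                              # "true" release rate (arbitrary units)
obs = Q_true * f * (1 + 0.05 * rng.standard_normal(f.size))  # noisy terrain data

# Q enters the model linearly, so the least-squares re-estimate is closed-form
Q_hat = (f @ obs) / (f @ f)
print(Q_hat)
```

In the paper's setting several parameters are re-estimated jointly by nonlinear least squares; this sketch isolates the simplest one, the source rate.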
Kumaraswamy autoregressive moving average models for double bounded environmental data
NASA Astrophysics Data System (ADS)
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), a dynamic class of models for time series taking values in the double-bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields; classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to real environmental data is presented and discussed.
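For the standard Kumaraswamy distribution on (0, 1), with CDF F(x) = 1 - (1 - x^a)^b, the median that the KARMA structure models dynamically has a closed form. A minimal sketch (parameter values are illustrative):

```python
import random

def kuma_median(a, b):
    """Median of the Kumaraswamy(a, b) distribution on (0, 1):
    solves F(m) = 1 - (1 - m**a)**b = 1/2 in closed form."""
    return (1.0 - 2.0 ** (-1.0 / b)) ** (1.0 / a)

def kuma_sample(a, b, u):
    """Inverse-CDF sampling: F^{-1}(u) for u in (0, 1)."""
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

a, b = 2.0, 3.0
m = kuma_median(a, b)

# Sanity check: half of inverse-CDF draws fall below the median
rng = random.Random(42)
draws = [kuma_sample(a, b, rng.random()) for _ in range(100_000)]
frac_below = sum(d < m for d in draws) / len(draws)
print(m, frac_below)
```

KARMA links this median to covariates and ARMA terms through a link function; the closed-form median is what makes that parametrization convenient.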
Can the ZoMBieS method be used to characterise scintillator non-linearity?
Bignell, L J
2014-05-01
Measurements of the detection efficiency as a function of deposited electron energy in a liquid scintillation cocktail between 4 keV and 49 keV are obtained using the ZoMBieS method. Comparison is made between the measured data and the Poisson-Birks detection efficiency model. Measurements of the Birks non-linearity parameter, kB, and the linearised scintillation response of each photomultiplier, ω(i), were made using these data. However, the value of kB that best linearises the scintillator response is found to vary depending upon which photomultiplier is used in its determination, and the measured kB and ω(i) vary depending on the external source geometry. The cause of this behaviour is unknown. The triple-coincident detection efficiency appears to be unaffected by any systematic errors. © 2013 Published by Elsevier Ltd.
Multi-objective optimization in quantum parameter estimation
NASA Astrophysics Data System (ADS)
Gong, BeiLi; Cui, Wei
2018-04-01
We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.
Zuo, Houjuan; Yan, Jiangtao; Zeng, Hesong; Li, Wenyu; Li, Pengcheng; Liu, Zhengxiang; Cui, Guanglin; Lv, Jiagao; Wang, Daowen; Wang, Hong
2015-01-01
Global longitudinal strain (GLS) measured by 2-D speckle-tracking echocardiography (2-D STE) at rest has been recognized as a sensitive parameter in the detection of significant coronary artery disease (CAD). However, the diagnostic power of 2-D STE in the detection of significant CAD in patients with diabetes mellitus is unknown. Two-dimensional STE features were studied in total of 143 consecutive patients who underwent echocardiography and coronary angiography. Left ventricular global and segmental peak systolic longitudinal strains (PSLSs) were quantified by speckle-tracking imaging. In the presence of obstructive CAD (defined as stenosis ≥75%), global PSLS was significantly lower in patients with diabetes mellitus than in patients without (16.65 ± 2.29% vs. 17.32 ± 2.27%, p < 0.05). Receiver operating characteristic analysis revealed that global PSLS could effectively detect obstructive CAD in patients without diabetes mellitus (cutoff value: -18.35%, sensitivity: 78.8%, specificity: 77.5%). However, global PSLS could detect obstructive CAD in diabetic patients at a lower cutoff value with inadequate sensitivity and specificity (cutoff value: -17.15%; sensitivity: 61.1%, specificity: 52.9%). In addition, the results for segmental PSLS were similar to those for global PSLS. In conclusion, global and segmental PSLSs at rest were significantly lower in patients with both obstructive CAD and diabetes mellitus than in patients with obstructive CAD only; thus, PSLSs at rest might not be a useful parameter in the detection of obstructive CAD in patients with diabetes mellitus. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Qu, Yanfei; Ma, Yongwen; Wan, Jinquan; Wang, Yan
2018-06-01
The silicone oil-air partition coefficients (K_SiO/A) of hydrophobic compounds are vital parameters for applying silicone oil as a non-aqueous-phase liquid in partitioning bioreactors. Because only a limited number of K_SiO/A values have been determined experimentally for hydrophobic compounds, there is an urgent need to model K_SiO/A values for unknown chemicals. In the present study, we developed a universal quantitative structure-activity relationship (QSAR) model using a sequential approach with macro-constitutional and micro-molecular descriptors for the silicone oil-air partition coefficients (K_SiO/A) of hydrophobic compounds with large structural variance. The geometry optimization and vibrational frequencies of each chemical were calculated using hybrid density functional theory at the B3LYP/6-311G** level. Several quantum chemical parameters that reflect various intermolecular interactions as well as hydrophobicity were selected to develop the QSAR model. The results indicate that a regression model relating log K_SiO/A to the number of non-hydrogen atoms (#nonHatoms) and the energy gap E_LUMO - E_HOMO can explain the partitioning mechanism of hydrophobic compounds between silicone oil and air. The correlation coefficient R^2 of the model is 0.922, and the internal and external validation coefficients, Q^2_LOO and Q^2_ext, are 0.91 and 0.89 respectively, implying that the model has satisfactory goodness-of-fit, robustness and predictive ability, and thus provides a robust predictive tool for estimating log K_SiO/A values for chemicals within the applicability domain. The applicability domain of the model was visualized by the Williams plot.
NASA Astrophysics Data System (ADS)
Zhao, L. W.; Du, J. G.; Yin, J. L.
2018-05-01
This paper proposes a novel secured communication scheme in a chaotic system by applying generalized function projective synchronization of the nonlinear Schrödinger equation. This approach guarantees secure and convenient communication. Our study applies the Melnikov theorem with an active control strategy to suppress chaos in the system. The transmitted information signal is modulated into a parameter of the nonlinear Schrödinger equation in the transmitter, and the corresponding parameter of the receiver system is assumed unknown. Based on Lyapunov stability theory and the adaptive control technique, controllers are designed to make two identical nonlinear Schrödinger equations with the unknown parameter asymptotically synchronized. Numerical simulation results confirm the validity, effectiveness and feasibility of the proposed synchronization method and error estimate for secure communication. The chaos-masking signals of the communication scheme further guarantee safer and more secure information transmission via this approach.
A review of the meteorological parameters which affect aerial application
NASA Technical Reports Server (NTRS)
Christensen, L. S.; Frost, W.
1979-01-01
The ambient wind field and temperature gradient were found to be the most important parameters. Investigation results indicated that the majority of meteorological parameters affecting dispersion are interdependent and that the exact mechanism by which these factors influence particle dispersion is largely unknown. The types and approximate ranges of instrumentation capabilities required for a systematic study of the significant meteorological parameters influencing aerial application were defined. Current mathematical dispersion models were also briefly reviewed; a rigorous dispersion model that could be applied to aerial application was, however, not available.
SYNTHESIS OF NOVEL ALL-DIELECTRIC GRATING FILTERS USING GENETIC ALGORITHMS
NASA Technical Reports Server (NTRS)
Zuffada, Cinzia; Cwik, Tom; Ditchman, Christopher
1997-01-01
We are concerned with the design of inhomogeneous, all-dielectric (lossless) periodic structures which act as filters. Dielectric filters made as stacks of inhomogeneous gratings and layers of materials are being used in optical technology, but are not common at microwave frequencies. The problem is then finding the periodic cell's geometric configuration and permittivity values which correspond to a specified reflectivity/transmittivity response as a function of frequency/illumination angle. This type of design can be thought of as an inverse-source problem, since it entails finding a distribution of sources which produce fields (or quantities derived from them) of given characteristics. Electromagnetic sources (electric and magnetic current densities) in a volume are related to the outside fields by a well-known linear integral equation. Additionally, the sources are related to the fields inside the volume by a constitutive equation involving the material properties. The relationship linking the fields outside the source region to those inside is therefore nonlinear in terms of material properties such as permittivity, permeability and conductivity. The solution of the nonlinear inverse problem is cast here as a combination of two linear steps, by explicitly introducing the electromagnetic sources in the computational volume as a set of unknowns in addition to the material unknowns. This allows solving for material parameters and associated electric fields in the source volume that are consistent with Maxwell's equations. Solutions are obtained iteratively by decoupling the two steps. First, we invert for the permittivity only in the minimization of a cost function; second, given the materials, we find the corresponding electric fields through direct solution of the integral equation in the source volume. The sources thus computed are used to generate the far fields and the synthesized filter response.
The cost function is obtained by calculating the deviation between the synthesized value of reflectivity/transmittivity and the desired one. Solution geometries for the periodic cell are sought as gratings (ensembles of columns of different heights and widths), or combinations of homogeneous layers of different dielectric materials and gratings. Hence the explicit unknowns of the inversion step are the material permittivities and the relative boundaries separating homogeneous parcels of the periodic cell.
Yu, Zhaoxu; Li, Shugang; Yu, Zhaosheng; Li, Fangfei
2018-04-01
This paper investigates the problem of output feedback adaptive stabilization for a class of nonstrict-feedback stochastic nonlinear systems with both unknown backlash-like hysteresis and unknown control directions. A new linear state transformation is applied to the original system, which makes control design for the transformed system feasible. By combining neural network (NN) parameterization, the variable separation technique, and the Nussbaum gain function method, an input-driven observer-based adaptive NN control scheme, which involves only one parameter to be updated, is developed for such systems. All closed-loop signals are bounded in probability and the error signals remain semiglobally bounded in the fourth moment (or mean square). Finally, the effectiveness and applicability of the proposed control design are verified by two simulation examples.
Prognostic impact of posttransplantation iron overload after allogeneic stem cell transplantation.
Meyer, Sara C; O'Meara, Alix; Buser, Andreas S; Tichelli, André; Passweg, Jakob R; Stern, Martin
2013-03-01
In patients referred for allogeneic hematopoietic stem cell transplantation (HSCT), iron overload is frequent and associated with increased morbidity and mortality. Both the evolution of iron overload after transplantation and its correlation with late posttransplantation events are unknown. We studied 290 patients undergoing myeloablative allogeneic HSCT between 2000 and 2009. Serum ferritin, transferrin saturation, transferrin, iron, and soluble transferrin receptor were determined regularly between 1 and 60 months after HSCT, and values were correlated with transplantation outcome. Ferritin levels peaked in the first 3 months posttransplantation and then decreased to normal values at 5 years. Transferrin saturation and iron behaved analogously, whereas transferrin and soluble transferrin receptor increased after an early nadir. Landmark survival analysis showed that hyperferritinemia had a detrimental effect on survival in all periods analyzed (0 to 6 months P < .001; 6 to 12 months P < .001; 1 to 2 years P = .02; 2 to 5 years P = .002). This effect was independent of red blood cell transfusion dependency and graft-versus-host disease. Similar trends were seen for other iron parameters. These data show the natural dynamics of iron parameters in the setting of allogeneic HSCT and provide evidence for a prognostic role of iron overload extending beyond the immediate posttransplantation period. Interventions to reduce excessive body iron might therefore be beneficial both before and after HSCT. Copyright © 2013 American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.
2012-02-01
This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. 
This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, vs uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of vs-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.
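The hierarchical autoregressive error idea can be illustrated at its simplest: an AR(1) model e_t = a*e_(t-1) + w_t fitted to residual samples, whose estimated coefficient whitens the serial correlation. A toy sketch with synthetic residuals (not the microtremor data; the coefficient value is illustrative):

```python
import numpy as np

# Synthetic serially correlated data-error series, e_t = a * e_{t-1} + w_t
rng = np.random.default_rng(3)
a_true = 0.7
w = rng.standard_normal(5000)
e = np.zeros_like(w)
for t in range(1, w.size):
    e[t] = a_true * e[t - 1] + w[t]

# Least-squares estimate of the AR(1) coefficient from the residuals
a_hat = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])

# Whitening: the innovations should be close to serially uncorrelated
innov = e[1:] - a_hat * e[:-1]
lag1 = (innov[1:] @ innov[:-1]) / (innov @ innov)
print(a_hat, lag1)
```

In the hierarchical setting of the paper the AR coefficients are sampled as unknowns alongside the earth model rather than estimated from a point residual, but the whitening role is the same.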
Pentaerythritol Tetranitrate (PETN) Surveillance by HPLC-MS: Instrumental Parameters Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey, C A; Meissner, R
Surveillance of PETN homologs in the stockpile here at LLNL is currently carried out by high-performance liquid chromatography (HPLC) with ultraviolet (UV) detection. Identification of unknown chromatographic peaks with this detection scheme is severely limited. The design agency is aware of the limitations of this methodology and ordered this study to develop instrumental parameters for the use of a currently owned mass spectrometer (MS) as the detection system. The resulting procedure would be a ''drop-in'' replacement for the current surveillance method (ERD04-524). The addition of quadrupole mass spectrometry provides qualitative identification of PETN and its homologs (Petrin, DiPEHN, TriPEON, and TetraPEDN) using an LLNL-generated database, while providing mass clues to the identity of unknown chromatographic peaks.
Self-evaluation on Motion Adaptation for Service Robots
NASA Astrophysics Data System (ADS)
Funabora, Yuki; Yano, Yoshikazu; Doki, Shinji; Okuma, Shigeru
We propose a self-evaluation method for motion adaptation that allows service robots to adapt to environmental changes. Motions such as walking, dancing and demonstration are described as time-series patterns. These motions are optimized for the robot's architecture under a particular surrounding environment, so under an unknown operating environment robots cannot accomplish their tasks. We propose autonomous motion generation techniques based on heuristic search using histories of internal sensor values, in which new motion patterns are explored under the unknown operating environment based on self-evaluation. The robot has prepared motions that realize its tasks under the designed environment, and the internal sensor values observed there represent the results of interaction with that environment. Self-evaluation is computed from the difference in internal sensor values between the designed environment and the unknown operating environment. The proposed method modifies the motions to synchronize the interaction results in both environments: new motion patterns are generated to maximize the self-evaluation function without external information such as run length, global robot position or human observation. Experimental results show the possibility of autonomously adapting patterned motions to environmental changes.
A spline-based parameter estimation technique for static models of elastic structures
NASA Technical Reports Server (NTRS)
Dutt, P.; Taasan, S.
1986-01-01
The problem of identifying the spatially varying coefficient of elasticity using an observed solution to the forward problem is considered. Under appropriate conditions this problem can be treated as a first order hyperbolic equation in the unknown coefficient. Some continuous dependence results are developed for this problem and a spline-based technique is proposed for approximating the unknown coefficient, based on these results. The convergence of the numerical scheme is established and error estimates obtained.
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one obtains a good initial guess of the modified NARMAX model, reducing the on-line system identification time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system with an input-output direct transmission term, measurement and system noises, and inaccessible system states. In addition, an effective state-space self-tuner with a fault tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter, is utilized to achieve parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures via fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Kanoatov, Mirzo; Galievsky, Victor A; Krylova, Svetlana M; Cherney, Leonid T; Jankowski, Hanna K; Krylov, Sergey N
2015-03-03
Nonequilibrium capillary electrophoresis of equilibrium mixtures (NECEEM) is a versatile tool for studying affinity binding. Here we describe a NECEEM-based approach for simultaneous determination of both the equilibrium constant, K(d), and the unknown concentration of a binder that we call a target, T. In essence, NECEEM is used to measure the unbound equilibrium fraction, R, for the binder with a known concentration that we call a ligand, L. The first set of experiments is performed at varying concentrations of T, prepared by serial dilution of the stock solution, but at a constant concentration of L, which is as low as its reliable quantitation allows. The value of R is plotted as a function of the dilution coefficient, and dilution corresponding to R = 0.5 is determined. This dilution of T is used in the second set of experiments in which the concentration of T is fixed but the concentration of L is varied. The experimental dependence of R on the concentration of L is fitted with a function describing their theoretical dependence. Both K(d) and the concentration of T are used as fitting parameters, and their sought values are determined as the ones that generate the best fit. We have fully validated this approach in silico by using computer-simulated NECEEM electropherograms and then applied it to experimental determination of the unknown concentration of MutS protein and K(d) of its interactions with a DNA aptamer. The general approach described here is applicable not only to NECEEM but also to any other method that can determine a fraction of unbound molecules at equilibrium.
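The two-stage fit of R can be sketched for 1:1 binding, where the unbound ligand fraction follows from the binding quadratic and both K(d) and the target concentration are treated as fitting unknowns. The concentrations, noise level and crude grid search below are illustrative stand-ins for the NECEEM fit, not the MutS/aptamer data:

```python
import numpy as np

def unbound_fraction(L0, Kd, T0):
    """Equilibrium unbound fraction R of ligand L for 1:1 binding
    L + T <-> LT: exact solution of the binding quadratic from the
    total concentrations L0, T0 and the dissociation constant Kd."""
    s = L0 + T0 + Kd
    LT = (s - np.sqrt(s**2 - 4 * L0 * T0)) / 2
    return 1 - LT / L0

# Simulated titration: ligand varied at a fixed, unknown target
# concentration (all values illustrative).
Kd_true, T0_true = 50.0, 100.0          # nM
L0 = np.logspace(0, 3, 12)              # 1 nM .. 1 uM total ligand
rng = np.random.default_rng(7)
R_obs = unbound_fraction(L0, Kd_true, T0_true) + 0.01 * rng.standard_normal(L0.size)

# Crude grid search over (Kd, T0): a stand-in for the paper's
# least-squares fit with both quantities as fitting parameters.
best_sse, Kd_fit, T0_fit = np.inf, None, None
for Kd in np.logspace(0, 3, 120):
    for T0 in np.logspace(0, 3, 120):
        sse = np.sum((unbound_fraction(L0, Kd, T0) - R_obs) ** 2)
        if sse < best_sse:
            best_sse, Kd_fit, T0_fit = sse, Kd, T0
print(Kd_fit, T0_fit)
```

The best-fit pair recovers both unknowns from the R(L0) curve alone, mirroring the idea that the fraction of unbound molecules at equilibrium carries enough information for simultaneous K(d) and concentration determination.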
Ai, Zhipin; Wang, Qinxue; Yang, Yonghui; Manevski, Kiril; Zhao, Xin; Eer, Deni
2017-12-19
Evaporation from land surfaces is a critical component of the Earth water cycle and of water management strategies. The complementary method originally proposed by Bouchet, which describes a linear relation between actual evaporation (E), potential evaporation (E_po) and apparent potential evaporation (E_pa) based on routinely measured weather data, is one of various methods for evaporation calculation. This study evaluated the reformulated version of the original method, as proposed by Brutsaert, for forest land cover in Japan. The new complementary method is nonlinear and based on boundary conditions with strictly physical considerations. The only unknown parameter (α_e) was determined for the first time for various forest covers located from north to south across Japan. The values of α_e ranged from 0.94 to 1.10, with a mean value of 1.01. Furthermore, evaporation calculated with the new method showed a good fit with the eddy-covariance measured values, with a determination coefficient of 0.78 and a mean bias of 4%. The evaluation revealed that the new nonlinear complementary relation performs better than the original linear relation in describing the relationship between E/E_pa and E_po/E_pa, and in depicting the asymmetric variation between E_pa/E_po and E/E_po.
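The abstract does not give the functional form of the reformulation; a commonly quoted statement of Brutsaert's nonlinear complementary relation is the polynomial y = (2 - x)x^2 with y = E/E_pa and x = E_po/E_pa, used below as an assumption (the parameter α_e enters through the definition of E_po and is taken as given here):

```python
def actual_evaporation(E_po, E_pa):
    """Brutsaert's nonlinear complementary relation, assumed here in
    the polynomial form y = (2 - x) * x**2 with y = E/E_pa and
    x = E_po/E_pa (0 <= x <= 1). Returns actual evaporation E."""
    x = E_po / E_pa
    return (2.0 - x) * x**2 * E_pa

# Wet-environment limit: when E_po equals E_pa, E recovers E_po
E_wet = actual_evaporation(4.0, 4.0)   # -> 4.0 (x = 1, y = 1)
# Drier case: actual evaporation falls below potential evaporation
E_dry = actual_evaporation(2.0, 5.0)
print(E_wet, E_dry)
```

The polynomial satisfies the physical boundary conditions the abstract alludes to: E equals E_po = E_pa in the wet limit (x = 1), and E vanishes faster than linearly as conditions dry (x → 0).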
NASA Astrophysics Data System (ADS)
Reaver, N.; Kaplan, D. A.; Jawitz, J. W.
2017-12-01
The Budyko hypothesis states that a catchment's long-term water and energy balances are dependent on two relatively easy to measure quantities: rainfall depth and potential evaporation. This hypothesis is expressed as a simple function, the Budyko equation, which allows for the prediction of a catchment's actual evapotranspiration and discharge from measured rainfall depth and potential evaporation, data which are widely available. However, the two main analytically derived forms of the Budyko equation contain a single unknown watershed parameter, whose value varies across catchments; variation in this parameter has been used to explain the hydrological behavior of different catchments. The watershed parameter is generally thought of as a lumped quantity that represents the influence of all catchment biophysical features (e.g. soil type and depth, vegetation type, timing of rainfall, etc). Previous work has shown that the parameter is statistically correlated with catchment properties, but an explicit expression has been elusive. While the watershed parameter can be determined empirically by fitting the Budyko equation to measured data in gauged catchments where actual evapotranspiration can be estimated, this limits the utility of the framework for predicting impacts to catchment hydrology due to changing climate and land use. In this study, we developed an analytical solution for the lumped catchment parameter for both forms of the Budyko equation. We combined these solutions with a statistical soil moisture model to obtain analytical solutions for the Budyko equation parameter as a function of measurable catchment physical features, including rooting depth, soil porosity, and soil wilting point. We tested the predictive power of these solutions using the U.S. catchments in the MOPEX database. We also compared the Budyko equation parameter estimates generated from our analytical solutions (i.e. 
predicted parameters) with those obtained through the calibration of the Budyko equation to discharge data (i.e. empirical parameters), and found good agreement. These results suggest that it is possible to predict the Budyko equation watershed parameter directly from physical features, even for ungauged catchments.
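The empirical calibration route that the abstract contrasts with the analytical solutions can be sketched as follows: using Fu's form of the Budyko equation, the watershed parameter ω is backed out from long-term water-balance data (E = P − Q) in a gauged catchment. The catchment numbers below are made up for illustration.

```python
import numpy as np
from scipy.optimize import brentq

def fu_budyko(pet_over_p, omega):
    # Fu's form of the Budyko curve: E/P as a function of the aridity index PET/P.
    phi = pet_over_p
    return 1.0 + phi - (1.0 + phi**omega) ** (1.0 / omega)

def calibrate_omega(P, PET, Q):
    # Long-term water balance gives E = P - Q; solve
    # fu_budyko(PET/P, omega) = E/P for the lumped parameter omega.
    target = (P - Q) / P
    return brentq(lambda w: fu_budyko(PET / P, w) - target, 1.0 + 1e-6, 20.0)

# Hypothetical long-term means (mm/yr) for an illustrative catchment
P, PET, Q = 1100.0, 900.0, 420.0
omega = calibrate_omega(P, PET, Q)
print(round(omega, 2))
```

The analytical solutions described in the abstract aim to predict this same ω directly from physical catchment features, so that the root-finding step against discharge data is no longer required in ungauged catchments.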
Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith
2018-01-02
Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models.
It both provides insight into model behaviour and identifies the sets of rate parameters of interest.
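One iteration of emulator-based history matching can be sketched in miniature: a cheap surrogate (here a quadratic regression rather than a full Bayesian emulator) is trained on a handful of runs of the expensive model, and candidate parameter points are discarded when their implausibility exceeds a cutoff. The toy simulator, the cutoff of 3, and the variance bookkeeping are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def slow_model(x):
    # Stand-in for an expensive systems-biology simulator (hypothetical).
    return np.sin(3 * x[..., 0]) + x[..., 1] ** 2

# Train a cheap emulator (quadratic regression) on a few model runs
X_train = rng.uniform(-1, 1, size=(40, 2))
y_train = slow_model(X_train)
feats = lambda X: np.column_stack([np.ones(len(X)), X, X**2, X.prod(axis=1)])
coef, *_ = np.linalg.lstsq(feats(X_train), y_train, rcond=None)
resid_var = np.var(y_train - feats(X_train) @ coef)   # crude emulator variance

# History match: keep only points whose implausibility is below 3
z, obs_var = 0.7, 0.01                        # observed value and its variance
X_cand = rng.uniform(-1, 1, size=(10000, 2))
I = np.abs(z - feats(X_cand) @ coef) / np.sqrt(obs_var + resid_var)
non_implausible = X_cand[I < 3.0]
print(len(non_implausible), "of", len(X_cand), "candidates retained")
```

In the full method this cut is applied in waves, with the emulator refitted inside the surviving region at each wave, so the non-implausible space shrinks progressively.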
Multiparameter Estimation in Networked Quantum Sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.
2018-02-21
We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.
Riley, William D.; Brown, Jr., Robert D.
1987-01-01
To identify the composition of a metal alloy, sparks generated from the alloy are optically observed and spectrographically analyzed. The spectrographic data, in the form of a full-spectrum plot of intensity versus wavelength, provide the "signature" of the metal alloy. This signature can be compared with similar plots for alloys of known composition to establish the unknown composition by a positive match with a known alloy. An alternative method is to form intensity ratios for pairs of predetermined wavelengths within the observed spectrum and to then compare the values of such ratios with similar values for known alloy compositions, thereby to positively identify the unknown alloy composition.
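The ratio-comparison alternative amounts to a nearest-match lookup: form intensity ratios at predetermined wavelength pairs and pick the known alloy whose ratio vector is closest. The library entries below are illustrative numbers, not real spectral data.

```python
import numpy as np

# Reference library: intensity ratios at predetermined wavelength pairs
# for alloys of known composition (illustrative values, not real spectra).
library = {
    "304 stainless": np.array([1.8, 0.9, 2.4]),
    "4140 steel":    np.array([2.6, 0.4, 1.1]),
    "6061 aluminum": np.array([0.3, 1.7, 0.8]),
}

def identify(ratios):
    # Positive match = known alloy whose ratio vector is closest (Euclidean).
    return min(library, key=lambda name: np.linalg.norm(library[name] - ratios))

unknown = np.array([1.75, 0.95, 2.3])   # ratios measured from the spark spectrum
print(identify(unknown))                 # closest library entry
```

A full-spectrum "signature" comparison works the same way, only with the whole intensity-versus-wavelength vector instead of a few ratios.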
Universality of modulation length and time exponents.
Chakrabarty, Saurish; Dobrosavljević, Vladimir; Seidel, Alexander; Nussinov, Zohar
2012-10-01
We study systems with a crossover parameter λ, such as the temperature T, which has a threshold value λ* across which the correlation function changes from exhibiting fixed wavelength (or time period) modulations to continuously varying modulation lengths (or times). We introduce a hitherto unknown exponent ν_L characterizing the universal nature of this crossover and compute its value in general instances. This exponent, similar to standard correlation length exponents, is obtained from motion of the poles of the momentum (or frequency) space correlation functions in the complex k-plane (or ω-plane) as the parameter λ is varied. Near the crossover (i.e., for λ → λ*), the characteristic modulation wave vector K_R in the variable modulation length "phase" is related to that in the fixed modulation length "phase" q via |K_R − q| ∝ |T − T*|^{ν_L}. We find, in general, that ν_L = 1/2. In some special instances, ν_L may attain other rational values. We extend this result to general problems in which the eigenvalue of an operator or a pole characterizing general response functions may attain a constant real (or imaginary) part beyond a particular threshold value λ*. We discuss extensions of this result to multiple other arenas. These include the axial next-nearest-neighbor Ising (ANNNI) model. By extending our considerations, we comment on relations pertaining not only to the modulation lengths (or times), but also to the standard correlation lengths (or times). We introduce the notion of a Josephson time scale. We comment on the presence of aperiodic "chaotic" modulations in "soft-spin" and other systems. These relate to glass-type features. We discuss applications to Fermi systems, with particular application to metal to band insulator transitions, change of Fermi surface topology, divergent effective masses, Dirac systems, and topological insulators.
Both regular periodic and glassy (and spatially chaotic behavior) may be found in strongly correlated electronic systems.
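The square-root exponent can be checked numerically on an assumed quartic correlator of the kind that arises in ANNNI-like problems: the two positive roots of k² merge at the threshold, and just past it the modulation wave vector departs from its threshold value as the half power of the control parameter.

```python
import numpy as np

# Poles of an illustrative correlator G(k) ~ 1/(k^4 + lam*k^2 + 1).
# At lam* = -2 the two positive roots of k^2 merge at q = 1; just past
# the threshold the modulation wave vector departs from q as
# |lam - lam*|^(1/2), i.e. with exponent 1/2.
lam_star, q = -2.0, 1.0
eps = np.logspace(-6, -3, 20)                  # distance past the threshold
lam = lam_star - eps
K = np.sqrt((-lam + np.sqrt(lam**2 - 4.0)) / 2.0)   # real pole position
slope = np.polyfit(np.log(eps), np.log(np.abs(K - q)), 1)[0]
print(round(slope, 2))   # ~ 0.5
```

The fitted log-log slope reproduces the generic value of the exponent; the specific quartic form is an assumption chosen only to make the pole merger explicit.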
Finding Top-k Unexplained Activities in Video
2012-03-09
The parameters that define a UAP instance affect the running time; each parameter is varied while the others are kept fixed at a default value. Table 1 reports the values considered for each parameter along with the corresponding default value (runtime of Top-k TUA).

Parameter   Values                  Default value
k           1, 2, 5, All            All
τ           0.4, 0.6, 0.8           0.6
L           160, 200, 240, 280      200
# worlds    7E+04, 4E+05, 2E+07     2E+07

TABLE 1: Parameter values used in the runtime experiments.
NASA Astrophysics Data System (ADS)
Stanaway, D. J.; Flores, A. N.; Haggerty, R.; Benner, S. G.; Feris, K. P.
2011-12-01
Concurrent assessment of biogeochemical and solute transport data (i.e. advection, dispersion, transient storage) within lotic systems remains a challenge in eco-hydrological research. Recently, the Resazurin-Resorufin Smart Tracer System (RRST) was proposed as a mechanism to measure microbial activity at the sediment-water interface [Haggerty et al., 2008, 2009] associating metabolic and hydrologic processes and allowing for the reach scale extrapolation of biotic function in the context of a dynamic physical environment. This study presents a Markov Chain Monte Carlo (MCMC) data assimilation technique to solve the inverse model of the Raz Rru Advection Dispersion Equation (RRADE). The RRADE is a suite of dependent 1-D reactive ADEs, associated through the microbially mediated reduction of Raz to Rru (k12). This reduction is proportional to DO consumption (R^2=0.928). MCMC is a suite of algorithms that solve Bayes theorem to condition uncertain model states and parameters on imperfect observations. Here, the RRST is employed to quantify the effect of chronic metal exposure on hyporheic microbial metabolism along a 100+ year old metal contamination gradient in the Clark Fork River (CF). We hypothesized that 1) the energetic cost of metal tolerance limits heterotrophic microbial respiration in communities evolved in chronic metal contaminated environments, with respiration inhibition directly correlated to degree of contamination (observational experiment) and 2) when experiencing acute metal stress, respiration rate inhibition of metal tolerant communities is less than that of naïve communities (manipulative experiment). To test these hypotheses, 4 replicate columns containing sediment collected from differently contaminated CF reaches and reference sites were fed a solution of RRST, NaCl, and cadmium (manipulative experiment only) within 24 hrs post collection. 
Column effluent was collected and measured for Raz, Rru, and EC to determine the Raz Rru breakthrough curves (BTC), subsequently modeled by the RRADE and thereby allowing derivation of in situ rates of metabolism. RRADE parameter values are estimated through Metropolis Hastings MCMC optimization. Unknown prior parameter distributions (PD) were constrained via a sensitivity analysis, except for the empirically estimated velocity. MCMC simulations were initiated at random points within the PD. Convergence of target distributions (TD) is achieved when the variance of the mode values of the six RRADE parameters in independent model replication is at least 10^{-3} less than the mode value. Convergence of k12, the parameter of interest, was more resolved, with modal variance of replicate simulations ranging from 10^{-4} less than the modal value to 0. The MCMC algorithm presented here offers a robust approach to solve the inverse RRST model and could be easily adapted to other inverse problems.
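The Metropolis-Hastings step used to estimate the RRADE parameters can be sketched with a one-parameter toy problem. The forward model below is a placeholder for the actual Raz→Rru advection-dispersion solution, and the prior, proposal width, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model standing in for the RRADE solution: predict a
# breakthrough curve from a single rate parameter k12 (hypothetical setup).
t = np.linspace(0, 10, 50)
def forward(k12):
    return np.exp(-k12 * t) * t              # placeholder, not the real RRADE

k12_true, noise = 0.8, 0.02
obs = forward(k12_true) + rng.normal(0, noise, t.size)

def log_post(k12):
    if not (0 < k12 < 10):                   # uniform prior on (0, 10)
        return -np.inf
    return -0.5 * np.sum((obs - forward(k12)) ** 2) / noise**2

# Metropolis-Hastings: random-walk proposals, accept with prob min(1, ratio)
chain, cur = [], 2.0                         # random start within the prior
lp = log_post(cur)
for _ in range(5000):
    prop = cur + rng.normal(0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        cur, lp = prop, lp_prop
    chain.append(cur)
post = np.array(chain[1000:])                # discard burn-in
print(post.mean())                           # posterior mode near k12_true
```

In the actual application the chain runs over all six RRADE parameters at once, with convergence judged by the across-replicate variance of the posterior modes as described above.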
Entanglement, number fluctuations and optimized interferometric phase measurement
NASA Astrophysics Data System (ADS)
He, Q. Y.; Vaughan, T. G.; Drummond, P. D.; Reid, M. D.
2012-09-01
We derive a phase-entanglement criterion for two bosonic modes that is immune to number fluctuations, using the generalized Moore-Penrose inverse to normalize the phase-quadrature operator. We also obtain a phase-squeezing criterion that is immune to number fluctuations using similar techniques. These are used to obtain an operational definition of relative phase-measurement sensitivity via the analysis of phase measurement in interferometry. We show that these criteria are proportional to the enhanced phase-measurement sensitivity. The phase-entanglement criterion is the hallmark of a new type of quantum-squeezing, namely planar quantum-squeezing. This has the property that it squeezes simultaneously two orthogonal spin directions, which is possible owing to the fact that the SU(2) group that describes spin symmetry has a three-dimensional parameter space of higher dimension than the group for photonic quadratures. A practical advantage of planar quantum-squeezing is that, unlike conventional spin-squeezing, it allows noise reduction over all phase angles simultaneously. The application of this type of squeezing is to the quantum measurement of an unknown phase. We show that a completely unknown phase requires two orthogonal measurements and that with planar quantum-squeezing it is possible to reduce the measurement uncertainty independently of the unknown phase value. This is a different type of squeezing compared to the usual spin-squeezing interferometric criterion, which is applicable only when the measured phase is already known to a good approximation or can be measured iteratively. As an example, we calculate the phase entanglement of the ground state of a two-well, coupled Bose-Einstein condensate, similarly to recent experiments. This system demonstrates planar squeezing in both the attractive and the repulsive interaction regime.
Kwak, Dai Soon; Tao, Quang Bang; Todo, Mitsugu; Jeon, Insu
2012-05-01
Knee joint implants developed by western companies have been imported to Korea and used for Korean patients. However, many clinical problems occur in knee joints of Korean patients after total knee joint replacement owing to the geometric mismatch between the western implants and Korean knee joint structures. To solve these problems, a method to determine the representative dimension parameter values of Korean knee joints is introduced to aid in the design of knee joint implants appropriate for Korean patients. Measurements of the dimension parameters of 88 male Korean knee joint subjects were carried out. The distribution of the subjects versus each measured parameter value was investigated. The measured dimension parameter values of each parameter were grouped by suitable intervals called the "size group," and average values of the size groups were calculated. The knee joint subjects were grouped as the "patient group" based on "size group numbers" of each parameter. From the iterative calculations to decrease the errors between the average dimension parameter values of each "patient group" and the dimension parameter values of the subjects, the average dimension parameter values that give less than the error criterion were determined to be the representative dimension parameter values for designing knee joint implants for Korean patients.
NASA Astrophysics Data System (ADS)
Kernicky, Timothy; Whelan, Matthew; Al-Shaer, Ehab
2018-06-01
A methodology is developed for the estimation of internal axial force and boundary restraints within in-service, prismatic axial force members of structural systems using interval arithmetic and contractor programming. The determination of the internal axial force and end restraints in tie rods and cables using vibration-based methods has been a long standing problem in the area of structural health monitoring and performance assessment. However, for structural members with low slenderness where the dynamics are significantly affected by the boundary conditions, few existing approaches allow for simultaneous identification of internal axial force and end restraints and none permit for quantifying the uncertainties in the parameter estimates due to measurement uncertainties. This paper proposes a new technique for approaching this challenging inverse problem that leverages the Set Inversion Via Interval Analysis algorithm to solve for the unknown axial forces and end restraints using natural frequency measurements. The framework developed offers the ability to completely enclose the feasible solutions to the parameter identification problem, given specified measurement uncertainties for the natural frequencies. This ability to propagate measurement uncertainty into the parameter space is critical towards quantifying the confidence in the individual parameter estimates to inform decision-making within structural health diagnosis and prognostication applications. The methodology is first verified with simulated data for a case with unknown rotational end restraints and then extended to a case with unknown translational and rotational end restraints. A laboratory experiment is then presented to demonstrate the application of the methodology to an axially loaded rod with progressively increased end restraint at one end.
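The Set Inversion Via Interval Analysis step can be sketched with a hypothetical two-parameter model in place of the axially loaded member: boxes in parameter space are kept when their interval image lies inside the measured frequency intervals, discarded when it lies outside, and bisected otherwise. The model, measurement intervals, and tolerance below are all illustrative assumptions.

```python
# Minimal SIVIA sketch (hypothetical 2-parameter model, not the beam model
# from the paper): classify boxes in (k1, k2) space against interval-valued
# "natural frequency" measurements.
def f_interval(box):
    (a1, b1), (a2, b2) = box
    # Interval evaluation of f1 = k1 + k2 and f2 = k1 * k2 (k1, k2 > 0);
    # both are monotone increasing, so endpoints give exact bounds.
    return (a1 + a2, b1 + b2), (a1 * a2, b1 * b2)

def sivia(box, meas, eps, inside, boundary):
    fs = f_interval(box)
    if any(f[1] < m[0] or f[0] > m[1] for f, m in zip(fs, meas)):
        return                                    # provably outside: discard
    if all(m[0] <= f[0] and f[1] <= m[1] for f, m in zip(fs, meas)):
        inside.append(box)                        # provably inside: keep
        return
    (a1, b1), (a2, b2) = box
    if max(b1 - a1, b2 - a2) < eps:
        boundary.append(box)                      # too small to decide
        return
    if b1 - a1 >= b2 - a2:                        # bisect the widest side
        m = (a1 + b1) / 2
        sivia(((a1, m), (a2, b2)), meas, eps, inside, boundary)
        sivia(((m, b1), (a2, b2)), meas, eps, inside, boundary)
    else:
        m = (a2 + b2) / 2
        sivia(((a1, b1), (a2, m)), meas, eps, inside, boundary)
        sivia(((a1, b1), (m, b2)), meas, eps, inside, boundary)

inside, boundary = [], []
meas = [(2.9, 3.1), (1.9, 2.1)]                   # measured intervals for f1, f2
sivia(((0.1, 5.0), (0.1, 5.0)), meas, 0.01, inside, boundary)
print(len(inside) > 0, len(boundary) > 0)
```

The union of the "inside" boxes encloses every parameter combination consistent with the measurement intervals, which is exactly the guaranteed-enclosure property the methodology exploits for uncertainty quantification.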
On the abundance of extraterrestrial life after the Kepler mission
NASA Astrophysics Data System (ADS)
Wandel, Amri
2015-07-01
The data recently accumulated by the Kepler mission have demonstrated that small planets are quite common and that a significant fraction of all stars may have an Earth-like planet within their habitable zone. These results are combined with a Drake-equation formalism to derive the space density of biotic planets as a function of the relatively modest uncertainty in the astronomical data and of the (yet unknown) probability for the evolution of biotic life, F_b. I suggest that F_b may be estimated by future spectral observations of exoplanet biomarkers. If F_b is in the range 0.001-1, then a biotic planet may be expected within 10-100 light years from Earth. Extending the biotic results to advanced life I derive expressions for the distance to putative civilizations in terms of two additional Drake parameters - the probability for evolution of a civilization, F_c, and its average longevity. For instance, assuming optimistic probability values (F_b ~ F_c ~ 1) and a broadcasting longevity of a few thousand years, the likely distance to the nearest civilizations detectable by searching for intelligent electromagnetic signals is of the order of a few thousand light years. The probability of detecting intelligent signals with present and future radio telescopes is calculated as a function of the Drake parameters. Finally, I describe how the detection of intelligent signals would constrain the Drake parameters.
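The distance scaling behind the 10-100 light-year figure is a nearest-neighbor estimate from a number density. The stellar density and habitability fraction below are illustrative round numbers, not values from the paper, but they reproduce the quoted range as F_b varies over 0.001-1.

```python
import numpy as np

# Back-of-envelope sketch: mean distance to the nearest biotic planet from
# the space density of habitable planets times the biotic probability F_b.
n_stars = 0.004          # stars per cubic light year (solar neighborhood, rough)
f_hab = 0.1              # assumed fraction of stars with a habitable Earth analog

for F_b in (0.001, 0.01, 0.1, 1.0):
    n_biotic = n_stars * f_hab * F_b
    d = (3.0 / (4.0 * np.pi * n_biotic)) ** (1.0 / 3.0)   # radius enclosing ~1 planet
    print(f"F_b = {F_b:>5}: nearest biotic planet ~ {d:.0f} ly")
```

The same bookkeeping, multiplied further by F_c and a longevity-dependent duty cycle, yields the few-thousand-light-year distance to detectable civilizations quoted above.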
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model, that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX) where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
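The conventional baseline the abstract refers to can be sketched as a regularized linear inversion of y = M x, where M is the SRS matrix and x the unknown source term; the penalty weight lam plays the role of the manually specified uncertainty. The matrix, source profile, and noise below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

# y = M x + noise: observations as the SRS matrix times the source-term vector.
m, n = 80, 40
M = rng.random((m, n))
x_true = np.zeros(n)
x_true[10:14] = [2.0, 5.0, 3.0, 1.0]          # a brief release episode
y = M @ x_true + rng.normal(0, 0.1, m)

# Tikhonov-regularized least squares; lam is the hand-tuned parameter that
# the variational Bayes formulation instead estimates from the data.
lam = 1.0
x_hat = np.linalg.solve(M.T @ M + lam * np.eye(n), M.T @ y)
print(np.abs(x_hat - x_true).max())
```

The probabilistic model in the paper keeps this maximum likelihood solution as a special case but replaces the fixed lam (and related uncertainty settings) with quantities inferred iteratively from the measurements.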
Forecasting financial asset processes: stochastic dynamics via learning neural networks.
Giebel, S; Rainer, M
2010-01-01
Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component into their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations, due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, performed often without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The back propagation in training the previous weights is limited to a certain memory length (in the examples we consider 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.
Priors in Whole-Genome Regression: The Bayesian Alphabet Returns
Gianola, Daniel
2013-01-01
Whole-genome enabled prediction of complex traits has received enormous attention in animal and plant breeding and is making inroads into human and even Drosophila genetics. The term “Bayesian alphabet” denotes a growing number of letters of the alphabet used to denote various Bayesian linear regressions that differ in the priors adopted, while sharing the same sampling model. We explore the role of the prior distribution in whole-genome regression models for dissecting complex traits in what is now a standard situation with genomic data where the number of unknown parameters (p) typically exceeds sample size (n). Members of the alphabet aim to confront this overparameterization in various manners, but it is shown here that the prior is always influential, unless n ≫ p. This happens because parameters are not likelihood identified, so Bayesian learning is imperfect. Since inferences are not devoid of the influence of the prior, claims about genetic architecture from these methods should be taken with caution. However, all such procedures may deliver reasonable predictions of complex traits, provided that some parameters (“tuning knobs”) are assessed via a properly conducted cross-validation. It is concluded that members of the alphabet have room in whole-genome prediction of phenotypes, but have somewhat doubtful inferential value, at least when sample size is such that n ≪ p. PMID:23636739
Investigating Response from Turbulent Boundary Layer Excitations on a Real Launch Vehicle using SEA
NASA Technical Reports Server (NTRS)
Harrison, Phillip; LaVerde, Bruce; Teague, David
2009-01-01
Statistical Energy Analysis (SEA) response has been fairly well anchored to test observations for Diffuse Acoustic Field (DAF) loading by others. Meanwhile, not many examples can be found in the literature anchoring the SEA vehicle panel response results to Turbulent Boundary Layer (TBL) fluctuating pressure excitations. This deficiency is especially true for supersonic trajectories such as those required by this nation's launch vehicles. Space Shuttle response and excitation data recorded from vehicle flight measurements during the development flights were used in a trial to assess the capability of the SEA tool to predict similar responses. Various known/measured inputs were used. These were supplemented with a range of assumed values in order to cover unknown parameters of the flight. This comparison is presented as "Part A" of the study. A secondary, but perhaps more important, objective is to provide more clarity concerning the accuracy and conservatism that can be expected from response estimates of TBL-excited vehicle models in SEA (Part B). What range of parameters must be included in such an analysis in order to land on the conservative side in response predictions? What is the sensitivity of changes in these input parameters on the results? The TBL fluid structure loading model used for this study is provided by the SEA module of the commercial code VA One.
The Unknowns and Possible Implications of Mandatory Labeling.
McFadden, Brandon R
2017-01-01
The National Bioengineered Food Disclosure Standard requires a mandatory label for genetically modified (GM) food. Currently, some aspects of the bill are unknown, including what constitutes a food to be considered GM. The costs associated with this legislation will depend on how actors in the food value chain respond. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lukić, M.; Ćojbašić, Ž.; Rabasović, M. D.; Markushev, D. D.; Todorović, D. M.
2017-11-01
In this paper, the possibilities of computational intelligence applications for trace gas monitoring are discussed. For this, pulsed infrared photoacoustics is used to investigate SF6-Ar mixtures in a multiphoton regime, assisted by artificial neural networks. Feedforward multilayer perceptron networks are applied in order to recognize both the spatial characteristics of the laser beam and the values of laser fluence Φ from the given photoacoustic signal and prevent changes. Neural networks are trained in an offline batch training regime to simultaneously estimate four parameters from theoretical or experimental photoacoustic signals: the laser beam spatial profile R(r), vibrational-to-translational relaxation time τ_{V-T}, distance from the laser beam to the absorption molecules in the photoacoustic cell r* and laser fluence Φ. The results presented in this paper show that neural networks can estimate an unknown laser beam spatial profile and the parameters of photoacoustic signals in real time and with high precision. Real-time operation, high accuracy and the possibility of application for higher intensities of radiation for a wide range of laser fluences are factors that classify the computational intelligence approach as efficient and powerful for the in situ measurement of atmospheric pollutants.
Hataji, Osamu; Nishii, Yoichi; Ito, Kentaro; Sakaguchi, Tadashi; Saiki, Haruko; Suzuki, Yuta; D'Alessandro-Gabazza, Corina; Fujimoto, Hajime; Kobayashi, Tetsu; Gabazza, Esteban C.; Taguchi, Osamu
2017-01-01
Combined therapy with tiotropium and olodaterol notably improves parameters of lung function and quality of life in patients with chronic obstructive pulmonary disease (COPD) compared to mono-components; however, its effect on physical activity is unknown. The present study evaluated whether combination therapy affects daily physical performance in patients with COPD under a smart watch-based encouragement program. This was a non-blinded clinical trial with no randomization or placebo control. A total of 20 patients with COPD were enrolled in the present study. The patients carried an accelerometer for 4 weeks; they received no therapy during the first 2 weeks but they were treated with combined tiotropium and olodaterol under a smart watch-based encouragement program for the last 2 weeks. The pulmonary function test, COPD assessment test, 6-min walk distance and parameters of physical activity were significantly improved (P<0.05) by combination therapy under smart watch-based coaching compared with values prior to treatment. To the best of our knowledge, the present study for the first time provides evidence that smart watch-based coaching in combination with tiotropium and olodaterol may improve daily physical activity in chronic obstructive pulmonary disease. PMID:29104624
Petit, Magali; Vézina, François
2014-01-01
Reaction norms reflect an organism's capacity to adjust its phenotype to the environment and allow for identifying trait values associated with physiological limits. However, reaction norms of physiological parameters are mostly unknown for endotherms living in natural conditions. Black-capped chickadees (Poecile atricapillus) increase their metabolic performance during winter acclimatization and are thus a good model for measuring reaction norms in the wild. We repeatedly measured basal (BMR) and summit (Msum) metabolism in chickadees to characterize, for the first time in a free-living endotherm, reaction norms of these parameters across the natural range of weather variation. BMR varied between individuals and was weakly and negatively related to minimal temperature. Msum varied with minimal temperature following a Z-shaped curve, increasing linearly between 24°C and −10°C, and changed with absolute humidity following a U-shaped relationship. These results suggest that thermal exchanges with the environment have minimal effects on maintenance costs, which may be individual-dependent, whereas thermogenic capacity responds to body heat loss. Our results also suggest that BMR and Msum respond to different and likely independent constraints. PMID:25426860
NASA Astrophysics Data System (ADS)
Namwong, Lawit; Authayanun, Suthida; Saebea, Dang; Patcharavorachot, Yaneeporn; Arpornwichanop, Amornchai
2016-11-01
Proton-conducting solid oxide electrolysis cells (SOEC-H+) are a promising technology that can utilize carbon dioxide to produce syngas. In this work, a detailed electrochemical model was developed to predict the behavior of SOEC-H+ and to test the assumption that the syngas is produced through a reversible water-gas shift (RWGS) reaction. The simulation results obtained from the model, which took into account all of the cell voltage losses (i.e., ohmic, activation, and concentration losses), were validated using experimental data to evaluate the unknown parameters. The developed model was employed to examine the structural and operational parameters. It was found that the cathode-supported SOEC-H+ is the best configuration because it requires the lowest cell potential. SOEC-H+ operated favorably at high temperatures and low pressures. Furthermore, the simulation results revealed that the optimal S/C molar ratio for syngas production, which can be used for methanol synthesis, is approximately 3.9 (at a constant temperature and pressure). The SOEC-H+ was optimized using a response surface methodology, which was used to determine the optimal operating conditions to minimize the cell potential and maximize the carbon dioxide flow rate.
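As a rough illustration of the loss terms named above (ohmic, activation, and concentration), the following sketch evaluates a lumped cell-potential model. The functional forms are standard textbook expressions, and every parameter value (E_ocv, asr, j0, j_lim) is an invented placeholder, not a value fitted in the study.

```python
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # universal gas constant, J/(mol K)

def cell_potential(j, T, E_ocv=0.95, asr=0.3, j0=0.2, j_lim=2.0):
    """Cell potential (V) at current density j (A/cm^2) and temperature T (K).

    E_ocv : open-circuit voltage, V (placeholder)
    asr   : area-specific ohmic resistance, ohm cm^2 (placeholder)
    j0    : exchange current density, A/cm^2 (placeholder)
    j_lim : limiting current density, A/cm^2 (placeholder)
    """
    ohmic = asr * j                                        # ohmic loss
    activation = (R * T / F) * math.asinh(j / (2.0 * j0))  # Butler-Volmer, symmetric form
    concentration = -(R * T / (2.0 * F)) * math.log(1.0 - j / j_lim)  # mass-transport loss
    return E_ocv + ohmic + activation + concentration

v = cell_potential(1.0, 1073.0)  # potential at 1 A/cm^2 and 800 degC
```

In a model of this shape, the unknowns (for example asr and j0) would be fitted by matching computed potentials to measured polarization data, mirroring the validation step described above.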
AOTF hyperspectral microscopic imaging for foodborne pathogenic bacteria detection
NASA Astrophysics Data System (ADS)
Park, Bosoon; Lee, Sangdae; Yoon, Seung-Chul; Sundaram, Jaya; Windham, William R.; Hinton, Arthur, Jr.; Lawrence, Kurt C.
2011-06-01
The hyperspectral microscope imaging (HMI) method, which provides both spatial and spectral information, can be effective for foodborne pathogen detection. The AOTF-based hyperspectral microscope imaging method can be used to characterize spectral properties of biofilm formed by Salmonella enteritidis as well as Escherichia coli. The intensity of the spectral imagery and the pattern of spectral distribution varied with the system parameters (integration time and gain) of the HMI system. The preliminary results demonstrated the determination of optimum parameter values for the HMI system; the integration time must be no more than 250 ms for quality image acquisition from biofilm formed by S. enteritidis. Among the contiguous spectral imagery between 450 and 800 nm, the intensities of the spectral images at 498, 522, 550 and 594 nm were distinctive for biofilm, whereas the intensity of the spectral image at 546 nm was distinctive for E. coli. For more accurate comparison of intensity from spectral images, a calibration protocol using neutral density filters and multiple exposures needs to be developed to standardize image acquisition. For the identification or classification of unknown food pathogen samples, ground-truth regions-of-interest pixels need to be selected as "spectrally pure fingerprints" for the Salmonella and E. coli species.
Structure Elucidation of Unknown Metabolites in Metabolomics by Combined NMR and MS/MS Prediction
Boiteau, Rene M.; Hoyt, David W.; Nicora, Carrie D.; ...
2018-01-17
Here, we introduce a cheminformatics approach that combines highly selective and orthogonal structure elucidation parameters: accurate mass, MS/MS (MS2), and NMR in a single analysis platform to accurately identify unknown metabolites in untargeted studies. The approach starts with an unknown LC-MS feature, and then combines the experimental MS/MS and NMR information of the unknown to effectively filter the false positive candidate structures based on their predicted MS/MS and NMR spectra. We demonstrate the approach on a model mixture and then we identify an uncatalogued secondary metabolite in Arabidopsis thaliana. The NMR/MS2 approach is well suited for discovery of new metabolites in plant extracts, microbes, soils, dissolved organic matter, food extracts, biofuels, and biomedical samples, facilitating the identification of metabolites that are not present in experimental NMR and MS metabolomics databases.
Structure Elucidation of Unknown Metabolites in Metabolomics by Combined NMR and MS/MS Prediction
Hoyt, David W.; Nicora, Carrie D.; Kinmonth-Schultz, Hannah A.; Ward, Joy K.
2018-01-01
We introduce a cheminformatics approach that combines highly selective and orthogonal structure elucidation parameters: accurate mass, MS/MS (MS2), and NMR into a single analysis platform to accurately identify unknown metabolites in untargeted studies. The approach starts with an unknown LC-MS feature, and then combines the experimental MS/MS and NMR information of the unknown to effectively filter out the false positive candidate structures based on their predicted MS/MS and NMR spectra. We demonstrate the approach on a model mixture, and then we identify an uncatalogued secondary metabolite in Arabidopsis thaliana. The NMR/MS2 approach is well suited to the discovery of new metabolites in plant extracts, microbes, soils, dissolved organic matter, food extracts, biofuels, and biomedical samples, facilitating the identification of metabolites that are not present in experimental NMR and MS metabolomics databases. PMID:29342073
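The filtering step described above can be caricatured in a few lines: each candidate structure carries predicted MS/MS fragment masses and predicted NMR shifts, and candidates whose predictions match the experimental data poorly are discarded. All names, tolerances, and thresholds here are invented for illustration and do not come from the paper.

```python
# Hypothetical sketch of combined MS/MS + NMR candidate filtering.

def fragment_matches(predicted, observed, tol=0.01):
    """Count predicted fragment masses matching an observed mass within tol (Da)."""
    return sum(any(abs(p - o) <= tol for o in observed) for p in predicted)

def shift_matches(predicted, observed, tol=0.05):
    """Count predicted NMR shifts matching an observed shift within tol (ppm)."""
    return sum(any(abs(p - o) <= tol for o in observed) for p in predicted)

def filter_candidates(candidates, ms2_obs, nmr_obs, min_frac=0.6):
    """Keep candidates whose MS2 and NMR predictions both mostly match the data."""
    kept = []
    for name, ms2_pred, nmr_pred in candidates:
        f_ms2 = fragment_matches(ms2_pred, ms2_obs) / len(ms2_pred)
        f_nmr = shift_matches(nmr_pred, nmr_obs) / len(nmr_pred)
        if f_ms2 >= min_frac and f_nmr >= min_frac:
            kept.append(name)
    return kept

# Invented candidate structures and experimental data:
candidates = [
    ("structure_A", [85.03, 113.06, 145.05], [1.2, 3.4, 7.1]),
    ("structure_B", [59.05, 101.02, 131.09], [0.9, 5.5, 8.8]),
]
ms2_obs = [85.031, 113.058, 145.049, 60.02]
nmr_obs = [1.21, 3.42, 7.08, 2.5]
kept = filter_candidates(candidates, ms2_obs, nmr_obs)
```

The point of the two orthogonal filters is that a false positive that survives the MS/MS comparison is unlikely to also survive the NMR comparison.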
Autopilot for frequency-modulation atomic force microscopy.
Kuchuk, Kfir; Schlesinger, Itai; Sivan, Uri
2015-10-01
One of the most challenging aspects of operating an atomic force microscope (AFM) is finding optimal feedback parameters. This statement applies particularly to frequency-modulation AFM (FM-AFM), which utilizes three feedback loops to control the cantilever excitation amplitude, cantilever excitation frequency, and z-piezo extension. These loops are regulated by a set of feedback parameters, tuned by the user to optimize stability, sensitivity, and noise in the imaging process. Optimization of these parameters is difficult due to the coupling between the frequency and z-piezo feedback loops by the non-linear tip-sample interaction. Four proportional-integral (PI) parameters and two lock-in parameters regulating these loops require simultaneous optimization in the presence of a varying unknown tip-sample coupling. Presently, this optimization is done manually in a tedious process of trial and error. Here, we report on the development and implementation of an algorithm that computes the control parameters automatically. The algorithm reads the unperturbed cantilever resonance frequency, its quality factor, and the z-piezo driving signal power spectral density. It analyzes the poles and zeros of the total closed loop transfer function, extracts the unknown tip-sample transfer function, and finds four PI parameters and two lock-in parameters for the frequency and z-piezo control loops that optimize the bandwidth and step response of the total system. Implementation of the algorithm in a home-built AFM shows that the calculated parameters are consistently excellent and rarely require further tweaking by the user. The new algorithm saves the precious time of experienced users, facilitates utilization of FM-AFM by casual users, and removes the main hurdle on the way to fully automated FM-AFM.
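To make the pole analysis concrete, the sketch below models a single loop as a PI controller around a second-order, cantilever-like resonance and checks closed-loop stability from the roots of the characteristic polynomial. The plant and all gain values are hypothetical; the actual autopilot analyzes the full multi-loop transfer function including the extracted tip-sample coupling.

```python
import numpy as np

def closed_loop_poles(kp, ki, w0=1.0, Q=10.0):
    """Poles of the loop 1 + (kp + ki/s) * 1/(s^2 + (w0/Q) s + w0^2) = 0.

    Clearing denominators gives the characteristic polynomial
    s^3 + (w0/Q) s^2 + (w0^2 + kp) s + ki, whose roots are the poles.
    """
    return np.roots([1.0, w0 / Q, w0**2 + kp, ki])

def is_stable(kp, ki, **kw):
    """A linear loop is stable when every pole has a negative real part."""
    return bool(np.all(closed_loop_poles(kp, ki, **kw).real < 0))

stable = is_stable(1.0, 0.05)    # modest integral gain: stable
unstable = is_stable(1.0, 0.5)   # too much integral gain destabilizes this loop
```

An automatic tuner in this spirit would search the (kp, ki) plane for gains that keep all poles well damped while maximizing bandwidth.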
Bayesian source term determination with unknown covariance of measurements
NASA Astrophysics Data System (ADS)
Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav
2017-04-01
Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with estimation of the source term in the conventional linear inverse problem y = Mx, where the relationship between the vector of observations y and the unknown source term x is described by the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood, R, is also unknown. We consider two potential choices for the structure of the matrix R: the first is a diagonal matrix, and the second is a locally correlated structure using information on the topology of the measuring network. Since inference in the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014, Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
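For fixed R and B, the minimizer of the quadratic objective above has the closed form x = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y; the Bayesian method then iterates such an update together with updates of R and B. The sketch below implements only this fixed-R,B step on synthetic data (the SRS matrix and true source term are made up):

```python
import numpy as np

def map_source_term(M, y, R, B):
    """Closed-form minimizer of (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x."""
    Ri = np.linalg.inv(R)
    Bi = np.linalg.inv(B)
    return np.linalg.solve(M.T @ Ri @ M + Bi, M.T @ Ri @ y)

rng = np.random.default_rng(0)
M = rng.normal(size=(30, 5))          # synthetic SRS matrix (observations x sources)
x_true = np.array([0.0, 2.0, 0.0, 1.0, 0.0])
y = M @ x_true + 0.01 * rng.normal(size=30)

R = 0.01**2 * np.eye(30)              # measurement covariance (here assumed known)
B = 10.0 * np.eye(5)                  # prior covariance; scalar * identity = Tikhonov
x_hat = map_source_term(M, y, R, B)
```

A variational Bayes treatment would alternate this x-update with updates of the diagonal of B and of R until convergence, which is what makes both covariances learnable from the data.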
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimating ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for the reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest were estimated by nonlinear regression. Optimal estimates of parameter values range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first and then, using the optimized values for these parameters, estimate the entire data set.
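The calibration loop described above (adjust parameter values to minimize measured-minus-simulated water levels) can be sketched with a toy model and a Gauss-Newton iteration using a finite-difference Jacobian. The "model" here is an arbitrary two-parameter function standing in for the groundwater simulation; nothing below reproduces the actual west-central Florida model.

```python
import numpy as np

def simulate_heads(params, x):
    """Toy stand-in for a groundwater model: heads along a transect x."""
    a, b = params                       # hypothetical parameters
    return a * np.exp(-b * x)

def gauss_newton(x, observed, p0, n_iter=30, eps=1e-6):
    """Minimize sum((observed - simulate_heads(p, x))**2) by Gauss-Newton."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = observed - simulate_heads(p, x)           # residuals
        J = np.empty((x.size, p.size))
        for j in range(p.size):                       # finite-difference Jacobian
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (simulate_heads(p + dp, x) - simulate_heads(p, x)) / eps
        p = p + np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step
    return p

x = np.linspace(0.0, 2.0, 25)
observed = simulate_heads([3.0, 0.7], x)              # synthetic "measurements"
p_est = gauss_newton(x, observed, p0=[2.0, 0.5])
```

The insensitivity and correlation problems noted in the abstract show up in this framework as a nearly rank-deficient Jacobian J, which is why the authors recommend simplifying the parameter structure.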
Concurrently adjusting interrelated control parameters to achieve optimal engine performance
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-12-01
Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
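A minimal sketch of the coupled adjustment described above: one parameter is nudged proportionally to the performance error, and the second parameter receives a corresponding, coupled adjustment. The performance model, gains, and coupling rule are invented for illustration; the patent does not specify them.

```python
def performance(p1, p2):
    """Hypothetical engine performance variable as a function of two parameters."""
    return 2.0 * p1 + 1.0 * p2

def adjust(p1, p2, target, gain=0.1, couple=0.5, n_steps=200):
    """Drive the performance variable toward target by coupled adjustments."""
    for _ in range(n_steps):
        error = target - performance(p1, p2)
        dp1 = gain * error      # primary adjustment of the first parameter
        p1 += dp1
        p2 += couple * dp1      # coupled adjustment of the second parameter
    return p1, p2

p1, p2 = adjust(1.0, 1.0, target=10.0)
```

With these made-up gains the error shrinks by a fixed factor each step, so the performance variable converges to its target; a real controller would also bound the parameters to their admissible operating ranges.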
Tactile perception and working memory in rats and humans
Fassihi, Arash; Akrami, Athena; Esmaeili, Vahid; Diamond, Mathew E.
2014-01-01
Primates can store sensory stimulus parameters in working memory for subsequent manipulation, but until now, there has been no demonstration of this capacity in rodents. Here we report tactile working memory in rats. Each stimulus is a vibration, generated as a series of velocity values sampled from a normal distribution. To perform the task, the rat positions its whiskers to receive two such stimuli, “base” and “comparison,” separated by a variable delay. It then judges which stimulus had greater velocity SD. In analogous experiments, humans compare two vibratory stimuli on the fingertip. We demonstrate that the ability of rats to hold base stimulus information (for up to 8 s) and their acuity in assessing stimulus differences overlap the performance demonstrated by humans. This experiment highlights the ability of rats to perceive the statistical structure of vibrations and reveals their previously unknown capacity to store sensory information in working memory. PMID:24449850
Catastrophic Shifts in Semiarid Vegetation-Soil Systems May Unfold Rapidly or Slowly.
Karssenberg, Derek; Bierkens, Marc F P; Rietkerk, Max
2017-12-01
Under gradual change of a driver, complex systems may switch between contrasting stable states. For many ecosystems it is unknown how rapidly such a critical transition unfolds. Here we explore the rate of change during the degradation of a semiarid ecosystem with a model coupling the vegetation and the geomorphological system. Two stable states, vegetated and bare, are identified, and it is shown that the change between these states is a critical transition. Surprisingly, the critical transition between the vegetated and bare states can unfold either rapidly, over a few years, or gradually, over decades up to millennia, depending on parameter values. An important condition for this phenomenon is the linkage between slow and fast ecosystem components. Our results show that, next to climate change and disturbance rates, the geological and geomorphological setting of a semiarid ecosystem is crucial in predicting its fate.
Correction of broadband albedo measurements affected by unknown slope and sensor tilts
NASA Astrophysics Data System (ADS)
Weiser, Ursula; Olefs, Marc; Schöner, Wolfgang; Weyss, Gernot; Hynek, Bernhard
2017-02-01
Geometric effects induced by the underlying terrain slope or by tilt errors of radiation sensors lead to erroneous measurements of snow or ice albedo. Consequently, diurnal albedo variations are observed. A general method is presented to correct tilt errors in albedo measurements in cases where the tilts of both the sensors and the slope are not accurately measured or known. Atmospheric parameters for this correction method can either be taken from a nearby well-maintained, horizontally levelled measurement of global radiation or, alternatively, from a solar radiation model. In a next step the model is fitted to the measured data to determine the tilts and directions of the sensors and the underlying terrain slope. This then allows correction of the measured albedo, the radiative balance and the energy balance. Depending on the direction of the slope and the sensors, a comparison between measured and corrected albedo values reveals obvious over- or underestimations of albedo.
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method where a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
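The multistart idea can be illustrated with a toy objective that has two minima: run a derivative-free local search from many random starting points and keep the distinct minimizers found. The local search below is a basic compass/pattern search, a simplified stand-in for MADS, and the objective is an arbitrary function, not an inverse transport problem.

```python
import random

def objective(x):
    """Toy multimodal objective with two minima, at x = -1 and x = +1."""
    return (x**2 - 1.0)**2

def pattern_search(x, step=0.5, tol=1e-6):
    """Derivative-free local descent: poll x +/- step, shrink step on failure."""
    while step > tol:
        if objective(x + step) < objective(x):
            x += step
        elif objective(x - step) < objective(x):
            x -= step
        else:
            step *= 0.5   # no poll point improved: refine the mesh
    return x

def multistart(n_starts=30, seed=1):
    """Collect distinct local minimizers found from random start points."""
    rng = random.Random(seed)
    found = []
    for _ in range(n_starts):
        x = pattern_search(rng.uniform(-2.0, 2.0))
        if not any(abs(x - m) < 1e-3 for m in found):
            found.append(x)
    return sorted(found)

minima = multistart()
```

The MLSL refinement of this idea spends local searches more economically, starting them only from sample points that are not already inside the attraction basin of a known minimizer.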
Directed search for continuous gravitational waves from the Galactic center
NASA Astrophysics Data System (ADS)
Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T.; Abernathy, M. R.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Adhikari, R. X.; Affeldt, C.; Agathos, M.; Aggarwal, N.; Aguiar, O. D.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, R. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C.; Areeda, J.; Ast, S.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Austin, L.; Aylott, B. E.; Babak, S.; Baker, P. T.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barker, D.; Barnum, S. H.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Bergmann, G.; Berliner, J. M.; Bertolini, A.; Bessis, D.; Betzwieser, J.; Beyersdorf, P. T.; Bhadbhade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Blom, M.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogan, C.; Bond, C.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Bose, S.; Bosi, L.; Bowers, J.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brannen, C. A.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brückner, F.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calderón Bustillo, J.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K. C.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Castiglia, A.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chu, Q.; Chua, S. S. 
Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. E.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Colombini, M.; Constancio, M., Jr.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coulon, J.-P.; Countryman, S.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Craig, K.; Creighton, J. D. E.; Creighton, T. D.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dahl, K.; Dal Canton, T.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Dayanga, T.; De Rosa, R.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Deleeuw, E.; Deléglise, S.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; DeRosa, R.; DeSalvo, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Dmitry, K.; Donovan, F.; Dooley, K. L.; Doravari, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edwards, M.; Effler, A.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Endrőczi, G.; Essick, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fang, Q.; Farr, B.; Farr, W.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R.; Flaminio, R.; Foley, E.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P.; Fyffe, M.; Gair, J.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gordon, N.; Gorodetsky, M. 
L.; Gossan, S.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Griffo, C.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, B.; Hall, E.; Hammer, D.; Hammond, G.; Hanke, M.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haughian, K.; Hayama, K.; Heefner, J.; Heidmann, A.; Heintze, M.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Holtrop, M.; Hong, T.; Hooper, S.; Horrom, T.; Hosken, D. J.; Hough, J.; Howell, E. J.; Hu, Y.; Hua, Z.; Huang, V.; Huerta, E. A.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh, M.; Huynh-Dinh, T.; Iafrate, J.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Iyer, B. R.; Izumi, K.; Jacobson, M.; James, E.; Jang, H.; Jang, Y. J.; Jaranowski, P.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kasprzack, M.; Kasturi, R.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kaufman, K.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kells, W.; Keppel, D. G.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B. K.; Kim, C.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Koehlenbeck, S.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kremin, A.; Kringel, V.; Krishnan, B.; Królak, A.; Kucharczyk, C.; Kudla, S.; Kuehn, G.; Kumar, A.; Kumar, P.; Kumar, R.; Kurdyumov, R.; Kwee, P.; Landry, M.; Lantz, B.; Larson, S.; Lasky, P. D.; Lawrie, C.; Lazzarini, A.; Le Roux, A.; Leaci, P.; Lebigot, E. O.; Lee, C.-H.; Lee, H. K.; Lee, H. M.; Lee, J.; Lee, J.; Leonardi, M.; Leong, J. 
R.; Leroy, N.; Letendre, N.; Levine, B.; Lewis, J. B.; Lhuillier, V.; Li, T. G. F.; Lin, A. C.; Littenberg, T. B.; Litvine, V.; Liu, F.; Liu, H.; Liu, Y.; Liu, Z.; Lloyd, D.; Lockerbie, N. A.; Lockett, V.; Lodhia, D.; Loew, K.; Logue, J.; Lombardi, A. L.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Luan, J.; Lubinski, M. J.; Lück, H.; Lundgren, A. P.; Macarthur, J.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magana-Sandoval, F.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Manca, G. M.; Mandel, I.; Mandic, V.; Mangano, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Martinelli, L.; Martynov, D.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; May, G.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Meier, T.; Melatos, A.; Mendell, G.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Mikhailov, E. E.; Milano, L.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohapatra, S. R. P.; Mokler, F.; Moraru, D.; Moreno, G.; Morgado, N.; Mori, T.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Munch, J.; Murphy, D.; Murray, P. G.; Mytidis, A.; Nagy, M. F.; Nanda Kumar, D.; Nardecchia, I.; Nash, T.; Naticchioni, L.; Nayak, R.; Necula, V.; Neri, I.; Newton, G.; Nguyen, T.; Nishida, E.; Nishizawa, A.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L. K.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oppermann, P.; O'Reilly, B.; Ortega Larcher, W.; O'Shaughnessy, R.; Osthelder, C.; Ottaway, D. 
J.; Ottens, R. S.; Ou, J.; Overmier, H.; Owen, B. J.; Padilla, C.; Pai, A.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Paris, H.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Peiris, P.; Penn, S.; Perreca, A.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pinard, L.; Pindor, B.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Poeld, J.; Poggiani, R.; Poole, V.; Poux, C.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Quintero, E.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Raja, S.; Rajalakshmi, G.; Rakhmanov, M.; Ramet, C.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Robertson, N. A.; Robinet, F.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Roever, C.; Rolland, L.; Rollins, J. G.; Romano, J. D.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sanders, J.; Sannibale, V.; Santiago-Prieto, I.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schreiber, E.; Schuette, D.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sergeev, A.; Shaddock, D.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siellez, K.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G. R.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Soden, K.; Son, E. 
J.; Sorazu, B.; Souradeep, T.; Sperandio, L.; Staley, A.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stevens, D.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szeifert, G.; Tacca, M.; Talukder, D.; Tang, L.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thirugnanasambandam, M. P.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Unnikrishnan, C. S.; Vahlbruch, H.; Vajente, G.; Vallisneri, M.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van der Sluys, M. V.; van Heijningen, J.; van Veggel, A. A.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Verma, S.; Vetrano, F.; Viceré, A.; Vincent-Finley, R.; Vinet, J.-Y.; Vitale, S.; Vlcek, B.; Vo, T.; Vocca, H.; Vorvick, C.; Vousden, W. D.; Vrinceanu, D.; Vyachanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Waldman, S. J.; Walker, M.; Wallace, L.; Wan, Y.; Wang, J.; Wang, M.; Wang, X.; Wanner, A.; Ward, R. L.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wibowo, S.; Wiesner, K.; Wilkinson, C.; Williams, L.; Williams, R.; Williams, T.; Willis, J. L.; Willke, B.; Wimmer, M.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yancey, C. C.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yum, H.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhao, C.; Zhu, H.; Zhu, X. J.; Zotov, N.; Zucker, M. E.; Zweizig, J.
2013-11-01
We present the results of a directed search for continuous gravitational waves from unknown, isolated neutron stars in the Galactic center region, performed on two years of data from LIGO's fifth science run, using two LIGO detectors. The search uses a semicoherent approach, analyzing coherently 630 segments, each spanning 11.5 hours, and then incoherently combining the results of the single segments. It covers gravitational wave frequencies from 78 to 496 Hz and a frequency-dependent range of first-order spindown values down to −7.86 × 10⁻⁸ Hz/s at the highest frequency. No gravitational waves were detected. The 90% confidence upper limits on the gravitational wave amplitude of sources at the Galactic center are ≈3.35 × 10⁻²⁵ for frequencies near 150 Hz. These upper limits are the most constraining to date for a large-parameter-space search for continuous gravitational wave signals.
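The semicoherent scheme described above can be sketched in miniature: analyze each segment coherently (a DFT power at each template frequency), then sum the per-segment powers bin by bin, discarding the phase between segments. This is a toy illustration under made-up numbers (a 12 Hz sinusoid with a random phase per segment), not the LIGO pipeline.

```python
import cmath, math, random

def segment_power(segment, freqs, dt):
    """Coherent stage: DFT power of one data segment at each
    template frequency (Hz)."""
    n = len(segment)
    powers = []
    for f in freqs:
        s = sum(x * cmath.exp(-2j * math.pi * f * k * dt)
                for k, x in enumerate(segment))
        powers.append(abs(s) ** 2 / n)
    return powers

def semicoherent_statistic(segments, freqs, dt):
    """Incoherent stage: sum per-segment powers bin by bin,
    so phase coherence between segments is not required."""
    totals = [0.0] * len(freqs)
    for seg in segments:
        for i, p in enumerate(segment_power(seg, freqs, dt)):
            totals[i] += p
    return totals

random.seed(1)
dt = 0.01                      # sample interval (made up)
f_signal = 12.0                # injected signal frequency (made up)
segments = []
for _ in range(6):             # 6 segments with independent phases
    phase = random.uniform(0, 2 * math.pi)
    segments.append([math.sin(2 * math.pi * f_signal * k * dt + phase)
                     + random.gauss(0, 1.0) for k in range(200)])

freqs = [10.0 + 0.5 * i for i in range(9)]   # search band 10-14 Hz
stat = semicoherent_statistic(segments, freqs, dt)
best = freqs[stat.index(max(stat))]          # loudest frequency bin
```

Because the powers add while the noise averages out, the injected frequency dominates the stacked statistic even though each segment has a different, unknown phase.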
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brewer, Brendon J.; Foreman-Mackey, Daniel; Hogg, David W., E-mail: bj.brewer@auckland.ac.nz
We present and implement a probabilistic (Bayesian) method for producing catalogs from images of stellar fields. The method is capable of inferring the number of sources N in the image and can also handle the challenges introduced by noise, overlapping sources, and an unknown point-spread function. The luminosity function of the stars can also be inferred, even when the precise luminosity of each star is uncertain, via the use of a hierarchical Bayesian model. The computational feasibility of the method is demonstrated on two simulated images with different numbers of stars. We find that our method successfully recovers the input parameter values along with principled uncertainties even when the field is crowded. We also compare our results with those obtained from the SExtractor software. While the two approaches largely agree about the fluxes of the bright stars, the Bayesian approach provides more accurate inferences about the faint stars and the number of stars, particularly in the crowded case.
Schönweiler, R; Wübbelt, P; Tolloczko, R; Rose, C; Ptok, M
2000-01-01
Discriminant analysis (DA) and self-organizing feature maps (SOFM) were used to classify passively evoked auditory event-related potentials (ERP) P(1), N(1), P(2) and N(2). Responses from 16 children with severe behavioral auditory perception deficits, 16 children with marked behavioral auditory perception deficits, and 14 controls were examined. Eighteen ERP amplitude parameters were selected for examination of statistical differences between the groups. Different DA methods and SOFM configurations were trained on these values. SOFM yielded better classification results than the DA methods. Subsequently, measures from another 37 subjects that were unknown to the trained SOFM were used to test the reliability of the system. With 10-dimensional vectors, reliable classifications were obtained that matched behavioral auditory perception deficits in 96% of cases, implying central auditory processing disorder (CAPD). The results also support the assumption that CAPD includes a 'non-peripheral' auditory processing deficit. Copyright 2000 S. Karger AG, Basel.
Arroyo, Paula; Sáenz de Miera, Luis E; Ansola, Gemma
2015-02-15
Bacteria are key players in wetland ecosystems; however, many essential aspects regarding the ecology of wetland bacterial communities remain unknown. The present study characterizes soil bacterial communities from natural and constructed wetlands through the pyrosequencing of 16S rDNA genes in order to evaluate the influence of wetland variables on bacterial community composition and structure. The results show that the composition of soil bacterial communities was significantly associated with the wetland type (natural or constructed wetland), the type of environment (lagoon, Typha or Salix) and three continuous parameters (SOM, COD and TKN). However, no clear associations were observed with soil pH. Bacterial diversity values were significantly lower in the constructed wetland with the highest inlet nutrient concentrations. The abundances of particular metabolic groups were also related to wetland characteristics. Copyright © 2014 Elsevier B.V. All rights reserved.
Khazaee, Mostafa; Markazi, Amir H D; Omidi, Ehsan
2015-11-01
In this paper, a new Adaptive Fuzzy Predictive Sliding Mode Control (AFP-SMC) is presented for nonlinear systems with uncertain dynamics and unknown input delay. The control unit consists of a fuzzy inference system to approximate the ideal linearization control, together with a switching strategy to compensate for the estimation errors. Also, an adaptive fuzzy predictor is used to estimate the future values of the system states to compensate for the time delay. Adaptation laws are used to tune the controller and predictor parameters, which guarantees stability based on a Lyapunov-Krasovskii functional. To evaluate the method's effectiveness, simulation and experimental results for an overhead crane system are presented. According to the obtained results, AFP-SMC can effectively control uncertain nonlinear systems subject to input delays of known bound. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavel, D.T.
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.
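The core idea of a maximum likelihood identifier can be sketched in a few lines: simulate the model for a candidate parameter, score the noisy measurements with a Gaussian log-likelihood, and maximize over the parameter. This is a minimal toy (a scalar decay model x_{k+1} = a·x_k and a grid search), not the MXLKID program; all values are made up.

```python
import math, random

def log_likelihood(a, ys, x0, sigma):
    """Gaussian log-likelihood of measurements ys of the trajectory
    x_{k+1} = a * x_k  (a scalar stand-in for a dynamic system model)."""
    ll, x = 0.0, x0
    for y in ys:
        x = a * x                                  # propagate the model
        ll += (-0.5 * ((y - x) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi)))
    return ll

# simulate noisy measurement data with a known "true" parameter
random.seed(0)
a_true, x0, sigma = 0.9, 10.0, 0.2
x, ys = x0, []
for _ in range(50):
    x = a_true * x
    ys.append(x + random.gauss(0, sigma))

# identification: maximize the likelihood over candidate parameters
grid = [0.5 + 0.001 * i for i in range(501)]       # a in [0.5, 1.0]
a_hat = max(grid, key=lambda a: log_likelihood(a, ys, x0, sigma))
```

A production identifier would use a gradient- or quasi-Newton maximizer rather than a grid, but the likelihood-evaluation-plus-maximization structure is the same.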
On estimating the phase of periodic waveform in additive Gaussian noise, part 2
NASA Astrophysics Data System (ADS)
Rauch, L. L.
1984-11-01
Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.
A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications
NASA Technical Reports Server (NTRS)
Phan, Minh Q.
1998-01-01
This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.
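The key property exploited above — the model is nonlinear in the input but linear in the basis-function coefficients — means ordinary least squares identifies the parameters. A minimal sketch under assumed choices (Gaussian bumps at two resolutions, a sine as the "unknown" system; none of this is the report's actual basis):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def basis(x):
    """Multi-resolution basis: a constant plus Gaussian bumps at two
    successively finer resolutions on [0, 1]."""
    cols = [1.0]
    for level in (2, 4):
        for j in range(level):
            c = (j + 0.5) / level
            cols.append(math.exp(-((x - c) * level) ** 2))
    return cols

xs = [i / 99 for i in range(100)]
ys = [math.sin(2 * math.pi * x) for x in xs]   # "unknown" system to identify

# linear-in-the-parameters => normal equations give the coefficients
Phi = [basis(x) for x in xs]
m = len(Phi[0])
AtA = [[sum(r[i] * r[j] for r in Phi) for j in range(m)] for i in range(m)]
Atb = [sum(r[i] * y for r, y in zip(Phi, ys)) for i in range(m)]
coef = solve(AtA, Atb)

fit = [sum(c, ) for c in []]  # placeholder removed below
fit = [sum(c * b for c, b in zip(coef, basis(x))) for x in xs]
rms = math.sqrt(sum((f - y) ** 2 for f, y in zip(fit, ys)) / len(xs))
```

Pruning the basis functions with the smallest coefficients, as the report describes, would then shrink the model while keeping the fit; here the point is only that the estimation step is linear.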
3D tomographic reconstruction using geometrical models
NASA Astrophysics Data System (ADS)
Battle, Xavier L.; Cunningham, Gregory S.; Hanson, Kenneth M.
1997-04-01
We address the issue of reconstructing an object of constant interior density in the context of 3D tomography where there is prior knowledge about the unknown shape. We explore the direct estimation of the parameters of a chosen geometrical model from a set of radiographic measurements, rather than performing operations (segmentation for example) on a reconstructed volume. The inverse problem is posed in the Bayesian framework. A triangulated surface describes the unknown shape and the reconstruction is computed with a maximum a posteriori (MAP) estimate. The adjoint differentiation technique computes the derivatives needed for the optimization of the model parameters. We demonstrate the usefulness of the approach and emphasize the techniques of designing forward and adjoint codes. We use the system response of the University of Arizona Fast SPECT imager to illustrate this method by reconstructing the shape of a heart phantom.
Global identifiability of linear compartmental models--a computer algebra algorithm.
Audoly, S; D'Angiò, L; Saccomani, M P; Cobelli, C
1998-01-01
A priori global identifiability deals with the uniqueness of the solution for the unknown parameters of a model and is thus a prerequisite for parameter estimation of biological dynamic models. Global identifiability is, however, difficult to test, since it requires solving a system of algebraic nonlinear equations which increases both in nonlinearity degree and in number of terms and unknowns with increasing model order. In this paper, a computer algebra tool, GLOBI (GLOBal Identifiability), is presented, which combines the topological transfer function method with the Buchberger algorithm to test global identifiability of linear compartmental models. GLOBI allows for the automatic testing of a priori global identifiability of general-structure compartmental models from general multi-input multi-output experiments. Examples of usage of GLOBI to analyze a priori global identifiability of some complex biological compartmental models are provided.
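The uniqueness question GLOBI addresses can be illustrated with the classic hand-worked two-compartment counterexample (a sketch, not the GLOBI/Buchberger machinery; the rate constants are made up): two distinct parameter vectors yield identical transfer-function coefficients, so the parameters are not globally identifiable from input-output data alone.

```python
def tf_coeffs(k01, k21, k12, k02):
    """Observable transfer-function coefficients of the two-compartment model
        x1' = -(k01 + k21) x1 + k12 x2 + u
        x2' =  k21 x1 - (k02 + k12) x2,   y = x1,
    which gives  Y/U = (s + b0) / (s^2 + a1 s + a0)."""
    a11 = k01 + k21
    a22 = k02 + k12
    b0 = a22
    a1 = a11 + a22
    a0 = a11 * a22 - k12 * k21
    return (b0, a1, a0)

p1 = (0.5, 0.3, 0.2, 0.4)   # (k01, k21, k12, k02), made-up rate constants
p2 = (0.6, 0.2, 0.3, 0.3)   # a different parameter vector...
c1, c2 = tf_coeffs(*p1), tf_coeffs(*p2)
# ...but c1 == c2: three observable coefficients cannot pin down
# four unknown rate constants, so the model is unidentifiable.
```

GLOBI automates exactly this kind of check symbolically, for general compartmental structures, via Groebner bases instead of hand manipulation.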
Multilevel adaptive control of nonlinear interconnected systems.
Motallebzadeh, Farzaneh; Ozgoli, Sadjaad; Momeni, Hamid Reza
2015-01-01
This paper presents an adaptive backstepping-based multilevel approach for the first time to control nonlinear interconnected systems with unknown parameters. The system consists of a nonlinear controller at the first level to neutralize the interaction terms, and some adaptive controllers at the second level, in which the gains are optimally tuned using genetic algorithm. The presented scheme can be used in systems with strong couplings where completely ignoring the interactions leads to problems in performance or stability. In order to test the suitability of the method, two case studies are provided: the uncertain double and triple coupled inverted pendulums connected by springs with unknown parameters. The simulation results show that the method is capable of controlling the system effectively, in both regulation and tracking tasks. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Fuzzy similarity measures for ultrasound tissue characterization
NASA Astrophysics Data System (ADS)
Emara, Salem M.; Badawi, Ahmed M.; Youssef, Abou-Bakr M.
1995-03-01
Computerized ultrasound tissue characterization has become an objective means for the diagnosis of diseases. It is difficult to differentiate diffuse liver diseases, namely cirrhotic and fatty liver, from a normal one by visual inspection of ultrasound images. The visual criteria for differentiating diffuse diseases are rather confusing and highly dependent upon the sonographer's experience. The need for computerized tissue characterization is thus justified, to quantitatively assist the sonographer in accurate differentiation and to minimize the degree of risk from erroneous interpretation. In this paper we use the fuzzy similarity measure as an approximate reasoning technique to find the maximum degree of matching between an unknown case, defined by a feature vector, and a family of prototypes (knowledge base). The feature vector used for the matching process contains 8 quantitative parameters (textural, acoustical, and speckle parameters) extracted from the ultrasound image. The steps taken to match an unknown case with the family of prototypes (cirrhotic, fatty, normal) are: choosing the membership functions for each parameter; obtaining the fuzzification matrix for the unknown case and the family of prototypes; obtaining the similarity matrix by the linguistic evaluation of two fuzzy quantities; and obtaining the degree of similarity by a simple aggregation method and the fuzzy integrals. Finally, we find that the similarity measure results are comparable to neural network classification techniques, and the method can be used in medical diagnosis to determine the pathology of the liver and to monitor the extent of the disease.
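The matching idea above can be sketched minimally: fuzzify each feature with a membership function per prototype class, then aggregate the memberships into a similarity score and pick the best-matching class. The triangular membership values and the two-feature vector below are hypothetical stand-ins for the paper's eight-parameter feature set.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# hypothetical prototypes: one (a, b, c) triple per feature and class
PROTOTYPES = {
    "normal":    [(0.0, 0.2, 0.5), (0.0, 0.3, 0.6)],
    "fatty":     [(0.3, 0.6, 0.9), (0.2, 0.5, 0.8)],
    "cirrhotic": [(0.5, 0.8, 1.0), (0.4, 0.7, 1.0)],
}

def classify(case):
    """Fuzzify every feature of the unknown case against each prototype,
    aggregate with the mean, and return the best-matching class."""
    scores = {}
    for label, mfs in PROTOTYPES.items():
        memberships = [tri(x, *m) for x, m in zip(case, mfs)]
        scores[label] = sum(memberships) / len(memberships)
    return max(scores, key=scores.get), scores

label, scores = classify([0.62, 0.48])   # made-up normalized feature vector
```

The paper's linguistic evaluation and fuzzy-integral aggregation are richer than the plain mean used here, but the pipeline shape (fuzzify, compare, aggregate, take the maximum) is the same.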
Novel methodologies for spectral classification of exon and intron sequences
NASA Astrophysics Data System (ADS)
Kwan, Hon Keung; Kwan, Benjamin Y. M.; Kwan, Jennifer Y. Y.
2012-12-01
Digital processing of a nucleotide sequence requires it to be mapped to a numerical sequence in which the choice of nucleotide to numeric mapping affects how well its biological properties can be preserved and reflected from nucleotide domain to numerical domain. Digital spectral analysis of nucleotide sequences unfolds a period-3 power spectral value which is more prominent in an exon sequence as compared to that of an intron sequence. The success of a period-3 based exon and intron classification depends on the choice of a threshold value. The main purposes of this article are to introduce novel codes for 1-sequence numerical representations for spectral analysis and compare them to existing codes to determine appropriate representation, and to introduce novel thresholding methods for more accurate period-3 based exon and intron classification of an unknown sequence. The main findings of this study are summarized as follows: Among sixteen 1-sequence numerical representations, the K-Quaternary Code I offers an attractive performance. A windowed 1-sequence numerical representation (with window length of 9, 15, and 24 bases) offers a possible speed gain over non-windowed 4-sequence Voss representation which increases as sequence length increases. A winner threshold value (chosen from the best among two defined threshold values and one other threshold value) offers a top precision for classifying an unknown sequence of specified fixed lengths. An interpolated winner threshold value applicable to an unknown and arbitrary length sequence can be estimated from the winner threshold values of fixed length sequences with a comparable performance. In general, precision increases as sequence length increases. The study contributes an effective spectral analysis of nucleotide sequences to better reveal embedded properties, and has potential applications in improved genome annotation.
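The period-3 measure underlying this classification can be sketched with the classical 4-sequence Voss representation mentioned above (the paper's 1-sequence codes and winner thresholds are not reproduced here; the sequences are toy examples): map each base to a binary indicator sequence, take the DFT power at bin k = N/3, and compare the result to a threshold.

```python
import cmath, math, random

def period3_power(seq):
    """Summed DFT power of the four Voss indicator sequences (A, C, G, T
    channels) at the period-3 frequency bin k = N/3."""
    n = len(seq)
    k = n / 3.0
    total = 0.0
    for base in "ACGT":
        u = [1.0 if b == base else 0.0 for b in seq]
        s = sum(x * cmath.exp(-2j * math.pi * k * m / n)
                for m, x in enumerate(u))
        total += abs(s) ** 2
    return total / n

random.seed(3)
exon_like = "ATGGCC" * 30                         # strong codon periodicity
intron_like = "".join(random.choice("ACGT") for _ in range(180))

p_exon = period3_power(exon_like)
p_intron = period3_power(intron_like)
# classification then reduces to  p > threshold  for some chosen threshold,
# which is exactly where the paper's winner-threshold methods come in
```

The exon-like sequence shows a pronounced period-3 peak while the random sequence does not, which is why the choice of threshold between the two regimes governs classification precision.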
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-01-01
Background: We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results: We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems, and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion: Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values.
This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289
Piatanesi, A.; Cirella, A.; Spudich, P.; Cocco, M.
2007-01-01
We present a two-stage nonlinear technique to invert strong motion records and geodetic data to retrieve the rupture history of an earthquake on a finite fault. To account for the actual rupture complexity, the fault parameters are spatially variable peak slip velocity, slip direction, rupture time and risetime. The unknown parameters are given at the nodes of the subfaults, whereas the parameters within a subfault are allowed to vary through a bilinear interpolation of the nodal values. The forward modeling is performed with a discrete wave number technique, whose Green's functions include the complete response of the vertically varying Earth structure. During the first stage, an algorithm based on heat-bath simulated annealing generates an ensemble of models that efficiently samples the good data-fitting regions of parameter space. In the second stage (appraisal), the algorithm performs a statistical analysis of the model ensemble and computes a weighted mean model and its standard deviation. This technique, rather than simply looking at the best model, extracts the most stable features of the earthquake rupture that are consistent with the data and gives an estimate of the variability of each model parameter. We present synthetic tests to show the effectiveness of the method and its robustness to uncertainty in the adopted crustal model. Finally, we apply this inverse technique to the well recorded 2000 western Tottori, Japan, earthquake (Mw 6.6); we confirm that the rupture process is characterized by large slip (3-4 m) at very shallow depths but, in contrast to previous studies, we image a new slip patch (2-2.5 m) located deeper, between 14 and 18 km depth. Copyright 2007 by the American Geophysical Union.
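The appraisal stage described above — a weighted mean model and its standard deviation over the sampled ensemble — can be sketched as follows. This is a toy stand-in: the ensemble here is drawn synthetically around made-up "true" parameters rather than produced by heat-bath simulated annealing, and the exponential misfit weighting is an assumed choice.

```python
import math, random

def appraise(models, misfits):
    """Appraisal: weight each sampled model by exp(-misfit), then return
    the weighted mean and standard deviation of every model parameter."""
    w = [math.exp(-m) for m in misfits]
    z = sum(w)
    nparams = len(models[0])
    mean = [sum(wi * m[j] for wi, m in zip(w, models)) / z
            for j in range(nparams)]
    std = [math.sqrt(sum(wi * (m[j] - mean[j]) ** 2
                         for wi, m in zip(w, models)) / z)
           for j in range(nparams)]
    return mean, std

random.seed(7)
true = [2.0, 0.5]              # e.g. peak slip (m) and risetime (s), made up
models, misfits = [], []
for _ in range(2000):          # stand-in for the annealing ensemble
    m = [random.gauss(true[0], 0.3), random.gauss(true[1], 0.1)]
    # quadratic misfit grows as the model departs from the truth
    misfits.append(((m[0] - true[0]) / 0.3) ** 2
                   + ((m[1] - true[1]) / 0.1) ** 2)
    models.append(m)

mean, std = appraise(models, misfits)
```

The weighted mean recovers the stable parameter values while the weighted standard deviation quantifies their variability, which is the point of appraising the whole ensemble instead of keeping only the single best-fitting model.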
NASA Technical Reports Server (NTRS)
Sehgal, Neelima; Trac, Hy; Acquaviva, Viviana; Ade, Peter A. R.; Aguirre, Paula; Amiri, Mandana; Appel, John W.; Barrientos, L. Felipe; Battistelli, Elia S.; Bond, J. Richard;
2010-01-01
We present constraints on cosmological parameters based on a sample of Sunyaev-Zel'dovich-selected galaxy clusters detected in a millimeter-wave survey by the Atacama Cosmology Telescope. The cluster sample used in this analysis consists of 9 optically confirmed high-mass clusters comprising the high-significance end of the total cluster sample identified in 455 square degrees of sky surveyed during 2008 at 148 GHz. We focus on the most massive systems to reduce the degeneracy between unknown cluster astrophysics and cosmology derived from SZ surveys. We describe the scaling relation between cluster mass and SZ signal with a 4-parameter fit. Marginalizing over the values of the parameters in this fit with conservative priors gives σ8 = 0.851 ± 0.115 and w = -1.14 ± 0.35 for a spatially flat wCDM cosmological model with WMAP 7-year priors on cosmological parameters. This gives a modest improvement in statistical uncertainty over WMAP 7-year constraints alone. Fixing the scaling relation between cluster mass and SZ signal to a fiducial relation obtained from numerical simulations and calibrated by X-ray observations, we find σ8 = 0.821 ± 0.044 and w = -1.05 ± 0.20. These results are consistent with constraints from WMAP 7 plus baryon acoustic oscillations plus type Ia supernovae, which give σ8 = 0.802 ± 0.038 and w = -0.98 ± 0.053. A stacking analysis of the clusters in this sample compared to clusters simulated assuming the fiducial model also shows good agreement. These results suggest that, given the sample of clusters used here, both the astrophysics of massive clusters and the cosmological parameters derived from them are broadly consistent with current models.
NASA Astrophysics Data System (ADS)
Modak, Soumita; Chattopadhyay, Tanuka; Chattopadhyay, Asis Kumar
2017-11-01
The area of study is the formation mechanism of the present-day population of elliptical galaxies, in the context of hierarchical cosmological models accompanied by accretion and minor mergers. The present work investigates the formation and evolution of several components of nearby massive early-type galaxies (ETGs) through the cross-correlation function (CCF), using the spatial parameters right ascension (RA) and declination (DEC) and the intrinsic parameters mass (M*) and size. Following astrophysical terminology, these variables, namely mass, size, RA and DEC, are termed parameters, whereas the unknown constants involved in the kernel function are called hyperparameters. Throughout this paper, the parameter size is used to represent the effective radius (Re). Following Huang et al. (2013a), each nearby ETG is divided into three parts on the basis of its Re value. We study the CCF between each of these three components of nearby massive ETGs and the ETGs in the high redshift range 0.5 < z ≤ 2.7. It is found that the innermost components of nearby ETGs are highly correlated with ETGs in the redshift range 2 < z ≤ 2.7, known as 'red nuggets'. The intermediate and outermost parts have moderate correlations with ETGs in the redshift range 0.5 < z ≤ 0.75. The quantitative measures are highly consistent with the two-phase formation scenario of nearby massive ETGs, as suggested by various authors, and resolve the conflict raised in a previous work (De et al. 2014) suggesting other possibilities for the formation of the outermost part. A probable cause of this improvement is the inclusion of the spatial effects in addition to the other parameters in the study.
Spin vectors of asteroids 21 Lutetia, 196 Philomela, 250 Bettina, 337 Devosa, and 804 Hispania
NASA Technical Reports Server (NTRS)
Michalowski, Tadeusz
1992-01-01
Parameters such as shape, spin-axis orientation, and prograde or retrograde rotation are important for understanding the collisional evolution of asteroids since the primordial epochs of solar system history. These parameters remain unknown for most asteroids and poorly constrained for all but a few. This work presents results for five asteroids: 21 Lutetia, 196 Philomela, 250 Bettina, 337 Devosa, and 804 Hispania.
1995-11-01
[Garbled two-column extraction; recoverable gist of section 8.6, "Neural Network Based Methods": artificial neural networks are used to estimate the unknown parameters of a postulated state-space model, considering i) feed-forward neural networks and ii) recurrent neural networks [117-119].]
Si, Wenjie; Dong, Xunde; Yang, Feifei
2018-03-01
This paper is concerned with the problem of decentralized adaptive backstepping state-feedback control for uncertain high-order large-scale stochastic nonlinear time-delay systems. For the control design of high-order large-scale nonlinear systems, only one adaptive parameter is constructed to overcome over-parameterization, and neural networks are employed to cope with the difficulties raised by completely unknown system dynamics and stochastic disturbances. An appropriate Lyapunov-Krasovskii functional and the property of hyperbolic tangent functions are then used, for the first time, to deal with the unknown unmatched time-delay interactions of high-order large-scale systems. Finally, on the basis of Lyapunov stability theory, a decentralized adaptive neural controller is developed that decreases the number of learning parameters. The actual controller can be designed so as to ensure that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB) and the tracking error converges to a small neighborhood of zero. A simulation example is used to further show the validity of the design method. Copyright © 2018 Elsevier Ltd. All rights reserved.
Inverse gas chromatographic determination of solubility parameters of excipients.
Adamska, Katarzyna; Voelkel, Adam
2005-11-04
The principal aim of this work was the application of inverse gas chromatography (IGC) to the estimation of the solubility parameter of pharmaceutical excipients. The retention data of a number of test solutes were used to calculate the Flory-Huggins interaction parameter (χ1,2∞) and then the solubility parameter (δ2), the corrected solubility parameter (δT) and its components (δd, δp, δh) using different procedures. The influence of different values of the test solutes' solubility parameter (δ1) on the calculated values was estimated. The solubility parameter values obtained for all excipients from the slope, following the procedure of Guillet and co-workers, are higher than those obtained from the components according to the Voelkel and Janas procedure. It was found that the solubility parameter value of the test solutes influences, though not significantly, the calculated solubility parameter values of the excipients.
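The slope procedure referred to above rests on the Flory-Huggins linearization χ1,2∞ = (V1/RT)(δ1 − δ2)², which rearranges to δ1²/RT − χ1,2∞/V1 = (2δ2/RT)·δ1 − δ2²/RT: a line in δ1 whose slope yields δ2. A minimal sketch with synthetic, noise-free data (the solute δ1 and V1 values and the "true" δ2 are made up, not the paper's measurements):

```python
def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

R, T = 8.314, 313.15        # gas constant (J/(mol K)), column temp (assumed)
delta2 = 20.0               # "true" excipient value, (J/cm^3)^0.5 (made up)

# test solutes: (delta1 in (J/cm^3)^0.5, molar volume V1 in cm^3/mol)
solutes = [(14.9, 131.6), (16.8, 106.9), (18.2, 89.4),
           (19.4, 80.9), (23.4, 58.7), (26.5, 40.7)]

xs, ys = [], []
for d1, v1 in solutes:
    chi = v1 / (R * T) * (d1 - delta2) ** 2   # noise-free Flory-Huggins chi
    xs.append(d1)
    ys.append(d1 ** 2 / (R * T) - chi / v1)   # left-hand side of the line

slope, _ = linfit(xs, ys)
delta2_est = slope * R * T / 2                # recover delta2 from the slope
```

With real retention data the points scatter about the line, and the quality of the fit (and the δ1 values assumed for the solutes) propagates into δ2, which is exactly the sensitivity the abstract reports.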
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Shaohua
2014-09-01
This paper is concerned with the problem of adaptive fuzzy dynamic surface control (DSC) for the permanent magnet synchronous motor (PMSM) system with chaotic behavior, disturbance, and unknown control gain and parameters. A Nussbaum gain is adopted to cope with the situation in which the control gain is unknown, and the unknown items are estimated by a fuzzy logic system. The proposed controller guarantees that all the signals in the closed-loop system are bounded and that the system output eventually converges to a small neighborhood of the desired reference signal. Finally, numerical simulations indicate that the proposed scheme can suppress the chaos of the PMSM and show the effectiveness and robustness of the proposed method.
NASA Astrophysics Data System (ADS)
Sichan, N.
2007-12-01
This study aimed to understand how the resistivity value of sediment changes when it is contaminated, in order to use that information to resolve ambiguous interpretations in the field. Pilot laboratory experiments were designed to simulate various degrees of contamination and saturation and to observe the resulting changes in resistivity. The study was expected to yield a better understanding of how various physical parameters affect resistivity values, expressed as mathematical functions, and to apply the obtained functions to practical quantitative interpretation. The sediment underlying the Mae-Hia Landfill consists of clay-rich material, with interfingerings of colluvium and sandy alluvium. A systematic study identified four kinds of sediment: sand, clayey sand, sandy clay, and clay. Representative sediment and leachate samples were taken from the field and returned to the laboratory. Both the physical and chemical properties of the sediments and leachate were analyzed to delineate the parameters needed for Archie's equation. Sediment samples were mixed with leachate solutions of various concentrations. The resistivity values were then measured at controlled steps of saturation degree in a well-calibrated six-electrode model resistivity box. The measured resistivity values for sand, clayey sand, and sandy clay, when fully and partly saturated, were collected, plotted, and fitted to Archie's equation to obtain a mathematical relationship between bulk resistivity, porosity, saturation degree, and pore-fluid resistivity. The results fit Archie's equation well, and it was possible to determine all the unknown parameters representative of the sediment samples.
For sand, clayey sand, sandy clay, and clay, the formation resistivity factors (F) are 2.90, 5.77, 7.85, and 7.85; the products of the cementation factor (m) and the pore geometry factor (a) (in terms of -am) are 1.49, -1.63, -1.92, and -2.24; and the saturation exponents (n) are 2.06, 2.58, 3.52, and 2.46, respectively. These results were used to reinterpret the existing resistivity data of the area. The result of this reinterpretation is a map showing the quantitative distribution of the contaminant plume in the vicinity of the Mae-Hia Landfill.
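The fitting step described above can be sketched in a few lines: Archie's relation rho = F * rho_w * Sw^(-n) becomes linear in log-log space, so the saturation exponent n is the negative slope of log(rho) versus log(Sw). A minimal illustration on synthetic, noise-free data (rho_w and the saturation values are assumed for the example; F = 2.9 matches the sand value reported above):

```python
import math

def fit_archie_n(sw, rho, rho_w, F):
    """Fit the saturation exponent n in Archie's equation
    rho = F * rho_w * Sw**(-n) via least squares in log-log space."""
    xs = [math.log(s) for s in sw]
    ys = [math.log(r / (F * rho_w)) for r in rho]
    npts = len(xs)
    xbar = sum(xs) / npts
    ybar = sum(ys) / npts
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
            / sum((x - xbar) ** 2 for x in xs)
    return -slope  # log rho = const - n * log Sw

# synthetic data generated with n = 2.0
F, rho_w, n_true = 2.9, 10.0, 2.0
sw = [0.4, 0.6, 0.8, 1.0]
rho = [F * rho_w * s ** (-n_true) for s in sw]
print(round(fit_archie_n(sw, rho, rho_w, F), 3))  # → 2.0
```

With laboratory data, the same regression on partially saturated measurements yields n, while the fully saturated point gives F directly.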
Mohkam, Kayvan; Rode, Agnès; Darnis, Benjamin; Manichon, Anne-Frédérique; Boussel, Loïc; Ducerf, Christian; Merle, Philippe; Lesurtel, Mickaël; Mabrut, Jean-Yves
2018-05-09
The impact of portal hemodynamic variations after portal vein embolization on liver regeneration remains unknown. We studied the correlation between the parameters of hepatic venous pressure measured before and after portal vein embolization and future hypertrophy of the liver remnant after portal vein embolization. Between 2014 and 2017, we reviewed patients who were eligible for major hepatectomy and who had portal vein embolization. Patients had undergone simultaneous measurement of portal venous pressure and hepatic venous pressure gradient before and after portal vein embolization by direct puncture of portal vein and inferior vena cava. We assessed these parameters to predict future liver remnant hypertrophy. Twenty-six patients were included. After portal vein embolization, median portal venous pressure (range) increased from 15 (9-24) to 19 (10-27) mm Hg and hepatic venous pressure gradient increased from 5 (0-12) to 8 (0-14) mm Hg. Median future liver remnant volume (range) was 513 (299-933) mL before portal vein embolization versus 724 (499-1279) mL 3 weeks after portal vein embolization, representing a 35% (7.4-83.6) median hypertrophy. Post-portal vein embolization hepatic venous pressure gradient was the most accurate parameter to predict failure of future liver remnant to reach a 30% hypertrophy (c-statistic: 0.882 [95% CI: 0.727-1.000], P < 0.001). A cut-off value of post-portal vein embolization hepatic venous pressure gradient of 8 mm Hg showed a sensitivity of 91% (95% CI: 57%-99%), specificity of 80% (95% CI: 52%-96%), positive predictive value of 77% (95% CI: 46%-95%) and negative predictive value of 92.3% (95% CI: 64.0%-99.8%). On multivariate analysis, post-portal vein embolization hepatic venous pressure gradient and previous chemotherapy were identified as predictors of impaired future liver remnant hypertrophy. 
Post-portal vein embolization hepatic venous pressure gradient is a simple and reproducible tool which accurately predicts future liver remnant hypertrophy after portal vein embolization and allows early detection of patients who may benefit from more aggressive procedures inducing future liver remnant hypertrophy. (Surgery 2018;143:1-2.). Copyright © 2018 Elsevier Inc. All rights reserved.
Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold
NASA Astrophysics Data System (ADS)
Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph
2018-05-01
In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately by 2/π (−1.96 dB) when comparing to an ideal ∞-bit converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
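The 2/π (−1.96 dB) low-SNR figure quoted above can be checked numerically: for a zero-threshold hard limiter, the squared correlation between a Gaussian sample and its sign equals (E[|z|])² = 2/π. A quick Monte Carlo sketch (the sample size and seed are arbitrary choices of this example):

```python
import math, random

random.seed(1)
N = 200_000
# In the low-SNR regime, the 1-bit performance loss equals the squared
# correlation between a Gaussian input z and its sign: (E[z*sign(z)])^2 = 2/pi.
zs = [random.gauss(0.0, 1.0) for _ in range(N)]
mean_abs = sum(abs(z) for z in zs) / N   # E[z*sign(z)] = E[|z|] = sqrt(2/pi)
rho_sq = mean_abs ** 2
print(round(rho_sq, 3), round(2 / math.pi, 3))  # both close to 0.637
```

An unknown nonzero threshold breaks this symmetry, which is why the paper must treat the offset as a nuisance parameter.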
Adaptive Control Based Harvesting Strategy for a Predator-Prey Dynamical System.
Sen, Moitri; Simha, Ashutosh; Raha, Soumyendu
2018-04-23
This paper deals with designing a harvesting control strategy for a predator-prey dynamical system, with parametric uncertainties and exogenous disturbances. A feedback control law for the harvesting rate of the predator is formulated such that the population dynamics is asymptotically stabilized at a positive operating point, while maintaining a positive, steady state harvesting rate. The hierarchical block strict feedback structure of the dynamics is exploited in designing a backstepping control law, based on Lyapunov theory. In order to account for unknown parameters, an adaptive control strategy has been proposed in which the control law depends on an adaptive variable which tracks the unknown parameter. Further, a switching component has been incorporated to robustify the control performance against bounded disturbances. Proofs have been provided to show that the proposed adaptive control strategy ensures asymptotic stability of the dynamics at a desired operating point, as well as exact parameter learning in the disturbance-free case and learning with bounded error in the disturbance-prone case. The dynamics, with uncertainty in the death rate of the predator, subjected to a bounded disturbance has been simulated with the proposed control strategy.
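Stripped of the backstepping and switching layers, the parameter-learning idea is a gradient adaptive law: the estimate is driven by the product of the regressor and the prediction error, and converges exactly in the disturbance-free case when the regressor is persistently exciting. A minimal sketch (the scalar regression model, gain, and excitation signal here are illustrative, not the paper's predator-prey dynamics):

```python
import math

# Measurement y(t) = theta * phi(t) with unknown theta; the gradient law
# theta_hat' = gamma * phi * (y - theta_hat * phi) drives theta_hat -> theta
# whenever phi is persistently exciting. Forward-Euler simulation:
theta_true, gamma, dt = 0.8, 5.0, 1e-3
theta_hat = 0.0
for k in range(20_000):                  # 20 s of simulated time
    t = k * dt
    phi = math.sin(t) + 0.5              # persistently exciting regressor
    y = theta_true * phi
    theta_hat += dt * gamma * phi * (y - theta_hat * phi)
print(round(theta_hat, 3))  # → 0.8
```

The estimation error obeys theta_tilde' = -gamma * phi^2 * theta_tilde, so it decays whenever the running integral of phi^2 grows without bound, which is the persistent-excitation condition the paper's exact-learning proof relies on.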
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
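The estimation stage can be sketched with a random-walk Metropolis sampler on a toy forward model standing in for the contaminant transport surrogate (the exponential-decay model, noise level, prior bounds, and well locations are all assumptions of this example, not the paper's setup):

```python
import math, random

random.seed(0)

def forward(s, xs):
    # toy stand-in for the transport surrogate: concentration decays with distance
    return [s * math.exp(-x) for x in xs]

xs = [0.5, 1.0, 1.5]                     # hypothetical sampling well locations
s_true, sigma = 2.0, 0.05
obs = [c + random.gauss(0, sigma) for c in forward(s_true, xs)]

def log_like(s):
    # Gaussian measurement model for the observed concentrations
    return -sum((o - c) ** 2 for o, c in zip(obs, forward(s, xs))) / (2 * sigma ** 2)

# random-walk Metropolis over the unknown source strength s, flat prior on (0, 10)
s, samples = 1.0, []
for i in range(20_000):
    prop = s + random.gauss(0, 0.2)
    if 0 < prop < 10 and math.log(random.random()) < log_like(prop) - log_like(s):
        s = prop
    if i >= 5_000:                       # discard burn-in
        samples.append(s)
post_mean = sum(samples) / len(samples)
print(round(post_mean, 2))               # posterior mean near the true value 2.0
```

In the paper the expensive transport solver is replaced by a sparse-grid surrogate precisely so that this kind of likelihood evaluation, repeated tens of thousands of times inside MCMC, stays affordable.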
Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam
2014-07-01
This paper presents an efficient method for identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noises. The method studies the multivariable nonlinear Hammerstein and Wiener models, in which the nonlinear memory-less block is approximated based on arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous (ARMAX) model, which can effectively describe the moving average noises as well as the autoregressive and the exogenous dynamics. Owing to the multivariable nature of the system, a pseudo-linear-in-the-parameters model is obtained which includes two different kinds of unknown parameters, a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noises. The efficiency of the proposed identification approach is investigated through three nonlinear MIMO case studies. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
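The hierarchical idea — splitting a model that is only pseudo-linear in its parameters into subproblems that are each genuinely linear, then iterating between them — can be sketched on a toy Hammerstein system (the model orders, signals, and true values below are invented for the illustration, and the paper's ARMAX noise handling is omitted):

```python
import random

random.seed(3)
# Hammerstein system: static nonlinearity v = u + c*u^2 feeding an FIR block
# y_k = b1*v_k + b2*v_{k-1}.  The unknowns (b1, b2) and c enter bilinearly.
b1t, b2t, ct = 1.5, -0.7, 0.4
u = [random.uniform(-1, 1) for _ in range(400)]
v = [x + ct * x * x for x in u]
y = [b1t * v[k] + b2t * v[k - 1] for k in range(1, len(u))]

def lstsq2(rows, rhs):
    # normal equations for a 2-parameter least-squares problem
    a11 = sum(r[0] * r[0] for r in rows); a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    g1 = sum(r[0] * t for r, t in zip(rows, rhs))
    g2 = sum(r[1] * t for r, t in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return (a22 * g1 - a12 * g2) / det, (a11 * g2 - a12 * g1) / det

b1, b2, c = 1.0, 0.0, 0.0
for _ in range(30):  # hierarchical iteration: alternate the two linear subproblems
    vv = [x + c * x * x for x in u]
    b1, b2 = lstsq2([(vv[k], vv[k - 1]) for k in range(1, len(u))], y)
    num = den = 0.0
    for k in range(1, len(u)):
        lin = b1 * u[k] + b2 * u[k - 1]              # part of y linear in u
        quad = b1 * u[k] ** 2 + b2 * u[k - 1] ** 2   # regressor multiplying c
        num += quad * (y[k - 1] - lin); den += quad * quad
    c = num / den
print(round(b1, 3), round(b2, 3), round(c, 3))
```

Each pass solves an ordinary least-squares problem with the other parameter group frozen at its latest estimate, which is the essence of the hierarchical scheme.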
Adaptive control of nonlinear uncertain active suspension systems with prescribed performance.
Huang, Yingbo; Na, Jing; Wu, Xing; Liu, Xiaoqin; Guo, Yu
2015-01-01
This paper proposes adaptive control designs for vehicle active suspension systems with unknown nonlinear dynamics (e.g., nonlinear spring and piece-wise linear damper dynamics). An adaptive control is first proposed to stabilize the vertical vehicle displacement and thus to improve the ride comfort and to guarantee other suspension requirements (e.g., road holding and suspension space limitation) concerning the vehicle safety and mechanical constraints. An augmented neural network is developed to online compensate for the unknown nonlinearities, and a novel adaptive law is developed to estimate both NN weights and uncertain model parameters (e.g., sprung mass), where the parameter estimation error is used as a leakage term superimposed on the classical adaptations. To further improve the control performance and simplify the parameter tuning, a prescribed performance function (PPF) characterizing the error convergence rate, maximum overshoot and steady-state error is used to propose another adaptive control. The stability for the closed-loop system is proved and particular performance requirements are analyzed. Simulations are included to illustrate the effectiveness of the proposed control schemes. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
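A prescribed performance function of the common exponential form, together with the error transformation that converts the constrained tracking problem into an unconstrained one, can be sketched as follows (the decay rate and bounds are example values, not those of the paper):

```python
import math

def ppf(t, rho0=1.0, rho_inf=0.05, l=2.0):
    """Prescribed performance envelope: the design enforces |e(t)| < ppf(t),
    encoding the convergence rate (l), maximum overshoot (rho0)
    and steady-state error bound (rho_inf)."""
    return (rho0 - rho_inf) * math.exp(-l * t) + rho_inf

def transform(e, t):
    """Map the constrained, normalized error z = e/ppf(t) in (-1, 1)
    to an unconstrained variable via the inverse of tanh."""
    z = e / ppf(t)
    return 0.5 * math.log((1 + z) / (1 - z))

print(round(ppf(0.0), 2), round(ppf(5.0), 3))      # envelope decays from 1.0 toward 0.05
print(round(transform(0.5 * ppf(1.0), 1.0), 3))    # → 0.549
```

Keeping the transformed variable bounded by the control law is what guarantees the raw error never leaves the shrinking envelope.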
Quantum Tasks with Non-maximally Quantum Channels via Positive Operator-Valued Measurement
NASA Astrophysics Data System (ADS)
Peng, Jia-Yin; Luo, Ming-Xing; Mo, Zhi-Wen
2013-01-01
By using a proper positive operator-valued measure (POVM), we present two new schemes for probabilistic transmission with non-maximally entangled four-particle cluster states. In the first scheme, we demonstrate that two non-maximally entangled four-particle cluster states can be used to probabilistically share an unknown three-particle GHZ-type state within either distant agent's place. In the second protocol, we demonstrate that a non-maximally entangled four-particle cluster state can be used to teleport an arbitrary unknown multi-particle state in a probabilistic manner with appropriate unitary operations and POVM. Moreover, the total success probabilities of these two schemes are also worked out.
NASA Astrophysics Data System (ADS)
Tlijani, M.; Ben Younes, R.; Durastanti, J. F.; Boudenne, A.
2010-11-01
A periodic method is used to determine simultaneously both the thermal conductivity and diffusivity of various insulating materials at room temperature. The sample is placed between two metallic plates and a temperature modulation is applied to the front side of one of the plates. The temperature at the front and rear sides of both plates is measured and the experimental transfer function is calculated. The theoretical heat transfer function is calculated by the quadripole method. Thermal conductivity and diffusivity are identified simultaneously from the real and imaginary parts of the experimental transfer function. The thermophysical parameters of several wood scale samples of different thicknesses, obtained from palm trees and from common trees with unknown thermal properties (E), were studied. The identified thermal conductivity of 0.03 W m-1 K-1 compares favorably with that of common insulating solids such as glass, glass wool and PVC, and is close to the conductivity of air. This allows the wood scale extracted from palm trees, a bio-based and renewable material, to be considered a good heat insulator, with prospective uses in lightweight applications, in insulation, or as a reinforcement in a given matrix. These still largely unknown potentialities are strengthened by the enormous quantity of such wood gathered annually from palm trees and treated as waste.
Robust distributed control of spacecraft formation flying with adaptive network topology
NASA Astrophysics Data System (ADS)
Shasti, Behrouz; Alasty, Aria; Assadian, Nima
2017-07-01
In this study, the distributed six degree-of-freedom (6-DOF) coordinated control of spacecraft formation flying in low earth orbit (LEO) has been investigated. For this purpose, an accurate coupled translational and attitude relative dynamics model of the spacecraft with respect to the reference orbit (virtual leader) is presented by considering the most effective perturbation acceleration forces on LEO satellites, i.e. the second zonal harmonic and the atmospheric drag. Subsequently, the 6-DOF coordinated control of spacecraft in formation is studied. During the mission, the spacecraft communicate with each other through a switching network topology in which the weights of its graph Laplacian matrix change adaptively based on a distance-based connectivity function between neighboring agents. Because some of the dynamical system parameters such as spacecraft masses and moments of inertia may vary with time, an adaptive law is developed to estimate the parameter values during the mission. Furthermore, for the case that there is no knowledge of the unknown and time-varying parameters of the system, a robust controller has been developed. It is proved that the stability of the closed-loop system coupled with adaptation in network topology structure and optimality and robustness in control is guaranteed by the robust contraction analysis as an incremental stability method for multiple synchronized systems. The simulation results show the effectiveness of each control method in the presence of uncertainties and parameter variations. The adaptive and robust controllers show their superiority in reducing the state error integral as well as decreasing the control effort and settling time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng Wei; Zhang Bing; Li Hui
The early optical afterglow emission of several gamma-ray bursts (GRBs) shows a high linear polarization degree (PD) of tens of percent, suggesting an ordered magnetic field in the emission region. The light curves are consistent with being of a reverse shock (RS) origin. However, the magnetization parameter, σ, of the outflow is unknown. If σ is too small, an ordered field in the RS may be quickly randomized due to turbulence driven by various perturbations so that the PD may not be as high as observed. Here we use the “Athena++” relativistic MHD code to simulate a relativistic jet with an ordered magnetic field propagating into a clumpy ambient medium, with a focus on how density fluctuations may distort the ordered magnetic field and reduce PD in the RS emission for different σ values. For a given density fluctuation, we discover a clear power-law relationship between the relative PD reduction and the σ value of the outflow. Such a relation may be applied to estimate σ of the GRB outflows using the polarization data of early afterglows.
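Recovering such a power-law relation from a set of simulation outputs is an ordinary linear regression in log-log space. A sketch on fabricated data (the amplitude 0.5 and exponent −0.5 are placeholders for this example, not values from the paper):

```python
import math

# hypothetical (sigma, relative PD reduction) pairs following an exact power law
sigmas = [0.1, 0.3, 1.0, 3.0]
pd_red = [0.50 * s ** (-0.5) for s in sigmas]    # assumed form: A * sigma**k

# fit log(pd_red) = log(A) + k*log(sigma) by least squares
xs = [math.log(s) for s in sigmas]
ys = [math.log(p) for p in pd_red]
n = len(xs); xb = sum(xs) / n; yb = sum(ys) / n
k = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / sum((x - xb) ** 2 for x in xs)
A = math.exp(yb - k * xb)
print(round(A, 2), round(k, 2))  # → 0.5 -0.5
```

Inverting the fitted relation then turns a measured PD reduction into an estimate of σ, which is the application the authors propose.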
Zafar, Raheel; Kamel, Nidal; Naufal, Mohamad; Malik, Aamir Saeed; Dass, Sarat C; Ahmad, Rana Fayyaz; Abdullah, Jafri M; Reza, Faruque
2017-01-01
Decoding of human brain activity has always been a primary goal in neuroscience, especially with functional magnetic resonance imaging (fMRI) data. In recent years, the convolutional neural network (CNN) has become a popular method for feature extraction due to its higher accuracy; however, it requires a great deal of computation and training data. In this study, an algorithm is developed using multivariate pattern analysis (MVPA) and a modified CNN to decode the behavior of the brain for different images with a limited data set. Selection of significant features is an important part of fMRI data analysis, since it reduces the computational burden and improves prediction performance; significant features are selected using the t-test. MVPA uses machine learning algorithms to classify different brain states and helps in prediction during the task. A general linear model (GLM) is used to find the unknown parameters of every individual voxel, and classification is done using a multi-class support vector machine (SVM). The proposed MVPA-CNN based algorithm is compared with a region of interest (ROI) based method and with MVPA based estimated values. The proposed method showed better overall accuracy (68.6%) compared to ROI (61.88%) and estimated values (64.17%).
Deng, Wei; Zhang, Bing; Li, Hui; ...
2017-08-03
We report that the early optical afterglow emission of several gamma-ray bursts (GRBs) shows a high linear polarization degree (PD) of tens of percent, suggesting an ordered magnetic field in the emission region. The light curves are consistent with being of a reverse shock (RS) origin. However, the magnetization parameter, σ, of the outflow is unknown. If σ is too small, an ordered field in the RS may be quickly randomized due to turbulence driven by various perturbations so that the PD may not be as high as observed. Here we use the "Athena++" relativistic MHD code to simulate a relativistic jet with an ordered magnetic field propagating into a clumpy ambient medium, with a focus on how density fluctuations may distort the ordered magnetic field and reduce PD in the RS emission for different σ values. For a given density fluctuation, we discover a clear power-law relationship between the relative PD reduction and the σ value of the outflow. Finally, such a relation may be applied to estimate σ of the GRB outflows using the polarization data of early afterglows.
Rubert, Josep; James, Kevin J; Mañes, Jordi; Soler, Carla
2012-02-03
Recent developments in mass spectrometers have created a paradoxical situation: different mass spectrometers are available, each with its specific strengths and drawbacks. Hybrid instruments try to unify several advantages in one instrument. In this study, two widely used hybrid instruments were compared: the hybrid quadrupole-linear ion trap mass spectrometer (QTRAP®) and the hybrid linear ion trap-high resolution mass spectrometer (LTQ-Orbitrap®). Both instruments were applied to detect the presence of 18 selected mycotoxins in baby food. Analytical parameters were validated according to 2002/657/CE. Limits of quantification (LOQs) obtained with the QTRAP® instrument ranged from 0.45 to 45 μg kg⁻¹, while the lower limits of quantification (LLOQs) obtained with the LTQ-Orbitrap® were 7-70 μg kg⁻¹. The correlation coefficients (r) in both cases were above 0.989. These values highlight that the two instruments are complementary for the analysis of mycotoxins in baby food: while the QTRAP® achieved the best sensitivity and selectivity, the LTQ-Orbitrap® allowed the identification of non-target and unknown compounds. Copyright © 2011 Elsevier B.V. All rights reserved.
Monolithic multigrid method for the coupled Stokes flow and deformable porous medium system
NASA Astrophysics Data System (ADS)
Luo, P.; Rodrigo, C.; Gaspar, F. J.; Oosterlee, C. W.
2018-01-01
The interaction between fluid flow and a deformable porous medium is a complicated multi-physics problem, which can be described by a coupled model based on the Stokes and poroelastic equations. A monolithic multigrid method together with either a coupled Vanka smoother or a decoupled Uzawa smoother is employed as an efficient numerical technique for the linear discrete system obtained by finite volumes on staggered grids. A specialty in our modeling approach is that at the interface of the fluid and poroelastic medium, two unknowns from the different subsystems are defined at the same grid point. We propose a special discretization at and near the points on the interface, which combines the approximation of the governing equations and the considered interface conditions. In the decoupled Uzawa smoother, Local Fourier Analysis (LFA) helps us to select the optimal value of the relaxation parameter that appears in the smoother. To implement the monolithic multigrid method, grid partitioning is used to deal with the interface updates when communication is required between two subdomains. Numerical experiments show that the proposed numerical method has an excellent convergence rate. The efficiency and robustness of the method are confirmed in numerical experiments with typically small realistic values of the physical coefficients.
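The basic step of a decoupled Uzawa-type smoother — solve the momentum block with the pressure frozen, then relax the pressure with the constraint residual — can be sketched on a tiny saddle-point system (the 2×2 system and ω = 1 are chosen purely for illustration; in the paper, LFA guides the choice of the relaxation parameter on the actual discretization):

```python
# Uzawa iteration for a tiny saddle-point system
#   [A  B^T][u]   [f]        exact solution here: u = (1, 1), p = 1
#   [B  0  ][p] = [g]
A = [[2.0, 0.0], [0.0, 2.0]]
B = [1.0, 1.0]                  # one constraint row
f = [3.0, 3.0]; g = 2.0

p = 0.0
omega = 1.0                     # relaxation parameter (an LFA-style analysis would tune this)
for _ in range(100):
    # u-step: solve A u = f - B^T p  (A is diagonal in this toy example)
    u = [(f[i] - B[i] * p) / A[i][i] for i in range(2)]
    # p-step: relaxed update on the constraint (divergence) residual
    p += omega * (sum(B[i] * u[i] for i in range(2)) - g)
print([round(x, 3) for x in u], round(p, 3))  # → [1.0, 1.0] 1.0
```

The iteration contracts at a rate governed by the Schur complement B A⁻¹ Bᵀ, which is why the relaxation parameter must be matched to the discrete operators — the role LFA plays in the paper.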
Variable jet properties in GRB 110721A: time resolved observations of the jet photosphere
NASA Astrophysics Data System (ADS)
Iyyani, S.; Ryde, F.; Axelsson, M.; Burgess, J. M.; Guiriec, S.; Larsson, J.; Lundman, C.; Moretti, E.; McGlynn, S.; Nymark, T.; Rosquist, K.
2013-08-01
Fermi Gamma-ray Space Telescope observations of GRB 110721A have revealed two emission components from the relativistic jet: emission from the photosphere, peaking at ˜100 keV, and a non-thermal component, which peaks at ˜1000 keV. We use the photospheric component to calculate the properties of the relativistic outflow. We find a strong evolution in the flow properties: the Lorentz factor decreases with time during the bursts from Γ ˜ 1000 to ˜150 (assuming a redshift z = 2; the values are only weakly dependent on unknown efficiency parameters). Such a decrease is contrary to the expectations from the internal shocks and the isolated magnetar birth models. Moreover, the position of the flow nozzle measured from the central engine, r0, increases by more than two orders of magnitude. Assuming a moderately magnetized outflow we estimate that r0 varies from 106 to ˜109 cm during the burst. We suggest that the maximal value reflects the size of the progenitor core. Finally, we show that these jet properties naturally explain the observed broken power-law decay of the temperature which has been reported as a characteristic for gamma-ray burst pulses.
NASA Technical Reports Server (NTRS)
Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak
2012-01-01
A semi-empirical algorithm for the retrieval of soil moisture, root mean square (RMS) height and biomass from polarimetric SAR data is explained and analyzed in this paper. The algorithm is a simplification of the distorted Born model. It takes into account the physical scattering phenomena and has three major components: volume, double-bounce and surface. This simplified model uses the three backscattering coefficients (σHH, σHV and σVV) at low frequency (P-band). The inversion process uses the Levenberg-Marquardt non-linear least-squares method to estimate the structural parameters. The estimation process is explained in full, from initialization of the unknowns to retrievals. A sensitivity analysis is also performed in which the initial values of the inversion process vary randomly. The results show that the inversion process is not very sensitive to initial values, and most of the retrievals have a root-mean-square error lower than 5% for soil moisture, 24 Mg/ha for biomass and 0.49 cm for roughness, considering a soil moisture of 40%, roughness equal to 3 cm and biomass varying from 0 to 500 Mg/ha with a mean of 161 Mg/ha.
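The Levenberg-Marquardt step used in such inversions — Gauss-Newton normal equations with a damping term added to the diagonal — can be sketched on a two-parameter toy model standing in for the three-channel backscatter model (the exponential model, data, and fixed damping below are invented for the example):

```python
import math

def lm_fit(xs, ys, a, b, lam=1e-3, iters=50):
    """Levenberg-Marquardt for the toy model y ≈ a*exp(b*x):
    repeatedly solve the damped normal equations (J^T J + lam*I) d = -J^T r."""
    for _ in range(iters):
        r = [a * math.exp(b * x) - y for x, y in zip(xs, ys)]      # residuals
        J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]  # Jacobian rows
        h11 = sum(j[0] * j[0] for j in J) + lam
        h12 = sum(j[0] * j[1] for j in J)
        h22 = sum(j[1] * j[1] for j in J) + lam
        g1 = -sum(j[0] * e for j, e in zip(J, r))
        g2 = -sum(j[1] * e for j, e in zip(J, r))
        det = h11 * h22 - h12 * h12
        a += (h22 * g1 - h12 * g2) / det
        b += (h11 * g2 - h12 * g1) / det
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(-0.8 * x) for x in xs]   # noise-free synthetic data
a, b = lm_fit(xs, ys, a=1.0, b=0.0)           # deliberately poor initial guess
print(round(a, 3), round(b, 3))  # → 2.0 -0.8
```

A production LM implementation also adapts the damping factor between accepted and rejected steps; the fixed small value here is enough for this well-behaved toy problem.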
Regan, R. Steven; Markstrom, Steven L.; Hay, Lauren E.; Viger, Roland J.; Norton, Parker A.; Driscoll, Jessica M.; LaFontaine, Jacob H.
2018-01-08
This report documents several components of the U.S. Geological Survey National Hydrologic Model of the conterminous United States for use with the Precipitation-Runoff Modeling System (PRMS). It provides descriptions of the (1) National Hydrologic Model, (2) Geospatial Fabric for National Hydrologic Modeling, (3) PRMS hydrologic simulation code, (4) parameters and estimation methods used to compute spatially and temporally distributed default values as required by PRMS, (5) National Hydrologic Model Parameter Database, and (6) model extraction tool named Bandit. The National Hydrologic Model Parameter Database contains values for all PRMS parameters used in the National Hydrologic Model. The methods and national datasets used to estimate all the PRMS parameters are described. Some parameter values are derived from characteristics of topography, land cover, soils, geology, and hydrography using traditional Geographic Information System methods. Other parameters are set to long-established default values or computed as initial values. Additionally, methods (statistical, sensitivity, calibration, and algebraic) were developed to compute parameter values on the basis of a variety of nationally consistent datasets. Values in the National Hydrologic Model Parameter Database can periodically be updated on the basis of new parameter estimation methods and as additional national datasets become available. A companion ScienceBase resource provides a set of static parameter values as well as images of spatially distributed parameters associated with PRMS states and fluxes for each Hydrologic Response Unit across the conterminous United States.
A method for operative quantitative interpretation of multispectral images of biological tissues
NASA Astrophysics Data System (ADS)
Lisenko, S. A.; Kugeiko, M. M.
2013-10-01
A method for operative retrieval of spatial distributions of biophysical parameters of a biological tissue by using a multispectral image of it has been developed. The method is based on multiple regressions between linearly independent components of the diffuse reflection spectrum of the tissue and unknown parameters. Possibilities of the method are illustrated by an example of determining biophysical parameters of the skin (concentrations of melanin, hemoglobin and bilirubin, blood oxygenation, and scattering coefficient of the tissue). Examples of quantitative interpretation of the experimental data are presented.
NASA Astrophysics Data System (ADS)
De Santis, Alberto; Dellepiane, Umberto; Lucidi, Stefano
2012-11-01
In this paper we investigate the estimation problem for a model of commodity prices. The model is a stochastic state-space dynamical model, and the problem unknowns are the state variables and the system parameters. Data are represented by commodity spot prices, since time series of futures contracts are very seldom freely available. Both the system joint likelihood function (state variables and parameters) and the system marginal likelihood function (with the state variables eliminated) are addressed.
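For a linear-Gaussian state-space model, the marginal likelihood with the state variables eliminated is exactly what the Kalman filter's prediction-error decomposition computes. A sketch on a toy AR(1)-plus-noise model (all parameter values are invented; the commodity-price models in question are richer):

```python
import math, random

random.seed(2)
# state-space model: x_k = phi*x_{k-1} + w,  y_k = x_k + v
# (latent "price factor" observed through a noisy spot price)
phi_true, q, r = 0.9, 0.04, 0.25
x, ys = 0.0, []
for _ in range(300):
    x = phi_true * x + random.gauss(0, math.sqrt(q))
    ys.append(x + random.gauss(0, math.sqrt(r)))

def marginal_loglike(phi, q, r, ys):
    """Kalman-filter prediction-error decomposition: the state variables are
    integrated out, leaving a likelihood in the parameters alone."""
    m, P, ll = 0.0, 1.0, 0.0
    for y in ys:
        m_pred, P_pred = phi * m, phi * phi * P + q
        S = P_pred + r                                   # innovation variance
        ll += -0.5 * (math.log(2 * math.pi * S) + (y - m_pred) ** 2 / S)
        K = P_pred / S
        m = m_pred + K * (y - m_pred)
        P = (1 - K) * P_pred
    return ll

# the marginal likelihood should favor phi values near the true 0.9
lls = {p: marginal_loglike(p, q, r, ys) for p in (0.5, 0.7, 0.9, 0.99)}
print(max(lls, key=lls.get))
```

Maximizing this marginal likelihood over the parameters, rather than the joint likelihood over parameters and states together, is one of the two formulations the abstract contrasts.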
NASA Astrophysics Data System (ADS)
Seraphin, Pierre; Gonçalvès, Julio; Vallet-Coulomb, Christine; Champollion, Cédric
2018-06-01
Spatially distributed values of the specific yield, a fundamental parameter for transient groundwater mass balance calculations, were obtained by means of three independent methods for the Crau plain, France. In contrast to its traditional use to assess recharge based on a given specific yield, the water-table fluctuation (WTF) method, applied using major recharging events, gave a first set of reference values. Then, large infiltration processes recorded by monitored boreholes and caused by major precipitation events were interpreted in terms of specific yield by means of a one-dimensional vertical numerical model solving Richards' equations within the unsaturated zone. Finally, two gravity field campaigns, at low and high piezometric levels, were carried out to assess the groundwater mass variation and thus alternative specific yield values. The range obtained by the WTF method for this aquifer made of alluvial detrital material was 2.9-26%, in line with the scarce data available so far. The average spatial value of specific yield by the WTF method (9.1%) is consistent with the aquifer scale value from the hydro-gravimetric approach. In this investigation, an estimate of the hitherto unknown spatial distribution of the specific yield over the Crau plain was obtained using the most reliable method (the WTF method). A groundwater mass balance calculation over the domain using this distribution yielded similar results to an independent quantification based on a stable isotope-mixing model. This agreement reinforces the relevance of such estimates, which can be used to build a more accurate transient hydrogeological model.
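The core arithmetic of the WTF method is a one-liner: the specific yield is the ratio of event recharge to the induced water-table rise. A sketch with hypothetical event numbers (chosen so the result lands near the 9.1% average reported above; they are not data from the study):

```python
# water-table fluctuation (WTF) method, rearranged to estimate the
# specific yield Sy when an independent recharge estimate is available:
#   recharge = Sy * (water-table rise)  =>  Sy = recharge / rise
def specific_yield(recharge_mm, head_rise_mm):
    return recharge_mm / head_rise_mm

# hypothetical recharging event: 45 mm of recharge producing a 0.5 m rise
print(round(specific_yield(45.0, 500.0), 3))  # → 0.09
```

Conversely, the traditional use of the same identity multiplies an assumed Sy by the observed rise to estimate recharge, which is the contrast the abstract draws.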
[Quantitative Evaluation of Metal Artifacts on CT Images on the Basis of Statistics of Extremes].
Kitaguchi, Shigetoshi; Imai, Kuniharu; Ueda, Suguru; Hashimoto, Naomi; Hattori, Shouta; Saika, Takahiro; Ono, Yoshifumi
2016-05-01
It is well known that metal artifacts degrade the image quality of computed tomography (CT) images; however, their physical properties remain largely unknown. In this study, we investigated the relationship between metal artifacts and tube current using statistics of extremes. A commercially available CT dose index phantom 160 mm in diameter was prepared, and a brass rod 13 mm in diameter was placed along the centerline of the phantom. This phantom served as the target object for evaluating metal artifacts and was scanned with an area-detector CT scanner at various tube currents under a constant tube voltage of 120 kV. Sixty parallel line segments, each 100 pixels long, were placed so as to cross the metal artifacts on the CT images, and the largest difference between two adjacent CT values in each of the 60 CT-value profiles was employed as a feature variable for measuring metal artifacts; these feature variables were analyzed on the basis of extreme value theory. The CT-value variation induced by metal artifacts was statistically characterized by the Gumbel distribution, one of the extreme value distributions; that is, metal artifacts have the same statistical characteristics as streak artifacts. The Gumbel evaluation method therefore makes it possible to analyze not only streak artifacts but also metal artifacts. Furthermore, the location parameter of the Gumbel distribution was shown to be inversely proportional to the square root of the tube current, suggesting that metal artifacts have the same dose dependence as image noise.
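The key step is fitting a Gumbel distribution to the sample of per-profile maxima (the largest adjacent-pixel CT-value jump in each of the 60 line segments). The paper does not state its fitting procedure, so the stdlib-only sketch below assumes a method-of-moments fit, which for Gumbel(loc, scale) uses mean = loc + γ·scale and variance = (π·scale)²/6:

```python
import math
import random
import statistics

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def fit_gumbel_moments(maxima):
    """Method-of-moments fit of a Gumbel distribution to a sample of
    block maxima (e.g. the largest CT-value jump per line profile).
    Returns (loc, scale): the location and scale parameters."""
    scale = statistics.stdev(maxima) * math.sqrt(6.0) / math.pi
    loc = statistics.mean(maxima) - EULER_GAMMA * scale
    return loc, scale

# Synthetic check: draw Gumbel(10, 2) variates by inverse-CDF sampling
# (u ~ Uniform(0,1) => x = loc - scale * log(-log(u))), then refit.
random.seed(0)
true_loc, true_scale = 10.0, 2.0
sample = [true_loc - true_scale * math.log(-math.log(random.random()))
          for _ in range(5000)]
loc_hat, scale_hat = fit_gumbel_moments(sample)
```

Repeating such a fit at each tube current and plotting the fitted location parameter against 1/√(mA) would reproduce the inverse-square-root dose dependence the study reports.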
Optical absorption spectra of substitutional Co2+ ions in Mgx Cd1-x Se alloys
NASA Astrophysics Data System (ADS)
Jin, Moon-Seog; Kim, Chang-Dae; Jang, Kiwan; Park, Sang-An; Kim, Duck-Tae; Kim, Hyung-Gon; Kim, Wha-Tek
2006-09-01
Optical absorption spectra of substitutional Co2+ ions in Mgx Cd1-x Se alloys were investigated in the composition region 0.0 ≤ x ≤ 0.4 and in the wavelength region of 300 to 2500 nm at 4.8 K and 290 K. We observed several absorption bands in the wavelength regions corresponding to the 4A2(4F) → 4T1(4P) and 4A2(4F) → 4T1(4F) transitions of Co2+ at tetrahedral (Td) symmetry sites in the host crystals, as well as unknown absorption bands. The identified bands were analyzed in the framework of crystal-field theory together with second-order spin-orbit coupling, and the unknown bands were assigned to phonon-assisted absorption. We also investigated the variations of the crystal-field parameter Dq and the Racah parameter B with composition x in the Mgx Cd1-x Se system. The results showed that Dq increases while B decreases with increasing composition x, which may be connected with an increase in the covalency of the metal-ligand bond.
Urinary lithogenesis risk tests: comparison of a commercial kit and a laboratory prototype test.
Grases, Félix; Costa-Bauzá, Antonia; Prieto, Rafel M; Arrabal, Miguel; De Haro, Tomás; Lancina, Juan A; Barbuzano, Carmen; Colom, Sergi; Riera, Joaquín; Perelló, Joan; Isern, Bernat; Sanchis, Pilar; Conte, Antonio; Barragan, Fernando; Gomila, Isabel
2011-11-01
Renal stone formation is a multifactorial process that depends in part on urine composition; other parameters relate to structural or pathological features of the kidney. To date, routine laboratory estimation of urolithiasis risk has been based on determination of urinary composition, a process requiring collection of at least two 24 h urine samples, which is tedious for patients. The most important feature of urinary lithogenic risk is the balance between various urinary parameters, although unknown factors may also be involved. The objective of this study was to compare data obtained using a commercial kit with those of a laboratory prototype, using a multicentre approach, to validate the utility of these methods in routine clinical practice. A simple new commercial test (NefroPlus®; Sarstedt AG & Co., Nümbrecht, Germany), which evaluates the capacity of urine to crystallize calcium salts and thus permits detection of patients at risk of stone development, was compared with a prototype test previously described by this group. Overnight urine samples from 64 volunteers were used in these comparisons. The commercial test was also used to evaluate urine samples from 83 subjects in one of three hospitals. The two methods were in near-complete agreement (98%) with respect to test results. The multicentre data were: sensitivity 94.7%; specificity 76.9%; positive predictive value (lithogenic urine) 90.0%; negative predictive value (non-lithogenic urine) 87.0%; test efficacy 89.2%. The new commercial NefroPlus test offers fast and inexpensive evaluation of the overall risk of developing urinary calcium-containing calculi.
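The five reported figures all derive from a 2x2 confusion matrix. The abstract gives only the percentages; the counts below (TP = 54, FP = 6, FN = 3, TN = 20) are one confusion matrix consistent with the published percentages and n = 83, shown purely to illustrate how the figures are computed, not as the study's actual tabulation:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test-performance figures from a 2x2 confusion matrix
    (tp = true positives, fp = false positives, fn = false negatives,
    tn = true negatives)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),                        # positive predictive value
        "npv": tn / (tn + fn),                        # negative predictive value
        "efficacy": (tp + tn) / (tp + fp + fn + tn),  # overall accuracy
    }

# Illustrative counts consistent with the reported multicentre data (n = 83):
m = diagnostic_metrics(tp=54, fp=6, fn=3, tn=20)
# sensitivity 54/57 ~ 94.7%, specificity 20/26 ~ 76.9%,
# ppv 54/60 = 90.0%, npv 20/23 ~ 87.0%, efficacy 74/83 ~ 89.2%
```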
Adaptive boundary concentration control using Zakai equation
NASA Astrophysics Data System (ADS)
Tenno, R.; Mendelson, A.
2010-06-01
A mean-variance control problem is formulated for a partially observed nonlinear system that includes unknown constant parameters. A physical prototype of the system is the cathode surface reaction in an electrolysis cell, where the controller's aim is to keep the boundary concentration of species in the near vicinity of the cathode surface low but not zero. The boundary concentration is a diffusion-controlled process observed through the measured current density and, in practice, controlled through the applied voltage. This incomplete-data control problem is converted into a complete-data problem, the so-called separated control problem, whose solution is given by the infinite-dimensional Zakai equation. In this article, the separated control problem is solved numerically using pathwise integration of the Zakai equation. The article demonstrates precise tracking of the target trajectory, with rapid convergence of the estimates to the unknown parameters taking place simultaneously with control.
Acceptable Tolerances for Matching Icing Similarity Parameters in Scaling Applications
NASA Technical Reports Server (NTRS)
Anderson, David N.
2003-01-01
This paper reviews past work and presents new data to evaluate how changes in similarity parameters affect ice shapes and how closely scale values of the parameters should match reference values. Experimental ice shapes presented are from tests by various researchers in the NASA Glenn Icing Research Tunnel. The parameters reviewed are the modified inertia parameter (which determines the stagnation collection efficiency), accumulation parameter, freezing fraction, Reynolds number, and Weber number. It was demonstrated that a good match of scale and reference ice shapes could sometimes be achieved even when values of the modified inertia parameter did not match precisely. Consequently, there can be some flexibility in setting scale droplet size, which is the test condition determined from the modified inertia parameter. A recommended guideline is that the modified inertia parameter be chosen so that the scale stagnation collection efficiency is within 10 percent of the reference value. The scale accumulation parameter and freezing fraction should also be within 10 percent of their reference values. The Weber number based on droplet size and water properties appears to be a more important scaling parameter than one based on model size and air properties. Scale values of both the Reynolds and Weber numbers need to be in the range of 60 to 160 percent of the corresponding reference values. The effects of variations in other similarity parameters have yet to be established.
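The tolerance guidelines above reduce to simple band checks on the ratio of scale to reference parameter values. A minimal sketch, with the parameter keys (`beta0` for stagnation collection efficiency, `Ac` for accumulation parameter, `n0` for freezing fraction, `Re`, `We`) chosen here as illustrative names:

```python
def check_scaling_tolerances(scale, reference):
    """Check scale-test icing similarity parameters against reference
    values using the guidelines summarized above:
      - collection efficiency (beta0), accumulation parameter (Ac) and
        freezing fraction (n0): within 10% of the reference value;
      - Reynolds (Re) and Weber (We) numbers: 60-160% of the reference.
    `scale` and `reference` are dicts keyed by parameter name; returns
    a dict mapping each parameter to True (within tolerance) or False."""
    tight = ("beta0", "Ac", "n0")  # +/- 10% parameters
    loose = ("Re", "We")           # 60-160% parameters
    report = {}
    for key in tight:
        ratio = scale[key] / reference[key]
        report[key] = 0.90 <= ratio <= 1.10
    for key in loose:
        ratio = scale[key] / reference[key]
        report[key] = 0.60 <= ratio <= 1.60
    return report
```

For example, a scale condition with beta0 at 95% of reference, Ac at 107%, n0 at 103%, Re at 70% and We at 80% passes every band, whereas dropping beta0 to 87.5% of reference would flag the collection efficiency as out of tolerance.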