Tune-stabilized, non-scaling, fixed-field, alternating gradient accelerator
Johnstone, Carol J. [Warrenville, IL]
2011-02-01
An FFAG is a particle accelerator having turning magnets with a linear field gradient for confinement and a large edge angle to compensate for acceleration. FODO cells contain focus magnets and defocus magnets that are specified by a number of parameters. A set of seven equations, called the FFAG equations, relates the parameters to one another. A set of constraints, called the FFAG constraints, restricts the FFAG equations. Selecting a few parameters, such as injection momentum, extraction momentum, and drift distance, reduces the number of unknown parameters to seven. Seven equations with seven unknowns can then be solved to yield the values for all the parameters and thereby fully specify an FFAG.
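The closing step of the abstract, solving seven equations in seven unknowns, is a standard square nonlinear solve. Since the FFAG equations themselves are not reproduced here, the sketch below applies Newton's method with a finite-difference Jacobian to a placeholder three-equation system; the function `F` is purely illustrative:

```python
import numpy as np

def newton_solve(F, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 for a square nonlinear system via Newton's method
    with a finite-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        # Finite-difference Jacobian, column by column
        n = len(x)
        J = np.empty((n, n))
        h = 1e-7
        for j in range(n):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - f) / h
        x = x - np.linalg.solve(J, f)
    return x

# Placeholder 3-equation system standing in for the seven FFAG equations
def F(x):
    a, b, c = x
    return np.array([a + b + c - 6.0,
                     a * b - 2.0,
                     a - c + 1.0])

sol = newton_solve(F, x0=[1.0, 1.0, 1.0])
```

With the real FFAG equations and fixed injection momentum, extraction momentum, and drift distance, the same solver shape would return the remaining seven lattice parameters.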
Spatiotemporal Bayesian analysis of Lyme disease in New York state, 1990-2000.
Chen, Haiyan; Stratton, Howard H; Caraco, Thomas B; White, Dennis J
2006-07-01
Mapping ordinarily increases our understanding of nontrivial spatial and temporal heterogeneities in disease rates. However, the large number of parameters required by the corresponding statistical models often complicates detailed analysis. This study investigates the feasibility of a fully Bayesian hierarchical regression approach to the problem and identifies how it outperforms two more popular methods: crude rate estimates (CRE) and empirical Bayes standardization (EBS). In particular, we apply a fully Bayesian approach to the spatiotemporal analysis of Lyme disease incidence in New York state for the period 1990-2000. These results are compared with those obtained by CRE and EBS in Chen et al. (2005). We show that the fully Bayesian regression model not only gives more reliable estimates of disease rates than the other two approaches but also allows for tractable models that can accommodate more numerous sources of variation and unknown parameters.
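As a minimal illustration of why a Bayesian treatment outperforms crude rate estimates, the sketch below uses a conjugate Gamma-Poisson model for per-county disease rates; this is a drastic simplification of the hierarchical spatiotemporal model above, and the counts, exposures, and prior values are invented for illustration:

```python
import numpy as np

# Hypothetical county data: observed case counts and person-years at risk
cases = np.array([0, 3, 12, 1])
person_years = np.array([500.0, 1000.0, 4000.0, 200.0])

# Crude rate estimate (CRE): cases / exposure, unstable for small counties
cre = cases / person_years

# Conjugate Gamma(a, b) prior on the rate; the posterior is
# Gamma(a + cases, b + person_years), whose mean is computed below.
a, b = 2.0, 1000.0          # arbitrary prior with mean rate a/b = 0.002
posterior_mean = (a + cases) / (b + person_years)
```

Each posterior mean is pulled from the crude rate toward the prior mean, most strongly for counties with little exposure, which is the stabilizing behavior the abstract attributes to the Bayesian estimates.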
Rendezvous with connectivity preservation for multi-robot systems with an unknown leader
NASA Astrophysics Data System (ADS)
Dong, Yi
2018-02-01
This paper studies the leader-following rendezvous problem with connectivity preservation for multi-agent systems composed of uncertain multi-robot systems subject to external disturbances and an unknown leader, both of which are generated by a so-called exosystem with parametric uncertainty. By combining internal model design, potential function techniques, and adaptive control, two distributed control strategies are proposed to maintain the connectivity of the communication network, to achieve asymptotic tracking of all the followers to the output of the unknown leader system, and to reject unknown external disturbances. It is also worth mentioning that the uncertain parameters in the multi-robot systems and exosystem are further allowed to belong to unknown and unbounded sets when applying the second fully distributed control law, which contains a dynamic gain inspired by high-gain adaptive control and the self-tuning regulator.
Distributed weighted least-squares estimation with fast convergence for large-scale systems.
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.
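A sketch of the estimation target and an iterative route to it, assuming the global sums over sub-systems can be formed (the paper's algorithm forms them through neighborhood communication only; here they are computed directly, and the step size stands in for the convergence-rate scaling parameter):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three sub-systems, each with a linear measurement y_i = A_i x + noise
x_true = np.array([1.0, -2.0, 0.5])
subsystems = []
for _ in range(3):
    A = rng.normal(size=(4, 3))
    y = A @ x_true + 0.01 * rng.normal(size=4)
    W = np.eye(4)              # measurement weights (inverse noise covariance)
    subsystems.append((A, W, y))

# Global WLS normal equations: (sum A_i^T W_i A_i) x = sum A_i^T W_i y_i
H = sum(A.T @ W @ A for A, W, y in subsystems)
g = sum(A.T @ W @ y for A, W, y in subsystems)
x_wls = np.linalg.solve(H, g)

# Iterative solution: scaled gradient steps on the WLS cost. The step size
# plays the role of a convergence-rate scaling parameter.
alpha = 1.0 / np.linalg.eigvalsh(H).max()
x = np.zeros(3)
for _ in range(2000):
    x = x + alpha * (g - H @ x)
```

The iteration converges to the same globally optimal estimate as the closed-form solve, which is the property the distributed algorithm achieves using only neighbor-to-neighbor exchange.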
Exploring theory space with Monte Carlo reweighting
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...
2014-10-13
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
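The reweighting idea reduces to importance weights given by the density ratio of the new model to the benchmark model used for generation. A sketch with one-dimensional Gaussian toy "models" (the densities and the 0.5 shift are placeholders, not physics):

```python
import numpy as np

rng = np.random.default_rng(42)

def gauss_pdf(x, mu, sigma):
    """Normal density, standing in for a model's event probability density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# "Fully simulated" sample generated under an old benchmark model
x = rng.normal(loc=0.0, scale=1.0, size=200_000)

# Reweight each event by the density ratio new / old
w = gauss_pdf(x, 0.5, 1.0) / gauss_pdf(x, 0.0, 1.0)

# A self-normalized weighted expectation reproduces the new model's mean
# without regenerating and re-simulating events
mean_new = np.sum(w * x) / np.sum(w)
```

The reweighted sample behaves statistically like a sample from the new benchmark, which is what lets one Monte Carlo production be re-used across a theory parameter scan.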
PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.
Xia, Jing; Wang, Michelle Yongmei
Analysis of the blood oxygenation level dependent (BOLD) effect in functional magnetic resonance imaging (fMRI) is typically based on recent ground-breaking time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models. It is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filter based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning to take full advantage of the dynamic information in the BOLD signals. Third, during learning of the unknown static parameters, we employ low-dimensional sufficient statistics for efficiency and to avoid potential degeneracy of the parameters. The performance of the proposed method is validated using both simulated data and real BOLD fMRI data.
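A minimal bootstrap particle filter on a classic toy nonlinear state-space model illustrates the sequential Monte Carlo machinery; the model below is a stand-in, not the hemodynamic BOLD model, and the sufficient-statistics parameter-learning step is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy nonlinear state-space model (a stand-in for the hemodynamic model):
# x_t = 0.5 x_{t-1} + 25 x_{t-1} / (1 + x_{t-1}^2) + process noise
# y_t = x_t^2 / 20 + measurement noise
T, N = 50, 2000
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.5 * x_true[t-1] + 25 * x_true[t-1] / (1 + x_true[t-1]**2) \
                + rng.normal(0, 1.0)
    y[t] = x_true[t]**2 / 20 + rng.normal(0, 1.0)

particles = rng.normal(0, 2.0, size=N)
estimates = np.zeros(T)
for t in range(1, T):
    # Propagate particles through the state equation
    particles = 0.5 * particles + 25 * particles / (1 + particles**2) \
                + rng.normal(0, 1.0, size=N)
    # Weight by the measurement likelihood, then estimate and resample
    w = np.exp(-0.5 * (y[t] - particles**2 / 20)**2)
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    particles = rng.choice(particles, size=N, p=w)
```

Sequential parameter learning would extend this loop by carrying per-particle sufficient statistics for the static parameters and refreshing parameter draws at each step.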
On selecting a prior for the precision parameter of Dirichlet process mixture models
Dorazio, R.M.
2009-01-01
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
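Prior sensitivity to the precision parameter can be made concrete through the prior expected number of clusters, E[K] = sum over i = 1..n of α/(α + i − 1). A sketch of choosing α so that E[K] matches a prior guess (the target of 5 clusters and the bisection bounds are arbitrary):

```python
def expected_clusters(alpha, n):
    """Prior expected number of clusters in a DP mixture with n observations:
    E[K] = sum_{i=1}^{n} alpha / (alpha + i - 1)."""
    return sum(alpha / (alpha + i) for i in range(n))

def alpha_for_target(target_k, n, lo=1e-4, hi=1e4):
    """Bisection for the alpha giving E[K] = target_k
    (E[K] is strictly increasing in alpha)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if expected_clusters(mid, n) < target_k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = alpha_for_target(target_k=5.0, n=100)
```

Mapping a prior belief about the number of clusters onto α in this way is one simple route to an informative prior; the paper's construction also covers the case with no such prior information.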
NASA Astrophysics Data System (ADS)
Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan
2017-08-01
We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated, PDF-regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ − 1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃7% and ≃10% at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ − 1 of the IGM temperature-density relation with a precision of ±8.6% at z = 3 and ±6.1% at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.
Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just
2003-01-01
A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
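The repeated conditional draw for augmented liabilities is a truncated normal sample. A minimal sketch using `scipy.stats.truncnorm` for a binary trait observed as 1 (the mean and threshold values are arbitrary):

```python
import numpy as np
from scipy.stats import truncnorm

# Draw liabilities for a binary trait observed as 1: the liability must
# exceed the threshold t given mean mu and unit residual variance on the
# underlying liability scale.
mu, t = 0.3, 0.0
a = (t - mu) / 1.0          # lower truncation bound in standardized units
samples = truncnorm.rvs(a, np.inf, loc=mu, scale=1.0,
                        size=10_000, random_state=0)
```

For a trait observed as 0, the draw would instead be truncated from above at the threshold; the multivariate case in the paper samples from a multivariate truncated normal conditional on the other traits.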
Autopilot for frequency-modulation atomic force microscopy.
Kuchuk, Kfir; Schlesinger, Itai; Sivan, Uri
2015-10-01
One of the most challenging aspects of operating an atomic force microscope (AFM) is finding optimal feedback parameters. This statement applies particularly to frequency-modulation AFM (FM-AFM), which utilizes three feedback loops to control the cantilever excitation amplitude, cantilever excitation frequency, and z-piezo extension. These loops are regulated by a set of feedback parameters, tuned by the user to optimize stability, sensitivity, and noise in the imaging process. Optimization of these parameters is difficult due to the coupling between the frequency and z-piezo feedback loops by the non-linear tip-sample interaction. Four proportional-integral (PI) parameters and two lock-in parameters regulating these loops require simultaneous optimization in the presence of a varying unknown tip-sample coupling. Presently, this optimization is done manually in a tedious process of trial and error. Here, we report on the development and implementation of an algorithm that computes the control parameters automatically. The algorithm reads the unperturbed cantilever resonance frequency, its quality factor, and the z-piezo driving signal power spectral density. It analyzes the poles and zeros of the total closed loop transfer function, extracts the unknown tip-sample transfer function, and finds four PI parameters and two lock-in parameters for the frequency and z-piezo control loops that optimize the bandwidth and step response of the total system. Implementation of the algorithm in a home-built AFM shows that the calculated parameters are consistently excellent and rarely require further tweaking by the user. The new algorithm saves the precious time of experienced users, facilitates utilization of FM-AFM by casual users, and removes the main hurdle on the way to fully automated FM-AFM.
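The interplay between PI gains and step response can be sketched with a discrete-time simulation of a PI controller driving a first-order plant; the plant time constant and the gains below are illustrative choices, not values the reported algorithm would compute:

```python
import numpy as np

def step_response(kp, ki, tau=1e-3, dt=1e-5, t_end=0.05, setpoint=1.0):
    """Euler simulation of a PI controller on a first-order plant
    y' = (u - y) / tau. Returns the output trajectory."""
    n = int(t_end / dt)
    y, integral = 0.0, 0.0
    out = np.zeros(n)
    for k in range(n):
        e = setpoint - y
        integral += e * dt
        u = kp * e + ki * integral      # PI control law
        y += dt * (u - y) / tau         # first-order plant update
        out[k] = y
    return out

y = step_response(kp=0.5, ki=200.0)
```

Scanning `kp` and `ki` over such a simulation shows the bandwidth/overshoot trade-off that the autopilot algorithm resolves analytically from the closed-loop poles and zeros.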
Parameter Estimation for GRACE-FO Geometric Ranging Errors
NASA Astrophysics Data System (ADS)
Wegener, H.; Mueller, V.; Darbeheshti, N.; Naeimi, M.; Heinzel, G.
2017-12-01
Onboard GRACE-FO, the novel Laser Ranging Instrument (LRI) serves as a technology demonstrator, but it is a fully functional instrument providing an additional high-precision measurement of the primary mission observable: the biased range between the two spacecraft. Its two largest expected error sources are laser frequency noise and tilt-to-length (TTL) coupling. While not much can be done about laser frequency noise, the mechanics of the TTL error are well understood. They depend, however, on unknown parameters. In order to improve the quality of the ranging data, it is hence essential to accurately estimate these parameters and remove the resulting TTL error from the data. Means to do so will be discussed: in particular, the possibility of using calibration maneuvers, the utility of the attitude information provided by the LRI via Differential Wavefront Sensing (DWS), and the benefit of combining LRI ranging data with data from the established microwave ranging instrument.
Parameter-space metric of semicoherent searches for continuous gravitational waves
NASA Astrophysics Data System (ADS)
Pletsch, Holger J.
2010-08-01
Continuous gravitational-wave (CW) signals such as those emitted by spinning neutron stars are an important target class for current detectors. However, the enormous computational demand prohibits fully coherent broadband all-sky searches for previously unknown CW sources over wide ranges of parameter space and for yearlong observation times. More efficient hierarchical "semicoherent" search strategies divide the data into segments much shorter than one year, which are analyzed coherently; then detection statistics from different segments are combined incoherently. To optimally perform the incoherent combination, understanding of the underlying parameter-space structure is required. This problem is addressed here by using new coordinates on the parameter space, which yield the first analytical parameter-space metric for the incoherent combination step. This semicoherent metric applies to broadband all-sky surveys (also embedding directed searches at fixed sky position) for isolated CW sources. Furthermore, the additional metric resolution attained through the combination of segments is studied. Of the search parameters (sky position, frequency, and frequency derivatives), only the metric resolution in the frequency derivatives is found to significantly increase with the number of segments.
Tuning into Scorpius X-1: adapting a continuous gravitational-wave search for a known binary system
NASA Astrophysics Data System (ADS)
Meadors, Grant David; Goetz, Evan; Riles, Keith
2016-05-01
We describe how the TwoSpect data analysis method for continuous gravitational waves (GWs) has been tuned for directed sources such as the low-mass X-ray binary (LMXB), Scorpius X-1 (Sco X-1). A comparison of five search algorithms generated simulations of the orbital and GW parameters of Sco X-1. Whereas that comparison focused on relative performance, here the simulations help quantify the sensitivity enhancement and parameter estimation abilities of this directed method, derived from an all-sky search for unknown sources, using doubly Fourier-transformed data. Sensitivity is shown to be enhanced when the source sky location and period are known, because we can run a fully templated search, bypassing the all-sky hierarchical stage using an incoherent harmonic sum. The GW strain and frequency, as well as the projected semi-major axis of the binary system, are recovered and uncertainty estimated, for simulated signals that are detected. Upper limits for GW strain are set for undetected signals. Applications to future GW observatory data are discussed. Robust against spin-wandering and computationally tractable despite an unknown frequency, this directed search is an important new tool for finding gravitational signals from LMXBs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stroeer, Alexander; Veitch, John
The Laser Interferometer Space Antenna (LISA) places new demands on data analysis efforts in its all-sky gravitational wave survey, recording simultaneously thousands of galactic compact object binary foreground sources and tens to hundreds of background sources such as binary black hole mergers and extreme-mass-ratio inspirals. We approach this problem with an adaptive and fully automatic Reversible Jump Markov Chain Monte Carlo sampler, able to sample from the joint posterior density function (as established by Bayes' theorem) for a given mixture of signals "out of the box", handling the total number of signals as an additional unknown parameter beside the unknown parameters of each individual source and the noise floor. We show, in examples from the LISA Mock Data Challenge implementing the full response of LISA in its TDI description, that this sampler successfully extracts monochromatic double white dwarf signals out of colored instrumental noise and additional foreground and background noise in a global fitting approach. We present two examples with a fixed number of signals (MCMC sampling) and one example with an unknown number of signals (RJ-MCMC), the latter further promoting the idea behind an experimental adaptation of the model indicator proposal densities in the main sampling stage. We note that the observed runtimes and degeneracies in parameter extraction limit the examples shown to the extraction of a low but realistic number of signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spycher, Nicolas; Peiffer, Loic; Finsterle, Stefan
GeoT implements the multicomponent geothermometry method developed by Reed and Spycher (1984, Geochim. Cosmochim. Acta 46, 513–528) in a stand-alone computer program, to ease the application of this method and to improve the prediction of geothermal reservoir temperatures using full and integrated chemical analyses of geothermal fluids. Reservoir temperatures are estimated from statistical analyses of mineral saturation indices computed as a function of temperature. The reconstruction of the deep geothermal fluid compositions and the geothermometry computations are all implemented in the same computer program, allowing unknown or poorly constrained input parameters to be estimated by numerical optimization using existing parameter estimation software such as iTOUGH2, PEST, or UCODE. This integrated geothermometry approach presents advantages over classical geothermometers for fluids that have not fully equilibrated with reservoir minerals and/or that have been subject to processes such as dilution and gas loss.
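The underlying estimation principle, that mineral saturation indices converge toward zero near the true reservoir temperature, can be sketched with synthetic saturation-index curves constructed to vanish at a known temperature (the curves are fabricated for illustration, not real mineral data):

```python
import numpy as np

# Temperature grid (deg C) and synthetic saturation-index curves for three
# "minerals", constructed to vanish simultaneously at T = 200 deg C.
T = np.arange(50, 351)
slopes = np.array([0.010, -0.015, 0.022])
si = slopes[:, None] * (T[None, :] - 200.0)

# Estimate the reservoir temperature as the grid point minimizing the
# spread of saturation indices around zero (RMS across minerals).
spread = np.sqrt(np.mean(si**2, axis=0))
t_est = T[np.argmin(spread)]
```

With real fluid analyses the curves do not intersect exactly, which is why GeoT uses statistical measures of clustering and couples them to parameter estimation software for poorly constrained inputs.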
Experiments of thermomechanical fatigue of SMAs
NASA Astrophysics Data System (ADS)
Lagoudas, Dimitris C.; Miller, David A.
1999-07-01
As SMA wires gain popularity for use as actuators, one constitutive parameter that remains unknown is the thermomechanical fatigue life. Even though the effect of thermal cycles on the transformation characteristics of SMAs has been studied, these tests have not been extended to high numbers of cycles. In this study, a novel test frame developed to study the thermomechanical fatigue life of SMAs is described. Additionally, a testing protocol necessary to fully establish the fatigue characteristics of SMAs under various conditions is discussed. Initial results show a substantial increase in the number of cycles to failure as the applied stress level is reduced to approximately 100 MPa.
Bayesian methods for characterizing unknown parameters of material models
Emery, J. M.; Grigoriu, M. D.; Field Jr., R. V.
2016-02-04
A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). As a result, the Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
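The posterior-update step can be sketched with a grid approximation for a scalar unknown parameter observed through noisy measurements; the measurements, noise level, and flat prior below are hypothetical stand-ins for the material-model setting:

```python
import numpy as np

# Hypothetical measurements of a quantity of interest that depends on the
# unknown parameter theta (here, directly: y = theta + N(0, sigma^2) noise).
y = np.array([1.9, 2.1, 2.0, 2.05])
sigma = 0.1

theta = np.linspace(0.0, 4.0, 4001)        # grid over the parameter
prior = np.ones_like(theta)                # flat prior (prior information)
log_like = -0.5 * np.sum((y[:, None] - theta[None, :])**2, axis=0) / sigma**2
posterior = prior * np.exp(log_like - log_like.max())
posterior /= posterior.sum() * (theta[1] - theta[0])   # normalize to a density

theta_map = theta[np.argmax(posterior)]
```

In the paper's setting the likelihood comes from forward model runs rather than a closed form, and the posterior then drives Monte Carlo or SROM solutions of the stochastic equation.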
NASA Astrophysics Data System (ADS)
Chaney, N.; Wood, E. F.
2014-12-01
The increasing accessibility of high-resolution land data (< 100 m) and high performance computing allows improved parameterizations of subgrid hydrologic processes in macroscale land surface models. Continental scale fully distributed modeling at these spatial scales is possible; however, its practicality for operational use is still unknown due to uncertainties in input data, model parameters, and storage requirements. To address these concerns, we propose a modeling framework that provides the spatial detail of a fully distributed model yet maintains the benefits of a semi-distributed model. In this presentation we will introduce DTOPLATS-MP, a coupling between the NOAH-MP land surface model and the Dynamic TOPMODEL hydrologic model. This new model captures a catchment's spatial heterogeneity by clustering high-resolution land datasets (soil, topography, and land cover) into hundreds of hydrologically similar units (HSUs). A prior DEM analysis defines the connections between HSUs. At each time step, the 1D land surface model updates each HSU; the HSUs then interact laterally via the subsurface and surface. When compared to the fully distributed form of the model, this framework allows a significant decrease in computation and storage while providing most of the same information and enabling parameter transferability. As a proof of concept, we will show how this new modeling framework can be run over CONUS at a 30-meter spatial resolution. For each catchment in the WBD HUC-12 dataset, the model is run between 2002 and 2012 using available high-resolution continental scale land and meteorological datasets over CONUS (dSSURGO, NLCD, NED, and NCEP Stage IV). For each catchment, the model is run with 1000 model parameter sets obtained from a Latin hypercube sample. This exercise will illustrate the feasibility of running the model operationally at continental scales while accounting for model parameter uncertainty.
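The clustering step, grouping high-resolution cells with similar soil, topography, and land-cover attributes into HSUs, can be sketched with a small k-means over synthetic cell attributes (the attribute values and the choice of two clusters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic high-resolution cells: columns could stand for slope, soil depth,
# and a land-cover index, drawn from two distinct terrain types.
cells = np.vstack([rng.normal([0.1, 1.5, 0.2], 0.05, size=(100, 3)),
                   rng.normal([0.6, 0.4, 0.8], 0.05, size=(100, 3))])

def kmeans(x, k, iters=20):
    """Minimal k-means: assign cells to the nearest centroid, recompute."""
    centroids = x[np.linspace(0, len(x) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = x[labels == j].mean(axis=0)
    return labels, centroids

labels, hsu_centroids = kmeans(cells, k=2)
```

Each resulting cluster plays the role of one HSU: the 1D land surface model is run once per cluster instead of once per cell, which is the source of the computational savings described above.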
Deployable reconnaissance from a VTOL UAS in urban environments
NASA Astrophysics Data System (ADS)
Barnett, Shane; Bird, John; Culhane, Andrew; Sharkasi, Adam; Reinholtz, Charles
2007-04-01
Reconnaissance collection in unknown or hostile environments can be a dangerous and life threatening task. To reduce this risk, the Unmanned Systems Group at Virginia Tech has produced a fully autonomous reconnaissance system able to provide live video reconnaissance from outside and inside unknown structures. This system consists of an autonomous helicopter which launches a small reconnaissance pod inside a building and an operator control unit (OCU) on a ground station. The helicopter is a modified Bergen Industrial Twin using a Rotomotion flight controller and can fly missions of up to one half hour. The mission planning OCU can control the helicopter remotely through teleoperation or fully autonomously by GPS waypoints. A forward facing camera and template matching aid in navigation by identifying the target building. Once the target structure is identified, vision algorithms will center the UAS adjacent to open windows or doorways. Tunable parameters in the vision algorithm account for varying launch distances and opening sizes. Launch of the reconnaissance pod may be initiated remotely through a human in the loop or autonomously. Compressed air propels the half pound stationary pod or the larger mobile pod into the open portals. Once inside the building, the reconnaissance pod will then transmit live video back to the helicopter. The helicopter acts as a repeater node for increased video range and simplification of communication back to the ground station.
Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering
NASA Astrophysics Data System (ADS)
Bruno, Marcelo G. S.; Dias, Stiven S.
2014-12-01
We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.
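The communication pattern behind random-exchange diffusion can be sketched with pairwise gossip averaging, which drives every node of a connected network to the global mean; the graph and node values are illustrative, and the actual ReDif-PF exchanges particle-based quantities rather than scalars:

```python
import numpy as np

# Local scalar statistics at 5 nodes of a path graph 0-1-2-3-4
values = np.array([4.0, 0.0, 7.0, 1.0, 3.0])
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
target = values.mean()                     # consensus value: the global mean

# Repeated pairwise exchange: neighbors replace both values by their average.
# Each exchange preserves the sum, so the fixed point is the global mean.
for _ in range(500):
    for i, j in edges:
        avg = 0.5 * (values[i] + values[j])
        values[i] = values[j] = avg
```

Diffusion schemes like this need only one exchange per neighbor pair per step, which is why their communication cost sits far below iterative consensus between measurement arrivals, at the price of the steady-state degradation noted above.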
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.
2017-09-01
A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach jointly estimates the unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of the input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating unknown FE model parameters and unknown input excitations.
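The deterministic sampling at the heart of the unscented Kalman filter can be sketched as follows. The scaling constants and the toy two-component augmented state (one response quantity plus one model parameter) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-1, beta=2.0, kappa=0.0):
    """Generate the 2n+1 sigma points and weights of the scaled
    unscented transform for a state with the given mean and covariance."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)       # matrix square root
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    return np.array(pts), wm, wc

# Augmented state: e.g. [response quantity, unknown stiffness parameter].
x = np.array([0.0, 1.0])
P = np.diag([0.1, 0.5])
pts, wm, wc = sigma_points(x, P)
mean_back = wm @ pts                               # reproduces x exactly
```

Propagating these points through the nonlinear FE model and re-weighting replaces the response-sensitivity computation that a gradient-based estimator would need, which is exactly why the filter suits joint parameter/input estimation here.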
Wang, Junbai; Wu, Qianqian; Hu, Xiaohua Tony; Tian, Tianhai
2016-11-01
Investigating the dynamics of genetic regulatory networks through high throughput experimental data, such as microarray gene expression profiles, is a very important but challenging task. One of the major hindrances in building detailed mathematical models for genetic regulation is the large number of unknown model parameters. To tackle this challenge, a new integrated method is proposed by combining a top-down approach and a bottom-up approach. First, the top-down approach uses probabilistic graphical models to predict the network structure of the DNA repair pathway that is regulated by the p53 protein. Two networks are predicted, namely a network of eight genes with eight inferred interactions and an extended network of 21 genes with 17 interactions. Then, the bottom-up approach using differential equation models is developed to study the detailed genetic regulations based on either a fully connected regulatory network or a gene network obtained by the top-down approach. Model simulation error, parameter identifiability and robustness are used as criteria to select the optimal network. Simulation results together with permutation tests of input gene network structures indicate that the prediction accuracy and robustness of the two networks predicted by the top-down approach are better than those of the corresponding fully connected networks. In particular, the proposed approach reduces the computational cost of inferring model parameters significantly. Overall, the new integrated method is a promising approach for investigating the dynamics of genetic regulation.
System and method for motor parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters, and a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
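The patent abstract does not spell out how the reference-motor data is combined, so the following is only a plausible sketch: estimate the unmeasured parameter as a distance-weighted average over the nearest reference motors in the space of known parameters. Every value and parameter name in the database below is invented for illustration.

```python
import numpy as np

# Hypothetical reference-motor database (invented values): columns are
# rated power [hp], voltage [V], full-load current [A], and a parameter
# that cannot be measured directly (say, rotor resistance [ohm]).
reference = np.array([
    [ 5.0, 230.0, 14.0, 0.42],
    [ 7.5, 230.0, 21.0, 0.31],
    [10.0, 460.0, 12.0, 0.25],
    [15.0, 460.0, 18.0, 0.18],
])

def estimate_unknown(known, refs, k=2):
    """Distance-weighted average of the unknown parameter over the k
    reference motors nearest in the (range-normalized) known-parameter
    space; a stand-in for the patent's unspecified matching logic."""
    feats = refs[:, :-1]
    scale = feats.max(axis=0) - feats.min(axis=0)
    d = np.linalg.norm((feats - known) / scale, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)
    return float(np.dot(w, refs[idx, -1]) / w.sum())

# A motor with known ratings but unmeasured rotor resistance.
est = estimate_unknown(np.array([9.0, 460.0, 13.0]), reference)
```

The estimate lands between the values of the two closest reference motors, which is the behavior one would want from any interpolation-style matching scheme.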
Parameter estimation in a structural acoustic system with fully nonlinear coupling conditions
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.
1994-01-01
A methodology for estimating physical parameters in a class of structural acoustic systems is presented. The general model under consideration consists of an interior cavity which is separated from an exterior noise source by an enclosing elastic structure. Piezoceramic patches are bonded to or embedded in the structure; these can be used both as actuators and sensors in applications ranging from the control of interior noise levels to the determination of structural flaws through nondestructive evaluation techniques. The presence and excitation of the patches, however, changes the geometry and material properties of the structure and involves unknown patch parameters, thus necessitating the development of parameter estimation techniques which are applicable in this coupled setting. In developing a framework for approximation, parameter estimation and implementation, strong consideration is given to the fact that the input operator is unbounded due to the discrete nature of the patches. Moreover, the model is weakly nonlinear as a result of the coupling mechanism between the structural vibrations and the interior acoustic dynamics. Within this context, an illustrative model is given, well-posedness and approximation results are discussed, and an applicable parameter estimation methodology is presented. The scheme is then illustrated through several numerical examples with simulations modeling a variety of commonly used structural acoustic techniques for system excitation and data collection.
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2006-01-01
The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups, where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications, one per group, are run, each treating the other unknown parameters appearing in its regression equation as if they were known perfectly, with said values provided by the recursive least squares estimates from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.
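A minimal scalar illustration of the idea, assuming a two-parameter model y = a·u + b·v split into the groups {a} and {b}: each group runs its own linear recursive least squares update while treating the other group's current estimate as if it were exact. The model, noise level, and gains are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 2.0, -0.5          # "true" parameters in y = a*u + b*v
a_hat, b_hat = 0.0, 0.0             # group estimates, updated in parallel
Pa, Pb = 100.0, 100.0               # scalar RLS covariances, one per group

for _ in range(500):
    u, v = rng.normal(size=2)
    y = a_true * u + b_true * v + 1e-3 * rng.normal()
    # Group {a}: isolate a linearly, treating b_hat as perfectly known.
    e = y - b_hat * v - a_hat * u
    k = Pa * u / (1.0 + Pa * u * u)
    a_hat += k * e
    Pa -= k * u * Pa
    # Group {b}: isolate b, treating the updated a_hat as perfectly known.
    e = y - a_hat * u - b_hat * v
    k = Pb * v / (1.0 + Pb * v * v)
    b_hat += k * e
    Pb -= k * v * Pb
```

Early cross-group errors wash out as both estimates converge, so the coupled nonlinear identification problem is solved with two cheap linear recursions, which is the claimed benefit of the segmentation.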
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Paul T.; Shadid, John N.; Sala, Marzio
In this study, results are presented for the large-scale parallel performance of an algebraic multilevel preconditioner for the solution of the drift-diffusion model for semiconductor devices. The preconditioner is the key numerical procedure determining the robustness, efficiency and scalability of the fully-coupled Newton-Krylov based, nonlinear solution method that is employed for this system of equations. The coupled system is comprised of a source-term-dominated Poisson equation for the electric potential, and two convection-diffusion-reaction type equations for the electron and hole concentrations. The governing PDEs are discretized in space by a stabilized finite element method. Solution of the discrete system is obtained through a fully-implicit time integrator, a fully-coupled Newton-based nonlinear solver, and a restarted GMRES Krylov linear system solver. The algebraic multilevel preconditioner is based on an aggressive-coarsening graph partitioning of the nonzero block structure of the Jacobian matrix. Representative performance results are presented for various choices of multigrid V-cycles and W-cycles and parameter variations for smoothers based on incomplete factorizations. Parallel scalability results are presented for solution of up to 10^8 unknowns on 4096 processors of a Cray XT3/4 and an IBM POWER eServer system.
Nowcasting sunshine number using logistic modeling
NASA Astrophysics Data System (ADS)
Brabec, Marek; Badescu, Viorel; Paulescu, Marius
2013-04-01
In this paper, we present a formalized approach to statistical modeling of the sunshine number, a binary indicator of whether the Sun is covered by clouds, introduced previously by Badescu (Theor Appl Climatol 72:127-136, 2002). Our statistical approach is based on a Markov chain and logistic regression and yields fully specified probability models whose unknown parameters are relatively easily identified and estimated from a set of empirical data (observed sunshine number and sunshine stability number series). We discuss the general structure of the model and its advantages, demonstrate its performance on real data, and compare its results to the classical ARIMA approach as a competitor. Since the model parameters have a clear interpretation, we also illustrate how, e.g., their inter-seasonal stability can be tested. We conclude with an outlook to future developments oriented to the construction of models allowing for a practically desirable smooth transition between data observed with different frequencies, and with a short discussion of the technical problems that such a goal brings.
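With a single binary lag, the Markov-chain/logistic formulation reduces to estimating two transition probabilities, which makes the identification step easy to sketch. The transition values below are invented for illustration and are not from the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a two-state Markov chain for the sunshine number s_t
# (1 = Sun not covered); the "true" persistence probabilities are assumed.
p_stay = {0: 0.9, 1: 0.8}            # P(s_t = s_{t-1} | s_{t-1})
s = [1]
for _ in range(5000):
    prev = s[-1]
    s.append(prev if rng.random() < p_stay[prev] else 1 - prev)
s = np.array(s)

# With one binary lag, the logistic model P(s_t=1 | s_{t-1}) is saturated,
# so maximum likelihood reduces to empirical transition frequencies.
prev, curr = s[:-1], s[1:]
p01 = curr[prev == 0].mean()         # estimate of P(1 | previous 0)
p11 = curr[prev == 1].mean()         # estimate of P(1 | previous 1)
b1 = np.log(p11 / (1 - p11)) - np.log(p01 / (1 - p01))  # logistic lag slope
```

The interpretable slope b1 (log-odds change per unit of the lagged indicator) is the kind of parameter whose inter-seasonal stability the paper proposes to test.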
NASA Astrophysics Data System (ADS)
Basin, M.; Maldonado, J. J.; Zendejo, O.
2016-07-01
This paper proposes a new mean-square filter and parameter estimator design for linear stochastic systems with unknown parameters over linear observations, where the unknown parameters are considered as combinations of Gaussian and Poisson white noises. The problem is treated by reducing the original problem to a filtering problem for an extended state vector that includes the parameters as additional states, modelled as combinations of independent Gaussian and Poisson processes. The solution to this filtering problem is based on the mean-square filtering equations for incompletely polynomial states confused with Gaussian and Poisson noises over linear observations. The resulting mean-square filter serves as an identifier for the unknown parameters. Finally, a simulation example shows the effectiveness of the proposed mean-square filter and parameter estimator.
Optimizing the choice of spin-squeezed states for detecting and characterizing quantum processes
Rozema, Lee A.; Mahler, Dylan H.; Blume-Kohout, Robin; ...
2014-11-07
Quantum metrology uses quantum states with no classical counterpart to measure a physical quantity with extraordinary sensitivity or precision. Most such schemes characterize a dynamical process by probing it with a specially designed quantum state. The success of such a scheme usually relies on the process belonging to a particular one-parameter family. If this assumption is violated, or if the goal is to measure more than one parameter, a different quantum state may perform better. In the most extreme case, we know nothing about the process and wish to learn everything. This requires quantum process tomography, which demands an informationally complete set of probe states. It is very convenient if this set is group covariant, i.e., each element is generated by applying an element of the quantum system's natural symmetry group to a single fixed fiducial state. In this paper, we consider metrology with 2-photon ("biphoton") states and report experimental studies of different states' sensitivity to small, unknown collective SU(2) rotations ["SU(2) jitter"]. Maximally entangled N00N states are the most sensitive detectors of such a rotation, yet they are also among the worst at fully characterizing an a priori unknown process. We identify (and confirm experimentally) the best SU(2)-covariant set for process tomography; these states are all less entangled than the N00N state, and are characterized by the fact that they form a 2-design.
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.
2016-12-01
Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. Here, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with the error model gives significantly more accurate predictions along with reasonable credible intervals.
Polarimetric subspace target detector for SAR data based on the Huynen dihedral model
NASA Astrophysics Data System (ADS)
Larson, Victor J.; Novak, Leslie M.
1995-06-01
Two new polarimetric subspace target detectors are developed based on a dihedral signal model for bright peaks within a spatially extended target signature. The first is a coherent dihedral target detector based on the exact Huynen model for a dihedral. The second is a noncoherent dihedral target detector based on the Huynen model with an extra unknown phase term. Expressions for these polarimetric subspace target detectors are developed for both additive Gaussian clutter and more general additive spherically invariant random vector clutter including the K-distribution. For the case of Gaussian clutter with unknown clutter parameters, constant false alarm rate implementations of these polarimetric subspace target detectors are developed. The performance of these dihedral detectors is demonstrated with real millimeter-wave fully polarimetric SAR data. The coherent dihedral detector which is developed with a more accurate description of a dihedral offers no performance advantage over the noncoherent dihedral detector which is computationally more attractive. The dihedral detectors do a better job of separating a set of tactical military targets from natural clutter compared to a detector that assumes no knowledge about the polarimetric structure of the target signal.
Bayesian inference in an item response theory model with a generalized student t link function
NASA Astrophysics Data System (ADS)
Azevedo, Caio L. N.; Migon, Helio S.
2012-10-01
In this paper we introduce a new item response theory (IRT) model with a generalized Student t link function with unknown degrees of freedom (df), named the generalized t-link (GtL) IRT model. In this model we consider only the difficulty parameter in the item response function. GtL is an alternative to the two-parameter logit and probit models, since the degrees of freedom play a role similar to that of the discrimination parameter. However, the behavior of the GtL curves differs from that of the two-parameter models and the usual Student t link, since in GtL the curves obtained from different df's can cross the probit curves at more than one latent trait level. The GtL model has properties similar to those of generalized linear mixed models, such as the existence of sufficient statistics and easy parameter interpretation. Also, many techniques of parameter estimation, model fit assessment and residual analysis developed for those models can be used for the GtL model. We develop fully Bayesian estimation and model fit assessment tools through a Metropolis-Hastings step within a Gibbs sampling algorithm. We consider prior sensitivity concerning the choice of the degrees of freedom. The simulation study indicates that the algorithm recovers all parameters properly. In addition, some Bayesian model fit assessment tools are considered. Finally, a real data set is analyzed using our approach and other usual models. The results indicate that our model fits the data better than the two-parameter models.
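The difficulty-only item response function with a Student-t link is short enough to sketch directly with scipy; the df and difficulty values below are arbitrary illustrations, not fitted values from the paper.

```python
import numpy as np
from scipy.stats import t, norm

def gtl_irf(theta, b, df):
    """GtL item response function: probability of a correct response given
    latent trait theta, item difficulty b, and a Student-t link with df
    degrees of freedom (df -> infinity recovers the probit link)."""
    return t.cdf(theta - b, df)

theta = np.linspace(-4, 4, 9)
heavy = gtl_irf(theta, 0.0, 3)       # heavy-tailed link, df = 3
probit = norm.cdf(theta)             # probit limit for comparison
```

The heavier tails of the t link lift the curve above the probit in the extremes, which is why curves for different df's can cross the probit curve at more than one latent trait level.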
Su, Fei; Wang, Jiang; Deng, Bin; Wei, Xi-Le; Chen, Ying-Yuan; Liu, Chen; Li, Hui-Yan
2015-02-01
The objective here is to explore the use of an adaptive input-output feedback linearization method to achieve an improved deep brain stimulation (DBS) algorithm for closed-loop control of the Parkinsonian state. The control law is based on a highly nonlinear computational model of Parkinson's disease (PD) with unknown parameters. The restoration of thalamic relay reliability is formulated as the desired outcome of the adaptive control methodology, and the DBS waveform is the control input. The control input is adjusted in real time according to estimates of the unknown parameters as well as the feedback signal. Simulation results show that the proposed adaptive control algorithm succeeds in restoring the relay reliability of the thalamus, and at the same time achieves accurate estimation of the unknown parameters. Our findings point to the potential value of an adaptive control approach that could be used to regulate the DBS waveform for more effective treatment of PD.
Optimal SVM parameter selection for non-separable and unbalanced datasets.
Jiang, Peng; Missoum, Samy; Chen, Zhao
2014-10-01
This article presents a study of three validation metrics used for the selection of optimal parameters of a support vector machine (SVM) classifier in the case of non-separable and unbalanced datasets. This situation is often encountered when the data is obtained experimentally or clinically. The three metrics selected in this work are the area under the ROC curve (AUC), accuracy, and balanced accuracy. These validation metrics are tested using computational data only, which enables the creation of fully separable sets of data. This way, non-separable datasets, representative of a real-world problem, can be created by projection onto a lower dimensional sub-space. The knowledge of the separable dataset, unknown in real-world problems, provides a reference to compare the three validation metrics using a quantity referred to as the "weighted likelihood". As an application example, the study investigates a classification model for hip fracture prediction. The data is obtained from a parameterized finite element model of a femur. The performance of the various validation metrics is studied for several levels of separability, ratios of unbalance, and training set sizes.
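The three validation metrics are straightforward to compute directly; the tiny unbalanced dataset below is invented to show how they can disagree on a non-separable, unbalanced set (AUC is computed as the Mann-Whitney rank statistic).

```python
import numpy as np

def validation_metrics(y_true, score, thresh=0.5):
    """Accuracy, balanced accuracy, and AUC for a binary classifier score."""
    y_pred = (score >= thresh).astype(int)
    acc = (y_pred == y_true).mean()
    tpr = y_pred[y_true == 1].mean()          # sensitivity
    tnr = 1.0 - y_pred[y_true == 0].mean()    # specificity
    bal = 0.5 * (tpr + tnr)                   # balanced accuracy
    pos, neg = score[y_true == 1], score[y_true == 0]
    gt = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return acc, bal, gt + 0.5 * ties          # AUC via rank comparison

# Invented unbalanced toy set: 8 negatives, 2 positives.
y = np.array([0] * 8 + [1] * 2)
s = np.array([0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9])
acc, bal, auc = validation_metrics(y, s)
```

Here the threshold-dependent accuracy penalizes two borderline negatives even though the scores rank every positive above every negative (AUC = 1), the kind of disagreement the article's "weighted likelihood" comparison is designed to arbitrate.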
Lee, Eugene K; Tran, David D; Keung, Wendy; Chan, Patrick; Wong, Gabriel; Chan, Camie W; Costa, Kevin D; Li, Ronald A; Khine, Michelle
2017-11-14
Accurately predicting cardioactive effects of new molecular entities for therapeutics remains a daunting challenge. Immense research effort has been focused toward creating new screening platforms that utilize human pluripotent stem cell (hPSC)-derived cardiomyocytes and three-dimensional engineered cardiac tissue constructs to better recapitulate human heart function and drug responses. As these new platforms become increasingly sophisticated and high throughput, the drug screens result in larger multidimensional datasets. Improved automated analysis methods must therefore be developed in parallel to fully comprehend the cellular response across a multidimensional parameter space. Here, we describe the use of machine learning to comprehensively analyze 17 functional parameters derived from force readouts of hPSC-derived ventricular cardiac tissue strips (hvCTS) electrically paced at a range of frequencies and exposed to a library of compounds. A generated metric is effective for then determining the cardioactivity of a given drug. Furthermore, we demonstrate a classification model that can automatically predict the mechanistic action of an unknown cardioactive drug.
Ram Pressure Stripping Made Easy: An Analytical Approach
NASA Astrophysics Data System (ADS)
Köppen, J.; Jáchym, P.; Taylor, R.; Palouš, J.
2018-06-01
The removal of gas by ram pressure stripping of galaxies is treated by a purely kinematic description. The solution has two asymptotic limits: if the duration of the ram pressure pulse exceeds the period of vertical oscillations perpendicular to the galactic plane, the commonly used quasi-static criterion of Gunn & Gott is obtained, which uses the maximum ram pressure that the galaxy has experienced along its orbit. For shorter pulses, the outcome depends on the time-integrated ram pressure. This parameter pair fully describes the gas mass fraction that is stripped from a given galaxy. This approach closely reproduces results from SPH simulations. We show that typical galaxies follow a very tight relation in this parameter space, corresponding to a pressure pulse length of about 300 Myr. Thus, the Gunn & Gott criterion provides a good description for galaxies in larger clusters. Applying the analytic description to a sample of 232 Virgo galaxies from the GoldMine database, we show that the ICM indeed provides the ram pressures needed to explain the deficiencies. We can also distinguish current and past strippers, including objects whose stripping state was unknown.
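The quasi-static Gunn & Gott criterion referenced in the abstract compares the ram pressure ρ_ICM·v² with the gravitational restoring force per unit area of the disc, roughly 2πG·Σ_star·Σ_gas. The numerical values below are rough, assumed magnitudes for illustration, not quantities from the paper.

```python
import numpy as np

G = 6.674e-11                         # gravitational constant [SI units]

def stripped(rho_icm, v, sigma_star, sigma_gas):
    """Quasi-static Gunn & Gott test: gas is removed where the ram pressure
    rho_icm * v**2 exceeds the gravitational restoring force per unit area,
    approximately 2*pi*G*sigma_star*sigma_gas."""
    return rho_icm * v**2 > 2.0 * np.pi * G * sigma_star * sigma_gas

# Illustrative (assumed) numbers: ICM density 1e-23 kg/m^3, orbital speed
# 1500 km/s, stellar and gas surface densities in kg/m^2.
outer_disc = stripped(1e-23, 1.5e6, 0.1, 0.01)    # low surface density
inner_disc = stripped(1e-23, 1.5e6, 2.0, 1.0)     # deep potential well
```

Because surface densities fall with galactocentric radius, the same pressure strips the outer disc while leaving the inner disc bound, producing the truncated gas discs the paper's stripped fractions describe.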
iGeoT v1.0: Automatic Parameter Estimation for Multicomponent Geothermometry, User's Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spycher, Nicolas; Finsterle, Stefan
GeoT implements the multicomponent geothermometry method developed by Reed and Spycher [1984] into a stand-alone computer program to ease the application of this method and to improve the prediction of geothermal reservoir temperatures using full and integrated chemical analyses of geothermal fluids. Reservoir temperatures are estimated from statistical analyses of mineral saturation indices computed as a function of temperature. The reconstruction of the deep geothermal fluid compositions, and the geothermometry computations, are all implemented in the same computer program, allowing unknown or poorly constrained input parameters to be estimated by numerical optimization. This integrated geothermometry approach presents advantages over classical geothermometers for fluids that have not fully equilibrated with reservoir minerals and/or that have been subject to processes such as dilution and gas loss. This manual contains installation instructions for iGeoT, and briefly describes the input formats needed to run iGeoT in Automatic or Expert Mode. An example is also provided to demonstrate the use of iGeoT.
The choice of sample size: a mixed Bayesian / frequentist approach.
Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John
2009-04-01
Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.
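The mixed Bayesian/frequentist logic can be sketched by Monte Carlo: draw the true effect from the prior, simulate a trial, apply a frequentist approval rule, and maximize the expected net benefit over the sample size. The prior, benefit scale, cost, and candidate sizes below are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def expected_net_benefit(n, n_sims=4000):
    """Monte Carlo sketch of the Grundy/Lindley decision approach for a
    two-arm trial with n patients per arm: the regulator approves when the
    (approximate) z-statistic exceeds 1.96, and the sponsor's utility is
    the benefit of subsequent use minus the trial cost."""
    gain, cost_per_patient = 1e6, 100.0               # assumed utilities
    delta = rng.normal(0.3, 0.2, n_sims)              # prior on the effect
    z = rng.normal(delta * np.sqrt(n / 2.0), 1.0)     # approximate z-statistic
    approved = z > 1.96                               # frequentist rule
    return (gain * delta * approved).mean() - cost_per_patient * 2 * n

sizes = [10, 50, 100, 200, 400, 800]
best = max(sizes, key=expected_net_benefit)
```

Small trials rarely win approval and forfeit the benefit, while very large ones pay more for patients than the extra power is worth, so the expected net benefit peaks at an interior sample size.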
Effect of electromagnetic waves on human reproduction.
Wdowiak, Artur; Mazurek, Paweł A; Wdowiak, Anita; Bojar, Iwona
2017-03-31
Electromagnetic radiation (EMR) emitted by the natural environment, as well as by industrial and everyday appliances, constantly influences the human body. This type of energy may exert various effects on the functioning of living tissues, although the mechanisms conditioning this phenomenon have not been fully explained. It may be expected that the interactions between electromagnetic radiation and the living organism depend on the amount and parameters of the transmitted energy and the type of tissue exposed. Electromagnetic waves exert an influence on human reproduction by affecting the male and female reproductive systems, the developing embryo and, subsequently, the foetus. Knowledge concerning this problem is still being expanded; however, not all the factors conditioning human reproduction are yet known. The study presents the current state of knowledge concerning the problem, based on the latest scientific reports.
Dynamic Modeling from Flight Data with Unknown Time Skews
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2016-01-01
A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.
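The relative time-skew estimation can be illustrated in the frequency domain, where a pure delay between two channels appears as a linear phase slope in their cross-spectrum. The signal content, noise level, and skew below are invented for the sketch, not the paper's F-16 or flight-test data.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, skew = 0.01, 7                    # sample period [s]; true skew [samples]
t = np.arange(0, 20, dt)
u = np.sin(0.5 * t) + 0.3 * np.sin(1.7 * t)            # reference channel
y = np.roll(u, skew) + 0.01 * rng.normal(size=t.size)  # skewed, noisy channel

# A pure relative delay shows up as a linear phase slope in the
# cross-spectrum, so fit angle(Y * conj(U)) against angular frequency.
U, Y = np.fft.rfft(u), np.fft.rfft(y)
f = np.fft.rfftfreq(t.size, dt)
band = (f > 0) & (f < 1.0)            # keep only the well-excited low band
phase = np.unwrap(np.angle(Y[band] * np.conj(U[band])))
slope = np.polyfit(2 * np.pi * f[band], phase, 1)[0]   # = -skew * dt
skew_hat = -slope / dt                # recovered skew in samples
```

Restricting the fit to the excited band is essential with real flight data, where unexcited frequencies contribute only noise to the phase estimate; in the paper this skew estimate is refined jointly with the dynamic model parameters.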
Parameter identifiability of linear dynamical systems
NASA Technical Reports Server (NTRS)
Glover, K.; Willems, J. C.
1974-01-01
It is assumed that the system matrices of a stationary linear dynamical system are parametrized by a set of unknown parameters. The question considered here is, when can such a set of unknown parameters be identified from the observed data? Conditions for the local identifiability of a parametrization are derived in three situations: (1) when input/output observations are made, (2) when there exists an unknown feedback matrix in the system and (3) when the system is assumed to be driven by white noise and only output observations are made. Also a sufficient condition for global identifiability is derived.
Parameter identification of thermophilic anaerobic degradation of valerate.
Flotats, Xavier; Ahring, Birgitte K; Angelidaki, Irini
2003-01-01
The considered mathematical model of the decomposition of valerate presents three unknown kinetic parameters, two unknown stoichiometric coefficients, and three unknown initial concentrations for biomass. Applying a structural identifiability study, we concluded that it is necessary to perform simultaneous batch experiments with different initial conditions to estimate these parameters. Four simultaneous batch experiments were conducted at 55 degrees C, characterized by four different initial acetate concentrations. Product inhibition of valerate degradation by acetate was considered. Practical identification was performed by optimizing the sum of the multiple determination coefficients over all measured state variables and all experiments simultaneously. The estimated values of the kinetic parameters and stoichiometric coefficients were characterized by the parameter correlation matrix, the confidence interval, and Student's t-test at the 5% significance level, with positive results except for the saturation constant, for which additional experiments should be conducted to improve its identifiability. In this article, we discuss kinetic parameter estimation methods.
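The identifiability point, that a single experiment cannot separate certain parameters while experiments with different initial conditions can, shows up already in a stripped-down first-order analogue (the paper's actual model has Monod-type kinetics and acetate inhibition):

```python
import numpy as np

# Identifiability in miniature: with dV/dt = -k * X0 * V and constant
# unknown biomass X0, one batch gives V(t) = V(0)*exp(-k*X0*t), so only
# the product k*X0 is identifiable. A second batch whose initial biomass
# differs by a known factor separates the two.
t = np.linspace(0.0, 10.0, 20)
k_true, x0_a, x0_b = 0.4, 1.0, 2.5    # assumed values for the sketch
va = np.exp(-k_true * x0_a * t)       # batch A, noise-free for clarity
vb = np.exp(-k_true * x0_b * t)       # batch B, 2.5x the initial biomass

ra = -np.polyfit(t, np.log(va), 1)[0]  # lumped rate k*x0_a
rb = -np.polyfit(t, np.log(vb), 1)[0]  # lumped rate k*x0_b
ratio = rb / ra                        # recovers x0_b / x0_a
```

Each batch alone pins down only a lumped rate; it is the comparison across batches that resolves the individual factors, mirroring the paper's conclusion that simultaneous batch experiments with different initial conditions are required.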
Atmospherical simulations of the OMEGA/MEX observations
NASA Astrophysics Data System (ADS)
Melchiorri, R.; Drossart, P.; Combes, M.; Encrenaz, T.; Fouchet, T.; Forget, F.; Bibring, J. P.; Ignatiev, N.; Moroz, V.; OMEGA Team
The modeling of the atmospheric contribution to the martian spectrum is an important step in the OMEGA data analysis. A full line-by-line radiative transfer calculation is made for the gas absorption; the dust opacity component, in a first approximation, is calculated as an optically thin additive component. Due to the large number of parameters needed in the calculations (atmospheric pressure, water abundance, CO abundance, dust opacity and the geometric angles of observation), building a huge database to be interpolated for each observed OMEGA spectrum is not feasible. Simulating the observations allows us to fix all the orbital parameters and leave the unknown parameters as the only variables. Starting from the predictions of current meteorological models of Mars, we build a smaller database corresponding to each observation. We present here a first-order simulation, which consists of retrieving the atmospheric contribution from the solar reflected component as a multiplicative component (for gas absorption) and an additive component (for the suspended dust contribution); although a fully consistent approach would require including the surface and atmosphere contributions together in the synthetic calculations, this approach is sufficient for retrieving mineralogic information cleaned of atmospheric absorption at first order. First comparisons to OMEGA spectra will be presented, with first-order retrieval of CO2 pressure, CO and H2O abundance, and dust opacity.
NASA Astrophysics Data System (ADS)
Yu, Miao; Huang, Deqing; Yang, Wanqiu
2018-06-01
In this paper, we address the problem of unknown periodicity for a class of discrete-time nonlinear parametric systems without assuming any growth conditions on the nonlinearities. The unknown periodicity hides in the parametric uncertainties, which makes it difficult to estimate with existing techniques. By incorporating a logic-based switching mechanism, we identify the period and the bound of the unknown parameter simultaneously. A Lyapunov-based analysis demonstrates that a finite number of switchings can guarantee asymptotic tracking for the nonlinear parametric systems. A simulation also shows the efficacy of the proposed switching periodic adaptive control approach.
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
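The reduced-order tuner idea above can be sketched numerically. The snippet below is an illustrative stand-in, not the paper's exact selection routine: it compares the theoretical MAP error covariance for estimating all health parameters against a Monte-Carlo MSE when only a sensor-sized tuner vector is estimated, using a hypothetical SVD-based tuner choice and a conventional subset-of-parameters tuner.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 6 health parameters, 3 sensors (the underdetermined case).
n_p, n_y = 6, 3
H = rng.standard_normal((n_y, n_p))       # sensor sensitivity to health parameters
P = np.diag(rng.uniform(0.5, 2.0, n_p))   # prior covariance of health parameters
R = 0.05 * np.eye(n_y)                    # sensor noise covariance

# Theoretical MAP error covariance when estimating the full vector p from y = H p + v.
S = H @ P @ H.T + R
P_post = P - P @ H.T @ np.linalg.solve(S, H @ P)
mse_all = np.trace(P_post)

def mse_with_tuner(T):
    """Monte-Carlo MSE when p is approximated as T q and only q is estimated
    by least squares from y = H p + v."""
    G = H @ T
    err2 = 0.0
    n_trials = 2000
    for _ in range(n_trials):
        p = rng.multivariate_normal(np.zeros(n_p), P)
        y = H @ p + rng.multivariate_normal(np.zeros(n_y), R)
        q_hat, *_ = np.linalg.lstsq(G, y, rcond=None)
        err2 += np.sum((T @ q_hat - p) ** 2)
    return err2 / n_trials

# Tuner A: a subset of health parameters (the conventional choice).
T_subset = np.eye(n_p)[:, :n_y]
# Tuner B: leading right singular vectors of H P^{1/2} -- one plausible systematic
# choice of well-observed parameter directions (a stand-in for the paper's routine).
_, _, Vt = np.linalg.svd(H @ np.sqrt(P))
T_svd = np.sqrt(P) @ Vt[:n_y].T

print(f"MSE estimating all params: {mse_all:.3f}")
print(f"MSE with subset tuner:     {mse_with_tuner(T_subset):.3f}")
print(f"MSE with SVD-based tuner:  {mse_with_tuner(T_svd):.3f}")
```

The full MAP estimate is optimal by construction, so any reduced tuner can only approach its accuracy; the point of the comparison is how much closer a systematic tuner choice comes than an arbitrary parameter subset.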
NASA Astrophysics Data System (ADS)
Chevalier, Pascal; Oukaci, Abdelkader; Delmas, Jean-Pierre
2011-12-01
The detection of a known signal with unknown parameters in the presence of noise plus interferences (called total noise) whose covariance matrix is unknown is an important problem which has received much attention in recent decades for applications such as radar, satellite localization or time acquisition in radio communications. However, most of the available receivers assume a second order (SO) circular (or proper) total noise and become suboptimal in the presence of SO noncircular (or improper) interferences, potentially present in the previous applications. The few available receivers which take the potential SO noncircularity of the total noise into account have been developed under the restrictive condition of a known signal with known parameters or under the assumption of a random signal. For this reason, following a generalized likelihood ratio test (GLRT) approach, the purpose of this paper is to introduce and to analyze the performance of different array receivers for the detection of a known signal, with different sets of unknown parameters, corrupted by an unknown noncircular total noise. To simplify the study, we limit the analysis to rectilinear known useful signals, for which the baseband signal is real, a case that concerns many applications.
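A toy Monte-Carlo sketch of why noncircularity matters, under simplifying assumptions not taken from the paper (white noise, a single sensor, interference placed entirely on the imaginary axis): for a rectilinear signal, a widely linear GLRT that exploits the improper noise structure can greatly outperform the conventional circular GLRT.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
s = np.ones(N)          # known rectilinear waveform (real baseband)
a = 0.5                 # signal amplitude under H1 (unknown to the detector)

def noise():
    # Total noise = strongly improper interference on the imaginary axis
    # plus weak circular white noise (illustrative construction only).
    return (2.0j * rng.standard_normal(N)
            + 0.3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

def glrt_circular(y):
    # Conventional GLRT for an unknown complex amplitude in circular noise.
    return np.abs(s @ y) ** 2 / (s @ s)

def glrt_widely_linear(y):
    # Widely linear GLRT sketch for a rectilinear signal: augmenting y with
    # its conjugate amounts here to matched filtering the real part only.
    return (2 * np.real(s @ y)) ** 2 / (2 * (s @ s))

trials = 400
h0c = np.array([glrt_circular(noise()) for _ in range(trials)])
h0w = np.array([glrt_widely_linear(noise()) for _ in range(trials)])
h1c = np.array([glrt_circular(a * s + noise()) for _ in range(trials)])
h1w = np.array([glrt_widely_linear(a * s + noise()) for _ in range(trials)])

# Detection probability at the empirical 5% false-alarm threshold.
pd_c = np.mean(h1c > np.quantile(h0c, 0.95))
pd_w = np.mean(h1w > np.quantile(h0w, 0.95))
print(f"Pd (circular GLRT) = {pd_c:.2f}, Pd (widely linear GLRT) = {pd_w:.2f}")
```

Because the improper interference here carries no power on the real axis, the widely linear statistic rejects it entirely, while the circular statistic integrates it as if it were useful signal dimension.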
NASA Astrophysics Data System (ADS)
Xu, Peiliang
2018-06-01
The numerical integration method has been routinely used by major institutions worldwide, for example, NASA Goddard Space Flight Center and the German Research Center for Geosciences (GFZ), to produce global gravitational models from satellite tracking measurements of CHAMP and/or GRACE types. These gravitational products have found wide multidisciplinary application in the Earth sciences. The method is essentially implemented by solving the differential equations of the partial derivatives of the orbit of a satellite with respect to the unknown harmonic coefficients under the conditions of zero initial values. From the mathematical and statistical point of view, satellite gravimetry from satellite tracking is essentially the problem of estimating unknown parameters in Newton's nonlinear differential equations from satellite tracking measurements. We prove that zero initial values for the partial derivatives are incorrect mathematically and not permitted physically. The numerical integration method, as currently implemented and used in mathematics and statistics, chemistry and physics, and satellite gravimetry, is groundless, mathematically and physically. Given Newton's nonlinear governing differential equations of satellite motion with unknown equation parameters and unknown initial conditions, we develop three methods to derive new local solutions around a nominal reference orbit, which are linked to measurements to estimate the unknown corrections to approximate values of the unknown parameters and the unknown initial conditions. Bearing in mind that satellite orbits can now be tracked almost continuously at unprecedented accuracy, we propose the measurement-based perturbation theory and derive globally uniformly convergent solutions to Newton's nonlinear governing differential equations of satellite motion for the next generation of global gravitational models.
Since the solutions are globally uniformly convergent, theoretically speaking, they are able to extract the smallest possible gravitational signals from modern and future satellite tracking measurements, leading to the production of global high-precision, high-resolution gravitational models. By directly turning the nonlinear differential equations of satellite motion into nonlinear integral equations, and recognizing the fact that satellite orbits are measured with random errors, we further reformulate the links between satellite tracking measurements and the globally uniformly convergent solutions to Newton's governing differential equations as a condition adjustment model with unknown parameters, or equivalently, the weighted least squares estimation of unknown differential equation parameters with equality constraints, for the reconstruction of global high-precision, high-resolution gravitational models from modern (and future) satellite tracking measurements.
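The general estimation problem described above can be illustrated with a textbook example that is deliberately much simpler than orbit determination (and is not the paper's measurement-based perturbation theory): estimating an unknown parameter in a pendulum-like ODE by Gauss-Newton iteration, with the parameter sensitivity obtained from the variational equations integrated alongside the state. The initial state is assumed exactly known here, so its sensitivity initial values are uncontroversially zero.

```python
import numpy as np
from scipy.integrate import solve_ivp

w_true = 1.3  # unknown frequency-like parameter to be recovered

def rhs(t, z, w):
    # State (x, v) of x'' = -w^2 sin(x), plus sensitivities (sx, sv) = d(x, v)/dw
    # governed by the variational equations.
    x, v, sx, sv = z
    return [v,
            -w**2 * np.sin(x),
            sv,
            -2 * w * np.sin(x) - w**2 * np.cos(x) * sx]

t_obs = np.linspace(0.0, 5.0, 60)
rng = np.random.default_rng(2)

def trajectory(w):
    sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0, 0.0], args=(w,),
                    t_eval=t_obs, rtol=1e-9, atol=1e-9)
    return sol.y[0], sol.y[2]   # position and its sensitivity to w

# Synthetic noisy "tracking" data.
y_obs = trajectory(w_true)[0] + 0.001 * rng.standard_normal(t_obs.size)

# Gauss-Newton on the single unknown parameter.
w = 1.1
for _ in range(8):
    x, s = trajectory(w)
    r = y_obs - x
    w += (s @ r) / (s @ s)

print(f"estimated w = {w:.4f} (true {w_true})")
```

In real satellite gravimetry the parameter vector is huge and the initial conditions are themselves unknown, which is exactly where the paper's critique of the conventional zero-initial-value treatment applies.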
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmad, Israr, E-mail: iak-2000plus@yahoo.com; Saaban, Azizan Bin, E-mail: azizan.s@uum.edu.my; Ibrahim, Adyda Binti, E-mail: adyda@uum.edu.my
This paper addresses a comparative computational study on the synchronization quality, cost and converging speed for two pairs of identical chaotic and hyperchaotic systems with unknown time-varying parameters. It is assumed that the unknown time-varying parameters are bounded. Based on the Lyapunov stability theory and using the adaptive control method, a single proportional controller is proposed to achieve complete synchronization. Accordingly, appropriate adaptive laws are designed to identify the unknown time-varying parameters. The designed control strategy is easy to implement in practice. Numerical simulation results are provided to verify the effectiveness of the proposed synchronization scheme.
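The structure of such a scheme can be shown on a minimal scalar analogue (not the paper's hyperchaotic systems, and with a constant rather than time-varying unknown parameter): a master system with unknown coefficient a, a response system driven by a single proportional controller, and a Lyapunov-based adaptive law for the estimate.

```python
import numpy as np

# Master: x' = -a x^3 + sin(3t) with unknown a; response uses estimate a_hat.
a_true, k, gamma, dt = 1.0, 4.0, 5.0, 1e-3
x, y, a_hat = 0.5, -0.5, 0.0

for i in range(200_000):                  # 200 s of simulated time
    t = i * dt
    e = y - x
    u = -k * e                            # single proportional controller
    dx = -a_true * x**3 + np.sin(3 * t)
    dy = -a_hat * y**3 + np.sin(3 * t) + u
    x += dt * dx
    y += dt * dy
    a_hat += dt * gamma * e * y**3        # adaptive law from V = e^2/2 + (a_hat-a)^2/(2*gamma)

print(f"sync error = {abs(y - x):.2e}, a_hat = {a_hat:.3f}")
```

With this choice of adaptive law, the Lyapunov derivative satisfies V' <= -k e^2, so the synchronization error is driven to zero; the persistent sinusoidal excitation additionally lets the parameter estimate converge in this example.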
NASA Astrophysics Data System (ADS)
Tirandaz, Hamed; Karami-Mollaee, Ali
2018-06-01
Chaotic systems exhibit complex behaviour in their state variables and their parameters, which poses challenges for control and synchronisation. This paper presents a new synchronisation scheme based on the integral sliding mode control (ISMC) method for a class of complex chaotic systems with complex unknown parameters. Synchronisation between corresponding states of a class of complex chaotic systems, and convergence of the errors of the system parameters to zero, are studied. The designed feedback control vector and complex unknown parameter vector are analytically derived based on the Lyapunov stability theory. Moreover, the effectiveness of the proposed methodology is verified by synchronisation of the Chen complex system and the Lorenz complex system as the leader and the follower chaotic systems, respectively. Finally, some numerical simulations of the synchronisation methodology are given to illustrate the effectiveness of the theoretical discussion.
NASA Astrophysics Data System (ADS)
Zhang, Zhan-Jun; Liu, Yi-Min; Man, Zhong-Xiao
2005-11-01
We present a method to teleport multi-qubit quantum information in an easy way from a sender to a receiver via the control of many agents in a network. Only when all the agents collaborate with the quantum information receiver can the unknown states in the sender's qubits be fully reconstructed in the receiver's qubits. In our method, the agents' control parameters are obtained via quantum entanglement swapping. As far as the realization of the many-agent controlled teleportation is concerned, compared to the recent method [C.P. Yang, et al., Phys. Rev. A 70 (2004) 022329], our present method considerably reduces the preparation difficulty of initial states and the identification difficulty of entangled states; moreover, it does not need local Hadamard operations and is more feasible in technology. The project was supported by the National Natural Science Foundation of China under Grant No. 10304022.
Search for Screened Interactions Associated with Dark Energy below the 100 μm Length Scale.
Rider, Alexander D; Moore, David C; Blakemore, Charles P; Louis, Maxime; Lu, Marie; Gratta, Giorgio
2016-09-02
We present the results of a search for unknown interactions that couple to mass between an optically levitated microsphere and a gold-coated silicon cantilever. The scale and geometry of the apparatus enable a search for new forces that appear at distances below 100 μm and which would have evaded previous searches due to screening mechanisms. The data are consistent with electrostatic backgrounds and place upper limits on the strength of new interactions at <0.1 fN in the geometry tested. For the specific example of a chameleon interaction with an inverse power law potential, these results exclude matter couplings β>5.6×10^{4} in the region of parameter space where the self-coupling Λ≳5 meV and the microspheres are not fully screened.
Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang
2017-11-01
This paper investigates the fault-tolerant time-varying formation control problems for high-order linear multi-agent systems in the presence of actuator failures. Firstly, a fully distributed formation control protocol is presented to compensate for the influences of both bias fault and loss of effectiveness fault. Using the adaptive online updating strategies, no global knowledge about the communication topology is required and the bounds of actuator failures can be unknown. Then an algorithm is proposed to determine the control parameters of the fault-tolerant formation protocol, where the time-varying formation feasible conditions and an approach to expand the feasible formation set are given. Furthermore, the stability of the proposed algorithm is proven based on the Lyapunov-like theory. Finally, two simulation examples are given to demonstrate the effectiveness of the theoretical results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Walker, R. L., II; Knepley, M.; Aminzadeh, F.
2017-12-01
We seek to use the tools provided by the Portable, Extensible Toolkit for Scientific Computation (PETSc) to represent a multiphysics problem in a form that decouples the element definition from the fully coupled equation through the use of pointwise functions that imitate the strong form of the governing equation. This allows individual physical processes to be expressed as independent kernels that may then be coupled with the existing finite element framework, PyLith, and capitalizes upon the flexibility offered by the solver, data management, and time stepping algorithms offered by PETSc. To demonstrate a characteristic example of coupled geophysical simulation devised in this manner, we present a model of a synthetic poroelastic environment, with and without the consideration of inertial effects, with fluid initially represented as a single phase. Matrix displacement and fluid pressure serve as the desired unknowns, with the option for various model parameters represented as dependent variables of the central unknowns. While independent of PyLith, this model also serves to showcase the adaptability of physics kernels for synthetic forward modeling. In addition, we seek to expand the base case to demonstrate the impact of modeling the fluid as a single compressible phase versus a single incompressible phase. As a further goal, we also seek to include multiphase fluid modeling, as well as capillary effects.
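The kernel-decoupling idea can be illustrated without PETSc at all (the real PETSc/PyLith callback interfaces differ): pointwise flux functions encode the physics, while a generic "assembler" only evaluates gradients and divergences. Here a 1-D finite-difference stand-in couples a stress-like displacement kernel to fluid pressure through a hypothetical Biot-type coefficient.

```python
import numpy as np

def f_elastic(grad_u, p, params):
    # Flux (stress-like) kernel for the displacement equation, coupled to
    # fluid pressure through a Biot-type coefficient alpha (illustrative).
    return params["mu"] * grad_u - params["alpha"] * p

def f_darcy(grad_p, params):
    # Flux kernel for the fluid-pressure equation (Darcy-type).
    return -params["kappa"] * grad_p

def assemble_residual(u, p, h, params):
    """A generic 1-D 'assembler' that only evaluates gradients and calls the
    pointwise flux kernels; it knows nothing about the physics itself."""
    gu, gp = np.gradient(u, h), np.gradient(p, h)
    return (np.gradient(f_elastic(gu, p, params), h),   # div(sigma(u, p))
            np.gradient(f_darcy(gp, params), h))        # div(q(p))

x = np.linspace(0.0, 1.0, 11)
h = x[1] - x[0]
u, p = 2.0 * x, np.full_like(x, 0.3)                    # linear u, constant p
params = {"mu": 1.0, "alpha": 0.5, "kappa": 1e-3}
ru, rp = assemble_residual(u, p, h, params)
print(np.max(np.abs(ru)), np.max(np.abs(rp)))           # both vanish for this field
```

Swapping in a different `f_darcy` (say, a compressible-fluid flux) changes the physics without touching the assembler, which is the design point the abstract makes about PETSc's pointwise-function interface.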
M-MRAC Backstepping for Systems with Unknown Virtual Control Coefficients
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje
2015-01-01
The paper presents an over-parametrization free certainty equivalence state feedback backstepping adaptive control design method for systems of any relative degree with unmatched uncertainties and unknown virtual control coefficients. It uses a fast prediction model to estimate the unknown parameters, which is independent of the control design. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters. The benefits of the approach are demonstrated in numerical simulations.
Parameter estimation of qubit states with unknown phase parameter
NASA Astrophysics Data System (ADS)
Suzuki, Jun
2015-02-01
We discuss a problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for the mean square errors (MSEs) when estimating the relevant parameters with separable measurements, based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement which attains the HGM bound and discuss its properties. We show that the HGM bound for the relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.
Bayesian statistics and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Koch, K. R.
2018-03-01
The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to the point estimation by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived in which the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable number of derivatives to be computed, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
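The error-propagation application mentioned above is easy to demonstrate: propagate a Gaussian vector through a nonlinear (polar-style) transform by Monte Carlo sampling and compare with first-order Jacobian linearization. For the small covariance used here the two agree closely, while the Monte Carlo estimate needs no derivatives at all.

```python
import numpy as np

rng = np.random.default_rng(3)

# Input random vector: x ~ N(mu, Sigma).
mu = np.array([2.0, 0.5])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.02]])

def f(x):
    # Nonlinear transform (radius/angle to Cartesian-like quantities).
    return np.array([x[..., 0] * np.cos(x[..., 1]),
                     x[..., 0] * np.sin(x[..., 1])]).T

# Monte Carlo propagation: sample, transform, take moments.
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
y = f(samples)
mc_mean, mc_cov = y.mean(axis=0), np.cov(y.T)

# First-order propagation: J Sigma J^T with the Jacobian evaluated at mu.
c, s = np.cos(mu[1]), np.sin(mu[1])
J = np.array([[c, -mu[0] * s],
              [s,  mu[0] * c]])
lin_cov = J @ Sigma @ J.T

print("Monte Carlo covariance:\n", mc_cov)
print("Linearized covariance:\n", lin_cov)
```

For strongly nonlinear transforms or large input variances, the two covariances diverge, and the Monte Carlo result is the one to trust, which is the efficiency argument the abstract makes.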
Synchronization in complex oscillator networks and smart grids.
Dörfler, Florian; Chertkov, Michael; Bullo, Francesco
2013-02-05
The emergence of synchronization in a network of coupled oscillators is a fascinating topic in various scientific disciplines. A widely adopted model of a coupled oscillator network is characterized by a population of heterogeneous phase oscillators, a graph describing the interaction among them, and diffusive and sinusoidal coupling. It is known that a strongly coupled and sufficiently homogeneous network synchronizes, but the exact threshold from incoherence to synchrony is unknown. Here, we present a unique, concise, and closed-form condition for synchronization of the fully nonlinear, nonequilibrium, and dynamic network. Our synchronization condition can be stated elegantly in terms of the network topology and parameters or equivalently in terms of an intuitive, linear, and static auxiliary system. Our results significantly improve upon the existing conditions advocated thus far: they are provably exact for various interesting network topologies and parameters; they are statistically correct for almost all networks; and they can be applied equally to synchronization phenomena arising in physics and biology as well as in engineered oscillator networks, such as electrical power networks. We illustrate the validity, the accuracy, and the practical applicability of our results in complex network scenarios and in smart grid applications.
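The published closed-form test has the form ||B^T L^+ omega||_inf <= sin(gamma), where L is the network Laplacian, B its incidence matrix and omega the natural frequencies; the left-hand side is exactly the "linear, static auxiliary system" the abstract mentions. A small sketch evaluates the test on a four-node graph and cross-checks it by integrating the Kuramoto dynamics to a phase-locked state (graph and frequencies are made up for illustration).

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
omega = np.array([0.3, -0.1, 0.2, -0.4])   # natural frequencies (zero mean)

B = np.zeros((n, len(edges)))              # incidence matrix
for k, (i, j) in enumerate(edges):
    B[i, k], B[j, k] = 1.0, -1.0
L = B @ B.T                                # Laplacian, unit coupling

# Closed-form synchronization test: ||B^T L^+ omega||_inf <= sin(gamma).
test_value = np.max(np.abs(B.T @ (np.linalg.pinv(L) @ omega)))
print(f"||B^T L^+ omega||_inf = {test_value:.3f}")

# Cross-check by integrating theta_i' = omega_i - sum_j sin(theta_i - theta_j).
theta = np.zeros(n)
dt = 0.01
for _ in range(20_000):
    theta += dt * (omega - B @ np.sin(B.T @ theta))

residual = omega - B @ np.sin(B.T @ theta)   # ~ 0 once phase-locked
print(f"max edge phase difference = {np.max(np.abs(B.T @ theta)):.3f}")
```

Here the test value is well below sin(pi/2) = 1, and the simulation indeed settles into a locked state whose edge phase differences closely track the auxiliary-system prediction.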
NASA Astrophysics Data System (ADS)
Capozzi, Francesco; Lisi, Eligio; Marrone, Antonio
2016-04-01
Within the standard 3ν oscillation framework, we illustrate the status of currently unknown oscillation parameters: the θ23 octant, the mass hierarchy (normal or inverted), and the possible CP-violating phase δ, as derived by a (preliminary) global analysis of oscillation data available in 2015. We then discuss some challenges that will be faced by future, high-statistics analyses of spectral data, starting with one-dimensional energy spectra in reactor experiments, and concluding with two-dimensional energy-angle spectra in large-volume atmospheric experiments. It is shown that systematic uncertainties in the spectral shapes can noticeably affect the prospective sensitivities to unknown oscillation parameters, in particular to the mass hierarchy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Sen; Zhang, Wei; Lian, Jianming
This two-part paper considers the coordination of a population of Thermostatically Controlled Loads (TCLs) with unknown parameters to achieve group objectives. The problem involves designing the bidding and market clearing strategy to motivate self-interested users to realize efficient energy allocation subject to a peak power constraint. The companion paper (Part I) formulates the problem and proposes a load coordination framework using the mechanism design approach. To address the unknown parameters, Part II of this paper presents a joint state and parameter estimation framework based on the expectation maximization algorithm. The overall framework is then validated using real-world weather data and price data, and is compared with other approaches in terms of aggregated power response. Simulation results indicate that our coordination framework can effectively improve the efficiency of the power grid operations and reduce power congestion at key times.
Kaklamanos, James; Baise, Laurie G.; Boore, David M.
2011-01-01
The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
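The simplest of the geometric distance relations in this setting, for the special case of a vertical (90-degree dip) fault, reduces to a right triangle between the Joyner-Boore distance and the depth to the top of rupture; the general dipping-fault cases require the fuller case-by-case equations derived in the paper.

```python
import math

def r_rup_vertical(r_jb, z_tor):
    """Closest distance to rupture (km) from Joyner-Boore distance r_jb (km)
    for a vertical fault whose top of rupture lies at depth z_tor (km).
    Vertical-dip special case only; dipping geometries need more terms."""
    return math.hypot(r_jb, z_tor)

# Site 10 km from the surface projection of a rupture whose top is at 3 km depth.
print(f"{r_rup_vertical(10.0, 3.0):.2f} km")  # ~ 10.44 km
```

Relations like this let users supply the rupture distance a GMPE requires when only the Joyner-Boore distance and a fault geometry are known.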
Implementation of an improved adaptive-implicit method in a thermal compositional simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, T.B.
1988-11-01
A multicomponent thermal simulator with an adaptive-implicit-method (AIM) formulation/inexact-adaptive-Newton (IAN) method is presented. The final coefficient matrix retains the original banded structure so that conventional iterative methods can be used. Various methods for selection of the eliminated unknowns are tested. The AIM/IAN method has a lower work count per Newtonian iteration than fully implicit methods, but a wrong choice of unknowns will result in excessive Newtonian iterations. For the problems tested, the residual-error method described in the paper for selecting implicit unknowns, together with the IAN method, reduced CPU time by up to 28% relative to the fully implicit method.
Fully probabilistic earthquake source inversion on teleseismic scales
NASA Astrophysics Data System (ADS)
Stähler, Simon; Sigloch, Karin
2017-04-01
Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. 
From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.
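The decorrelation misfit central to this approach is straightforward to compute; the sketch below evaluates D = 1 - CC for a synthetic observed/modelled waveform pair and a log-normal log-likelihood on D (the distribution parameters here are placeholders, not the calibrated values from the cited papers).

```python
import numpy as np

def decorrelation(obs, syn):
    # D = 1 - CC, with CC the normalized zero-lag cross-correlation.
    cc = np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))
    return 1.0 - cc

def log_likelihood(d, mu_ln=-3.0, sigma_ln=1.0):
    # Log of a log-normal pdf evaluated at the decorrelation d
    # (illustrative parameter values).
    return (-np.log(d * sigma_ln * np.sqrt(2.0 * np.pi))
            - (np.log(d) - mu_ln) ** 2 / (2.0 * sigma_ln ** 2))

t = np.linspace(0.0, 1.0, 500)
rng = np.random.default_rng(4)
obs = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)  # "observed"
syn = np.sin(2 * np.pi * 5 * t)                                      # "modelled"

d = decorrelation(obs, syn)
print(f"D = {d:.4f}, log-likelihood = {log_likelihood(d):.2f}")
```

A phase-shifted synthetic produces a much larger D than a well-matched one, which is precisely the behaviour that makes D a robust scalar misfit compared with sample-by-sample norms.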
Elastohydrodynamic lubrication of point contacts. Ph.D. Thesis - Leeds Univ.
NASA Technical Reports Server (NTRS)
Hamrock, B. J.
1976-01-01
A procedure for the numerical solution of the complete, isothermal, elastohydrodynamic lubrication problem for point contacts is given. This procedure calls for the simultaneous solution of the elasticity and Reynolds equations. By using this theory the influence of the ellipticity parameter and the dimensionless speed, load, and material parameters on the minimum and central film thicknesses was investigated. Thirty-four different cases were used in obtaining the fully flooded minimum- and central-film-thickness formulas. Lubricant starvation was also studied. From the results it was possible to express the minimum film thickness for a starved condition in terms of the minimum film thickness for a fully flooded condition, the speed parameter, and the inlet distance. Fifteen additional cases plus three fully flooded cases were used in obtaining this formula. Contour plots of pressure and film thickness in and around the contact have been presented for both fully flooded and starved lubrication conditions.
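The fully flooded film-thickness formulas that emerged from this line of work are usually quoted in the dimensionless forms below (Hamrock-Dowson style; coefficients as commonly cited in the tribology literature, so treat them as secondary-source values rather than transcriptions from this thesis).

```python
import math

def h_min(U, G, W, k):
    """Dimensionless minimum film thickness, fully flooded elliptical contact,
    in terms of the speed (U), material (G), load (W) and ellipticity (k)
    parameters -- the Hamrock-Dowson form as commonly quoted."""
    return 3.63 * U**0.68 * G**0.49 * W**-0.073 * (1.0 - math.exp(-0.68 * k))

def h_central(U, G, W, k):
    """Dimensionless central film thickness, fully flooded, same parameters."""
    return 2.69 * U**0.67 * G**0.53 * W**-0.067 * (1.0 - 0.61 * math.exp(-0.73 * k))

# Representative dimensionless groups for a steel/oil point contact.
U, G, W, k = 1e-11, 4500.0, 1e-6, 6.0
print(f"H_min = {h_min(U, G, W, k):.3e}, H_central = {h_central(U, G, W, k):.3e}")
```

The weak load exponents (about -0.07) are the formulas' most quoted feature: film thickness in elastohydrodynamic point contacts is dominated by speed and material properties, not load.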
Half-blind remote sensing image restoration with partly unknown degradation
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
The problem of image restoration has been extensively studied for its practical importance and theoretical interest. This paper mainly discusses the problem of image restoration with a partly unknown kernel. In this model, the form of the degradation kernel is known but its parameters are unknown, so the parameters of the Gaussian kernel and the true image must be estimated simultaneously. For this problem, a total variation restoration model is proposed and an alternating-direction iterative algorithm is designed. Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index Measurement (SSIM) are used to measure the performance of the method. Numerical results show that we can estimate the parameters of the kernel accurately, and the new method achieves both much higher PSNR and much higher SSIM than the expectation maximization (EM) method in many cases. In addition, the accuracy of estimation is not sensitive to noise. Furthermore, even when the support of the kernel is unknown, the method still gives accurate estimates.
Identification of linear system models and state estimators for controls
NASA Technical Reports Server (NTRS)
Chen, Chung-Wen
1992-01-01
The following paper is presented in viewgraph format and covers topics including: (1) linear state feedback control system; (2) Kalman filter state estimation; (3) relation between residual and stochastic part of output; (4) obtaining Kalman filter gain; (5) state estimation under unknown system model and unknown noises; and (6) relationship between filter Markov parameters and system Markov parameters.
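Topic (6) above rests on a basic identity: the system Markov parameters D, CB, CAB, CA^2B, ... are exactly the impulse-response samples of a discrete linear system, and OKID-style identification recovers them (via the observer/Kalman-filter Markov parameters) from input-output data. A small check of the identity, on a made-up stable system:

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.0, 0.7]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def markov_parameters(A, B, C, D, n):
    # Y_0 = D, Y_k = C A^{k-1} B for k >= 1.
    params = [D]
    Ak = np.eye(A.shape[0])
    for _ in range(n - 1):
        params.append(C @ Ak @ B)
        Ak = A @ Ak
    return [p.item() for p in params]

def impulse_response(A, B, C, D, n):
    # Direct simulation with a unit impulse input at k = 0.
    x = np.zeros((A.shape[0], 1))
    ys, u = [], 1.0
    for _ in range(n):
        ys.append((C @ x + D * u).item())
        x = A @ x + B * u
        u = 0.0
    return ys

print(markov_parameters(A, B, C, D, 5))
print(impulse_response(A, B, C, D, 5))
```

The filter Markov parameters mentioned in the talk combine these with the observer gain; their advantage is that they decay quickly even when the open-loop system parameters do not.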
Variations on Bayesian Prediction and Inference
2016-05-09
There are a number of statistical inference problems that are not generally formulated via a full probability model. For the problem of inference about an unknown parameter, the Bayesian approach requires a full probability model/likelihood, which can be an obstacle.
Estimation of nonlinear pilot model parameters including time delay.
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Roland, V. R.; Wells, W. R.
1972-01-01
Investigation of the feasibility of using a Kalman filter estimator for the identification of unknown parameters in nonlinear dynamic systems with a time delay. The problem considered is the application of estimation theory to determine the parameters of a family of pilot models containing delayed states. In particular, the pilot-plant dynamics are described by differential-difference equations of the retarded type. The pilot delay, included as one of the unknown parameters to be determined, is kept in pure form as opposed to the Pade approximations generally used for these systems. Problem areas associated with processing real pilot response data are included in the discussion.
Deng, Zhimin; Tian, Tianhai
2014-07-29
Advances in systems biology have produced a large number of sophisticated mathematical models for describing the dynamic properties of complex biological systems. One of the major steps in developing mathematical models is to estimate unknown parameters of the model based on experimentally measured quantities. However, experimental conditions limit the amount of data that is available for mathematical modelling. The number of unknown parameters in a mathematical model may be larger than the number of observation data. This imbalance between the number of experimental data and the number of unknown parameters makes reverse-engineering problems particularly challenging. To address the issue of inadequate experimental data, we propose a continuous optimization approach for making reliable inference of model parameters. This approach first uses a spline interpolation to generate continuous functions of the system dynamics as well as the first and second order derivatives of these continuous functions. The expanded dataset is the basis for inferring the unknown model parameters using various continuous optimization criteria, including the error of the simulation only, the error of both the simulation and the first derivative, or the error of the simulation as well as the first and second derivatives. We use three case studies to demonstrate the accuracy and reliability of the proposed new approach. Compared with the corresponding discrete criteria using experimental data at the measurement time points only, numerical results for the ERK kinase activation module show that the continuous absolute-error criteria using both the function and higher-order derivatives generate estimates with better accuracy. This result is also supported by the second and third case studies for the G1/S transition network and the MAP kinase pathway, respectively. This suggests that the continuous absolute-error criteria lead to more accurate estimates than the corresponding discrete criteria.
We also study the robustness property of these three models to examine the reliability of estimates. Simulation results show that the models with estimated parameters using continuous fitness functions have better robustness properties than those using the corresponding discrete fitness functions. The inference studies and robustness analysis suggest that the proposed continuous optimization criteria are effective and robust for estimating unknown parameters in mathematical models.
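The derivative-matching idea can be reduced to a few lines on a toy model (a single-parameter decay, far simpler than the signalling networks in the paper): fit a spline to sparse noisy data, then choose the rate parameter so the spline's derivative best matches the model's right-hand side on a dense grid.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Model: x' = -theta * x, with theta unknown.
theta_true = 0.8
t_data = np.linspace(0.0, 4.0, 9)                   # sparse measurements
rng = np.random.default_rng(5)
x_data = np.exp(-theta_true * t_data) + 0.002 * rng.standard_normal(t_data.size)

# Spline interpolation supplies continuous values and derivatives.
spline = CubicSpline(t_data, x_data)
t_dense = np.linspace(0.0, 4.0, 400)
s, ds = spline(t_dense), spline(t_dense, 1)

# Least-squares theta minimizing ||ds + theta * s||^2 has a closed form here;
# general models require a numerical optimizer over the continuous criterion.
theta_hat = -(ds @ s) / (s @ s)
print(f"theta_hat = {theta_hat:.3f} (true {theta_true})")
```

The spline turns nine measurements into a dense, differentiable surrogate, which is exactly the dataset-expansion step the abstract describes before the optimization criteria are applied.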
Iqbal, Muhammad; Rehan, Muhammad; Khaliq, Abdul; Saeed-ur-Rehman; Hong, Keum-Shik
2014-01-01
This paper investigates the chaotic behavior and synchronization of two different coupled chaotic FitzHugh-Nagumo (FHN) neurons with unknown parameters under external electrical stimulation (EES). The coupled FHN neurons of different parameters admit unidirectional and bidirectional gap junctions in the medium between them. Dynamical properties, such as the increase in synchronization error as a consequence of the deviation of neuronal parameters for unlike neurons, the effect of difference in coupling strengths caused by the unidirectional gap junctions, and the impact of large time-delay due to separation of neurons, are studied in exploring the behavior of the coupled system. A novel integral-based nonlinear adaptive control scheme, to cope with the infeasibility of the recovery variable, for synchronization of two coupled delayed chaotic FHN neurons of different and unknown parameters under uncertain EES is derived. Further, to guarantee robust synchronization of different neurons against disturbances, the proposed control methodology is modified to achieve the uniformly ultimately bounded synchronization. The parametric estimation errors can be reduced by selecting suitable control parameters. The effectiveness of the proposed control scheme is illustrated via numerical simulations.
Toxicity of aged gasoline exhaust particles to normal and diseased airway epithelia
NASA Astrophysics Data System (ADS)
Künzi, Lisa; Krapf, Manuel; Daher, Nancy; Dommen, Josef; Jeannet, Natalie; Schneider, Sarah; Platt, Stephen; Slowik, Jay G.; Baumlin, Nathalie; Salathe, Matthias; Prévôt, André S. H.; Kalberer, Markus; Strähl, Christof; Dümbgen, Lutz; Sioutas, Constantinos; Baltensperger, Urs; Geiser, Marianne
2015-06-01
Particulate matter (PM) pollution is a leading cause of premature death, particularly in those with pre-existing lung disease. A causative link between particle properties and adverse health effects remains unestablished mainly due to complex and variable physico-chemical PM parameters. Controlled laboratory experiments are required. Generating atmospherically realistic aerosols and performing cell-exposure studies at relevant particle-doses are challenging. Here we examine gasoline-exhaust particle toxicity from a Euro-5 passenger car in a uniquely realistic exposure scenario, combining a smog chamber simulating atmospheric ageing, an aerosol enrichment system varying particle number concentration independent of particle chemistry, and an aerosol deposition chamber physiologically delivering particles on air-liquid interface (ALI) cultures reproducing normal and susceptible health status. Gasoline-exhaust is an important PM source with largely unknown health effects. We investigated acute responses of fully-differentiated normal, distressed (antibiotics-treated) normal, and cystic fibrosis human bronchial epithelia (HBE), and a proliferating, single-cell type bronchial epithelial cell-line (BEAS-2B). We show that a single, short-term exposure to realistic doses of atmospherically-aged gasoline-exhaust particles impairs epithelial key-defence mechanisms, rendering it more vulnerable to subsequent hazards. We establish dose-response curves at realistic particle-concentration levels. Significant differences between cell models suggest the use of fully-differentiated HBE is most appropriate in future toxicity studies.
Toxicity of aged gasoline exhaust particles to normal and diseased airway epithelia
Künzi, Lisa; Krapf, Manuel; Daher, Nancy; Dommen, Josef; Jeannet, Natalie; Schneider, Sarah; Platt, Stephen; Slowik, Jay G.; Baumlin, Nathalie; Salathe, Matthias; Prévôt, André S. H.; Kalberer, Markus; Strähl, Christof; Dümbgen, Lutz; Sioutas, Constantinos; Baltensperger, Urs; Geiser, Marianne
2015-01-01
Particulate matter (PM) pollution is a leading cause of premature death, particularly in those with pre-existing lung disease. A causative link between particle properties and adverse health effects remains unestablished mainly due to complex and variable physico-chemical PM parameters. Controlled laboratory experiments are required. Generating atmospherically realistic aerosols and performing cell-exposure studies at relevant particle-doses are challenging. Here we examine gasoline-exhaust particle toxicity from a Euro-5 passenger car in a uniquely realistic exposure scenario, combining a smog chamber simulating atmospheric ageing, an aerosol enrichment system varying particle number concentration independent of particle chemistry, and an aerosol deposition chamber physiologically delivering particles on air-liquid interface (ALI) cultures reproducing normal and susceptible health status. Gasoline-exhaust is an important PM source with largely unknown health effects. We investigated acute responses of fully-differentiated normal, distressed (antibiotics-treated) normal, and cystic fibrosis human bronchial epithelia (HBE), and a proliferating, single-cell type bronchial epithelial cell-line (BEAS-2B). We show that a single, short-term exposure to realistic doses of atmospherically-aged gasoline-exhaust particles impairs epithelial key-defence mechanisms, rendering it more vulnerable to subsequent hazards. We establish dose-response curves at realistic particle-concentration levels. Significant differences between cell models suggest the use of fully-differentiated HBE is most appropriate in future toxicity studies. PMID:26119831
Toxicity of aged gasoline exhaust particles to normal and diseased airway epithelia.
Künzi, Lisa; Krapf, Manuel; Daher, Nancy; Dommen, Josef; Jeannet, Natalie; Schneider, Sarah; Platt, Stephen; Slowik, Jay G; Baumlin, Nathalie; Salathe, Matthias; Prévôt, André S H; Kalberer, Markus; Strähl, Christof; Dümbgen, Lutz; Sioutas, Constantinos; Baltensperger, Urs; Geiser, Marianne
2015-06-29
Particulate matter (PM) pollution is a leading cause of premature death, particularly in those with pre-existing lung disease. A causative link between particle properties and adverse health effects remains unestablished mainly due to complex and variable physico-chemical PM parameters. Controlled laboratory experiments are required. Generating atmospherically realistic aerosols and performing cell-exposure studies at relevant particle-doses are challenging. Here we examine gasoline-exhaust particle toxicity from a Euro-5 passenger car in a uniquely realistic exposure scenario, combining a smog chamber simulating atmospheric ageing, an aerosol enrichment system varying particle number concentration independent of particle chemistry, and an aerosol deposition chamber physiologically delivering particles on air-liquid interface (ALI) cultures reproducing normal and susceptible health status. Gasoline-exhaust is an important PM source with largely unknown health effects. We investigated acute responses of fully-differentiated normal, distressed (antibiotics-treated) normal, and cystic fibrosis human bronchial epithelia (HBE), and a proliferating, single-cell type bronchial epithelial cell-line (BEAS-2B). We show that a single, short-term exposure to realistic doses of atmospherically-aged gasoline-exhaust particles impairs epithelial key-defence mechanisms, rendering it more vulnerable to subsequent hazards. We establish dose-response curves at realistic particle-concentration levels. Significant differences between cell models suggest the use of fully-differentiated HBE is most appropriate in future toxicity studies.
Nonlinear saturation of the slab ITG instability and zonal flow generation with fully kinetic ions
NASA Astrophysics Data System (ADS)
Miecnikowski, Matthew T.; Sturdevant, Benjamin J.; Chen, Yang; Parker, Scott E.
2018-05-01
Fully kinetic turbulence models are of interest for their potential to validate or replace gyrokinetic models in plasma regimes where the gyrokinetic expansion parameters are marginal. Here, we demonstrate fully kinetic ion capability by simulating the growth and nonlinear saturation of the ion-temperature-gradient instability in shearless slab geometry assuming adiabatic electrons and including zonal flow dynamics. The ion trajectories are integrated using the Lorentz force, and the cyclotron motion is fully resolved. Linear growth and nonlinear saturation characteristics show excellent agreement with analogous gyrokinetic simulations across a wide range of parameters. The fully kinetic simulation accurately reproduces the nonlinearly generated zonal flow. This work demonstrates nonlinear capability, resolution of weak gradient drive, and zonal flow physics, which are critical aspects of modeling plasma turbulence with full ion dynamics.
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.
2013-12-01
In this study, an efficient fully Bayesian approach is developed for optimal sampling-well location design and source-parameter identification of groundwater contaminants. An information measure, the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength, and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is used to construct a surrogate for the contaminant transport model. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and identification of unknown contaminant sources. Figure captions: contours of the expected information gain, where the optimal observation location corresponds to the maximum value; posterior marginal probability densities of the unknown parameters, with thick solid black lines for the designed location and seven other lines for randomly chosen locations (true values denoted by vertical lines), showing that the unknown parameters are estimated better with the designed location.
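The design criterion above (select the sampling location with maximum expected information gain) can be sketched in a few lines for the linear Gaussian special case, where the relative entropy from prior to posterior has a closed form. The sensitivity function and the prior/noise scales below are illustrative assumptions, not the study's transport model:

```python
import math

def expected_information_gain(h, sigma0=1.0, sigma=0.5):
    """Relative entropy (prior -> posterior) for a linear Gaussian model
    y = h*theta + noise: closed form 0.5*ln(1 + h^2*sigma0^2/sigma^2)."""
    return 0.5 * math.log(1.0 + (h * sigma0) ** 2 / sigma ** 2)

def choose_sampling_location(candidates, sensitivity, sigma0=1.0, sigma=0.5):
    """Pick the candidate well location whose measurement is expected to be
    most informative about the unknown source parameter."""
    gains = {x: expected_information_gain(sensitivity(x), sigma0, sigma)
             for x in candidates}
    best = max(gains, key=gains.get)
    return best, gains

# Hypothetical sensitivity: concentration response decays with distance
# from an assumed source at x = 2.0 (illustrative, not a transport solver).
sensitivity = lambda x: math.exp(-abs(x - 2.0))
best, gains = choose_sampling_location([0.0, 1.0, 2.0, 3.0, 4.0], sensitivity)
```

For the nonlinear transport model of the study, the closed form would be replaced by Monte Carlo estimates of the relative entropy evaluated on the sparse-grid surrogate.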
Charging of the Van Allen Probes: Theory and Simulations
NASA Astrophysics Data System (ADS)
Delzanno, G. L.; Meierbachtol, C.; Svyatskiy, D.; Denton, M.
2017-12-01
The electrical charging of spacecraft has been a known problem since the beginning of the space age. Its consequences can vary from moderate (single event upsets) to catastrophic (total loss of the spacecraft) depending on a variety of causes, some of which could be related to the surrounding plasma environment, including emission processes from the spacecraft surface. Because of its complexity and cost, this problem is typically studied using numerical simulations. However, inherent unknowns in both plasma parameters and spacecraft material properties can lead to inaccurate predictions of overall spacecraft charging levels. The goal of this work is to identify and study the driving causes and necessary parameters for particular spacecraft charging events on the Van Allen Probes (VAP) spacecraft. This is achieved by making use of plasma theory, numerical simulations, and on-board data. First, we present a simple theoretical spacecraft charging model, which assumes a spherical spacecraft geometry and is based upon the classical orbital-motion-limited approximation. Some input parameters to the model (such as the warm plasma distribution function) are taken directly from on-board VAP data, while other parameters are either varied parametrically to assess their impact on the spacecraft potential, or constrained through spacecraft charging data and statistical techniques. Second, a fully self-consistent numerical simulation is performed by supplying these parameters to CPIC, a particle-in-cell code specifically designed for studying plasma-material interactions. CPIC simulations remove some of the assumptions of the theoretical model and also capture the influence of the full geometry of the spacecraft. The CPIC numerical simulation results will be presented and compared with on-board VAP data. 
This work will set the foundation for our eventual goal of importing the full plasma environment from the LANL-developed SHIELDS framework into CPIC, in order to more accurately predict spacecraft charging.
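A minimal numerical sketch of the kind of current balance the spherical charging model solves, assuming an orbital-motion-limited (OML) sphere in a Maxwellian hydrogen plasma with no photoemission or secondary emission; the temperatures and the bisection bracket are illustrative assumptions:

```python
import math

def floating_potential(T_e=1000.0, T_i=1000.0, m_ratio=1836.0):
    """Solve the OML current balance for a sphere at negative potential phi
    (expressed in volts for temperatures in eV): the electron thermal flux
    is suppressed by exp(phi/T_e), while the attracted-ion flux is enhanced
    by (1 - phi/T_i). Illustrative sketch only, with no emission processes."""
    # ratio of electron to ion thermal currents ~ sqrt(m_i T_e / (m_e T_i))
    j_ratio = math.sqrt(m_ratio * T_e / T_i)

    def net_current(phi):
        return j_ratio * math.exp(phi / T_e) - (1.0 - phi / T_i)

    lo, hi = -50.0 * T_e, 0.0   # bracket: net_current(lo) < 0 < net_current(hi)
    for _ in range(200):        # bisection
        mid = 0.5 * (lo + hi)
        if net_current(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

phi = floating_potential()
```

The recovered potential of roughly -2.5 kT_e/e is the textbook floating potential for a hydrogen plasma; the VAP analysis additionally constrains the model inputs with on-board data and statistical techniques.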
Back analysis of geomechanical parameters in underground engineering using artificial bee colony.
Zhu, Changxing; Zhao, Hongbo; Zhao, Ming
2014-01-01
Accurate geomechanical parameters are critical in tunnel excavation, design, and support. In this paper, a displacement back analysis based on the artificial bee colony (ABC) algorithm is proposed to identify geomechanical parameters from monitored displacements. ABC was used as a global optimization algorithm to search for the unknown geomechanical parameters in problems with an analytical solution. For problems without an analytical solution, optimal back analysis is time-consuming, so a least-squares support vector machine (LSSVM) was used to build the relationship between the unknown geomechanical parameters and the displacements and to improve the efficiency of the back analysis. The proposed method was applied to a tunnel with an analytical solution and to a tunnel without one. The results show the proposed method is feasible.
ERIC Educational Resources Information Center
Risley, John S.
1983-01-01
Reviews "Laws of Motion" computer program produced by Educational Materials and Equipment Company. The program (language unknown), for Apple II/II+, is a simulation of an inclined plane, free fall, and Atwood machine in Newtonian/Aristotelian worlds. Suggests use as supplement to discussion of motion by teacher who fully understands the…
NASA-DoD Lead-Free Electronics Project
NASA Technical Reports Server (NTRS)
Kessel, Kurt
2010-01-01
Original Equipment Manufacturers (OEMs), depots, and support contractors have to be prepared to deal with an electronics supply chain that increasingly provides parts with lead-free finishes, some labeled no differently and intermingled with their SnPb counterparts. Allowance of lead-free components presents one of the greatest risks to the reliability of military and aerospace electronics. The introduction of components with lead-free terminations, termination finishes, or circuit boards presents a host of concerns to customers, suppliers, and maintainers of aerospace and military electronic systems, such as: 1. Electrical shorting due to tin whiskers 2. Incompatibility of lead-free processes and parameters (including higher melting points of lead-free alloys) with other materials in the system 3. Unknown material properties and incompatibilities that could reduce solder joint reliability As the transition to lead-free becomes a certain reality for military and aerospace applications, it will be critical to fully understand the implications of reworking lead-free assemblies.
Elasto-capillarity in insect fibrillar adhesion.
Gernay, Sophie; Federle, Walter; Lambert, Pierre; Gilet, Tristan
2016-08-01
The manipulation of microscopic objects is challenging because of high adhesion forces, which render macroscopic gripping strategies unsuitable. Adhesive footpads of climbing insects could reveal principles relevant for micro-grippers, as they are able to attach and detach rapidly during locomotion. However, the underlying mechanisms are still not fully understood. In this work, we characterize the geometry and contact formation of the adhesive setae of dock beetles (Gastrophysa viridula) by interference reflection microscopy. We compare our experimental results to the model of an elastic beam loaded with capillary forces. Fitting the model to experimental data yielded not only estimates for seta adhesion and compliance in agreement with previous direct measurements, but also previously unknown parameters such as the volume of the fluid meniscus and the bending stiffness of the tip. In addition to confirming the primary role of surface tension for insect adhesion, our investigation reveals marked differences in geometry and compliance between the three main kinds of seta tips in leaf beetles. © 2016 The Author(s).
Nonlinear adaptive control system design with asymptotically stable parameter estimation error
NASA Astrophysics Data System (ADS)
Mishkov, Rumen; Darmonski, Stanislav
2018-01-01
The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown-parameter estimation without persistent excitation and the capability to directly control the transient response time of the estimates. The proposed method modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concept. The data accumulation principle is the main tool for achieving asymptotic unknown-parameter estimation. It relies on the parametric identifiability property introduced for the system. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied to nonlinear adaptive speed-tracking vector control of a three-phase induction motor.
NASA Astrophysics Data System (ADS)
Cui, Jie; Li, Zhiying; Krems, Roman V.
2015-10-01
We consider a problem of extrapolating the collision properties of a large polyatomic molecule A-H to make predictions of the dynamical properties for another molecule related to A-H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A-X. We assume that the effect of the -H →-X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He — C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
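A minimal sketch of the Gaussian Process step, assuming a squared-exponential (RBF) kernel with fixed hyperparameters and a one-dimensional stand-in observable; the training data below are synthetic and illustrative, not trajectory-calculation results:

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, noise=1e-6):
    """Minimal Gaussian Process regression (RBF kernel): learn how an
    observable depends on an unknown potential parameter from a few sampled
    values, then predict mean and uncertainty at new parameter values."""
    def k(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = k(X_train, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Hypothetical observable (a stand-in for a cross section) sampled at a few
# values of a single interaction-strength parameter.
xs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
ys = np.sin(xs)                      # stand-in for trajectory results
mean, std = gp_predict(xs, ys, np.array([0.75, 3.0]))
```

The predictive standard deviation grows away from the training inputs, which is what turns physically reasonable ranges of the unknown potential parameters into a prediction interval for the observable.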
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef
Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in the support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Sen; Zhang, Wei; Lian, Jianming
This paper focuses on the coordination of a population of Thermostatically Controlled Loads (TCLs) with unknown parameters to achieve group objectives. The problem involves designing the bidding and market clearing strategy to motivate self-interested users to realize efficient energy allocation subject to a peak power constraint. Using the mechanism design approach, we propose a market-based coordination framework, which can effectively incorporate heterogeneous load dynamics, systematically deal with user preferences, account for the unknown load model parameters, and enable the real-world implementation with limited communication resources. This paper is divided into two parts. Part I presents a mathematical formulation of the problem and develops a coordination framework using the mechanism design approach. Part II presents a learning scheme to account for the unknown load model parameters, and evaluates the proposed framework through realistic simulations.
NASA Astrophysics Data System (ADS)
He, Jia; Xu, You-Lin; Zhan, Sheng; Huang, Qin
2017-03-01
When health monitoring system and vibration control system both are required for a building structure, it will be beneficial and cost-effective to integrate these two systems together for creating a smart building structure. Recently, on the basis of extended Kalman filter (EKF), a time-domain integrated approach was proposed for the identification of structural parameters of the controlled buildings with unknown ground excitations. The identified physical parameters and structural state vectors were then utilized to determine the control force for vibration suppression. In this paper, the possibility of establishing such a smart building structure with the function of simultaneous damage detection and vibration suppression was explored experimentally. A five-story shear building structure equipped with three magneto-rheological (MR) dampers was built. Four additional columns were added to the building model, and several damage scenarios were then simulated by symmetrically cutting off these columns in certain stories. Two sets of earthquakes, i.e. Kobe earthquake and Northridge earthquake, were considered as seismic input and assumed to be unknown during the tests. The structural parameters and the unknown ground excitations were identified during the tests by using the proposed identification method with the measured control forces. Based on the identified structural parameters and system states, a switching control law was employed to adjust the current applied to the MR dampers for the purpose of vibration attenuation. The experimental results show that the presented approach is capable of satisfactorily identifying structural damages and unknown excitations on one hand and significantly mitigating the structural vibration on the other hand.
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe
2017-09-01
We propose a sparse Bayesian learning algorithm for improved estimation of white matter fiber parameters from compressed (under-sampled q-space) multi-shell diffusion MRI data. The multi-shell data are represented in dictionary form using a non-monoexponential decay model of diffusion, based on a continuous gamma distribution of diffusivities. The fiber volume fractions with predefined orientations, which are the unknown parameters, form the dictionary weights. These unknown parameters are estimated within a linear unmixing framework, using a sparse Bayesian learning algorithm. A localized learning of hyperparameters at each voxel and for each possible fiber orientation improves the parameter estimation. Our experiments using synthetic data from the ISBI 2012 HARDI reconstruction challenge and in-vivo data from the Human Connectome Project demonstrate the improvements.
Semi-Supervised Clustering for High-Dimensional and Sparse Features
ERIC Educational Resources Information Center
Yan, Su
2010-01-01
Clustering is one of the most common data mining tasks, used frequently for data organization and analysis in various application domains. Traditional machine learning approaches to clustering are fully automated and unsupervised where class labels are unknown a priori. In real application domains, however, some "weak" form of side…
International Education: A Compendium of Federal Agency Programs.
ERIC Educational Resources Information Center
Owens, Becky, Comp.
Federal agency programs in support of international education are summarized in this report. The publication is designed to help readers discover unknown programs, more fully understand more familiar programs, and learn more about specific requirements for agencies where proposals have been unsuccessfully submitted in the past. Focus is directed…
JPRS Report: Near East and South Asia.
1990-10-15
we could justifiably be proud of our fully home-grown, tended by nature, traitors. Today, in the era of DAP [expansion unknown], urea, pesticides...Islamabad THE MUSLIM in English 18 Aug 90 p 4 [Article by Anjum Ibrahim : "Social Justice Versus the Rural Elite" quotation marks as published
Li, Zhenyu; Wang, Bin; Liu, Hong
2016-08-30
Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme.
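The real-time least-squares estimator can be sketched as a recursive least-squares (RLS) update; the two-parameter regression below is an illustrative stand-in for the spacecraft mass-property regressor, and the class name and data are hypothetical:

```python
import numpy as np

class RecursiveLeastSquares:
    """Recursive least-squares sketch for identifying unknown parameters
    theta in a model y = phi . theta online, one measurement at a time,
    as needed when feeding mass-property estimates to a self-tuning
    controller in real time."""
    def __init__(self, n, lam=1.0):
        self.theta = np.zeros(n)
        self.P = 1e6 * np.eye(n)     # large initial covariance
        self.lam = lam               # forgetting factor (1.0 = no forgetting)

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
        return self.theta

# Stream of noise-free measurements from a hypothetical plant
# with true parameters theta = [3.0, -1.0].
rls = RecursiveLeastSquares(2)
rng = np.random.default_rng(2)
for _ in range(100):
    phi = rng.uniform(-1, 1, 2)
    y = 3.0 * phi[0] - 1.0 * phi[1]
    theta = rls.update(phi, y)
```

Each update costs O(n^2), so the estimate can be refreshed at the control rate and used to retune the controller parameters.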
Li, Zhenyu; Wang, Bin; Liu, Hong
2016-01-01
Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme. PMID:27589748
NASA Astrophysics Data System (ADS)
Singh, R.; Verma, H. K.
2013-12-01
This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, TLBO requires no algorithm-specific tuning parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used to implement the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and converges faster than the PSO algorithm. TLBO is preferable where accuracy is more essential than convergence speed.
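A compact TLBO sketch in the spirit of the paper (which uses MATLAB): both phases of the algorithm identify the two coefficients of a hypothetical first-order IIR plant from input/output data. Population size and iteration count are the only settings, since TLBO has no algorithm-specific tuning parameters; the plant and input sequence are illustrative assumptions.

```python
import random

def plant_output(a, b, x):
    """First-order IIR plant y[n] = a*y[n-1] + b*x[n]."""
    y, out = 0.0, []
    for xn in x:
        y = a * y + b * xn
        out.append(y)
    return out

def tlbo_identify(x, y_target, pop=20, iters=60, seed=1):
    """Minimal teaching-learning-based optimization fitting the two unknown
    plant coefficients by minimizing the output-matching squared error."""
    rng = random.Random(seed)
    def cost(p):
        yh = plant_output(p[0], p[1], x)
        return sum((u - v) ** 2 for u, v in zip(yh, y_target))
    P = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(pop)]
    for _ in range(iters):
        # Teacher phase: move each learner toward the best one.
        teacher = min(P, key=cost)
        mean = [sum(p[d] for p in P) / pop for d in range(2)]
        for i, p in enumerate(P):
            tf = rng.choice([1, 2])   # teaching factor
            cand = [p[d] + rng.random() * (teacher[d] - tf * mean[d])
                    for d in range(2)]
            if cost(cand) < cost(p):
                P[i] = cand
        # Learner phase: learn from (or move away from) a random peer.
        for i, p in enumerate(P):
            q = P[rng.randrange(pop)]
            sign = 1.0 if cost(p) < cost(q) else -1.0
            cand = [p[d] + sign * rng.random() * (p[d] - q[d])
                    for d in range(2)]
            if cost(cand) < cost(p):
                P[i] = cand
    return min(P, key=cost)

# Unknown plant (a=0.6, b=0.4) excited with a hypothetical input sequence.
rng_in = random.Random(0)
x = [rng_in.uniform(-1, 1) for _ in range(50)]
y = plant_output(0.6, 0.4, x)
a_hat, b_hat = tlbo_identify(x, y)
```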
DOE Office of Scientific and Technical Information (OSTI.GOV)
Selle, J.E.
A modification was made to the Kaufman method of calculating binary phase diagrams to permit calculation of intra-rare-earth diagrams. Atomic volumes for all phases, real or hypothetical, are necessary to determine interaction parameters for calculation of complete diagrams. The procedures used to determine unknown atomic volumes are described. Also, procedures are described for determining lattice stability parameters for unknown transformations. Results are presented on the calculation of intra-rare-earth diagrams between both trivalent and divalent rare earths. 13 refs., 36 figs., 11 tabs.
NASA Astrophysics Data System (ADS)
Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.
2015-07-01
For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.
Potocki, J K; Tharp, H S
1993-01-01
The success of treating cancerous tissue with heat depends on the temperature elevation, the amount of tissue elevated to that temperature, and the length of time that the tissue temperature is elevated. In clinical situations the temperature of most of the treated tissue volume is unknown, because only a small number of temperature sensors can be inserted into the tissue. A state space model based on a finite difference approximation of the bioheat transfer equation (BHTE) is developed for identification purposes. A full-order extended Kalman filter (EKF) is designed to estimate both the unknown blood perfusion parameters and the temperature at unmeasured locations. Two reduced-order estimators are designed as computationally less intensive alternatives to the full-order EKF. Simulation results show that the success of the estimation scheme depends strongly on the number and location of the temperature sensors. Superior results occur when a temperature sensor exists in each unknown blood perfusion zone, and the number of sensors is at least as large as the number of unknown perfusion zones. Unacceptable results occur when there are more unknown perfusion parameters than temperature sensors, or when the sensors are placed in locations that do not sample the unknown perfusion information.
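The joint state/parameter estimation can be illustrated with an EKF on a one-node toy surrogate of the bioheat equation, dT/dt = -w*T + u, where a perfusion-like parameter w is appended to the state and estimated from noisy temperature readings. The model, noise levels, and true parameter value are illustrative assumptions, not the paper's finite-difference BHTE.

```python
import numpy as np

def ekf_perfusion(z, u, dt=0.1, q=1e-6, r=1e-4):
    """Joint EKF over x = [T, w] for the scalar surrogate dT/dt = -w*T + u,
    with w treated as a (nearly) constant unknown parameter."""
    x = np.array([0.0, 0.5])            # initial guesses [T, w]
    P = np.diag([1.0, 1.0])
    Q = np.diag([q, q])
    H = np.array([[1.0, 0.0]])          # only temperature is measured
    for zk, uk in zip(z, u):
        T, w = x
        # Predict (Euler step) and linearize about the prior state.
        x = np.array([T + dt * (-w * T + uk), w])
        F = np.array([[1.0 - dt * w, -dt * T], [0.0, 1.0]])
        P = F @ P @ F.T + Q
        # Update with the temperature measurement.
        S = H @ P @ H.T + r
        K = P @ H.T / S
        x = x + (K * (zk - x[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x

# Synthetic data from a true w = 2.0 with constant heating u = 1.0.
dt, w_true, T = 0.1, 2.0, 0.0
z, u = [], []
rng = np.random.default_rng(1)
for _ in range(400):
    T = T + dt * (-w_true * T + 1.0)
    u.append(1.0)
    z.append(T + 0.01 * rng.standard_normal())
T_hat, w_hat = ekf_perfusion(np.array(z), u, dt=dt)
```

As the abstract notes, this only works when the measurements actually carry information about each unknown perfusion zone; with more unknown perfusion parameters than sensors, the corresponding estimates do not converge.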
Bayesian power spectrum inference with foreground and target contamination treatment
NASA Astrophysics Data System (ADS)
Jasche, J.; Lavaux, G.
2017-10-01
This work presents a joint and self-consistent Bayesian treatment of various foreground and target contaminations when inferring cosmological power spectra and three-dimensional density fields from galaxy redshift surveys. This is achieved by introducing additional block-sampling procedures for unknown coefficients of foreground and target contamination templates to the previously presented ARES framework for Bayesian large-scale structure analyses. As a result, the method infers jointly and fully self-consistently three-dimensional density fields, cosmological power spectra, luminosity-dependent galaxy biases, noise levels of the respective galaxy distributions, and coefficients for a set of a priori specified foreground templates. In addition, this fully Bayesian approach permits detailed quantification of correlated uncertainties amongst all inferred quantities and correctly marginalizes over observational systematic effects. We demonstrate the validity and efficiency of our approach in obtaining unbiased estimates of power spectra via applications to realistic mock galaxy observations that are subject to stellar contamination and dust extinction. While simultaneously accounting for galaxy biases and unknown noise levels, our method reliably and robustly infers three-dimensional density fields and corresponding cosmological power spectra from deep galaxy surveys. Furthermore, our approach correctly accounts for joint and correlated uncertainties between unknown coefficients of foreground templates and the amplitudes of the power spectrum. This effect amounts to correlations and anti-correlations of up to 10 per cent across wide ranges in Fourier space.
Li, Zhongyu; Wu, Junjie; Huang, Yulin; Yang, Haiguang; Yang, Jianyu
2017-01-23
Bistatic forward-looking SAR (BFSAR) is a kind of bistatic synthetic aperture radar (SAR) system that can image forward-looking terrain in the flight direction of an aircraft. Until now, BFSAR imaging theories and methods for a stationary scene have been researched thoroughly. However, for moving-target imaging with BFSAR, the non-cooperative movement of the moving target induces some new issues: (I) large and unknown range cell migration (RCM) (including range walk and high-order RCM); (II) the spatial variances of the Doppler parameters (including the Doppler centroid and high-order Doppler) are not only unknown, but also nonlinear for different point-scatterers. In this paper, we put forward an adaptive moving-target imaging method for BFSAR. First, the large and unknown range walk is corrected by applying a keystone transform over the whole received echo, and the relationships among the unknown high-order RCM, the nonlinear spatial variances of the Doppler parameters, and the speed of the moving target are established. Then, using an optimized nonlinear chirp scaling (NLCS) technique, not only can the unknown high-order RCM be accurately corrected, but the nonlinear spatial variances of the Doppler parameters can also be balanced. Finally, a high-order polynomial filter is applied to compress the whole azimuth data of the moving target. Numerical simulations verify the effectiveness of the proposed method.
Desktop Systems for Manufacturing Carbon Nanotube Films by Chemical Vapor Deposition
2007-06-01
Existing low-cost tube furnace designs limit the researcher's ability to fully separate critical reaction parameters such as temperature and flow... Often heated using an external resistive heater coil, a typical configuration, shown in Figure 4, might place a tube made of a non-reactive... Additionally, the use of heating elements external to...
Dynamic parameter identification of robot arms with servo-controlled electrical motors
NASA Astrophysics Data System (ADS)
Jiang, Zhao-Hui; Senda, Hiroshi
2005-12-01
This paper addresses the issue of dynamic parameter identification of a robot manipulator with servo-controlled electrical motors. An assumption is made that all kinematic parameters, such as link lengths, are known, and only the dynamic parameters containing mass, moment of inertia, and their functions need to be identified. First, we derive the dynamics of the robot arm in a form linear in the unknown dynamic parameters, taking the dynamic characteristics of the motor and servo unit into consideration. Then, we implement the parameter identification approach to identify the unknown parameters for each link separately. A pseudo-inverse matrix is used in the formulation of the parameter identification. The optimal solution is guaranteed in the least-squares sense of the mean errors. A direct-drive (DD) SCARA-type industrial robot arm, the AdeptOne, is used as an application example of the parameter identification. Simulations and experiments for both open-loop and closed-loop control are carried out. Comparison of the results confirms the correctness and usefulness of the parameter identification and the derived dynamic model.
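The core of this approach can be sketched in a few lines. The regressor below is a hypothetical single-link model with inertia, viscous friction, and gravity terms, not the AdeptOne dynamics; it only illustrates the linear-in-parameters form tau = Y(q, dq, ddq) @ theta solved by a pseudo-inverse:

```python
import numpy as np

# Hypothetical single-link model: tau = a*ddq + b*dq + c*sin(q),
# i.e. linear in the unknown parameters theta = [a, b, c].
rng = np.random.default_rng(0)
theta_true = np.array([2.0, 0.5, 0.1])      # inertia, friction, gravity terms

def regressor(q, dq, ddq):
    # Y such that tau = Y @ theta
    return np.column_stack([ddq, dq, np.sin(q)])

# Noisy torque measurements along a sampled trajectory
q, dq, ddq = (rng.uniform(-1, 1, 200) for _ in range(3))
Y = regressor(q, dq, ddq)
tau = Y @ theta_true + 0.01 * rng.standard_normal(200)

# Least-squares (mean-error) optimal estimate via the Moore-Penrose pseudo-inverse
theta_hat = np.linalg.pinv(Y) @ tau
```

With a sufficiently exciting trajectory, the pseudo-inverse recovers the parameter vector up to the noise level, which is the property the paper exploits link by link.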
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Jie; Krems, Roman V.; Li, Zhiying
2015-10-21
We consider a problem of extrapolating the collision properties of a large polyatomic molecule A-H to make predictions of the dynamical properties for another molecule related to A-H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A-X. We assume that the effect of the -H to -X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He-C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
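A minimal numpy-only sketch of the Gaussian Process step: learn the dependence of an observable on one unknown potential parameter from a few sampled values, then predict (with uncertainty) over the physical range. The target sin(3x) is a hypothetical stand-in for trajectory-calculation results, and the kernel length scale is an assumed value:

```python
import numpy as np

# Squared-exponential (RBF) kernel between two 1-D parameter sets
def rbf(a, b, length=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

x_train = np.linspace(0.0, 1.0, 8)          # sampled parameter values
y_train = np.sin(3 * x_train)               # observable at those samples (stand-in)

x_test = np.linspace(0.0, 1.0, 50)
K = rbf(x_train, x_train) + 1e-6 * np.eye(8)   # jitter for conditioning
k_star = rbf(x_test, x_train)
mean = k_star @ np.linalg.solve(K, y_train)    # GP posterior mean
# pointwise posterior variance: prior variance 1 minus the explained part
var = 1.0 - np.einsum('ij,ji->i', k_star, np.linalg.solve(K, k_star.T))
```

The posterior variance is what turns "physically reasonable ranges" of the parameters into a prediction interval for the observable.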
An almost-parameter-free harmony search algorithm for groundwater pollution source identification.
Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui
2013-01-01
The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the results indicate that the proposed almost-parameter-free harmony search-based optimization model can give satisfactory estimates, even in the presence of irregular geometry, erroneous monitoring data, and a shortage of prior information on potential source locations.
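For readers unfamiliar with harmony search, here is a bare-bones version of the classical algorithm minimizing a simple quadratic stand-in for the transport-model misfit (the paper couples the search to a contaminant transport simulator, and its "almost-parameter-free" variant adapts the control constants shown fixed here):

```python
import random

# Stand-in misfit: distance of candidate source parameters from (3, 3)
def misfit(x):
    return sum((xi - 3.0) ** 2 for xi in x)

random.seed(1)
dim, hms, iters = 2, 10, 2000
hmcr, par, bw = 0.9, 0.3, 0.1      # classical HS control parameters
lo, hi = -10.0, 10.0

# Harmony memory: a small population of candidate solutions
memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
for _ in range(iters):
    new = []
    for d in range(dim):
        if random.random() < hmcr:             # memory consideration
            v = random.choice(memory)[d]
            if random.random() < par:          # pitch adjustment (local move)
                v += random.uniform(-bw, bw)
        else:                                  # random re-initialization
            v = random.uniform(lo, hi)
        new.append(min(hi, max(lo, v)))
    worst = max(range(hms), key=lambda i: misfit(memory[i]))
    if misfit(new) < misfit(memory[worst]):    # replace worst harmony
        memory[worst] = new

best = min(memory, key=misfit)
```

In the simulation-optimization setting, `misfit` would instead compare simulated concentrations at monitoring wells against observed data.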
Liese, Jan; Winter, Karsten; Glass, Änne; Bertolini, Julia; Kämmerer, Peer Wolfgang; Frerich, Bernhard; Schiefke, Ingolf; Remmerbach, Torsten W
2017-11-01
Uncertainties in the detection of oral epithelial dysplasia (OED) frequently result from sampling error, especially in inflammatory oral lesions. Endomicroscopy allows non-invasive, "en face" imaging of the upper oral epithelium, but parameters of OED are unknown. Mucosal nuclei were imaged in 34 toluidine blue-stained oral lesions with a commercial endomicroscope. Histopathological diagnosis placed four biopsies in the "dys-/neoplastic," 23 in the "inflammatory," and seven in the "others" disease groups. The strength of different assessment strategies (nuclear scoring, nuclear count, and automated nuclear analysis) was measured by the area under the ROC curve (AUC) for identifying the histopathological "dys-/neoplastic" group. Nuclear objects from automated image analysis were visually corrected. The best-performing nuclear-to-image ratio parameters were the count of large nuclei (AUC=0.986) and the 6-nearest-neighborhood relation (AUC=0.896), and the best parameters of nuclear polymorphism were the count of atypical nuclei (AUC=0.996) and the compactness of nuclei (AUC=0.922). Excluding low-grade OED, nuclear scoring and count reached 100% sensitivity and 98% specificity for detection of dys-/neoplastic lesions. In automated analysis, combining parameters enhanced diagnostic strength. A sensitivity of 100% and a specificity of 87% were seen for the distances of the 6 nearest neighbors and aspect ratios, even in uncorrected objects. Correction improved measures of nuclear polymorphism only. The hue of the background color was a stronger predictor of the dys-/neoplastic group than nuclear density (AUC=0.779 vs 0.687), indicating that the macroscopic aspect is biased. Nuclear-to-image ratios are applicable to automated optical in vivo diagnostics for oral potentially malignant disorders. Nuclear endomicroscopy may promote non-invasive, early detection of dys-/neoplastic lesions by reducing sampling error. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Mahdi, Abbas Ali; Fatima, Ghizal; Das, Siddhartha Kumar; Verma, Nar Singh
2011-04-01
Fibromyalgia syndrome (FMS) is a complex chronic condition causing widespread pain and a variety of other symptoms. It produces pain in the soft tissues located around joints throughout the body. FMS has an unknown etiology, and its pathophysiology is not fully understood. However, abnormalities in the circadian rhythms of hormonal profiles and cytokines have been observed in this disorder. Moreover, there are reports of deficiencies of serotonin, melatonin, cortisol and cytokines in FMS patients, all of which are regulated by the circadian rhythm. Melatonin, the primary hormone of the pineal gland, regulates the body's circadian rhythm; its levels normally begin to rise in the mid-to-late evening, remain high for most of the night, and then decrease in the early morning. FMS patients have lower melatonin secretion during the hours of darkness than healthy subjects. This may contribute to impaired sleep at night, fatigue during the day and altered pain perception. Studies have shown a blunting of the normal diurnal cortisol rhythm, with elevated evening serum cortisol levels in patients with FMS. Several symptoms of FMS may thus arise from perturbed cortisol secretion. Moreover, disturbed cytokine levels have also been reported in FMS patients. Therefore, circadian rhythm can be an important factor in the pathophysiology, diagnosis and treatment of FMS. This article explores the circadian pattern of abnormalities in FMS patients, as this may help in better understanding the variation in symptoms of FMS and its possible relationship with circadian variations in melatonin, cortisol, cytokine and serotonin levels.
Nonlinear unitary quantum collapse model with self-generated noise
NASA Astrophysics Data System (ADS)
Geszti, Tamás
2018-04-01
Collapse models including some external noise of unknown origin are routinely used to describe phenomena on the quantum-classical border; in particular, quantum measurement. Although containing nonlinear dynamics and thereby exposed to the possibility of superluminal signaling in individual events, such models are widely accepted on the basis of fully reproducing the non-signaling statistical predictions of quantum mechanics. Here we present a deterministic nonlinear model without any external noise, in which randomness—instead of being universally present—emerges in the measurement process, from deterministic irregular dynamics of the detectors. The treatment is based on a minimally nonlinear von Neumann equation for a Stern–Gerlach or Bell-type measuring setup, containing coordinate and momentum operators in a self-adjoint skew-symmetric, split scalar product structure over the configuration space. The microscopic states of the detectors act as a nonlocal set of hidden parameters, controlling individual outcomes. The model is shown to display pumping of weights between setup-defined basis states, with a single winner randomly selected and the rest collapsing to zero. Environmental decoherence has no role in the scenario. Through stochastic modelling, based on Pearle’s ‘gambler’s ruin’ scheme, outcome probabilities are shown to obey Born’s rule under a no-drift or ‘fair-game’ condition. This fully reproduces quantum statistical predictions, implying that the proposed non-linear deterministic model satisfies the non-signaling requirement. Our treatment is still vulnerable to hidden signaling in individual events, which remains to be handled by future research.
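The 'gambler's ruin' mechanism invoked above is easy to check numerically. In this sketch, two outcome weights perform an unbiased ('fair-game') random walk that conserves their sum; the walk stops when one weight is absorbed at 1 and the other collapses to 0. The step size and trial count are arbitrary choices for illustration:

```python
import random

rng = random.Random(0)

def collapse(p0, step=0.02):
    # Unbiased random walk of outcome A's weight p (outcome B holds 1 - p)
    p = p0
    while 0.0 < p < 1.0:
        p += step if rng.random() < 0.5 else -step
    return p >= 1.0              # True if outcome A won

p0 = 0.3                         # initial weight of outcome A
trials = 2000
freq = sum(collapse(p0) for _ in range(trials)) / trials
```

Because the walk is a martingale, the winning frequency of an outcome converges to its initial weight, which is exactly Born's rule under the no-drift condition described in the abstract.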
NASA Technical Reports Server (NTRS)
Rapp, R. H.
1974-01-01
The equations needed for the incorporation of gravity anomalies as unknown parameters in an orbit determination program are described. These equations were implemented in the Geodyn computer program, which was used to process optical satellite observations. The arc-dependent parameter unknowns, 184 unknown 15 deg anomalies, and the coordinates of 7 tracking stations were considered. Up to 39 arcs (5 to 7 days each), involving 10 different satellites, were processed. An anomaly solution from the satellite data and a combination solution with 15 deg terrestrial anomalies were made. The limited data samples indicate that the method works. The 15 deg anomalies from the various solutions and the potential coefficients implied by the different solutions are reported.
NASA Technical Reports Server (NTRS)
Smith, R. C.; Bowers, K. L.
1991-01-01
A fully Sinc-Galerkin method for recovering the spatially varying stiffness and damping parameters in Euler-Bernoulli beam models is presented. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which converges exponentially and is valid on the infinite time interval. Hence the method avoids the time-stepping which is characteristic of many of the forward schemes which are used in parameter recovery algorithms. Tikhonov regularization is used to stabilize the resulting inverse problem, and the L-curve method for determining an appropriate value of the regularization parameter is briefly discussed. Numerical examples are given which demonstrate the applicability of the method for both individual and simultaneous recovery of the material parameters.
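The Tikhonov step can be illustrated independently of the Sinc-Galerkin discretization. Below, an ill-conditioned Vandermonde matrix stands in for the parameter-recovery operator (an assumption for self-containment, not the system from the paper), and the regularized normal equations stabilize the solve:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
A = np.vander(np.linspace(0, 1, n), n)      # severely ill-conditioned stand-in
x_true = np.ones(n)
b = A @ x_true + 1e-8 * rng.standard_normal(n)

def tikhonov(A, b, lam):
    # minimize ||A x - b||^2 + lam ||x||^2 via regularized normal equations
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ b)

x_reg = tikhonov(A, b, 1e-10)
residual = float(np.linalg.norm(A @ x_reg - b))
# Sweeping lam and plotting log(residual) against log(||x_reg||)
# traces out the L-curve used to pick the regularization parameter.
```

Without the `lam` term the normal equations are numerically singular; with it, the solution norm stays bounded while the residual remains small, which is the trade-off the L-curve visualizes.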
Pet-Armacost, J J; Sepulveda, J; Sakude, M
1999-12-01
The US Department of Transportation was interested in the risks associated with transporting Hydrazine in tanks with and without relief devices. Hydrazine is both highly toxic and flammable, as well as corrosive. Consequently, there was a conflict as to whether a relief device should be used or not. Data were not available on the impact of relief devices on release probabilities or the impact of Hydrazine on the likelihood of fires and explosions. In this paper, a Monte Carlo sensitivity analysis of the unknown parameters was used to assess the risks associated with highway transport of Hydrazine. To help determine whether or not relief devices should be used, fault trees and event trees were used to model the sequences of events that could lead to adverse consequences during transport of Hydrazine. The event probabilities in the event trees were derived as functions of the parameters whose effects were not known. The impacts of these parameters on the risk of toxic exposures, fires, and explosions were analyzed through a Monte Carlo sensitivity analysis and analyzed statistically through an analysis of variance. The analysis allowed the determination of which of the unknown parameters had a significant impact on the risks. It also provided the necessary support to a critical transportation decision even though the values of several key parameters were not known.
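The Monte Carlo sensitivity idea can be shown with a toy two-branch event tree. The branch structure and probability ranges below are hypothetical illustrations, not the hydrazine fault/event-tree model from the study:

```python
import random

random.seed(0)
samples = []
for _ in range(10_000):
    # Unknown inputs sampled from assumed ranges
    p_release = random.uniform(0.001, 0.01)   # effect of relief device (unknown)
    p_ignite = random.uniform(0.05, 0.30)     # ignition likelihood (unknown)
    samples.append(p_release * p_ignite)      # P(fire) along this branch

mean_risk = sum(samples) / len(samples)
lo_band, hi_band = min(samples), max(samples)
```

Tabulating which input drives most of the spread in `samples` (e.g. by an analysis of variance over the sampled inputs, as in the paper) identifies the unknown parameters that actually matter for the decision.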
Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul
2015-01-01
In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of Exp-function method with nature inspired algorithm. The given nonlinear partial differential equation (NPDE) through substitution is converted into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem, and to achieve the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and the solutions obtained using some traditional methods, including adomian decomposition method (ADM), homotopy perturbation method (HPM), and optimal homotopy asymptotic method (OHAM), show that the suggested scheme is fairly accurate and viable for solving such problems.
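The scheme's key step, turning the substituted trial solution into a global error minimization problem solved by an evolutionary search, can be sketched on a toy ODE (y' = y, y(0) = 1, not Burgers'-Fisher) with a one-parameter trial solution y = exp(a*x), so the search should recover a near 1:

```python
import math, random

xs = [i / 10 for i in range(11)]          # collocation points on [0, 1]

def fitness(a):
    # squared residual of y' - y with the trial solution y = exp(a*x);
    # the initial condition y(0) = 1 is satisfied by construction
    return sum((a * math.exp(a * x) - math.exp(a * x)) ** 2 for x in xs)

# Bare-bones evolutionary loop (mutation + elitist selection; the paper's
# GA also includes crossover, omitted here for a 1-D parameter)
random.seed(0)
pop = [random.uniform(-3, 3) for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness)
    parents = pop[:10]                     # keep the 10 fittest
    children = [random.choice(parents) + random.gauss(0, 0.1) for _ in range(20)]
    pop = parents + children

best = min(pop, key=fitness)
```

For Burgers'-Fisher, `fitness` would instead evaluate the residual of the NODE obtained from the travelling-wave substitution, with the Exp-function coefficients as the search variables.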
Reconstructing high-dimensional two-photon entangled states via compressive sensing
Tonolini, Francesco; Chan, Susan; Agnew, Megan; Lindsay, Alan; Leach, Jonathan
2014-01-01
Accurately establishing the state of large-scale quantum systems is an important tool in quantum information science; however, the large number of unknown parameters hinders the rapid characterisation of such states, and reconstruction procedures can become prohibitively time-consuming. Compressive sensing, a procedure for solving inverse problems by incorporating prior knowledge about the form of the solution, provides an attractive alternative to the problem of high-dimensional quantum state characterisation. Using a modified version of compressive sensing that incorporates the principles of singular value thresholding, we reconstruct the density matrix of a high-dimensional two-photon entangled system. The dimension of each photon is equal to d = 17, corresponding to a system of 83521 unknown real parameters. Accurate reconstruction is achieved with approximately 2500 measurements, only 3% of the total number of unknown parameters in the state. The algorithm we develop is fast, computationally inexpensive, and applicable to a wide range of quantum states, thus demonstrating compressive sensing as an effective technique for measuring the state of large-scale quantum systems.
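The low-rank recovery at the heart of this approach can be sketched in a simplified setting: reconstruct a small low-rank symmetric matrix (a stand-in for a density matrix) from a random subset of its entries. Real compressive tomography measures operator expectation values rather than raw entries, and the paper uses soft singular value thresholding; the hard singular-value projection below is a simpler cousin chosen to keep the example short:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rank = 20, 2
U = rng.standard_normal((n, rank))
M = (U @ U.T) / n                    # low-rank, symmetric target
mask = rng.random((n, n)) < 0.5      # observe roughly half the entries

X = np.zeros((n, n))
for _ in range(200):
    X = X + mask * (M - X)           # data-consistency step on observed entries
    Uc, s, Vt = np.linalg.svd(X)
    s[rank:] = 0.0                   # project back to rank 2
    X = Uc @ (s[:, None] * Vt)

rel_err = float(np.linalg.norm(X - M) / np.linalg.norm(M))
```

The prior knowledge (low rank, i.e. a nearly pure state) is what lets far fewer measurements than unknown parameters suffice.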
Evolutionary algorithm based heuristic scheme for nonlinear heat transfer equations.
Ullah, Azmat; Malik, Suheel Abdullah; Alimgeer, Khurram Saleem
2018-01-01
In this paper, a hybrid heuristic scheme based on two different basis functions, the log-sigmoid and the Bernstein polynomial, with unknown parameters is used for solving nonlinear heat transfer equations efficiently. The proposed technique transforms the given nonlinear ordinary differential equation into an equivalent global error minimization problem. A trial solution for the given nonlinear differential equation is formulated using a fitness function with unknown parameters. The proposed hybrid scheme of a Genetic Algorithm (GA) with an Interior Point Algorithm (IPA) is used to solve the minimization problem and to obtain the optimal values of the unknown parameters. The effectiveness of the proposed scheme is validated by solving nonlinear heat transfer equations. The results obtained by the proposed scheme are compared with both the exact solution and the solution obtained by the Haar wavelet-quasilinearization technique and found to be in close agreement, which demonstrates the effectiveness and viability of the suggested scheme. Moreover, a statistical analysis is also conducted to investigate the stability and reliability of the presented scheme.
Inverse modeling with RZWQM2 to predict water quality
USDA-ARS?s Scientific Manuscript database
Agricultural systems models such as RZWQM2 are complex and have numerous parameters that are unknown and difficult to estimate. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyata, Y.; Suzuki, T.; Takechi, M.
2015-07-15
For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters, such as the number of unknown parameters and the shape, in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimize errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.
Status and Evaluation of Microwave Furnace Capabilities at NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
Lizcano, Maricela; Mackey, Jonathan A.
2014-01-01
The microwave (MW) furnace is a HY-Tech Microwave Systems 2 kW, 2.45 GHz single-mode microwave applicator operating in continuous-wave (CW) mode with variable power. It is located at NASA Glenn Research Center in Cleveland, Ohio. Until recently, the furnace capabilities had not been fully realized due to an unknown failure that damaged critical furnace components. Although the causes of the problems were unknown, an assessment of the furnace indicated that the operational failure may have been partially caused by power quality. This report summarizes the status of the MW furnace and evaluates its capabilities for materials processing.
Ensemble-Based Parameter Estimation in a Coupled General Circulation Model
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-09-10
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional atmospheric observations of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
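A toy twin experiment conveys the mechanism: the unknown model parameter is appended to the state vector and updated by an ensemble Kalman filter from observations of the state alone. The scalar model x(t+1) = a x(t) + 1, the noise levels, and the small additive parameter inflation below are all illustrative assumptions, far simpler than the coupled GCM setting:

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, n_ens, n_steps, r = 0.8, 100, 300, 0.1   # truth, ensemble size, obs error

x_true = 1.0
ens_x = rng.standard_normal(n_ens)
ens_a = rng.uniform(0.2, 1.0, n_ens)             # vague prior on the parameter

for _ in range(n_steps):
    x_true = a_true * x_true + 1.0 + 0.05 * rng.standard_normal()
    y = x_true + r * rng.standard_normal()
    # forecast: each member evolves with its own parameter value
    ens_x = ens_a * ens_x + 1.0 + 0.05 * rng.standard_normal(n_ens)
    ens_a = ens_a + 0.01 * rng.standard_normal(n_ens)   # keep parameter spread alive
    # EnKF update of the augmented state [x, a] from the observation of x
    dx, da = ens_x - ens_x.mean(), ens_a - ens_a.mean()
    var_x = dx @ dx / (n_ens - 1)
    cov_ax = da @ dx / (n_ens - 1)
    innov = y + r * rng.standard_normal(n_ens) - ens_x  # perturbed observations
    ens_x = ens_x + var_x / (var_x + r**2) * innov
    ens_a = ens_a + cov_ax / (var_x + r**2) * innov

a_est = float(ens_a.mean())
```

The parameter is never observed directly; it is corrected through its sampled covariance with the observed state, which is exactly how the coupled system pulls SPD toward its true value.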
Distributed parameter estimation in unreliable sensor networks via broadcast gossip algorithms.
Wang, Huiwei; Liao, Xiaofeng; Wang, Zidong; Huang, Tingwen; Chen, Guo
2016-01-01
In this paper, we present an asynchronous algorithm to estimate the unknown parameter under an unreliable network which allows new sensors to join and old sensors to leave, and can tolerate link failures. Each sensor has access to partially informative measurements when it is awakened. In addition, the proposed algorithm can avoid the interference among messages and effectively reduce the accumulated measurement and quantization errors. Based on the theory of stochastic approximation, we prove that our proposed algorithm almost surely converges to the unknown parameter. Finally, we present a numerical example to assess the performance and the communication cost of the algorithm. Copyright © 2015 Elsevier Ltd. All rights reserved.
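The broadcast gossip primitive underlying such algorithms is compact enough to show directly. In this sketch on a hypothetical 4-node path graph, a randomly awakened node broadcasts its value and its neighbors mix toward it; the full estimation algorithm interleaves such mixing with local measurement updates. Note that broadcast gossip conserves the network average only in expectation, so the consensus value is random but unbiased:

```python
import random

random.seed(0)
values = [1.0, 3.0, 5.0, 7.0]                        # local estimates at 4 nodes
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path graph
gamma = 0.5                                          # mixing parameter

for _ in range(2000):
    i = random.randrange(4)                  # node i wakes up and broadcasts
    for j in neighbors[i]:
        values[j] = (1 - gamma) * values[j] + gamma * values[i]

spread = max(values) - min(values)
```

The asynchronous wake-up model is what makes the scheme robust to nodes joining, leaving, and link failures: no global coordination or bidirectional exchange is required.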
Hu, Jin; Zeng, Chunna
2017-02-01
The complex-valued Cohen-Grossberg neural network is a special kind of complex-valued neural network. In this paper, the synchronization problem of a class of complex-valued Cohen-Grossberg neural networks with known and unknown parameters is investigated. By using Lyapunov functionals and the adaptive control method based on parameter identification, some adaptive feedback schemes are proposed to achieve synchronization exponentially between the drive and response systems. The results obtained in this paper have extended and improved some previous works on adaptive synchronization of Cohen-Grossberg neural networks. Finally, two numerical examples are given to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Analysis of multinomial models with unknown index using data augmentation
Royle, J. Andrew; Dorazio, R.M.; Link, W.A.
2007-01-01
Multinomial models with unknown index ('sample size') arise in many practical settings. In practice, Bayesian analysis of such models has proved difficult because the dimension of the parameter space is not fixed, being in some cases a function of the unknown index. We describe a data augmentation approach to the analysis of this class of models that provides for a generic and efficient Bayesian implementation. Under this approach, the data are augmented with all-zero detection histories. The resulting augmented dataset is modeled as a zero-inflated version of the complete-data model where an estimable zero-inflation parameter takes the place of the unknown multinomial index. Interestingly, data augmentation can be justified as being equivalent to imposing a discrete uniform prior on the multinomial index. We provide three examples involving estimating the size of an animal population, estimating the number of diabetes cases in a population using the Rasch model, and the motivating example of estimating the number of species in an animal community with latent probabilities of species occurrence and detection.
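The zero-inflation trick can be illustrated on a simple model-M0 capture-recapture problem: the n observed capture histories are padded with all-zero histories up to a fixed M, and the unknown index N becomes M * psi for a zero-inflation (inclusion) parameter psi. The paper works in a Bayesian MCMC framework; the sketch below instead profiles the augmented likelihood on a grid, purely to keep the example self-contained, and all numbers are simulated:

```python
import math, random
from collections import Counter

random.seed(0)
N, T, p_true, M = 100, 5, 0.3, 300          # simulation truth and augmentation cap

# Capture counts over T occasions; never-detected animals are unobserved
counts = [sum(random.random() < p_true for _ in range(T)) for _ in range(N)]
observed = Counter(k for k in counts if k > 0)
n_obs = sum(observed.values())
n_zero = M - n_obs                           # augmented all-zero histories

def loglik(psi, p):
    # zero-inflated binomial: all-zero histories are either "not real"
    # (prob 1 - psi) or real but never detected (prob psi * (1-p)^T)
    ll = n_zero * math.log((1 - psi) + psi * (1 - p) ** T)
    for k, c in observed.items():
        ll += c * math.log(psi * math.comb(T, k) * p**k * (1 - p) ** (T - k))
    return ll

grid = [i / 100 for i in range(1, 100)]
psi_hat, p_hat = max(((a, b) for a in grid for b in grid),
                     key=lambda ab: loglik(*ab))
N_hat = M * psi_hat                          # estimated population size
```

As the abstract notes, this augmentation is equivalent to a discrete uniform prior on N over {0, ..., M}, so M only needs to be safely larger than any plausible N.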
NASA Astrophysics Data System (ADS)
Doutres, Olivier; Atalla, Noureddine; Dong, Kevin
2013-02-01
This paper proposes simple semi-phenomenological models to predict the sound absorption efficiency of highly porous polyurethane foams from microstructure characterization. In a previous paper [J. Appl. Phys. 110, 064901 (2011)], the authors presented a 3-parameter semi-phenomenological model linking the microstructure properties of fully and partially reticulated isotropic polyurethane foams (i.e., strut length l, strut thickness t, and reticulation rate Rw) to the macroscopic non-acoustic parameters involved in the classical Johnson-Champoux-Allard model (i.e., porosity ϕ, airflow resistivity σ, tortuosity α∞, and viscous (Λ) and thermal (Λ') characteristic lengths). The model was based on existing scaling laws, validated for fully reticulated polyurethane foams, and improved using both geometrical and empirical approaches to account for the presence of membranes closing the pores. This 3-parameter model is applied to six polyurethane foams in this paper and is found to be highly sensitive to the microstructure characterization, particularly to the strut dimensions. A simplified micro-/macro model is then presented. It is based on the cell size Cs and reticulation rate Rw only, assuming that the geometric ratio between strut length l and strut thickness t is known. This simplified model, called the 2-parameter model, considerably simplifies the microstructure characterization procedure. A comparison of the two proposed semi-phenomenological models is presented using six polyurethane foams that are either fully or partially reticulated, isotropic or anisotropic. It is shown that the 2-parameter model is less sensitive to measurement uncertainties than the original model and allows a better estimation of the sound absorption behavior of polyurethane foams.
NASA Technical Reports Server (NTRS)
Garrett, L. B.; Smith, G. L.; Perkins, J. N.
1972-01-01
An implicit finite-difference scheme is developed for the fully coupled solution of the viscous, radiating stagnation-streamline equations, including strong blowing. Solutions are presented for both air injection and injection of carbon-phenolic ablation products into air at conditions near the peak radiative heating point in an earth entry trajectory from interplanetary return missions. A detailed radiative-transport code that accounts for the important radiative exchange processes for gaseous mixtures in local thermodynamic and chemical equilibrium is utilized in the study. With a minimum of assumptions for the initially unknown parameters and profile distributions, convergent solutions to the full stagnation-line equations are rapidly obtained by a method of successive approximations. Damping of selected profiles is required to aid convergence of the solutions for massive blowing. It is shown that certain finite-difference approximations to the governing differential equations stabilize and improve the solutions. Detailed comparisons are made with the numerical results of previous investigations. Results of the present study indicate lower radiative heat fluxes at the wall for carbon-phenolic ablation than previously predicted.
Gas Chromatography-Mass Spectrometry Facility: Recent Improvements and Applications.
1980-03-01
such as 1-octanol despite continuous heavy use. The durability of the high-temperature silanized columns over a long period has not yet been fully... Peak identifications (fragment): ...Octanone; 4: 2-Ethyl-2-hexenal; 5: 5-Nonanone; 6: 2-Nonanone; 7: Linalool; 8: Isopulegol; 9: unknown terpene alcohol; 10: Terpinenol-4; 11: 2,6-Dimethylaniline; 12: 2...
Chen, Jinxiang; Tuo, Wanyong; Zhang, Xiaoming; He, Chenglin; Xie, Juan; Liu, Chang
2016-12-01
To develop lightweight biomimetic composite structures, the compressive failure and mechanical properties of fully integrated honeycomb plates were investigated experimentally and through the finite element method. The results indicated that fracturing of the fully integrated honeycomb plates primarily occurred in the core layer, including the sealing edge structure. The morphological failures can be classified into two types, dislocations and compactions, and were caused primarily by stress concentrations at the interfaces between the core layer and the upper and lower laminations, and secondarily by the disordered short-fiber distribution in the material. Although the fully integrated honeycomb plates manufactured in this experiment were imperfect, their mass-specific compressive strength was superior to that of similar biomimetic samples. Therefore, the proposed bio-inspired structure possesses good overall mechanical properties, and a range of parameters, such as the diameter of the transition arc, was defined for enhancing the design of fully integrated honeycomb plates and improving their compressive mechanical properties. Copyright © 2016 Elsevier B.V. All rights reserved.
Zenker, Sven
2010-08-01
Combining mechanistic mathematical models of physiology with quantitative observations using probabilistic inference may offer advantages over established approaches to computerized decision support in acute care medicine. Particle filters (PF) can perform such inference successively as data become available. The potential of PF for real-time state estimation (SE) with a model of cardiovascular physiology is explored using parallel computers, and the ability to achieve joint state and parameter estimation (JSPE) given minimal prior knowledge is tested. A parallelized sequential importance sampling/resampling algorithm was implemented, and its scalability for the pure SE problem for a non-linear five-dimensional ODE model of the cardiovascular system was evaluated on a Cray XT3 using up to 1,024 cores. JSPE was implemented using a state augmentation approach with artificial stochastic evolution of the parameters. Its performance when simultaneously estimating the 5 states and 18 unknown parameters given observations only of arterial pressure, central venous pressure, heart rate, and, optionally, cardiac output was evaluated in a simulated bleeding/resuscitation scenario. SE was successful and scaled up to 1,024 cores with appropriate algorithm parametrization, with real-time equivalent performance for up to 10 million particles. JSPE in the described underdetermined scenario achieved excellent reproduction of observables and qualitative tracking of end-diastolic ventricular volumes and sympathetic nervous activity. However, only a subset of the posterior distributions of parameters concentrated around the true values for parts of the estimated trajectories. The performance of parallelized PF makes their application to complex mathematical models of physiology for the purpose of clinical data interpretation, prediction, and therapy optimization appear promising.
JSPE in the described extremely underdetermined scenario nevertheless extracted information of potential clinical relevance from the data in this simulation setting. However, fully satisfactory resolution of this problem when minimal prior knowledge about parameter values is available will require further methodological improvements, which are discussed.
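The augmented-state particle filtering described above can be sketched in a few lines. The scalar model, noise levels, and particle counts below are illustrative assumptions, not the paper's cardiovascular ODE system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint state-and-parameter estimation (JSPE) with a bootstrap
# particle filter and artificial parameter evolution. The AR(1) model
# and all constants are invented for illustration.
theta_true, q, r = 0.8, 0.2, 0.1      # AR(1) coefficient, noise stds
T, N = 200, 2000                      # time steps, particles

x, ys = 1.0, []
for _ in range(T):                    # simulate observations
    x = theta_true * x + rng.normal(0, q)
    ys.append(x + rng.normal(0, r))

# Augmented particles: columns = (state, parameter)
p = np.column_stack([rng.normal(1, 0.5, N), rng.uniform(0, 2, N)])
for y in ys:
    p[:, 1] += rng.normal(0, 0.005, N)            # artificial parameter jitter
    p[:, 0] = p[:, 1] * p[:, 0] + rng.normal(0, q, N)
    w = np.exp(-0.5 * ((y - p[:, 0]) / r) ** 2) + 1e-300
    p = p[rng.choice(N, N, p=w / w.sum())]        # multinomial resampling

theta_hat = p[:, 1].mean()            # posterior mean of the parameter
```

The parameter jitter plays the role of the "artificial stochastic evolution" mentioned in the abstract: without it, resampling would collapse the parameter ensemble onto a few initial draws.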
Bayesian inversions of a dynamic vegetation model at four European grassland sites
NASA Astrophysics Data System (ADS)
Minet, J.; Laloy, E.; Tychon, B.; Francois, L.
2015-05-01
Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB (CARbon Assimilation In the Biosphere) dynamic vegetation model (DVM) with 10 unknown parameters, using the DREAM(ZS) (DiffeRential Evolution Adaptive Metropolis) Markov chain Monte Carlo (MCMC) sampler. We focus on comparing model inversions, considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred together with the model parameters. Agreements between measured and simulated data during calibration are comparable with previous studies, with root mean square errors (RMSEs) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19 g C m-2 day-1, 1.04 to 1.56 g C m-2 day-1, and 0.50 to 1.28 mm day-1, respectively. For the calibration period, using a homoscedastic eddy covariance residual error model resulted in a better agreement between measured and modelled data than using a heteroscedastic residual error model. However, a model validation experiment showed that CARAIB models calibrated considering heteroscedastic residual errors perform better. Posterior parameter distributions derived from using a heteroscedastic model of the residuals thus appear to be more robust. This is the case even though the classical linear heteroscedastic error model assumed herein did not fully remove heteroscedasticity of the GPP residuals. Although the calibrated model is generally capable of fitting the data within measurement errors, systematic biases in the model simulations are observed. These are likely due to model inadequacies such as shortcomings in the photosynthesis modelling. Besides the residual error treatment, differences between model parameter posterior distributions among the four grassland sites are also investigated.
It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics.
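The homoscedastic and heteroscedastic residual error models compared in this study differ only in how the per-observation standard deviation is set. A minimal sketch, with invented GPP values and an assumed linear heteroscedastic model sigma_i = a + b*|y_i|:

```python
import numpy as np

def gaussian_loglik(residuals, sigma):
    """Independent Gaussian log-likelihood with per-point std sigma."""
    sigma = np.broadcast_to(sigma, residuals.shape)
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - 0.5 * (residuals / sigma) ** 2)

# Hypothetical measured vs. simulated daily GPP (g C m-2 day-1).
y_obs = np.array([1.0, 3.0, 8.0, 12.0])
y_sim = np.array([1.2, 2.5, 8.5, 10.5])
res = y_obs - y_sim

ll_homo = gaussian_loglik(res, 1.0)                          # constant variance
ll_hetero = gaussian_loglik(res, 0.2 + 0.1 * np.abs(y_obs))  # grows with magnitude
```

In an MCMC inversion such as the one above, these log-likelihoods drive the acceptance ratio; the heteroscedastic form tolerates larger misfits at large fluxes, which is exactly the trade-off the abstract discusses.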
Inverse and forward modeling under uncertainty using MRE-based Bayesian approach
NASA Astrophysics Data System (ADS)
Hou, Z.; Rubin, Y.
2004-12-01
A stochastic inverse approach for subsurface characterization is proposed and applied to the shallow vadose zone at a winery field site in northern California and to a gas reservoir at the Ormen Lange field site in the North Sea. The approach is formulated in a Bayesian-stochastic framework, whereby the unknown parameters are identified in terms of their statistical moments or their probabilities. Instead of the traditional single-valued estimation/prediction provided by deterministic methods, the approach gives a probability distribution for an unknown parameter. This allows calculating the mean, the mode, and the confidence interval, which is useful for a rational treatment of uncertainty and its consequences. The approach also allows incorporating data of various types and different error levels, including measurements of state variables as well as information such as bounds on or statistical moments of the unknown parameters, which may represent prior information. To obtain the minimally subjective prior probabilities required for the Bayesian approach, the principle of Minimum Relative Entropy (MRE) is employed. The approach is tested at field sites for flow parameter identification and soil moisture estimation in the vadose zone and for gas saturation estimation at great depth below the ocean floor. Results indicate the potential of coupling various types of field data within an MRE-based Bayesian formalism for improving the estimation of the parameters of interest.
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretical models to observations, prior information on model parameters, and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines analytical least-squares solutions with a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets, and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data, and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
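The core idea, analytical least squares for the linearly-entering parameters nested inside Monte Carlo sampling of the nonlinear ones, can be sketched on a toy exponential model. The data, priors, and noise level below are invented, and the paper's hierarchical weighting of multiple data sets is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Mixed linear-nonlinear inversion sketch: y = a + b*exp(-c*t).
# (a, b) enter linearly and are solved analytically; the nonlinear
# parameter c is sampled with a random-walk Metropolis chain.
t = np.linspace(0, 5, 40)
a_true, b_true, c_true, noise = 1.0, 2.0, 0.7, 0.05
y = a_true + b_true * np.exp(-c_true * t) + rng.normal(0, noise, t.size)

def profile_misfit(c):
    """Best-fit linear parameters for a given c, and the data misfit."""
    G = np.column_stack([np.ones_like(t), np.exp(-c * t)])
    m, *_ = np.linalg.lstsq(G, y, rcond=None)
    resid = y - G @ m
    return m, 0.5 * np.sum((resid / noise) ** 2)

c, (_, phi) = 1.5, profile_misfit(1.5)
samples = []
for _ in range(3000):
    c_prop = c + rng.normal(0, 0.1)
    if c_prop > 0:                                   # flat prior on c > 0
        _, phi_prop = profile_misfit(c_prop)
        if np.log(rng.uniform()) < phi - phi_prop:   # Metropolis acceptance
            c, phi = c_prop, phi_prop
    samples.append(c)

c_hat = np.mean(samples[1000:])                      # discard burn-in
(a_hat, b_hat), _ = profile_misfit(c_hat)
```

Solving the linear sub-problem in closed form at every proposal keeps the Monte Carlo chain low-dimensional, which is the efficiency argument the abstract makes.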
Handling the unknown soil hydraulic parameters in data assimilation for unsaturated flow problems
NASA Astrophysics Data System (ADS)
Lange, Natascha; Erdal, Daniel; Neuweiler, Insa
2017-04-01
Model predictions of flow in the unsaturated zone require the soil hydraulic parameters. However, these parameters cannot be determined easily in applications, in particular if observations are indirect and cover only a small range of possible states. Correlation of parameters, or their correlation in the range of states that are observed, is a problem, as different parameter combinations may reproduce approximately the same measured water content. In field campaigns this problem can be mitigated by adding more measurement devices. Often, observation networks are designed to feed models for long-term prediction purposes (i.e., for weather forecasting). A popular way of making predictions with such observations is data assimilation, using methods like the ensemble Kalman filter (Evensen, 1994). These methods can be used for parameter estimation if the unknown parameters are included in the state vector and updated along with the model states. Given the difficulties related to estimation of the soil hydraulic parameters in general, it is questionable, though, whether these methods can really be used for parameter estimation under natural conditions. Therefore, we investigate the ability of the ensemble Kalman filter to estimate the soil hydraulic parameters. We use synthetic identical-twin experiments to guarantee full knowledge of the model and the true parameters. We use the van Genuchten model to describe the soil water retention and relative permeability functions. This model is unfortunately prone to the above-mentioned pseudo-correlations of parameters. Therefore, we also test the simpler Russo-Gardner model, which is less affected by that problem, in our experiments. The total number of unknown parameters is varied by considering different layers of soil. In addition, we study the influence of the parameter updates on the water content predictions. We test different iterative filter approaches and compare different observation strategies for parameter identification.
Considering heterogeneous soils, we discuss the representativeness of different observation types to be used for the assimilation. Reference: G. Evensen, "Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics," Journal of Geophysical Research: Oceans, 99(C5):10143-10162, 1994.
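The augmented-state ensemble Kalman filter used for parameter estimation above can be sketched on a scalar linear model. The model is an assumption for illustration; the study itself uses unsaturated-flow models with van Genuchten or Russo-Gardner parametrizations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ensemble Kalman filter with the unknown parameter appended to the
# state vector. A linear AR(1) toy model stands in for the flow model.
theta_true, q, obs_std = 0.9, 0.2, 0.1   # parameter, process/obs noise stds
Ne, T = 200, 100                         # ensemble size, assimilation cycles
x_true = 1.0

# Ensemble columns: (state x, parameter theta).
ens = np.column_stack([rng.normal(1, 0.5, Ne), rng.uniform(0.2, 1.2, Ne)])
for _ in range(T):
    x_true = theta_true * x_true + rng.normal(0, q)
    y = x_true + rng.normal(0, obs_std)
    # Forecast: each member propagates with its own parameter,
    # plus a small jitter to avoid parameter-ensemble collapse.
    ens[:, 1] += rng.normal(0, 0.005, Ne)
    ens[:, 0] = ens[:, 1] * ens[:, 0] + rng.normal(0, q, Ne)
    # Analysis: Kalman update from ensemble covariances (H = [1, 0]).
    C = np.cov(ens.T)
    K = C[:, 0] / (C[0, 0] + obs_std**2)          # gain for state and parameter
    innov = y + rng.normal(0, obs_std, Ne) - ens[:, 0]  # perturbed observations
    ens += np.outer(innov, K)

theta_hat = ens[:, 1].mean()
```

The parameter is never observed directly: it is corrected only through its sampled covariance with the observed state, which is exactly the mechanism whose reliability the abstract questions for strongly correlated soil hydraulic parameters.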
Classification of microscopy images of Langerhans islets
NASA Astrophysics Data System (ADS)
Švihlík, Jan; Kybic, Jan; Habart, David; Berková, Zuzana; Girman, Peter; Kříž, Jan; Zacharovová, Klára
2014-03-01
Evaluation of images of Langerhans islets is a crucial procedure for planning an islet transplantation, which is a promising diabetes treatment. This paper deals with segmentation of microscopy images of Langerhans islets and evaluation of islet parameters such as area, diameter, or volume (IE). For all the available images, the ground truth and the islet parameters were independently evaluated by four medical experts. We use a pixelwise linear classifier (perceptron algorithm) and an SVM (support vector machine) for image segmentation. The volume is estimated based on circle or ellipse fitting to individual islets. The segmentations were compared with the corresponding ground truth. Quantitative islet parameters were also evaluated and compared with the parameters given by the medical experts. We conclude that the accuracy of the presented fully automatic algorithm is comparable with that of the medical experts.
Mass properties measurement system dynamics
NASA Technical Reports Server (NTRS)
Doty, Keith L.
1993-01-01
The MPMS mechanism possesses two revolute degrees of freedom and allows the user to measure the mass, center of gravity, and inertia tensor of an unknown mass. The dynamics of the Mass Properties Measurement System (MPMS) are derived from the Lagrangian approach to illustrate the dependency of the motion on the unknown parameters.
Doroodgar, Barzin; Liu, Yugang; Nejat, Goldie
2014-12-01
Semi-autonomous control schemes can address the limitations of both teleoperation and fully autonomous robotic control of rescue robots in disaster environments by allowing a human operator to cooperate and share such tasks with a rescue robot as navigation, exploration, and victim identification. In this paper, we present a unique hierarchical reinforcement learning-based semi-autonomous control architecture for rescue robots operating in cluttered and unknown urban search and rescue (USAR) environments. The aim of the controller is to enable a rescue robot to continuously learn from its own experiences in an environment in order to improve its overall performance in exploration of unknown disaster scenes. A direction-based exploration technique is integrated in the controller to expand the search area of the robot via the classification of regions and the rubble piles within these regions. Both simulations and physical experiments in USAR-like environments verify the robustness of the proposed HRL-based semi-autonomous controller to unknown cluttered scenes with different sizes and varying types of configurations.
Protein-like fully reversible tetramerisation and super-association of an aminocellulose
NASA Astrophysics Data System (ADS)
Nikolajski, Melanie; Adams, Gary G.; Gillis, Richard B.; Besong, David Tabot; Rowe, Arthur J.; Heinze, Thomas; Harding, Stephen E.
2014-01-01
Unusual protein-like, partially reversible associative behaviour has recently been observed in solutions of the water soluble carbohydrates known as 6-deoxy-6-(ω-aminoalkyl)aminocelluloses, which produce controllable self-assembling films for enzyme immobilisation and other biotechnological applications. Now, for the first time, we have found a fully reversible self-association (tetramerisation) within this family of polysaccharides. Remarkably, these carbohydrate tetramers are then seen to associate further in a regular way into supra-molecular complexes. Fully reversible oligomerisation has hitherto been completely unknown for carbohydrates and instead resembles in some respects the assembly of polypeptides and proteins like haemoglobin and its sickle cell mutation. Our traditional perceptions as to what might be considered "protein-like" and what might be considered "carbohydrate-like" behaviour may need to be rendered more flexible, at least as far as interaction phenomena are concerned.
Adaptive control of stochastic linear systems with unknown parameters. M.S. Thesis
NASA Technical Reports Server (NTRS)
Ku, R. T.
1972-01-01
The problem of optimal control of linear discrete-time stochastic dynamical system with unknown and, possibly, stochastically varying parameters is considered on the basis of noisy measurements. It is desired to minimize the expected value of a quadratic cost functional. Since the simultaneous estimation of the state and plant parameters is a nonlinear filtering problem, the extended Kalman filter algorithm is used. Several qualitative and asymptotic properties of the open loop feedback optimal control and the enforced separation scheme are discussed. Simulation results via Monte Carlo method show that, in terms of the performance measure, for stable systems the open loop feedback optimal control system is slightly better than the enforced separation scheme, while for unstable systems the latter scheme is far better.
Bayesian inversions of a dynamic vegetation model in four European grassland sites
NASA Astrophysics Data System (ADS)
Minet, J.; Laloy, E.; Tychon, B.; François, L.
2015-01-01
Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB dynamic vegetation model (DVM) with ten unknown parameters, using the DREAM(ZS) Markov chain Monte Carlo (MCMC) sampler. We compare model inversions considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred with the model parameters. Agreements between measured and simulated data during calibration are comparable with previous studies, with root-mean-square errors (RMSE) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19 g C m-2 day-1, 1.04 to 1.56 g C m-2 day-1, and 0.50 to 1.28 mm day-1, respectively. In validation, mismatches between measured and simulated data are larger, but still with Nash-Sutcliffe efficiency scores above 0.5 for three out of the four sites. Although measurement errors associated with eddy covariance data are known to be heteroscedastic, we showed that assuming a classical linear heteroscedastic model of the residual errors in the inversion does not fully remove heteroscedasticity. Since the employed heteroscedastic error model allows for larger deviations between simulated and measured data as the magnitude of the measured data increases, this error model expectedly leads to poorer data fitting compared to inversions considering a constant variance of the residual errors. Furthermore, sampling the residual error variances along with the model parameters results in overall similar model parameter posterior distributions as those obtained by fixing these variances beforehand, while slightly improving model performance. Although the calibrated model is generally capable of fitting the data within measurement errors, systematic biases in the model simulations are observed. These are likely due to model inadequacies such as shortcomings in the photosynthesis modelling.
Besides model behaviour, differences between model parameter posterior distributions among the four grassland sites are also investigated. It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics. Lastly, the possibility of finding a common set of parameters among the four experimental sites is discussed.
Wang, Xinghu; Hong, Yiguang; Yi, Peng; Ji, Haibo; Kang, Yu
2017-05-24
In this paper, a distributed optimization problem is studied for continuous-time multiagent systems with unknown-frequency disturbances. A distributed gradient-based control is proposed for the agents to achieve the optimal consensus with estimating unknown frequencies and rejecting the bounded disturbance in the semi-global sense. Based on convex optimization analysis and adaptive internal model approach, the exact optimization solution can be obtained for the multiagent system disturbed by exogenous disturbances with uncertain parameters.
Stochastic Inversion of 2D Magnetotelluric Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong
2010-07-01
The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp boundary parametrization using a Bayesian framework. Within the algorithm, we consider the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward-simulate frequency-domain MT responses of 2D conductivity structure. The unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, Supercomputer, Multi-platform, Workstation. Software requirements: C and Fortran. Operating systems: Linux/Unix or Windows.
Three-dimensional cinematography with control object of unknown shape.
Dapena, J; Harman, E A; Miller, J A
1982-01-01
A technique for reconstruction of three-dimensional (3D) motion which involves a simple filming procedure but allows the deduction of coordinates in large object volumes was developed. Internal camera parameters are calculated from measurements of the film images of two calibrated crosses, while external camera parameters are calculated from the film images of points in a control object of unknown shape but with at least one known length. The control object, which includes the volume in which the activity is to take place, is formed by a series of poles placed at unknown locations, each carrying two targets. From the internal and external camera parameters, and from the locations of the images of a point in the films of the two cameras, the 3D coordinates of the point can be calculated. Root mean square errors of the three coordinates of points in a large object volume (5 m x 5 m x 1.5 m) were 15 mm, 13 mm, 13 mm and 6 mm, and relative errors in lengths averaged 0.5%, 0.7% and 0.5%, respectively.
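The final reconstruction step, computing a point's 3D coordinates from its two film images once camera parameters are known, corresponds to linear two-view triangulation. A sketch with invented projection matrices (the paper's calibration of internal and external parameters is not reproduced here):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares 3-D point from two pinhole projections (DLT)."""
    # Each image coordinate gives one homogeneous linear constraint.
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector of A (homogeneous point)
    return X[:3] / X[3]

# Two hypothetical cameras looking down +z, 1 m baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

X_true = np.array([0.3, -0.2, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noiseless image points the SVD solution recovers the point exactly; with film measurement errors it becomes the least-squares estimate, whose residuals behave like the RMS errors reported above.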
Fully automated segmentation of callus by micro-CT compared to biomechanics.
Bissinger, Oliver; Götz, Carolin; Wolff, Klaus-Dietrich; Hapfelmeier, Alexander; Prodinger, Peter Michael; Tischer, Thomas
2017-07-11
A high percentage of closed femur fractures have slight comminution. Using micro-CT (μCT), segmentation of multiple fragments is much more difficult than segmentation of unfractured or osteotomied bone. Manual or semi-automated segmentation has been performed to date. However, such segmentation is extremely laborious, time-consuming and error-prone. Our aim was therefore to apply a fully automated segmentation algorithm to determine μCT parameters and examine their association with biomechanics. The femora of 64 rats, randomised to medication that either inhibits or is neutral to fracture healing or to controls, were subjected to closed fracture after a Kirschner wire was inserted. After 21 days, μCT and biomechanical parameters were determined by a fully automated method and correlated (Pearson's correlation). The fully automated segmentation algorithm automatically detected bone and simultaneously separated cortical bone from callus without requiring ROI selection for each single bony structure. We found an association between structural callus parameters obtained by μCT and the biomechanical properties. However, the results were only explicable when the callus location was additionally considered. A large number of slightly comminuted fractures in combination with therapies that influence the callus qualitatively and/or quantitatively considerably affects the association between μCT and biomechanics. In the future, contrast-enhanced μCT imaging of the callus cartilage might provide more information to improve the non-destructive and non-invasive prediction of callus mechanical properties. As studies evaluating such important drugs increase, fully automated segmentation appears to be clinically important.
Exact closed-form solutions of a fully nonlinear asymptotic two-fluid model
NASA Astrophysics Data System (ADS)
Cheviakov, Alexei F.
2018-05-01
A fully nonlinear model of Choi and Camassa (1999) describing one-dimensional incompressible dynamics of two non-mixing fluids in a horizontal channel, under a shallow water approximation, is considered. An equivalence transformation is presented, leading to a special dimensionless form of the system, involving a single dimensionless constant physical parameter, as opposed to five parameters present in the original model. A first-order dimensionless ordinary differential equation describing traveling wave solutions is analyzed. Several multi-parameter families of physically meaningful exact closed-form solutions of the two-fluid model are derived, corresponding to periodic, solitary, and kink-type bidirectional traveling waves; specific examples are given, and properties of the exact solutions are analyzed.
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Conte, Joel P.
2015-03-01
This paper describes a novel framework that combines advanced mechanics-based nonlinear (hysteretic) finite element (FE) models and stochastic filtering techniques to estimate unknown time-invariant parameters of nonlinear inelastic material models used in the FE model. Using input-output data recorded during earthquake events, the proposed framework updates the nonlinear FE model of the structure. The updated FE model can be directly used for damage identification and further used for damage prognosis. To update the unknown time-invariant parameters of the FE model, two alternative stochastic filtering methods are used: the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). A three-dimensional, 5-story, 2-by-1 bay reinforced concrete (RC) frame is used to verify the proposed framework. The RC frame is modeled using fiber-section displacement-based beam-column elements with distributed plasticity and is subjected to the ground motion recorded at the Sylmar station during the 1994 Northridge earthquake. The results indicate that the proposed framework accurately estimates the unknown material parameters of the nonlinear FE model. The UKF outperforms the EKF when the relative root-mean-square errors of the recorded responses are compared. In addition, the results suggest that the convergence of the estimates of the modeling parameters is smoother and faster when the UKF is utilized.
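The difference between the two filters compared above lies in how a nonlinear response is propagated: the EKF linearizes, while the UKF propagates deterministically chosen sigma points. A one-dimensional unscented transform illustrates the mean correction that first-order linearization misses (the quadratic toy function is an assumption, not the paper's FE model):

```python
import numpy as np

def unscented_transform(mu, var, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a 1-D Gaussian (mu, var) through f via sigma points."""
    n = 1
    lam = alpha**2 * (n + kappa) - n
    s = np.sqrt((n + lam) * var)
    sigma_pts = np.array([mu, mu + s, mu - s])
    wm = np.array([lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)])
    wc = wm.copy()
    wc[0] += 1 - alpha**2 + beta
    y = f(sigma_pts)
    mean = wm @ y
    var_y = wc @ (y - mean) ** 2
    return mean, var_y

mu, var = 2.0, 0.25
ut_mean, _ = unscented_transform(mu, var, lambda x: x**2)
ekf_mean = mu**2   # first-order linearization drops the variance term
# For x ~ N(2, 0.25), the true mean of x^2 is mu^2 + var = 4.25;
# the unscented transform is exact to second order and captures it.
```

This second-order accuracy without computing Jacobians is one reason the UKF converges more smoothly than the EKF for the hysteretic models in the study.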
Zaikin, Alexey; Míguez, Joaquín
2017-01-01
We compare three state-of-the-art Bayesian inference methods for the estimation of the unknown parameters in a stochastic model of a genetic network. In particular, we introduce a stochastic version of the paradigmatic synthetic multicellular clock model proposed by Ullner et al. (2007). By introducing dynamical noise in the model and assuming that the partial observations of the system are contaminated by additive noise, we enable a principled mechanism to represent experimental uncertainties in the synthesis of the multicellular system and pave the way for the design of probabilistic methods for the estimation of any unknowns in the model. Within this setup, we tackle the Bayesian estimation of a subset of the model parameters. Specifically, we compare three Monte Carlo based numerical methods for the approximation of the posterior probability density function of the unknown parameters given a set of partial and noisy observations of the system. The schemes we assess are the particle Metropolis-Hastings (PMH) algorithm, the nonlinear population Monte Carlo (NPMC) method and the approximate Bayesian computation sequential Monte Carlo (ABC-SMC) scheme. We present an extensive numerical simulation study, which shows that while the three techniques can effectively solve the problem, there are significant differences both in estimation accuracy and computational efficiency. PMID:28797087
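Of the three schemes, ABC is the most simulation-driven: parameters are accepted when simulated summaries match observed ones. A minimal rejection-ABC sketch conveys the idea (the paper uses the SMC variant on a gene-network model; the Gaussian toy model here is an assumption):

```python
import numpy as np

rng = np.random.default_rng(3)

# Rejection ABC: draw theta from the prior, simulate data under theta,
# and accept theta when the simulated summary statistic is close to
# the observed one. Model, prior, and tolerance are illustrative.
theta_true = 1.5
data = rng.normal(theta_true, 1.0, 50)
s_obs = data.mean()                        # summary statistic

accepted = []
for _ in range(20000):
    theta = rng.uniform(-5, 5)             # prior draw
    sim = rng.normal(theta, 1.0, 50)       # forward simulation
    if abs(sim.mean() - s_obs) < 0.1:      # tolerance on the summary
        accepted.append(theta)

theta_hat = np.mean(accepted)              # approximate posterior mean
```

The SMC variant assessed in the paper replaces this single loose tolerance with a decreasing sequence of tolerances and importance-weighted populations, which dramatically reduces the number of wasted simulations.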
Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.
Rideout, Brendan P; Dosso, Stan E; Hannay, David E
2013-09-01
This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
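At the heart of such arrival-time localization is a nonlinear least-squares problem in the source position and emission time. A Gauss-Newton sketch with an invented 2-D receiver geometry (the paper's treatment of receiver and environmental unknowns, priors, and uncertainty scaling is omitted):

```python
import numpy as np

# Localize a source from arrival times t_i = t0 + |r_i - s| / c at
# receivers with known positions; solve for (s, t0) by Gauss-Newton.
# The geometry and sound speed below are assumptions for illustration.
c = 1500.0                                        # sound speed (m/s)
rec = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
s_true, t0_true = np.array([30.0, 60.0]), 0.2
t_obs = t0_true + np.linalg.norm(rec - s_true, axis=1) / c

m = np.array([50.0, 50.0, 0.0])                   # initial guess (x, y, t0)
for _ in range(20):
    d = np.linalg.norm(rec - m[:2], axis=1)       # ranges to current guess
    r = t_obs - (m[2] + d / c)                    # arrival-time residuals
    # Jacobian of predicted times w.r.t. (x, y, t0).
    J = np.column_stack([(m[:2] - rec) / (c * d[:, None]), np.ones(len(rec))])
    m += np.linalg.lstsq(J, r, rcond=None)[0]     # Gauss-Newton step

s_hat, t0_hat = m[:2], m[2]
```

The linearized Bayesian algorithm in the paper wraps exactly this kind of Jacobian in a prior-constrained inversion, which is what yields the posterior uncertainties for the walrus locations.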
Flórez, Ana Belén; Ammor, Mohammed Salim; Delgado, Susana; Mayo, Baltasar
2006-12-01
An erm(B) gene carried on the Lactobacillus johnsonii G41 chromosome and the upstream and downstream regions were fully sequenced. Apparently, a 1,495-bp segment of pRE25 from Enterococcus faecalis carrying the erm(B) gene became inserted, by an unknown mechanism, into the L. johnsonii chromosome.
NASA Technical Reports Server (NTRS)
Duong, N.; Winn, C. B.; Johnson, G. R.
1975-01-01
Two approaches to an identification problem in hydrology are presented, based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time-invariant or time-dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and confirm the results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.
A new chaotic communication scheme based on adaptive synchronization.
Xiang-Jun, Wu
2006-12-01
A new chaotic communication scheme using an adaptive synchronization technique for two unified chaotic systems is proposed. Unlike existing secure communication methods, the transmitted signal is modulated into the parameter of the chaotic systems. The adaptive synchronization technique is used to synchronize two identical chaotic systems embedded in the transmitter and the receiver. It is assumed that the parameter of the receiver system is unknown. Based on Lyapunov stability theory, an adaptive control law is derived to make the states of the two identical unified chaotic systems with unknown system parameters asymptotically synchronized; thus the parameter of the receiver system is identified. The recovery of the original information signal in the receiver is then successfully achieved on the basis of the estimated parameter. It is noted that the time required for recovering the information signal and the accuracy of the recovered signal depend very sensitively on the frequency of the information signal. Numerical results have verified the effectiveness of the proposed scheme.
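The adaptive law that simultaneously synchronizes the receiver and identifies the unknown parameter can be illustrated on a scalar driven system. The paper uses the unified chaotic system; the plant, gains, and drive signal below are assumptions:

```python
import numpy as np

# Transmitter: dx/dt = -x + theta*u(t), with theta unknown to the
# receiver. Receiver: mirror model with estimate theta_hat plus an
# output-error correction, and a Lyapunov-style gradient adaptation law
# d(theta_hat)/dt = gamma * u * e, where e = x - x_hat.
theta, dt = 1.3, 1e-3            # true parameter, Euler step
k, gamma = 5.0, 20.0             # observer gain, adaptation gain
x, x_hat, theta_hat = 0.0, 0.0, 0.0

for i in range(100000):          # 100 s of simulated time
    u = np.sin(2 * np.pi * dt * i)      # persistently exciting drive
    e = x - x_hat
    x += dt * (-x + theta * u)
    x_hat += dt * (-x_hat + theta_hat * u + k * e)
    theta_hat += dt * gamma * u * e     # adaptation law

theta_hat_final = theta_hat      # converges toward the true parameter
```

With a persistently exciting drive the parameter error is driven to zero, which is the mechanism that lets the receiver demodulate a message hidden in the transmitter's parameter.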
A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garrett, C. Kristopher; Hauck, Cory D.
2018-04-05
In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.
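The reduction to a smaller interface system is a Schur-complement elimination of the subdomain interior unknowns. A dense-algebra sketch with an invented block system (in the paper the interface operator is applied matrix-free inside GMRES, with the interior solves done by transport sweeps):

```python
import numpy as np

rng = np.random.default_rng(4)

# Block system  [A_ii A_ib; A_bi A_bb] [x_i; x_b] = [f_i; f_b],
# where "i" = subdomain interior unknowns, "b" = interface unknowns.
# A random diagonally-weighted system stands in for the Vlasov operator.
n_i, n_b = 40, 5
A_ii = 10 * np.eye(n_i) + rng.normal(0, 0.5, (n_i, n_i))
A_ib = rng.normal(0, 0.5, (n_i, n_b))
A_bi = rng.normal(0, 0.5, (n_b, n_i))
A_bb = 10 * np.eye(n_b) + rng.normal(0, 0.5, (n_b, n_b))
f_i, f_b = rng.normal(0, 1, n_i), rng.normal(0, 1, n_b)

# Eliminate interiors: solve the small Schur system S x_b = g first.
S = A_bb - A_bi @ np.linalg.solve(A_ii, A_ib)
g = f_b - A_bi @ np.linalg.solve(A_ii, f_i)
x_b = np.linalg.solve(S, g)              # small interface system (GMRES in practice)
x_i = np.linalg.solve(A_ii, f_i - A_ib @ x_b)  # back-substitute interiors

# Cross-check against the monolithic solve.
A = np.block([[A_ii, A_ib], [A_bi, A_bb]])
x_full = np.linalg.solve(A, np.concatenate([f_i, f_b]))
```

The payoff is that the Krylov iteration runs on a system of interface size only, while each application of S reuses the fast per-subdomain solves, here dense factorizations, in the paper directional sweeps.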
Developing Probabilistic Safety Performance Margins for Unknown and Underappreciated Risks
NASA Technical Reports Server (NTRS)
Benjamin, Allan; Dezfuli, Homayoon; Everett, Chris
2015-01-01
Probabilistic safety requirements currently formulated or proposed for space systems, nuclear reactor systems, nuclear weapon systems, and other types of systems that have a low-probability potential for high-consequence accidents depend on showing that the probability of such accidents is below a specified safety threshold or goal. Verification of compliance depends heavily upon synthetic modeling techniques such as PRA. To determine whether or not a system meets its probabilistic requirements, it is necessary to consider whether there are significant risks that are not fully considered in the PRA either because they are not known at the time or because their importance is not fully understood. The ultimate objective is to establish a reasonable margin to account for the difference between known risks and actual risks in attempting to validate compliance with a probabilistic safety threshold or goal. In this paper, we examine data accumulated over the past 60 years from the space program, from nuclear reactor experience, from aircraft systems, and from human reliability experience to formulate guidelines for estimating probabilistic margins to account for risks that are initially unknown or underappreciated. The formulation includes a review of the safety literature to identify the principal causes of such risks.
NASA Technical Reports Server (NTRS)
Murphy, K. A.
1988-01-01
A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.
NASA Technical Reports Server (NTRS)
Murphy, K. A.
1990-01-01
A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.
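For intuition, a much cruder scheme than the spline approximation above already shows the fit-to-data idea: simulate the delay equation for candidate delays and minimize a least-squares criterion. The grid search below is a hypothetical stand-in for the paper's spline-based optimization, restricted to a constant delay:

```python
import numpy as np

# Estimate a constant delay tau in x'(t) = -x(t - tau) by least squares
# over a grid of candidates, using a fixed-step Euler integrator with
# history x(t) = 1 for t <= 0. (Illustrative setup, not the paper's scheme.)
def simulate(tau, dt=0.01, T=10.0):
    n = int(T / dt)
    lag = int(round(tau / dt))
    x = np.ones(n + 1)
    for k in range(n):
        xd = x[k - lag] if k - lag >= 0 else 1.0   # history function
        x[k + 1] = x[k] - dt * xd
    return x

true_tau = 1.0
data = simulate(true_tau)                  # noise-free "measurements"
grid = np.arange(0.5, 1.51, 0.05)
sse = [np.sum((simulate(t) - data) ** 2) for t in grid]
tau_hat = grid[int(np.argmin(sse))]
```

Fitting the data through approximating systems recovers the delay; the convergence theorems in the abstract justify the analogous limit for the spline approximations.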
10 CFR Appendix II to Part 504 - Fuel Price Computation
Code of Federal Regulations, 2010 CFR
2010-01-01
... 504—Fuel Price Computation (a) Introduction. This appendix provides the equations and parameters... inflation indices must follow standard statistical procedures and must be fully documented within the... the weighted average fuel price must follow standard statistical procedures and be fully documented...
Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas
2018-03-06
High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because often the untargeted acquisition is followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated as crucial or noncrucial. Second, crucial parameters are optimized. The aim in this study was to reduce the number of hits without missing analytes. The obtained parameter settings from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction of workload (12.3 to 1.1% and 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (68.2 to 86.4% and 68.8 to 88.1%, respectively). This proof-of-concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
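The screening step, classifying parameters as crucial or noncrucial, can be illustrated with a two-level full factorial design. The response model below is synthetic, not the actual software's hit count; it simply shows how main effects separate an influential parameter from inert ones:

```python
from itertools import product

# Two-level full factorial screening of three hypothetical software
# parameters A, B, C. The main effect of each parameter is the mean
# response at its high level minus the mean at its low level.
def response(a, b, c):
    # synthetic model: A dominates, B is weak, C is inert
    return 100 + 30 * a + 2 * b + 0 * c

levels = [-1, 1]
runs = list(product(levels, repeat=3))       # 2^3 = 8 runs
ys = [response(*r) for r in runs]

effects = {}
for j, name in enumerate("ABC"):
    hi = sum(y for r, y in zip(runs, ys) if r[j] == 1) / 4
    lo = sum(y for r, y in zip(runs, ys) if r[j] == -1) / 4
    effects[name] = hi - lo
# effects == {'A': 60.0, 'B': 4.0, 'C': 0.0}
```

Only parameters with large effects (here A) would be carried into the second, optimization stage.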
Asymptotic Normality of the Maximum Pseudolikelihood Estimator for Fully Visible Boltzmann Machines.
Nguyen, Hien D; Wood, Ian A
2016-04-01
Boltzmann machines (BMs) are a class of binary neural networks for which there have been numerous proposed methods of estimation. Recently, it has been shown that in the fully visible case of the BM, the method of maximum pseudolikelihood estimation (MPLE) results in parameter estimates, which are consistent in the probabilistic sense. In this brief, we investigate the properties of MPLE for the fully visible BMs further, and prove that MPLE also yields an asymptotically normal parameter estimator. These results can be used to construct confidence intervals and to test statistical hypotheses. These constructions provide a closed-form alternative to the current methods that require Monte Carlo simulation or resampling. We support our theoretical results by showing that the estimator behaves as expected in simulation studies.
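A toy version of MPLE for a fully visible BM can be written in a few lines; the 3-unit model, sample size, and optimizer below are illustrative choices, not those of the brief:

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

# Fully visible Boltzmann machine with +/-1 units, zero biases, and
# P(x) proportional to exp(0.5 * x' W x). Data are drawn exactly by
# enumerating all 2^3 states; MPLE maximizes the product of the
# conditionals p(x_i | x_-i) = sigmoid(2 * x_i * (W x)_i).
rng = np.random.default_rng(0)
n = 3
W_true = np.zeros((n, n))
W_true[0, 1] = W_true[1, 0] = 0.5
W_true[0, 2] = W_true[2, 0] = -0.3
W_true[1, 2] = W_true[2, 1] = 0.2

states = np.array(list(product([-1, 1], repeat=n)), dtype=float)
logp = 0.5 * np.einsum("si,ij,sj->s", states, W_true, states)
p = np.exp(logp)
p /= p.sum()
X = states[rng.choice(len(states), size=4000, p=p)]

def neg_log_pl(w):
    W = np.zeros((n, n))
    W[np.triu_indices(n, 1)] = w
    W = W + W.T                       # symmetric, zero diagonal
    H = X @ W                         # local field at each unit
    # -log sigmoid(2 x H) = log(1 + exp(-2 x H))
    return np.mean(np.logaddexp(0.0, -2.0 * X * H))

res = minimize(neg_log_pl, np.zeros(3), method="Nelder-Mead")
W_hat = np.zeros((n, n))
W_hat[np.triu_indices(n, 1)] = res.x
W_hat = W_hat + W_hat.T
```

Consistency of MPLE (cited in the abstract) means `W_hat` approaches `W_true` as the sample grows; the asymptotic normality proved in the brief is what licenses Wald-type confidence intervals around these estimates.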
Computational methods for estimation of parameters in hyperbolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.; Murphy, K. A.
1983-01-01
Approximation techniques for estimating spatially varying coefficients and unknown boundary parameters in second order hyperbolic systems are discussed. Methods for state approximation (cubic splines, tau-Legendre) and approximation of function space parameters (interpolatory splines) are outlined and numerical findings for use of the resulting schemes in model "one-dimensional seismic inversion" problems are summarized.
NASA Astrophysics Data System (ADS)
Roozegar, Mehdi; Mahjoob, Mohammad J.; Ayati, Moosa
2017-05-01
This paper deals with adaptive estimation of the unknown parameters and states of a pendulum-driven spherical robot (PDSR), which is a nonlinear in parameters (NLP) chaotic system with parametric uncertainties. Firstly, the mathematical model of the robot is deduced by applying the Newton-Euler methodology for a system of rigid bodies. Then, based on the speed gradient (SG) algorithm, the states and unknown parameters of the robot are estimated online for different step length gains and initial conditions. The estimated parameters are updated adaptively according to the error between estimated and true state values. Since the errors of the estimated states and parameters as well as the convergence rates depend significantly on the value of step length gain, this gain should be chosen optimally. Hence, a heuristic fuzzy logic controller is employed to adjust the gain adaptively. Simulation results indicate that the proposed approach is highly encouraging for identification of this NLP chaotic system even if the initial conditions change and the uncertainties increase; therefore, it is reliable to be implemented on a real robot.
Characterizing unknown systematics in large scale structure surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, Nishant; Ho, Shirley; Myers, Adam D.
Photometric large scale structure (LSS) surveys probe the largest volumes in the Universe, but are inevitably limited by systematic uncertainties. Imperfect photometric calibration leads to biases in our measurements of the density fields of LSS tracers such as galaxies and quasars, and as a result in cosmological parameter estimation. Earlier studies have proposed using cross-correlations between different redshift slices or cross-correlations between different surveys to reduce the effects of such systematics. In this paper we develop a method to characterize unknown systematics. We demonstrate that while we do not have sufficient information to correct for unknown systematics in the data, we can obtain an estimate of their magnitude. We define a parameter to estimate contamination from unknown systematics using cross-correlations between different redshift slices and propose discarding bins in the angular power spectrum that lie outside a certain contamination tolerance level. We show that this method improves estimates of the bias using simulated data and further apply it to photometric luminous red galaxies in the Sloan Digital Sky Survey as a case study.
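The core observation, that cosmological signal in disjoint redshift slices is uncorrelated while a shared systematic is not, can be illustrated with a toy calculation (the actual method works with angular power spectra, not the simple pixel covariance used here):

```python
import numpy as np

# Two "redshift slices" with independent cosmological signals s1, s2
# and a shared, unknown systematic template f. Because s1 and s2 are
# uncorrelated, the cross-covariance of the observed maps estimates
# the variance of the contaminant alone.
rng = np.random.default_rng(1)
npix = 100_000
s1 = rng.normal(0.0, 1.0, npix)     # slice-1 signal
s2 = rng.normal(0.0, 1.0, npix)     # slice-2 signal, independent of s1
f = rng.normal(0.0, 0.5, npix)      # shared systematic, Var(f) = 0.25

m1, m2 = s1 + f, s2 + f
contamination = np.mean((m1 - m1.mean()) * (m2 - m2.mean()))
# contamination estimates Var(f) = 0.25
```

Bins of the power spectrum where such a cross-statistic exceeds a tolerance level would then be discarded, as the abstract proposes.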
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kato, Go
We consider the situation where s replicas of a qubit with an unknown state and its orthogonal k replicas are given as an input, and we try to make c clones of the qubit with the unknown state. As a function of s, k, and c, we obtain the optimal fidelity between the qubit with an unknown state and the clone by explicitly giving a completely positive trace-preserving (CPTP) map that represents a cloning machine. We discuss dependency of the fidelity on the values of the parameters s, k, and c.
A dynamical approach in exploring the unknown mass in the Solar system using pulsar timing arrays
NASA Astrophysics Data System (ADS)
Guo, Y. J.; Lee, K. J.; Caballero, R. N.
2018-04-01
The error in the Solar system ephemeris will lead to dipolar correlations in the residuals of a pulsar timing array for widely separated pulsars. In this paper, we utilize such correlated signals, and construct a Bayesian data-analysis framework to detect the unknown mass in the Solar system and to measure the orbital parameters. The algorithm is designed to calculate the waveform of the induced pulsar-timing residuals due to the unmodelled objects following Keplerian orbits in the Solar system. The algorithm incorporates a Bayesian-analysis suite used to simultaneously analyse the pulsar-timing data of multiple pulsars to search for coherent waveforms, evaluate the detection significance of unknown objects, and measure their parameters. When the object is not detectable, our algorithm can be used to place upper limits on the mass. The algorithm is verified using simulated data sets, and cross-checked with analytical calculations. We also investigate the capability of future pulsar-timing-array experiments in detecting unknown objects. We expect that future pulsar-timing data can limit unknown massive objects in the Solar system to be lighter than 10^-11 to 10^-12 M⊙, or measure the mass of the Jovian system to a fractional precision of 10^-8 to 10^-9.
Unknown loads affect force production capacity in early phases of bench press throws.
Hernández Davó, J L; Sabido Solana, R; Sarabia Marínm, J M; Sánchez Martos, Á; Moya Ramón, M
2015-10-01
Explosive strength training aims to improve force generation in early phases of movement due to its importance in sport performance. The present study examined the influence of a lack of knowledge about the load lifted on explosive parameters during bench press throws. Thirteen healthy young men (22.8±2.0 years) participated in the study. Participants performed bench press throws with three different loads (30, 50 and 70% of 1 repetition maximum) in two different conditions (known and unknown loads). In the unknown condition, loads were changed within sets in each repetition and participants did not know the load, whereas in the known condition the load did not change within sets and participants had knowledge of the load lifted. Results of repeated-measures ANOVA revealed that the unknown condition involves higher power in the first 30, 50, 100 and 150 ms with all three loads, higher values of rate of force development in those first instants, and differences in time to reach maximal rate of force development with 50 and 70% of 1 repetition maximum. This study showed that unknown conditions elicit higher values of explosive parameters in early phases of bench press throws; therefore this kind of methodology could be considered in explosive strength training.
Ding, A Adam; Wu, Hulin
2014-10-01
We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.
Ding, A. Adam; Wu, Hulin
2015-01-01
We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method. PMID:26401093
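The baseline two-stage idea the authors improve on can be sketched as follows: smooth the noisy data with a local polynomial filter, estimate the derivative, and regress it on the ODE right-hand side. This sketch omits the paper's equation constraints and uses a Savitzky-Golay filter as the local polynomial stage:

```python
import numpy as np
from scipy.signal import savgol_filter

# Two-stage pseudo-least-squares for the logistic ODE
# dx/dt = theta * x * (1 - x), with theta unknown.
rng = np.random.default_rng(2)
theta = 1.5
t = np.linspace(0.0, 6.0, 301)
# closed-form logistic solution with x(0) = 0.1
x_true = 0.1 * np.exp(theta * t) / (1.0 + 0.1 * (np.exp(theta * t) - 1.0))
y = x_true + rng.normal(0.0, 0.01, t.size)   # noisy observations

dt = t[1] - t[0]
# Stage 1: local polynomial smoothing of the state and its derivative
x_s = savgol_filter(y, window_length=31, polyorder=3)
dx_s = savgol_filter(y, window_length=31, polyorder=3, deriv=1, delta=dt)

# Stage 2: linear least squares for theta from dx/dt = theta * g(x)
g = x_s * (1.0 - x_s)
theta_hat = float(np.sum(g * dx_s) / np.sum(g * g))
```

The paper's contribution is to constrain the smoothing stage with the differential equation itself, which reduces the bias this unconstrained two-stage estimator incurs.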
Groebner Basis Solutions to Satellite Trajectory Control by Pole Placement
NASA Astrophysics Data System (ADS)
Kukelova, Z.; Krsek, P.; Smutny, V.; Pajdla, T.
2013-09-01
Satellites play an important role, e.g., in telecommunication, navigation and weather monitoring. Controlling their trajectories is an important problem. In [1], an approach to the pole placement for the synthesis of a linear controller has been presented. It leads to solving five polynomial equations in nine unknown elements of the state space matrices of a compensator. This is an underconstrained system, and therefore four of the unknown elements need to be considered as free parameters and set to some prior values to obtain a system of five equations in five unknowns. In [1], this system was solved for one chosen set of free parameters with the help of Dixon resultants. In this work, we study and present Groebner basis solutions to this problem of computing a dynamic compensator for the satellite for different combinations of input free parameters. We show that the Groebner basis method for solving systems of polynomial equations leads to very simple solutions for all combinations of free parameters. These solutions require only performing Gauss-Jordan elimination of a small matrix and computing the roots of a single-variable polynomial. The maximum degree of this polynomial is not greater than six in general, and for most combinations of the input free parameters its degree is even lower. [1] B. Palancz. Application of Dixon resultant to satellite trajectory control by pole placement. Journal of Symbolic Computation, Volume 50, March 2013, Pages 79-99, Elsevier.
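The solution pattern, reduction to a single univariate polynomial followed by root finding and back-substitution, can be shown on a toy system (not the compensator equations of [1]):

```python
import numpy as np

# Toy elimination example: solve x^2 + y^2 = 5, x*y = 2.
# Substituting y = 2/x yields the univariate polynomial
#   x^4 - 5 x^2 + 4 = 0,
# which is exactly the kind of reduction a lex-order Groebner basis
# produces automatically for larger systems.
coeffs = [1, 0, -5, 0, 4]
xs = np.roots(coeffs)                    # roots of the univariate polynomial
sols = [(x.real, 2.0 / x.real) for x in xs]   # back-substitute for y
```

Each `(x, y)` pair satisfies both original equations; in the paper's setting the univariate polynomial has degree at most six, so this final step is equally cheap.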
Method and apparatus for sensor fusion
NASA Technical Reports Server (NTRS)
Krishen, Kumar (Inventor); Shaw, Scott (Inventor); Defigueiredo, Rui J. P. (Inventor)
1991-01-01
Method and apparatus for fusion of data from optical and radar sensors by error minimization procedure is presented. The method was applied to the problem of shape reconstruction of an unknown surface at a distance. The method involves deriving an incomplete surface model from an optical sensor. The unknown characteristics of the surface are represented by some parameter. The correct value of the parameter is computed by iteratively generating theoretical predictions of the radar cross sections (RCS) of the surface, comparing the predicted and the observed values for the RCS, and improving the surface model from results of the comparison. Theoretical RCS may be computed from the surface model in several ways. One RCS prediction technique is the method of moments. The method of moments can be applied to an unknown surface only if some shape information is available from an independent source. The optical image provides the independent information.
NASA Astrophysics Data System (ADS)
Frazer, Gordon J.; Anderson, Stuart J.
1997-10-01
The radar returns from some classes of time-varying point targets can be represented by the discrete-time signal-plus-noise model: x_t = s_t + (v_t + η_t) = Σ_{i=0}^{P-1} A_i exp(j2π(f_i/f_s)t) + v_t + η_t, t ∈ {0, ..., N-1}, where f_i = k f_I + f_o and the received signal x_t corresponds to the radar return from the target of interest from one azimuth-range cell. The signal has an unknown number of components, P, and unknown complex amplitudes A_i and frequencies f_i. The frequency parameters f_o and f_I are unknown, although constrained such that f_o < f_I/2, and the parameter k ∈ {-u, ..., -2, -1, 0, 1, 2, ..., v} is constrained such that the component frequencies f_i are bounded by (-f_s/2, f_s/2). The noise term v_t is typically colored, and represents clutter, interference and various noise sources. It is unknown, except that Σ_t v_t² < ∞; in general, v_t is not well modelled as an auto-regressive process of known order. The additional noise term η_t represents time-invariant point targets in the same azimuth-range cell. An important characteristic of the target is the unknown parameter f_I, representing the frequency interval between harmonic lines. It is desired to determine an estimate of f_I from N samples of x_t. We propose an algorithm to estimate f_I based on Thomson's harmonic line F-test, which is part of the multi-window spectrum estimation method, and demonstrate the proposed estimator applied to target echo time series collected using an experimental HF skywave radar.
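A naive comb-search estimator of the harmonic spacing f_I conveys the flavor of the problem; it assumes f_o = 0 and white noise, whereas the proposed F-test-based estimator handles the colored-noise term v_t:

```python
import numpy as np

# Estimate the spacing f_I of a harmonic line family by summing
# periodogram magnitudes over candidate combs and picking the best.
rng = np.random.default_rng(3)
fs, N, f_I = 100.0, 1000, 5.0
t = np.arange(N) / fs
x = sum(np.cos(2 * np.pi * k * f_I * t) for k in (1, 2, 3))  # harmonics at 5, 10, 15 Hz
x = x + rng.normal(0.0, 0.5, N)                              # white noise stand-in for v_t

X = np.abs(np.fft.rfft(x))
df = fs / N                              # 0.1 Hz frequency resolution
cands = np.arange(1.0, 10.01, 0.1)
score = [sum(X[int(round(k * c / df))] for k in (1, 2, 3)) for c in cands]
f_I_hat = float(cands[int(np.argmax(score))])
```

Only the true spacing places all three comb teeth on spectral lines, so its comb score dominates; submultiples of f_I pick up only a subset of the lines.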
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ensslin, Torsten A.; Frommert, Mona
2011-05-15
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power-spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power-spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
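The common structure of these filters, a Wiener filter whose assumed signal spectrum is derived from the data, can be sketched in one dimension; the spectrum, noise level, and the crude periodogram subtraction below are illustrative assumptions, not any of the five recipes in detail:

```python
import numpy as np

# 1-D toy: guess the signal power spectrum from the data periodogram
# (subtracting the known noise level), then apply the corresponding
# Wiener filter s_hat_k = P_k / (P_k + sigma_n^2) * d_k.
rng = np.random.default_rng(4)
N, sigma_n = 512, 1.0
k = np.fft.rfftfreq(N, d=1.0) * N
P_true = 50.0 / (1.0 + k) ** 2            # steep red signal spectrum

# draw a Gaussian signal with this spectrum, then add white noise
amp = np.sqrt(P_true / 2.0)
S = amp * (rng.normal(size=k.size) + 1j * rng.normal(size=k.size))
s = np.fft.irfft(S, n=N) * np.sqrt(N)
d = s + rng.normal(0.0, sigma_n, N)

D = np.fft.rfft(d) / np.sqrt(N)
P_hat = np.maximum(np.abs(D) ** 2 - sigma_n**2, 1e-6)   # spectrum guess from data
W = P_hat / (P_hat + sigma_n**2)                        # Wiener weights
s_hat = np.fft.irfft(W * D, n=N) * np.sqrt(N)

mse_data = np.mean((d - s) ** 2)
mse_rec = np.mean((s_hat - s) ** 2)
```

Even this crude data-derived spectrum suppresses noise-dominated modes and lowers the reconstruction error below the raw data error; the perception-threshold behavior discussed in the abstract shows up here as modes whose guessed power falls to the floor and are filtered away entirely.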
NASA Astrophysics Data System (ADS)
Hagemann, M.; Gleason, C. J.
2017-12-01
The upcoming (2021) Surface Water and Ocean Topography (SWOT) NASA satellite mission aims, in part, to estimate discharge on major rivers worldwide using reach-scale measurements of stream width, slope, and height. Current formalizations of channel and floodplain hydraulics are insufficient to fully constrain this problem mathematically, resulting in an infinitely large solution set for any set of satellite observations. Recent work has reformulated this problem in a Bayesian statistical setting, in which the likelihood distributions derive directly from hydraulic flow-law equations. When coupled with prior distributions on unknown flow-law parameters, this formulation probabilistically constrains the parameter space, and results in a computationally tractable description of discharge. Using a curated dataset of over 200,000 in-situ acoustic Doppler current profiler (ADCP) discharge measurements from over 10,000 USGS gaging stations throughout the United States, we developed empirical prior distributions for flow-law parameters that are not observable by SWOT, but that are required in order to estimate discharge. This analysis quantified prior uncertainties on quantities including cross-sectional area, at-a-station hydraulic geometry width exponent, and discharge variability, that are dependent on SWOT-observable variables including reach-scale statistics of width and height. When compared against discharge estimation approaches that do not use this prior information, the Bayesian approach using ADCP-derived priors demonstrated consistently improved performance across a range of performance metrics. This Bayesian approach formally transfers information from in-situ gaging stations to remote-sensed estimation of discharge, in which the desired quantities are not directly observable. Further investigation using large in-situ datasets is therefore a promising way forward in improving satellite-based estimates of river discharge.
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Tingley, M.
2016-12-01
Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, featuring missing values, and reflecting land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. 
Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
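The widening effect of treating parameters as unknown can be seen in the simplest conjugate setting: fixing the noise variance at its point estimate (empirical Bayes) gives a normal credible interval, while integrating the variance out (full Bayes) gives a wider Student-t interval. This is only an analogy for the hierarchical tide-gauge model, not its actual algorithm:

```python
import numpy as np
from scipy import stats

# 95% credible interval half-widths for the mean of a normal sample,
# with flat priors: empirical Bayes fixes sigma at the sample estimate,
# full Bayes marginalizes it, yielding a Student-t posterior.
rng = np.random.default_rng(5)
y = rng.normal(10.0, 2.0, size=15)
n, ybar, s = y.size, y.mean(), y.std(ddof=1)

half_emp = stats.norm.ppf(0.975) * s / np.sqrt(n)        # sigma treated as known
half_full = stats.t.ppf(0.975, df=n - 1) * s / np.sqrt(n)  # sigma marginalized
```

The t quantile exceeds the normal quantile for any finite sample, so the empirical Bayes interval is always too narrow, which is precisely the under-coverage the rank-histogram evaluation above detects.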
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicholson, J. C.
Performance metrics for evaluating commercial fixatives are often not readily available for important parameters that must be considered per the facility safety basis and the facility Basis for Interim Operations (BIO). One such parameter is the behavior of such materials in varied, “non-ideal” conditions, where ideal is defined as 75 °F and 40% RH. Coupled with the inherent flammable nature of fixative materials, which can act to propagate flame along surfaces that are otherwise fireproof (concrete, sheet metal), much is left unknown when considering the safety basis implications of introducing these materials into nuclear facilities. Through SRNL’s efforts, three (3) fixatives, one (1) decontamination gel, and six (6) intumescent coatings were examined for their responses to environmental conditions to determine whether these materials were impervious to non-nominal temperatures and humidities that may be found in nuclear facilities. Characteristics that were examined included set-to-touch time, dust-free time, and adhesion testing of the fully cured compounds. Of these ten materials, three were two-part epoxy materials while the other seven consisted of only one constituent. The results show that the epoxies tested are unable to cure in sub-freezing temperatures, with the low temperatures inhibiting crosslinking to a very significant degree. These efforts show significant inhibition of performance under non-nominal environmental conditions, something that must be addressed both in the decision process for selecting a fixative material to apply and per the safety basis to ensure that flammability and material at risk are calculated accurately.
Black Hole Mass and Spin from the 2:3 Twin-peak QPOs in Microquasars
NASA Astrophysics Data System (ADS)
Mondal, Soumen
2010-01-01
In the Galactic microquasars with double peak kHz quasi-periodic oscillations (QPOs) detected in X-ray fluxes, the ratio of the twin-peak frequencies is exactly, or almost exactly, 2:3. This rather strongly supports the view that the oscillations originate a few gravitational radii from the black hole's center due to two modes of accretion disk oscillation. Numerical investigations suggest that post-shock matter, before settling down into a subsonic branch, executes oscillations in the neighborhood of the shock transition. This shock may excite the QPO mechanism. The radial and vertical epicyclic modes of the oscillating matter match these twin-peak QPOs exactly. For fully general relativistic transonic flows, we find that shocks may form very close to the horizon around rapidly spinning Kerr black holes and appear as extrema in inviscid flows. The extreme shock location provides an upper limit on the QPO frequencies and hence fixes a "lower cutoff" on the spin. We conclude that the 2:3 ratio occurs exactly for spin parameters a >= 0.87 and almost exactly over a wide range of spin parameters, for example, XTE 1550-564 and GRO 1655-40: a > 0.87; GRS 1915+105: a > 0.83; XTE J1650-500: a > 0.78; and H 1743-322: a > 0.68. We also estimate the unknown masses of XTE J1650-500 (9.1 ~ 14.1 M sun) and H 1743-322 (6.6 ~ 11.3 M sun).
NASA Astrophysics Data System (ADS)
Jamlos, Mohd Aminudin; Ismail, Abdul Hafiizh; Jamlos, Mohd Faizal; Narbudowicz, Adam
2017-01-01
A hybrid graphene-copper ultra-wideband array sensor applied to a microwave imaging technique is successfully used to detect and visualize a tumor inside the human brain. The sensor is made of graphene-coated film for the patch and copper for both the transmission line and parasitic element. The hybrid sensor's performance is better than that of a fully copper sensor: the hybrid sensor recorded a wider bandwidth of 2.0-10.1 GHz compared with the fully copper sensor's 2.5-10.1 GHz, and a higher gain of 3.8-8.5 dB, while the fully copper sensor showed a lower gain ranging from 2.6 to 6.7 dB. Both sensors recorded excellent total efficiencies averaging 97 and 94%, respectively. The sensor both transmits the probing signal and receives the backscattered signal from a stratified human head model to detect the tumor. The difference between the scattering parameters recorded from the head model with and without the tumor is used as the main data to be further processed by a confocal microwave imaging algorithm to generate an image. MATLAB software is utilized to analyze the S-parameter signals obtained from measurement. Tumor presence is indicated by lower S-parameter values compared to the higher values recorded in the tumor's absence.
Iqbal, Muhammad; Rehan, Muhammad; Hong, Keum-Shik
2018-01-01
This paper addresses the dynamical modeling, behavior analysis, and synchronization of a network of four different FitzHugh–Nagumo (FHN) neurons with unknown parameters linked in a ring configuration under direction-dependent coupling. The main purpose is to investigate a robust adaptive control law for the synchronization of uncertain and perturbed neurons communicating in a medium of bidirectional coupling. The neurons are assumed to be different and interconnected in a ring structure, and the strength of the gap junctions is taken to be different for each link in the network, owing to the properties of the inter-neuronal coupling medium. A robust adaptive control mechanism based on Lyapunov stability analysis is employed, and theoretical criteria are derived to realize the synchronization of the network of four FHN neurons in a ring with unknown parameters under direction-dependent coupling and disturbances. To the best of our knowledge, this is the first treatment of synchronization for dissimilar neurons under external electrical stimuli that are coupled in a ring communication topology, have all parameters unknown, and are subject to a directional coupling medium and perturbations. Simulation results are provided to demonstrate the efficacy of the proposed strategy. PMID:29535622
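A drastically simplified version of the setting: two identical FHN neurons with fixed diffusive (gap-junction-like) coupling synchronize their membrane potentials. A constant gain stands in for the paper's adaptive law, and all parameter values are illustrative:

```python
# Two identical FitzHugh-Nagumo neurons, diffusively coupled in v; a fixed
# gain k stands in for the paper's adaptive control law (illustrative only).
a, b, eps, I, k = 0.7, 0.8, 0.08, 0.5, 1.0
dt, steps = 0.01, 60000

v1, w1 = 0.1, 0.0     # neuron 1 initial state
v2, w2 = -1.0, 0.5    # neuron 2 starts far away
for _ in range(steps):
    dv1 = v1 - v1**3 / 3 - w1 + I + k * (v2 - v1)
    dv2 = v2 - v2**3 / 3 - w2 + I + k * (v1 - v2)
    dw1 = eps * (v1 + a - b * w1)
    dw2 = eps * (v2 + a - b * w2)
    v1, w1 = v1 + dt * dv1, w1 + dt * dw1
    v2, w2 = v2 + dt * dv2, w2 + dt * dw2

print(abs(v1 - v2) < 1e-6)   # → True: membrane potentials have synchronized
```

The paper's contribution is precisely what this sketch omits: guaranteeing such convergence when the four neurons are dissimilar, the parameters are unknown, and the coupling is directional and perturbed.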
NASA Astrophysics Data System (ADS)
Ma, Lin
2017-11-01
This paper develops a method for precisely determining the tension of an inclined cable with unknown boundary conditions. First, the nonlinear motion equation of an inclined cable is derived, and a numerical model of the motion of the cable is proposed using the finite difference method. The proposed numerical model includes the sag-extensibility, flexural stiffness, inclination angle and rotational stiffness at two ends of the cable. Second, the influence of the dynamic parameters of the cable on its frequencies is discussed in detail, and a method for precisely determining the tension of an inclined cable is proposed based on the derivatives of the eigenvalues of the matrices. Finally, a multiparameter identification method is developed that can simultaneously identify multiple parameters, including the rotational stiffness at two ends. This scheme is applicable to inclined cables with varying sag, varying flexural stiffness and unknown boundary conditions. Numerical examples indicate that the method provides good precision. Because the parameters of cables other than tension (e.g., the flexural stiffness and rotational stiffness at the ends) are not accurately known in practical engineering, the multiparameter identification method could further improve the accuracy of cable tension measurements.
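The basic frequency-to-tension inversion behind such methods can be shown in the taut-string limit, f_n = (n/2L)·sqrt(T/m), which ignores the sag, flexural stiffness, and end-restraint terms the paper actually treats. A minimal sketch with illustrative cable values:

```python
import math

# Taut-string idealization: f_n = (n / 2L) * sqrt(T / m). This omits the
# sag-extensibility, flexural stiffness and rotational end stiffness handled
# in the paper, but shows the core tension-from-frequency inversion.
L, m, T_true = 100.0, 50.0, 5.0e4   # span (m), mass/length (kg/m), tension (N)

def mode_freq(n, T):
    return n / (2 * L) * math.sqrt(T / m)

measured = [mode_freq(n, T_true) for n in (1, 2, 3)]   # "identified" frequencies

# Invert each mode independently, T = m * (2 L f_n / n)^2, and average
estimates = [m * (2 * L * f / n) ** 2 for n, f in zip((1, 2, 3), measured)]
T_est = sum(estimates) / len(estimates)
print(round(T_est))   # → 50000
```

With real cables the mode estimates disagree because of bending stiffness and boundary restraint, which is what motivates the paper's multiparameter identification.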
Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger
NASA Astrophysics Data System (ADS)
Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.
2016-12-01
Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurements and an unknown parameter in a Nm model. We suppose that the yearly number of Nm induced mortality and the total population are known inputs, which can be obtained from data, and the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data of the total population and Nm induced mortality. Then, we use an auxiliary system called observer whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only uses the known inputs and the model output. This allows us to estimate unmeasured state variables such as the number of carriers that play an important role in the transmission of the infection and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. In order to validate the estimation results, numerical simulations are conducted using real data of Niger.
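The idea of recovering an unknown transmission rate from incidence output can be sketched on a toy discrete SIS-like model: since new cases enter linearly in the rate, a closed-form least-squares estimate follows. The model, values, and fitting scheme below are illustrative, not the paper's observer:

```python
# Toy discrete SIS-like model with an unknown transmission rate beta.
# All values are illustrative; the paper uses an observer on a full Nm model.
N = 1_000_000                  # total population
beta_true, gamma = 0.4, 0.2    # transmission and recovery rates per step
S, I = N - 1000, 1000

xs, cases = [], []
for _ in range(52):
    x = S * I / N              # regressor: cases_t = beta * x_t
    new = beta_true * x
    xs.append(x)
    cases.append(new)          # "reported" new cases each step
    S, I = S - new + gamma * I, I + new - gamma * I

# Linear-in-beta model => closed-form least squares
beta_est = sum(c * x for c, x in zip(cases, xs)) / sum(x * x for x in xs)
print(round(beta_est, 3))   # → 0.4
```

The paper's observer solves the harder version of this problem, where the states S and I (carriers, infected) are themselves unmeasured and must be reconstructed first.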
NASA Technical Reports Server (NTRS)
Martin, William G.; Cairns, Brian; Bal, Guillaume
2014-01-01
This paper derives an efficient procedure for using the three-dimensional (3D) vector radiative transfer equation (VRTE) to adjust atmosphere and surface properties and improve their fit with multi-angle/multi-pixel radiometric and polarimetric measurements of scattered sunlight. The proposed adjoint method uses the 3D VRTE to compute the measurement misfit function and the adjoint 3D VRTE to compute its gradient with respect to all unknown parameters. In the remote sensing problems of interest, the scalar-valued misfit function quantifies agreement with data as a function of atmosphere and surface properties, and its gradient guides the search through this parameter space. Remote sensing of the atmosphere and surface in a three-dimensional region may require thousands of unknown parameters and millions of data points. Many approaches would require calls to the 3D VRTE solver in proportion to the number of unknown parameters or measurements. To avoid this issue of scale, we focus on computing the gradient of the misfit function as an alternative to the Jacobian of the measurement operator. The resulting adjoint method provides a way to adjust 3D atmosphere and surface properties with only two calls to the 3D VRTE solver for each spectral channel, regardless of the number of retrieval parameters, measurement view angles or pixels. This gives a procedure for adjusting atmosphere and surface parameters that will scale to the large problems of 3D remote sensing. For certain types of multi-angle/multi-pixel polarimetric measurements, this encourages the development of a new class of three-dimensional retrieval algorithms with more flexible parametrizations of spatial heterogeneity, less reliance on data screening procedures, and improved coverage in terms of the resolved physical processes in the Earth's atmosphere.
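The "two solves per gradient" property is generic to adjoint methods and can be shown on a small linear stand-in for the VRTE: one forward solve plus one adjoint (transposed) solve yields the misfit gradient for every parameter at once, verified here against finite differences. This is a generic discrete-adjoint sketch, not the paper's radiative transfer solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n, npar = 6, 4
A0 = np.eye(n) * 5.0
B = rng.normal(size=(npar, n, n)) * 0.1   # parameter sensitivity matrices
s = rng.normal(size=n)                    # source term
C = rng.normal(size=(3, n))               # measurement operator
d_obs = rng.normal(size=3)                # observed data

def misfit_and_grad(m):
    """One forward solve + one adjoint solve gives the full gradient,
    however many parameters there are."""
    A = A0 + np.tensordot(m, B, axes=1)   # A(m) = A0 + sum_i m_i * B_i
    u = np.linalg.solve(A, s)             # forward solve
    r = C @ u - d_obs
    J = 0.5 * r @ r                       # scalar misfit
    lam = np.linalg.solve(A.T, C.T @ r)   # adjoint solve
    grad = np.array([-lam @ (B[i] @ u) for i in range(npar)])
    return J, grad

m = np.array([0.3, -0.2, 0.1, 0.05])
J, g = misfit_and_grad(m)

# Verify against one-sided finite differences (npar extra solves, for checking only)
h = 1e-6
fd = np.array([(misfit_and_grad(m + h * np.eye(npar)[i])[0] - J) / h
               for i in range(npar)])
print(np.allclose(g, fd, atol=1e-4))   # → True
```

A Jacobian-based retrieval would instead need one solver call per parameter (or per measurement), which is exactly the scaling the paper avoids.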
Parameter Estimation for a Turbulent Buoyant Jet Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Christopher, Jason D.; Wimer, Nicholas T.; Hayden, Torrey R. S.; Lapointe, Caelan; Grooms, Ian; Rieker, Gregory B.; Hamlington, Peter E.
2016-11-01
Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other "truth" data to be used for the prediction of unknown model parameters in numerical simulations of real-world engineering systems. In this presentation, we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a simulation with known boundary conditions and problem parameters. Using spatially-sparse temperature statistics from the 2D buoyant jet truth simulation, we show that the ABC method provides accurate predictions of the true jet inflow temperature. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for engineering fluid dynamics research.
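The core of ABC is rejection sampling: draw parameters from a prior, simulate, and keep draws whose simulated summary statistic falls within a tolerance of the observed one. A minimal sketch with an assumed Gaussian surrogate standing in for the buoyant-jet simulation (all values illustrative):

```python
import random

random.seed(1)

# "Truth" data: sparse observations whose generating parameter (here just a
# Gaussian mean, standing in for the jet inflow temperature) is unknown.
theta_true, sigma = 350.0, 5.0
data = [random.gauss(theta_true, sigma) for _ in range(10)]
obs_mean = sum(data) / len(data)

def simulate(theta):
    """Cheap surrogate for the forward model, returning a summary statistic."""
    sim = [random.gauss(theta, sigma) for _ in range(10)]
    return sum(sim) / len(sim)

# ABC rejection: keep prior draws whose simulated summary lands near the data
accepted = []
while len(accepted) < 500:
    theta = random.uniform(300.0, 400.0)          # prior draw
    if abs(simulate(theta) - obs_mean) < 2.0:     # tolerance
        accepted.append(theta)

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 1))   # close to the true value of 350
```

In the paper's setting, `simulate` is a full 2D buoyant-jet computation and the summary statistics are spatially sparse temperature statistics, but the accept/reject logic is the same.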
Rheological constraints on ridge formation on Icy Satellites
NASA Astrophysics Data System (ADS)
Rudolph, M. L.; Manga, M.
2010-12-01
The processes responsible for forming ridges on Europa remain poorly understood. We use a continuum damage mechanics approach to model ridge formation. The main objectives of this contribution are to constrain (1) the choice of rheological parameters and (2) the maximum ridge size and rate of formation. The key rheological parameters to constrain appear in the evolution equation for a damage variable D, dD/dt = B⟨σ⟩^r (1−D)^(−k) − αDp/μ, and in the equation relating damage accumulation to volumetric changes, Jρ₀ = δ(1−D). Similar damage evolution laws have been applied to terrestrial glaciers and to the analysis of rock mechanics experiments. However, it is reasonable to expect that, like viscosity, the rheological constants B, α, and δ depend strongly on temperature, composition, and ice grain size. In order to determine whether the damage model is appropriate for Europa's ridges, we must find values of the unknown damage parameters that reproduce ridge topography. We perform a suite of numerical experiments to identify the region of parameter space conducive to ridge production and show the sensitivity to changes in each unknown parameter.
Parameter Estimation for a Pulsating Turbulent Buoyant Jet Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Christopher, Jason; Wimer, Nicholas; Lapointe, Caelan; Hayden, Torrey; Grooms, Ian; Rieker, Greg; Hamlington, Peter
2017-11-01
Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other "truth" data to be used for the prediction of unknown parameters, such as flow properties and boundary conditions, in numerical simulations of real-world engineering systems. Here we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a direct numerical simulation (DNS) with known boundary conditions and problem parameters, while the ABC procedure utilizes lower fidelity large eddy simulations. Using spatially-sparse statistics from the 2D buoyant jet DNS, we show that the ABC method provides accurate predictions of true jet inflow parameters. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for predicting flow information, such as boundary conditions, that can be difficult to determine experimentally.
Modern control concepts in hydrology
NASA Technical Reports Server (NTRS)
Duong, N.; Johnson, G. R.; Winn, C. B.
1974-01-01
Two approaches to an identification problem in hydrology are presented based upon concepts from modern control and estimation theory. The first approach treats the identification of unknown parameters in a hydrologic system subject to noisy inputs as an adaptive linear stochastic control problem; the second approach alters the model equation to account for the random part in the inputs, and then uses a nonlinear estimation scheme to estimate the unknown parameters. Both approaches use state-space concepts. The identification schemes are sequential and adaptive and can handle either time invariant or time dependent parameters. They are used to identify parameters in the Prasad model of rainfall-runoff. The results obtained are encouraging and conform with results from two previous studies; the first using numerical integration of the model equation along with a trial-and-error procedure, and the second, by using a quasi-linearization technique. The proposed approaches offer a systematic way of analyzing the rainfall-runoff process when the input data are imbedded in noise.
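The flavor of such identification problems can be shown on the simplest rainfall-runoff stand-in, a linear reservoir: simulate with a known storage coefficient, add observation noise, and recover the coefficient by minimizing the output misfit. This is a toy batch scheme, not the Prasad model or the paper's sequential adaptive estimators:

```python
import random

random.seed(0)

# Linear reservoir: S_{t+1} = S_t + P_t - k*S_t, runoff Q_t = k*S_t.
# The storage coefficient k is the "unknown parameter" (values illustrative).
k_true, S0 = 0.3, 10.0
rain = [random.uniform(0, 5) for _ in range(100)]   # precipitation input

def runoff(k):
    S, out = S0, []
    for P in rain:
        out.append(k * S)
        S = S + P - k * S
    return out

observed = [q + random.gauss(0, 0.05) for q in runoff(k_true)]   # noisy runoff

# One-dimensional search: pick the k minimizing the sum of squared residuals
def sse(k):
    return sum((q - o) ** 2 for q, o in zip(runoff(k), observed))

candidates = [i / 1000 for i in range(1, 1000)]
k_est = min(candidates, key=sse)
print(round(k_est, 2))   # close to 0.3
```

The paper's point is to do this sequentially and adaptively, with noise entering the inputs themselves rather than only the outputs, and with possibly time-varying parameters.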
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
Earth-moon system: Dynamics and parameter estimation
NASA Technical Reports Server (NTRS)
Breedlove, W. J., Jr.
1975-01-01
A theoretical development of the equations of motion governing the earth-moon system is presented. The earth and moon were treated as finite rigid bodies and a mutual potential was utilized. The sun and remaining planets were treated as particles. Relativistic, non-rigid, and dissipative effects were not included. The translational and rotational motion of the earth and moon were derived in a fully coupled set of equations. Euler parameters were used to model the rotational motions. The mathematical model is intended for use with data analysis software to estimate physical parameters of the earth-moon system using primarily LURE type data. Two program listings are included. Program ANEAMO computes the translational/rotational motion of the earth and moon from analytical solutions. Program RIGEM numerically integrates the fully coupled motions as described above.
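Euler parameters are unit quaternions, and their kinematic equation q̇ = (1/2) q ⊗ (0, ω) is why they suit numerical integration of rotational motion: the unit-norm constraint is easy to enforce. A minimal sketch (illustrative constant spin rate, not the coupled Earth-moon model):

```python
import math

def quat_mult(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def integrate(q, omega, dt, steps):
    """Euler-parameter kinematics q_dot = (1/2) q ⊗ (0, omega), body-frame
    rates, explicit Euler steps with renormalization to keep |q| = 1."""
    for _ in range(steps):
        dq = quat_mult(q, (0.0,) + omega)
        q = tuple(qi + 0.5 * dt * dqi for qi, dqi in zip(q, dq))
        norm = math.sqrt(sum(qi * qi for qi in q))
        q = tuple(qi / norm for qi in q)
    return q

# Spin about z at 0.1 rad/s for 10 s: total angle 1 rad, so the exact
# attitude quaternion is (cos 0.5, 0, 0, sin 0.5)
q = integrate((1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 0.1), 1e-4, 100000)
exact = (math.cos(0.5), 0.0, 0.0, math.sin(0.5))
print(max(abs(a - b) for a, b in zip(q, exact)) < 1e-4)   # → True
```

Unlike Euler angles, this parametrization has no kinematic singularity, which matters over the long integration arcs needed for lunar laser ranging (LURE) data analysis.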
Cell reprogramming modelled as transitions in a hierarchy of cell cycles
NASA Astrophysics Data System (ADS)
Hannam, Ryan; Annibale, Alessia; Kühn, Reimer
2017-10-01
We construct a model of cell reprogramming (the conversion of fully differentiated cells to a state of pluripotency, known as induced pluripotent stem cells, or iPSCs) which builds on key elements of cell biology viz. cell cycles and cell lineages. Although reprogramming has been demonstrated experimentally, much of the underlying processes governing cell fate decisions remain unknown. This work aims to bridge this gap by modelling cell types as a set of hierarchically related dynamical attractors representing cell cycles. Stages of the cell cycle are characterised by the configuration of gene expression levels, and reprogramming corresponds to triggering transitions between such configurations. Two mechanisms were found for reprogramming in a two level hierarchy: cycle specific perturbations and a noise induced switching. The former corresponds to a directed perturbation that induces a transition into a cycle-state of a different cell type in the potency hierarchy (mainly a stem cell) whilst the latter is a priori undirected and could be induced, e.g. by a (stochastic) change in the cellular environment. These reprogramming protocols were found to be effective in large regimes of the parameter space and make specific predictions concerning reprogramming dynamics which are broadly in line with experimental findings.
The patellofemoral joint: from dysplasia to dislocation
Zaffagnini, Stefano; Grassi, Alberto; Zocco, Gianluca; Rosa, Michele Attilo; Signorelli, Cecilia; Muccioli, Giulio Maria Marcheggiani
2017-01-01
Patellofemoral dysplasia is a major predisposing factor for instability of the patellofemoral joint. However, there is no consensus as to whether patellofemoral dysplasia is genetic in origin, caused by imbalanced forces producing maltracking and remodelling of the trochlea during infancy and growth, or due to other unknown and unexplored factors. The biomechanical effects of patellofemoral dysplasia on patellar stability and on surgical procedures have not been fully investigated. Also, different anatomical and demographic risk factors have been suggested, in an attempt to identify the recurrent dislocators. Therefore, a comprehensive evaluation of all the radiographic, MRI and CT parameters can help the clinician to assess patients with primary and recurrent patellar dislocation and guide management. Patellofemoral dysplasia still represents an extremely challenging condition to manage. Its controversial aetiology and its complex biomechanical behaviour continue to pose more questions than answers to the research community, which reflects the lack of universally accepted guidelines for the correct treatment. However, due to the complexity of this condition, an extremely personalised approach should be reserved for each patient, in considering and addressing the anatomical abnormalities responsible for the symptoms. Cite this article: EFORT Open Rev 2017;2. DOI: 10.1302/2058-5241.2.160081. Originally published online at www.efortopenreviews.org PMID:28630757
NASA Astrophysics Data System (ADS)
Zhou, Wei-Xing; Sornette, Didier
2003-12-01
We propose a straightforward extension of our previously proposed log-periodic power-law model of the “anti-bubble” regime of the USA stock market since the summer of 2000, in terms of the renormalization group framework to model critical points. Using a previous work by Gluzman and Sornette (Phys. Rev. E 65 (2003) 036142) on the classification of the class of Weierstrass-like functions, we show that the five crashes that occurred since August 2000 can be accurately modeled by this approach, in a fully consistent way with no additional parameters. Our theory suggests an overall consistent organization of the investors forming a collective network which interact to form the pessimistic bearish “anti-bubble” regime with intermittent acceleration of the positive feedbacks of pessimistic sentiment leading to these crashes. We develop retrospective predictions, that confirm the existence of significant arbitrage opportunities for a trader using our model. Finally, we offer a prediction for the unknown future of the US S&P500 index extending over 2003 and 2004, that refines the previous prediction of Sornette and Zhou (Quant. Finance 2 (2002) 468).
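A generic log-periodic power-law (anti-bubble) form, with purely illustrative parameter values, makes the model's key structural property concrete: the oscillatory part is invariant under the discrete rescaling τ → τ·exp(2π/ω), so successive oscillations are geometrically spaced in time:

```python
import math

# Generic LPPL anti-bubble form with illustrative parameters; tc marks the
# start of the anti-bubble and tau = t - tc > 0. Not the fitted S&P500 model.
A, B, C, m_exp, omega, phi = 7.0, -0.5, 0.1, 0.6, 10.0, 0.0

def log_price(tau):
    return A + B * tau**m_exp * (1 + C * math.cos(omega * math.log(tau) - phi))

# Log-periodicity: the oscillation repeats under tau -> tau * exp(2*pi/omega)
scale = math.exp(2 * math.pi / omega)
osc = lambda tau: math.cos(omega * math.log(tau) - phi)
print(all(abs(osc(tau * scale) - osc(tau)) < 1e-9
          for tau in (1.0, 3.0, 10.0)))   # → True
```

This discrete scale invariance is what the renormalization-group framework mentioned in the abstract formalizes, with Weierstrass-like functions generalizing the single cosine.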
Blind multirigid retrospective motion correction of MR images.
Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard
2015-04-01
Physiological nonrigid motion is inevitable when imaging, e.g., abdominal viscera, and can lead to serious deterioration of the image quality. Prospective techniques for motion correction can handle only special types of nonrigid motion, as they only allow global correction. Retrospective methods developed so far need guidance from navigator sequences or external sensors. We propose a fully retrospective nonrigid motion correction scheme that only needs raw data as an input. Our method is based on a forward model that describes the effects of nonrigid motion by partitioning the image into patches with locally rigid motion. Using this forward model, we construct an objective function that we can optimize with respect to both unknown motion parameters per patch and the underlying sharp image. We evaluate our method on both synthetic and real data in 2D and 3D. In vivo data was acquired using standard imaging sequences. The correction algorithm significantly improves the image quality. Our compute unified device architecture (CUDA)-enabled graphic processing unit implementation ensures feasible computation times. The presented technique is the first computationally feasible retrospective method that uses the raw data of standard imaging sequences, and allows to correct for nonrigid motion without guidance from external motion sensors. © 2014 Wiley Periodicals, Inc.
Donohue, Sarah E; Todisco, Alexandra E; Woldorff, Marty G
2013-04-01
Neuroimaging work on multisensory conflict suggests that the relevant modality receives enhanced processing in the face of incongruency. However, the degree of stimulus processing in the irrelevant modality and the temporal cascade of the attentional modulations in either the relevant or irrelevant modalities are unknown. Here, we employed an audiovisual conflict paradigm with a sensory probe in the task-irrelevant modality (vision) to gauge the attentional allocation to that modality. ERPs were recorded as participants attended to and discriminated spoken auditory letters while ignoring simultaneous bilateral visual letter stimuli that were either fully congruent, fully incongruent, or partially incongruent (one side incongruent, one congruent) with the auditory stimulation. Half of the audiovisual letter stimuli were followed 500-700 msec later by a bilateral visual probe stimulus. As expected, ERPs to the audiovisual stimuli showed an incongruency ERP effect (fully incongruent versus fully congruent) of an enhanced, centrally distributed, negative-polarity wave starting ∼250 msec. More critically here, the sensory ERP components to the visual probes were larger when they followed fully incongruent versus fully congruent multisensory stimuli, with these enhancements greatest on fully incongruent trials with the slowest RTs. In addition, on the slowest-response partially incongruent trials, the P2 sensory component to the visual probes was larger contralateral to the preceding incongruent visual stimulus. These data suggest that, in response to conflicting multisensory stimulus input, the initial cognitive effect is a capture of attention by the incongruent irrelevant-modality input, pulling neural processing resources toward that modality, resulting in rapid enhancement, rather than rapid suppression, of that input.
Quantification of type I error probabilities for heterogeneity LOD scores.
Abreu, Paula C; Hodge, Susan E; Greenberg, David A
2002-02-01
Locus heterogeneity is a major confounding factor in linkage analysis. When no prior knowledge of linkage exists, and one aims to detect linkage and heterogeneity simultaneously, classical distribution theory of log-likelihood ratios does not hold. Despite some theoretical work on this problem, no generally accepted practical guidelines exist. Nor has anyone rigorously examined the combined effect of testing for linkage and heterogeneity and simultaneously maximizing over two genetic models (dominant, recessive). The effect of linkage phase represents another uninvestigated issue. Using computer simulation, we investigated type I error (P value) of the "admixture" heterogeneity LOD (HLOD) score, i.e., the LOD score maximized over both recombination fraction theta and admixture parameter alpha, and we compared this with the P values when one maximizes only with respect to theta (i.e., the standard LOD score). We generated datasets of phase-known and -unknown nuclear families, sizes k = 2, 4, and 6 children, under fully penetrant autosomal dominant inheritance. We analyzed these datasets (1) assuming a single genetic model, and maximizing the HLOD over theta and alpha; and (2) maximizing the HLOD additionally over two dominance models (dominant vs. recessive), then subtracting a 0.3 correction. For both (1) and (2), P values increased with family size k; rose less for phase-unknown families than for phase-known ones, with the former approaching the latter as k increased; and did not exceed the one-sided mixture distribution ξ = (1/2)χ²₁ + (1/2)χ²₂. Thus, maximizing the HLOD over theta and alpha appears to add considerably less than an additional degree of freedom to the associated χ²₁ distribution. We conclude with practical guidelines for linkage investigators. Copyright 2002 Wiley-Liss, Inc.
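The mixture bound cited above is easy to evaluate with standard-library functions: the tail of χ²₁ is erfc(√(x/2)) and the tail of χ²₂ is exp(−x/2), and a LOD score converts to a chi-square statistic via x = 2 ln(10)·LOD. A sketch of the bound (not the simulation study itself):

```python
import math

def mixture_sf(x):
    """Tail probability of the one-sided mixture (1/2)chi2_1 + (1/2)chi2_2,
    the asymptotic bound cited for the HLOD statistic."""
    sf_chi1 = math.erfc(math.sqrt(x / 2.0))   # P(chi-square, 1 df > x)
    sf_chi2 = math.exp(-x / 2.0)              # P(chi-square, 2 df > x)
    return 0.5 * sf_chi1 + 0.5 * sf_chi2

def lod_pvalue_bound(lod):
    """Bound on the P value of an HLOD via x = 2 ln(10) * LOD."""
    return mixture_sf(2.0 * math.log(10.0) * lod)

print(round(mixture_sf(5.99), 4))   # roughly the 3% level
```

The paper's empirical finding is that the true type I error sits well below this bound, i.e., maximizing over alpha costs much less than a full extra degree of freedom.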
Chang, Yue-Yue; Wu, Hai-Long; Fang, Huan; Wang, Tong; Liu, Zhi; Ouyang, Yang-Zi; Ding, Yu-Jie; Yu, Ru-Qin
2018-06-15
In this study, a smart and green analytical method based on a second-order calibration algorithm coupled with excitation-emission matrix (EEM) fluorescence was developed for the determination of rhodamine dyes illegally added to chilli samples. The proposed method not only has the advantage of higher sensitivity over the traditional fluorescence method but also fully displays the "second-order advantage". Pure signals of analytes were successfully extracted from severely interferential EEM profiles using the alternating trilinear decomposition (ATLD) algorithm, even in the presence of common fluorescence problems such as scattering, peak overlaps and unknown interferences. It is worth noting that the unknown interferents can denote different kinds of backgrounds, not only a constant background. In addition, the use of an interpolation method avoided loss of information on the analytes of interest. The "mathematical separation" strategy, used instead of complicated "chemical or physical separation", can be more effective and environmentally friendly. A series of statistical parameters, including figures of merit and intra-day (≤1.9%) and inter-day (≤6.6%) RSDs, were calculated to validate the accuracy of the proposed method. Furthermore, the authoritative HPLC-FLD method was adopted to verify the qualitative and quantitative results of the proposed method. Comparison of the two methods also showed that the ATLD-EEM method is accurate, rapid, simple and green, and is expected to develop into an attractive alternative for simultaneous and interference-free determination of rhodamine dyes illegally added to complex matrices. Copyright © 2018. Published by Elsevier B.V.
Kanoatov, Mirzo; Galievsky, Victor A; Krylova, Svetlana M; Cherney, Leonid T; Jankowski, Hanna K; Krylov, Sergey N
2015-03-03
Nonequilibrium capillary electrophoresis of equilibrium mixtures (NECEEM) is a versatile tool for studying affinity binding. Here we describe a NECEEM-based approach for simultaneous determination of both the equilibrium constant, K(d), and the unknown concentration of a binder that we call a target, T. In essence, NECEEM is used to measure the unbound equilibrium fraction, R, for the binder with a known concentration that we call a ligand, L. The first set of experiments is performed at varying concentrations of T, prepared by serial dilution of the stock solution, but at a constant concentration of L, which is as low as its reliable quantitation allows. The value of R is plotted as a function of the dilution coefficient, and dilution corresponding to R = 0.5 is determined. This dilution of T is used in the second set of experiments in which the concentration of T is fixed but the concentration of L is varied. The experimental dependence of R on the concentration of L is fitted with a function describing their theoretical dependence. Both K(d) and the concentration of T are used as fitting parameters, and their sought values are determined as the ones that generate the best fit. We have fully validated this approach in silico by using computer-simulated NECEEM electropherograms and then applied it to experimental determination of the unknown concentration of MutS protein and K(d) of its interactions with a DNA aptamer. The general approach described here is applicable not only to NECEEM but also to any other method that can determine a fraction of unbound molecules at equilibrium.
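The two-parameter fit at the heart of the approach can be sketched directly: for 1:1 binding L + T ⇌ LT, the unbound ligand fraction R follows from the standard quadratic solution for the complex concentration, and both K(d) and the target concentration are recovered by fitting R versus ligand concentration. Synthetic noiseless data and a simple grid search stand in for the NECEEM measurements (all concentration values illustrative):

```python
import numpy as np

def unbound_fraction(L0, T0, Kd):
    """Fraction of ligand unbound at equilibrium for 1:1 binding L + T = LT,
    from the standard quadratic solution for the complex concentration."""
    b = L0 + T0 + Kd
    LT = (b - np.sqrt(b * b - 4.0 * L0 * T0)) / 2.0
    return 1.0 - LT / L0

# Synthetic "experiment": vary ligand concentration at fixed unknown target
Kd_true, T0_true = 50.0, 120.0           # nM, illustrative values
L0 = np.array([5.0, 20.0, 80.0, 160.0, 320.0, 640.0])
R_obs = unbound_fraction(L0, T0_true, Kd_true)

# Fit both unknowns by grid search on the sum of squared residuals
Kds = np.arange(1.0, 201.0, 1.0)
T0s = np.arange(10.0, 301.0, 1.0)
best = min(((Kd, T0) for Kd in Kds for T0 in T0s),
           key=lambda p: np.sum((unbound_fraction(L0, p[1], p[0]) - R_obs) ** 2))
print(float(best[0]), float(best[1]))   # → 50.0 120.0
```

The paper's two-stage experimental design (first varying T at fixed L to locate R = 0.5, then varying L at fixed T) serves exactly to make this joint fit well conditioned.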
State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications
NASA Astrophysics Data System (ADS)
Phanomchoeng, Gridsada
A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely tire-road friction coefficient, slip angle, roll angle, and rollover index, are known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, due to unknown and changing plant parameters, and due to the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time-varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz-assumption-based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs.
An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is presented. The developed theory is used to estimate vertical tire forces and predict tripped rollovers in situations involving road bumps, potholes, and lateral unknown force inputs. To estimate the tire-road friction coefficients at each individual tire of the vehicle, algorithms to estimate longitudinal forces and slip ratios at each tire are proposed. Subsequently, tire-road friction coefficients are obtained using recursive least squares parameter estimators that exploit the relationship between longitudinal force and slip ratio at each tire. The developed approaches are evaluated through simulations with industry standard software, CARSIM, with experimental tests on a Volvo XC90 sport utility vehicle and with experimental tests on a 1/8th scaled vehicle. The simulation and experimental results show that the developed approaches can reliably estimate the vehicle parameters and state variables needed for effective ESC and rollover prevention applications.
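The recursive-least-squares step mentioned for friction estimation can be sketched in its simplest form: in the small-slip regime, longitudinal force is roughly proportional to slip ratio, F ≈ C·s, and a scalar RLS estimator with a forgetting factor tracks the stiffness C. All numerical values are illustrative, not from the dissertation:

```python
import random

random.seed(2)

# Small-slip linear tire model F = C * s (longitudinal force vs slip ratio);
# the stiffness C is the unknown. Scalar recursive least squares with a
# forgetting factor, so the estimate can track a changing road surface.
C_true = 80000.0            # N per unit slip, illustrative
lam = 0.99                  # forgetting factor
theta, P = 0.0, 1e6         # parameter estimate and its covariance

for _ in range(500):
    s = random.uniform(0.0, 0.05)                 # measured slip ratio
    F = C_true * s + random.gauss(0.0, 50.0)      # measured force, noisy
    # standard scalar RLS update
    K = P * s / (lam + s * P * s)
    theta += K * (F - s * theta)
    P = (P - K * s * P) / lam

print(abs(theta - C_true) / C_true)   # small: estimate near the true stiffness
```

In the dissertation, this per-tire stiffness-like relationship feeds the tire-road friction coefficient estimate; the forgetting factor is what lets the estimator follow abrupt surface changes.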
ERIC Educational Resources Information Center
Kieftenbeld, Vincent; Natesan, Prathiba
2012-01-01
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Luca, Fabrizio; Valvo, Manuela; Ghezzi, Tiago Leal; Zuccaro, Massimiliano; Cenciarelli, Sabina; Trovato, Cristina; Sonzogni, Angelica; Biffi, Roberto
2013-04-01
Urinary and sexual dysfunctions are recognized complications of rectal cancer surgery. Their incidence after robotic surgery is as yet unknown. The aim of this study was to prospectively evaluate the impact of robotic surgery for rectal cancer on sexual and urinary functions in male and female patients. From April 2008 to December 2010, 74 patients undergoing fully robotic resection for rectal cancer were prospectively included in the study. Urinary and sexual dysfunctions affecting quality of life were assessed with specific self-administered questionnaires in all patients undergoing robotic total mesorectal excision (RTME). Results were calculated with validated scoring systems and statistically analyzed. The analyses of the questionnaires completed by the 74 patients who underwent RTME showed that sexual function and general sexual satisfaction decreased significantly 1 month after intervention: 19.1 ± 8.7 versus 11.9 ± 10.2 (P < 0.05) for erectile function and 6.9 ± 2.4 versus 5.3 ± 2.5 (P < 0.05) for general satisfaction in men; 2.6 ± 3.3 versus 0.8 ± 1.4 (P < 0.05) and 2.4 ± 2.5 versus 0.7 ± 1.6 (P < 0.05) for arousal and general satisfaction, respectively, in women. Subsequently, both parameters increased progressively, and 1 year after surgery, the values were comparable to those measured before surgery. Concerning urinary function, the grade of incontinence measured 1 year after the intervention was unchanged for both sexes. RTME allows for preservation of urinary and sexual functions. This is probably due to the superior movements of the wristed instruments that facilitate fine dissection, coupled with a stable and magnified view that helps in recognizing the inferior hypogastric plexus.
A study of parameter identification
NASA Technical Reports Server (NTRS)
Herget, C. J.; Patterson, R. E., III
1978-01-01
A set of definitions for deterministic parameter identifiability is proposed. Deterministic parameter identifiability properties are presented based on four system characteristics: direct parameter recoverability, properties of the system transfer function, properties of output distinguishability, and uniqueness properties of a quadratic cost functional. Stochastic parameter identifiability is defined in terms of the existence of an estimation sequence for the unknown parameters which is consistent in probability. Stochastic parameter identifiability properties are presented based on the following characteristics: convergence properties of the maximum likelihood estimate, properties of the joint probability density functions of the observations, and properties of the information matrix.
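The information-matrix criterion mentioned last has a simple numerical counterpart: if the output-sensitivity matrix ∂y/∂θ is rank deficient, the information matrix is singular and the parameters are not (locally) identifiable. A toy sketch, with models chosen purely for illustration:

```python
import numpy as np

def sensitivity_rank(model, theta, xs, h=1e-6):
    """Rank of the output-sensitivity matrix dy/dtheta (finite differences),
    a numerical proxy for the information-matrix rank criterion."""
    theta = np.asarray(theta, dtype=float)
    J = np.zeros((len(xs), len(theta)))
    for j in range(len(theta)):
        tp = theta.copy()
        tp[j] += h
        J[:, j] = [(model(tp, x) - model(theta, x)) / h for x in xs]
    return np.linalg.matrix_rank(J, tol=1e-4)

xs = [0.5, 1.0, 1.5, 2.0]

# y = (a + b) * x : only the sum a + b is identifiable -> rank 1 of 2
print(sensitivity_rank(lambda th, x: (th[0] + th[1]) * x, [1.0, 2.0], xs))   # → 1

# y = a * x + b * x**2 : both parameters identifiable -> rank 2
print(sensitivity_rank(lambda th, x: th[0] * x + th[1] * x * x, [1.0, 2.0], xs))   # → 2
```

Full rank at a point establishes only local identifiability; the deterministic definitions in the abstract (output distinguishability, cost-functional uniqueness) address the global question.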
Islam, Md Hamidul; Khan, Kamruzzaman; Akbar, M Ali; Salam, Md Abdus
2014-01-01
Mathematical modeling of many physical systems leads to nonlinear evolution equations because most physical systems are inherently nonlinear. The investigation of traveling wave solutions of nonlinear partial differential equations (NPDEs) plays a significant role in the study of nonlinear physical phenomena. In this article, we construct traveling wave solutions of the modified KdV-ZK equation and the viscous Burgers equation using an enhanced (G'/G)-expansion method. A number of traveling wave solutions in terms of unknown parameters are obtained. The derived traveling wave solutions exhibit solitary waves when special values are given to their unknown parameters. 35C07; 35C08; 35P99.
Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space
Chen, Min; Hashimoto, Koichi
2017-01-01
Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system that measures their motion states. To this end, we build a vision system to detect unknown fast-moving objects within a given space and calculate their motion parameters, represented by positions and poses. We propose a novel method to detect reliable interest points from images of moving objects, points which can hardly be detected by general-purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects according to a careful schedule that accounts for appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189
Design and analysis of adaptive Super-Twisting sliding mode control for a microgyroscope.
Feng, Zhilin; Fei, Juntao
2018-01-01
This paper proposes a novel adaptive Super-Twisting sliding mode control for a microgyroscope under unknown model uncertainties and external disturbances. In order to improve the convergence rate of reaching the sliding surface and the accuracy of regulation and trajectory tracking, a high-order Super-Twisting sliding mode control strategy is employed, which not only combines the advantages of traditional sliding mode control with those of Super-Twisting sliding mode control, but also guarantees that the designed control system reaches the sliding surface and equilibrium point in a shorter finite time from any initial state while avoiding chattering problems. To handle the unknown parameters of the microgyroscope system, an adaptive algorithm based on Lyapunov stability theory is designed to estimate the unknown parameters and angular velocity of the microgyroscope. Finally, the effectiveness of the proposed scheme is demonstrated by simulation results. A comparative study between adaptive Super-Twisting sliding mode control and conventional sliding mode control demonstrates the superiority of the proposed method.
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; Zhang, Jinming
2008-01-01
The method of maximum-likelihood is typically applied to item response theory (IRT) models when the ability parameter is estimated while conditioning on the true item parameters. In practice, the item parameters are unknown and need to be estimated first from a calibration sample. Lewis (1985) and Zhang and Lu (2007) proposed the expected response…
NASA Astrophysics Data System (ADS)
Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.
2018-01-01
Although high-performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which, in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics, as well as local and global sensitivity measures, is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems.
It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest ( ≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
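The non-intrusive idea can be sketched in a few lines of Python. The one-input toy model below is hypothetical, standing in for an expensive hydrologic simulator: sample the random input, fit probabilists' Hermite coefficients by least squares, and read the mean and variance directly off the coefficients.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermeval

def model(x):
    # Hypothetical toy response driven by one standard-normal random input.
    return np.exp(0.3 * x)

rng = np.random.default_rng(0)
xs = rng.standard_normal(2000)   # non-intrusive: only input/output samples needed
ys = model(xs)

# Fit a degree-5 expansion in probabilists' Hermite polynomials by least squares.
deg = 5
eye = np.eye(deg + 1)
A = np.column_stack([hermeval(xs, eye[k]) for k in range(deg + 1)])
coef, *_ = np.linalg.lstsq(A, ys, rcond=None)

# Orthogonality (E[He_j He_k] = k! * delta_jk under the standard normal)
# gives the statistics for free, with no further model runs.
pce_mean = coef[0]
pce_var = sum(coef[k] ** 2 * math.factorial(k) for k in range(1, deg + 1))
```

For this toy model the exact mean is exp(0.045) ≈ 1.046, and the fitted expansion recovers both it and the variance to within the truncation error.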
Chen, Gang; Song, Yongduan; Guan, Yanfeng
2018-03-01
This brief investigates the finite-time consensus tracking control problem for networked uncertain mechanical systems on digraphs. A new terminal sliding-mode-based cooperative control scheme is developed to guarantee that the tracking errors converge to an arbitrarily small bound around zero in finite time. All the networked systems can have different dynamics and all the dynamics are unknown. A neural network is used at each node to approximate the local unknown dynamics. The control schemes are implemented in a fully distributed manner. The proposed control method eliminates some limitations in the existing terminal sliding-mode-based consensus control methods and extends the existing analysis methods to the case of directed graphs. Simulation results on networked robot manipulators are provided to show the effectiveness of the proposed control algorithms.
Extensions of Rasch's Multiplicative Poisson Model.
ERIC Educational Resources Information Center
Jansen, Margo G. H.; van Duijn, Marijtje A. J.
1992-01-01
A model developed by G. Rasch that assumes scores on some attainment tests can be realizations of a Poisson process is explained and expanded by assuming a prior distribution, with fixed but unknown parameters, for the subject parameters. How additional between-subject and within-subject factors can be incorporated is discussed. (SLD)
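The multiplicative Poisson setup with a prior on the subject parameters can be illustrated with a small simulation; the Gamma prior and the "easiness" value below are hypothetical, chosen only to show that the marginal scores then follow a negative binomial distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical values: subject rate parameters drawn from a Gamma prior,
# scores Poisson given the rate times a multiplicative test parameter.
shape, rate = 4.0, 2.0   # Gamma prior (shape, rate) for the subject parameters
easiness = 1.5           # fixed (but in practice unknown) test parameter

theta = rng.gamma(shape, 1.0 / rate, size=100_000)
scores = rng.poisson(theta * easiness)
# Marginally the scores are negative binomial; mean = easiness * shape / rate = 3.
```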
Optimal hemodynamic response model for functional near-infrared spectroscopy
Kamran, Muhammad A.; Jeong, Myung Yung; Mannan, Malik M. N.
2015-01-01
Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique and measures brain activities by means of near-infrared light of 650–950 nm wavelengths. The cortical hemodynamic response (HR) differs in attributes at different brain regions and on repetition of trials, even if the experimental paradigm is kept exactly the same. Therefore, an HR model that can estimate such variations in the response is the objective of this research. The canonical hemodynamic response function (cHRF) is modeled by two Gamma functions with six unknown parameters (four of them to model the shape and other two to scale and baseline respectively). The HRF model is supposed to be a linear combination of HRF, baseline, and physiological noises (amplitudes and frequencies of physiological noises are supposed to be unknown). An objective function is developed as a square of the residuals with constraints on 12 free parameters. The formulated problem is solved by using an iterative optimization algorithm to estimate the unknown parameters in the model. Inter-subject variations in HRF and physiological noises have been estimated for better cortical functional maps. The accuracy of the algorithm has been verified using 10 real and 15 simulated data sets. Ten healthy subjects participated in the experiment and their HRF for finger-tapping tasks have been estimated and analyzed. The statistical significance of the estimated activity strength parameters has been verified by employing statistical analysis (i.e., t-value > tcritical and p-value < 0.05). PMID:26136668
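The two-Gamma cHRF described above can be sketched as follows. This uses a common SPM-style parameterization with hypothetical default values (peak and undershoot shapes, undershoot ratio, amplitude), not necessarily the authors' exact six-parameter model.

```python
import numpy as np
from scipy.stats import gamma

def chrf(t, a1=6.0, s1=1.0, a2=16.0, s2=1.0, ratio=1/6, amp=1.0):
    # Two-Gamma HRF: an initial peak minus a delayed undershoot.
    # Parameter names and defaults are hypothetical (SPM-style), not the paper's.
    return amp * (gamma.pdf(t, a1, scale=s1) - ratio * gamma.pdf(t, a2, scale=s2))

t = np.arange(0.0, 30.0, 0.1)   # seconds
h = chrf(t)                      # peaks near t = (a1 - 1) * s1 = 5 s
```

Fitting the paper's model would then amount to optimizing these shape parameters (plus baseline and physiological-noise terms) against measured fNIRS data.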
A meta-cognitive learning algorithm for a Fully Complex-valued Relaxation Network.
Savitha, R; Suresh, S; Sundararajan, N
2012-08-01
This paper presents a meta-cognitive learning algorithm for a single-hidden-layer complex-valued neural network called the "Meta-cognitive Fully Complex-valued Relaxation Network (McFCRN)". McFCRN has two components: a cognitive component and a meta-cognitive component. A Fully Complex-valued Relaxation Network (FCRN), with a fully complex-valued Gaussian-like activation function (sech) in the hidden layer and an exponential activation function in the output layer, forms the cognitive component. The meta-cognitive component contains a self-regulatory learning mechanism which controls the learning ability of the FCRN by deciding what-to-learn, when-to-learn and how-to-learn from a sequence of training data. The input parameters of the cognitive component are chosen randomly and the output parameters are estimated by minimizing a logarithmic error function. The problem of explicit minimization of magnitude and phase errors in the logarithmic error function is converted to a system of linear equations, and the output parameters of the FCRN are computed analytically. McFCRN starts with zero hidden neurons and builds up the number of neurons required to approximate the target function. The meta-cognitive component selects the best learning strategy for the FCRN to acquire knowledge from the training data and adapts the learning strategies to best emulate the components of human learning. Performance studies on function approximation and real-valued classification problems show that the proposed McFCRN performs better than existing results reported in the literature. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Khandeev, V. I.
2016-02-01
The strongly NP-hard problem of partitioning a finite set of points in Euclidean space into two clusters of given sizes (cardinalities), minimizing the sum (over both clusters) of the intracluster sums of squared distances from the elements of the clusters to their centers, is considered. It is assumed that the center of one of the sought clusters is specified at a desired (arbitrary) point of space (without loss of generality, at the origin), while the center of the other is unknown and is determined as the mean value over all elements of that cluster. It is shown that, unless P = NP, there is no fully polynomial-time approximation scheme for this problem, while such a scheme is presented for the case of fixed space dimension.
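The objective being minimized is easy to write down; a sketch with hypothetical toy points (the hardness lies in searching over all partitions of the prescribed sizes, not in evaluating the objective):

```python
import numpy as np

def partition_cost(points, mask):
    # Objective from the problem statement: the cluster where mask is False is
    # scored against the fixed center (the origin); the other against its mean.
    fixed, free = points[~mask], points[mask]
    cost_fixed = float(np.sum(fixed ** 2))                      # squared distances to origin
    cost_free = float(np.sum((free - free.mean(axis=0)) ** 2))  # squared distances to mean
    return cost_fixed + cost_free

# Hypothetical toy data: origin-cluster {(0,0), (1,0)}, mean-cluster {(10,0), (12,0)}.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0], [12.0, 0.0]])
cost = partition_cost(pts, np.array([False, False, True, True]))
```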
Nursing education trends: future implications and predictions.
Valiga, Theresa M Terry
2012-12-01
This article examines current trends in nursing education and proposes numerous transformations needed to ensure that programs are relevant, fully engage learners, reflect evidence-based teaching practices, and are innovative. Such program characteristics are essential if we are to graduate nurses who can practice effectively in today's complex, ambiguous, ever-changing health care environments and who are prepared to practice in and, indeed, shape tomorrow's unknown practice environments. Copyright © 2012 Elsevier Inc. All rights reserved.
Determination of power system component parameters using nonlinear dead beat estimation method
NASA Astrophysics Data System (ADS)
Kolluru, Lakshmi
Power systems are considered the most complex man-made wonders in existence today. In order to effectively supply the ever-increasing demands of consumers, power systems are required to remain stable at all times. Stability and monitoring of these complex systems are achieved by strategically placed computerized control centers. State and parameter estimation is an integral part of these facilities, as it deals with identifying the unknown states and/or parameters of the systems. Advancements in measurement technologies and the introduction of phasor measurement units (PMU) provide detailed and dynamic information about all measurements. Accurate availability of dynamic measurements provides engineers the opportunity to expand and explore various possibilities in power system dynamic analysis/control. This thesis discusses the development of a parameter determination algorithm for nonlinear power systems, using dynamic data obtained from local measurements. The proposed algorithm was developed by observing the dead beat estimator used in state-space estimation of linear systems. The dead beat estimator is considered to be very effective, as it is capable of obtaining the required results in a fixed number of steps; the number of steps required is related to the order of the system and the number of parameters to be estimated. The proposed algorithm combines the idea of the dead beat estimator with nonlinear finite difference methods to create an algorithm which is user-friendly and can determine the parameters fairly accurately and effectively. The proposed algorithm is based on a deterministic approach, which uses dynamic data and mathematical models of power system components to determine the unknown parameters. The effectiveness of the algorithm is tested by implementing it to identify the unknown parameters of a synchronous machine. The MATLAB environment is used to create three test cases for dynamic analysis of the system with assumed known parameters.
Faults are introduced in the virtual test systems, and the dynamic data obtained in each case are analyzed and recorded. Ideally, actual measurements would be provided to the algorithm; as such measurements were not readily available, the data obtained from the simulations are fed into the determination algorithm as inputs. The obtained results are then compared to the original (or assumed) values of the parameters. The results suggest that the algorithm is able to determine the parameters of a synchronous machine when crisp data are available.
Discriminative parameter estimation for random walks segmentation.
Baudin, Pierre-Yves; Goodman, Danny; Kumar, Puneet; Azzabou, Noura; Carlier, Pierre G; Paragios, Nikos; Kumar, M Pawan
2013-01-01
The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. We propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challenge we face is that the training samples are not fully supervised: specifically, they provide a hard segmentation of the images instead of a probabilistic segmentation. We overcome this challenge by treating the optimal probabilistic segmentation that is compatible with the given hard segmentation as a latent variable. This allows us to employ the latent support vector machine formulation for parameter estimation. We show that our approach significantly outperforms the baseline methods on a challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.
Design of a DNA chip for detection of unknown genetically modified organisms (GMOs).
Nesvold, Håvard; Kristoffersen, Anja Bråthen; Holst-Jensen, Arne; Berdal, Knut G
2005-05-01
Unknown genetically modified organisms (GMOs) have not undergone a risk evaluation, and hence might pose a danger to health and environment. There are, today, no methods for detecting unknown GMOs. In this paper we propose a novel method intended as a first step in an approach for detecting unknown genetically modified (GM) material in a single plant. A model is designed where biological and combinatorial reduction rules are applied to a set of DNA chip probes containing all possible sequences of uniform length n, creating probes capable of detecting unknown GMOs. The model is theoretically tested for Arabidopsis thaliana Columbia, and the probabilities for detecting inserts and receiving false positives are assessed for various parameters for this organism. From a theoretical standpoint, the model looks very promising but should be tested further in the laboratory. The model and algorithms will be available upon request to the corresponding author.
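The starting point of the model, before the biological and combinatorial reduction rules are applied, is the complete set of 4^n probes of uniform length n; a minimal sketch:

```python
from itertools import product

def all_probes(n):
    # The unreduced chip: every DNA sequence of uniform length n (4**n probes).
    # The paper's biological and combinatorial rules would then prune this set.
    return ["".join(p) for p in product("ACGT", repeat=n)]

probes = all_probes(3)   # 64 trinucleotide probes
```

For realistic probe lengths the unreduced set grows as 4^n, which is why the reduction rules are essential to the design.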
Shi, Wuxi; Luo, Rui; Li, Baoquan
2017-01-01
In this study, an adaptive fuzzy prescribed performance control approach is developed for a class of uncertain multi-input and multi-output (MIMO) nonlinear systems with unknown control direction and unknown dead-zone inputs. The properties of symmetric matrices are exploited to design the adaptive fuzzy prescribed performance controller, and a Nussbaum-type function is incorporated in the controller to estimate the unknown control direction. This method has two prominent advantages: it does not require a priori knowledge of the control direction, and only three parameters need to be updated on-line for this MIMO system. It is proved that all the signals in the resulting closed-loop system are bounded and that the tracking errors converge to a small residual set within the prescribed performance bounds. The effectiveness of the proposed approach is validated by simulation results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
2010-01-01
Background: Patient-Reported Outcomes (PRO) are increasingly used in clinical and epidemiological research. Two main types of analytical strategies exist for these data: classical test theory (CTT), based on the observed scores, and models coming from Item Response Theory (IRT). However, whether IRT or CTT is the more appropriate method for analysing PRO data remains unknown. The statistical properties of CTT and IRT, regarding power and corresponding effect sizes, were compared. Methods: Two-group cross-sectional studies were simulated for the comparison of PRO data using IRT- or CTT-based analysis. For IRT, different scenarios were investigated according to whether item or person parameters were assumed to be known, known to a certain extent (for item parameters, from good to poor precision), or unknown and therefore requiring estimation. The powers obtained with IRT or CTT were compared and the parameters having the strongest impact on them were identified. Results: When person parameters were assumed to be unknown and item parameters to be either known or not, the powers achieved using IRT or CTT were similar and always lower than the expected power from the well-known sample size formula for normally distributed endpoints. The number of items had a substantial impact on power for both methods. Conclusion: Without any missing data, IRT and CTT seem to provide comparable power. The classical sample size formula for CTT seems to be adequate under some conditions but is not appropriate for IRT. In IRT, it seems important to take the number of items into account to obtain an accurate formula. PMID:20338031
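The simulated two-group comparison can be illustrated with a minimal Rasch data generator; the group sizes, item count, and the 0.5 latent group effect below are hypothetical, not the paper's exact scenarios.

```python
import numpy as np

def simulate_rasch(thetas, betas, rng):
    # Binary responses under the Rasch (1-parameter IRT) model:
    # P(correct) = logistic(theta_person - beta_item).
    p = 1.0 / (1.0 + np.exp(-(thetas[:, None] - betas[None, :])))
    return (rng.random(p.shape) < p).astype(int)

rng = np.random.default_rng(0)
items = np.zeros(20)                                    # hypothetical item difficulties
group_a = simulate_rasch(rng.normal(0.0, 1.0, 200), items, rng)
group_b = simulate_rasch(rng.normal(0.5, 1.0, 200), items, rng)
# A CTT analysis would now t-test the sum scores group_a.sum(1) vs group_b.sum(1);
# an IRT analysis would first estimate person (and possibly item) parameters.
```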
Zhao, Xuefeng; Raghavan, Madhavan L; Lu, Jia
2011-05-01
Knowledge of the elastic properties of cerebral aneurysms is crucial for understanding the biomechanical behavior of the lesion. However, characterizing tissue properties using in vivo motion data presents a tremendous challenge. Aside from the limitation of data accuracy, a pressing issue is that the in vivo motion does not expose the stress-free geometry. This is compounded by the nonlinearity, anisotropy, and heterogeneity of the tissue behavior. This article introduces a method for identifying the heterogeneous properties of aneurysm wall tissue under an unknown stress-free configuration. In the proposed approach, an accessible configuration is taken as the reference; the unknown stress-free configuration is represented locally by a metric tensor describing the prestrain from the stress-free configuration to the reference configuration. Material parameters are identified together with the metric tensor pointwise. The paradigm is tested numerically using a forward-inverse analysis loop. An image-derived sac is considered. The aneurysm tissue is modeled as an eight-ply laminate whose constitutive behavior is described by an anisotropic hyperelastic strain-energy function containing four material parameters. The parameters are assumed to vary continuously in two assigned patterns to represent two types of material heterogeneity. Nine configurations between the diastolic and systolic pressures are generated by forward quasi-static finite element analyses. These configurations are fed to the inverse analysis to delineate the material parameters and the metric tensor. The recovered and the assigned distributions are in good agreement. A forward verification is conducted by comparing the displacement solutions obtained from the recovered and the assigned material parameters at a different pressure. The nodal displacements are found to be in excellent agreement.
Iron overload patients with unknown etiology from national survey in Japan.
Ikuta, Katsuya; Hatayama, Mayumi; Addo, Lynda; Toki, Yasumichi; Sasaki, Katsunori; Tatsumi, Yasuaki; Hattori, Ai; Kato, Ayako; Kato, Koichi; Hayashi, Hisao; Suzuki, Takahiro; Kobune, Masayoshi; Tsutsui, Miyuki; Gotoh, Akihiko; Aota, Yasuo; Matsuura, Motoo; Hamada, Yuzuru; Tokuda, Takahiro; Komatsu, Norio; Kohgo, Yutaka
2017-03-01
Transfusion is believed to be the main cause of iron overload in Japan. A nationwide survey on post-transfusional iron overload subsequently led to the establishment of guidelines for iron chelation therapy in this country. To date, however, detailed clinical information on the entire iron overload population in Japan has not been fully investigated. In the present study, we obtained and studied detailed clinical information on the iron overload patient population in Japan. Of 1109 iron overload cases, 93.1% were considered to have occurred post-transfusion. There were, however, 76 cases of iron overload of unknown origin, which suggests that many clinicians in Japan may encounter some difficulty in correctly diagnosing and treating iron overload. Further clinical data were obtained for 32 cases of iron overload of unknown origin; the median serum ferritin level was 1860.5 ng/mL. As occurs in post-transfusional iron overload, the rate of liver dysfunction was as high as 95.7% when serum ferritin levels exceeded 1000 ng/mL in these patients. Gene mutation analysis of iron metabolism-related genes in 27 cases of iron overload of unknown etiology revealed mutations in the genes coding for hemojuvelin, transferrin receptor 2, and ferroportin; this indicates that, although rare, hereditary hemochromatosis does occur in Japan.
Wildhaber, Mark L.; Albers, Janice; Green, Nicholas; Moran, Edward H.
2017-01-01
We develop a fully-stochasticized, age-structured population model suitable for population viability analysis (PVA) of fish and demonstrate its use with the endangered pallid sturgeon (Scaphirhynchus albus) of the Lower Missouri River as an example. The model incorporates three levels of variance: parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level, temporal variance (uncertainty caused by random environmental fluctuations over time) applied at the time-step level, and implicit individual variance (uncertainty caused by differences between individuals) applied within the time-step level. We found that population dynamics were most sensitive to survival rates, particularly age-2+ survival, and to fecundity-at-length. The inclusion of variance (unpartitioned or partitioned), stocking, or both generally decreased the influence of individual parameters on population growth rate. The partitioning of variance into parameter and temporal components had a strong influence on the importance of individual parameters, uncertainty of model predictions, and quasiextinction risk (i.e., pallid sturgeon population size falling below 50 age-1+ individuals). Our findings show that appropriately applying variance in PVA is important when evaluating the relative importance of parameters, and reinforce the need for better and more precise estimates of crucial life-history parameters for pallid sturgeon.
A Verification of Aerosol Optical Depth Retrieval Using the Terra Satellite
2012-06-01
[Figure-list fragments recovered from the report: differencing the signals isolates the direct transmission component of the signal, which can be used to calculate total optical depth (from Vincent 2006); values for the single-scatter albedo and the asymmetry parameter range from the fully backscattered to the fully forward-scattered condition.]
Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model
ERIC Educational Resources Information Center
Lamsal, Sunil
2015-01-01
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…
The Use of One-Sample Prediction Intervals for Estimating CO2 Scrubber Canister Durations
2012-10-01
Grade and 812 D-Grade Sofnolime. Definitions: According to Devore, a CI (confidence interval) refers to a parameter, or population characteristic, whose value is fixed but unknown to us. In contrast, a future value of Y is not a parameter but instead a random variable; for this
ERIC Educational Resources Information Center
Bar, Karl-Jurgen; Boettger, Silke; Wagner, Gerd; Wilsdorf, Christine; Gerhard, Uwe Jens; Boettger, Michael K.; Blanz, Bernhard; Sauer, Heinrich
2006-01-01
Objectives: The underlying mechanisms of reduced pain perception in anorexia nervosa (AN) are unknown. To gain more insight into the pathology, the authors investigated pain perception, autonomic function, and endocrine parameters before and during successful treatment of adolescent AN patients. Method: Heat pain perception was assessed in 15…
ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling
Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf
2012-01-01
Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse and uncertain, and are frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270
Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang
2017-05-18
This paper investigates time-varying formation robust tracking problems for high-order linear multiagent systems with a leader of unknown control input, in the presence of heterogeneous parameter uncertainties and external disturbances. The followers need to accomplish an expected time-varying formation in the state space while simultaneously tracking the state trajectory produced by the leader. First, a fully distributed time-varying formation robust tracking protocol is proposed that utilizes neighborhood state information. With the adaptive updating mechanism, the proposed protocol requires neither global knowledge of the communication topology nor the upper bounds of the parameter uncertainties, external disturbances, and the leader's unknown input. Then, in order to determine the control parameters, a four-step algorithm is presented, in which feasible conditions for the followers to accomplish the expected time-varying formation tracking are provided. Furthermore, based on Lyapunov-like analysis theory, it is proved that the formation tracking error converges to zero asymptotically. Finally, the effectiveness of the theoretical results is verified by simulation examples.
Multiparameter Estimation in Networked Quantum Sensors
NASA Astrophysics Data System (ADS)
Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.
2018-02-01
We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.
Myohara, Maroko; Niva, Cintia Carla; Lee, Jae Min
2006-08-01
To identify genes specifically activated during annelid regeneration, suppression subtractive hybridization was performed with cDNAs from regenerating and intact Enchytraeus japonensis, a terrestrial oligochaete that can regenerate a complete organism from small body fragments within 4-5 days. Filter array screening subsequently revealed that about 38% of the forward-subtracted cDNA clones contained genes that were upregulated during regeneration. Two hundred seventy-nine of these clones were sequenced and found to contain 165 different sequences (79 known and 86 unknown). Nine clones were fully sequenced and four of these sequences were matched to known genes for glutamine synthetase, glucosidase 1, retinal protein 4, and phosphoribosylaminoimidazole carboxylase, respectively. The remaining five clones encoded an unknown open-reading frame. The expression levels of these genes were highest during blastema formation. Our present results, therefore, demonstrate the great potential of annelids as a new experimental subject for the exploration of unknown genes that play critical roles in animal regeneration.
Evaluation of Environmental Conditions on the Curing Of Commercial Fixative and Intumescent Coatings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicholson, J. C.
2016-09-26
Performance metrics for evaluating commercial fixatives are often not readily available for important parameters that must be considered per the facility safety basis and the facility Basis for Interim Operations (BIO). One such parameter is the behavior of such materials in varied, “non-ideal” conditions, where ideal is defined as 75 °F, 40% RH. Coupled with the inherently flammable nature of fixative materials, which can propagate flame along surfaces that are otherwise fireproof (concrete, sheet metal), much is left unknown when considering the safety basis implications of introducing these materials into nuclear facilities. Through SRNL’s efforts, three (3) fixatives, one (1) decontamination gel, and six (6) intumescent coatings were examined for their responses to environmental conditions to determine whether these materials were impervious to the non-nominal temperatures and humidities that may be found in nuclear facilities. Characteristics examined included set-to-touch time, dust-free time, and adhesion of the fully cured compounds. Of these ten materials, three were two-part epoxy materials while the other seven consisted of only one constituent. The results show that the epoxies tested are unable to cure at sub-freezing temperatures, with the low temperatures inhibiting crosslinking to a very significant degree. These efforts show significant inhibition of performance under non-nominal environmental conditions, something that must be addressed both in the decision process for selecting a fixative material to apply and in the safety basis to ensure that flammability and material at risk are calculated accurately.
Fully Capitated Payment Breakeven Rate for a Mid-Size Pediatric Practice.
Farmer, Steven A; Shalowitz, Joel; George, Meaghan; McStay, Frank; Patel, Kavita; Perrin, James; Moghtaderi, Ali; McClellan, Mark
2016-08-01
Payers are implementing alternative payment models that attempt to align payment with high-value care. This study calculates the breakeven capitated payment rate for a midsize pediatric practice and explores how several different staffing scenarios affect the rate. We supplemented a literature review and data from >200 practices with interviews of practice administrators, physicians, and payers to construct an income statement for a hypothetical, independent, midsize pediatric practice in fee-for-service. The practice was transitioned to full capitation to calculate the breakeven capitated rate, holding all practice parameters constant. Panel size, overhead, physician salary, and staffing ratios were varied to assess their impact on the breakeven per-member per-month (PMPM) rate. Finally, payment rates from an existing health plan were applied to the practice. The calculated breakeven PMPM was $24.10. When an economic simulation allowed core practice parameters to vary across a broad range, 80% of practices broke even with a PMPM of $35.00. The breakeven PMPM increased by 12% ($3.00) when the staffing ratio increased by 25% and increased by 23% ($5.50) when the staffing ratio increased by 38%. The practice was viable, even with primary care medical home staffing ratios, when rates from a real-world payer were applied. Practices are more likely to succeed in capitated models if pediatricians understand how these models alter practice finances. Staffing changes that are common in patient-centered medical home models increased the breakeven capitated rate. The degree to which team-based care will increase panel size and offset increased cost is unknown. Copyright © 2016 by the American Academy of Pediatrics.
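The breakeven arithmetic described above is simple to reproduce. The sketch below uses invented inputs (annual cost, panel size, and the share of cost attributable to staffing are assumptions, not the practice data behind the study's $24.10 figure), but it shows why a higher staffing ratio raises the breakeven per-member per-month rate.

```python
# Breakeven capitation sketch; all dollar figures below are illustrative
# assumptions, not the study's practice data.
def breakeven_pmpm(annual_cost, panel_size):
    """Capitated rate at which revenue covers cost: cost per member-month."""
    return annual_cost / (panel_size * 12)

base_cost, panel = 1_800_000, 6000        # assumed annual cost and panel
base = breakeven_pmpm(base_cost, panel)   # $25.00 PMPM with these inputs
print(base)

# Staffing is only a share of total cost, so a staffing-ratio increase
# raises the breakeven rate by less than the full percentage bump.
staff_share = 0.5                         # assumed fraction of cost
for bump in (0.25, 0.38):
    cost = base_cost * (1 + staff_share * bump)
    print(round(breakeven_pmpm(cost, panel), 2))
```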
Multisite EPR oximetry from multiple quadrature harmonics.
Ahmad, R; Som, S; Johnson, D H; Zweier, J L; Kuppusamy, P; Potter, L C
2012-01-01
Multisite continuous wave (CW) electron paramagnetic resonance (EPR) oximetry using multiple quadrature field modulation harmonics is presented. First, a recently developed digital receiver is used to extract multiple harmonics of field modulated projection data. Second, a forward model is presented that relates the projection data to unknown parameters, including linewidth at each site. Third, a maximum likelihood estimator of unknown parameters is reported using an iterative algorithm capable of jointly processing multiple quadrature harmonics. The data modeling and processing are applicable for parametric lineshapes under nonsaturating conditions. Joint processing of multiple harmonics leads to 2-3-fold acceleration of EPR data acquisition. For demonstration in two spatial dimensions, both simulations and phantom studies on an L-band system are reported. Copyright © 2011 Elsevier Inc. All rights reserved.
Parametric system identification of catamaran for improving controller design
NASA Astrophysics Data System (ADS)
Timpitak, Surasak; Prempraneerach, Pradya; Pengwang, Eakkachai
2018-01-01
This paper presents the estimation of a simplified dynamic model for only the surge and yaw motions of a catamaran, using system identification (SI) techniques to determine the associated unknown parameters. These methods support the design of the motion control system of an Unmanned Surface Vehicle (USV). The simulation results demonstrate an effective way to solve for damping forces and to determine added masses by applying least-squares and AutoRegressive eXogenous (ARX) methods. Both methods are then evaluated according to the estimated parametric errors from the vehicle's dynamic model. The ARX method, which yields better estimation accuracy, can then be applied to identify unknown parameters as well as to help improve the controller design of a real unmanned catamaran.
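A least-squares fit of an ARX-style model reduces to stacking regressors and solving a linear system. The sketch below uses a hypothetical first-order surge model with made-up coefficients, not the catamaran's actual dynamics:

```python
import numpy as np

# Least-squares identification of an assumed first-order surge model:
#   v[k+1] = a * v[k] + b * u[k]
# (a, b stand in for damping/added-mass effects; values are illustrative.)
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.2
u = rng.standard_normal(200)          # excitation input
v = np.zeros(201)
for k in range(200):
    v[k + 1] = a_true * v[k] + b_true * u[k]

# Stack regressors: each row is [v[k], u[k]], target is v[k+1].
Phi = np.column_stack([v[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, v[1:], rcond=None)
a_hat, b_hat = theta
print(a_hat, b_hat)   # recovers the true parameters on noise-free data
```

With measurement noise added to v, the same normal equations give the ARX estimate rather than an exact recovery, which is where the parametric-error comparison in the abstract comes in.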
Hexaammine Complexes of Cr(III) and Co(III): A Spectral Study.
ERIC Educational Resources Information Center
Brown, D. R.; Pavlis, R. R.
1985-01-01
Procedures are provided for experiments with complex ions of octahedral symmetry, hexaamminecobalt(III) chloride and hexaamminechromium(III) nitrate, so that students can fully interpret the ultraviolet/visible spectra of the complex cations in terms of the ligand field parameters, 10 "Dq," the Racah interelectron repulsion parameters, "B,"…
Quantum pattern recognition with multi-neuron interactions
NASA Astrophysics Data System (ADS)
Fard, E. Rezaei; Aghayar, K.; Amniat-Talab, M.
2018-03-01
We present a quantum neural network with multi-neuron interactions for pattern recognition tasks, combining an extended classical Hopfield network with adiabatic quantum computation. This scheme can be used as an associative memory to retrieve partial patterns with any number of unknown bits. We also propose a preprocessing approach that classifies the pattern space S so as to suppress spurious patterns. The results of pattern clustering show that, for pattern association, the number of weights (η) should equal the number of unknown bits in the input pattern (d). It is also notable that the associative memory function depends on the location of the unknown bits, in addition to d and the load parameter α.
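The associative-memory behavior has a compact classical analogue. The sketch below is a plain Hopfield recall with unknown bits marked as zeros; it does not reproduce the quantum/adiabatic or multi-neuron part of the scheme, and the stored patterns are invented:

```python
import numpy as np

# Classical Hopfield recall with unknown input bits (a classical analogue
# of the associative memory described above; patterns are illustrative).
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = patterns.T @ patterns       # Hebbian weights
np.fill_diagonal(W, 0)          # no self-coupling

probe = np.array([1, -1, 1, 0, 0, 0])  # 0 marks d = 3 unknown bits
state = probe.astype(float)
for _ in range(3):              # asynchronous sweeps until stable
    for i in range(len(state)):
        h = W[i] @ state
        if h != 0:
            state[i] = np.sign(h)
print(state)  # recovers the stored pattern consistent with the known bits
```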
Kovacevic, Sanja; Rafii, Michael S.; Brewer, James B.
2008-01-01
Medial temporal lobe (MTL) atrophy is associated with increased risk of conversion to Alzheimer's disease (AD), but manual tracing techniques, and even semi-automated techniques, for volumetric assessment are not practical in the clinical setting. In addition, most studies that examined MTL atrophy in AD have focused only on the hippocampus. The extent to which volumes of the amygdala and the temporal horn of the lateral ventricle predict subsequent clinical decline is unknown. This study examined whether measures of hippocampus, amygdala, and temporal horn volume predict clinical decline over the following 6-month period in patients with mild cognitive impairment (MCI). Fully-automated volume measurements were performed in 269 MCI patients. Baseline volumes of the hippocampus, amygdala, and temporal horn were evaluated as predictors of change in Mini-Mental State Exam (MMSE) and Clinical Dementia Rating Sum of Boxes (CDR SB) over a 6-month interval. Fully-automated measurements of baseline hippocampus and amygdala volumes correlated with baseline delayed recall scores. Patients with smaller baseline volumes of the hippocampus and amygdala, or larger baseline volumes of the temporal horn, had more rapid subsequent clinical decline on MMSE and CDR SB. Fully-automated and rapid measurement of segmental MTL volumes may help clinicians predict clinical decline in MCI patients. PMID:19474571
Dissolution Rate, Weathering Mechanics, and Friability of TNT, Comp B, Tritonal, and Octol
2010-02-01
second conceptual model also simulates dissolution of a particle that experiences constant soil moisture such as one mixed in with the soil...or are mediated by moisture on the particle surface is not yet known. The identities of these red products are also unknown as are their health...it using the outdoor data. The model assumes that raindrops intercepted by HE particles were fully saturated in HE as they dripped off. Particle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, Edmond
Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations, and then solving these equations via an asynchronous iterative method. The unknowns in these equations are the matrix entries of the factorization that is desired.
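The "factorization as bilinear equations" idea can be illustrated with fixed-point sweeps. The sketch below runs sequentially on a small dense matrix (the point of the method described above is that these sweeps can run asynchronously across many cores, and on a sparse pattern); the matrix is illustrative:

```python
import numpy as np

# Fixed-point sweeps over the bilinear equations (L*U)[i,j] = A[i,j],
# a sequential stand-in for the asynchronous iterative solver described
# above. With the full pattern this converges to the exact LU.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
n = A.shape[0]
L, U = np.eye(n), np.triu(A)         # initial guess (unit lower / upper)

for _ in range(10):                  # sweeps over all unknown entries
    for i in range(n):
        for j in range(n):
            s = sum(L[i, k] * U[k, j] for k in range(min(i, j)))
            if i > j:
                L[i, j] = (A[i, j] - s) / U[j, j]   # lower-triangle unknown
            else:
                U[i, j] = A[i, j] - s               # upper-triangle unknown

print(np.allclose(L @ U, A))
```

In the asynchronous setting each equation is updated independently with whatever neighbor values are currently available, which is what removes the synchronization bottleneck at extreme scale.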
Zoonotic potential of emerging paramyxoviruses: knowns and unknowns
Thibault, Patricia A; Watkinson, Ruth E; Moreira-Soto, Andres; Drexler, Jan Felix; Lee, Benhur
2017-01-01
The risk of spillover of enzootic paramyxoviruses, and the susceptibility of recipient human and domestic animal populations, are defined by a broad collection of ecological and molecular factors that interact in ways that are not yet fully understood. Nipah and Hendra viruses were the first highly lethal zoonotic paramyxoviruses discovered in modern times, but other paramyxoviruses from multiple genera are present in bats and other reservoirs that have unknown potential to spill over into humans. We outline our current understanding of paramyxovirus reservoir hosts and the ecological factors that may drive spillover, and we explore the molecular barriers to spillover that emergent paramyxoviruses may encounter. By outlining what is known about enzootic paramyxovirus receptor usage, mechanisms of innate immune evasion, and other host-specific interactions, we highlight the breadth of unexplored avenues that may be important in understanding paramyxovirus emergence. PMID:28433050
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allan, Christopher M.; Awad, Agape M.; Johnson, Jarrett S.
Coenzyme Q (Q or ubiquinone) is a redox-active lipid composed of a fully substituted benzoquinone ring and a polyisoprenoid tail and is required for mitochondrial electron transport. In the yeast Saccharomyces cerevisiae, Q is synthesized by the products of 11 known genes, COQ1–COQ9, YAH1, and ARH1. The function of some of the Coq proteins remains unknown, and several steps in the Q biosynthetic pathway are not fully characterized. Several of the Coq proteins are associated in a macromolecular complex on the matrix face of the inner mitochondrial membrane, and this complex is required for efficient Q synthesis. In this paper, we further characterize this complex via immunoblotting and proteomic analysis of tandem affinity-purified tagged Coq proteins. We show that Coq8, a putative kinase required for the stability of the Q biosynthetic complex, is associated with a Coq6-containing complex. Additionally, Q6 and late-stage Q biosynthetic intermediates were also found to co-purify with the complex. A mitochondrial protein of unknown function, encoded by the YLR290C open reading frame, is also identified as a constituent of the complex and is shown to be required for efficient de novo Q biosynthesis. Finally, given its effect on Q synthesis and its association with the biosynthetic complex, we propose that the open reading frame YLR290C be designated COQ11.
Kaufmann, Anton
2010-07-30
Elemental compositions (ECs) can be elucidated by evaluating the high-resolution mass spectra of unknown or suspected unfragmented analyte ions. Classical approaches utilize the exact mass of the monoisotopic peak (M + 0) and the relative abundance of isotope peaks (M + 1 and M + 2). The availability of high-resolution instruments like the Orbitrap currently permits mass resolutions up to 100,000 full width at half maximum. This not only allows the determination of relative isotopic abundances (RIAs), but also the extraction of other diagnostic information from the spectra, such as fully resolved signals originating from (34)S isotopes and fully or partially resolved signals related to (15)N isotopes (isotopic fine structure). Fully and partially resolved peaks can be evaluated by visual inspection of the measured peak profiles. This approach is shown to be capable of correctly discarding many of the EC candidates which were proposed by commercial EC calculating algorithms. Using this intuitive strategy significantly extends the upper mass range for the successful elucidation of ECs. Copyright 2010 John Wiley & Sons, Ltd.
USDA-ARS?s Scientific Manuscript database
The net effect of elevated [CO2] and temperature on photosynthetic acclimation and plant productivity is poorly resolved. We assessed the effects of canopy warming and fully open air [CO2] enrichment on 1) the acclimation of two biochemical parameters that frequently limit photosynthesis (A), the ma...
Impurity bound states in fully gapped d-wave superconductors with subdominant order parameters
Mashkoori, Mahdi; Björnson, Kristofer; Black-Schaffer, Annica M.
2017-01-01
Impurities in superconductors and their induced bound states are important both for engineering novel states such as Majorana zero-energy modes and for probing bulk properties of the superconducting state. The high-temperature cuprates offer a clear advantage in a much larger superconducting order parameter, but the nodal energy spectrum of a pure d-wave superconductor only allows virtual bound states. Fully gapped d-wave superconducting states have, however, been proposed in several cuprate systems thanks to subdominant order parameters producing d + is- or d + id′-wave superconducting states. Here we study both magnetic and potential impurities in these fully gapped d-wave superconductors. Using analytical T-matrix and complementary numerical tight-binding lattice calculations, we show that magnetic and potential impurities behave fundamentally different in d + is- and d + id′-wave superconductors. In a d + is-wave superconductor, there are no bound states for potential impurities, while a magnetic impurity produces one pair of bound states, with a zero-energy level crossing at a finite scattering strength. On the other hand, a d + id′-wave symmetry always gives rise to two pairs of bound states and only produce a reachable zero-energy level crossing if the normal state has a strong particle-hole asymmetry. PMID:28281570
NASA Astrophysics Data System (ADS)
Tehsin, Sara; Rehman, Saad; Riaz, Farhan; Saeed, Omer; Hassan, Ali; Khan, Muazzam; Alam, Muhammad S.
2017-05-01
A fully invariant system helps in resolving difficulties in object detection when camera or object orientation and position are unknown. In this paper, the proposed correlation filter based mechanism provides the capability to suppress noise, clutter and occlusion. Minimum Average Correlation Energy (MACE) filter yields sharp correlation peaks while considering the controlled correlation peak value. Difference of Gaussian (DOG) Wavelet has been added at the preprocessing stage in proposed filter design that facilitates target detection in orientation variant cluttered environment. Logarithmic transformation is combined with a DOG composite minimum average correlation energy filter (WMACE), capable of producing sharp correlation peaks despite any kind of geometric distortion of target object. The proposed filter has shown improved performance over some of the other variant correlation filters which are discussed in the result section.
Multi-objective optimization in quantum parameter estimation
NASA Astrophysics Data System (ADS)
Gong, BeiLi; Cui, Wei
2018-04-01
We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.
Developing population models with data from marked individuals
Hae Yeong Ryu,; Kevin T. Shoemaker,; Eva Kneip,; Anna Pidgeon,; Patricia Heglund,; Brooke Bateman,; Thogmartin, Wayne E.; Reşit Akçakaya,
2016-01-01
Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. 
This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
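The core object produced by such an analysis is a stochastic, stage-structured matrix model that can be projected forward. The sketch below shows the projection step only, with invented vital rates and lognormal environmental noise standing in for the estimated temporal variability (these are not MAPS-derived values):

```python
import numpy as np

# Stochastic projection of a two-stage (juvenile/adult) matrix model of the
# kind described above. All vital rates and noise levels are illustrative
# assumptions, not estimates from the mark-recapture analysis.
rng = np.random.default_rng(1)
fec, s_juv, s_ad = 1.2, 0.3, 0.55     # fecundity and survival (assumed)
n = np.array([50.0, 100.0])           # initial juveniles, adults

for year in range(20):
    eps = rng.lognormal(mean=0.0, sigma=0.1, size=3)  # temporal variability
    A = np.array([[0.0,            fec * eps[0]],
                  [s_juv * eps[1], s_ad * eps[2]]])
    n = A @ n                          # one annual transition
print(n)                               # projected stage abundances
```

A full PVA would repeat this projection over many replicates, add density-dependence, and propagate parameter uncertainty, as the abstract describes.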
NASA Astrophysics Data System (ADS)
Böning, Guido; Todica, Andrei; Vai, Alessandro; Lehner, Sebastian; Xiong, Guoming; Mille, Erik; Ilhan, Harun; la Fougère, Christian; Bartenstein, Peter; Hacker, Marcus
2013-11-01
The assessment of left ventricular function, wall motion and myocardial viability using electrocardiogram (ECG)-gated [18F]-FDG positron emission tomography (PET) is widely accepted in human and in preclinical small animal studies. The nonterminal and noninvasive approach permits repeated in vivo evaluations of the same animal, facilitating the assessment of temporal changes in disease or therapy response. Although well established, gated small animal PET studies can contain erroneous gating information, which may lead to blurred images and false estimates of functional parameters. In this work, we present quantitative and visual quality control (QC) methods to evaluate the accuracy of trigger events in PET list-mode and physiological data. Left ventricular functional analysis is performed to quantify the effect of gating errors on the end-systolic and end-diastolic volumes, and on the ejection fraction (EF). We aim to recover the cardiac functional parameters by applying the commonly established heart rate filter approach using fixed ranges based on a standardized population. In addition, we propose a fully reprocessing approach which retrospectively replaces the gating information of the PET list-mode file with appropriate list-mode decoding and encoding software. The signal of a simultaneously acquired ECG is processed using standard MATLAB vector functions, which can be individually adapted to reliably detect the R-peaks. Finally, the new trigger events are inserted into the PET list-mode file. A population of 30 mice with various health statuses was analyzed and standard cardiac parameters such as mean heart rate (119 ms ± 11.8 ms) and mean heart rate variability (1.7 ms ± 3.4 ms) were derived. These standard parameter ranges were taken into account in the QC methods to select a group of nine optimal gated and a group of eight sub-optimal gated [18F]-FDG PET scans of mice from our archive. 
From the list-mode files of the optimal gated group, we randomly deleted various fractions (5% to 60%) of contained trigger events to generate a corrupted group. The filter approach was capable to correct the corrupted group and yield functional parameters with no significant difference to the optimal gated group. We successfully demonstrated the potential of the fully reprocessing approach by applying it to the sub-optimal group, where the functional parameters were significantly improved after reprocessing (mean EF from 41% ± 16% to 60% ± 13%). When applied to the optimal gated group the fully reprocessing approach did not alter the functional parameters significantly (mean EF from 64% ± 8% to 64 ± 7%). This work presents methods to determine and quantify erroneous gating in small animal gated [18F]-FDG PET scans. We demonstrate the importance of a quality check for cardiac triggering contained in PET list-mode data and the benefit of optionally reprocessing the fully recorded physiological information to retrospectively modify or fully replace the cardiac triggering in PET list-mode data. We aim to provide a preliminary guideline of how to proceed in the presence of errors and demonstrate that offline reprocessing by filtering erroneous trigger events and retrospective gating by ECG processing is feasible. Future work will focus on the extension by additional QC methods, which may exploit the amplitude of trigger events and ECG signal by means of pattern recognition. Furthermore, we aim to transfer the proposed QC methods and the fully reprocessing approach to human myocardial PET/CT.
NASA Astrophysics Data System (ADS)
Zhao, L. W.; Du, J. G.; Yin, J. L.
2018-05-01
This paper proposes a novel secure communication scheme for a chaotic system by applying generalized function projective synchronization of the nonlinear Schrödinger equation. This approach guarantees secure and convenient communication. Our study applies the Melnikov theorem with an active control strategy to suppress chaos in the system. The transmitted information signal is modulated into a parameter of the nonlinear Schrödinger equation in the transmitter, and the corresponding parameter of the receiver system is assumed unknown. Based on Lyapunov stability theory and the adaptive control technique, controllers are designed to make two identical nonlinear Schrödinger equations with the unknown parameter asymptotically synchronized. The numerical simulation results confirm the validity, effectiveness and feasibility of the proposed synchronization method and error estimate for secure communication. Chaos masking of the information signal further safeguards the communicated information.
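The Lyapunov-based adaptive synchronization step can be sketched with a much simpler system. Below, a scalar linear system stands in for the nonlinear Schrödinger dynamics (an assumption for illustration); theta is the unknown transmitter parameter, and the adaptation law theta_hat' = e*y follows from a standard Lyapunov argument with V = e²/2 + (theta_hat − theta)²/2:

```python
# Minimal adaptive-synchronization sketch in the spirit of the scheme above.
# A scalar decay system replaces the nonlinear Schrodinger equation; the
# system, gains, and initial conditions are all illustrative assumptions.
theta = 1.5                   # unknown true parameter (transmitter side)
k = 2.0                       # control gain
dt, steps = 1e-3, 20000       # Euler integration over 20 s

x, y, theta_hat = 1.0, -0.5, 0.0   # master, slave, parameter estimate
for _ in range(steps):
    e = y - x                  # synchronization error
    u = -k * e                 # active control
    dx = -theta * x            # master dynamics (parameter unknown to slave)
    dy = -theta_hat * y + u    # slave uses its current estimate
    dtheta_hat = e * y         # Lyapunov-based adaptation law
    x += dt * dx
    y += dt * dy
    theta_hat += dt * dtheta_hat

print(abs(y - x))              # synchronization error decays toward zero
```

Once the error converges, the receiver can demodulate the masked information signal; here only the synchronization mechanism is shown.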
A review of the meteorological parameters which affect aerial application
NASA Technical Reports Server (NTRS)
Christensen, L. S.; Frost, W.
1979-01-01
The ambient wind field and temperature gradient were found to be the most important parameters. Investigation results indicated that the majority of meteorological parameters affecting dispersion were interdependent and that the exact mechanism by which these factors influence particle dispersion was largely unknown. The types and approximate ranges of instrumentation capabilities for a systematic study of the significant meteorological parameters influencing aerial application were defined. Current mathematical dispersion models were also briefly reviewed; unfortunately, a rigorous dispersion model that could be applied to aerial application was not available.
Highly adaptive tests for group differences in brain functional connectivity.
Kim, Junghi; Pan, Wei
2015-01-01
Resting-state functional magnetic resonance imaging (rs-fMRI) and other technologies have been offering evidence and insights showing that altered brain functional networks are associated with neurological illnesses such as Alzheimer's disease. Exploring brain networks of clinical populations compared to those of controls would be a key inquiry to reveal underlying neurological processes related to such illnesses. For such a purpose, group-level inference is a necessary first step in order to establish whether there are any genuinely disrupted brain subnetworks. Such an analysis is also challenging due to the high dimensionality of the parameters in a network model and high noise levels in neuroimaging data. We are still in the early stage of method development, as highlighted by Varoquaux and Craddock (2013): "there is currently no unique solution, but a spectrum of related methods and analytical strategies" to learn and compare brain connectivity. In practice, the important issue of how to choose several critical parameters in estimating a network, such as which association measure to use and how sparse the estimated network should be, has not been carefully addressed, largely because the answers are unknown yet. For example, even though the choice of tuning parameters in model estimation has been extensively discussed in the literature, as shown here, an optimal choice of a parameter for network estimation may not be optimal in the current context of hypothesis testing. Arbitrarily choosing or mis-specifying such parameters may lead to extremely low-powered tests. Here we develop highly adaptive tests to detect group differences in brain connectivity while accounting for unknown optimal choices of some tuning parameters. The proposed tests combine statistical evidence against a null hypothesis from multiple sources across a range of plausible tuning parameter values, reflecting uncertainty about the unknown truth. 
These highly adaptive tests are not only easy to use, but also high-powered robustly across various scenarios. The usage and advantages of these novel tests are demonstrated on an Alzheimer's disease dataset and simulated data.
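One simple way to "combine evidence across tuning parameters" is a minimum-p-value statistic calibrated by permutation. The toy sketch below is not the authors' test: the data, the hard-threshold "sparsity" grid, and the L1 group-difference statistic are all invented for illustration, and the final min-p itself would need its own permutation calibration (omitted here for brevity):

```python
import numpy as np

# Toy adaptive test: take the smallest permutation p-value over a grid of
# tuning parameters (here, hard thresholds standing in for network
# sparsity). All data and choices below are illustrative assumptions.
rng = np.random.default_rng(0)
g1 = rng.normal(0.0, 1.0, (20, 5))   # group 1: 20 subjects, 5 "edges"
g2 = rng.normal(0.8, 1.0, (20, 5))   # group 2: shifted connectivity

def min_p(a, b, thresholds=(0.0, 0.5, 1.0)):
    """Smallest p-value over a grid of threshold tuning parameters."""
    ps = []
    for t in thresholds:
        x = np.where(np.abs(a) > t, a, 0)   # crude sparsity stand-in
        y = np.where(np.abs(b) > t, b, 0)
        stat = np.abs(x.mean(0) - y.mean(0)).sum()
        pooled = np.vstack([x, y])
        cnt = 0
        for _ in range(200):                # permutation null for this t
            idx = rng.permutation(len(pooled))
            px, py = pooled[idx[:20]], pooled[idx[20:]]
            if np.abs(px.mean(0) - py.mean(0)).sum() >= stat:
                cnt += 1
        ps.append((cnt + 1) / 201)
    return min(ps)

print(min_p(g1, g2))   # small: the groups differ at some tuning choice
```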
Yu, Zhaoxu; Li, Shugang; Yu, Zhaosheng; Li, Fangfei
2018-04-01
This paper investigates the problem of output feedback adaptive stabilization for a class of nonstrict-feedback stochastic nonlinear systems with both unknown backlash-like hysteresis and unknown control directions. A new linear state transformation is applied to the original system, which makes control design for the transformed system feasible. By combining neural network (NN) parameterization, the variable separation technique, and the Nussbaum gain function method, an input-driven observer-based adaptive NN control scheme, which involves only one parameter to be updated, is developed for such systems. All closed-loop signals are bounded in probability, and the error signals remain semiglobally bounded in the fourth moment (or mean square). Finally, the effectiveness and applicability of the proposed control design are verified by two simulation examples.
Propagation characteristics of electromagnetic waves in dusty plasma with full ionization
NASA Astrophysics Data System (ADS)
Dan, Li; Guo, Li-Xin; Li, Jiang-Ting
2018-01-01
This study investigates the propagation characteristics of electromagnetic (EM) waves in fully ionized dusty plasmas. The propagation characteristics of fully ionized plasma with and without dust under the Fokker-Planck-Landau (FPL) and Bhatnagar-Gross-Krook (BGK) models are compared to those of weakly ionized plasmas by using the propagation matrix method. It is shown that the FPL model is suitable for the analysis of the propagation characteristics of weakly collisional and fully ionized dusty plasmas, as is the BGK model. The influence of varying the dust parameters on the propagation properties of EM waves in the fully ionized dusty plasma was analyzed using the FPL model. The simulation results indicated that the densities and average radii of dust grains influence the reflection and transmission coefficients of fully ionized dusty plasma slabs. These results may be utilized to analyze the effects of interaction between EM waves and dusty plasmas, such as those associated with hypersonic vehicles.
Elastohydrodynamic lubrication of elliptical contacts
NASA Technical Reports Server (NTRS)
Hamrock, B. J.
1981-01-01
The determination of the minimum film thickness within the contact is considered for both fully flooded and starved conditions. A fully flooded conjunction is one in which the film thickness is not significantly changed when the amount of lubricant is increased. The fully flooded results presented show the influence of contact geometry on minimum film thickness as expressed by the ellipticity parameter and the dimensionless speed, load, and materials parameters. These results are applied to materials of high elastic modulus (hard EHL), such as metal, and to materials of low elastic modulus (soft EHL), such as rubber. In addition to the film thickness equations that are developed, contour plots of pressure and film thickness are given which show the essential features of elastohydrodynamically lubricated conjunctions. The crescent-shaped region of minimum film thickness, with its side lobes in which the separation between the solids is a minimum, clearly emerges in the numerical solutions. In addition to the 3 cases presented for the fully flooded results, 15 more cases are used for hard EHL contacts and 18 cases for soft EHL contacts in a theoretical study of the influence of lubricant starvation on film thickness and pressure. From the starved results for both hard and soft EHL contacts, a simple and important dimensionless inlet boundary distance is specified. This inlet boundary distance defines whether a fully flooded or a starved condition exists in the contact. Contour plots of pressure and film thickness in and around the contact are shown for these conditions.
NASA Astrophysics Data System (ADS)
Jensen, Robert K.; Fletcher, P.; Abraham, C.
1991-04-01
The segment mass proportions and moments of inertia of a sample of twelve females and seven males, with mean ages of 67.4 and 69.5 years respectively, were estimated using textbook proportions based on cadaver studies. These were then compared with the parameters calculated using a mathematical model, the zone method. The methodology of the model was fully evaluated for accuracy and precision and judged to be adequate. The comparisons show that, for some segments, female parameters are quite different from male parameters and are inadequately predicted by the cadaver proportions. The largest discrepancies were for the thigh and the trunk. The cadaver predictions were generally less than satisfactory, although the common variance for some segments was moderately high. The use of non-linear regression and segment anthropometry was illustrated for the thigh moments of inertia and appears to be appropriate. However, the predictions from cadaver data need to be examined fully. These results are dependent on the changes in mass and density distribution that occur with aging and the changes that occur in cadaver samples prior to and following death.
Pentaerythritol Tetranitrate (PETN) Surveillance by HPLC-MS: Instrumental Parameters Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey, C A; Meissner, R
Surveillance of PETN homologs in the stockpile here at LLNL is currently carried out by high-performance liquid chromatography (HPLC) with ultraviolet (UV) detection. Identification of unknown chromatographic peaks with this detection scheme is severely limited. The design agency is aware of the limitations of this methodology and ordered this study to develop instrumental parameters for the use of a currently owned mass spectrometer (MS) as the detection system. The resulting procedure would be a "drop-in" replacement for the current surveillance method (ERD04-524). The addition of quadrupole mass spectrometry provides qualitative identification of PETN and its homologs (Petrin, DiPEHN, TriPEON, and TetraPEDN) using an LLNL-generated database, while providing mass clues to the identity of unknown chromatographic peaks.
Sample extraction and injection with a microscale preconcentrator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Alex Lockwood; Chan, Helena Kai Lun
2007-09-01
This report details the development of a microfabricated preconcentrator that functions as a fully integrated chemical extractor-injector for a microscale gas chromatograph (GC). The device enables parts-per-billion detection and quantitative analysis of volatile organic compounds (VOCs) in indoor air with size and power advantages over macro-scale systems. The 44 mm³ preconcentrator extracts VOCs using highly adsorptive, granular forms of graphitized carbon black and carbon molecular sieves. The micron-sized silicon cavities have integrated heating and temperature sensing, allowing low-power yet rapid heating to thermally desorb the collected VOCs (GC injection). The keys to device construction are a new adsorbent-solvent filling technique and a solvent-tolerant, wafer-level silicon-gold eutectic bonding technology. The product is the first granular-adsorbent preconcentrator integrated at the wafer level. Other advantages include exhaustive VOC extraction and injection peak widths an order of magnitude narrower than those of predecessor prototypes. A mass transfer model, the first for any microscale preconcentrator, is developed to describe both adsorption and desorption behaviors. The physically intuitive model uses implicit and explicit finite differences to numerically solve the required partial differential equations. The model is applied to the adsorption and desorption of decane at various concentrations to extract Langmuir adsorption isotherm parameters from effluent curve measurements where properties are unknown a priori.
Meyer, Timothy E; Karamanoglu, Mustafa; Ehsani, Ali A; Kovács, Sándor J
2004-11-01
Impaired exercise tolerance, determined by peak oxygen consumption (VO2 peak), is predictive of mortality and the necessity for cardiac transplantation in patients with chronic heart failure (HF). However, the role of left ventricular (LV) diastolic function at rest, reflected by chamber stiffness assessed echocardiographically, as a determinant of exercise tolerance is unknown. Increased LV chamber stiffness and limitation of VO2 peak are known correlates of HF. Yet, the relationship between chamber stiffness and VO2 peak in subjects with HF has not been fully determined. Forty-one patients with HF (New York Heart Association [NYHA] class 2.4 +/- 0.8, mean +/- SD) had echocardiographic studies and VO2 peak measurements. Transmitral Doppler E waves were analyzed using a previously validated method to determine k, the LV chamber stiffness parameter. Multiple linear regression analysis of VO2 peak variance indicated that LV chamber stiffness k (r2 = 0.55) and NYHA classification (r2 = 0.43) were its best independent predictors; taken together they account for 59% of the variability in VO2 peak. We conclude that diastolic function at rest, as manifested by chamber stiffness, is a major determinant of maximal exercise capacity in HF.
NASA Astrophysics Data System (ADS)
Osezua Aikhuele, Daniel; Mohd Turan, Faiz
2016-02-01
The instability in today's market and the emerging demands for mass-customized products are driving companies to seek cost-effective and time-efficient improvements in their production systems, and this has created real pressure to adopt new developmental architectures and operational parameters in order to remain competitive in the market. Among the developmental architectures adopted is the integration of lean thinking in the product development process. However, due to a lack of clear understanding of lean performance and its measurement, many companies are unable to implement and fully integrate the lean principle into their product development process. Without a proper performance measurement, the performance level of the organizational value stream remains unknown, the specific areas of improvement relating to the lean product development (LPD) program cannot be tracked, and the result is poor decision making in LPD implementation. This paper therefore presents a conceptual model for the evaluation of LPD performance by identifying and analysing the core existing LPD enablers (chief engineer, cross-functional teams, set-based engineering, poka-yoke (mistake-proofing), knowledge-based environment, value-focused planning and development, top management support, technology, supplier integration, workforce commitment, and continuous improvement culture) for assessing LPD performance.
NASA Astrophysics Data System (ADS)
Bottasso, C. L.; Croce, A.; Riboldi, C. E. D.
2014-06-01
The paper presents a novel approach for the synthesis of the open-loop pitch profile during emergency shutdowns. The problem is of interest in the design of wind turbines, as such maneuvers often generate design-driving loads on some of the machine components. The pitch profile synthesis is formulated as a constrained optimal control problem, solved numerically using a direct single shooting approach. A cost function expressing a compromise between load reduction and rotor overspeed is minimized with respect to the unknown blade pitch profile. Constraints may include a load reduction not to exceed the next dominating loads, a maximum rotor speed not to be exceeded, and a maximum achievable blade pitch rate. Cost function and constraints are computed over a possibly large number of operating conditions, defined so as to cover as well as possible the operating situations encountered in the lifetime of the machine. All such conditions are simulated by using a high-fidelity aeroservoelastic model of the wind turbine, ensuring the accuracy of the evaluation of all relevant parameters. The paper demonstrates the capabilities of the novel proposed formulation by optimizing the pitch profile of a multi-MW wind turbine. Results show that the procedure can reliably identify optimal pitch profiles that reduce design-driving loads, in a fully automated way.
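As a rough illustration of direct single shooting on this kind of problem, the toy sketch below optimizes a single constant pitch rate against a one-state rotor model. The inertia, torque levels, cost weights, and load proxy are invented for the example and bear no relation to the paper's aeroservoelastic model:

```python
import numpy as np

# Toy one-state rotor model (all numbers illustrative assumptions):
# rotor speed responds to aerodynamic torque that dies off as the blades pitch out.
J, omega0 = 1.0e7, 1.6                      # rotor inertia [kg m^2], initial speed [rad/s]

def simulate(rate, dt=0.05, T=30.0):
    """Forward-simulate the shutdown; return the peak rotor speed."""
    omega, pitch, omega_max = omega0, 0.0, omega0
    for _ in range(int(T / dt)):
        pitch = min(pitch + rate * dt, np.deg2rad(90.0))
        Q_aero = 8.0e6 * np.cos(pitch) * (omega / omega0)    # torque decays with pitch
        omega = max(omega + (Q_aero - 6.0e6) * dt / J, 0.0)  # 6 MN m resisting torque
        omega_max = max(omega_max, omega)
    return omega_max

def cost(rate, w_speed=10.0, w_load=5.0):
    # compromise between overspeed and a crude load proxy (aggressive pitching
    # drives thrust-reversal loads, so the pitch rate itself is penalized)
    overspeed = max(simulate(rate) - omega0, 0.0)
    return w_speed * overspeed + w_load * rate

rates = np.deg2rad(np.linspace(1.0, 10.0, 40))   # candidate pitch-rate grid
best = min(rates, key=cost)                      # "single shooting" over one parameter
```

A real implementation would parameterize the whole pitch time history, enforce the constraints explicitly, and evaluate the cost over many operating conditions, but the shoot-simulate-score loop is the same.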
An improved non-Markovian degradation model with long-term dependency and item-to-item uncertainty
NASA Astrophysics Data System (ADS)
Xi, Xiaopeng; Chen, Maoyin; Zhang, Hanwen; Zhou, Donghua
2018-05-01
It is widely noted in the literature that the degradation should be simplified into a memoryless Markovian process for the purpose of predicting the remaining useful life (RUL). However, there actually exists long-term dependency in the degradation processes of some industrial systems, including electromechanical equipment, oil tankers, and large blast furnaces. This implies the new degradation state depends not only on the current state, but also on the historical states. Such dynamic systems cannot be accurately described by traditional Markovian models. Here we present an improved non-Markovian degradation model with both long-term dependency and item-to-item uncertainty. As a typical non-stationary process with dependent increments, fractional Brownian motion (FBM) is utilized to simulate the fractal diffusion of practical degradations. The uncertainty among multiple items can be represented by a random variable of the drift. Based on this model, the unknown parameters are estimated through the maximum likelihood (ML) algorithm, while a closed-form solution to the RUL distribution is further derived using a weak convergence theorem. The practicability of the proposed model is fully verified by two real-world examples. The results demonstrate that the proposed method can effectively reduce the prediction error.
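A minimal sketch of the model class, with invented numbers: degradation paths X(t) = lambda * t + sigma * B_H(t), where B_H is fractional Brownian motion with Hurst index H > 0.5 (long-term dependency) and the drift lambda is random across items. The ML estimation and RUL derivation of the paper are replaced here by exact Cholesky simulation and a crude least-squares drift estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
H, sigma, n = 0.7, 0.05, 100                  # Hurst index, diffusion, grid size (assumed)
t = np.linspace(0.01, 1.0, n)

# Exact fBm sampling via a Cholesky factor of its covariance kernel
# cov(t, s) = 0.5 * (t^{2H} + s^{2H} - |t - s|^{2H})
cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
             - np.abs(t[:, None] - t[None, :]) ** (2 * H))
L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))

# Item-to-item uncertainty: each unit draws its own random drift
drifts = rng.normal(1.0, 0.2, size=5)
paths = [lam * t + sigma * (L @ rng.standard_normal(n)) for lam in drifts]

# Crude per-item drift estimate by least squares of X(t) on t
est = np.array([np.dot(t, x) / np.dot(t, t) for x in paths])
```

With H = 0.5 the same code reduces to ordinary Brownian motion, which is the memoryless special case the paper argues against.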
Constructing general partial differential equations using polynomial and neural networks.
Zjavka, Ladislav; Pedrycz, Witold
2016-01-01
Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters, with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, as simple low-order polynomials cannot fully capture complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
STEM Educators' Integration of Formative Assessment in Teaching and Lesson Design
NASA Astrophysics Data System (ADS)
Moreno, Kimberly A.
Air-breathing hypersonic vehicles, when fully developed, will offer travel in the atmosphere at unprecedented speeds. Capturing their physical behavior in analytical and numerical models is still a major challenge, which continues to limit the development of control technology for such vehicles. To study, in an exploratory manner, active control of air-breathing hypersonic vehicles, an analytical, simplified model of a generic hypersonic air-breathing vehicle in flight was developed by researchers at the Air Force Research Labs in Dayton, Ohio, along with control laws. Elevator deflection and fuel-to-air ratio were used as inputs. However, that model is very approximate, and the field of hypersonics still faces many unknowns. This thesis contributes to the study of control of air-breathing hypersonic vehicles in a number of ways. First, regarding control law synthesis, optimal gains are chosen for the previously developed control law, alongside an alternate control law modified from the existing literature, by minimizing the Lyapunov function derivative using Monte Carlo simulation. This is followed by analysis of the robustness of the control laws in the face of system parametric uncertainties using Monte Carlo simulations. The resulting statistical distributions of the commanded response are analyzed, and linear regression is used to determine, via sensitivity analysis, which uncertain parameters have the largest impact on the desired outcome.
A spline-based parameter estimation technique for static models of elastic structures
NASA Technical Reports Server (NTRS)
Dutt, P.; Taasan, S.
1986-01-01
The problem of identifying the spatially varying coefficient of elasticity using an observed solution to the forward problem is considered. Under appropriate conditions this problem can be treated as a first order hyperbolic equation in the unknown coefficient. Some continuous dependence results are developed for this problem and a spline-based technique is proposed for approximating the unknown coefficient, based on these results. The convergence of the numerical scheme is established and error estimates obtained.
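The idea can be illustrated with a manufactured 1-D example: treat a'(x)u'(x) + a(x)u''(x) = f(x) as a first-order equation in the unknown coefficient a, discretize a on a piecewise-linear ("linear spline") grid, and solve the resulting linear system in the nodal values. Everything below (the true coefficient, the observed solution, the anchoring of the inflow value) is an assumed toy setup, not the paper's scheme:

```python
import numpy as np

n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
a_true = 1.0 + 0.5 * np.sin(np.pi * x)        # coefficient to be recovered
u = np.sin(np.pi * x)                         # "observed" forward solution
up = np.pi * np.cos(np.pi * x)                # u'
upp = -np.pi ** 2 * np.sin(np.pi * x)         # u''
ap_true = 0.5 * np.pi * np.cos(np.pi * x)
f = ap_true * up + a_true * upp               # manufactured right-hand side

rows, rhs = [], []
def add_row(coeffs, val):
    r = np.zeros(n)
    for j, c in coeffs:
        r[j] += c
    rows.append(r)
    rhs.append(val)

# one-sided differences at the ends, central differences inside
add_row([(1, up[0] / h), (0, -up[0] / h + upp[0])], f[0])
for i in range(1, n - 1):
    add_row([(i + 1, up[i] / (2 * h)), (i - 1, -up[i] / (2 * h)), (i, upp[i])], f[i])
add_row([(n - 1, up[-1] / h + upp[-1]), (n - 2, -up[-1] / h)], f[-1])
# a known inflow value pins down the constant of integration
add_row([(0, 100.0)], 100.0 * a_true[0])

a_est, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
```

The anchor row plays the role of the hyperbolic inflow condition: without it, any multiple of the homogeneous solution a = C/u' could be added to the estimate.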
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for the unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. Through the off-line observer/Kalman filter identification method, one has a good initial guess of the modified NARMAX model to reduce the on-line system identification process time. Then, based on the modified NARMAX-based system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system, with an input-output direct transmission term, which also has measurement and system noises and inaccessible system states. Besides, an effective state-space self-tuner with a fault tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion is suggested by comparing the innovation process error estimated by the Kalman filter estimation algorithm, so that a weighting matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimate obtained by the Kalman filter estimation algorithm, is utilized to achieve the parameter estimation for faulty system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures by fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Assembly of objects with not fully predefined shapes
NASA Technical Reports Server (NTRS)
Arlotti, M. A.; Dimartino, V.
1989-01-01
An assembly problem in a non-deterministic environment, i.e., one where the parts to be assembled have unknown shape, size, and location, is described. The only knowledge used by the robot to perform the assembly operation is given by a connectivity rule and geometrical constraints concerning the parts. Once a set of geometrical features of the parts has been extracted by a vision system, applying such a rule allows the determination of the composition sequence. A suitable sensory apparatus allows control of the whole operation.
Siphonophores eat fish larger than their stomachs
NASA Astrophysics Data System (ADS)
Pagès, Francesc; Madin, Laurence P.
2010-12-01
We report a collection of the siphonophore Halistemma cupulifera, collected at 20 meters depth during a night SCUBA dive in the Sargasso Sea. One of its stomachs (gastrozooids) contained a leptocephalus larva of the eel Ariosoma sp. folded in thirds to fit, but 8.3 cm in length fully extended. This finding shows that in situ observations can reveal previously unknown trophic interactions that may be significant in a changing world ocean where gelatinous organisms seem to increase at the expense of fish.
Multiparameter Estimation in Networked Quantum Sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.
2018-02-21
We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.
Perez-Ecija, A; Mendoza, F J
2017-11-01
Studies have demonstrated differences in commonly measured haemostatic parameters between donkeys and horses. Whether clotting factors, anticoagulant protein activities and thromboelastography parameters also differ between species is still unknown. To characterise haemostatic parameters in healthy donkeys and to compare these with those in horses. Cross-sectional study. Clotting factors (V, VII, VIII, IX, X, XI and XII), and antithrombin III, Protein C and Protein S activities were measured in 80 healthy Andalusian and crossbred donkeys and 40 healthy Andalusian crossbred horses with assays based on human deficient plasmas. Thromboelastography was performed in 34 donkeys using a coagulation and platelet function analyser. Donkeys had shorter activated partial thromboplastin time (mean ± s.d. 33.4 ± 5.2 s vs. 38.8 ± 4.2 s; P<0.001) and higher Factor VII (1825 ± 206 vs. 1513 ± 174; P<0.001), IX (142 ± 41 vs. 114 ± 28; P<0.05) and XI (59.4 ± 14.0 vs. 27.2 ± 6.3; P<0.001) activities, whereas horses showed higher Factor X (130 ± 32 vs. 145 ± 23; P>0.05) and XII (96 ± 21 vs. 108 ± 15; P<0.001) activities. Antithrombin III (204 ± 26 vs. 174 ± 29; P<0.001), Protein C (33.16 ± 10.0 vs. 7.57 ± 1.70; P<0.001) and Protein S (median [interquartile range]: 7.8 [5.8-9.3] vs. 6.2 [5.2-7.0]; P<0.001) activities were higher in donkeys. Activated clot time (175 [159-189]), time to peak (6.5 [5.8-7.8]) and clot formation rate (26.9 [16.9-36.4]) in donkeys were shorter than reported values in horses. Haemostatic pathways could not be fully evaluated in donkeys because some tests are unavailable. Certain fibrinolytic parameters (plasmin, plasminogen, etc.) have not been characterised in donkeys and this may have affected our results. The haemostatic system in donkeys differs from that in horses and extrapolation of reference values between these species is not appropriate. © 2017 EVJ Ltd.
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while still describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as 'Method 1'), (2) PCA with MCMC sampling (referred to as 'Method 2'), and (3) PCA with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noised data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noised data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, and both Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA analysis is inconsistent with the true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
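A compact sketch in the spirit of "Method 2" (PCA reduction plus MCMC sampling), on an assumed toy convolution model; the prior covariance, noise level, and sampler settings are illustrative only, not the authors' exact setup:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy convolution forward model with five unknown parameters
m_true = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
kernel = np.array([0.25, 0.5, 0.25])
noise = 0.1

def forward(m):
    return np.convolve(m, kernel, mode="same")

data = forward(m_true) + rng.normal(0.0, noise, m_true.size)

# Model reduction: leading principal components of a correlated prior ensemble
idx = np.arange(5)
prior_cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)
ens = rng.multivariate_normal(np.zeros(5), prior_cov, size=500)
_, _, Vt = np.linalg.svd(ens - ens.mean(0), full_matrices=False)
basis = Vt[:3].T                       # keep 3 of 5 components

def loglike(c):
    r = data - forward(basis @ c)
    return -0.5 * np.sum((r / noise) ** 2)

# Random-walk Metropolis in the 3-D reduced coordinate space
c, ll, chain = np.zeros(3), loglike(np.zeros(3)), []
for _ in range(4000):
    prop = c + 0.1 * rng.standard_normal(3)
    llp = loglike(prop)
    if np.log(rng.uniform()) < llp - ll:   # Metropolis accept/reject
        c, ll = prop, llp
    chain.append(basis @ c)                # map back to full parameter space
post_mean = np.array(chain)[2000:].mean(0)
```

If `prior_cov` is inconsistent with the true model, the truth can fall outside the span of the retained components, which is exactly the biased-case failure mode discussed in the abstract.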
Noise parameter estimation for poisson corrupted images using variance stabilization transforms.
Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo
2014-03-01
Noise is present in all images captured by real-world image sensors. The Poisson distribution models the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson-corrupted images using properties of variance stabilization. With significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to state-of-the-art methods.
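One way to see the principle is the Anscombe root transform: for a Poisson count p, var(2*sqrt(p + 3/8)) is approximately 1, so for pixel values y = alpha * p the stabilized variance approximately equals the unknown gain alpha. The flat-patch simulation below is a simplified stand-in for the paper's estimator, with an invented gain and photon rates:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha_true = 2.5                      # unknown sensor gain: y = alpha * Poisson(lambda)
levels = [10, 40, 160, 640]           # flat patches at several photon rates (assumed)
patches = [alpha_true * rng.poisson(lam, 4096) for lam in levels]

# Anscombe-type stabilization: for large counts, var(2*sqrt(y + 3/8)) ~= alpha,
# independent of the (unknown) underlying intensity lambda.
ests = [np.var(2.0 * np.sqrt(y + 0.375), ddof=1) for y in patches]
alpha_est = float(np.median(ests))
```

The median across patches gives robustness against a patch whose intensity is too low for the large-count approximation to hold well.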
An easy-to-use tool for the evaluation of leachate production at landfill sites.
Grugnaletti, Matteo; Pantini, Sara; Verginelli, Iason; Lombardi, Francesco
2016-09-01
A simulation program for the evaluation of leachate generation at landfill sites is herein presented. The developed tool is based on a water balance model that accounts for all the key processes influencing leachate generation through analytical and empirical equations. After a short description of the tool, different simulations on four Italian landfill sites are shown. The obtained results revealed that when literature values were assumed for the unknown input parameters, the model provided only a rough estimation of the leachate production measured in the field; in some cases, the deviations between observed and predicted data were significant. Conversely, by performing a preliminary calibration of some of the unknown input parameters (e.g. initial moisture content of wastes, compression index), the model performance improved significantly in nearly all cases. These results, while showing the potential capability of a water balance model to estimate the leachate production at landfill sites, also highlighted the intrinsic limitation of a deterministic approach in accurately forecasting the leachate production over time. Indeed, parameters such as the initial water content of incoming waste and the compression index, which have a great influence on the leachate production, may exhibit temporal variation due to seasonal changes in weather conditions (e.g. rainfall, air humidity) as well as seasonal variability in the amount and type of specific waste fractions produced (e.g. yard waste, food, plastics), which makes their prediction quite complicated. In this sense, we believe that a tool such as the one proposed in this work, which requires a limited number of unknown parameters, can be more easily handled to quantify the uncertainties. Copyright © 2016 Elsevier Ltd. All rights reserved.
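The backbone of such a tool is a bucket-type water balance. The sketch below uses invented monthly rainfall and loss coefficients purely to show the mechanics (infiltration fills the waste moisture storage up to field capacity; the excess drains as leachate):

```python
# Minimal monthly water-balance sketch for leachate generation.
# All coefficients and rainfall values are illustrative assumptions,
# not the paper's calibrated model.
precip = [80, 60, 70, 50, 40, 20, 10, 15, 45, 90, 100, 95]   # mm/month
runoff_coeff, et_frac = 0.2, 0.45        # fractions lost to runoff / evapotranspiration
field_capacity, storage = 120.0, 100.0   # waste moisture storage [mm]

leachate = []
for p in precip:
    infiltration = p * (1 - runoff_coeff) * (1 - et_frac)
    storage += infiltration
    out = max(storage - field_capacity, 0.0)   # drainage once capacity is exceeded
    storage -= out
    leachate.append(out)
annual_leachate = sum(leachate)
```

The calibration step described in the abstract would amount to fitting parameters such as the initial `storage` and `field_capacity` against measured leachate volumes.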
NASA Astrophysics Data System (ADS)
Kernicky, Timothy; Whelan, Matthew; Al-Shaer, Ehab
2018-06-01
A methodology is developed for the estimation of internal axial force and boundary restraints within in-service, prismatic axial force members of structural systems using interval arithmetic and contractor programming. The determination of the internal axial force and end restraints in tie rods and cables using vibration-based methods has been a long-standing problem in the area of structural health monitoring and performance assessment. However, for structural members with low slenderness, where the dynamics are significantly affected by the boundary conditions, few existing approaches allow for simultaneous identification of internal axial force and end restraints, and none permits quantification of the uncertainties in the parameter estimates due to measurement uncertainties. This paper proposes a new technique for approaching this challenging inverse problem that leverages the Set Inversion Via Interval Analysis algorithm to solve for the unknown axial forces and end restraints using natural frequency measurements. The framework developed offers the ability to completely enclose the feasible solutions to the parameter identification problem, given specified measurement uncertainties for the natural frequencies. This ability to propagate measurement uncertainty into the parameter space is critical for quantifying the confidence in the individual parameter estimates to inform decision-making within structural health diagnosis and prognostication applications. The methodology is first verified with simulated data for a case with unknown rotational end restraints and then extended to a case with unknown translational and rotational end restraints. A laboratory experiment is then presented to demonstrate the application of the methodology to an axially loaded rod with progressively increased end restraint at one end.
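The set-inversion idea can be shown on a one-parameter toy problem: a spring-mass stand-in for the rod, in which we enclose every stiffness whose natural frequency falls inside the measured interval. The mass, search box, and measurement interval are assumed for illustration; SIVIA proper works on multi-dimensional boxes with contractors:

```python
import math

m = 2.0                                 # known mass [kg] (assumed)
f_meas = (1.55, 1.65)                   # measured frequency +/- uncertainty [Hz]

def freq_range(klo, khi):
    """Natural inclusion function: image of [klo, khi] under k -> sqrt(k/m)/(2*pi)."""
    return (math.sqrt(klo / m) / (2 * math.pi), math.sqrt(khi / m) / (2 * math.pi))

inside, boundary = [], []
stack = [(0.0, 500.0)]                  # initial search box for the stiffness k
while stack:
    klo, khi = stack.pop()
    flo, fhi = freq_range(klo, khi)
    if fhi < f_meas[0] or flo > f_meas[1]:
        continue                        # box provably infeasible: discard
    if flo >= f_meas[0] and fhi <= f_meas[1]:
        inside.append((klo, khi))       # box provably feasible: keep
    elif khi - klo < 0.05:
        boundary.append((klo, khi))     # undetermined but below width tolerance
    else:
        mid = 0.5 * (klo + khi)
        stack.extend([(klo, mid), (mid, khi)])

k_enclosure = (min(b[0] for b in inside + boundary),
               max(b[1] for b in inside + boundary))
```

Boxes proved infeasible are discarded, boxes proved feasible are kept, and undetermined boxes are bisected until the width tolerance is reached, so the union of `inside` and `boundary` is a guaranteed enclosure of the feasible stiffness set, which is how measurement uncertainty propagates into the parameter space.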
Factors Affecting the Item Parameter Estimation and Classification Accuracy of the DINA Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Hong, Yuan; Deng, Weiling
2010-01-01
To better understand the statistical properties of the deterministic inputs, noisy "and" gate cognitive diagnosis (DINA) model, the impact of several factors on the quality of the item parameter estimates and classification accuracy was investigated. Results of the simulation study indicate that the fully Bayes approach is most accurate when the…
Structure Elucidation of Unknown Metabolites in Metabolomics by Combined NMR and MS/MS Prediction
Boiteau, Rene M.; Hoyt, David W.; Nicora, Carrie D.; ...
2018-01-17
Here, we introduce a cheminformatics approach that combines highly selective and orthogonal structure elucidation parameters: accurate mass, MS/MS (MS2), and NMR in a single analysis platform to accurately identify unknown metabolites in untargeted studies. The approach starts with an unknown LC-MS feature, and then combines the experimental MS/MS and NMR information of the unknown to effectively filter the false positive candidate structures based on their predicted MS/MS and NMR spectra. We demonstrate the approach on a model mixture and then we identify an uncatalogued secondary metabolite in Arabidopsis thaliana. The NMR/MS2 approach is well suited for discovery of new metabolites in plant extracts, microbes, soils, dissolved organic matter, food extracts, biofuels, and biomedical samples, facilitating the identification of metabolites that are not present in experimental NMR and MS metabolomics databases.
Structure Elucidation of Unknown Metabolites in Metabolomics by Combined NMR and MS/MS Prediction
Hoyt, David W.; Nicora, Carrie D.; Kinmonth-Schultz, Hannah A.; Ward, Joy K.
2018-01-01
We introduce a cheminformatics approach that combines highly selective and orthogonal structure elucidation parameters: accurate mass, MS/MS (MS2), and NMR into a single analysis platform to accurately identify unknown metabolites in untargeted studies. The approach starts with an unknown LC-MS feature, and then combines the experimental MS/MS and NMR information of the unknown to effectively filter out the false positive candidate structures based on their predicted MS/MS and NMR spectra. We demonstrate the approach on a model mixture, and then we identify an uncatalogued secondary metabolite in Arabidopsis thaliana. The NMR/MS2 approach is well suited to the discovery of new metabolites in plant extracts, microbes, soils, dissolved organic matter, food extracts, biofuels, and biomedical samples, facilitating the identification of metabolites that are not present in experimental NMR and MS metabolomics databases. PMID:29342073
NASA Astrophysics Data System (ADS)
Ding, Liang; Gao, Haibo; Liu, Zhen; Deng, Zongquan; Liu, Guangjun
2015-12-01
Identifying the mechanical property parameters of planetary soil based on terramechanics models, using in-situ data obtained from autonomous planetary exploration rovers, is both an important scientific goal and essential for control strategy optimization and high-fidelity simulations of rovers. However, identifying all the terrain parameters is a challenging task because of the nonlinear and coupled nature of the involved functions. Three parameter identification methods are presented in this paper to serve different purposes, based on an improved terramechanics model that takes into account the effects of slip, wheel lugs, etc. Parameter sensitivity and coupling of the equations are analyzed, and the parameters are grouped according to their sensitivity to the normal force, resistance moment and drawbar pull. An iterative identification method using the original integral model is developed first. In order to realize real-time identification, the model is then simplified by linearizing the normal and shearing stresses to derive decoupled closed-form analytical equations. Each equation contains one or two groups of soil parameters, making step-by-step identification of all the unknowns feasible. Experiments were performed using six different types of single wheels as well as a four-wheeled rover moving on planetary soil simulant. All the unknown model parameters were identified using the measured data and compared with the values obtained by conventional experiments. It is verified that the proposed iterative identification method provides improved accuracy, making it suitable for scientific studies of soil properties, whereas the step-by-step identification methods based on simplified models require less calculation time, making them more suitable for real-time applications. The models have a margin of error of less than 10% compared with the measured results when predicting the interaction forces and moments using the corresponding identified parameters.
Variational formulation of hybrid problems for fully 3-D transonic flow with shocks in rotor
NASA Technical Reports Server (NTRS)
Liu, Gao-Lian
1991-01-01
Based on previous research, the unified variable-domain variational theory of hybrid problems for rotor flow is extended to fully 3-D transonic rotor flow with shocks, unifying and generalizing the direct and inverse problems. Three families of variational principles (VPs) were established. All unknown boundaries and flow discontinuities (such as shocks and free trailing vortex sheets) are successfully handled via functional variations with variable domain, converting almost all boundary and interface conditions, including the Rankine-Hugoniot shock relations, into natural ones. This theory provides a series of novel ways for blade design or modification and a rigorous theoretical basis for finite element applications, and also constitutes an important part of the optimal design theory of rotor bladings. Numerical solutions for subsonic flow by finite elements with self-adapting nodes, given in the references, show good agreement with experimental results.
NASA Astrophysics Data System (ADS)
Boulton, Chris A.; Allison, Lesley C.; Lenton, Timothy M.
2014-12-01
The Atlantic Meridional Overturning Circulation (AMOC) exhibits two stable states in models of varying complexity. Shifts between alternative AMOC states are thought to have played a role in past abrupt climate changes, but the proximity of the climate system to a threshold for future AMOC collapse is unknown. Generic early warning signals of critical slowing down before AMOC collapse have been found in climate models of low and intermediate complexity. Here we show that early warning signals of AMOC collapse are present in a fully coupled atmosphere-ocean general circulation model, subject to a freshwater hosing experiment. The statistical significance of signals of increasing lag-1 autocorrelation and variance varies with latitude. They give up to 250 years warning before AMOC collapse, after ~550 years of monitoring. Future work is needed to clarify suggested dynamical mechanisms driving critical slowing down as the AMOC collapse is approached.
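The two early-warning indicators named here, rising lag-1 autocorrelation and rising variance, are simple sliding-window statistics. The sketch below computes both on a synthetic AR(1) series whose memory parameter drifts upward, mimicking critical slowing down; the series and window length are illustrative, not the paper's model output:

```python
import random

def lag1_autocorr(x):
    # Sample lag-1 autocorrelation of a window of values.
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def variance(x):
    mean = sum(x) / len(x)
    return sum((v - mean) ** 2 for v in x) / len(x)

# Synthetic AR(1) series whose coefficient phi drifts toward 1, so that both
# memory and variance grow, as expected when approaching a tipping point.
random.seed(0)
series, v, steps = [], 0.0, 2000
for t in range(steps):
    phi = 0.2 + 0.7 * t / steps
    v = phi * v + random.gauss(0.0, 1.0)
    series.append(v)

w = 400
early, late = series[:w], series[-w:]
print(lag1_autocorr(early) < lag1_autocorr(late))  # True
print(variance(early) < variance(late))            # True
```

For an AR(1) process the lag-1 autocorrelation is approximately phi and the variance is approximately 1/(1 - phi^2), so both indicators rise together as phi approaches 1.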
Luijckx, Pepijn; Ben-Ami, Frida; Mouton, Laurence; Du Pasquier, Louis; Ebert, Dieter
2011-02-01
The degree of specificity in host-parasite interactions has important implications for ecology and evolution. Unfortunately, specificity can be difficult to determine when parasites cannot be cultured. In such cases, studies often use isolates of unknown genetic composition, which may lead to an underestimation of specificity. We obtained the first clones of the unculturable bacterium Pasteuria ramosa, a parasite of Daphnia magna. Clonal genotypes of the parasite exhibited much more specific interactions with host genotypes than previous studies using isolates. Clones of P. ramosa infected fewer D. magna genotypes than isolates and host clones were either fully susceptible or fully resistant to the parasite. Our finding enhances our understanding of the evolution of virulence and coevolutionary dynamics in this system. We recommend caution when using P. ramosa isolates as the presence of multiple genotypes may influence the outcome and interpretation of some experiments. © 2010 Blackwell Publishing Ltd/CNRS.
Bayesian source term determination with unknown covariance of measurements
NASA Astrophysics Data System (ADS)
Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav
2017-04-01
Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimating the source term in the conventional linear inverse problem y = Mx, where the relationship between the vector of observations y and the unknown source term x is described by the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and an unknown diagonal covariance matrix B. The covariance matrix R of the likelihood is also unknown. We consider two potential choices for the structure of the matrix R: the first is a diagonal matrix, and the second is a locally correlated structure using information on the topology of the measuring network. Since exact inference of the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by an application of the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014, Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
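For fixed R and B, the quadratic objective above has a closed-form minimizer, x_hat = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y. The sketch below evaluates it for the Tikhonov-like special case R = r*I, B = tau*I on a 2x2 toy system; the matrix and data are made-up numbers, not ETEX data:

```python
def solve2(a, b, c, d, e, f):
    # Solve the 2x2 system [[a, b], [c, d]] @ x = [e, f] by Cramer's rule.
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def map_estimate(M, y, r, tau):
    """MAP/Tikhonov estimate for y = M x with R = r*I and B = tau*I:
    x_hat = (M^T M / r + I / tau)^{-1} M^T y / r   (2x2 case only)."""
    (m11, m12), (m21, m22) = M
    # Normal-equation matrix A = M^T M / r + I / tau
    a = (m11 * m11 + m21 * m21) / r + 1.0 / tau
    b = (m11 * m12 + m21 * m22) / r
    c = b
    d = (m12 * m12 + m22 * m22) / r + 1.0 / tau
    # Right-hand side M^T y / r
    e = (m11 * y[0] + m21 * y[1]) / r
    f = (m12 * y[0] + m22 * y[1]) / r
    return solve2(a, b, c, d, e, f)

M = [[1.0, 0.5],
     [0.2, 1.0]]
y = [2.0, 1.0]   # toy "observations"
# With very weak regularization (large tau) the estimate approaches the
# exact solution of y = M x.
x = map_estimate(M, y, r=1.0, tau=1e6)
print(round(x[0], 3), round(x[1], 3))  # approx 1.667 0.667
```

The Bayesian treatment in the abstract goes further by placing priors on R and B themselves and iterating this kind of closed-form update inside a variational Bayes loop.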
NASA Astrophysics Data System (ADS)
Nakanishi, Taiki; Matsunaga, Maya; Kobayashi, Atsuki; Nakazato, Kazuo; Niitsu, Kiichi
2018-03-01
A 40-GHz fully integrated CMOS-based circuit for circulating tumor cell (CTC) analysis, consisting of an on-chip vector network analyzer (VNA) and a highly sensitive coplanar-line-based detection area, is presented in this paper. In this work, we introduce a fully integrated architecture that eliminates unwanted parasitic effects. The proposed analyzer was designed using 65 nm CMOS technology, and SPICE and MWS simulations were used to validate its operation. The simulations confirmed that the proposed circuit can measure S-parameter shifts resulting from the addition of various types of tumor cells to the detection area, for which data are provided in a previous study: the |S21| values for HepG2, A549, and HEC-1-A cells are -0.683, -0.580, and -0.623 dB, respectively. Additionally, measurement demonstrated a -25.7% change in the S-parameters when a silicone resin was placed on the circuit. Hence, the proposed system is expected to contribute to cancer diagnosis.
AutoBayes Program Synthesis System Users Manual
NASA Technical Reports Server (NTRS)
Schumann, Johann; Jafari, Hamed; Pressburger, Tom; Denney, Ewen; Buntine, Wray; Fischer, Bernd
2008-01-01
Program synthesis is the systematic, automatic construction of efficient executable code from high-level declarative specifications. AutoBayes is a fully automatic program synthesis system for the statistical data analysis domain; in particular, it solves parameter estimation problems. It has seen many successful applications at NASA and is currently being used, for example, to analyze simulation results for Orion. The input to AutoBayes is a concise description of a data analysis problem composed of a parameterized statistical model and a goal that is a probability term involving parameters and input data. The output is optimized and fully documented C/C++ code computing the values for those parameters that maximize the probability term. AutoBayes can solve many subproblems symbolically rather than having to rely on numeric approximation algorithms, thus yielding effective, efficient, and compact code. Statistical analysis is faster and more reliable, because effort can be focused on model development and validation rather than manual development of solution algorithms and code.
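The payoff of solving estimation problems symbolically can be seen in a toy case. For Gaussian data, the maximum-likelihood estimates have a closed form, which is exactly the kind of solution a synthesis system can emit instead of generating a numeric optimizer; this sketch is illustrative and is not AutoBayes output:

```python
import math
import random

def gaussian_mle(data):
    """Closed-form maximum-likelihood estimates for a Gaussian model:
    mu_hat is the sample mean and sigma2_hat the (biased) sample variance.
    These maximize the log-likelihood exactly, no iteration required."""
    n = len(data)
    mu = sum(data) / n
    sigma2 = sum((x - mu) ** 2 for x in data) / n
    return mu, sigma2

random.seed(1)
sample = [random.gauss(5.0, 2.0) for _ in range(10000)]
mu, sigma2 = gaussian_mle(sample)
print(round(mu, 1), round(math.sqrt(sigma2), 1))  # close to 5.0 and 2.0
```

A symbolic solution like this is faster and more robust than an equivalent gradient-based search, which is the efficiency argument the abstract makes for solving subproblems symbolically.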
Heo, Man Seung; Moon, Hyun Seok; Kim, Hee Chan; Park, Hyung Woo; Lim, Young Hoon; Paek, Sun Ha
2015-03-01
The purpose of this study was to develop a new deep-brain stimulation system for long-term use in animals, in order to develop a variety of neural prostheses. Our system has two distinguishing features: a fully implanted design with wearable wireless power transfer, and the ability to change the stimulus parameters. This is useful for obtaining a variety of data from long-term experiments. To validate our system, we performed a pre-clinical test in Parkinson's-disease rat models for 4 weeks. Through the in vivo test, we observed not only long-term implantability and stability but also free movement of the animals. We confirmed that the electrical stimulation neither caused any side effects nor damaged the electrodes. We demonstrated that our system can support long-term pre-clinical tests over a variety of parameters, which is valuable for the development of neural prostheses.
Zouari, Farouk; Ibeas, Asier; Boulkroune, Abdesselem; Cao, Jinde; Mehdi Arefi, Mohammad
2018-06-01
This study addresses the issue of adaptive output tracking control for a category of uncertain nonstrict-feedback delayed incommensurate fractional-order systems in the presence of nonaffine structures, unmeasured pseudo-states, unknown control directions, unknown actuator nonlinearities and output constraints. Firstly, the mean value theorem and the Gaussian error function are introduced to eliminate the difficulties that arise from the nonaffine structures and the unknown actuator nonlinearities, respectively. Secondly, the immeasurable tracking error variables are suitably estimated by constructing a fractional-order linear observer. Thirdly, the neural network, the Razumikhin Lemma, the variable separation approach, and the smooth Nussbaum-type function are used to deal with the uncertain nonlinear dynamics, the unknown time-varying delays, the nonstrict feedback and the unknown control directions, respectively. Fourthly, asymmetric barrier Lyapunov functions are employed to overcome the violation of the output constraints and to tune online the parameters of the adaptive neural controller. Through rigorous analysis, it is proved that the boundedness of all variables in the closed-loop system and semi-global asymptotic tracking are ensured without transgression of the constraints. The principal contributions of this study can be summarized as follows: (1) based on Caputo's definitions and new lemmas, methods concerning the controllability, observability and stability analysis of integer-order systems are extended to fractional-order ones; (2) the output tracking objective for a relatively large class of uncertain systems is achieved with a simple controller and fewer tuning parameters. Finally, computer-simulation studies from the robotics field are given to demonstrate the effectiveness of the proposed controller. Copyright © 2018 Elsevier Ltd. All rights reserved.
MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavel, D.T.
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of the system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.
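The likelihood-maximization loop such an identifier automates can be sketched on a toy problem: noisy measurements of a deterministic first-order system x_{t+1} = a*x_t with unknown a. This is a crude coarse-scan stand-in for MXLKID's optimizer, with made-up model and noise values:

```python
import math
import random

def simulate(a, n, noise, rng):
    # Noisy measurements of the deterministic trajectory x_{t+1} = a * x_t.
    x, ys = 1.0, []
    for _ in range(n):
        x = a * x
        ys.append(x + rng.gauss(0.0, noise))
    return ys

def neg_log_likelihood(a, ys, noise):
    # Gaussian measurement noise makes the log-likelihood a sum of squared
    # residuals between measurements and the model trajectory for this `a`.
    x, nll = 1.0, 0.0
    for y in ys:
        x = a * x
        nll += 0.5 * ((y - x) / noise) ** 2 + math.log(noise * math.sqrt(2.0 * math.pi))
    return nll

rng = random.Random(42)
ys = simulate(0.9, 200, 0.1, rng)
# Maximize the likelihood by a coarse scan over candidate parameter values;
# a real identifier would use a proper numeric maximizer instead.
candidates = [0.5 + 0.01 * k for k in range(50)]
a_hat = min(candidates, key=lambda a: neg_log_likelihood(a, ys, 0.1))
print(a_hat)  # close to the true value 0.9
```

Maximizing the likelihood is equivalent here to minimizing the negative log-likelihood, which is why the scan takes the candidate with the smallest `nll`.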
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohtori, Norikazu, E-mail: ohtori@chem.sc.niigata-u.ac.jp; Ishii, Yoshiki
Explicit expressions of the self-diffusion coefficient, D_i, and shear viscosity, η_sv, are presented for Lennard-Jones (LJ) binary mixtures in the liquid states along the saturated vapor line. The variables necessary for the expressions were derived from dimensional analysis of the properties: atomic mass, number density, packing fraction, temperature, and the size and energy parameters used in the LJ potential. The unknown dependence of the properties on each variable was determined by molecular dynamics (MD) calculations for an equimolar mixture of Ar and Kr at the temperature of 140 K and density of 1676 kg m⁻³. The scaling equations obtained by multiplying all the single-variable dependences can well express D_i and η_sv evaluated by the MD simulation for a whole range of compositions and temperatures without any significant coupling between the variables. The equation for D_i can also explain the dual atomic-mass dependence, i.e., the average-mass and the individual-mass dependence; the latter accounts for the "isotope effect" on D_i. The Stokes-Einstein (SE) relation obtained from these equations is fully consistent with the SE relation for pure LJ liquids and that for infinitely dilute solutions. The main differences from the original SE relation are the presence of dependence on the individual mass and on the individual energy parameter. In addition, the packing-fraction dependence turned out to bridge another gap between the present and original SE relations as well as unifying the SE relation between pure liquids and infinitely dilute solutions.
Assessment of Fragmentation Performance of Blast-enhanced Explosive Fragmentation Munitions
2010-10-01
velocity of the dilating case is not fully clear. Table 1 lists the PAX-Al (18.1% Al) JWL EOS (Jones-Wilkins-Lee Equation of State) parameters from Stiel [5] for varying mass fractions of reacted aluminum. Comparing the ... Maryland. [5] L. I. Stiel, 2008-2010, JAGUAR PAX-Al JWL EOS, unpublished work, Polytechnic Institute of New York University, Six MetroTech Center
On estimating the phase of periodic waveform in additive Gaussian noise, part 2
NASA Astrophysics Data System (ADS)
Rauch, L. L.
1984-11-01
Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.
A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications
NASA Technical Reports Server (NTRS)
Phan, Minh Q.
1998-01-01
This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.
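The key property the report exploits, that the model is linear in its unknown parameters even though the system itself is nonlinear, means the fit reduces to ordinary least squares. The sketch below fits a toy nonlinearity with a small polynomial basis standing in for the report's multi-resolution basis functions; all numbers are illustrative:

```python
def gauss_solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_basis(xs, ys, basis):
    """Least-squares fit of y ~ sum_k theta_k * basis[k](x). Because the model
    is linear in theta, the estimate solves the normal equations
    (Phi^T Phi) theta = Phi^T y."""
    A = [[sum(bi(x) * bj(x) for x in xs) for bj in basis] for bi in basis]
    b = [sum(bi(x) * y for x, y in zip(xs, ys)) for bi in basis]
    return gauss_solve(A, b)

# Polynomial "resolutions" stand in for the multi-resolution basis; the
# target nonlinearity is y = 2 + 0.5*x - x^2.
basis = [lambda x: 1.0, lambda x: x, lambda x: x * x]
xs = [i / 10.0 for i in range(-20, 21)]
ys = [2.0 + 0.5 * x - x * x for x in xs]
theta = fit_basis(xs, ys, basis)
print([round(t, 3) for t in theta])  # [2.0, 0.5, -1.0]
```

Pruning less significant basis functions, as the report describes, amounts to dropping columns of Phi whose coefficients contribute little to the fit and re-solving the same linear problem.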
3D tomographic reconstruction using geometrical models
NASA Astrophysics Data System (ADS)
Battle, Xavier L.; Cunningham, Gregory S.; Hanson, Kenneth M.
1997-04-01
We address the issue of reconstructing an object of constant interior density in the context of 3D tomography where there is prior knowledge about the unknown shape. We explore the direct estimation of the parameters of a chosen geometrical model from a set of radiographic measurements, rather than performing operations (segmentation for example) on a reconstructed volume. The inverse problem is posed in the Bayesian framework. A triangulated surface describes the unknown shape and the reconstruction is computed with a maximum a posteriori (MAP) estimate. The adjoint differentiation technique computes the derivatives needed for the optimization of the model parameters. We demonstrate the usefulness of the approach and emphasize the techniques of designing forward and adjoint codes. We use the system response of the University of Arizona Fast SPECT imager to illustrate this method by reconstructing the shape of a heart phantom.
Global identifiability of linear compartmental models--a computer algebra algorithm.
Audoly, S; D'Angiò, L; Saccomani, M P; Cobelli, C
1998-01-01
A priori global identifiability deals with the uniqueness of the solution for the unknown parameters of a model and is, thus, a prerequisite for parameter estimation of biological dynamic models. Global identifiability is, however, difficult to test, since it requires solving a system of algebraic nonlinear equations which increases both in nonlinearity degree and in number of terms and unknowns with increasing model order. In this paper, a computer algebra tool, GLOBI (GLOBal Identifiability), is presented, which combines the topological transfer function method with the Buchberger algorithm to test global identifiability of linear compartmental models. GLOBI allows for the automatic testing of a priori global identifiability of general-structure compartmental models from general multi-input multi-output experiments. Examples of the use of GLOBI to analyze the a priori global identifiability of some complex biological compartmental models are provided.
Multilevel adaptive control of nonlinear interconnected systems.
Motallebzadeh, Farzaneh; Ozgoli, Sadjaad; Momeni, Hamid Reza
2015-01-01
This paper presents an adaptive backstepping-based multilevel approach for the first time to control nonlinear interconnected systems with unknown parameters. The system consists of a nonlinear controller at the first level to neutralize the interaction terms, and some adaptive controllers at the second level, in which the gains are optimally tuned using genetic algorithm. The presented scheme can be used in systems with strong couplings where completely ignoring the interactions leads to problems in performance or stability. In order to test the suitability of the method, two case studies are provided: the uncertain double and triple coupled inverted pendulums connected by springs with unknown parameters. The simulation results show that the method is capable of controlling the system effectively, in both regulation and tracking tasks. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
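The core mechanism, adapting a controller gain online when a plant parameter is unknown, can be illustrated on a scalar plant. The following is a textbook Lyapunov-based adaptive law for x' = a*x + u with unknown a, not the paper's multilevel backstepping scheme, and all gains are illustrative:

```python
def simulate_adaptive(a_true, k, gamma, dt=0.001, steps=20000):
    """Adaptive regulation of the scalar plant x' = a*x + u with unknown a.
    Control law:   u = -(a_hat + k) * x
    Adaptation:    a_hat' = gamma * x**2
    (standard Lyapunov design: V = x^2/2 + (a - a_hat)^2 / (2*gamma)),
    simulated with forward Euler integration."""
    x, a_hat = 1.0, 0.0
    for _ in range(steps):
        u = -(a_hat + k) * x
        x += dt * (a_true * x + u)      # plant dynamics
        a_hat += dt * gamma * x * x     # parameter adaptation
    return x, a_hat

# The plant is unstable (a_true = 2 > 0) and a_hat starts at 0, yet the
# adaptation drives the state to zero without ever knowing a_true.
x_final, a_hat = simulate_adaptive(a_true=2.0, k=1.0, gamma=5.0)
print(abs(x_final) < 1e-3)  # True: state regulated to zero
```

Note that `a_hat` need not converge to `a_true`; the Lyapunov argument only guarantees that the regulation error goes to zero, which is typical of adaptive control.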
Allan, Christopher M.; Awad, Agape M.; Johnson, Jarrett S.; Shirasaki, Dyna I.; Wang, Charles; Blaby-Haas, Crysten E.; Merchant, Sabeeha S.; Loo, Joseph A.; Clarke, Catherine F.
2015-01-01
Coenzyme Q (Q or ubiquinone) is a redox active lipid composed of a fully substituted benzoquinone ring and a polyisoprenoid tail and is required for mitochondrial electron transport. In the yeast Saccharomyces cerevisiae, Q is synthesized by the products of 11 known genes, COQ1–COQ9, YAH1, and ARH1. The function of some of the Coq proteins remains unknown, and several steps in the Q biosynthetic pathway are not fully characterized. Several of the Coq proteins are associated in a macromolecular complex on the matrix face of the inner mitochondrial membrane, and this complex is required for efficient Q synthesis. Here, we further characterize this complex via immunoblotting and proteomic analysis of tandem affinity-purified tagged Coq proteins. We show that Coq8, a putative kinase required for the stability of the Q biosynthetic complex, is associated with a Coq6-containing complex. Additionally Q6 and late stage Q biosynthetic intermediates were also found to co-purify with the complex. A mitochondrial protein of unknown function, encoded by the YLR290C open reading frame, is also identified as a constituent of the complex and is shown to be required for efficient de novo Q biosynthesis. Given its effect on Q synthesis and its association with the biosynthetic complex, we propose that the open reading frame YLR290C be designated COQ11. PMID:25631044
Fuzzy similarity measures for ultrasound tissue characterization
NASA Astrophysics Data System (ADS)
Emara, Salem M.; Badawi, Ahmed M.; Youssef, Abou-Bakr M.
1995-03-01
Computerized ultrasound tissue characterization has become an objective means for the diagnosis of disease. It is difficult to differentiate diffuse liver diseases, namely cirrhotic and fatty liver, from a normal liver by visual inspection of ultrasound images. The visual criteria for differentiating diffuse diseases are rather confusing and highly dependent upon the sonographer's experience. Computerized tissue characterization is thus justified to quantitatively assist the sonographer in accurate differentiation and to minimize the risk of erroneous interpretation. In this paper we use the fuzzy similarity measure as an approximate reasoning technique to find the maximum degree of matching between an unknown case, defined by a feature vector, and a family of prototypes (the knowledge base). The feature vector used for the matching process contains 8 quantitative parameters (textural, acoustical, and speckle parameters) extracted from the ultrasound image. Matching an unknown case against the family of prototypes (cirrhotic, fatty, normal) proceeds in four steps: choose the membership functions for each parameter; obtain the fuzzification matrix for the unknown case and the family of prototypes; derive the similarity matrix by linguistic evaluation of two fuzzy quantities; and compute the degree of similarity by a simple aggregation method and fuzzy integrals. Finally, we find that the similarity measure results are comparable to neural network classification techniques, and the method can be used in medical diagnosis to determine the pathology of the liver and to monitor the extent of the disease.
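The four matching steps above can be sketched with a toy, one-feature version; the membership-function breakpoints and the use of a plain mean in place of the paper's fuzzy integrals are illustrative assumptions, not values from the study:

```python
def tri(x, a, b, c):
    # Triangular membership function rising from a, peaking at b, falling to c.
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical membership functions for one echo-texture parameter
# (e.g. mean gray level), one triple per class prototype.
PROTOTYPES = {
    "normal":    (20.0, 40.0, 60.0),
    "fatty":     (50.0, 70.0, 90.0),
    "cirrhotic": (35.0, 55.0, 75.0),
}

def similarity(feature_vector, prototypes=PROTOTYPES):
    # Degree of matching between an unknown case and each prototype;
    # the mean of the per-feature memberships stands in for the
    # aggregation/fuzzy-integral step of the paper.
    scores = {}
    for label, (a, b, c) in prototypes.items():
        memberships = [tri(x, a, b, c) for x in feature_vector]
        scores[label] = sum(memberships) / len(memberships)
    return max(scores, key=scores.get), scores

label, scores = similarity([42.0])
print(label)  # → normal
```

A real system would use all 8 textural, acoustical, and speckle parameters, with one membership function per parameter and class.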
Metabolome Profiling of Partial and Fully Reprogrammed Induced Pluripotent Stem Cells.
Park, Soon-Jung; Lee, Sang A; Prasain, Nutan; Bae, Daekyeong; Kang, Hyunsu; Ha, Taewon; Kim, Jong Soo; Hong, Ki-Sung; Mantel, Charlie; Moon, Sung-Hwan; Broxmeyer, Hal E; Lee, Man Ryul
2017-05-15
Acquisition of proper metabolomic fate is required to convert somatic cells toward fully reprogrammed pluripotent stem cells. The majority of induced pluripotent stem cells (iPSCs) are partially reprogrammed and have a transcriptome different from that of the pluripotent stem cells. The metabolomic profile and mitochondrial metabolic functions required to achieve full reprogramming of somatic cells to iPSC status have not yet been elucidated. Clarification of the metabolites underlying reprogramming mechanisms should enable further optimization to enhance the efficiency of obtaining fully reprogrammed iPSCs. In this study, we characterized the metabolites of human fully reprogrammed iPSCs, partially reprogrammed iPSCs, and embryonic stem cells (ESCs). Using capillary electrophoresis time-of-flight mass spectrometry-based metabolomics, we found that 89% of analyzed metabolites were similarly expressed in fully reprogrammed iPSCs and human ESCs (hESCs), whereas partially reprogrammed iPSCs shared only 74% similarly expressed metabolites with hESCs. Metabolomic profiling analysis suggested that converting mitochondrial respiration to glycolytic flux is critical for reprogramming of somatic cells into fully reprogrammed iPSCs. This characterization of metabolic reprogramming in iPSCs may enable the development of new reprogramming parameters for enhancing the generation of fully reprogrammed human iPSCs.
Spin vectors of asteroids 21 Lutetia, 196 Philomela, 250 Bettina, 337 Devosa, and 804 Hispania
NASA Technical Reports Server (NTRS)
Michalowski, Tadeusz
1992-01-01
Such parameters as shape, orientation of the spin axis, and prograde or retrograde rotation are important for understanding the collisional evolution of asteroids since the primordial epochs of solar system history. These parameters remain unknown for most asteroids and are poorly constrained for all but a few. This work presents results for five asteroids: 21, 196, 250, 337, and 804.
1995-11-01
Neural-network-based methods are described for estimating the unknown parameters of a postulated state-space model, covering two architectures: (i) the feed-forward neural network and (ii) the recurrent neural network [117-119].
User-customized brain computer interfaces using Bayesian optimization
NASA Astrophysics Data System (ADS)
Bashashati, Hossein; Ward, Rabab K.; Bashashati, Ali
2016-04-01
Objective. The brain characteristics of different people are not the same. Brain computer interfaces (BCIs) should thus be customized for each individual person. In motor-imagery based synchronous BCIs, a number of parameters (referred to as hyper-parameters) including the EEG frequency bands, the channels and the time intervals from which the features are extracted should be pre-determined based on each subject's brain characteristics. Approach. To determine the hyper-parameter values, previous work has relied on manual or semi-automatic methods that are not applicable to high-dimensional search spaces. In this paper, we propose a fully automatic, scalable and computationally inexpensive algorithm that uses Bayesian optimization to tune these hyper-parameters. We then build different classifiers trained on the sets of hyper-parameter values proposed by the Bayesian optimization. A final classifier aggregates the results of the different classifiers. Main Results. We have applied our method to 21 subjects from three BCI competition datasets. We have conducted rigorous statistical tests, and have shown the positive impact of hyper-parameter optimization in improving the accuracy of BCIs. Furthermore, we have compared our results to those reported in the literature. Significance. Unlike the best reported results in the literature, which are based on more sophisticated feature extraction and classification methods, and rely on prestudies to determine the hyper-parameter values, our method has the advantage of being fully automated, uses less sophisticated feature extraction and classification methods, and yields similar or superior results compared to the best performing designs in the literature.
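The hyper-parameter tuning loop can be sketched in miniature. Note the accuracy landscape is invented, and a crude distance-based surrogate with an upper-confidence-bound rule stands in for a full Gaussian-process Bayesian optimizer:

```python
import random

random.seed(0)

def cv_accuracy(band_hz):
    # Hypothetical stand-in for cross-validated classifier accuracy as a
    # function of one hyper-parameter (EEG band centre); a real BCI would
    # train and score a classifier here.
    return 0.9 - 0.002 * (band_hz - 12.0) ** 2

def surrogate(x, observed):
    # Crude stand-in for a Gaussian-process surrogate: inverse-distance
    # weighted mean prediction, with distance to the nearest evaluated
    # point as a rough uncertainty proxy.
    pairs = [(1.0 / (1e-6 + abs(x - xi)), yi) for xi, yi in observed]
    total = sum(w for w, _ in pairs)
    mean = sum(w * yi for w, yi in pairs) / total
    uncertainty = min(abs(x - xi) for xi, _ in observed)
    return mean, uncertainty

observed = [(x, cv_accuracy(x)) for x in (5.0, 25.0)]  # initial design
for _ in range(20):
    candidates = [random.uniform(4.0, 30.0) for _ in range(50)]

    def ucb(x):
        # Upper-confidence-bound acquisition: promising AND unexplored.
        mean, unc = surrogate(x, observed)
        return mean + 0.05 * unc

    x_next = max(candidates, key=ucb)
    observed.append((x_next, cv_accuracy(x_next)))

best_x, best_acc = max(observed, key=lambda p: p[1])
print(round(best_x, 1), round(best_acc, 3))
```

The loop alternates between exploring regions far from any evaluation and exploiting the current best region, which is the essential behaviour that makes such tuning scale to high-dimensional hyper-parameter spaces.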
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
Si, Wenjie; Dong, Xunde; Yang, Feifei
2018-03-01
This paper is concerned with the problem of decentralized adaptive backstepping state-feedback control for uncertain high-order large-scale stochastic nonlinear time-delay systems. For the control design of high-order large-scale nonlinear systems, only one adaptive parameter is constructed to overcome over-parameterization, and neural networks are employed to cope with the difficulties raised by completely unknown system dynamics and stochastic disturbances. The appropriate Lyapunov-Krasovskii functional and the property of hyperbolic tangent functions are then used, for the first time, to deal with the unknown unmatched time-delay interactions of high-order large-scale systems. Finally, on the basis of Lyapunov stability theory, a decentralized adaptive neural controller is developed that decreases the number of learning parameters. The actual controller can be designed so as to ensure that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB) and that the tracking error converges to a small neighborhood of zero. A simulation example further shows the validity of the design method. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.
2008-05-15
We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
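The Metropolis flavour of the MCMC inversion can be sketched for a toy single-spectrum case. The Cole-Cole parameter values, noise level, and the choice to hold ρ0 and c fixed are illustrative assumptions, not the paper's setup:

```python
import math
import random

random.seed(1)

def cole_cole(omega, rho0, m, tau, c):
    # Cole-Cole complex resistivity model used in SIP inversion.
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

OMEGAS = [10.0 ** k for k in range(-2, 5)]        # 7 angular frequencies
TRUE = dict(rho0=100.0, m=0.5, tau=0.01, c=0.5)   # assumed "true" parameters
SIGMA = 0.5                                       # assumed noise level

data = [complex(z.real + random.gauss(0, SIGMA), z.imag + random.gauss(0, SIGMA))
        for z in (cole_cole(w, **TRUE) for w in OMEGAS)]

def log_like(m, tau):
    # Gaussian likelihood on real and imaginary parts; rho0 and c held fixed.
    if not (0.0 < m < 1.0 and tau > 0.0):
        return -math.inf
    return sum(-(abs(d - cole_cole(w, TRUE["rho0"], m, tau, TRUE["c"])) ** 2)
               / (2 * SIGMA ** 2) for w, d in zip(OMEGAS, data))

# Metropolis sampling of chargeability m and time constant tau.
m, tau = 0.3, 0.05                                # arbitrary starting values
ll = log_like(m, tau)
samples = []
for i in range(20000):
    m_new, tau_new = m + random.gauss(0, 0.02), tau + random.gauss(0, 0.002)
    ll_new = log_like(m_new, tau_new)
    if random.random() < math.exp(min(0.0, ll_new - ll)):
        m, tau, ll = m_new, tau_new, ll_new
    if i >= 10000:                                # discard burn-in
        samples.append((m, tau))

m_mean = sum(s[0] for s in samples) / len(samples)
tau_mean = sum(s[1] for s in samples) / len(samples)
print(round(m_mean, 2), round(tau_mean, 3))
```

The chain's marginal histograms give the global uncertainty information described above; a Gauss-Newton scheme would instead maximize the same `log_like` from a single starting point.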
A fully Sinc-Galerkin method for Euler-Bernoulli beam models
NASA Technical Reports Server (NTRS)
Smith, R. C.; Bowers, K. L.; Lund, J.
1990-01-01
A fully Sinc-Galerkin method in both space and time is presented for fourth-order time-dependent partial differential equations with fixed and cantilever boundary conditions. The Sinc discretizations for the second-order temporal problem and the fourth-order spatial problems are presented. Alternate formulations for variable parameter fourth-order problems are given which prove to be especially useful when applying the forward techniques to parameter recovery problems. The discrete systems that correspond to the time-dependent partial differential equations of interest are then formulated. Computational issues are discussed and a robust and efficient algorithm for solving the resulting matrix system is outlined. Numerical results which highlight the method are given for problems with both analytic and singular solutions as well as fixed and cantilever boundary conditions.
Harada, Shingo; Kanao, Kenichiro; Yamamoto, Yuki; Arie, Takayuki; Akita, Seiji; Takei, Kuniharu
2014-12-23
A three-axis tactile force sensor that determines the touch and slip/friction force may advance artificial skin and robotic applications by fully imitating human skin. The ability to detect slip/friction and tactile forces simultaneously allows unknown objects to be held in robotic applications. However, the functionalities of flexible devices have been limited to a tactile force in one direction due to difficulties fabricating devices on flexible substrates. Here we demonstrate a fully printed fingerprint-like three-axis tactile force and temperature sensor for artificial skin applications. To achieve economic macroscale devices, these sensors are fabricated and integrated using only printing methods. Strain engineering enables the strain distribution to be detected upon applying a slip/friction force. By reading the strain difference at four integrated force sensors for a pixel, both the tactile and slip/friction forces can be analyzed simultaneously. As a proof of concept, the high sensitivity and selectivity for both force and temperature are demonstrated using a 3×3 array artificial skin that senses tactile, slip/friction, and temperature. Multifunctional sensing components for a flexible device are important advances for both practical applications and basic research in flexible electronics.
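The strain-difference readout described above can be sketched minimally, assuming a hypothetical four-gauge pixel layout (gauges north, south, east, west of the fingerprint-like bump) and made-up calibration gains; the real device's geometry and constants are not given here:

```python
# Hypothetical calibration gains for one sensor pixel.
KZ, KXY = 0.25, 0.5

def decode(n, s, e, w):
    # Recover normal (tactile) and shear (slip/friction) force components
    # from the strain difference across the four gauges of one pixel.
    fz = KZ * (n + s + e + w)   # common-mode strain -> normal force
    fx = KXY * (e - w)          # east-west imbalance -> x shear
    fy = KXY * (n - s)          # north-south imbalance -> y shear
    return fx, fy, fz

print(decode(1.0, 1.0, 1.0, 1.0))  # pure normal press: (0.0, 0.0, 1.0)
print(decode(1.2, 0.8, 1.0, 1.0))  # same press with a y-direction shear
```

Reading all four gauges per pixel is what lets one device report the tactile and slip/friction components simultaneously.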
Sittig, Dean F; Salimi, Mandana; Aiyagari, Ranjit; Banas, Colin; Clay, Brian; Gibson, Kathryn A; Goel, Ashutosh; Hines, Robert; Longhurst, Christopher A; Mishra, Vimal; Sirajuddin, Anwar M; Satterly, Tyler; Singh, Hardeep
2018-04-26
The Safety Assurance Factors for EHR Resilience (SAFER) guides were released in 2014 to help health systems conduct proactive risk assessment of electronic health record (EHR)-related safety policies, processes, procedures, and configurations. The extent to which SAFER recommendations are followed is unknown. We conducted risk assessments of 8 organizations of varying size, complexity, EHR, and EHR adoption maturity. Each organization self-assessed adherence to all 140 unique SAFER recommendations contained within the 9 guides (range 10-29 recommendations per guide). In each guide, recommendations were organized into 3 broad domains: "safe health IT" (45 recommendations); "using health IT safely" (80 recommendations); and "monitoring health IT" (15 recommendations). The 8 sites fully implemented 25 of 140 (18%) SAFER recommendations. The mean percentage of "fully implemented" recommendations per guide ranged from 94% (System Interfaces, 18 recommendations) to 63% (Clinical Communication, 12 recommendations). Adherence was higher for the "safe health IT" domain (82.1%) than for "using health IT safely" (72.5%) and "monitoring health IT" (67.3%). Despite the availability of recommendations on how to improve use of EHRs, most recommendations were not fully implemented. New national policy initiatives are needed to stimulate implementation of these best practices.
Nagasaki, Masao; Yamaguchi, Rui; Yoshida, Ryo; Imoto, Seiya; Doi, Atsushi; Tamada, Yoshinori; Matsuno, Hiroshi; Miyano, Satoru; Higuchi, Tomoyuki
2006-01-01
We propose an automatic construction method of the hybrid functional Petri net as a simulation model of biological pathways. The problems we consider are how we choose the values of parameters and how we set the network structure. Usually, we tune these unknown factors empirically so that the simulation results are consistent with biological knowledge. Obviously, this approach has the limitation in the size of network of interest. To extend the capability of the simulation model, we propose the use of data assimilation approach that was originally established in the field of geophysical simulation science. We provide genomic data assimilation framework that establishes a link between our simulation model and observed data like microarray gene expression data by using a nonlinear state space model. A key idea of our genomic data assimilation is that the unknown parameters in simulation model are converted as the parameter of the state space model and the estimates are obtained as the maximum a posteriori estimators. In the parameter estimation process, the simulation model is used to generate the system model in the state space model. Such a formulation enables us to handle both the model construction and the parameter tuning within a framework of the Bayesian statistical inferences. In particular, the Bayesian approach provides us a way of controlling overfitting during the parameter estimations that is essential for constructing a reliable biological pathway. We demonstrate the effectiveness of our approach using synthetic data. As a result, parameter estimation using genomic data assimilation works very well and the network structure is suitably selected.
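The parameter-tuning-as-Bayesian-inference idea can be sketched with a one-parameter toy pathway. The production/degradation model, prior, and noise level below are invented stand-ins for a hybrid functional Petri net model and microarray data:

```python
import random

random.seed(2)

def simulate(deg, steps=50, dt=0.1, s=1.0, x0=0.0):
    # Toy "pathway" simulation model standing in for an HFPN: production
    # at rate s, first-order degradation at the unknown rate deg.
    xs, x = [], x0
    for _ in range(steps):
        x += dt * (s - deg * x)
        xs.append(x)
    return xs

TRUE_DEG, SIGMA = 0.5, 0.05
obs = [x + random.gauss(0, SIGMA) for x in simulate(TRUE_DEG)]  # noisy "data"

def log_posterior(deg):
    # Gaussian observation model linking simulation to data, plus a weak
    # Gaussian prior on the parameter that guards against overfitting.
    ll = sum(-(y - x) ** 2 / (2 * SIGMA ** 2)
             for y, x in zip(obs, simulate(deg)))
    log_prior = -(deg - 1.0) ** 2 / 2
    return ll + log_prior

grid = [i / 100 for i in range(1, 200)]
map_deg = max(grid, key=log_posterior)   # maximum a posteriori estimate
print(map_deg)  # close to the true degradation rate 0.5
```

In the paper the state-space machinery replaces this grid search with proper filtering-based inference, but the structure is the same: the simulation model generates the system model, and unknown simulation parameters become parameters of the statistical model.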
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Shaohua
2014-09-01
This paper is concerned with the problem of adaptive fuzzy dynamic surface control (DSC) for the permanent magnet synchronous motor (PMSM) system with chaotic behavior, disturbance and unknown control gain and parameters. Nussbaum gain is adopted to cope with the situation that the control gain is unknown. And the unknown items can be estimated by fuzzy logic system. The proposed controller guarantees that all the signals in the closed-loop system are bounded and the system output eventually converges to a small neighborhood of the desired reference signal. Finally, the numerical simulations indicate that the proposed scheme can suppress the chaos of PMSM and show the effectiveness and robustness of the proposed method.
Identification of vehicle suspension parameters by design optimization
NASA Astrophysics Data System (ADS)
Tey, J. Y.; Ramli, R.; Kheng, C. W.; Chong, S. Y.; Abidin, M. A. Z.
2014-05-01
The design of a vehicle suspension system through simulation requires accurate representation of the design parameters. These parameters are usually difficult to measure or are sometimes unavailable. This article proposes an efficient approach to identify the unknown parameters through optimization based on experimental results, where the covariance matrix adaptation evolution strategy (CMA-ES) is utilized to improve the agreement between simulated and experimental results in the kinematic and compliance tests. This speeds up the design and development cycle by recovering all the unknown data with respect to a set of kinematic measurements through a single optimization process. A case study employing a McPherson strut suspension system is modelled in a multi-body dynamic system. Three kinematic and compliance tests are examined, namely vertical parallel wheel travel, opposite wheel travel and single wheel travel. The problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design parameters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number of objectives to 30 by grouping correlated objectives together. A dynamic summation of rank values is then used as a pseudo-objective function to reformulate the multi-objective optimization as a single-objective optimization problem. The optimized results show a significant improvement in the correlation between the simulated model and the experimental model. Once an accurate representation of the vehicle suspension model is achieved, further analyses, such as ride and handling performance, can be implemented for further optimization.
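The summation-of-rank-values pseudo-objective mentioned above can be sketched directly. The candidate scores are invented; in the article this pseudo-objective feeds CMA-ES rather than the simple argmin used here:

```python
def rank_sum(objective_matrix):
    # Collapse a multi-objective problem to a single pseudo-objective by
    # summing each candidate's rank on every objective (lower = better).
    n_obj = len(objective_matrix[0])
    totals = [0] * len(objective_matrix)
    for j in range(n_obj):
        order = sorted(range(len(objective_matrix)),
                       key=lambda i: objective_matrix[i][j])
        for rank, i in enumerate(order):
            totals[i] += rank
    return totals

# Three candidate parameter sets scored on two error objectives (to minimize),
# e.g. kinematic-test mismatch and compliance-test mismatch.
errors = [[0.9, 0.4],
          [0.5, 0.8],
          [0.3, 0.3]]
totals = rank_sum(errors)
best = min(range(len(totals)), key=totals.__getitem__)
print(totals, best)  # [3, 3, 0] 2 — candidate 2 ranks best on both objectives
```

Rank summation makes objectives with very different scales commensurable without manual weighting, which matters when 30-40 objectives are combined.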
Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold
NASA Astrophysics Data System (ADS)
Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph
2018-05-01
In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately, by a factor of 2/π (−1.96 dB), compared to an ideal ∞-bit converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with unknown quantization level by using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
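The 2/π low-SNR factor quoted above is easy to check numerically; the hard limiter below is a generic sketch of a 1-bit ADC with an offset threshold, not the paper's receiver model:

```python
import math

# Low-SNR performance loss of a symmetric 1-bit quantizer relative to an
# ideal infinite-resolution converter: a factor of 2/pi.
loss_db = 10 * math.log10(2 / math.pi)
print(round(loss_db, 2))  # -1.96

def hard_limiter(x, threshold=0.0):
    # 1-bit ADC output; in practice the threshold (offset) is unknown
    # and must be treated as a nuisance parameter.
    return 1 if x >= threshold else -1

print([hard_limiter(v, threshold=0.1) for v in (-0.5, 0.0, 0.3)])  # [-1, -1, 1]
```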
The Routine Fitting of Kinetic Data to Models
Berman, Mones; Shahn, Ezra; Weiss, Marjory F.
1962-01-01
A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975
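The iterative least-squares adjustment that the formalism automates can be illustrated on a single-exponential model. The model, data, and starting estimates below are made up; the program described above additionally handles linear differential equations, linear combinations of functions, and dependent variables:

```python
import math

# Synthetic kinetic data from a known two-parameter model y = A*exp(-k*t).
A_true, k_true = 5.0, 0.8
ts = [0.1 * i for i in range(20)]
ys = [A_true * math.exp(-k_true * t) for t in ts]

def gauss_newton(ts, ys, A, k, iters=20):
    # Iteratively adjust (A, k) toward the least squares fit, in the
    # spirit of the automatic parameter adjustment described above.
    for _ in range(iters):
        a11 = a12 = a22 = b1 = b2 = 0.0
        for t, y in zip(ts, ys):
            e = math.exp(-k * t)
            r = y - A * e            # residual
            j1, j2 = e, -A * t * e   # partial derivatives wrt A and k
            a11 += j1 * j1; a12 += j1 * j2; a22 += j2 * j2
            b1 += j1 * r; b2 += j2 * r
        det = a11 * a22 - a12 * a12  # solve the 2x2 normal equations
        A += (a22 * b1 - a12 * b2) / det
        k += (a11 * b2 - a12 * b1) / det
    return A, k

A_fit, k_fit = gauss_newton(ts, ys, A=4.0, k=0.5)
print(round(A_fit, 3), round(k_fit, 3))  # recovers 5.0 and 0.8
```

Each iteration linearizes the model around the current parameter estimates and solves a small linear least-squares problem, exactly the "iterative adjustment" loop the abstract describes.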
Adaptive Control Based Harvesting Strategy for a Predator-Prey Dynamical System.
Sen, Moitri; Simha, Ashutosh; Raha, Soumyendu
2018-04-23
This paper deals with designing a harvesting control strategy for a predator-prey dynamical system, with parametric uncertainties and exogenous disturbances. A feedback control law for the harvesting rate of the predator is formulated such that the population dynamics is asymptotically stabilized at a positive operating point, while maintaining a positive, steady state harvesting rate. The hierarchical block strict feedback structure of the dynamics is exploited in designing a backstepping control law, based on Lyapunov theory. In order to account for unknown parameters, an adaptive control strategy has been proposed in which the control law depends on an adaptive variable which tracks the unknown parameter. Further, a switching component has been incorporated to robustify the control performance against bounded disturbances. Proofs have been provided to show that the proposed adaptive control strategy ensures asymptotic stability of the dynamics at a desired operating point, as well as exact parameter learning in the disturbance-free case and learning with bounded error in the disturbance prone case. The dynamics, with uncertainty in the death rate of the predator, subjected to a bounded disturbance has been simulated with the proposed control strategy.
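A toy simulation conveys the harvesting-feedback idea. The Lotka-Volterra coefficients, the gains, and the simple proportional law below are illustrative assumptions; the paper's backstepping/adaptive design is considerably more elaborate:

```python
# Hypothetical model coefficients: prey growth a, predation b,
# predator death d, conversion efficiency c; harvesting acts on predators.
a, b, d, c = 1.0, 0.5, 0.4, 0.2
u_ref, k = 0.1, 0.5             # baseline harvest rate and feedback gain
y_star = a / b                  # predator level at the operating point
x_star = (d + u_ref) / c        # prey level at the operating point

x, y = 1.5, 1.0                 # initial populations, off the operating point
dt = 0.01
for _ in range(20000):          # 200 time units of forward-Euler integration
    # Feedback harvesting law: harvest harder when predators exceed target,
    # never allowing a negative harvesting rate.
    u = max(0.0, u_ref + k * (y - y_star))
    dx = x * (a - b * y)                 # prey dynamics
    dy = y * (-d + c * x) - u * y        # predator dynamics with harvesting
    x += dt * dx
    y += dt * dy

print(round(x, 2), round(y, 2), round(u, 2))  # settles near (2.5, 2.0, 0.1)
```

Without the feedback term the Lotka-Volterra model only cycles; the predator-dependent harvesting injects damping, so the populations converge to the positive operating point while the steady-state harvest stays positive.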
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
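The expected-relative-entropy criterion can be sketched for a two-candidate toy design. The discrete prior, Gaussian noise, and "transport kernel" sensitivities are invented; a real application would evaluate the contaminant transport surrogate here:

```python
import math
import random

random.seed(3)

THETAS = [i / 10 for i in range(11)]       # discrete prior on source strength
PRIOR = [1 / len(THETAS)] * len(THETAS)
SIGMA = 0.1                                # assumed measurement noise

def sensitivity(loc):
    # Hypothetical transport kernel: concentration response per unit
    # source strength at each candidate sampling well.
    return {"near": 1.0, "far": 0.1}[loc]

def expected_relative_entropy(loc, n_mc=2000):
    # Monte Carlo estimate of the expected KL divergence from prior to
    # posterior, i.e. the expected information gain of sampling at loc.
    g = sensitivity(loc)
    total = 0.0
    for _ in range(n_mc):
        theta = random.choice(THETAS)              # draw from the prior
        y = g * theta + random.gauss(0, SIGMA)     # simulated measurement
        likes = [math.exp(-(y - g * t) ** 2 / (2 * SIGMA ** 2)) for t in THETAS]
        z = sum(l * p for l, p in zip(likes, PRIOR))
        post = [l * p / z for l, p in zip(likes, PRIOR)]
        total += sum(p * math.log(p / q)
                     for p, q in zip(post, PRIOR) if p > 0)
    return total / n_mc

scores = {loc: expected_relative_entropy(loc) for loc in ("near", "far")}
best = max(scores, key=scores.get)
print(best)  # the more sensitive location is the more informative design
```

After the design is fixed, the same likelihood would drive the MCMC estimation step; the surrogate-model trick in the study exists precisely because each likelihood evaluation above would otherwise be a full transport simulation.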
Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam
2014-07-01
This paper presents an efficient method for identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noises. The method studies the multivariable nonlinear Hammerstein and Wiener models, in which the nonlinear memory-less block is approximated based on arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous input (ARMAX) model, which can effectively describe the moving average noises as well as the autoregressive and the exogenous dynamics. According to the multivariable nature of the system, a pseudo-linear-in-the-parameter model is obtained which includes two different kinds of unknown parameters, a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noises. The efficiency of the proposed identification approaches is investigated through three nonlinear MIMO case studies. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Adaptive control of nonlinear uncertain active suspension systems with prescribed performance.
Huang, Yingbo; Na, Jing; Wu, Xing; Liu, Xiaoqin; Guo, Yu
2015-01-01
This paper proposes adaptive control designs for vehicle active suspension systems with unknown nonlinear dynamics (e.g., nonlinear spring and piece-wise linear damper dynamics). An adaptive control is first proposed to stabilize the vertical vehicle displacement and thus to improve the ride comfort and to guarantee other suspension requirements (e.g., road holding and suspension space limitation) concerning the vehicle safety and mechanical constraints. An augmented neural network is developed to online compensate for the unknown nonlinearities, and a novel adaptive law is developed to estimate both NN weights and uncertain model parameters (e.g., sprung mass), where the parameter estimation error is used as a leakage term superimposed on the classical adaptations. To further improve the control performance and simplify the parameter tuning, a prescribed performance function (PPF) characterizing the error convergence rate, maximum overshoot and steady-state error is used to propose another adaptive control. The stability for the closed-loop system is proved and particular performance requirements are analyzed. Simulations are included to illustrate the effectiveness of the proposed control schemes. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sainsbury-Martinez, Felix; Browning, Matthew; Miesch, Mark; Featherstone, Nicholas A.
2018-01-01
Low-mass stars are typically fully convective, and as such their dynamics may differ significantly from those of Sun-like stars. Here we present a series of 3D anelastic HD and MHD simulations of fully convective stars, designed to investigate how the meridional circulation (MC), the differential rotation, and the residual entropy are affected both by varying stellar parameters, such as the luminosity or the rotation rate, and by the presence of a magnetic field. We also investigate, more specifically, a theoretical model in which isorotation contours and residual entropy (σ′ = σ − σ(r)) are intrinsically linked via the thermal wind equation (as proposed in the Solar context by Balbus in 2009). We have selected our simulation parameters in such a way as to span the transition between Solar-like differential rotation (fast equator, slow poles) and 'anti-Solar' differential rotation (slow equator, fast poles), as characterised by the convective Rossby number and ΔΩ. We illustrate the transition from single-celled to multi-celled MC profiles, and from positive to negative latitudinal entropy gradients. We show that an extrapolation involving both thermal wind balance (TWB) and the σ′/Ω link provides a reasonable estimate for the interior profile of our fully convective stars. Finally, we also present a selection of MHD simulations which exhibit an almost unsuppressed differential rotation profile, with energy balances remaining dominated by kinetic components.
Initial clinical trial of a closed loop, fully automatic intra-aortic balloon pump.
Kantrowitz, A; Freed, P S; Cardona, R R; Gage, K; Marinescu, G N; Westveld, A H; Litch, B; Suzuki, A; Hayakawa, H; Takano, T
1992-01-01
A new-generation, closed loop, fully automatic intra-aortic balloon pump (CL-IABP) system continuously optimizes diastolic augmentation by adjusting balloon pump parameters beat by beat without operator intervention. In dogs in sinus rhythm and with experimentally induced arrhythmias, the new CL-IABP system provided safe, effective augmentation. To investigate the system's suitability for clinical use, 10 patients meeting standard indications for IABP were studied. The patients were pumped by the fully automatic IABP system for an average of 20 hr (range, 1-48 hr). At start-up, the system optimized pumping parameters within 7-20 sec. Evaluation of 186 recordings made at hourly intervals showed that inflation began within 20 msec of the dicrotic notch 99% of the time. In 100% of the recordings, deflation straddled the first half of ventricular ejection. Peak pressure across the balloon membrane averaged 55 mmHg and, in no case, exceeded 100 mmHg. Examination of the data showed that as soon as the system was actuated it provided consistently beneficial diastolic augmentation without any further operator intervention. Eight patients improved and two died (one of irreversible cardiogenic shock and one of ischemic cardiomyopathy). No complications were attributable to the investigational aspects of the system. A fully automated IABP is feasible in the clinical setting, and it may have advantages relative to current generation IABP systems.
Francesca Marucco; Daniel H. Pletscher; Luigi Boitani; Michael K. Schwartz; Kristy L. Pilgrim; Jean-Dominique Lebreton
2009-01-01
Population abundance and related parameters need to be assessed to implement effective wildlife management. These essential parameters are often very hard to obtain for rare, wide-ranging and elusive species, particularly those listed as endangered or threatened (IUCN 2001). In Italy, wolves Canis lupus Linnaeus 1758, now a fully protected species in Western Europe,...
Nonlinear Viscoelastic Characterization of the Porcine Spinal Cord
Shetye, Snehal; Troyer, Kevin; Streijger, Femke; Lee, Jae H. T.; Kwon, Brian K.; Cripton, Peter; Puttlitz, Christian M.
2014-01-01
Although quasi-static and quasi-linear viscoelastic properties of the spinal cord have been reported previously, there are no published studies that have investigated the fully (strain-dependent) nonlinear viscoelastic properties of the spinal cord. In this study, stress relaxation experiments and dynamic cycling were performed on six fresh porcine lumbar cord specimens to examine their viscoelastic mechanical properties. The stress relaxation data were fitted to a modified superposition formulation and a novel finite ramp time correction technique was applied. The parameters obtained from this fitting methodology were used to predict the average dynamic cyclic viscoelastic behavior of the porcine cord. The data indicate that the porcine spinal cord exhibited fully nonlinear viscoelastic behavior. The average weighted RMSE for a Heaviside ramp fit was 2.8 kPa, which was significantly greater (p < 0.001) than that of the nonlinear (comprehensive viscoelastic characterization (CVC) method) fit (0.365 kPa). Further, the nonlinear mechanical parameters obtained were able to accurately predict the dynamic behavior, thus exemplifying the reliability of the obtained nonlinear parameters. These parameters will be important for future studies investigating various damage mechanisms of the spinal cord and studies developing high resolution finite element models of the spine. PMID:24211612
Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J
2014-01-01
Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
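The search loop described above — propose a parameter value, simulate the stochastic model, compare against the observed facts, and accept or reject via an annealing schedule — can be sketched as follows. This is a minimal illustration, not the authors' algorithm: it omits the sequential hypothesis testing and statistical model checking stages, and the decay model, observed mean, and tuning constants are invented for the demo.

```python
import math
import random

random.seed(1)

def simulate(rate, n_runs=200):
    """Monte Carlo estimate of the mean event time for a toy stochastic
    exponential-decay model with unknown 'rate' (a stand-in for an
    expensive biochemical simulation)."""
    return sum(random.expovariate(rate) for _ in range(n_runs)) / n_runs

OBSERVED_MEAN = 2.0      # hypothetical experimentally observed fact

def discrepancy(rate):
    """Distance between simulated behaviour and the observed fact."""
    return abs(simulate(rate) - OBSERVED_MEAN)

def anneal(lo=0.05, hi=5.0, steps=2000, t0=1.0):
    """Simulated annealing over the single unknown parameter."""
    rate = random.uniform(lo, hi)
    err = discrepancy(rate)
    best, best_err = rate, err
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-3        # cooling schedule
        cand = min(hi, max(lo, rate + random.gauss(0, 0.2)))
        cand_err = discrepancy(cand)
        # accept downhill moves always, uphill with Boltzmann probability
        if cand_err < err or random.random() < math.exp((err - cand_err) / temp):
            rate, err = cand, cand_err
        if err < best_err:
            best, best_err = rate, err
    return best, best_err

rate, err = anneal()
```

With a mean event time of 2.0, the recovered rate should settle near 0.5, subject to Monte Carlo noise in each simulation call.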
Kristunas, Caroline A; Hemming, Karla; Eborall, Helen C; Gray, Laura J
2017-01-01
Introduction: The stepped-wedge cluster randomised trial (SW-CRT) is a complex design, for which many decisions about key design parameters must be made during the planning. These include the number of steps and the duration of time needed to embed the intervention. Feasibility studies are likely to be useful for informing these decisions and increasing the likelihood of the main trial's success. However, the number of feasibility studies being conducted for SW-CRTs is currently unknown. This review aims to establish the number of feasibility studies being conducted for SW-CRTs and determine which feasibility issues are commonly investigated. Methods and analysis: Fully published feasibility studies for SW-CRTs will be identified, according to predefined inclusion criteria, from searches conducted in Ovid MEDLINE, Scopus, Embase and PsycINFO. To also identify and gain information on unpublished feasibility studies the following will be contacted: authors of published SW-CRTs (identified from the most recent systematic reviews); contacts for registered SW-CRTs (identified from clinical trials registries); lead statisticians of UK registered clinical trials units and researchers known to work in the area of SW-CRTs. Data extraction will be conducted independently by two reviewers. For the fully published feasibility studies, data will be extracted on the study characteristics, the rationale for the study, the process for determining progression to a main trial, how the study informed the main trial and whether the main trial went ahead. The researchers involved in the unpublished feasibility studies will be contacted to elicit the same information. A narrative synthesis will be conducted and provided alongside a descriptive analysis of the study characteristics. Ethics and dissemination: This review does not require ethical approval, as no individual patient data will be used. The results of this review will be published in an open-access peer-reviewed journal.
PMID:28765139
Kristunas, Caroline A; Hemming, Karla; Eborall, Helen C; Gray, Laura J
2017-08-01
The stepped-wedge cluster randomised trial (SW-CRT) is a complex design, for which many decisions about key design parameters must be made during the planning. These include the number of steps and the duration of time needed to embed the intervention. Feasibility studies are likely to be useful for informing these decisions and increasing the likelihood of the main trial's success. However, the number of feasibility studies being conducted for SW-CRTs is currently unknown. This review aims to establish the number of feasibility studies being conducted for SW-CRTs and determine which feasibility issues are commonly investigated. Fully published feasibility studies for SW-CRTs will be identified, according to predefined inclusion criteria, from searches conducted in Ovid MEDLINE, Scopus, Embase and PsycINFO. To also identify and gain information on unpublished feasibility studies the following will be contacted: authors of published SW-CRTs (identified from the most recent systematic reviews); contacts for registered SW-CRTs (identified from clinical trials registries); lead statisticians of UK registered clinical trials units and researchers known to work in the area of SW-CRTs. Data extraction will be conducted independently by two reviewers. For the fully published feasibility studies, data will be extracted on the study characteristics, the rationale for the study, the process for determining progression to a main trial, how the study informed the main trial and whether the main trial went ahead. The researchers involved in the unpublished feasibility studies will be contacted to elicit the same information. A narrative synthesis will be conducted and provided alongside a descriptive analysis of the study characteristics. This review does not require ethical approval, as no individual patient data will be used. The results of this review will be published in an open-access peer-reviewed journal.
NASA Astrophysics Data System (ADS)
Zhao, Hui; Zheng, Mingwen; Li, Shudong; Wang, Weiping
2018-03-01
Some existing papers have focused on finite-time parameter identification and synchronization, but provided incomplete theoretical analyses. Such works incorporated conflicting constraints for parameter identification; therefore, their practical significance could not be fully demonstrated. To overcome these limitations, the present paper provides new results on parameter identification and synchronization for uncertain complex dynamical networks with impulsive effects and stochastic perturbation, based on finite-time stability theory. Novel parameter identification and synchronization control criteria are obtained in finite time by utilizing a Lyapunov function and linear matrix inequalities, respectively. Finally, numerical examples are presented to illustrate the effectiveness of our theoretical results.
Methods to examine reproductive biology in free-ranging, fully-marine mammals.
Lanyon, Janet M; Burgess, Elizabeth A
2014-01-01
Historical overexploitation of marine mammals, combined with present-day pressures, has resulted in severely depleted populations, with many species listed as threatened or endangered. Understanding breeding patterns of threatened marine mammals is crucial to assessing population viability, potential recovery and conservation actions. However, determining reproductive parameters of wild fully-marine mammals (cetaceans and sirenians) is challenging due to their wide distributions, high mobility, inaccessible habitats, cryptic lifestyles and in many cases, large body size and intractability. Consequently, reproductive biologists employ an innovative suite of methods to collect useful information from these species. This chapter reviews historic, recent and state-of-the-art methods to examine diverse aspects of reproduction in fully-aquatic mammals.
Theoretical results for starved elliptical contacts
NASA Technical Reports Server (NTRS)
Hamrock, B. J.; Dowson, D.
1983-01-01
Eighteen cases were used in the theoretical study of the influence of lubricant starvation on film thickness and pressure in elliptical elastohydrodynamic conjunctions. From the results, a simple and important critical dimensionless inlet boundary distance, at which lubricant starvation becomes significant, was specified. This inlet boundary distance defines whether a fully flooded or a starved condition exists in the contact. Furthermore, it was found that the film thickness for a starved condition can be written in dimensionless terms as a function of the inlet distance parameter and the film thickness for a fully flooded condition. Contour plots of pressure and film thickness in and around the contact are shown for fully flooded and starved conditions.
A method for operative quantitative interpretation of multispectral images of biological tissues
NASA Astrophysics Data System (ADS)
Lisenko, S. A.; Kugeiko, M. M.
2013-10-01
A method has been developed for the operative retrieval of spatial distributions of biophysical parameters of a biological tissue from a multispectral image of the tissue. The method is based on multiple regressions between linearly independent components of the diffuse reflection spectrum of the tissue and the unknown parameters. Possibilities of the method are illustrated by an example of determining biophysical parameters of the skin (concentrations of melanin, hemoglobin and bilirubin, blood oxygenation, and the scattering coefficient of the tissue). Examples of quantitative interpretation of experimental data are presented.
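The regression step can be illustrated with synthetic data: extract linearly independent spectral components (here via PCA), regress a tissue parameter on the component scores, and apply the fitted regression pixel by pixel. All quantities below (basis spectra, the "melanin" parameter, noise levels) are invented stand-ins, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set (assumption: in practice such pairs would come
# from radiative-transfer simulations of skin reflectance)
n_train, n_bands = 500, 6
melanin = rng.uniform(0.0, 1.0, n_train)          # hypothetical parameter
basis = rng.normal(size=(3, n_bands))             # hypothetical spectral basis
scores = np.column_stack([melanin, melanin**2, rng.normal(size=n_train)])
spectra = scores @ basis + 0.01 * rng.normal(size=(n_train, n_bands))

# Linearly independent components of the reflectance spectra via PCA
mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
pcs = Vt[:3]                                      # leading components
train_feats = (spectra - mean) @ pcs.T

# Multiple regression: parameter as a linear function of component scores
X = np.column_stack([np.ones(n_train), train_feats])
coef, *_ = np.linalg.lstsq(X, melanin, rcond=None)

# Apply pixel by pixel to a small multispectral "image" (4x4 pixels)
image = spectra[:16].reshape(4, 4, n_bands)
feats = (image.reshape(-1, n_bands) - mean) @ pcs.T
melanin_map = (np.column_stack([np.ones(16), feats]) @ coef).reshape(4, 4)
```

Because the operative map is a single matrix product per pixel, the retrieval scales to full images without iterative inversion, which is the point of the regression approach.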
NASA Astrophysics Data System (ADS)
De Santis, Alberto; Dellepiane, Umberto; Lucidi, Stefano
2012-11-01
In this paper we investigate the estimation problem for a model of commodity prices. This model is a stochastic state-space dynamical model whose unknowns are the state variables and the system parameters. Data are represented by commodity spot prices; time series of futures contracts are very seldom freely available. Both the system joint likelihood function (state variables and parameters) and the system marginal likelihood function (with the state variables eliminated) are addressed.
NASA Astrophysics Data System (ADS)
Guo, J. L.; Song, H. S.
2010-01-01
We study the thermal entanglement in the two-qubit Heisenberg XXZ model with the Dzyaloshinskii-Moriya (DM) interaction, and teleport an unknown state using the model in a thermal equilibrium state as a quantum channel. The effects of the DM interaction, including the Dx and Dz terms, the anisotropy, and the temperature on the entanglement and the fully entangled fraction are considered. Notably, for the antiferromagnetic case the Dx interaction can be more helpful for increasing the entanglement and the critical temperature than Dz, but this is not the case for teleportation.
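The quantities discussed above can be reproduced numerically for two qubits. The sketch below builds the XXZ Hamiltonian with a z-axis DM term, forms the thermal (Gibbs) state, and evaluates the Wootters concurrence; the coupling values are arbitrary demo choices, and the Dx case and the fully entangled fraction are omitted.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def thermal_state(J, Jz, Dz, T):
    """Gibbs state of a two-qubit XXZ Hamiltonian with a z-component
    DM interaction (one common form; couplings here are demo values)."""
    H = (J * (np.kron(sx, sx) + np.kron(sy, sy)) + Jz * np.kron(sz, sz)
         + Dz * (np.kron(sx, sy) - np.kron(sy, sx)))
    E, V = np.linalg.eigh(H)
    w = np.exp(-(E - E.min()) / T)        # shift avoids overflow
    return (V * (w / w.sum())) @ V.conj().T

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    yy = np.kron(sy, sy)
    R = rho @ yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Antiferromagnetic isotropic case: near-singlet at low temperature,
# separable above the critical temperature
C_low = concurrence(thermal_state(J=1.0, Jz=1.0, Dz=0.0, T=0.1))
C_high = concurrence(thermal_state(J=1.0, Jz=1.0, Dz=0.0, T=5.0))
```

For these Pauli-matrix conventions the singlet-triplet gap is 4J, so the entanglement vanishes above T = 4J/ln 3 ≈ 3.64J, consistent with C_high = 0.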
Otani, Kyoko; Nakazono, Akemi; Salgo, Ivan S; Lang, Roberto M; Takeuchi, Masaaki
2016-10-01
Echocardiographic determination of left heart chamber volumetric parameters by using manual tracings during multiple beats is tedious in atrial fibrillation (AF). The aim of this study was to determine the usefulness of fully automated left chamber quantification software with single-beat three-dimensional transthoracic echocardiographic data sets in patients with AF. Single-beat full-volume three-dimensional transthoracic echocardiographic data sets were prospectively acquired during consecutive multiple cardiac beats (≥10 beats) in 88 patients with AF. In protocol 1, left ventricular volumes, left ventricular ejection fraction, and maximal left atrial volume were validated using automated quantification against the manual tracing method in identical beats in 10 patients. In protocol 2, automated quantification-derived averaged values from multiple beats were compared with the corresponding values obtained from the indexed beat in all patients. Excellent correlations of left chamber parameters between automated quantification and the manual method were observed (r = 0.88-0.98) in protocol 1. The time required for the analysis with the automated quantification method (5 min) was significantly less compared with the manual method (27 min) (P < .0001). In protocol 2, there were excellent linear correlations between the averaged left chamber parameters and the corresponding values obtained from the indexed beat (r = 0.94-0.99), and test-retest variability of left chamber parameters was low (3.5%-4.8%). Three-dimensional transthoracic echocardiography with fully automated quantification software is a rapid and reliable way to measure averaged values of left heart chamber parameters during multiple consecutive beats. Thus, it is a potential new approach for left chamber quantification in patients with AF in daily routine practice.
Berner, Christine L; Staid, Andrea; Flage, Roger; Guikema, Seth D
2017-10-01
Recently, the concept of black swans has gained increased attention in the fields of risk assessment and risk management. Different types of black swans have been suggested, distinguishing between unknown unknowns (nothing in the past can convincingly point to its occurrence), unknown knowns (known to some, but not to relevant analysts), or known knowns where the probability of occurrence is judged as negligible. Traditional risk assessments have been questioned, as their standard probabilistic methods may not be capable of predicting or even identifying these rare and extreme events, thus creating a source of possible black swans. In this article, we show how a simulation model can be used to identify previously unknown potentially extreme events that if not identified and treated could occur as black swans. We show that by manipulating a verified and validated model used to predict the impacts of hazards on a system of interest, we can identify hazard conditions not previously experienced that could lead to impacts much larger than any previous level of impact. This makes these potential black swan events known and allows risk managers to more fully consider them. We demonstrate this method using a model developed to evaluate the effect of hurricanes on energy systems in the United States; we identify hurricanes with potentially extreme impacts, storms well beyond what the historic record suggests is possible in terms of impacts. © 2016 Society for Risk Analysis.
Optical absorption spectra of substitutional Co2+ ions in Mgx Cd1-x Se alloys
NASA Astrophysics Data System (ADS)
Jin, Moon-Seog; Kim, Chang-Dae; Jang, Kiwan; Park, Sang-An; Kim, Duck-Tae; Kim, Hyung-Gon; Kim, Wha-Tek
2006-09-01
Optical absorption spectra of substitutional Co2+ ions in Mgx Cd1-x Se alloys were investigated in the composition region 0.0 ≤ x ≤ 0.4 and in the wavelength region of 300 to 2500 nm at 4.8 K and 290 K. We observed several absorption bands in the wavelength regions corresponding to the 4A2(4F) → 4T1(4P) transition and the 4A2(4F) → 4T1(4F) transition of Co2+ at a site of tetrahedral Td point symmetry in the host crystals, as well as unknown absorption bands. These absorption bands were analyzed in the framework of crystal-field theory along with the second-order spin-orbit coupling. The unknown absorption bands were assigned to phonon-assisted absorption. We also investigated the variations of the crystal-field parameter Dq and the Racah parameter B with composition x in the Mgx Cd1-x Se system. The results showed that the crystal-field parameter (Dq) increases while the Racah parameter (B) decreases with increasing composition x, which may be connected with an increase in the covalency of the metal-ligand bond with increasing composition x in the Mgx Cd1-x Se system.
Image Restoration for Fluorescence Planar Imaging with Diffusion Model
Gong, Yuzhu; Li, Yang
2017-01-01
Fluorescence planar imaging (FPI) fails to capture high-resolution images of deep fluorochromes due to photon diffusion. This paper presents an image restoration method to deal with this kind of blurring. The scheme of this method is conceived based on a reconstruction method in fluorescence molecular tomography (FMT) with a diffusion model. A new unknown parameter is defined through introducing the first mean value theorem for definite integrals. A system matrix converting this unknown parameter to the blurry image is constructed with the elements of depth conversion matrices related to a chosen plane named the focal plane. Results of phantom and mouse experiments show that the proposed method is capable of reducing the blurring of FPI images caused by photon diffusion when the depth of the focal plane is chosen within a proper interval around the true depth of the fluorochrome. This method will be helpful for estimating the size of deep fluorochromes. PMID:29279843
Adaptive boundary concentration control using Zakai equation
NASA Astrophysics Data System (ADS)
Tenno, R.; Mendelson, A.
2010-06-01
A mean-variance control problem is formulated with respect to a partially observed nonlinear system that includes unknown constant parameters. A physical prototype of the system is the cathode surface reaction in an electrolysis cell, where the controller aim is to keep the boundary concentration of species in the near vicinity of the cathode surface low but not zero. The boundary concentration is a diffusion-controlled process observed through the measured current density and, in practice, controlled through the applied voltage. The former incomplete-data control problem is converted to a complete-data problem, the so-called separated control problem, whose solution is given by the infinite-dimensional Zakai equation. In this article, the separated control problem is solved numerically using pathwise integration of the Zakai equation. This article demonstrates precise tracking of the target trajectory with a rapid convergence of estimates to the unknown parameters, which takes place simultaneously with control.
Optimal estimation of parameters and states in stochastic time-varying systems with time delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-08-01
In this study estimation of parameters and states in stochastic linear and nonlinear delay differential systems with time-varying coefficients and constant delay is explored. The approach consists of first employing a continuous time approximation to approximate the stochastic delay differential equation with a set of stochastic ordinary differential equations. Then the problem of parameter estimation in the resulting stochastic differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the resulting system, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states.
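The state-augmentation idea used above — append the unknown parameter to the state vector with trivial dynamics and run an extended Kalman filter on the result — can be sketched for a scalar toy system. This omits the delay handling via continuous time approximation, and all numerical values are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy scalar system x' = -a*x with unknown constant parameter a
dt, n_steps, a_true = 0.01, 2000, 1.5
q_x, r = 1e-4, 0.02              # process / measurement noise std devs

def f(z):
    """Augmented dynamics: state x plus the constant parameter a."""
    x, a = z
    return np.array([x + dt * (-a * x), a])

def F(z):
    """Jacobian of f with respect to the augmented state [x, a]."""
    x, a = z
    return np.array([[1.0 - dt * a, -dt * x],
                     [0.0, 1.0]])

# Simulate noise-corrupted measurements of x only
x, ys = 3.0, []
for _ in range(n_steps):
    x = x + dt * (-a_true * x) + q_x * rng.standard_normal()
    ys.append(x + r * rng.standard_normal())

# Extended Kalman filter on the augmented state
z = np.array([3.0, 0.5])         # deliberately wrong initial guess for a
P = np.diag([0.1, 1.0])
Q = np.diag([q_x ** 2, 1e-10])   # tiny parameter noise keeps 'a' adaptable
H = np.array([[1.0, 0.0]])
for y in ys:
    Fk = F(z)                    # linearize at the current estimate
    z, P = f(z), Fk @ P @ Fk.T + Q
    S = H @ P @ H.T + r ** 2     # innovation covariance (scalar)
    K = P @ H.T / S              # Kalman gain, shape (2, 1)
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

a_est = z[1]
```

The parameter estimate is informative only while the state carries signal (here, before x decays to the noise floor), which is why the filter must lock onto a early in the record.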
The classical equation of state of fully ionized plasmas
NASA Astrophysics Data System (ADS)
Eisa, Dalia Ahmed
2011-03-01
The aim of this paper is to calculate the analytical form of the equation of state up to the third virial coefficient for a classical system interacting via an effective potential of fully ionized plasmas. The excess osmotic pressure is represented in the form of convergent series expansions in terms of the plasma parameter μ_ab = e_a e_b χ / (D k T), where χ² is the square of the inverse Debye radius. We consider only the thermal equilibrium plasma.
Wu, Q; Zhao, X; You, H
2017-05-18
This study aimed to test the diagnostic performance of a fully quantitative fibrosis assessment tool for liver fibrosis in patients with chronic hepatitis B (CHB), primary biliary cirrhosis (PBC) and non-alcoholic steatohepatitis (NASH). A total of 117 patients with liver fibrosis were included in this study, including 50 patients with CHB, 49 patients with PBC and 18 patients with NASH. All patients underwent liver biopsy (LB). Fibrosis stages were assessed by two experienced pathologists. Histopathological images of LB slices were processed by second harmonic generation (SHG)/two-photon excited fluorescence (TPEF) microscopy without staining, a system called qFibrosis (quantitative fibrosis) system. Altogether 101 quantitative features of the SHG/TPEF images were acquired. The parameters of aggregated collagen in portal, septal and fibrillar areas increased significantly with stages of liver fibrosis in PBC and CHB (P<0.05), but the same was not found for parameters of distributed collagen (P>0.05). There was a significant correlation between parameters of aggregated collagen in portal, septal and fibrillar areas and stages of liver fibrosis from CHB and PBC (P<0.05), but no correlation was found between the distributed collagen parameters and the stages of liver fibrosis from those patients (P>0.05). There was no significant correlation between NASH parameters and stages of fibrosis (P>0.05). For CHB and PBC patients, the highest correlation was between septal parameters and fibrosis stages, the second highest was between portal parameters and fibrosis stages and the lowest correlation was between fibrillar parameters and fibrosis stages. The correlation between the septal parameters of the PBC and stages is significantly higher than the parameters of the other two areas (P<0.05). The qFibrosis candidate parameters based on CHB were also applicable for quantitative analysis of liver fibrosis in PBC patients. 
Different parameters should be selected for liver fibrosis assessment in different stages of PBC compared with CHB.
Wu, Q.; Zhao, X.; You, H.
2017-01-01
This study aimed to test the diagnostic performance of a fully quantitative fibrosis assessment tool for liver fibrosis in patients with chronic hepatitis B (CHB), primary biliary cirrhosis (PBC) and non-alcoholic steatohepatitis (NASH). A total of 117 patients with liver fibrosis were included in this study, including 50 patients with CHB, 49 patients with PBC and 18 patients with NASH. All patients underwent liver biopsy (LB). Fibrosis stages were assessed by two experienced pathologists. Histopathological images of LB slices were processed by second harmonic generation (SHG)/two-photon excited fluorescence (TPEF) microscopy without staining, a system called qFibrosis (quantitative fibrosis) system. Altogether 101 quantitative features of the SHG/TPEF images were acquired. The parameters of aggregated collagen in portal, septal and fibrillar areas increased significantly with stages of liver fibrosis in PBC and CHB (P<0.05), but the same was not found for parameters of distributed collagen (P>0.05). There was a significant correlation between parameters of aggregated collagen in portal, septal and fibrillar areas and stages of liver fibrosis from CHB and PBC (P<0.05), but no correlation was found between the distributed collagen parameters and the stages of liver fibrosis from those patients (P>0.05). There was no significant correlation between NASH parameters and stages of fibrosis (P>0.05). For CHB and PBC patients, the highest correlation was between septal parameters and fibrosis stages, the second highest was between portal parameters and fibrosis stages and the lowest correlation was between fibrillar parameters and fibrosis stages. The correlation between the septal parameters of the PBC and stages is significantly higher than the parameters of the other two areas (P<0.05). The qFibrosis candidate parameters based on CHB were also applicable for quantitative analysis of liver fibrosis in PBC patients. 
Different parameters should be selected for liver fibrosis assessment in different stages of PBC compared with CHB. PMID:28538834
NASA Astrophysics Data System (ADS)
Tomar, Kiledar S.; Kumar, Shashi; Tolpekin, Valentyn A.; Joshi, Sushil K.
2016-05-01
Forests act as sinks of carbon and as a result maintain the carbon cycle in the atmosphere. Deforestation leads to an imbalance in the global carbon cycle and changes in climate. Hence estimation of forest biophysical parameters like biomass becomes a necessity. PolSAR has the ability to discriminate the shares of scattering elements like surface, double-bounce and volume scattering in a single SAR resolution cell. Studies have shown that volume scattering is a significant parameter for forest biophysical characterization, occurring mainly from vegetation due to randomly oriented structures. This random orientation of forest structure causes a shift in the orientation angle of the polarization ellipse, which ultimately disturbs the radar signature and leads to overestimation of volume scattering and underestimation of double-bounce scattering after decomposition of fully PolSAR data. Hybrid polarimetry has the advantage of zero POA shift due to rotational symmetry following the circular transmission of electromagnetic waves. The prime objective of this study was to assess the potential of Hybrid PolSAR and fully PolSAR data for AGB estimation using the Extended Water Cloud model. Validation was performed using field biomass. The study site chosen was Barkot Forest, Uttarakhand, India. To obtain the decomposition components, m-alpha and Yamaguchi decomposition modelling were applied to the Hybrid and fully PolSAR data, respectively. The RGB composite images for both decomposition techniques were generated. The contributions of all scattering mechanisms from each plot for m-alpha and Yamaguchi decomposition modelling were extracted. The R2 values for modelled AGB against field biomass from Hybrid PolSAR and fully PolSAR data were found to be 0.5127 and 0.4625, respectively. The RMSE between modelled AGB and field biomass for Hybrid and fully PolSAR were 63.156 t ha-1 and 73.424 t ha-1, respectively.
On the basis of RMSE and R2 value, this study suggests Hybrid PolSAR decomposition modelling to retrieve scattering element for AGB estimation from forest.
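For reference, the two goodness-of-fit measures quoted above can be computed as follows. The plot-level values are hypothetical, and note that this R2 (coefficient of determination) differs from the squared correlation coefficient that some studies report.

```python
import numpy as np

def r2_rmse(modelled, observed):
    """Coefficient of determination and RMSE between modelled AGB and
    field biomass (t/ha)."""
    modelled = np.asarray(modelled, float)
    observed = np.asarray(observed, float)
    resid = observed - modelled
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, float(np.sqrt(np.mean(resid ** 2)))

# Hypothetical plot-level values, not the study's data
field = [120.0, 180.0, 240.0, 300.0, 360.0]
model = [130.0, 170.0, 255.0, 280.0, 375.0]
r2, rmse = r2_rmse(model, field)   # r2 ~ 0.97, rmse ~ 14.5 t/ha
```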
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boffi, V.C.; Molinari, V.G.; Parks, D.E.
1962-05-01
Features of the pulsed neutron source theory connected with the measurement of diffusion parameters are discussed. Various analytical procedures for determining the decay constant of the fully thermalized neutron flux are compared. The problem of the diffusion coefficient definition is also considered in some detail. (auth)
Calibration of two complex ecosystem models with different likelihood functions
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate change induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of the process-based models are decisive regarding the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments or no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used, which is referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data).
In our research different likelihood function formulations were used in order to examine the effect of the different model goodness metrics on calibration. The different likelihoods are different functions of RMSE (root mean squared error) weighted by measurement uncertainty: exponential / linear / quadratic / linear normalized by correlation. As a first calibration step, sensitivity analysis was performed in order to select the influential parameters which have a strong effect on the output data. In the second calibration step only the sensitive parameters were calibrated (optimal values and confidence intervals were calculated). In case of PaSim more parameters were found responsible for 95% of the output data variance than in case of BBGC MuSo. Analysis of the results of the optimized models revealed that the exponential likelihood estimation proved to be the most robust (best model simulation with optimized parameters, highest confidence interval increase). The cross-validation of the model simulations can help in constraining the highly uncertain greenhouse gas budget of grasslands.
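A sketch of RMSE-based likelihood formulations of the kind compared above. The exact expressions are not given in the abstract, so these forms are illustrative only, and the correlation-normalized variant is omitted.

```python
import numpy as np

def rmse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def likelihood(sim, obs, sigma, kind="exponential"):
    """Goodness-of-fit as a decreasing function of RMSE scaled by the
    measurement uncertainty sigma (hypothetical functional forms)."""
    e = rmse(sim, obs) / sigma
    if kind == "exponential":
        return float(np.exp(-e))
    if kind == "linear":
        return max(0.0, 1.0 - e)
    if kind == "quadratic":
        return max(0.0, 1.0 - e ** 2)
    raise ValueError(kind)

obs = [3.1, 2.9, 3.4]            # invented flux observations
good = [3.0, 3.0, 3.3]           # close simulation
bad = [4.0, 2.0, 4.5]            # poor simulation
L_good = likelihood(good, obs, sigma=1.0)
L_bad = likelihood(bad, obs, sigma=1.0)
```

The different shapes penalize misfit at different rates, which is exactly the kind of metric effect the two-step calibration examines.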
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no
2013-11-10
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
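The cell-by-cell mapping in decreasing-likelihood order is essentially a best-first flood fill over the parameter grid, which a priority queue expresses directly. The sketch below uses a toy two-parameter Gaussian likelihood and an invented stopping threshold; it only illustrates how disregarding low-likelihood cells bounds the work.

```python
import heapq
import itertools
import numpy as np

def loglike(theta):
    """Toy 2-parameter Gaussian log-likelihood, a stand-in for an
    expensive cosmological likelihood."""
    x, y = theta
    return -0.5 * ((x / 0.5) ** 2 + (y / 0.8) ** 2)

xs = np.linspace(-3, 3, 61)      # hypothetical parameter grid
ys = np.linspace(-3, 3, 61)

def explore(start, drop=4.0):
    """Map the grid cell by cell in order of decreasing likelihood,
    stopping once cells fall 'drop' log-units below the peak."""
    evaluated = {}
    tie = itertools.count()      # tie-breaker so the heap never compares tuples
    heap = [(-loglike((xs[start[0]], ys[start[1]])), next(tie), start)]
    seen = {start}
    best = -heap[0][0]
    while heap:
        neg, _, (i, j) = heapq.heappop(heap)
        if -neg < best - drop:   # everything remaining is below threshold
            break
        evaluated[(i, j)] = -neg
        best = max(best, -neg)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < len(xs) and 0 <= nj < len(ys) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (-loglike((xs[ni], ys[nj])),
                                      next(tie), (ni, nj)))
    return evaluated

cells = explore((30, 30))        # start at a cell near the likelihood peak
frac = len(cells) / (len(xs) * len(ys))
```

Only the high-likelihood region is ever evaluated (here roughly a quarter of the grid), and the fraction shrinks rapidly as the number of dimensions grows, which is the basis of the claimed scaling.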
A novel fully automatic scheme for fiducial marker-based alignment in electron tomography.
Han, Renmin; Wang, Liansan; Liu, Zhiyong; Sun, Fei; Zhang, Fa
2015-12-01
Although the topic of fiducial marker-based alignment in electron tomography (ET) has been widely discussed for decades, alignment without human intervention remains a difficult problem. Specifically, the emergence of subtomogram averaging has increased the demand for batch processing during tomographic reconstruction; fully automatic fiducial marker-based alignment is the main technique in this process. However, the lack of an accurate method for detecting and tracking fiducial markers precludes fully automatic alignment. In this paper, we present a novel, fully automatic alignment scheme for ET. Our scheme makes two main contributions: First, we present a series of algorithms to ensure a high recognition rate and precise localization during the detection of fiducial markers. Our proposed solution reduces fiducial marker detection to a sampling and classification problem and further introduces an algorithm to resolve the parameter dependence on marker diameter and marker number. Second, we propose a novel algorithm that solves the tracking of fiducial markers by reducing it to an incomplete point set registration problem. Because the point set registration is solved by global optimization, the result of our tracking is independent of the initial image position in the tilt series, allowing for the robust tracking of fiducial markers without pre-alignment. The experimental results indicate that our method achieves accurate tracking, almost identical to that of the current best semi-automatic scheme in IMOD. Furthermore, our scheme is fully automatic, depends on fewer parameters (requiring only a rough value of the marker diameter) and does not require any manual interaction, opening the possibility of automatic batch processing of electron tomographic reconstruction. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Chu, Zhongyi; Ma, Ye; Hou, Yueyang; Wang, Fengwen
2017-02-01
This paper presents a novel identification method for the intact inertial parameters of an unknown object in space captured by a manipulator in a space robotic system. With strong dynamic and kinematic coupling in the robotic system, inertial parameter identification of the unknown object is essential for an effective control strategy based on changes in the attitude and trajectory of the space robot during capturing operations. Conventional studies address only the principle and theory of identification and lack an error analysis for practical scenarios. To address this issue, an analysis of the effect of errors on identification is presented first, showing how the accumulation of measurement or estimation errors degrades identification precision. Meanwhile, a modified identification equation incorporating the contact force, as well as the force/torque of the end-effector, is proposed to weaken the accumulation of errors and improve the identification accuracy. Furthermore, considering severe disturbance conditions caused by various measurement noises, the hybrid immune algorithm combining Recursive Least Squares and the Affine Projection Sign Algorithm (RLS-APSA) is employed to solve the modified identification equation and ensure a stable identification property. Finally, to verify the validity of the proposed identification method, an ADAMS-MATLAB co-simulation is implemented with multi-degree-of-freedom models of a space robotic system, and the numerical results show precise and stable identification performance, which is able to guarantee the execution of aerospace operations and prevent failed control strategies.
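The RLS half of the RLS-APSA hybrid can be illustrated with a generic recursive least-squares identifier (the regressor structure of the actual spacecraft dynamics is not reproduced here; the APSA stage and the contact-force terms are omitted):

```python
import numpy as np

def rls(Phi, y, lam=1.0):
    """Estimate parameters theta from regressors Phi (n x p) and
    measurements y (n,) by recursive least squares. Sketch of the
    RLS stage of the hybrid RLS-APSA scheme only."""
    n, p = Phi.shape
    theta = np.zeros(p)
    P = np.eye(p) * 1e6                 # large initial covariance
    for k in range(n):
        phi = Phi[k]
        K = P @ phi / (lam + phi @ P @ phi)       # gain vector
        theta = theta + K * (y[k] - phi @ theta)  # innovation update
        P = (P - np.outer(K, phi @ P)) / lam      # covariance update
    return theta

# toy identification: two hypothetical inertial parameters from noisy data
rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 2))
true = np.array([5.0, 0.3])             # e.g. mass and a first moment (hypothetical)
y = Phi @ true + 1e-3 * rng.normal(size=200)
est = rls(Phi, y)
```

With a forgetting factor lam < 1, the same recursion tracks slowly varying parameters, which is one reason RLS-type updates suit on-orbit identification.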
Adaptive neural control for a class of nonlinear time-varying delay systems with unknown hysteresis.
Liu, Zhi; Lai, Guanyu; Zhang, Yun; Chen, Xin; Chen, Chun Lung Philip
2014-12-01
This paper investigates the fusion of an unknown-direction hysteresis model with adaptive neural control techniques for time-delayed continuous-time nonlinear systems without strict-feedback form. Compared with previous works on the hysteresis phenomenon, the direction of the modified Bouc-Wen hysteresis model investigated here is unknown. To reduce the computational burden of the adaptation mechanism, an optimized adaptation method is successfully applied to the control design. Based on the Lyapunov-Krasovskii method, two neural-network-based adaptive control algorithms are constructed to guarantee that all the system states and adaptive parameters remain bounded, and that the tracking error converges to an adjustable neighborhood of the origin. Finally, some numerical examples are provided to validate the effectiveness of the proposed control methods.
NASA Astrophysics Data System (ADS)
Mozaffar, A.; Schoon, N.; Digrado, A.; Bachy, A.; Delaplace, P.; du Jardin, P.; Fauconnier, M.-L.; Aubinet, M.; Heinesch, B.; Amelynck, C.
2017-03-01
Because of its high abundance and long lifetime compared to other volatile organic compounds in the atmosphere, methanol (CH3OH) plays an important role in atmospheric chemistry. Even though agricultural crops are believed to be a large source of methanol, emission inventories from those crop ecosystems are still scarce and little information is available concerning the driving mechanisms for methanol production and emission at different developmental stages of the plants/leaves. This study focuses on methanol emissions from Zea mays L. (maize), which is widely cultivated throughout the world. Flux measurements have been performed on young plants, almost fully grown leaves and fully grown leaves, enclosed in dynamic flow-through enclosures in a temperature- and light-controlled environmental chamber. Strong differences in the response of methanol emissions to variations in PPFD (Photosynthetic Photon Flux Density) were noticed between the young plants, almost fully grown leaves and fully grown leaves. Moreover, young maize plants showed strong emission peaks following light/dark transitions, for which guttation can be put forward as a hypothetical pathway. Young plants' average daily methanol fluxes exceeded by a factor of 17 those of almost fully grown and fully grown leaves when expressed per leaf area. Absolute flux values were found to be smaller than those reported in the literature, but in fair agreement with recent ecosystem-scale flux measurements above a maize field of the same variety as used in this study. The flux measurements in the current study were used to evaluate the dynamic biogenic volatile organic compound (BVOC) emission model of Niinemets and Reichstein.
However, this production function turned out not to be suitable for modelling the observed emissions from the young plants, indicating that production must be influenced by (an) other parameter(s). This study clearly shows that methanol emission from maize is complex, especially for young plants. Additional studies at different developmental stages of other crop species will be required in order to develop accurate methanol emission algorithms for agricultural crops.
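A temperature- and light-dependent production function of the kind evaluated here can be sketched in the style of standard BVOC emission algorithms (the constants and functional forms below are illustrative assumptions, not the parameterization used in the study):

```python
import math

def methanol_production(T, ppfd, E0=1.0, beta=0.09, T_ref=303.0,
                        alpha=0.0027, c_l1=1.066):
    """Temperature- and light-dependent BVOC production rate in the
    spirit of Guenther-type activity factors: an exponential
    temperature response multiplied by a light-saturation response
    to PPFD. All constants here are illustrative placeholders."""
    gamma_T = math.exp(beta * (T - T_ref))        # temperature activity
    gamma_L = alpha * c_l1 * ppfd / math.sqrt(1.0 + alpha ** 2 * ppfd ** 2)
    return E0 * gamma_T * gamma_L
```

In a form like this, emission vanishes in the dark and rises with temperature, which matches the behavior the abstract reports for almost fully grown leaves but not for young plants.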
Improved mapping of radio sources from VLBI data by least-square fit
NASA Technical Reports Server (NTRS)
Rodemich, E. R.
1985-01-01
A method is described for producing improved mapping of radio sources from Very Long Baseline Interferometry (VLBI) data. The method described is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data is modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. If the deviation between modeled and observed values is used to measure the closeness of this fit, one is led to the problem of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods which we show converge automatically to the minimum with no user intervention. The resulting brightness distribution will furnish the best fit to the data among all brightness distributions of given resolution.
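The minimization idea, choosing gains and brightness so the modeled visibilities are as near as possible to the observed values, can be illustrated with a drastically simplified real-valued toy (a single source flux, real antenna gains, and plain gradient descent standing in for the paper's iterative method; real VLBI data are complex-valued with a full brightness distribution):

```python
import numpy as np

def solve_vlbi(V_obs, pairs, n_ant, iters=20000, lr=0.02):
    """Fit antenna gains g and a source flux s so that g[i]*g[j]*s
    matches the observed visibilities, by gradient descent on the
    summed squared misfit. A toy sketch of the minimization only."""
    g = np.ones(n_ant)
    s = 1.0
    for _ in range(iters):
        grad_g = np.zeros(n_ant)
        grad_s = 0.0
        for (i, j), v in zip(pairs, V_obs):
            r = g[i] * g[j] * s - v            # residual on this baseline
            grad_g[i] += 2.0 * r * g[j] * s
            grad_g[j] += 2.0 * r * g[i] * s
            grad_s += 2.0 * r * g[i] * g[j]
        g -= lr * grad_g
        s -= lr * grad_s
    misfit = sum((g[i] * g[j] * s - v) ** 2 for (i, j), v in zip(pairs, V_obs))
    return g, s, misfit

# synthetic "observations" from hypothetical true gains and flux
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
g_true = np.array([1.0, 1.2, 0.9, 1.1])
V_obs = [g_true[i] * g_true[j] * 2.0 for i, j in pairs]
g_fit, s_fit, misfit = solve_vlbi(V_obs, pairs, n_ant=4)
```

Note the gain-flux scale degeneracy (g can absorb a factor that s compensates): the fit drives the misfit to zero without pinning down the absolute scale, which is why real calibration adds constraints.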
Li, Yongming; Tong, Shaocheng
2017-06-28
In this paper, an adaptive neural network (NN)-based decentralized control scheme with prescribed performance is proposed for uncertain switched nonstrict-feedback interconnected nonlinear systems. It is assumed that the nonlinear interconnected terms and nonlinear functions of the concerned systems are unknown, and that the switching signals are unknown and arbitrary. A linear state estimator is constructed to solve the problem of unmeasured states. NNs are employed to approximate the unknown interconnected terms and nonlinear functions. A new output feedback decentralized control scheme is developed by using the adaptive backstepping design technique. The control design problem of nonlinear interconnected switched systems with unknown switching signals can be solved by the proposed scheme, and only one tuning parameter is needed for each subsystem. The proposed scheme ensures that all variables of the control systems are semi-globally uniformly ultimately bounded and that the tracking errors converge to a small residual set with the prescribed performance bound. The effectiveness of the proposed control approach is verified by simulation results.
Ebel, B.A.; Mirus, B.B.; Heppner, C.S.; VanderKwaak, J.E.; Loague, K.
2009-01-01
Distributed hydrologic models capable of simulating fully-coupled surface water and groundwater flow are increasingly used to examine problems in the hydrologic sciences. Several techniques are currently available to couple the surface and subsurface; the two most frequently employed approaches are first-order exchange coefficients (a.k.a., the surface conductance method) and enforced continuity of pressure and flux at the surface-subsurface boundary condition. The effort reported here examines the parameter sensitivity of simulated hydrologic response for the first-order exchange coefficients at a well-characterized field site using the fully coupled Integrated Hydrology Model (InHM). This investigation demonstrates that the first-order exchange coefficients can be selected such that the simulated hydrologic response is insensitive to the parameter choice, while simulation time is considerably reduced. Alternatively, the ability to choose a first-order exchange coefficient that intentionally decouples the surface and subsurface facilitates concept-development simulations to examine real-world situations where the surface-subsurface exchange is impaired. While the parameters comprising the first-order exchange coefficient cannot be directly estimated or measured, the insensitivity of the simulated flow system to these parameters (when chosen appropriately) combined with the ability to mimic actual physical processes suggests that the first-order exchange coefficient approach can be consistent with a physics-based framework. Copyright ?? 2009 John Wiley & Sons, Ltd.
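The first-order exchange (surface conductance) coupling itself is a one-line relation; a minimal sketch:

```python
def exchange_flux(h_surface, h_subsurface, k_exchange):
    """First-order exchange (surface conductance) coupling: the
    surface-subsurface flux is proportional to the head difference.
    A large coefficient approximates continuity of pressure and flux
    at the boundary; a very small one effectively decouples the two
    domains, as exploited in the concept-development simulations
    described above. Units and magnitudes are illustrative."""
    return k_exchange * (h_surface - h_subsurface)
```

The insensitivity reported in the abstract corresponds to the regime where k_exchange is large enough that the head difference, not the coefficient, controls the simulated flux.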
Impact of polymer structure and composition on fully resorbable endovascular scaffold performance
Ferdous, Jahid; Kolachalama, Vijaya B.; Shazly, Tarek
2014-01-01
Fully erodible endovascular scaffolds are being increasingly considered for the treatment of obstructive arterial disease owing to their potential to mitigate long-term risks associated with permanent alternatives. While complete scaffold erosion facilitates vessel healing, generation and release of material degradation by-products from candidate materials such as poly-l-lactide (PLLA) may elicit local inflammatory responses that limit implant efficacy. We developed a computational framework to quantify how the compositional and structural parameters of PLLA-based fully erodible endovascular scaffolds affect degradation kinetics, erosion kinetics and the transient accumulation of material by-products within the arterial wall. Parametric studies reveal that, while some material properties have similar effects on these critical processes, others induce qualitatively opposing responses. For example, scaffold degradation is only mildly responsive to changes in either PLLA polydispersity or the initial degree of crystallinity, while the erosion kinetics is comparatively sensitive to crystallinity. Moreover, lactide doping can effectively tune both scaffold degradation and erosion, but a concomitant increase in local byproduct accumulation raises concerns about implant safety. Optimized erodible endovascular scaffolds must precisely balance therapeutic function and biological response over the implant lifetime, where compositional and structural parameters will have differential effects on implant performance. PMID:23261926
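A minimal sketch of the degradation-to-erosion coupling, assuming first-order hydrolytic scission and a solubility threshold (both are assumptions for illustration; the paper's computational framework is far more detailed and includes polydispersity, crystallinity, and by-product transport):

```python
import numpy as np

def plla_degradation(Mw0, k_d, t, Mw_crit):
    """First-order hydrolytic chain scission: molecular weight decays
    as Mw(t) = Mw0*exp(-k_d*t); material below Mw_crit is treated as
    a soluble by-product available for erosion. Purely illustrative
    kinetics with hypothetical rate constant and threshold."""
    Mw = Mw0 * np.exp(-k_d * np.asarray(t, dtype=float))
    return Mw, Mw < Mw_crit
```

Even this caricature shows why degradation and erosion can respond differently to material parameters: k_d controls the decay everywhere, while Mw_crit only moves the onset of erosion.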
Miniaturized Ka-Band Dual-Channel Radar
NASA Technical Reports Server (NTRS)
Hoffman, James P.; Moussessian, Alina; Jenabi, Masud; Custodero, Brian
2011-01-01
Smaller (volume, mass, power) electronics for a Ka-band (36 GHz) radar interferometer were required. To reduce size and achieve better control over RF phase versus temperature, fully hybrid electronics were developed for the RF portion of the radar's two-channel receiver and single-channel transmitter. In this context, fully hybrid means that every active RF device was an open die, and all passives were directly attached to the subcarrier. Attachments were made using wire and ribbon bonding. In this way, every component, even the small passives, was individually selected for the fabrication of the two radar receivers, and the devices were mounted relative to each other in order to make complementary components isothermal and to isolate other components from potential temperature gradients. This is critical for developing receivers that can track each other's phase over temperature, which is a key mission driver for obtaining ocean surface height. A fully hybrid, Ka-band (36 GHz) radar transmitter and dual-channel receiver were developed for spaceborne radar interferometry. The fully hybrid fabrication enables control over every aspect of component selection, placement, and connection. Since the two receiver channels must track each other to better than 100 millidegrees of RF phase over several minutes, the hardware in the two receivers must be "identical," routed the same way (same line lengths), and as isothermal as possible. This level of design freedom is not possible with packaged components, which include many internal passives, unknown internal connection lengths/types, and often a single orientation of inputs and outputs.
Temporal gravity field modeling based on least square collocation with short-arc approach
NASA Astrophysics Data System (ADS)
Ran, Jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet
2014-05-01
After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest gravity model based on different approaches. In this study, we present an alternative approach to derive the Earth's gravity field, and two main objectives are discussed. Firstly, we seek the optimal method to estimate the accelerometer parameters, and secondly, we intend to recover the monthly gravity model based on the least squares collocation method. This method has received less attention than the least squares adjustment method because of its massive computational requirements. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance-covariance matrices of the pseudo-observations and the unknown parameters are valuable information for improving the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of our estimated monthly gravity field. The gravity errors outside the continents are significantly reduced with the selected set of accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to the gravity solutions computed by the GeoForschungsZentrum (GFZ), the Center for Space Research (CSR) and the NASA Jet Propulsion Laboratory (JPL). In terms of the equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.
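The accelerometer parameterization question, bias only versus bias plus drift, can be illustrated with a simple least-squares fit on synthetic data (a sketch of the parameterization choice, not the GRACE processing chain):

```python
import numpy as np

def fit_accelerometer(t, a_meas, use_drift=True):
    """Least-squares fit of accelerometer calibration parameters:
    a bias alone, or a bias plus a linear drift term, the addition
    the study reports as improving the monthly gravity solutions.
    Illustrative sketch with synthetic data."""
    cols = [np.ones_like(t)]
    if use_drift:
        cols.append(t)
    A = np.column_stack(cols)
    params, *_ = np.linalg.lstsq(A, a_meas, rcond=None)
    residual = a_meas - A @ params
    return params, np.sqrt(np.mean(residual ** 2))

# synthetic record with a hypothetical bias of 0.5 and drift of 0.01 per unit time
t = np.linspace(0.0, 100.0, 201)
params_bd, rms_bd = fit_accelerometer(t, 0.5 + 0.01 * t, use_drift=True)
params_b, rms_b = fit_accelerometer(t, 0.5 + 0.01 * t, use_drift=False)
```

When a drift is actually present, the bias-only model leaves a large structured residual, which is the kind of signal that leaks into the estimated gravity field.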
The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions
NASA Astrophysics Data System (ADS)
Loaiciga, Hugo A.; Mariño, Miguel A.
1987-01-01
The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship between the known heads and unknown parameters of the flow equation provides the background for developing criteria to determine the identifiability status of the unknown parameters. Under conditions of exact or overidentification, it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and perform hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
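The two-stage least squares estimator discussed above can be sketched generically (a textbook form, not the aquifer-specific implementation; the synthetic data below are hypothetical):

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Generic two-stage least squares: project the endogenous
    regressors X onto the instrument space spanned by Z, then run
    ordinary least squares of y on the projected regressors."""
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)    # projection onto instruments
    X_hat = Pz @ X
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta

# demo: X is endogenous (shares a disturbance with y), Z is a clean instrument
rng = np.random.default_rng(0)
n = 10000
Z = rng.normal(size=(n, 1))
e = rng.normal(size=n)                        # common disturbance
X = Z[:, 0] + e
y = 2.0 * X + e                               # true coefficient: 2.0
beta_2sls = two_stage_least_squares(y, X.reshape(-1, 1), Z)
beta_ols, *_ = np.linalg.lstsq(X.reshape(-1, 1), y, rcond=None)
```

The demo reproduces the qualitative point of the paper's comparison: ordinary least squares is biased when regressors and errors are correlated, while the instrumented estimator remains consistent.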
NASA Astrophysics Data System (ADS)
Škoda, Petr; Palička, Andrej; Koza, Jakub; Shakurova, Ksenia
2017-06-01
The current archives of the LAMOST multi-object spectrograph contain millions of fully reduced spectra, from which the automatic pipelines have produced catalogues of many parameters of individual objects, including their approximate spectral classification. This is, however, mostly based on the global shape of the whole spectrum and on integral properties of spectra in given bandpasses, namely the presence and equivalent width of prominent spectral lines, while for identification of some interesting object types (e.g. Be stars or quasars) the detailed shape of only a few lines is crucial. Here, machine learning brings a new methodology capable of improving the reliability of classification for such objects even in borderline cases. We present results of Spark-based semi-supervised machine learning of LAMOST spectra attempting to automatically identify the single- and double-peak emission of the Hα line typical for Be and B[e] stars. The labelled sample was obtained from the archive of the 2m Perek telescope at Ondřejov observatory. A simple physical model of spectrograph resolution was used for domain adaptation to the LAMOST training domain. The resulting list of candidates contains dozens of Be stars (some likely yet unknown), but also a number of interesting objects resembling spectra of quasars and even blazars, as well as many instrumental artefacts. The verification of the nature of the interesting candidates benefited considerably from cross-matching and visualisation in the Virtual Observatory environment.
Simulation of naturally fractured reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saidi, A.M.
1983-11-01
A three-dimensional, three-phase reservoir simulator was developed to study the behavior of fully or partially fractured reservoirs. It is also demonstrated that when a fractured reservoir is subject to a relatively large rate of pressure drop and/or is composed of relatively large blocks, the pseudo steady-state pressure concept gives large errors compared with a transient formulation. In addition, when the gravity drainage and imbibition processes, which are the most important mechanisms in fractured reservoirs, are represented by a ''lumped parameter'', even larger errors can be produced in the exchange flow between matrix and fractures. For these reasons, the matrix blocks are gridded and the transfer between matrix and fractures is calculated using a pressure and diffusion transient concept. In this way the gravity drainage is also calculated accurately. As the matrix-fracture exchange flow depends on the location of each matrix grid relative to the GOC and/or WOC in the fracture, the exchange flow equations are derived and given for each possible case. The differential equations describing the flow of water, oil, and gas within the matrix and fracture system, each of which may contain six unknowns, are presented. The two sets of equations are solved implicitly for pressure, water, and gas saturation in both matrix and fractures. The first twenty-two years of the history of the Haft Kel field were successfully matched with this model and the results are included.
Testing local-realism and macro-realism under generalized dichotomic measurements
NASA Astrophysics Data System (ADS)
Das, Debarshi; Mal, Shiladitya; Home, Dipankar
2018-04-01
Generalized quantum measurements with two outcomes are fully characterized by two real parameters, dubbed the sharpness parameter and the biasedness parameter, which can be linked with different aspects of the experimental setup. It is known that the sharpness parameter characterizes the precision of the measurements, and that decreasing it reduces the possibility of probing quantum features such as the quantum mechanical (QM) violation of local-realism (LR) or macro-realism (MR). Here we investigate the effect of biasedness together with that of sharpness of measurements and find a trade-off between the two parameters in the context of probing QM violations of LR and MR. Interestingly, we also find that the above-mentioned trade-off is more robust in the latter case.
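A standard parameterization of such two-outcome measurements in the unsharp-measurement literature takes effects E(+/-) = ((1 +/- gamma) I +/- lambda n.sigma)/2, where lambda is the sharpness and gamma the biasedness; positivity of both effects then requires lambda + |gamma| <= 1, which already exhibits a trade-off between the two parameters. This parameterization is assumed for illustration, not quoted from the paper:

```python
import numpy as np

def dichotomic_povm(sharpness, bias, axis=(0.0, 0.0, 1.0)):
    """Two-outcome generalized measurement with effects
    E(+/-) = ((1 +/- bias) I +/- sharpness * n.sigma) / 2,
    a standard parameterization (assumed here for illustration)."""
    I = np.eye(2, dtype=complex)
    sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
             np.array([[0, -1j], [1j, 0]]),
             np.array([[1, 0], [0, -1]], dtype=complex)]
    n_sigma = sum(a * s for a, s in zip(axis, sigma))
    E_plus = 0.5 * ((1 + bias) * I + sharpness * n_sigma)
    E_minus = 0.5 * ((1 - bias) * I - sharpness * n_sigma)
    return E_plus, E_minus

def is_valid(E_plus, E_minus, tol=1e-12):
    """Check that both effects are positive semidefinite."""
    ok = lambda E: np.min(np.linalg.eigvalsh(E)) >= -tol
    return ok(E_plus) and ok(E_minus)
```

For example, (lambda, gamma) = (0.6, 0.3) is an admissible measurement, while (0.9, 0.3) violates the positivity constraint.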
Quantum Optimization of Fully Connected Spin Glasses
NASA Astrophysics Data System (ADS)
Venturelli, Davide; Mandrà, Salvatore; Knysh, Sergey; O'Gorman, Bryan; Biswas, Rupak; Smelyanskiy, Vadim
2015-07-01
Many NP-hard problems can be seen as the task of finding a ground state of a disordered highly connected Ising spin glass. If solutions are sought by means of quantum annealing, it is often necessary to represent those graphs in the annealer's hardware by means of the graph-minor embedding technique, generating a final Hamiltonian consisting of coupled chains of ferromagnetically bound spins, whose binding energy is a free parameter. In order to investigate the effect of embedding on problems of interest, the fully connected Sherrington-Kirkpatrick model with random ±1 couplings is programmed on the D-Wave Two™ annealer using up to 270 qubits interacting on a Chimera-type graph. We present the best embedding prescriptions for encoding the Sherrington-Kirkpatrick problem in the Chimera graph. The results indicate that the optimal choice of embedding parameters could be associated with the emergence of the spin-glass phase of the embedded problem, whose presence was previously uncertain. This optimal parameter setting allows the performance of the quantum annealer to compete with (and potentially outperform, in the absence of analog control errors) optimized simulated annealing algorithms.
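The underlying objects, random ±1 Sherrington-Kirkpatrick couplings and the ferromagnetic chain binding that the embedding introduces, can be sketched as follows (a toy illustration; the Chimera connectivity itself is not modelled):

```python
import numpy as np

def sk_instance(n, rng):
    """Fully connected Sherrington-Kirkpatrick couplings with random
    +/-1 entries (symmetric, zero diagonal)."""
    J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), k=1)
    return J + J.T

def energy(J, s):
    """Ising energy E = -1/2 * s.J.s for spins s in {-1, +1}."""
    return -0.5 * s @ J @ s

def chain_penalty(copies, J_F):
    """Toy embedding cost: a logical spin mapped to a ferromagnetic
    chain pays the binding energy J_F for every broken bond between
    adjacent physical copies. Illustrates the free binding-energy
    parameter tuned by the embedding prescriptions."""
    c = np.asarray(copies)
    return J_F * float(np.sum(c[:-1] != c[1:]))
```

Setting J_F too low lets chains break (corrupting the logical problem), while setting it too high compresses the logical energy scale relative to the hardware's analog precision, which is the tension behind the optimal embedding parameters discussed above.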
2015-01-01
Targeted environmental monitoring reveals contamination by known chemicals, but may exclude potentially pervasive but unknown compounds. Marine mammals are sentinels of persistent and bioaccumulative contaminants due to their longevity and high trophic position. Using nontargeted analysis, we constructed a mass spectral library of 327 persistent and bioaccumulative compounds identified in blubber from two ecotypes of common bottlenose dolphins (Tursiops truncatus) sampled in the Southern California Bight. This library of halogenated organic compounds (HOCs) consisted of 180 anthropogenic contaminants, 41 natural products, 4 with mixed sources, 8 with unknown sources, and 94 with partial structural characterization and unknown sources. The abundance of compounds whose structures could not be fully elucidated highlights the prevalence of undiscovered HOCs accumulating in marine food webs. Eighty-six percent of the identified compounds are not currently monitored, including 133 known anthropogenic chemicals. Compounds related to dichlorodiphenyltrichloroethane (DDT) were the most abundant. Natural products were, in some cases, detected at abundances similar to anthropogenic compounds. The profile of naturally occurring HOCs differed between ecotypes, suggesting more abundant offshore sources of these compounds. This nontargeted analytical framework provided a comprehensive list of HOCs that may be characteristic of the region, and its application within monitoring surveys may suggest new chemicals for evaluation. PMID:25526519
NASA Astrophysics Data System (ADS)
Huang, Chen; Chi, Yu-Chieh
2017-12-01
The key element in Kohn-Sham (KS) density functional theory is the exchange-correlation (XC) potential. We recently proposed the exchange-correlation potential patching (XCPP) method with the aim of directly constructing a high-level XC potential in a large system by patching the locally computed, high-level XC potentials throughout the system. In this work, we investigate the patching of the exact exchange (EXX) and the random phase approximation (RPA) correlation potentials. A major challenge of XCPP is that a cluster's XC potential, obtained by solving the optimized effective potential equation, is only determined up to an unknown constant. Without fully determining the clusters' XC potentials, the patched system's XC potential is "uneven" in real space and may cause non-physical results. Here, we developed a simple method to determine this unknown constant. The performance of XCPP-RPA is investigated on three one-dimensional systems: H20, H10Li8, and the stretching of the H19-H bond. We investigated two definitions of EXX: (i) the definition based on the adiabatic connection and fluctuation dissipation theorem (ACFDT) and (ii) the Hartree-Fock (HF) definition. With ACFDT-type EXX, effective error cancellations were observed between the patched EXX and the patched RPA correlation potentials. Such error cancellations were absent for the HF-type EXX, which was attributed to the fact that for systems with fractional occupation numbers, the integral of the HF-type EXX hole is not -1. The KS spectra and band gaps from XCPP agree reasonably well with the benchmarks as we make the clusters large.
Discontinuous dual-primal mixed finite elements for elliptic problems
NASA Technical Reports Server (NTRS)
Bottasso, Carlo L.; Micheletti, Stefano; Sacco, Riccardo
2000-01-01
We propose a novel discontinuous mixed finite element formulation for the solution of second-order elliptic problems. Fully discontinuous piecewise polynomial finite element spaces are used for the trial and test functions. The discontinuous nature of the test functions at the element interfaces makes it possible to introduce new boundary unknowns that, on the one hand, enforce the weak continuity of the trial functions, and on the other hand, avoid the need to define a priori algorithmic fluxes as in standard discontinuous Galerkin methods. Static condensation is performed at the element level, leading to a solution procedure based on the interface unknowns alone. The resulting family of discontinuous dual-primal mixed finite element methods is presented in the one- and two-dimensional cases. In the one-dimensional case, we show the equivalence of the method with implicit Runge-Kutta schemes of the collocation type exhibiting optimal behavior. Numerical experiments in one and two dimensions demonstrate the order of accuracy of the new method, confirming the results of the analysis.
Extremum seeking with bounded update rates
Scheinker, Alexander; Krstić, Miroslav
2013-11-16
In this work, we present a form of extremum seeking (ES) in which the unknown function being minimized enters the system's dynamics as the argument of a cosine or sine term, thereby guaranteeing known bounds on update rates and control efforts. We present general n-dimensional optimization and stabilization results as well as 2D vehicle control, with bounded velocity and control efforts. For application to autonomous vehicles tracking a source in a GPS-denied environment with unknown orientation, this ES approach allows for smooth heading angle actuation with constant velocity, and in application to a unicycle-type vehicle it results in control authority as if the vehicle were fully actuated. Our stability analysis is made possible by the classic results of Kurzweil, Jarnik, Sussmann, and Liu regarding systems with highly oscillatory terms. In our stability analysis, we combine the averaging results with a semi-global practical stability result under small parametric perturbations developed by Moreau and Aeyels.
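The bounded-update-rate idea can be sketched in one dimension: the unknown cost enters the dynamics only inside a cosine, so the update rate never exceeds alpha*omega by construction, while on average the state descends the cost. The gains below are illustrative choices, not taken from the paper:

```python
import numpy as np

def extremum_seek(f, x0, alpha=0.2, k=2.0, omega=50.0, T=10.0, dt=1e-3):
    """1-D bounded-update-rate extremum seeking:
        xdot = alpha*omega*cos(omega*t + k*f(x)),
    so |xdot| <= alpha*omega by construction. Averaging gives an
    approximate gradient descent on f with gain k*alpha^2*omega/2,
    leaving a residual oscillation of amplitude about alpha."""
    x, t = float(x0), 0.0
    while t < T:
        x += dt * alpha * omega * np.cos(omega * t + k * f(x))
        t += dt
    return x

# seek the minimizer of a hypothetical quadratic cost at x = 2
x_min = extremum_seek(lambda x: (x - 2.0) ** 2, x0=0.0)
```

Note the sketch never evaluates the gradient of f, only f itself, which is the point of extremum seeking; the state converges to a neighborhood of the minimizer whose size scales with alpha.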
Fully Coupled Aero-Thermochemical-Elastic Simulations of an Eroding Graphite Nozzle
NASA Technical Reports Server (NTRS)
Blades, E. L.; Reveles, N. D.; Nucci, M.; Maclean, M.
2017-01-01
A multiphysics simulation capability has been developed that incorporates mutual interactions between aerodynamics, structural response from aero/thermal loading, ablation/pyrolysis, heating, and surface-to-surface radiation to perform high-fidelity, fully coupled aerothermoelastic ablation simulations, which to date had been unattainable. The multiphysics framework couples CHAR (a 3-D implicit charring ablator solver), Loci/CHEM (a computational fluid dynamics solver for high-speed chemically reacting flows), and Abaqus (a nonlinear structural dynamics solver) to create a fully coupled aerothermoelastic charring ablative solver. The solvers are tightly coupled in a fully integrated fashion to resolve the effects of the ablation pyrolysis and charring process and chemistry products upon the flow field, the changes in surface geometry due to recession upon the flow field, and the thermal-structural response of the body to the induced aerodynamic heating from the flow field. The multiphysics framework was successfully demonstrated on a solid rocket motor graphite nozzle erosion application. Comparisons were made with available experimental data that measured the throat erosion during the motor firing. The erosion data is well characterized, as the test rig was equipped with a windowed nozzle section for real-time X-ray radiography diagnostics of the instantaneous throat variations for deducing the instantaneous erosion rates. The nozzle initially undergoes a contraction due to thermal expansion before ablation effects are able to widen the throat. A series of parameter studies was conducted using the coupled simulation capability to determine the sensitivity of the nozzle erosion to different parameters. The parameter studies included the shape of the nozzle throat (flat versus rounded), the material properties, the effect of the choice of turbulence model, and the inclusion or exclusion of the mechanical thermal expansion.
Overall, the predicted results match the experiment very well, and the predictions were able to bound the data within acceptable limits.
NASA Astrophysics Data System (ADS)
Duan, Y.; Durand, M. T.; Jezek, K. C.; Yardim, C.; Bringer, A.; Aksoy, M.; Johnson, J. T.
2017-12-01
The ultra-wideband software-defined microwave radiometer (UWBRAD) is designed to provide an ice sheet internal temperature product by measuring low-frequency microwave emission. The instrument covers twelve channels ranging from 0.5 to 2.0 GHz. A Greenland airborne campaign in September 2016 provided the first demonstration of ultra-wideband radiometer observations of geophysical scenes, including ice sheets. Another flight is planned for September 2017 to acquire measurements over the central ice sheet. A Bayesian framework is designed to retrieve the ice sheet internal temperature from simulated UWBRAD brightness temperature (Tb) measurements over the Greenland flight path with limited prior information about the ground. A 1-D heat-flow model, the Robin model, was used to model the ice sheet internal temperature profile from ground information. Synthetic UWBRAD Tb observations were generated via the partially coherent radiative transfer model, which takes the Robin model temperature profile and an exponential fit of ice density from borehole measurements as input, and were corrupted with noise. The effective surface temperature, the geothermal heat flux, the variance of upper-layer ice density, and the variance of fine-scale density variation in the deeper ice sheet were treated as unknown variables within the retrieval framework. Each parameter is assigned a plausible range and a uniform prior distribution. The Markov chain Monte Carlo (MCMC) approach is applied to let the unknown parameters randomly walk in the parameter space. We investigate whether these variables can be improved over their priors using the MCMC approach and thereby contribute to the temperature retrieval. UWBRAD measurements acquired near Camp Century in 2016 were also processed with the MCMC framework to examine its behavior in the presence of scattering effects. The fine-scale density fluctuation is an important parameter: it is the most sensitive yet most poorly known parameter in the estimation framework.
Including the fine-scale density fluctuation greatly improved the retrieval results. The ice sheet vertical temperature profile, especially the 10 m temperature, can be retrieved well via the MCMC process. Future retrieval work will apply the Bayesian approach to UWBRAD airborne measurements.
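The uniform-prior, random-walk MCMC retrieval described above can be sketched generically. In the toy below, a simple linear model stands in for the radiative-transfer forward model, and all data, prior bounds, and step sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model standing in for the radiative-transfer simulator:
# y = a * x + b, observed with Gaussian noise.
x = np.linspace(0, 1, 30)
a_true, b_true, sigma = 2.0, -0.5, 0.1
y_obs = a_true * x + b_true + rng.normal(0, sigma, x.size)

# Uniform priors: each parameter is given a plausible range, as in the paper.
bounds = np.array([[0.0, 5.0], [-2.0, 2.0]])

def log_post(theta):
    if np.any(theta < bounds[:, 0]) or np.any(theta > bounds[:, 1]):
        return -np.inf                      # outside the uniform prior support
    resid = y_obs - (theta[0] * x + theta[1])
    return -0.5 * np.sum(resid**2) / sigma**2

theta = np.array([2.5, 0.0])                # start inside the prior box
lp = log_post(theta)
samples = []
for i in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = prop, lp_prop
    if i >= 5000:                           # discard burn-in
        samples.append(theta.copy())
post = np.mean(samples, axis=0)             # posterior mean estimate
```

The posterior mean should land near the true parameters; in a real retrieval the residual term would come from running the radiative-transfer model at each proposed parameter vector.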
While relationships between chemical structure and observed properties or activities (QSAR - quantitative structure activity relationship) can be used to predict the behavior of unknown chemicals, this method is semiempirical in nature relying on high quality experimental data to...
Reanalysis of 24 Nearby Open Clusters using Gaia data
NASA Astrophysics Data System (ADS)
Yen, Steffi X.; Reffert, Sabine; Röser, Siegfried; Schilbach, Elena; Kharchenko, Nina V.; Piskunov, Anatoly E.
2018-04-01
We have developed a fully automated cluster characterization pipeline, which simultaneously determines cluster membership and fits the fundamental cluster parameters: distance, reddening, and age. We present results for 24 established clusters and compare them to literature values. Given the large amount of stellar data for clusters available from Gaia DR2 in 2018, this pipeline will be valuable for analyzing the parameters of open clusters in our Galaxy.
Optimal critic learning for robot control in time-varying environments.
Wang, Chen; Li, Yanan; Ge, Shuzhi Sam; Lee, Tong Heng
2015-10-01
In this paper, optimal critic learning is developed for robot control in a time-varying environment. The unknown environment is described as a linear system with time-varying parameters, and impedance control is employed for the interaction control. Desired impedance parameters are obtained in the sense of an optimal realization of the composite of trajectory tracking and force regulation. Q-function-based critic learning is developed to determine the optimal impedance parameters without knowledge of the system dynamics. The simulation results are presented and compared with existing methods, and the efficacy of the proposed method is verified.
Quantum Hamiltonian identification from measurement time traces.
Zhang, Jun; Sarovar, Mohan
2014-08-22
Precise identification of parameters governing quantum processes is a critical task for quantum information and communication technologies. In this Letter, we consider a setting where system evolution is determined by a parametrized Hamiltonian, and the task is to estimate these parameters from temporal records of a restricted set of system observables (time traces). Based on the notion of system realization from linear systems theory, we develop a constructive algorithm that provides estimates of the unknown parameters directly from these time traces. We illustrate the algorithm and its robustness to measurement noise by applying it to a one-dimensional spin chain model with variable couplings.
Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L
2010-07-01
This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.
NASA Astrophysics Data System (ADS)
Shoukat, Sobia; Naqvi, Qaisar A.
2016-12-01
In this manuscript, scattering from a perfect electric conducting strip located at the planar interface of a topological insulator (TI) and a chiral medium is investigated using the Kobayashi Potential method. Longitudinal components of the electric and magnetic vector potentials, expressed in terms of unknown weighting functions, are considered. Use of the related set of boundary conditions yields two algebraic equations and four dual integral equations (DIEs). The integrands of two DIEs are expanded in terms of characteristic functions with expansion coefficients that must simultaneously satisfy the discontinuous property of the Weber-Schafheitlin integrals and the required edge and boundary conditions. The resulting expressions are then combined with the algebraic equations to express the weighting functions in terms of the expansion coefficients, which are then substituted into the remaining DIEs. The projection is applied using Jacobi polynomials. This treatment yields a matrix equation for the expansion coefficients, which is solved numerically. These expansion coefficients are used to find the scattered field. The far-zone scattering width is investigated with respect to different parameters of the geometry, i.e., the chirality of the chiral medium, the angle of incidence, and the size of the strip. Significant effects of different parameters, including the TI parameter, on the scattering width are noted.
A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation
Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang
2013-01-01
We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then deal with one second-order equation employing the finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space, taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we obtain optimal a priori error estimates in the L²- and H¹-norms for both the scalar unknown u and the diffusion term w = −Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both semidiscrete and fully discrete schemes. PMID:23864831
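As a simplified illustration of Crank-Nicolson time discretization, the sketch below applies it to a plain 1-D heat equation with homogeneous Dirichlet boundaries, not the full EFK mixed formulation; the grid, coefficients, and function name are illustrative:

```python
import numpy as np

def crank_nicolson_heat(u0, dx, dt, steps, nu=1.0):
    """Crank-Nicolson stepping for u_t = nu * u_xx on interior points,
    homogeneous Dirichlet boundaries (a toy stand-in for one of the
    second-order sub-equations produced by splitting a fourth-order PDE)."""
    n = len(u0)
    r = nu * dt / (2 * dx**2)
    # tridiagonal second-difference matrix with Dirichlet BCs
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    I = np.eye(n)
    L = I - r * A   # implicit (new-time) side
    R = I + r * A   # explicit (old-time) side
    u = u0.copy()
    for _ in range(steps):
        u = np.linalg.solve(L, R @ u)   # average of implicit and explicit Euler
    return u
```

For a sine-wave initial condition on (0, 1), the numerical solution should decay at close to the exact rate exp(-π²t), reflecting the scheme's second-order accuracy in time and space.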
A Combination of Ontogeny and CNS Environment Establishes Microglial Identity.
Bennett, F Chris; Bennett, Mariko L; Yaqoob, Fazeela; Mulinyawe, Sara B; Grant, Gerald A; Hayden Gephart, Melanie; Plowey, Edward D; Barres, Ben A
2018-05-22
Microglia, the brain's resident macrophages, are dynamic CNS custodians with surprising origins in the extra-embryonic yolk sac. The consequences of their distinct ontogeny are unknown but critical to understanding and treating brain diseases. We created a brain macrophage transplantation system to disentangle how environment and ontogeny specify microglial identity. We find that donor cells extensively engraft in the CNS of microglia-deficient mice, and even after exposure to a cell culture environment, microglia fully regain their identity when returned to the CNS. Though transplanted macrophages from multiple tissues can express microglial genes in the brain, only those of yolk-sac origin fully attain microglial identity. Transplanted macrophages of inappropriate origin, including primary human cells in a humanized host, express disease-associated genes and specific ontogeny markers. Through brain macrophage transplantation, we discover new principles of microglial identity that have broad applications to the study of disease and development of myeloid cell therapies. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Sanchez, M.; Probst, L.; Blazevic, E.; Nakao, B.; Northrup, M. A.
2011-11-01
We describe a fully automated and autonomous airborne biothreat detection system for biosurveillance applications. The system, including the nucleic-acid-based detection assay, was designed, built and shipped by Microfluidic Systems Inc (MFSI), a new subsidiary of PositiveID Corporation (PSID). Our findings demonstrate that the system and assay unequivocally identify pathogenic strains of Bacillus anthracis, Yersinia pestis, Francisella tularensis, Burkholderia mallei, and Burkholderia pseudomallei. In order to assess the assay's ability to detect unknown samples, our team also challenged it against a series of blind samples provided by the Department of Homeland Security (DHS). These samples included naturally occurring isolated strains, near-neighbor isolates, and environmental samples. Our results indicate that the multiplex assay was specific and produced no false positives when challenged with in-house gDNA collections and DHS-provided panels. Here we present another analytical tool for the rapid identification of nine Centers for Disease Control and Prevention category A and B biothreat organisms.
Airport Noise Prediction Model -- MOD 7
DOT National Transportation Integrated Search
1978-07-01
The MOD 7 Airport Noise Prediction Model is fully operational. The language used is Fortran, and it has been run on several different computer systems. Its capabilities include prediction of noise levels for single parameter changes, for multiple cha...
Liu, Yanhui; Zhang, Peihua
2016-09-01
This paper presents a study of the compression behavior of fully covered biodegradable polydioxanone biliary stents (FCBPBs) developed for the human body, using the finite element method. To investigate the relationship between compression force and structural parameters (monofilament diameter and braid-pin number), nine numerical models based on an actual biliary stent were established. The simulation and experimental results for the compression force are in good agreement, indicating that the simulation results can provide a useful reference for the investigation of biliary stents. The stress distribution on the FCBPBs was studied to optimize their structure. In addition, the plastic dissipation and plastic strain of the FCBPBs were obtained via the compression simulation, revealing the effect of the structural parameters on their tolerance. Copyright © 2016 Elsevier Ltd. All rights reserved.
Measurement of pixel response functions of a fully depleted CCD
NASA Astrophysics Data System (ADS)
Kobayashi, Yukiyasu; Niwa, Yoshito; Yano, Taihei; Gouda, Naoteru; Hara, Takuji; Yamada, Yoshiyuki
2014-07-01
We describe the measurement of detailed and precise Pixel Response Functions (PRFs) of a fully depleted CCD. Measurements were performed under different physical conditions, such as different wavelength light sources or CCD operating temperatures. We determined the relations between these physical conditions and the forms of the PRF. We employ two types of PRFs: one is the model PRF (mPRF), which represents the shape of a PRF with one characteristic parameter, and the other is the simulated PRF (sPRF), which results from simulating the underlying physical phenomena. By using measured, model, and simulated PRFs, we determined the relations between operational parameters and the PRFs. Using the obtained relations, we can now estimate a PRF under conditions that will be encountered during the course of Nano-JASMINE observations. These estimated PRFs will be utilized in the analysis of the Nano-JASMINE data.
A Regev-type fully homomorphic encryption scheme using modulus switching.
Chen, Zhigang; Wang, Jian; Chen, Liqun; Song, Xinxia
2014-01-01
A critical challenge in a fully homomorphic encryption (FHE) scheme is to manage noise. The modulus switching technique is currently the most efficient noise management technique. When using modulus switching to design and implement an FHE scheme, choosing concrete parameters is an important step, but to the best of our knowledge, this step has received very little attention in the existing FHE literature. The contributions of this paper are twofold. On one hand, we propose a function giving a lower bound on the dimension used in modulus switching, depending on the specific LWE security level. On the other hand, as a case study, we modify the Brakerski FHE scheme (Crypto 2012) by using the modulus switching technique. We recommend concrete parameter values for our proposed scheme and provide a security analysis. Our results show that the modified FHE scheme is more efficient than the original Brakerski scheme at the same security level.
Social judgment theory based model on opinion formation, polarization and evolution
NASA Astrophysics Data System (ADS)
Chau, H. F.; Wong, C. Y.; Chow, F. K.; Fung, Chi-Hang Fred
2014-12-01
The dynamical origin of opinion polarization in the real world is an interesting topic that physical scientists may help to understand. To properly model the dynamics, the theory must be fully compatible with findings by social psychologists on microscopic opinion change. Here we introduce a generic model of opinion formation with homogeneous agents based on the well-known social judgment theory in social psychology by extending a similar model proposed by Jager and Amblard. The agents’ opinions will eventually cluster around extreme and/or moderate opinions forming three phases in a two-dimensional parameter space that describes the microscopic opinion response of the agents. The dynamics of this model can be qualitatively understood by mean-field analysis. More importantly, first-order phase transition in opinion distribution is observed by evolving the system under a slow change in the system parameters, showing that punctuated equilibria in public opinion can occur even in a fully connected social network.
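A minimal sketch of a Jager-Amblard-style social-judgment update rule follows; it is not the authors' exact model, and the thresholds, rates, and population size are illustrative. Opinions within a latitude of acceptance assimilate, while opinions beyond a latitude of rejection repel:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(n=200, d_accept=0.3, d_reject=1.2, mu=0.1, steps=50000):
    """Pairwise opinion dynamics on a fully connected population:
    each step a random pair interacts; close opinions move together
    (assimilation), distant opinions move apart (contrast)."""
    x = rng.uniform(-1, 1, n)        # opinions on the interval [-1, 1]
    for _ in range(steps):
        i, j = rng.integers(0, n, 2)
        if i == j:
            continue
        diff = x[j] - x[i]
        if abs(diff) < d_accept:      # within latitude of acceptance
            x[i] += mu * diff
            x[j] -= mu * diff
        elif abs(diff) > d_reject:    # within latitude of rejection
            x[i] -= mu * diff
            x[j] += mu * diff
        x[i] = min(1.0, max(-1.0, x[i]))   # opinions stay bounded
        x[j] = min(1.0, max(-1.0, x[j]))
    return x

ops = simulate()
```

Sweeping `d_accept` and `d_reject` traces out the two-dimensional parameter space in which the clustering phases described above appear.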
NASA Astrophysics Data System (ADS)
Naseralavi, S. S.; Salajegheh, E.; Fadaee, M. J.; Salajegheh, J.
2014-06-01
This paper presents a technique for damage detection in structures under unknown periodic excitations using the transient displacement response. The method is capable of identifying the damage parameters without finding the input excitations. We first define the concept of displacement space as a linear space in which each point represents displacements of structure under an excitation and initial condition. Roughly speaking, the method is based on the fact that structural displacements under free and forced vibrations are associated with two parallel subspaces in the displacement space. Considering this novel geometrical viewpoint, an equation called kernel parallelization equation (KPE) is derived for damage detection under unknown periodic excitations and a sensitivity-based algorithm for solving KPE is proposed accordingly. The method is evaluated via three case studies under periodic excitations, which confirm the efficiency of the proposed method.
Object-Image Correspondence for Algebraic Curves under Projections
NASA Astrophysics Data System (ADS)
Burdis, Joseph M.; Kogan, Irina A.; Hong, Hoon
2013-03-01
We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. The motivation comes from the problem of establishing a correspondence between an object and an image, taken by a camera with unknown position and parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison to algorithms based on the straightforward approach, lies in a significant reduction of the number of real parameters that need to be eliminated in order to establish existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. To solve the latter problem we make an algebraic adaptation of the signature construction that has been used to solve the equivalence problems for smooth curves. We introduce a notion of a classifying set of rational differential invariants and produce explicit formulas for such invariants for the actions of the projective and the affine groups on the plane.
Parameter estimation for stiff deterministic dynamical systems via ensemble Kalman filter
NASA Astrophysics Data System (ADS)
Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki
2014-10-01
A commonly encountered problem in numerous areas of applications is to estimate the unknown coefficients of a dynamical system from direct or indirect observations at discrete times of some of the components of the state vector. A related problem is to estimate unobserved components of the state. An egregious example of such a problem is provided by metabolic models, in which the numerous model parameters and the concentrations of the metabolites in tissue are to be estimated from concentration data in the blood. A popular method for addressing similar questions in stochastic and turbulent dynamics is the ensemble Kalman filter (EnKF), a particle-based filtering method that generalizes classical Kalman filtering. In this work, we adapt the EnKF algorithm for deterministic systems in which the numerical approximation error is interpreted as a stochastic drift with variance based on classical error estimates of numerical integrators. This approach, which is particularly suitable for stiff systems where the stiffness may depend on the parameters, allows us to effectively exploit the parallel nature of particle methods. Moreover, we demonstrate how spatial prior information about the state vector, which helps the stability of the computed solution, can be incorporated into the filter. The viability of the approach is shown by computed examples, including a metabolic system modeling an ischemic episode in skeletal muscle, with a high number of unknown parameters.
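A minimal sketch of the augmented-state ensemble Kalman filter idea for parameter estimation follows, using a scalar decay ODE in place of a metabolic model. All values are illustrative, and the paper's numerical-integrator error model is replaced here by a small fixed stochastic drift:

```python
import numpy as np

rng = np.random.default_rng(0)

# True system: x' = -k x, observed at discrete times with noise.
k_true, x0, dt, n_obs = 0.5, 5.0, 0.2, 40
ts = dt * np.arange(1, n_obs + 1)
obs = x0 * np.exp(-k_true * ts) + rng.normal(0, 0.05, n_obs)

# Augmented-state EnKF: each ensemble member carries (x, k).
n_ens = 200
X = np.empty((2, n_ens))
X[0] = x0                              # known initial state
X[1] = rng.uniform(0.1, 2.0, n_ens)    # prior ensemble for the unknown rate k
R = 0.05**2                            # observation noise variance

for y in obs:
    # Forecast: propagate each member over one observation interval
    # (exact flow here; a numerical integrator whose error enters as
    # stochastic drift would be used for a stiff system).
    X[0] *= np.exp(-X[1] * dt)
    X[0] += rng.normal(0, 1e-3, n_ens)     # small drift to avoid collapse
    # Analysis: Kalman update with observation operator H = [1, 0].
    A = X - X.mean(axis=1, keepdims=True)
    P = A @ A.T / (n_ens - 1)              # ensemble covariance
    K = P[:, [0]] / (P[0, 0] + R)          # gain for the scalar observation
    X += K * (y + rng.normal(0, np.sqrt(R), n_ens) - X[0])

k_est = X[1].mean()    # posterior mean of the unknown parameter
```

The cross-covariance between the observed state and the parameter is what pulls the parameter ensemble toward values consistent with the data, even though the parameter itself is never observed.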
NASA Astrophysics Data System (ADS)
Xing, X.; Yuan, Z.; Chen, L. F.; Yu, X. Y.; Xiao, L.
2018-04-01
Stability control is one of the major technical difficulties in highway subgrade construction engineering. Building a deformation model is a crucial step for InSAR time-series deformation monitoring. Most InSAR deformation models used for monitoring are purely empirical mathematical models that do not consider the physical mechanism of the monitored object. In this study, we take rheology into consideration, introducing rheological parameters into traditional InSAR deformation models. To assess the feasibility and accuracy of our new model, both simulated and real deformation data over the Lungui highway (a typical highway built on a soft clay subgrade in Guangdong province, China) are investigated with TerraSAR-X satellite imagery. To solve for the unknowns of the non-linear rheological model, three algorithms are utilized and compared: Gauss-Newton (GN), Levenberg-Marquardt (LM), and a Genetic Algorithm (GA). Considering both calculation efficiency and accuracy, GA is chosen for the new model in our case study. A preliminary real-data experiment is conducted using 17 TerraSAR-X Stripmap images (with a 3-m resolution). With the new deformation model and the GA described above, the unknown rheological parameters are obtained over all high-coherence points and the LOS deformation sequences (the low-pass component) are generated.
Topology-selective jamming of fully-connected, code-division random-access networks
NASA Technical Reports Server (NTRS)
Polydoros, Andreas; Cheng, Unjeng
1990-01-01
The purpose is to introduce certain models of topology selective stochastic jamming and examine its impact on a class of fully-connected, spread-spectrum, slotted ALOHA-type random access networks. The theory covers dedicated as well as half-duplex units. The dominant role of the spatial duty factor is established, and connections with the dual concept of time selective jamming are discussed. The optimal choices of coding rate and link access parameters (from the users' side) and the jamming spatial fraction are numerically established for DS and FH spreading.
Bonabeau model on a fully connected graph
NASA Astrophysics Data System (ADS)
Malarz, K.; Stauffer, D.; Kułakowski, K.
2006-03-01
Numerical simulations are reported for the Bonabeau model on a fully connected graph, where spatial degrees of freedom are absent. The control parameter is the memory factor f. A phase transition is observed in the dispersion of the agents' power h_i. The critical value f_C shows hysteretic behavior with respect to the initial distribution of h_i. f_C decreases with the system size; this decrease can be compensated by a greater number of fights between successive global reductions of the distribution width of h_i. The latter step is equivalent to a partial forgetting.
Geostatistical models are appropriate for spatially distributed data measured at irregularly spaced locations. We propose an efficient Markov chain Monte Carlo (MCMC) algorithm for fitting Bayesian geostatistical models with substantial numbers of unknown parameters to sizable...
Bootstrap Methods: A Very Leisurely Look.
ERIC Educational Resources Information Center
Hinkle, Dennis E.; Winstead, Wayland H.
The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for generating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
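The bootstrap procedure the abstract describes can be sketched in a few lines (Python rather than SAS; the data are synthetic): resample with replacement, recompute the statistic on each replicate, and take the spread of the replicates as the standard error.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(10, 3, 50)     # a synthetic sample whose statistic we study

def bootstrap_se(sample, stat, n_boot=5000):
    """Bootstrap standard error of an arbitrary statistic: resample the
    data with replacement, recompute the statistic on each replicate,
    and return the standard deviation across replicates."""
    n = len(sample)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        reps[b] = stat(sample[rng.integers(0, n, n)])
    return reps.std(ddof=1)

se_mean = bootstrap_se(data, np.mean)
# For the sample mean there is an analytic benchmark, s / sqrt(n):
analytic = data.std(ddof=1) / np.sqrt(len(data))
```

The same `bootstrap_se` call works unchanged for statistics with no convenient closed-form standard error, such as regression or discriminant coefficients, which is the method's main appeal.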
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, Jai-chan; Noh, Hyerim
Special relativistic hydrodynamics with weak gravity has hitherto been unknown in the literature, and whether such an asymmetric combination is possible has been unclear. Here, the hydrodynamic equations with Poisson-type gravity, considering fully relativistic velocity and pressure under the weak-gravity and action-at-a-distance limits, are consistently derived from Einstein's theory of general relativity. An analysis is made in the maximal slicing, where the Poisson equation becomes much simpler than in our previous study in the zero-shear gauge. Also presented are the hydrodynamic equations in the first post-Newtonian approximation, now under a general hypersurface condition. Our formulation includes the anisotropic stress.
Ulcerative colitis precipitated by a verocytotoxin-producing Escherichia coli infection.
Farina, C; Caprioli, A; Luzzi, I; Sonzogni, A; Goglio, A
1995-12-01
The aetiology of ulcerative colitis remains unknown, despite extensive research into likely causes, such as infections, diet, environmental factors, immunological or genetic defects, psychomotor disorders, and abnormalities of mucin. We report here a case of ulcerative colitis in which the first episode of the disease was associated with serologic evidence of infection by verocytotoxin (VT)-producing O157 Escherichia coli (VTEC), possibly the trigger factor of a previously silent ulcerative colitis. Although histological reports of ulcerative colitis associated with VTEC infection are sporadically reported, the trigger role of VTEC in precipitating, aggravating or prolonging this pathology should be more fully elucidated.
Markers of Oral Lichen Planus Malignant Transformation
Tampa, Mircea; Mitran, Madalina; Mitran, Cristina; Matei, Clara; Georgescu, Simona-Roxana
2018-01-01
Oral lichen planus (OLP) is a chronic inflammatory disease of unknown etiology with significant impact on patients' quality of life. Malignant transformation into oral squamous cell carcinoma (OSCC) is considered as one of the most serious complications of the disease; nevertheless, controversy still persists. Various factors seem to be involved in the progression of malignant transformation; however, the mechanism of this process is not fully understood yet. Molecular alterations detected in OLP samples might represent useful biomarkers for predicting and monitoring the malignant progression. In this review, we discuss various studies which highlight different molecules as ominous predictors of OLP malignant transformation. PMID:29682099
Ground-state candidate for the classical dipolar kagome Ising antiferromagnet
NASA Astrophysics Data System (ADS)
Chioar, I. A.; Rougemaille, N.; Canals, B.
2016-06-01
We have investigated the low-temperature thermodynamic properties of the classical dipolar kagome Ising antiferromagnet using Monte Carlo simulations, in the quest for the ground-state manifold. In spite of the limitations of a single-spin-flip approach, we managed to identify certain ordering patterns in the low-temperature regime and we propose a candidate for this unknown state. This configuration presents some intriguing features and is fully compatible with the extrapolations of the at-equilibrium thermodynamic behavior sampled so far, making it a very likely choice for the dipolar long-range ordered state of the classical kagome Ising antiferromagnet.
MoCha: Molecular Characterization of Unknown Pathways.
Lobo, Daniel; Hammelman, Jennifer; Levin, Michael
2016-04-01
Automated methods for the reverse-engineering of complex regulatory networks are paving the way for the inference of mechanistic comprehensive models directly from experimental data. These novel methods can infer not only the relations and parameters of the known molecules defined in their input datasets, but also unknown components and pathways identified as necessary by the automated algorithms. Identifying the molecular nature of these unknown components is a crucial step for making testable predictions and experimentally validating the models, yet no specific and efficient tools exist to aid in this process. To this end, we present here MoCha (Molecular Characterization), a tool optimized for the search of unknown proteins and their pathways from a given set of known interacting proteins. MoCha uses the comprehensive dataset of protein-protein interactions provided by the STRING database, which currently includes more than a billion interactions from over 2,000 organisms. MoCha is highly optimized, performing typical searches within seconds. We demonstrate the use of MoCha with the characterization of unknown components from reverse-engineered models from the literature. MoCha is useful for working on network models by hand or as a downstream step of a model inference engine workflow and represents a valuable and efficient tool for the characterization of unknown pathways using known data from thousands of organisms. MoCha and its source code are freely available online under the GPLv3 license.
The route to chaos for the Kuramoto-Sivashinsky equation
NASA Technical Reports Server (NTRS)
Papageorgiou, Demetrios T.; Smyrlis, Yiorgos
1990-01-01
We present the results of extensive numerical experiments on the spatially periodic initial value problem for the Kuramoto-Sivashinsky equation. This paper is concerned with the asymptotic nonlinear dynamics as the dissipation parameter decreases and spatio-temporal chaos sets in. To this end the initial condition is taken to be the same for all numerical experiments (a single sine wave is used) and the large-time evolution of the system is followed numerically. Numerous computations were performed to establish the existence of windows, in parameter space, in which the solution has the following characteristics as the viscosity is decreased: a steady fully modal attractor to a steady bimodal attractor to another steady fully modal attractor to a steady trimodal attractor to a periodic attractor, to another steady fully modal attractor, to another periodic attractor, to a steady tetramodal attractor, to another periodic attractor having a full sequence of period-doublings (in parameter space) to chaos. Numerous solutions are presented which provide conclusive evidence of the period-doubling cascades that precede chaos for this infinite-dimensional dynamical system. These results permit a computation of the lengths of the subwindows, which in turn provide an estimate of their successive ratios as the cascade develops. A calculation based on the numerical results is also presented to show that the period-doubling sequences found here for the Kuramoto-Sivashinsky equation are in complete agreement with Feigenbaum's universal constant of 4.669201609... . Some preliminary work reveals several other windows following the first chaotic one, including periodic, chaotic, and a steady octamodal window; however, the windows shrink significantly in size, preventing concrete quantitative conclusions from being made.
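The universal ratio mentioned above can be reproduced on the logistic map, a standard stand-in for any period-doubling cascade (the Kuramoto-Sivashinsky computation itself is far heavier). This sketch locates the superstable parameters R_k, where the critical point lies on a period-2^k orbit, by Newton's method, and takes ratios of successive gaps:

```python
def feigenbaum_delta(kmax=8):
    """Estimate Feigenbaum's constant from ratios of successive superstable
    parameters R_k of the logistic map x -> r*x*(1-x), where the orbit of
    the critical point x = 1/2 has period 2**k."""
    rs = [2.0, 1 + 5 ** 0.5]     # R_0 and R_1 are known in closed form
    delta = 4.7                  # rough guess used to extrapolate R_2
    for k in range(2, kmax + 1):
        r = rs[-1] + (rs[-1] - rs[-2]) / delta   # predictor for R_k
        n = 2 ** k
        for _ in range(50):      # Newton's method on F(r) = f^n(1/2) - 1/2
            x, dx = 0.5, 0.0
            for _ in range(n):   # iterate the map and d(iterate)/dr together
                dx = x * (1 - x) + r * (1 - 2 * x) * dx
                x = r * x * (1 - x)
            step = (x - 0.5) / dx
            r -= step
            if abs(step) < 1e-13:
                break
        rs.append(r)
        delta = (rs[-2] - rs[-3]) / (rs[-1] - rs[-2])  # gap ratio estimate
    return delta
```

By kmax = 8 the gap ratios have converged to within roughly a part in a thousand of 4.669201609..., mirroring the subwindow-ratio computation described in the abstract.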
A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Jin; Yu, Yaming; Van Dyk, David A.
2014-10-20
Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty of the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product, here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
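The PCA representation of effective-area uncertainty can be sketched as follows. The ensemble of curves, its size, and the number of retained components are all illustrative placeholders, not actual calibration products.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of effective-area curves A_j(E) (one per row), as a
# calibration system might produce; shapes and scales are assumptions.
energies = np.linspace(0.3, 8.0, 200)
nominal = 500.0 * np.exp(-0.5 * ((energies - 2.0) / 2.5) ** 2)
ensemble = nominal + rng.normal(0, 5, (1000, energies.size)).cumsum(axis=1) * 0.1

# PCA via SVD of the mean-centered curves: low-rank representation
# A ~= A_mean + sum_j c_j * v_j, with c_j drawn from the fitted spread.
mean_curve = ensemble.mean(axis=0)
U, s, Vt = np.linalg.svd(ensemble - mean_curve, full_matrices=False)

m = 8  # keep the leading components

def draw_effective_area():
    """Draw one plausible effective-area curve from the PCA representation."""
    coeffs = rng.normal(size=m) * s[:m] / np.sqrt(ensemble.shape[0])
    return mean_curve + coeffs @ Vt[:m]

sample = draw_effective_area()
```

A handful of components typically captures most of the ensemble variance, which is what makes the calibration uncertainty cheap to propagate through a Bayesian fit.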
NASA Astrophysics Data System (ADS)
Asgari, Jamal; Mohammadloo, Tannaz H.; Amiri-Simkooei, Ali Reza
2015-09-01
GNSS kinematic techniques are capable of providing precise coordinates in extremely short observation time-spans. These methods usually determine the coordinates of an unknown station with respect to a reference one. To enhance the precision, accuracy, reliability and integrity of the estimated unknown parameters, GNSS kinematic equations can be augmented with additional constraints. Such constraints can be derived from the geometric relation of the receiver positions in motion. This contribution presents the formulation of constrained kinematic global navigation satellite system positioning. Constraints effectively restrict the definition domain of the unknown parameters from the three-dimensional space to a subspace defined by the equation of motion. To test the concept of the constrained kinematic positioning method, the equation of a circle is employed as a constraint. A device capable of moving on a circle was made and the observations from 11 positions on the circle were analyzed. Relative positioning was conducted by considering the center of the circle as the reference station. The equation of the receiver's motion was rewritten in the ECEF coordinate system. Special attention is paid to how a constraint is applied in kinematic positioning. Implementing the constraint in the positioning process provides much more precise results compared to the unconstrained case. This has been verified both by the covariance matrix of the estimated parameters and by empirical results from kinematic positioning samples. The theoretical standard deviations of the horizontal components are reduced by a factor ranging from 1.24 to 2.64. The improvement in the empirical standard deviation of the horizontal components ranges from 1.08 to 2.2.
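A minimal illustration of why the circle constraint helps: projecting an unconstrained position estimate onto the known circle of motion discards the error component normal to the trajectory. The geometry and noise level below are hypothetical, not the paper's experiment.

```python
import numpy as np

def constrain_to_circle(xy, center, radius):
    """Project an unconstrained 2-D position estimate onto the known circle
    of motion (a stand-in for augmenting the kinematic model with the
    trajectory equation)."""
    v = xy - center
    return center + radius * v / np.linalg.norm(v)

rng = np.random.default_rng(1)
center, radius = np.array([0.0, 0.0]), 5.0
truth = center + radius * np.array([np.cos(0.7), np.sin(0.7)])
errs_free, errs_con = [], []
for _ in range(2000):
    est = truth + rng.normal(0.0, 0.3, 2)          # unconstrained estimate
    errs_free.append(np.linalg.norm(est - truth))
    errs_con.append(np.linalg.norm(constrain_to_circle(est, center, radius) - truth))
improvement = np.mean(errs_free) / np.mean(errs_con)   # > 1: constraint helps
```

For isotropic noise the projection removes roughly the radial half of the error, so the mean error ratio lands well above 1, qualitatively consistent with the improvement factors reported above.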
Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H
2014-07-01
There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty with achieving this goal is that the pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they need to be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions) as well as uncertainty in the number of major pollution sources and identifiability conditions have been largely ignored in previous studies. A multipollutant approach that can deal with model uncertainty in multivariate receptor models while simultaneously accounting for parameter uncertainty in estimated source-specific exposures in assessment of source-specific health effects is presented in this paper. The methods are applied to daily ambient air measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions along with their uncertainties and associated health effects estimates but also estimates of model uncertainty (posterior model probabilities) that have been ignored in previous studies. The results from our methods agreed in general with those from the previously conducted workshops/studies on the source apportionment of PM health effects in terms of the number of major contributing sources, estimated source profiles, and contributions.
However, some of the adverse source-specific health effects identified in the previous studies were not statistically significant in our analysis, probably because our estimation of the health effects parameters incorporated the parameter uncertainty in estimated source contributions, which the previous studies ignored.
Van Dongen, Hans P. A.; Mott, Christopher G.; Huang, Jen-Kuang; Mollicone, Daniel J.; McKenzie, Frederic D.; Dinges, David F.
2007-01-01
Current biomathematical models of fatigue and performance do not accurately predict cognitive performance for individuals with a priori unknown degrees of trait vulnerability to sleep loss, do not predict performance reliably when initial conditions are uncertain, and do not yield statistically valid estimates of prediction accuracy. These limitations diminish their usefulness for predicting the performance of individuals in operational environments. To overcome these 3 limitations, a novel modeling approach was developed, based on the expansion of a statistical technique called Bayesian forecasting. The expanded Bayesian forecasting procedure was implemented in the two-process model of sleep regulation, which has been used to predict performance on the basis of the combination of a sleep homeostatic process and a circadian process. Employing the two-process model with the Bayesian forecasting procedure to predict performance for individual subjects in the face of unknown traits and uncertain states entailed subject-specific optimization of 3 trait parameters (homeostatic build-up rate, circadian amplitude, and basal performance level) and 2 initial state parameters (initial homeostatic state and circadian phase angle). Prior information about the distribution of the trait parameters in the population at large was extracted from psychomotor vigilance test (PVT) performance measurements in 10 subjects who had participated in a laboratory experiment with 88 h of total sleep deprivation. The PVT performance data of 3 additional subjects in this experiment were set aside beforehand for use in prospective computer simulations. The simulations involved updating the subject-specific model parameters every time the next performance measurement became available, and then predicting performance 24 h ahead. 
Comparison of the predictions to the subjects' actual data revealed that as more data became available for the individuals at hand, the performance predictions became increasingly accurate and had progressively smaller 95% confidence intervals, as the model parameters converged efficiently to those that best characterized each individual. Even when more challenging simulations were run (mimicking a change in the initial homeostatic state, or simulating sparse data), the predictions were still considerably more accurate than would have been achieved by the two-process model alone. Although the work described here is still limited to periods of consolidated wakefulness with stable circadian rhythms, the results obtained thus far indicate that the Bayesian forecasting procedure can successfully overcome some of the major outstanding challenges for biomathematical prediction of cognitive performance in operational settings. Citation: Van Dongen HPA; Mott CG; Huang JK; Mollicone DJ; McKenzie FD; Dinges DF. Optimization of biomathematical model predictions for cognitive performance impairment in individuals: accounting for unknown traits and uncertain states in homeostatic and circadian processes. SLEEP 2007;30(9):1129-1143. PMID:17910385
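The core of the Bayesian forecasting idea, sequentially sharpening a subject-specific parameter posterior as measurements arrive, can be sketched with a one-parameter toy model. The functional form, scales, and noise level below are assumptions standing in for the two-process model, not the authors' implementation.

```python
import numpy as np

# Toy homeostatic performance model: impairment grows toward a ceiling with
# time awake at an individual-specific rate r (hypothetical form and scales).
def performance(t_awake, r):
    return 10.0 * (1.0 - np.exp(-r * t_awake))

rng = np.random.default_rng(2)
r_true, sigma = 0.08, 0.5
grid = np.linspace(0.01, 0.2, 400)     # discretized prior support for r
log_post = np.zeros_like(grid)         # flat prior over the grid
widths = []
for t in range(4, 44, 4):              # a measurement every 4 h awake
    y = performance(t, r_true) + rng.normal(0.0, sigma)
    log_post += -0.5 * ((y - performance(t, grid)) / sigma) ** 2
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    mean = (grid * post).sum()
    widths.append(np.sqrt(((grid - mean) ** 2 * post).sum()))
# widths shrinks as evidence accumulates; mean approaches r_true
```

Prediction then amounts to pushing the current posterior over the trait parameter forward through the performance model, which is what gives the narrowing confidence intervals described above.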
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
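For a linear measurement model, the subset-scoring step of such a sensor-selection procedure can be sketched as below. The matrices are random placeholders rather than engine data, and the score is the trace of the posterior error covariance rather than the paper's exact criterion.

```python
import numpy as np
from itertools import combinations

# Score candidate sensor subsets for a linear measurement model
# y = C x + v, where x collects the unknown health parameters.
rng = np.random.default_rng(3)
n_params, n_sensors = 4, 6
C = rng.normal(size=(n_sensors, n_params))   # sensor sensitivities (placeholder)
P0 = np.eye(n_params)                        # prior covariance of health params
R = 0.1 * np.eye(n_sensors)                  # sensor noise covariance

def posterior_trace(idx):
    """Trace of the posterior error covariance using only sensors in idx:
    P = (P0^-1 + Ci^T Ri^-1 Ci)^-1."""
    sel = list(idx)
    Ci, Ri = C[sel], R[np.ix_(sel, sel)]
    P = np.linalg.inv(np.linalg.inv(P0) + Ci.T @ np.linalg.inv(Ri) @ Ci)
    return np.trace(P)

best = min(combinations(range(n_sensors), 3), key=posterior_trace)
```

Exhaustive scoring is feasible for small suites; for larger ones a greedy or heuristic search over subsets is the usual substitute.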
ON IDENTIFIABILITY OF NONLINEAR ODE MODELS AND APPLICATIONS IN VIRAL DYNAMICS
MIAO, HONGYU; XIA, XIAOHUA; PERELSON, ALAN S.; WU, HULIN
2011-01-01
Ordinary differential equations (ODE) are a powerful tool for modeling dynamic processes with wide applications in a variety of scientific fields. Over the last 2 decades, ODEs have also emerged as a prevailing tool in various biomedical research fields, especially in infectious disease modeling. In practice, it is important and necessary to determine unknown parameters in ODE models based on experimental data. Identifiability analysis is the first step in determining unknown parameters in ODE models, and such analysis techniques for nonlinear ODE models are still under development. In this article, we review identifiability analysis methodologies for nonlinear ODE models developed in the past one to two decades, including structural identifiability analysis, practical identifiability analysis and sensitivity-based identifiability analysis. Some advanced topics and ongoing research are also briefly reviewed. Finally, some examples from modeling viral dynamics of HIV, influenza and hepatitis viruses are given to illustrate how to apply these identifiability analysis methods in practice. PMID:21785515
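A minimal sketch of sensitivity-based identifiability analysis: build the output sensitivity matrix by finite differences and check the rank of the Fisher information matrix. The two toy models below are illustrative, not the viral-dynamics models of the review.

```python
import numpy as np

def sens_matrix(output, theta, t, eps=1e-6):
    """Finite-difference sensitivity matrix S with S[:, i] = d(output)/d(theta_i)."""
    base = output(t, *theta)
    cols = []
    for i in range(len(theta)):
        th = list(theta)
        th[i] += eps
        cols.append((output(t, *th) - base) / eps)
    return np.column_stack(cols)

t = np.linspace(0.0, 5.0, 50)

# Non-identifiable: only the sum a+b enters the output, so the two
# sensitivity columns are (numerically) identical.
S1 = sens_matrix(lambda t, a, b: np.exp(-(a + b) * t), [0.3, 0.5], t)
rank1 = np.linalg.matrix_rank(S1.T @ S1, tol=1e-10)   # 1: a, b not separable

# Identifiable: a and b affect the output in distinct ways.
S2 = sens_matrix(lambda t, a, b: b * np.exp(-a * t), [0.3, 0.5], t)
rank2 = np.linalg.matrix_rank(S2.T @ S2, tol=1e-10)   # 2: both recoverable
```

A rank-deficient Fisher information matrix flags parameter combinations that the data cannot separate, which is exactly the practical-identifiability failure mode the review discusses.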
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to part of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all the closed-loop signals and asymptotic output consensus tracking can be achieved.
Backstepping Design of Adaptive Neural Fault-Tolerant Control for MIMO Nonlinear Systems.
Gao, Hui; Song, Yongduan; Wen, Changyun
In this paper, an adaptive controller is developed for a class of multi-input and multioutput nonlinear systems with neural networks (NNs) used as a modeling tool. It is shown that all the signals in the closed-loop system with the proposed adaptive neural controller are globally uniformly bounded for any external input in . In our control design, the upper bound of the NN modeling error and the gains of external disturbance are characterized by unknown upper bounds, which is more rational for establishing stability in adaptive NN control. Filter-based modification terms are used in the update laws of unknown parameters to improve the transient performance. Finally, fault-tolerant control is developed to accommodate actuator failure. An illustrative example applying the adaptive controller to control a rigid robot arm demonstrates the validity of the proposed controller.
NASA Astrophysics Data System (ADS)
Farhadi, Leila; Entekhabi, Dara; Salvucci, Guido
2016-04-01
In this study, we develop and apply a mapping estimation capability for key unknown parameters that link the surface water and energy balance equations. The method is applied to the Gourma region in West Africa. The accuracy of the estimation method at point scale was previously examined using flux tower data. In this study, the capability is scaled to be applicable with remotely sensed data products and hence allow mapping. Parameters of the system are estimated through a process that links atmospheric forcing (precipitation and incident radiation), surface states, and unknown parameters. Based on conditional averaging of land surface temperature and moisture states, respectively, a single objective function is posed that measures moisture- and temperature-dependent errors solely in terms of observed forcings and surface states. This objective function is minimized with respect to the parameters to identify evapotranspiration and drainage models and estimate water and energy balance flux components. The uncertainty of the estimated parameters (and associated statistical confidence limits) is obtained through the inverse of the Hessian of the objective function, which is an approximation of the covariance matrix. This calibration-free method is applied to the mesoscale region of Gourma in West Africa using multiplatform remote sensing data. The retrievals are verified against tower-flux field site data and physiographic characteristics of the region. The focus is to find the functional form of the evaporative fraction dependence on soil moisture, a key closure function for surface and subsurface heat and moisture dynamics, using remote sensing data.
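The uncertainty step described above, inverting the Hessian of the objective at its minimum to approximate the parameter covariance, can be sketched as follows. A toy quadratic objective stands in for the water- and energy-balance objective; all values are illustrative.

```python
import numpy as np

def objective(p):
    """Toy objective with a known minimum at p = (1, -2); its curvature
    encodes parameter standard errors of 0.2 and 0.5."""
    return 0.5 * ((p[0] - 1.0) ** 2 / 0.04 + (p[1] + 2.0) ** 2 / 0.25)

def hessian(f, p, h=1e-4):
    """Central finite-difference Hessian of f at p."""
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pp = np.array(p, float); pp[i] += h; pp[j] += h
            pm = np.array(p, float); pm[i] += h; pm[j] -= h
            mp = np.array(p, float); mp[i] -= h; mp[j] += h
            mm = np.array(p, float); mm[i] -= h; mm[j] -= h
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4.0 * h * h)
    return H

p_hat = np.array([1.0, -2.0])                  # minimizer of the toy objective
cov = np.linalg.inv(hessian(objective, p_hat)) # approximate covariance matrix
se = np.sqrt(np.diag(cov))                     # parameter standard errors
```

The same recipe applies to any smooth objective: minimize first, then evaluate the Hessian at the minimizer to get approximate confidence limits.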
NASA Astrophysics Data System (ADS)
Wei, Ying-Kang; Luo, Xiao-Tao; Li, Cheng-Xin; Li, Chang-Jiu
2017-01-01
Magnesium-based alloys have excellent physical and mechanical properties for many applications. However, due to their high chemical reactivity, magnesium and its alloys are highly susceptible to corrosion. In this study, an Al6061 coating was deposited on AZ31B magnesium by cold spray, using nitrogen gas and a commercial Al6061 powder blended with large stainless steel particles (in-situ shot-peening particles). The microstructure and corrosion behavior of the sprayed coating were investigated as a function of the shot-peening particle content in the feedstock. It is found that by introducing the in-situ tamping effect of the shot-peening (SP) particles, the plastic deformation of the deposited particles is significantly enhanced, resulting in a fully dense Al6061 coating. SEM observations reveal that no SP particles are deposited into the Al6061 coating under the optimized spraying parameters. The porosity of the coating decreases significantly, from 10.7 to 0.4%, as the SP particle content increases from 20 to 60 vol.%. Electrochemical corrosion experiments reveal that this novel in-situ SP-assisted cold spraying is effective for depositing a fully dense Al6061 coating through which aqueous solution cannot permeate, and which can thus provide exceptional protection of magnesium-based materials from corrosion.
Application of Ensemble Kalman Filter in Power System State Tracking and Sensitivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yulan; Huang, Zhenyu; Zhou, Ning
2012-05-01
An ensemble Kalman filter (EnKF) is proposed to track the dynamic states of generators. The algorithm of the EnKF and its application to generator state tracking are presented in detail. The accuracy and sensitivity of the method are analyzed with respect to initial state errors, measurement noise, unknown fault locations, time steps, and parameter errors. It is demonstrated through simulation studies that even with some errors in the parameters, the developed EnKF can effectively track generator dynamic states using disturbance data.
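A minimal EnKF on a scalar linear system illustrates the forecast/analysis cycle; the system and noise levels are placeholders, not a generator model.

```python
import numpy as np

# Toy system: x_k = 0.95 x_{k-1} + w_k, measured as y_k = x_k + v_k.
rng = np.random.default_rng(4)
n_ens, steps = 200, 60
q, r = 0.05, 0.2                                 # noise standard deviations
x_true = 1.0
ens = rng.normal(5.0, 2.0, n_ens)                # deliberately poor initial ensemble
errors = []
for _ in range(steps):
    x_true = 0.95 * x_true + rng.normal(0, q)    # truth evolves
    y = x_true + rng.normal(0, r)                # one noisy measurement
    ens = 0.95 * ens + rng.normal(0, q, n_ens)   # forecast step
    y_pred = ens + rng.normal(0, r, n_ens)       # perturbed predicted observations
    gain = np.cov(ens, y_pred)[0, 1] / np.var(y_pred, ddof=1)  # ensemble gain
    ens = ens + gain * (y - y_pred)              # analysis update
    errors.append(abs(ens.mean() - x_true))
```

Because the gain is estimated from ensemble statistics rather than a linearized model, the same loop structure carries over to nonlinear generator dynamics, which is the robustness property the abstract points to.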
Fully-Coupled Dynamical Jitter Modeling of Momentum Exchange Devices
NASA Astrophysics Data System (ADS)
Alcorn, John
A primary source of spacecraft jitter is mass imbalance within the momentum exchange devices (MEDs) used for fine pointing, such as reaction wheels (RWs) and variable-speed control moment gyroscopes (VSCMGs). Although these effects are often characterized through experimentation in order to validate pointing stability requirements, it is of interest to include jitter in a computer simulation of the spacecraft in the early stages of spacecraft development. An estimate of jitter amplitude may be found by modeling MED imbalance torques as external disturbance forces and torques on the spacecraft. In this case, MED mass imbalances are lumped into static and dynamic imbalance parameters, allowing jitter force and torque to be simply proportional to wheel speed squared. A physically realistic dynamic model may be obtained by defining mass imbalances in terms of a wheel center-of-mass location and inertia tensor. The fully-coupled dynamic model allows for momentum and energy validation of the system. This is often critical when modeling additional complex dynamical behavior such as flexible dynamics and fuel slosh. Furthermore, it is necessary to use the fully-coupled model in instances where the relative mass properties of the spacecraft with respect to the RWs cause the simplified jitter model to be inaccurate. This thesis presents a generalized approach to MED imbalance modeling of a rigid spacecraft hub with N RWs or VSCMGs. A discussion is included on converting from manufacturer specifications of RW imbalances to the parameters introduced within each model. Implementations of the fully-coupled RW and VSCMG models derived within this thesis are released open-source as part of the Basilisk astrodynamics software.
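The simplified jitter model mentioned above, with disturbance amplitudes proportional to wheel speed squared, can be sketched as follows; the imbalance values are illustrative, not from any RW datasheet.

```python
import numpy as np

def jitter_amplitudes(omega_rpm, Us, Ud):
    """Disturbance force/torque amplitudes of the simplified jitter model:
    proportional to wheel speed squared via the static imbalance Us (kg*m)
    and dynamic imbalance Ud (kg*m^2)."""
    omega = omega_rpm * 2.0 * np.pi / 60.0   # wheel speed in rad/s
    return Us * omega**2, Ud * omega**2      # force (N), torque (N*m)

# Hypothetical imbalance values for illustration only.
f1, t1 = jitter_amplitudes(1000.0, Us=1e-6, Ud=1e-7)
f2, t2 = jitter_amplitudes(2000.0, Us=1e-6, Ud=1e-7)
# doubling the wheel speed quadruples both disturbance amplitudes
```

The fully-coupled model replaces these lumped parameters with an actual wheel center-of-mass offset and inertia tensor, which is what allows the momentum and energy checks described above.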
Tong, Shaocheng; Wang, Tong; Li, Yongming; Zhang, Huaguang
2014-06-01
This paper discusses the problem of adaptive neural network output feedback control for a class of stochastic nonlinear strict-feedback systems. The concerned systems have characteristics such as unknown nonlinear uncertainties, unknown dead-zones, and unmodeled dynamics, without direct measurements of the state variables. In this paper, neural networks (NNs) are employed to approximate the unknown nonlinear uncertainties, and the dead-zone is represented as a time-varying system with a bounded disturbance. An NN state observer is designed to estimate the unmeasured states. Based on both the backstepping design technique and a stochastic small-gain theorem, a robust adaptive NN output feedback control scheme is developed. It is proved that all the variables involved in the closed-loop system are input-state-practically stable in probability, and also have robustness to the unmodeled dynamics. Meanwhile, the observer errors and the output of the system can be regulated to a small neighborhood of the origin by selecting appropriate design parameters. Simulation examples are provided to illustrate the effectiveness of the proposed approach.
Li, Da-Peng; Li, Dong-Juan; Liu, Yan-Jun; Tong, Shaocheng; Chen, C L Philip
2017-10-01
This paper deals with the tracking control problem for a class of nonlinear multiple-input multiple-output systems with unknown time-varying delays and full state constraints. To overcome the challenges caused by the simultaneous appearance of unknown time-varying delays and full state constraints, an adaptive control method is presented for such systems for the first time. Appropriate Lyapunov-Krasovskii functionals and a separation technique are employed to eliminate the effect of the unknown time-varying delays. Barrier Lyapunov functions are employed to prevent violation of the full state constraints. Singularity problems are dealt with by introducing a signal function. Finally, it is proven that, with appropriately chosen design parameters, the proposed method guarantees good tracking performance of the system output, keeps all states within the constrained interval, and ensures that all closed-loop signals are bounded. The practicability of the proposed control technique is demonstrated by a simulation study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Jaromy; Sun Zaijing; Wells, Doug
2009-03-10
Photon activation analysis detected elements in two NIST standards that did not have reported concentration values. A method is currently being developed to infer these concentrations by using scaling parameters and the appropriate known quantities within the NIST standard itself. Scaling parameters include: threshold, peak and endpoint energies; photo-nuclear cross sections for specific isotopes; the bremsstrahlung spectrum; target thickness; and photon flux. Photo-nuclear cross sections and energies for the unknown elements must also be known. With these quantities, the same integral was performed for both the known and unknown elements, resulting in an inference of the concentration of the unreported element based on the reported value. Since Rb and Mn were reported in the standards, and because they had well-identified peaks, they were used as the standards of inference to determine the concentrations of the unreported elements As, I, Nb, Y, and Zr. The method was tested by choosing other known elements within the standards and inferring a value based on the stated procedure. The reported value of Mn in the first NIST standard was 403±15 ppm and the reported value of Ca in the second NIST standard was 87000 ppm (no reported uncertainty). The inferred concentrations were 370±23 ppm and 80200±8700 ppm, respectively.
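The inference-by-ratio idea can be sketched as below. Only the 403 ppm Mn reference value comes from the text; the activities and yield integrals are hypothetical placeholders, and the real procedure evaluates the flux-weighted cross-section integrals rather than taking them as given.

```python
def infer_concentration(c_ref, activity_unknown, activity_ref,
                        yield_integral_unknown, yield_integral_ref):
    """Scale a reported reference concentration by the ratio of measured
    peak activities, corrected by the ratio of flux-weighted cross-section
    (yield) integrals:
        c_unknown = c_ref * (A_unknown / A_ref) * (I_ref / I_unknown)."""
    return (c_ref * (activity_unknown / activity_ref)
            * (yield_integral_ref / yield_integral_unknown))

# Mn reference at 403 ppm (from the text); other numbers are placeholders.
c_unknown = infer_concentration(403.0, 1500.0, 5200.0, 0.8, 1.1)
```

The same ratio structure cancels quantities common to both elements (photon flux, target thickness, counting geometry), which is why only the element-specific yields and activities appear.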
Finite-time master-slave synchronization and parameter identification for uncertain Lurie systems.
Wang, Tianbo; Zhao, Shouwei; Zhou, Wuneng; Yu, Weiqin
2014-07-01
This paper investigates the finite-time master-slave synchronization and parameter identification problem for uncertain Lurie systems based on finite-time stability theory and the adaptive control method. Finite-time master-slave synchronization means that the state of a slave system follows that of a master system in finite time, which is more reasonable than asymptotic synchronization in applications. The uncertainties include unknown parameters and noise disturbances. An adaptive controller and update laws which ensure that synchronization and parameter identification are realized in finite time are constructed. Finally, two numerical examples are given to show the effectiveness of the proposed method.
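A toy master-slave setup with a gradient-type parameter update law illustrates the flavor of such schemes, though it achieves only asymptotic (not finite-time) convergence and uses a scalar placeholder system rather than a Lurie system; all gains are illustrative.

```python
import math

# Master: x' = -x + theta*sin(x), theta unknown to the slave. The slave runs
# the same structure with a running estimate th, a feedback term -k*e, and
# the classic gradient adaptation law th' = -gamma*e*sin(x).
dt, steps = 1e-3, 50_000
theta = 2.0                 # unknown "true" parameter
th = 0.0                    # slave's parameter estimate
x, xs = 1.0, -1.0           # master / slave states
k, gamma = 5.0, 10.0        # feedback and adaptation gains
for _ in range(steps):
    e = xs - x
    s = math.sin(x)
    x += dt * (-x + theta * s)
    xs += dt * (-xs + th * s - k * e)
    th += dt * (-gamma * e * s)
```

A Lyapunov function V = e**2/2 + (th - theta)**2/(2*gamma) decreases along these dynamics, so the synchronization error vanishes and, since the master settles at a point where sin(x) is nonzero, the estimate converges to theta; finite-time schemes replace the linear feedback with fractional-power terms to cut convergence to a finite horizon.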
flexsurv: A Platform for Parametric Survival Modeling in R
Jackson, Christopher H.
2018-01-01
flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
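The core computation flexsurvreg performs, maximizing the full censored-data log-likelihood of a parametric survival model, can be illustrated in Python (flexsurv itself is R). A Weibull model, simulated data, and a crude grid search stand in for the package's distributions and optimizer.

```python
import numpy as np

rng = np.random.default_rng(7)
shape_true, scale_true = 1.5, 2.0
t_event = scale_true * rng.weibull(shape_true, 300)   # latent event times
c = rng.uniform(0.0, 4.0, 300)                        # censoring times
time = np.minimum(t_event, c)
status = (t_event <= c).astype(float)                 # 1 = observed, 0 = censored

def loglik(shape, scale):
    """Full censored-data Weibull log-likelihood:
    sum of delta*log h(t) + log S(t), with h the hazard and S the survivor."""
    z = time / scale
    log_h = np.log(shape / scale) + (shape - 1.0) * np.log(z)
    log_S = -z**shape
    return np.sum(status * log_h + log_S)

shapes = np.linspace(0.8, 2.5, 86)
scales = np.linspace(1.0, 3.5, 126)
ll = np.array([[loglik(a, b) for b in scales] for a in shapes])
i, j = np.unravel_index(ll.argmax(), ll.shape)
shape_hat, scale_hat = shapes[i], scales[j]
```

flexsurv generalizes exactly this likelihood to arbitrary user-supplied density/hazard pairs, covariate-dependent parameters, and left-truncation, and maximizes it with a proper optimizer instead of a grid.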
Elastohydrodynamics of elliptical contacts for materials of low elastic modulus
NASA Technical Reports Server (NTRS)
Hamrock, B. J.; Dowson, D.
1983-01-01
The influence of the ellipticity parameter k and the dimensionless speed U, load W, and materials G parameters on minimum film thickness for materials of low elastic modulus was investigated. The ellipticity parameter was varied from 1 (a ball-on-plane configuration) to 12 (a configuration approaching a line contact); U and W were each varied by one order of magnitude. Seventeen cases were used to generate the minimum- and central-film-thickness relations. The influence of lubricant starvation on minimum film thickness in starved elliptical, elastohydrodynamic configurations was also investigated for materials of low elastic modulus. Lubricant starvation was studied simply by moving the inlet boundary closer to the center of the conjunction in the numerical solutions. Contour plots of pressure and film thickness in and around the contact were presented for both fully flooded and starved lubrication conditions. It is evident from these figures that the inlet pressure contours become less circular and closer to the edge of the Hertzian contact zone and that the film thickness decreases substantially as the severity of starvation increases. The results presented reveal the essential features of both fully flooded and starved, elliptical, elastohydrodynamic conjunctions for materials of low elastic modulus.
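A commonly quoted fit of the Hamrock-Dowson form for minimum film thickness in the low-elastic-modulus (isoviscous-elastic) regime is sketched below. Treat the coefficients as assumptions to be checked against the original paper before use; the operating values are placeholders.

```python
import numpy as np

def h_min_soft_ehl(U, W, k):
    """Dimensionless minimum film thickness for materials of low elastic
    modulus (isoviscous-elastic regime). Coefficients follow a commonly
    quoted Hamrock-Dowson-style fit and are an assumption here, not a
    value taken from this paper."""
    return 7.43 * U**0.65 * W**-0.21 * (1.0 - 0.85 * np.exp(-0.31 * k))

h_ball = h_min_soft_ehl(1e-7, 1e-4, 1.0)    # ball-on-plane (k = 1)
h_line = h_min_soft_ehl(1e-7, 1e-4, 12.0)   # approaching a line contact
```

The qualitative trends match the study's parameter sweep: film thickness grows strongly with speed, falls weakly with load, and increases as the contact ellipse elongates toward a line contact.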
Obtaining short-fiber orientation model parameters using non-lubricated squeeze flow
NASA Astrophysics Data System (ADS)
Lambert, Gregory; Wapperom, Peter; Baird, Donald
2017-12-01
Accurate models of fiber orientation dynamics during the processing of polymer-fiber composites are needed for the design work behind important automobile parts. All of the existing models utilize empirical parameters, but a standard method for obtaining them independent of processing does not exist. This study considers non-lubricated squeeze flow through a rectangular channel as a solution. A two-dimensional finite element method simulation of the kinematics and fiber orientation evolution along the centerline of a sample is developed as a first step toward a fully three-dimensional simulation. The model is used to fit to orientation data in a short-fiber-reinforced polymer composite after squeezing. Fiber orientation model parameters obtained in this study do not agree well with those obtained for the same material during startup of simple shear. This is attributed to the vastly different rates at which fibers orient during shearing and extensional flows. A stress model is also used to try to fit to experimental closure force data. Although the model can be tuned to the correct magnitude of the closure force, it does not fully recreate the transient behavior, which is attributed to the lack of any consideration for fiber-fiber interactions.
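The kinematic core that fiber orientation models coarse-grain is Jeffery's equation for a single ellipsoidal fiber; in simple shear it reduces to a one-angle ODE, sketched below with illustrative parameters (this is the shear-dominated limit the text contrasts with squeeze flow's extensional kinematics).

```python
import numpy as np

# Jeffery's equation for one ellipsoidal fiber of aspect ratio re in simple
# shear u = (gamma_dot * y, 0); phi is the fiber angle from the flow
# direction and obeys
#     dphi/dt = -gamma_dot * (re^2 sin^2(phi) + cos^2(phi)) / (re^2 + 1).
gamma_dot, re, dt = 1.0, 10.0, 1e-3
phi = np.pi / 2.0              # start perpendicular to the flow
for _ in range(10_000):        # integrate over 10 strain units
    phi += dt * (-gamma_dot) * (re**2 * np.sin(phi)**2 + np.cos(phi)**2) / (re**2 + 1)
# the fiber rotates toward the flow direction and lingers near phi = 0
```

Orientation-tensor models like those being fitted in the study average this single-fiber motion over populations and add interaction terms, which is where the empirical parameters enter; the very different rotation rates in shear versus extension are consistent with the parameter mismatch reported above.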
NASA Astrophysics Data System (ADS)
Kenok, R.; Jomdecha, C.; Jirarungsatian, C.
The aim of this paper is to study the acoustic emission (AE) parameters obtained from CNG cylinders during pressurization. AE from flaw propagation, material integrity, and pressurization of the cylinder was the main focus of the characterization. CNG cylinders conforming to ISO 11439, of the fully resin-wrapped type and the metal-liner type, were tested by hydrostatic pressurization. The pressure was increased in steps to 1.1 times the operating pressure. Two AE sensors with a resonance frequency of 150 kHz were mounted on the cylinder wall to detect AE throughout the testing. The experimental results show that AE can be detected from the pressurization rate, material integrity, and flaw propagation in the cylinder wall. AE parameters including amplitude, count, energy (MARSE), duration and rise time were analyzed to distinguish the AE data. The results show that the AE of flaw propagation differed in character from that of pressurization. In particular, AE detected from flaws in the resin-wrapped and metal-liner cylinders was significantly different. Using linear location with the two AE sensors, flaw propagation positions were located accurately, with an error of less than ±5 cm.
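The linear (two-sensor) location mentioned above reduces to a simple arrival-time-difference formula; the sensor spacing and wave speed below are illustrative, not measured values from the test.

```python
def linear_location(d, v, dt):
    """Distance of an AE source from the first-hit sensor, for two sensors a
    distance d apart, wave speed v in the wall, and arrival-time difference
    dt between the sensors. From dt = (d - 2*x)/v: x = (d - v*dt)/2."""
    return (d - v * dt) / 2.0

# e.g. sensors 1 m apart, 5000 m/s wave speed, 40 us arrival-time difference
x = linear_location(1.0, 5000.0, 4e-5)   # 0.4 m from the first-hit sensor
```

Location accuracy is then set by the timing resolution and the wave-speed estimate, which is consistent with the centimeter-scale error reported.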
Inference regarding multiple structural changes in linear models with endogenous regressors☆
Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia
2012-01-01
This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021
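The 2SLS break estimation can be sketched on simulated data: for each candidate break point, fit separate second-stage coefficients on the two sub-samples using first-stage fitted values, and take the break that minimizes the criterion. All data-generating values below are invented for illustration, and the single-break scalar setup is a simplification of the paper's multiple-break framework.

```python
import numpy as np

rng = np.random.default_rng(5)
T, true_break = 400, 250
z = rng.normal(size=T)                          # instrument
e = rng.normal(0.0, 0.5, T)                     # structural error
x = 0.9 * z + 0.5 * e + rng.normal(0, 0.3, T)   # endogenous regressor
beta = np.where(np.arange(T) < true_break, 1.0, 3.0)  # slope jumps at the break
y = beta * x + e

xhat = z * (z @ x) / (z @ z)                    # first-stage fitted values

def ssr(tb):
    """2SLS criterion with a single break at observation tb."""
    total = 0.0
    for seg in (slice(0, tb), slice(tb, T)):
        b = (xhat[seg] @ y[seg]) / (xhat[seg] @ xhat[seg])
        total += np.sum((y[seg] - b * xhat[seg]) ** 2)
    return total

tb_hat = min(range(40, T - 40), key=ssr)        # estimated break point
```

This mirrors the paper's consistency result: minimizing the 2SLS criterion over break points recovers the break fraction, whereas the analogous GMM criterion need not.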
Prediction of Unsteady Aerodynamic Coefficients at High Angles of Attack
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Murphy, Patrick C.; Klein, Vladislav; Brandon, Jay M.
2001-01-01
The nonlinear indicial response method is used to model the unsteady aerodynamic coefficients in the low-speed longitudinal oscillatory wind tunnel test data of the 0.1-scale model of the F-16XL aircraft. Exponential functions are used to approximate the deficiency function in the indicial response. Using one set of oscillatory wind tunnel data and a parameter identification method, the unknown parameters in the exponential functions are estimated; a genetic algorithm serves as the least-squares minimization algorithm. The assumed model structures and parameter estimates are validated by comparing the predictions with the other available sets of oscillatory wind tunnel test data.
Vinzens, Fabrizio; Zumstein, Valentin; Bieg, Christian; Ackermann, Christoph
2016-05-26
Patients presenting with abdominal pain and pneumoperitoneum on radiological examination usually require emergency explorative laparoscopy or laparotomy. Pneumoperitoneum is mostly associated with gastrointestinal perforation; there are very few cases in which surgery can be avoided. We present 2 cases of pneumoperitoneum of unknown origin with successful conservative treatment. Both patients were elderly women presenting to our emergency unit with moderate abdominal pain. There was neither medical intervention nor trauma in their medical history. Physical examination revealed mild abdominal tenderness but no clinical sign of peritonitis. Cardiopulmonary examination was unremarkable. Blood studies showed only slight abnormalities; in particular, inflammation parameters were not significantly increased. Finally, CT showed free abdominal gas of unknown origin in both cases. We performed conservative management with nil per os, a nasogastric tube, total parenteral nutrition and prophylactic antibiotics. After 2 weeks, both patients were discharged home. 2016 BMJ Publishing Group Ltd.
Probabilistic and deterministic aspects of linear estimation in geodesy. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Dermanis, A.
1976-01-01
Recent advances in observational techniques related to geodetic work (VLBI, laser ranging) make it imperative that more consideration be given to modeling problems. Uncertainties in the effects of atmospheric refraction and in polar motion and precession-nutation parameters cannot be dispensed with in the context of centimeter-level geodesy. Even physical processes that have previously been neglected altogether (station motions) must now be taken into consideration. The problem of modeling functions of time or space, or at least their values at observation points (epochs), is explored. When the nature of the function to be modeled is unknown, the need to include only a limited number of terms and to decide a priori upon a specific form may result in a representation that fails to approximate the unknown function sufficiently well. An alternative approach of increasing application is the modeling of unknown functions as stochastic processes.
NASA Astrophysics Data System (ADS)
Wang, L. M.
2017-09-01
A novel model-free adaptive sliding mode strategy is proposed for generalized projective synchronization (GPS) between two entirely unknown fractional-order chaotic systems subject to external disturbances. To cope with the limited knowledge of the master-slave system and to overcome the adverse effects of the external disturbances on the synchronization, radial basis function neural networks are used to approximate the packaged unknown master system and the packaged unknown slave system (including the external disturbances). Consequently, based on sliding mode technology and neural network theory, a model-free adaptive sliding mode controller is designed to guarantee asymptotic stability of the generalized projective synchronization error. The main contribution of this paper is that a control strategy is provided for generalized projective synchronization between two entirely unknown fractional-order chaotic systems subject to unknown external disturbances, and the proposed control strategy only requires that the master system has the same fractional orders as the slave system. Moreover, the proposed method allows all kinds of generalized projective chaos synchronization to be achieved by tuning the user-defined parameters to the desired values. Simulation results show the effectiveness of the proposed method and the robustness of the controlled system.
Wang, Min; Ge, Shuzhi Sam; Hong, Keum-Shik
2010-11-01
This paper presents adaptive neural tracking control for a class of non-affine pure-feedback systems with multiple unknown state time-varying delays. To overcome the design difficulty arising from the non-affine structure of the pure-feedback system, the mean value theorem is exploited to deduce the affine appearance of the state variables x(i) as virtual controls α(i), and of the actual control u. A separation technique is introduced to decompose the unknown functions of all time-varying delayed states into a series of continuous functions of each delayed state. Novel Lyapunov-Krasovskii functionals are employed to compensate for the unknown functions of the current delayed state; this approach is effectively free from any restriction on the unknown time-delay functions and overcomes the circular construction of the controller caused by the neural approximation of a function of u and [Formula: see text]. Novel continuous functions are introduced to overcome the design difficulty arising from the use of one adaptive parameter. To achieve uniform ultimate boundedness of all the signals in the closed-loop system and tracking performance, the control gains are modified into a dynamic form with a class of even functions, which allows the stability analysis to be carried out in the presence of multiple time-varying delays. Simulation studies demonstrate the effectiveness of the proposed scheme.
An adaptive control scheme for a flexible manipulator
NASA Technical Reports Server (NTRS)
Yang, T. C.; Yang, J. C. S.; Kudva, P.
1987-01-01
The problem of controlling a single link flexible manipulator is considered. A self-tuning adaptive control scheme is proposed which consists of a least squares on-line parameter identification of an equivalent linear model followed by a tuning of the gains of a pole placement controller using the parameter estimates. Since the initial parameter values for this model are assumed unknown, the use of arbitrarily chosen initial parameter estimates in the adaptive controller would result in undesirable transient effects. Hence, the initial stage control is carried out with a PID controller. Once the identified parameters have converged, control is transferred to the adaptive controller. Naturally, the relevant issues in this scheme are tests for parameter convergence and minimization of overshoots during control switch-over. To demonstrate the effectiveness of the proposed scheme, simulation results are presented with an analytical nonlinear dynamic model of a single link flexible manipulator.
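The on-line least-squares identification stage can be sketched as recursive least squares (RLS) on an assumed equivalent linear ARX model; the model order and "true" plant parameters below are illustrative, not those of the flexible manipulator.

```python
# Sketch of on-line least-squares identification (RLS) of an ARX model
# y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]; plant values are illustrative.
import numpy as np

a1, a2, b1 = 1.5, -0.7, 1.0          # assumed "true" stable plant
rng = np.random.default_rng(1)
u = rng.normal(size=300)             # excitation input
y = np.zeros(300)
for k in range(2, 300):
    y[k] = a1 * y[k-1] + a2 * y[k-2] + b1 * u[k-1] + 0.01 * rng.normal()

theta = np.zeros(3)                  # parameter estimates [a1, a2, b1]
P = np.eye(3) * 1000.0               # large initial covariance: unknown start
for k in range(2, 300):
    phi = np.array([y[k-1], y[k-2], u[k-1]])
    K = P @ phi / (1.0 + phi @ P @ phi)          # gain
    theta = theta + K * (y[k] - phi @ theta)     # prediction-error update
    P = P - np.outer(K, phi @ P)                 # covariance update
```

Once `theta` has converged, the scheme described above would hand the estimates to the pole-placement gain tuning and switch over from the PID controller.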
Statistical inference involving binomial and negative binomial parameters.
García-Pérez, Miguel A; Núñez-Antón, Vicente
2009-05-01
Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
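A simple likelihood-ratio version of the second hypothesis test (both parameters equal though unknown) can be sketched; the counts below are illustrative, and the paper's own test statistics may differ in form.

```python
# Sketch: LR test that the success probability is the same before and
# after the first success. Before: k failures then one success
# (geometric, i.e. negative binomial sampling); after: x of n successes
# (binomial sampling). Data values are illustrative.
import math

def loglik2(p1, p2, k_failures, n, x):
    # Joint log-likelihood: geometric part with p1, binomial part with p2.
    return (k_failures * math.log(1 - p1) + math.log(p1)
            + x * math.log(p2) + (n - x) * math.log(1 - p2))

k_failures, n, x = 9, 50, 20               # illustrative data
p1_hat = 1.0 / (k_failures + 1)            # geometric MLE
p2_hat = x / n                             # binomial MLE
p0_hat = (1 + x) / (k_failures + 1 + n)    # pooled MLE under H0: p1 == p2
lr = 2.0 * (loglik2(p1_hat, p2_hat, k_failures, n, x)
            - loglik2(p0_hat, p0_hat, k_failures, n, x))
# lr is referred to a chi-square distribution with 1 degree of freedom.
```

Note that the pooled estimate counts the terminating success of the negative binomial stage among the total successes, reflecting that the two estimates are related.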
Karr, Jonathan R; Williams, Alex H; Zucker, Jeremy D; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A; Bot, Brian M; Hoff, Bruce R; Kellen, Michael R; Covert, Markus W; Stolovitzky, Gustavo A; Meyer, Pablo
2015-05-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.
Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process
NASA Astrophysics Data System (ADS)
Nakanishi, W.; Fuse, T.; Ishikawa, T.
2015-05-01
This paper presents an estimation of the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations, these parameters may change with the observation conditions and the difficulty of predicting a person's position. In this paper we therefore formulate adaptive parameter estimation using a general state space model. First, we explain how to formulate human tracking in a general state space model and describe its components. Then, following previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. Finally, we sequentially estimate this parameter on a real dataset under several settings. The results show that the sequential parameter estimation succeeded and that the estimates were consistent with observation conditions such as occlusions.
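An observation model of this kind, a function of the Bhattacharyya coefficient with one unknown parameter, can be sketched as follows; the histograms and the parameter name `lam` are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: Bhattacharyya coefficient between two normalized histograms,
# turned into an observation likelihood with one unknown parameter lam
# (the quantity a sequential Bayesian filter would estimate).
import math

def bhattacharyya(p, q):
    # 1.0 for identical normalized histograms, smaller for mismatch.
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def observation_likelihood(p, q, lam):
    # Larger lam -> sharper penalty on histogram mismatch.
    return math.exp(-lam * (1.0 - bhattacharyya(p, q)))

template = [0.5, 0.3, 0.2]    # appearance histogram of the tracked person
candidate = [0.4, 0.4, 0.2]   # histogram at a predicted position
w = observation_likelihood(template, candidate, lam=10.0)
```

In a particle filter, `w` would weight each predicted position; adapting `lam` on-line changes how strongly appearance mismatch is penalized under occlusion.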
[Subchronic toxicity testing of mold-ripened cheese].
Schoch, U; Lüthy, J; Schlatter, C
1984-08-01
The biological effects of known mycotoxins of Penicillium roqueforti or P. camemberti, and of other still unknown but potentially toxic metabolites in mold-ripened cheese (commercial samples of Blue and Camembert cheese), were investigated. High amounts of mycelium (equivalent to 100 kg of cheese per person per day) were fed to mice in a subchronic feeding trial. The following parameters were determined: development of body weight, organ weights, hematology, and blood plasma enzymes. No signs of adverse effects produced by cheese mycotoxins could be detected after 28 days, and no previously unknown toxic metabolites could be demonstrated. From these results, no health hazard appears to exist from the consumption of mold-ripened cheese, even in high amounts.
Garcia, Tanya P; Ma, Yanyuan
2017-10-01
We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.
Nonlinear robust controller design for multi-robot systems with unknown payloads
NASA Technical Reports Server (NTRS)
Song, Y. D.; Anderson, J. N.; Homaifar, A.; Lai, H. Y.
1992-01-01
This work is concerned with the control problem of a multi-robot system handling a payload with unknown mass properties. Force constraints at the grasp points are considered. Robust control schemes are proposed that cope with the model uncertainty and achieve asymptotic path tracking. To deal with the force constraints, a strategy for optimally sharing the task is suggested. This strategy basically consists of two steps. The first detects the robots that need help and the second arranges that help. It is shown that the overall system is not only robust to uncertain payload parameters, but also satisfies the force constraints.
McNeilly, Clyde E.
1977-01-04
A device is provided for automatically selecting from a plurality of ranges of a scale of values to which a meter may be made responsive, that range which encompasses the value of an unknown parameter. A meter relay indicates whether the unknown is of greater or lesser value than the range to which the meter is then responsive. The rotatable part of a stepping relay is rotated in one direction or the other in response to the indication from the meter relay. Various positions of the rotatable part are associated with particular scales. Switching means are sensitive to the position of the rotatable part to couple the associated range to the meter.
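The stepping-relay logic, moving to an adjacent range until the unknown value fits the selected full-scale range, can be sketched in software; the range values and measured value below are illustrative assumptions.

```python
# Sketch of auto-ranging: step up while the meter reads over-range,
# step down while a lower range would still encompass the value.
# Range values are illustrative full-scale readings, ascending.
ranges = [1.0, 10.0, 100.0, 1000.0]

def select_range(value, ranges, start=0):
    i = start
    while ranges[i] < value and i < len(ranges) - 1:
        i += 1            # over-range indication: rotate one step up
    while i > 0 and ranges[i - 1] >= value:
        i -= 1            # value fits a lower range: rotate one step down
    return i

idx = select_range(42.0, ranges)   # selects the 100.0 full-scale range
```

Starting position does not matter: from any `start`, the loop settles on the smallest range that encompasses the value, mirroring the bidirectional stepping relay.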
Cooley, Richard L.
1982-01-01
Prior information on the parameters of a groundwater flow model can be used to improve parameter estimates obtained from nonlinear regression solution of a modeling problem. Two scales of prior information can be available: (1) prior information having known reliability (that is, bias and random error structure) and (2) prior information consisting of best available estimates of unknown reliability. A regression method that incorporates the second scale of prior information assumes the prior information to be fixed for any particular analysis to produce improved, although biased, parameter estimates. Approximate optimization of two auxiliary parameters of the formulation is used to help minimize the bias, which is almost always much smaller than that resulting from standard ridge regression. It is shown that if both scales of prior information are available, then a combined regression analysis may be made.
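The idea of combining observations with prior parameter estimates can be sketched generically as a weighted augmented least-squares system; this is a simplified illustration with made-up numbers, not Cooley's groundwater formulation with its two auxiliary parameters.

```python
# Sketch: incorporate prior estimates of the parameters as extra
# weighted "observations", i.e. solve
#   min ||y - X b||^2 + w ||b - b_prior||^2
# by stacking. All numbers are illustrative.
import numpy as np

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # design matrix
y = np.array([1.1, 1.9, 3.2])                        # observations
b_prior = np.array([1.0, 1.0])                       # best available estimates
w = 0.5                                              # prior weight

Xa = np.vstack([X, np.sqrt(w) * np.eye(2)])          # augmented system
ya = np.concatenate([y, np.sqrt(w) * b_prior])
b_hat, *_ = np.linalg.lstsq(Xa, ya, rcond=None)
```

The solution satisfies the ridge-like normal equations (X'X + wI) b = X'y + w b_prior; shrinking toward `b_prior` rather than toward zero is what distinguishes this from standard ridge regression.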
Integrated direct/indirect adaptive robust motion trajectory tracking control of pneumatic cylinders
NASA Astrophysics Data System (ADS)
Meng, Deyuan; Tao, Guoliang; Zhu, Xiaocong
2013-09-01
This paper studies the precision motion trajectory tracking control of a pneumatic cylinder driven by a proportional-directional control valve. An integrated direct/indirect adaptive robust controller is proposed. The controller employs a physical model based indirect-type parameter estimation to obtain reliable estimates of unknown model parameters, and utilises a robust control method with dynamic compensation type fast adaptation to attenuate the effects of parameter estimation errors, unmodelled dynamics and disturbances. Due to the use of projection mapping, the robust control law and the parameter adaptation algorithm can be designed separately. Since the system model uncertainties are unmatched, the recursive backstepping technique is adopted to design the robust control law. Extensive comparative experimental results are presented to illustrate the effectiveness of the proposed controller and its performance robustness to parameter variations and sudden disturbances.
Parameter estimation in nonlinear distributed systems - Approximation theory and convergence results
NASA Technical Reports Server (NTRS)
Banks, H. T.; Reich, Simeon; Rosen, I. G.
1988-01-01
An abstract approximation framework and convergence theory is described for Galerkin approximations applied to inverse problems involving nonlinear distributed parameter systems. Parameter estimation problems are considered and formulated as the minimization of a least-squares-like performance index over a compact admissible parameter set subject to state constraints given by an inhomogeneous nonlinear distributed system. The theory applies to systems whose dynamics can be described by either time-independent or nonstationary strongly maximal monotonic operators defined on a reflexive Banach space which is densely and continuously embedded in a Hilbert space. It is demonstrated that if readily verifiable conditions on the system's dependence on the unknown parameters are satisfied, and the usual Galerkin approximation assumption holds, then solutions to the approximating problems exist and approximate a solution to the original infinite-dimensional identification problem.
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.
1991-01-01
A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space searching capabilities of genetic algorithms they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
Type Ia Supernova Intrinsic Magnitude Dispersion and the Fitting of Cosmological Parameters
NASA Astrophysics Data System (ADS)
Kim, A. G.
2011-02-01
I present an analysis for fitting cosmological parameters from a Hubble diagram of a standard candle with unknown intrinsic magnitude dispersion. The dispersion is determined from the data, simultaneously with the cosmological parameters. This contrasts with the strategies used to date. The advantages of the presented analysis are that it is done in a single fit (it is not iterative), it provides a statistically founded and unbiased estimate of the intrinsic dispersion, and its cosmological-parameter uncertainties account for the intrinsic-dispersion uncertainty. Applied to Type Ia supernovae, my strategy provides a statistical measure to test for subtypes and assess the significance of any magnitude corrections applied to the calibrated candle. Parameter bias and differences between likelihood distributions produced by the presented and currently used fitters are negligibly small for existing and projected supernova data sets.
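The single-fit strategy of estimating the intrinsic dispersion simultaneously with the other parameters can be sketched on a toy constant-candle model; the simulated data, the grid search standing in for a proper minimizer, and the parameter values are all illustrative assumptions.

```python
# Sketch: fit a calibration offset M and the intrinsic dispersion
# sigma_int together by minimizing a Gaussian -2 log-likelihood whose
# variance includes sigma_int (toy constant-candle Hubble residuals).
import math, random

random.seed(3)
sigma_meas = 0.1                      # per-object measurement error
sigma_int_true = 0.15                 # intrinsic scatter to recover
data = [random.gauss(0.0, math.hypot(sigma_meas, sigma_int_true))
        for _ in range(500)]          # magnitude residuals, true M = 0

def neg2logL(M, sigma_int):
    v = sigma_meas ** 2 + sigma_int ** 2
    # The log(v) term is what lets the fit constrain sigma_int.
    return sum((d - M) ** 2 / v + math.log(v) for d in data)

# Coarse grid search for transparency; a real analysis would minimize.
best = min(((neg2logL(M, s), M, s)
            for M in [i * 0.01 for i in range(-20, 21)]
            for s in [i * 0.01 for i in range(1, 41)]),
           key=lambda t: t[0])
_, M_hat, sig_hat = best
```

Because sigma_int enters both the chi-square term and the log-determinant term, its uncertainty propagates into the other parameters within the single fit, which is the point of the strategy.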
Automatic Exposure Control Device for Digital Mammography
2001-08-01
developing innovative approaches for controlling DM exposures. These approaches entail using the digital detector and an artificial neural network to...of interest that determine the exposure parameters for the fully exposed image; and (2) to use an artificial neural network to select exposure
Automatic Exposure Control Device for Digital Mammography
2004-08-01
developing innovative approaches for controlling DM exposures. These approaches entail using the digital detector and an artificial neural network to...of interest that determine the exposure parameters for the fully exposed image; and (2) to use an artificial neural network to select exposure
Surface tension effects on fully developed liquid layer flow over a convex corner
NASA Astrophysics Data System (ADS)
Bhatti, Ifrah; Farid, Saadia; Ullah, Saif; Riaz, Samia; Faryad, Maimoona
2018-04-01
This investigation deals with the study of fully developed liquid layer flow, including surface tension effects, confronting a convex corner in the direction of fluid flow. At the point of interaction, the governing equations are formulated using a double-deck structure and matched asymptotic techniques. Linearized solutions for small angles are obtained analytically. The solutions corresponding to similar flow neglecting surface tension effects are also recovered as a special case of our general solutions. Finally, the influence of pertinent parameters on the flow, as well as a comparison between models, is shown by graphical illustration.
Human Fear Conditioning Conducted in Full Immersion 3-Dimensional Virtual Reality
Huff, Nicole C.; Zielinski, David J.; Fecteau, Matthew E.; Brady, Rachael; LaBar, Kevin S.
2010-01-01
Fear conditioning is a widely used paradigm in non-human animal research to investigate the neural mechanisms underlying fear and anxiety. A major challenge in conducting conditioning studies in humans is the ability to strongly manipulate or simulate the environmental contexts that are associated with conditioned emotional behaviors. In this regard, virtual reality (VR) technology is a promising tool. Yet, adapting this technology to meet experimental constraints requires special accommodations. Here we address the methodological issues involved when conducting fear conditioning in a fully immersive 6-sided VR environment and present fear conditioning data. In the real world, traumatic events occur in complex environments that are made up of many cues, engaging all of our sensory modalities. For example, the cues that form the environmental configuration include not only visual elements but also aural, olfactory, and even tactile ones. In rodent studies of fear conditioning, animals are fully immersed in a context that is rich with novel visual, tactile and olfactory cues. However, standard laboratory tests of fear conditioning in humans are typically conducted in a nondescript room in front of a flat or 2D computer screen and do not replicate the complexity of real world experiences. On the other hand, a major limitation of clinical studies aimed at reducing (extinguishing) fear and preventing relapse in anxiety disorders is that treatment occurs after participants have acquired a fear in an uncontrolled and largely unknown context. Thus the experimenters are left without information about the duration of exposure, the true nature of the stimulus, and associated background cues in the environment [1]. In the absence of this information it can be difficult to truly extinguish a fear that is both cue and context-dependent.
Virtual reality environments address these issues by providing the complexity of the real world, and at the same time allowing experimenters to constrain fear conditioning and extinction parameters to yield empirical data that can suggest better treatment options and/or analyze mechanistic hypotheses. In order to test the hypothesis that fear conditioning may be richly encoded and context specific when conducted in a fully immersive environment, we developed distinct virtual reality 3-D contexts in which participants experienced fear conditioning to virtual snakes or spiders. Auditory cues co-occurred with the CS in order to further evoke orienting responses and a feeling of "presence" in subjects [2]. Skin conductance response served as the dependent measure of fear acquisition, memory retention and extinction. PMID:20736913
An approximate solution for interlaminar stresses in laminated composites: Applied mechanics program
NASA Technical Reports Server (NTRS)
Rose, Cheryl A.; Herakovich, Carl T.
1992-01-01
An approximate solution for interlaminar stresses in finite width, laminated composites subjected to uniform extensional, and bending loads is presented. The solution is based upon the principle of minimum complementary energy and an assumed, statically admissible stress state, derived by considering local material mismatch effects and global equilibrium requirements. The stresses in each layer are approximated by polynomial functions of the thickness coordinate, multiplied by combinations of exponential functions of the in-plane coordinate, expressed in terms of fourteen unknown decay parameters. Imposing the stationary condition of the laminate complementary energy with respect to the unknown variables yields a system of fourteen non-linear algebraic equations for the parameters. Newton's method is implemented to solve this system. Once the parameters are known, the stresses can be easily determined at any point in the laminate. Results are presented for through-thickness and interlaminar stress distributions for angle-ply, cross-ply (symmetric and unsymmetric laminates), and quasi-isotropic laminates subjected to uniform extension and bending. It is shown that the solution compares well with existing finite element solutions and represents an improved approximate solution for interlaminar stresses, primarily at interfaces where global equilibrium is satisfied by the in-plane stresses, but large local mismatch in properties requires the presence of interlaminar stresses.
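The final step, solving a nonlinear algebraic system for the unknown decay parameters with Newton's method, can be sketched generically; a small 2×2 toy system stands in here for the paper's fourteen equations, and the forward-difference Jacobian is an implementation convenience, not the paper's method.

```python
# Generic Newton iteration for a nonlinear algebraic system F(p) = 0,
# with a forward-difference Jacobian. The 2x2 system is illustrative.
import numpy as np

def F(p):
    x, y = p
    return np.array([x**2 + y**2 - 4.0, x - y])   # toy system

def newton(F, p0, tol=1e-10, max_iter=50):
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        f = F(p)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((len(f), len(p)))            # numerical Jacobian
        h = 1e-7
        for j in range(len(p)):
            dp = p.copy()
            dp[j] += h
            J[:, j] = (F(dp) - f) / h
        p = p - np.linalg.solve(J, f)             # Newton step
    return p

root = newton(F, [1.0, 0.5])                      # converges to (sqrt(2), sqrt(2))
```

As in the paper, once the parameters solving the system are known, all derived quantities (here the stresses) follow by direct evaluation.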
A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.
Faya, Paul; Stamey, James D; Seaman, John W
2017-01-01
For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known D_T, z, and F_0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance.
An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
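The classical point estimates that the Bayesian treatment replaces can be sketched: D from the slope of a log10 survivor curve, and F0 accumulated from a temperature history. The survivor counts, temperature profile, and the conventional z = 10 °C at a 121.1 °C reference are illustrative assumptions.

```python
# Sketch of classical point estimates of sterilization parameters.
# Survivor-curve method: log10 N = log10 N0 - t / D, so D = -1/slope.
times = [0.0, 2.0, 4.0, 6.0]         # minutes at constant temperature
log_counts = [6.0, 5.0, 4.0, 3.0]    # log10 survivors (illustrative)
n = len(times)
tbar = sum(times) / n
lbar = sum(log_counts) / n
slope = (sum((t - tbar) * (l - lbar) for t, l in zip(times, log_counts))
         / sum((t - tbar) ** 2 for t in times))
D_value = -1.0 / slope               # minutes per log10 reduction

# F0: equivalent minutes at 121.1 C, assuming z = 10 C.
def f0(temps_c, dt_min, z=10.0, t_ref=121.1):
    return sum(10.0 ** ((T - t_ref) / z) * dt_min for T in temps_c)

F0 = f0([111.1, 121.1, 121.1, 131.1], dt_min=1.0)
```

A Bayesian version would place priors on D and z and propagate their posterior uncertainty into F0, rather than reporting these single point values.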
Palombo, Marco; Gabrielli, Andrea; De Santis, Silvia; Capuani, Silvia
2012-03-01
In this paper, we investigate the image contrast that characterizes anomalous and non-gaussian diffusion images obtained using the stretched exponential model. This model is based on the introduction of the stretched parameter γ, which quantifies deviation from the mono-exponential decay of the diffusion signal as a function of the b-value. To date, the biophysical substrate underpinning the contrast observed in γ maps, in other words, the biophysical interpretation of the γ parameter (or the fractional order derivative in space, β parameter) is still not fully understood, although it has already been applied to investigate both animal models and human brain. Due to the ability of γ maps to reflect additional microstructural information which cannot be obtained using diffusion procedures based on gaussian diffusion, some authors propose this parameter as a measure of diffusion heterogeneity or water compartmentalization in biological tissues. Based on our recent work we suggest here that the coupling between internal and diffusion gradients provides pseudo-superdiffusion effects which are quantified by the stretched exponential parameter γ. This means that the image contrast of Mγ maps reflects local magnetic susceptibility differences (Δχ_m), thus highlighting better than T2* contrast the interface between compartments characterized by Δχ_m. Thanks to this characteristic, Mγ imaging may represent an interesting tool to develop contrast-enhanced MRI for molecular imaging. The spectroscopic and imaging experiments (performed in controlled micro-bead dispersions) that are reported here strongly suggest internal gradients, and as a consequence Δχ_m, to be an important factor in fully understanding the source of contrast in anomalous diffusion methods that are based on a stretched exponential model analysis of diffusion data obtained at varying gradient strengths g. Copyright © 2012 Elsevier Inc. All rights reserved.
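The stretched-exponential signal model and a crude estimate of γ can be sketched as follows; the b-values, diffusivity, and grid fit are illustrative (γ = 1 recovers mono-exponential decay), not the acquisition or fitting pipeline of the study.

```python
# Sketch: stretched-exponential diffusion decay S(b) = S0*exp(-(b*D)**gamma)
# and a noise-free grid estimate of (D, gamma). Values are illustrative.
import numpy as np

b = np.linspace(0.0, 3.0, 20)            # b-values (arbitrary units)
S0, D_true, gamma_true = 1.0, 1.0, 0.7
signal = S0 * np.exp(-(b * D_true) ** gamma_true)

def sse(D, gamma):
    # Sum of squared errors of the model against the measured decay.
    return np.sum((signal - S0 * np.exp(-(b * D) ** gamma)) ** 2)

grid_D = np.linspace(0.5, 1.5, 51)
grid_g = np.linspace(0.4, 1.0, 61)
D_hat, g_hat = min(((sse(Dv, gv), Dv, gv)
                    for Dv in grid_D for gv in grid_g),
                   key=lambda t: t[0])[1:]
```

A γ map is then just `g_hat` computed voxel-by-voxel; values below 1 flag the non-mono-exponential decay that the paper links to susceptibility differences Δχ_m.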
NASA Astrophysics Data System (ADS)
Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.
2017-12-01
Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.
Merrikh-Bayat, Farshad
2017-05-01
In this paper, the Multi-term Fractional-Order PID (MFOPID) controller, whose transfer function is ∑_j k_j s^{α_j}, where the k_j are unknown and the α_j are known real parameters, is first introduced. Without any loss of generality, a special form of MFOPID with transfer function k_p + k_i/s + k_d1·s + k_d2·s^μ, where k_p, k_i, k_d1, and k_d2 are unknown real parameters and μ is a known positive real parameter, is considered. Like PID and TID, MFOPID is linear in its parameters, which makes it possible to study all of them in the same framework. Tuning the parameters of PID, TID, and MFOPID based on loop shaping using Linear Matrix Inequalities (LMIs) is discussed. For this purpose, separate LMIs for closed-loop stability (of sufficient type) and for adjusting different aspects of the open-loop frequency response are developed. The proposed stability LMIs are obtained from the Nyquist stability theorem and can be applied to both integer- and fractional-order (not necessarily commensurate) processes that are either stable or have one unstable pole. Numerical simulations show that the performance of the four-variable MFOPID can compete with the classical five-variable FOPID and often surpasses PID and TID. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
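Because the MFOPID is linear in its gains, its frequency response is a dot product between a known regressor and the unknown gain vector; the sketch below illustrates this for the special form k_p + k_i/s + k_d1·s + k_d2·s^μ. The gain values and μ are arbitrary examples, and this is not the paper's LMI tuning procedure.

```python
import numpy as np

def mfopid_response(w, kp, ki, kd1, kd2, mu):
    """Frequency response of C(s) = kp + ki/s + kd1*s + kd2*s^mu at s = j*w."""
    s = 1j * np.asarray(w, dtype=complex)
    return kp + ki / s + kd1 * s + kd2 * s ** mu

# Linearity in the gains: the response is a known (complex) regressor matrix
# times the unknown gain vector, which is what enables LMI-based loop shaping.
w = np.logspace(-2, 2, 5)
basis = np.stack([np.ones_like(w, dtype=complex),
                  1.0 / (1j * w),
                  1j * w,
                  (1j * w) ** 0.6])            # mu = 0.6 (example value)
gains = np.array([2.0, 1.0, 0.5, 0.3])        # kp, ki, kd1, kd2 (examples)
C1 = basis.T @ gains
C2 = mfopid_response(w, 2.0, 1.0, 0.5, 0.3, 0.6)
```

Any frequency-domain constraint that is convex in C(jω) therefore translates into a convex (LMI-type) constraint on the gain vector.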
On the predictiveness of single-field inflationary models
NASA Astrophysics Data System (ADS)
Burgess, C. P.; Patil, Subodh P.; Trott, Michael
2014-06-01
We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflation scenario, which is arguably the most predictive single-field model on the market, because its predictions for A_S, r and n_s are made using only one new free parameter beyond those measured in particle physics experiments and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in principle) for a slightly larger range of Higgs masses. We comment on the origin of the various UV scales that arise at large field values for the SM Higgs, clarifying cutoff-scale arguments by further developing the formalism of a non-linear realization of SU_L(2) × U(1) in curved space. We discuss the interesting fact that, outside of Higgs Inflation, the effect of a non-minimal coupling to gravity, even in the SM, results in a non-linear EFT for the Higgs sector. Finally, we briefly comment on post-BICEP2 attempts to modify the Higgs Inflation scenario.
NASA Astrophysics Data System (ADS)
Schalge, Bernd; Rihani, Jehan; Haese, Barbara; Baroni, Gabriele; Erdal, Daniel; Haefliger, Vincent; Lange, Natascha; Neuweiler, Insa; Hendricks-Franssen, Harrie-Jan; Geppert, Gernot; Ament, Felix; Kollet, Stefan; Cirpka, Olaf; Saavedra, Pablo; Han, Xujun; Attinger, Sabine; Kunstmann, Harald; Vereecken, Harry; Simmer, Clemens
2017-04-01
Currently, an integrated approach to simulating the Earth system is evolving, in which several compartment models are coupled to achieve the most physically consistent representation possible. We used the model TerrSysMP, which fully couples the subsurface, land surface and atmosphere, in a synthetic study that mimicked the Neckar catchment in Southern Germany. A virtual-reality run was made at a high resolution of 400 m for the land surface and subsurface and 1.1 km for the atmosphere. Ensemble runs at a lower resolution (800 m for the land surface and subsurface) were also made. The ensemble was generated by systematically varying soil and vegetation parameters and the lateral atmospheric forcing among the ensemble members. We found that, for some variables and time periods, the ensemble runs deviated strongly from the virtual-reality reference run (the reference run was not covered by the ensemble), which could be related to the different model resolutions. This was, for example, the case for river discharge in the summer. We also analyzed the spread of model states as a function of time and found clear relations between the spread and the time of year and weather conditions. For example, the ensemble spread of latent heat flux related to uncertain soil parameters was larger under dry soil conditions than under wet soil conditions. Another example is that the ensemble spread of atmospheric states was more strongly influenced by uncertain soil and vegetation parameters under conditions of low air-pressure gradients (in summer) than under the larger air-pressure gradients in winter. The analysis of the ensemble of fully coupled model simulations provided valuable insights into the dynamics of land-atmosphere feedbacks, which we will further highlight in the presentation.
Search Planning Under Incomplete Information Using Stochastic Optimization and Regression
2011-09-01
solve since they involve uncertainty and unknown parameters (see for example Shapiro et al., 2009; Wallace & Ziemba, 2005). One application area is ... M16130.2E. Wallace, S. W., & Ziemba, W. T. (2005). Applications of stochastic programming. Philadelphia, PA: Society for Industrial and Applied Mathematics.
A Robust Deconvolution Method based on Transdimensional Hierarchical Bayesian Inference
NASA Astrophysics Data System (ADS)
Kolb, J.; Lekic, V.
2012-12-01
Analysis of P-S and S-P conversions allows us to map receiver-side crustal and lithospheric structure. This analysis often involves deconvolution of the parent wave field from the scattered wave field as a means of suppressing source-side complexity. A variety of deconvolution techniques exist, including damped spectral division, Wiener filtering, iterative time-domain deconvolution, and the multitaper method. All of these techniques require estimates of noise characteristics as input parameters. We present a deconvolution method based on transdimensional Hierarchical Bayesian inference in which both noise magnitude and noise correlation are used as parameters in calculating the likelihood probability distribution. Because the noise in P-S and S-P conversion analysis in terms of receiver functions is a combination of background noise, which is relatively easy to characterize, and signal-generated noise, which is much more difficult to quantify, we treat measurement errors as an unknown quantity, characterized by a probability density function whose mean and variance are model parameters. This transdimensional Hierarchical Bayesian approach has previously been used successfully in the inversion of receiver functions in terms of shear and compressional wave speeds of an unknown number of layers [1]. In our method we use a Markov chain Monte Carlo (MCMC) algorithm to find the receiver function that best fits the data while accurately assessing the noise parameters. We parameterize the receiver function as an unknown number of Gaussians of unknown amplitude and width. The algorithm takes multiple steps before calculating the acceptance probability of a new model, in order to avoid getting trapped in local misfit minima.
Using both observed and synthetic data, we show that the MCMC deconvolution method can accurately obtain a receiver function as well as an estimate of the noise parameters given the parent and daughter components. Furthermore, we demonstrate that this new approach is far less susceptible to generating spurious features even at high noise levels. Finally, the method yields not only the most-likely receiver function, but also quantifies its full uncertainty. [1] Bodin, T., M. Sambridge, H. Tkalčić, P. Arroucau, K. Gallagher, and N. Rawlinson (2012), Transdimensional inversion of receiver functions and surface wave dispersion, J. Geophys. Res., 117, B02301
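The core idea of treating noise as an unknown can be sketched with a toy Metropolis sampler that jointly infers a pulse amplitude and the noise level σ. This is a hypothetical, fixed-dimension simplification (one Gaussian with known position and width, uncorrelated noise), not the transdimensional algorithm of the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "receiver function": one Gaussian pulse plus noise of unknown sigma
t = np.linspace(0, 10, 200)
def pulse(amp, t0=4.0, width=0.5):
    return amp * np.exp(-0.5 * ((t - t0) / width) ** 2)

true_amp, true_sigma = 1.0, 0.1
data = pulse(true_amp) + rng.normal(0, true_sigma, t.size)

def log_likelihood(amp, sigma):
    """Gaussian likelihood in which the noise level sigma is itself a model
    parameter, the hierarchical ingredient of the method described above."""
    r = data - pulse(amp)
    return -t.size * np.log(sigma) - 0.5 * np.sum(r ** 2) / sigma ** 2

# Metropolis sampler over (amp, sigma)
amp, sigma = 0.5, 0.3
samples = []
for _ in range(5000):
    amp_p = amp + rng.normal(0, 0.05)
    sigma_p = abs(sigma + rng.normal(0, 0.02))    # reflect at zero
    if np.log(rng.random()) < log_likelihood(amp_p, sigma_p) - log_likelihood(amp, sigma):
        amp, sigma = amp_p, sigma_p
    samples.append((amp, sigma))

burn = np.array(samples[1000:])                   # discard burn-in
amp_est, sigma_est = burn.mean(axis=0)
```

Because σ is sampled rather than fixed, overly optimistic noise assumptions cannot force spurious structure into the recovered pulse, the behavior the abstract highlights.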
Neutrino oscillations and Non-Standard Interactions
NASA Astrophysics Data System (ADS)
Farzan, Yasaman; Tórtola, Mariam
2018-02-01
Current neutrino experiments are measuring the neutrino mixing parameters with an unprecedented accuracy. The upcoming generation of neutrino experiments will be sensitive to subdominant oscillation effects that can give information on the yet-unknown neutrino parameters: the Dirac CP-violating phase, the mass ordering and the octant of θ_{23}. Determining the exact values of neutrino mass and mixing parameters is crucial to test neutrino models and flavor symmetries designed to predict these neutrino parameters. In the first part of this review, we summarize the current status of the neutrino oscillation parameter determination. We consider the most recent data from all solar experiments and the atmospheric data from Super-Kamiokande, IceCube and ANTARES. We also implement the data from the reactor neutrino experiments KamLAND, Daya Bay, RENO and Double Chooz as well as the long-baseline neutrino data from MINOS, T2K and NOvA. If in addition to the standard interactions, neutrinos have subdominant yet-unknown Non-Standard Interactions (NSI) with matter fields, extracting the values of these parameters will suffer from new degeneracies and ambiguities. We review such effects and formulate the conditions on the NSI parameters under which the precision measurement of neutrino oscillation parameters can be distorted. Like standard weak interactions, non-standard interactions can be categorized into two groups: Charged Current (CC) NSI and Neutral Current (NC) NSI. Our focus will be mainly on neutral current NSI because it is possible to build a class of models that give rise to sizeable NC NSI with discernible effects on neutrino oscillation. These models are based on a new U(1) gauge symmetry with a gauge boson of mass ≲ 10 MeV. The UV-complete model should, of course, be electroweak invariant, which in general implies that, along with neutrinos, charged fermions also acquire new interactions, on which there are strong bounds.
We enumerate the bounds that already exist on the electroweak symmetric models and demonstrate that it is possible to build viable models avoiding all these bounds. In the end, we review methods to test these models and suggest approaches to break the degeneracies in deriving neutrino mass parameters caused by NSI.
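For orientation, the precision measurements discussed above build on the basic oscillation probability; a minimal two-flavor vacuum sketch (no matter effects or NSI, with illustrative, roughly atmospheric-scale parameter values) is:

```python
import math

def p_survival(theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor vacuum survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.267 * dm2[eV^2] * L[km] / E[GeV])."""
    return 1.0 - math.sin(2 * theta) ** 2 * \
        math.sin(1.267 * dm2_ev2 * L_km / E_GeV) ** 2

# Illustrative numbers only; maximal mixing corresponds to theta23 = 45 deg,
# and the "octant" question asks on which side of 45 deg theta23 actually lies.
theta23 = math.radians(45.0)
dm2 = 2.5e-3
p = p_survival(theta23, dm2, L_km=295.0, E_GeV=0.6)
```

NSI modify this picture through extra matter-potential terms, which is why degeneracies appear between standard parameters and the NSI couplings reviewed here.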
Hanada, Akiko; Kurogi, Takashi; Giang, Nguyen Minh; Yamada, Takeshi; Kamimoto, Yuki; Kiso, Yoshiaki; Hiraishi, Akira
2014-01-01
Laboratory-scale acidophilic nitrifying sequencing-batch reactors (ANSBRs) were constructed by seeding with sewage activated sludge and cultivating with ammonium-containing acidic mineral medium (pH 4.0) with or without a trace amount of yeast extract. In every batch cycle, the pH varied between 2.7 and 4.0, and ammonium was completely converted to nitrate. Attempts to detect nitrifying functional genes in the fully acclimated ANSBRs by PCR with previously designed primers mostly gave negative results. 16S rRNA gene-targeted PCR and a subsequent denaturing gradient gel electrophoresis analysis revealed that a marked change occurred in the bacterial community during the overall period of operation, in which members of the candidate phylum TM7 and the class Gammaproteobacteria became predominant at the fully acclimated stage. This result was fully supported by a 16S rRNA gene clone library analysis, as the major phylogenetic groups of clones detected (>5% of the total) were TM7 (33%), Gammaproteobacteria (37%), Actinobacteria (10%), and Alphaproteobacteria (8%). Fluorescence in situ hybridization with specific probes also demonstrated the prevalence of TM7 bacteria and Gammaproteobacteria. These results suggest that previously unknown nitrifying microorganisms may play a major role in ANSBRs; however, the ecophysiological significance of the TM7 bacteria predominating in this process remains unclear. PMID:25241805
Vegger, Jens Bay; Brüel, Annemarie; Brent, Mikkel Bo; Thomsen, Jesper Skovhus
2018-03-01
Osteopenia and osteoporosis predominantly occur in the fully grown skeleton. However, it is unknown whether disuse osteopenia in skeletally mature, but still growing, mice resembles that of fully grown mice. Twenty-four 16-week-old (young) and eighteen 44-week-old (aged) female C57BL/6J mice were investigated. Twelve young and nine aged mice were injected with botulinum toxin in one hind limb; the remaining mice served as controls. The mice were euthanized after 3 weeks of disuse. The femora were scanned by micro-computed tomography (µCT), and bone strength was determined by mechanical testing of the femoral mid-diaphysis and neck. At the distal femoral metaphysis, the loss of trabecular bone volume fraction (BV/TV) differed between the young and aged mice. However, at the distal femoral epiphysis, no age-dependent differences were observed. Thinning of the trabeculae was not affected by the age of the mice at either the distal femoral metaphysis or the epiphysis. Furthermore, the aged mice lost more bone strength at the femoral mid-diaphysis, but not at the femoral neck, compared to the young mice. In general, the bone loss induced by botulinum toxin did not differ substantially between young and aged mice. Therefore, the loss of bone in young mice resembles that of aged mice, even though they are not fully grown.
Selective therapy in equine parasite control--application and limitations.
Nielsen, M K; Pfister, K; von Samson-Himmelstjerna, G
2014-05-28
Since the 1960s, equine parasite control has relied heavily on anthelmintic treatments applied at frequent intervals year-round. However, increasing levels of anthelmintic resistance in cyathostomins and Parascaris equorum are now forcing the equine industry to change to a more surveillance-based treatment approach that facilitates a reduction in treatment intensity. The principle of selective therapy has been implemented with success in small-ruminant parasite control and has also found use in horse populations. Typically, egg counts are performed on all individuals in the population, and those exceeding a predetermined cutoff threshold are treated. Several studies document the applicability of this method in populations of adult horses, where overall cyathostomin egg shedding can be controlled by treating only about half the horses. However, selective therapy has not been evaluated in foals and young horses, and it remains unknown whether the principle is adequate to also provide control over other important parasites such as tapeworms, ascarids, and large strongyles. One recent study associated selective therapy with increased occurrence of Strongylus vulgaris. Studies are needed to evaluate potential health risks associated with selective therapy and to assess to what extent the development of anthelmintic resistance can be delayed with this approach. The choice of strongyle egg count cutoff value for anthelmintic treatment is currently based more on tradition than on science, and a recent publication illustrated that apparently healthy horses with egg counts below 100 eggs per gram (EPG) can harbor cyathostomin burdens in the range of 100,000 luminal worms. It remains unknown whether leaving such horses untreated constitutes a potential threat to equine health. The concept of selective therapy has merit for equine strongyle control, but several questions remain, as it has not been fully scientifically evaluated.
There is a great need for new and improved methods for diagnosis and surveillance to supplement or replace the fecal egg counts, and equine health parameters need to be included in studies evaluating any parasite control program. Copyright © 2014 Elsevier B.V. All rights reserved.
Towards de novo identification of metabolites by analyzing tandem mass spectra.
Böcker, Sebastian; Rasche, Florian
2008-08-15
Mass spectrometry is among the most widely used technologies in proteomics and metabolomics. Being a high-throughput method, it produces large amounts of data that necessitate automated analysis of the spectra. Database search methods for protein analysis can easily be adapted to analyze metabolite mass spectra. But for metabolites, de novo interpretation of spectra is even more important than for protein data, because metabolite spectral databases cover only a small fraction of naturally occurring metabolites: even the model plant Arabidopsis thaliana has a large number of enzymes whose substrates and products remain unknown. The field of bio-prospection searches biologically diverse areas for metabolites that might serve as pharmaceuticals. De novo identification of metabolite mass spectra requires new concepts and methods since, unlike proteins, metabolites possess a non-linear molecular structure. In this work, we introduce a method for fully automated de novo identification of metabolites from tandem mass spectra. Mass spectrometry data are usually assumed to be insufficient for identification of molecular structures, so we aim to estimate the molecular formula of the unknown metabolite, a crucial step for its identification. The method first calculates all molecular formulas that explain the parent peak mass. Then, a graph is built in which vertices correspond to molecular formulas of all peaks in the fragmentation mass spectra and edges correspond to hypothetical fragmentation steps. Our algorithm then calculates the maximum-scoring subtree of this graph: each peak in the spectrum may be explained at most once, so the subtree contains only one explanation per peak. Unfortunately, finding this subtree is NP-hard. We suggest three exact algorithms (including one fixed-parameter tractable algorithm) as well as two heuristics to solve the problem.
Tests on real mass spectra show that the FPT algorithm and the heuristics solve the problem suitably fast and provide excellent results: for all 32 test compounds the correct solution was among the top five suggestions, for 26 compounds the first suggestion of the exact algorithm was correct. http://www.bio.inf.uni-jena.de/tandemms
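The formula-enumeration step described above (finding all molecular formulas that explain the parent peak mass) can be sketched as a brute-force search over a small element set. The element bounds and mass tolerance below are illustrative assumptions, and real implementations prune far more aggressively and also score fragmentation trees.

```python
# Monoisotopic masses (Da) of common elements considered here (C, H, N, O)
MASSES = {'C': 12.0, 'H': 1.0078250319, 'N': 14.0030740052, 'O': 15.9949146221}

def candidate_formulas(parent_mass, tol=0.005, max_atoms=(40, 80, 10, 20)):
    """Enumerate CHNO formulas (c, h, n, o) whose monoisotopic mass matches
    the parent peak mass within tol (Da)."""
    max_c, max_h, max_n, max_o = max_atoms
    hits = []
    for c in range(max_c + 1):
        for n in range(max_n + 1):
            for o in range(max_o + 1):
                rest = (parent_mass - c * MASSES['C']
                        - n * MASSES['N'] - o * MASSES['O'])
                if rest < -tol:
                    break              # heavier o only makes it worse
                h = round(rest / MASSES['H'])
                if 0 <= h <= max_h and abs(rest - h * MASSES['H']) <= tol:
                    hits.append((c, h, n, o))
    return hits

# Glucose, C6H12O6, has a monoisotopic mass of about 180.0634 Da
formulas = candidate_formulas(180.0634)
```

Each candidate formula then becomes a vertex in the fragmentation graph; the maximum-scoring subtree selects one consistent explanation per peak.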
Using lod scores to detect sex differences in male-female recombination fractions.
Feenstra, B; Greenberg, D A; Hodge, S E
2004-01-01
Human recombination fraction (RF) can differ between males and females, but investigators do not always know which disease genes are located in genomic areas of large RF sex differences. Knowledge of RF sex differences contributes to our understanding of basic biology and can increase the power of a linkage study, improve gene localization, and provide clues to possible imprinting. One way to detect these differences is to use lod scores. In this study we focused on detecting RF sex differences and answered the following questions, in both phase-known and phase-unknown matings: (1) How large a sample size is needed to detect a RF sex difference? (2) What are "optimal" proportions of paternally vs. maternally informative matings? (3) Does ascertaining nonoptimal proportions of paternally or maternally informative matings lead to ascertainment bias? Our results were as follows: (1) We calculated expected lod scores (ELODs) under two different conditions: "unconstrained," allowing sex-specific RF parameters (θ_female, θ_male); and "constrained," requiring θ_female = θ_male. We then examined ΔELOD (defined as the difference between the maximized constrained and unconstrained ELODs) and calculated minimum sample sizes required to achieve statistically significant ΔELODs. For large RF sex differences, samples as small as 10 to 20 fully informative matings can achieve statistical significance. We give general sample size guidelines for detecting RF differences in informative phase-known and phase-unknown matings. (2) We defined p as the proportion of paternally informative matings in the dataset, and the optimal proportion p° as the value of p that maximizes ΔELOD. We determined that, surprisingly, p° does not necessarily equal 1/2, although it does fall between approximately 0.4 and 0.6 in most situations. 
(3) We showed that if p in a sample deviates from its optimal value, no bias is introduced (asymptotically) into the maximum likelihood estimates of θ_female and θ_male, even though the ELOD is reduced (see point 2). This fact is important because investigators often cannot control the proportions of paternally and maternally informative families. In conclusion, it is possible to reliably detect sex differences in recombination fraction. Copyright 2004 S. Karger AG, Basel
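The constrained-versus-unconstrained comparison behind ΔELOD can be illustrated with a toy lod-score computation for phase-known recombinant counts. The counts below are hypothetical, and this single-sample statistic is only an analogue of the paper's expected-lod (ELOD) analysis.

```python
import math

def lod(theta, rec, nonrec):
    """Phase-known lod score: log10 of L(theta) relative to L(1/2)."""
    n = rec + nonrec
    return (rec * math.log10(theta) + nonrec * math.log10(1 - theta)
            - n * math.log10(0.5))

def delta_lod(rec_f, nonrec_f, rec_m, nonrec_m):
    """Unconstrained maximum lod (separate theta_female, theta_male at their
    MLEs) minus the constrained maximum (theta_female = theta_male, pooled)."""
    th_f = rec_f / (rec_f + nonrec_f)
    th_m = rec_m / (rec_m + nonrec_m)
    th_pooled = (rec_f + rec_m) / (rec_f + nonrec_f + rec_m + nonrec_m)
    unconstrained = lod(th_f, rec_f, nonrec_f) + lod(th_m, rec_m, nonrec_m)
    constrained = (lod(th_pooled, rec_f, nonrec_f)
                   + lod(th_pooled, rec_m, nonrec_m))
    return unconstrained - constrained

# Hypothetical counts: 2/20 maternal vs. 8/20 paternal recombinants
d = delta_lod(rec_f=2, nonrec_f=18, rec_m=8, nonrec_m=12)
```

A large positive difference indicates that allowing sex-specific recombination fractions fits the data substantially better than forcing them equal.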
Gonzalez, Jesus M.; Francis, Bryan; Burda, Sherri; Hess, Kaitlyn; Behera, Digamber; Gupta, Dheeraj; Agarwal, Ashutosh Nath; Verma, Indu; Verma, Ajoy; Myneedu, Vithal Prasad; Niedbala, Sam; Laal, Suman
2014-01-01
The need for an accurate, rapid, simple and affordable point-of-care (POC) test for tuberculosis (TB) that can be implemented in microscopy centers and other peripheral health-care settings in TB-endemic countries remains unmet. This manuscript describes preliminary results of a new prototype rapid lateral flow TB test based on detection of antibodies to immunodominant epitopes (peptides) derived from carefully selected, highly immunogenic M. tuberculosis cell-wall proteins. Peptide selection was initially based on recognition by antibodies in sera from TB patients but not in PPD-/PPD+/BCG-vaccinated individuals from TB-endemic settings. The peptides were conjugated to BSA; the purified peptide-BSA conjugates were striped onto nitrocellulose membrane and adsorbed onto colloidal gold particles to devise the prototype test, which was evaluated for reactivity with sera from 3 PPD-, 29 PPD+, and 15 PPD-unknown healthy subjects, 10 patients with non-TB lung disease, and 124 smear-positive TB patients. The assay parameters were adjusted to determine positive/negative status within 15 minutes via visual or instrumented assessment. There was minimal or no reactivity of sera from non-TB subjects with the striped BSA-peptides, demonstrating the lack of anti-peptide antibodies in subjects with latent TB and/or BCG vaccination. Sera from most TB patients demonstrated reactivity with one or more peptides. The sensitivity of antibody detection ranged from 28-85% with the 9 BSA-peptides. Three peptides were further evaluated with sera from 400 subjects, including additional PPD-/PPD+/PPD-unknown healthy contacts, close hospital contacts and household contacts of untreated TB patients, patients with non-TB lung disease, and HIV+TB- patients. A combination of the 3 peptides provided sensitivity and specificity >90%.
While the final fully optimized lateral flow POC test for TB is under development, these preliminary results demonstrate that an antibody-detection based rapid POC lateral flow test based on select combinations of immunodominant M. tb-specific epitopes may potentially replace microscopy for TB diagnosis in TB-endemic settings. PMID:25247820
Sampling in ecology and evolution - bridging the gap between theory and practice
Albert, C.H.; Yoccoz, N.G.; Edwards, T.C.; Graham, C.H.; Zimmermann, N.E.; Thuiller, W.
2010-01-01
Sampling is a key issue for answering most ecological and evolutionary questions. The importance of developing a rigorous sampling design tailored to specific questions has already been discussed in the ecological and sampling literature, which provides useful tools and recommendations for sampling and analysing ecological data. However, sampling issues are often difficult to overcome in ecological studies due to apparent inconsistencies between theory and practice, often leading to the implementation of simplified sampling designs that suffer from unknown biases. Moreover, we believe that classical sampling principles, which are based on estimation of means and variances, are insufficient to fully address many ecological questions that rely on estimating relationships between a response and a set of predictor variables over time and space. Our objective is thus to highlight the importance of selecting an appropriate sampling space and an appropriate sampling design. We also emphasize the importance of using prior knowledge of the study system to estimate models or complex parameters and thus better understand the ecological patterns and the processes generating them. Using a semi-virtual simulation study as an illustration, we reveal how the selection of the space (e.g. geographic, climatic) in which the sampling is designed influences the patterns that can ultimately be detected. We also demonstrate the inefficiency of common sampling designs in revealing response curves between ecological variables and climatic gradients. Further, we show that response-surface methodology, which has rarely been used in ecology, is much more efficient than more traditional methods. Finally, we discuss the use of prior knowledge, simulation studies and model-based designs in defining appropriate sampling designs.
We conclude with a call for the development of methods to estimate nonlinear, ecologically relevant parameters without bias, in order to make inferences while fulfilling the requirements of both sampling theory and fieldwork logistics. © 2010 The Authors.
North American Crust and Upper Mantle Structure Imaged Using an Adaptive Bayesian Inversion
NASA Astrophysics Data System (ADS)
Eilon, Z.; Fischer, K. M.; Dalton, C. A.
2017-12-01
We present a methodology for imaging upper mantle structure using a Bayesian approach that incorporates a novel combination of seismic data types and an adaptive parameterization based on piecewise discontinuous splines. Our inversion algorithm lays the groundwork for improved seismic velocity models of the lithosphere and asthenosphere by harnessing increased computing power alongside sophisticated data analysis, with the flexibility to include multiple data types with complementary resolution. Our new method has been designed to simultaneously fit P-s and S-p converted phases and Rayleigh wave phase velocities measured from ambient noise (periods 6-40 s) and earthquake sources (periods 30-170 s). Careful processing of the body wave data isolates the signals from velocity gradients between the mid-crust and 250 km depth. We jointly invert the body and surface wave data to obtain detailed 1-D velocity models that include robustly imaged mantle discontinuities. Synthetic tests demonstrate that S-p phases are particularly important for resolving mantle structure, while surface waves capture absolute velocities with resolution better than 0.1 km/s. By treating data noise as an unknown parameter, and by generating posterior parameter distributions, model trade-offs and uncertainties are fully captured by the inversion. We apply the method to stations across the northwest and north-central United States, finding that the imaged structure improves upon existing models by sharpening the vertical resolution of absolute velocity profiles and offering robust uncertainty estimates. In the tectonically active northwestern US, a strong velocity drop immediately beneath the Moho connotes thin (<70 km) lithosphere and a sharp lithosphere-asthenosphere transition; the asthenospheric velocity profile here matches observations at mid-ocean ridges.
Within the Wyoming and Superior cratons, our models reveal mid-lithospheric velocity gradients indicative of thermochemical cratonic layering, but the lithosphere-asthenosphere boundary is relatively gradual. This flexible method holds promise for increasingly detailed understanding of the lithosphere-asthenosphere system.
Ramesh, Sindhu; Bhattacharya, Dwipayan; Majrashi, Mohammed; Morgan, Marlee; Prabhakar Clement, T; Dhanasekaran, Muralikrishnan
2018-04-15
The 2010 Deepwater Horizon (DWH) oil spill is the largest marine oil spill in US history. In the aftermath of the spill, the response efforts used a chemical dispersant, Corexit, to disperse the spilled oil. The health impacts of the crude oil-Corexit mixture on humans, mammals, fishes, and birds are mostly unknown. The purpose of this study is to investigate the in vivo effects of DWH oil, Corexit, and the oil-Corexit mixture on the general behavior, hematological markers, and liver and kidney functions of rodents. C57BL/6 mice were treated with DWH oil (80 mg/kg) and/or Corexit (95 mg/kg), and several hematological markers, the lipid profile, and liver and kidney functions were monitored. The results show that both DWH oil and Corexit altered the white blood cell and platelet counts. They also affected the lipid profile and induced toxic effects on liver and kidney function. The impacts were more pronounced when the mice were treated with a mixture of DWH oil and Corexit. This study provides preliminary data to elucidate the potential toxicological effects of DWH oil, Corexit, and their mixtures on mammalian health. Residues from the DWH spill remain trapped along various Gulf Coast beaches, and further studies are therefore needed to fully understand their long-term impacts on coastal ecosystems. Copyright © 2018. Published by Elsevier Inc.
Modeling and Analysis of FCM UN TRISO Fuel Using the PARFUME Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaise Collin
2013-09-01
The PARFUME (PARticle Fuel ModEl) modeling code was used to assess the overall fuel performance of uranium nitride (UN) tri-structural isotropic (TRISO) ceramic fuel in the frame of the design and development of Fully Ceramic Matrix (FCM) fuel. A specific model of a TRISO particle with a UN kernel was developed with PARFUME, and its behavior was assessed under irradiation conditions typical of a Light Water Reactor (LWR). The calculations were used to assess the dimensional changes of the fuel particle layers and kernel, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the pyrolytic carbon (PyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn-up. These material properties are unknown at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, more effort is needed to determine them before positively concluding on the applicability of FCM fuel to LWRs.
Virtual k-Space Modulation Optical Microscopy
NASA Astrophysics Data System (ADS)
Kuang, Cuifang; Ma, Ye; Zhou, Renjie; Zheng, Guoan; Fang, Yue; Xu, Yingke; Liu, Xu; So, Peter T. C.
2016-07-01
We report a novel superresolution microscopy approach for imaging fluorescent samples. The reported approach, termed virtual k-space modulation optical microscopy (VIKMOM), is able to improve the lateral resolution by a factor of 2, reduce the background level, improve the optical sectioning effect and correct for unknown optical aberrations. In the acquisition process of VIKMOM, we used a scanning confocal microscope setup with a 2D detector array to capture sample information at each scanned x-y position. In the recovery process of VIKMOM, we first modulated the captured data by virtual k-space coding and then employed a ptychography-inspired procedure to recover the sample information and correct for unknown optical aberrations. We demonstrated the performance of the reported approach by imaging fluorescent beads, fixed bovine pulmonary artery endothelial (BPAE) cells, and living human astrocytes (HA). As the VIKMOM approach is fully compatible with conventional confocal microscope setups, it may provide a turn-key solution for imaging biological samples with ~100 nm lateral resolution, in two or three dimensions, with improved optical sectioning capabilities and aberration correction.
Gröbner Bases and Generation of Difference Schemes for Partial Differential Equations
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.; Blinkov, Yuri A.; Mozzhilkin, Vladimir V.
2006-05-01
In this paper we present an algorithmic approach to the generation of fully conservative difference schemes for linear partial differential equations. The approach is based on enlargement of the equations in their integral conservation law form by extra integral relations between unknown functions and their derivatives, and on discretization of the obtained system. The structure of the discrete system depends on numerical approximation methods for the integrals occurring in the enlarged system. As a result of the discretization, a system of linear polynomial difference equations is derived for the unknown functions and their partial derivatives. A difference scheme is constructed by elimination of all the partial derivatives. The elimination can be achieved by selecting a proper elimination ranking and by computing a Gröbner basis of the linear difference ideal generated by the polynomials in the discrete system. For these purposes we use the difference form of Janet-like Gröbner bases and their implementation in Maple. As illustration of the described methods and algorithms, we construct a number of difference schemes for Burgers and Falkowich-Karman equations and discuss their numerical properties.
Process Modeling of Ti-6Al-4V Linear Friction Welding (LFW)
2012-10-01
…metallurgy of Ti-6Al-4V to predict microstructure and mechanical properties within the LFW joints (as a function of the LFW process parameters). … The physical metallurgy aspects of Ti-6Al-4V are reviewed in Section 2. The LFW behavior of the same alloy is discussed in Section 3. The fully coupled… 2. Physical Metallurgy of Ti-6Al-4V: Before one can expect to successfully complete the task of understanding the effect of FSW process parameters…
Analysis of Partitioned Methods for the Biot System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bukac, Martina; Layton, William; Moraiti, Marina
2015-02-18
In this work, we present a comprehensive study of several partitioned methods for the coupling of flow and mechanics. We derive energy estimates for each method for the fully discrete problem. We write the obtained stability conditions in terms of a key control parameter defined as the ratio of the coupling strength and the speed of propagation. Depending on the parameters in the problem, we indicate the choice of partitioned method that allows the largest time step. (C) 2015 Wiley Periodicals, Inc.
Understanding and quantifying the uncertainty of model parameters and predictions has gained more interest in recent years with the increased use of computational models in chemical risk assessment. Fully characterizing the uncertainty in risk metrics derived from linked quantita...
Structure and properties of microporous titanosilicate determined by first-principles calculations
NASA Astrophysics Data System (ADS)
Ching, W. Y.; Xu, Yong-Nian; Gu, Zong-Quan
1996-12-01
The structure of ETS-10, a member of the synthetic microporous titanosilicates, was recently determined by an ingenious combination of experimental and simulational techniques. However, the locations of the alkali atoms in the framework remain elusive and its electronic structure is totally unknown. Based on first-principles local density calculations, the possible locations of the alkali atoms are identified and the electronic structure and bonding are fully elucidated. ETS-10 is a semiconductor with a direct band gap of 2.33 eV. The Na atoms are likely to be located inside the seven-member ring pore adjacent to the one-dimensional -Ti-O-Ti-O- chain.
Online fully automated three-dimensional surface reconstruction of unknown objects
NASA Astrophysics Data System (ADS)
Khalfaoui, Souhaiel; Aigueperse, Antoine; Fougerolle, Yohan; Seulin, Ralph; Fofi, David
2015-04-01
This paper presents a novel scheme for automatic and intelligent 3D digitization using robotic cells. The advantage of our procedure is that it is generic, since it is not tied to a specific scanning technology. Moreover, it does not depend on the methods used to perform the tasks associated with each elementary process. The comparison of results between manual and automatic scanning of complex objects shows that our digitization strategy is very efficient and faster than trained experts. The 3D models of the different objects are obtained with a strongly reduced number of acquisitions while moving the ranging device efficiently.
The Role of Th17 Cells in the Pathogenesis of Behcet's Disease.
Nanke, Yuki; Yago, Toru; Kotake, Shigeru
2017-07-21
Behcet's disease (BD) is a polysymptomatic and recurrent systemic vasculitis with a chronic course and unknown cause. The pathogenesis of BD has not been fully elucidated; however, BD has been considered to be a typical Th1-mediated inflammatory disease, characterized by elevated levels of Th1 cytokines such as IFN-γ, IL-2, and TNF-α. Recently, some studies reported that Th17-associated cytokines were increased in BD; thus, Th17 cells and the IL17/IL23 pathway may play important roles in the pathogenesis of BD. In this chapter, we focus on the pathogenic role of Th17 cells in BD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, J.E.
Many robotic operations, e.g., mapping, scanning, feature following, etc., require accurate surface following of arbitrary targets. This paper presents a versatile surface following and mapping system designed to promote hardware, software and application independence, modular development, and upward expandability. These goals are met by: a full, a priori specification of the hardware and software interfaces; a modular system architecture; and a hierarchical surface-data analysis method, permitting application-specific tuning at each conceptual level of topological abstraction. This surface following system was fully designed independently of any specific robotic host, then successfully integrated with and demonstrated on a completely a priori unknown, real-time robotic system. 7 refs.
Measurement of G
NASA Astrophysics Data System (ADS)
Gayou, Olivier
2001-10-01
The measurement of the elastic form factors is a key ingredient to any complete understanding of the internal structure of the nucleons, and ultimately of the strong force. Precise data are essential to impose stringent tests on any QCD-based theory. The electromagnetic interaction provides a unique tool to investigate these form factors. In elastic electron scattering off a proton, the electron interacts with the nucleon exchanging a virtual photon. The electron-photon interaction is fully understood from QED, hence making the hadron vertex the only unknown of the reaction...
Manifold traversing as a model for learning control of autonomous robots
NASA Technical Reports Server (NTRS)
Szakaly, Zoltan F.; Schenker, Paul S.
1992-01-01
This paper describes a recipe for the construction of control systems that support complex machines such as multi-limbed/multi-fingered robots. The robot has to execute a task under varying environmental conditions and it has to react reasonably when previously unknown conditions are encountered. Its behavior should be learned and/or trained as opposed to being programmed. The paper describes one possible method for organizing the data that the robot has learned by various means. This framework can accept useful operator input even if it does not fully specify what to do, and can combine knowledge from autonomous, operator assisted and programmed experiences.
A review on classification methods for solving fully fuzzy linear systems
NASA Astrophysics Data System (ADS)
Daud, Wan Suhana Wan; Ahmad, Nazihah; Aziz, Khairu Azlan Abd
2015-12-01
A Fully Fuzzy Linear System (FFLS) arises when there are fuzzy numbers on both sides of a linear system. Such systems are quite significant today, since many linear systems involve uncertain parameters, especially in mathematics, engineering and finance. Many researchers and practitioners have used the FFLS to model their problems and have applied various methods to solve it. In this paper, we present the outcome of a comprehensive review of the various methods used for solving the FFLS. We classify our findings based on the type of parameters used in the FFLS, either restricted or unrestricted. We also discuss some of the methods by illustrating numerical examples and identify the differences between them. Ultimately, we summarize all findings in a table. We hope this study will encourage researchers to appreciate these methods and make it easier for them to choose the right method, or to propose new methods, for solving the FFLS.
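For the restricted case the review mentions, one classical strategy can be sketched in a few lines: when every triangular fuzzy number in the coefficient matrix and the right-hand side is positive, fuzzy multiplication simplifies and the FFLS decomposes into three crisp linear systems, one each for the lower, modal and upper components. The plain-Python sketch below illustrates the idea on a 2x2 example; the function names are ours and the decomposition assumes positivity throughout.

```python
def solve2x2(A, b):
    """Solve a crisp 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def solve_ffls(A_low, A_mid, A_up, b_low, b_mid, b_up):
    """Decomposition sketch for a restricted (all-positive) FFLS:
    the fuzzy system splits into three crisp systems, one per component
    of the triangular fuzzy numbers (low, mid, up)."""
    return (solve2x2(A_low, b_low),
            solve2x2(A_mid, b_mid),
            solve2x2(A_up, b_up))

# Example: A~ x~ = b~ with triangular fuzzy entries (low, mid, up).
A_low = [[4, 1], [1, 3]]; A_mid = [[5, 2], [2, 4]]; A_up = [[6, 3], [3, 5]]
b_low = [5, 4];           b_mid = [9, 10];          b_up = [21, 21]
x_low, x_mid, x_up = solve_ffls(A_low, A_mid, A_up, b_low, b_mid, b_up)
# x_mid == [1.0, 2.0]
```

A valid triangular fuzzy solution additionally requires x_low <= x_mid <= x_up componentwise, which holds here; unrestricted systems need the more elaborate methods surveyed in the paper.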
NASA Technical Reports Server (NTRS)
Jones, David J.; Kurath, Peter
1988-01-01
Fully reversed uniaxial strain-controlled fatigue tests were performed on smooth cylindrical specimens made of 304 stainless steel. Fatigue life data and cracking observations for the uniaxial tests were compared with life data and cracking behavior observed in fully reversed torsional tests. It was determined that the product of the maximum principal strain amplitude and the maximum principal stress provided the best correlation of fatigue lives for these two loading conditions. Implementation of this parameter is in agreement with the observed physical damage, and it accounts for the variation of stress-strain response, which is unique to specific loading conditions. Biaxial fatigue tests were conducted on tubular specimens employing both in-phase and out-of-phase tension-torsion cyclic strain paths. Cracking observations indicated that the physical damage which occurred in the biaxial tests was similar to the damage observed in the uniaxial and torsional tests. The Smith, Watson, and Topper parameter was then extended to predict the fatigue lives resulting from the more complex loading conditions.
Seven-panel solar wing deployment and on-orbit maneuvering analyses
NASA Astrophysics Data System (ADS)
Hwang, Earl
2005-05-01
BSS developed a new-generation high-power (~20 kW) solar array to meet customer demands. The high-power solar array had north and south solar wings of identical design. Each solar wing consists of three conventional main solar panels and a new four-side-panel swing-out design. The fully deployed solar array surface area is 966 ft². Defining the solar array's optimum design parameters and deployment scheme for the successful deployment and on-orbit maneuvering of such a huge solar array was quite a challenging task. Hence, a deployable seven-flex-panel solar wing nonlinear math model and a fully deployed solar array/bus-payload math model were developed with the Dynamic Analysis and Design System (DADS) program codes, utilizing inherited and empirical data. By performing extensive parametric analyses with the math models, the optimum design parameters and the on-orbit maneuvering/deployment schemes were determined to meet all the design requirements and to achieve successful solar wing deployment on orbit.
Groundwater flow to a horizontal or slanted well in an unconfined aquifer
NASA Astrophysics Data System (ADS)
Zhan, Hongbin; Zlotnik, Vitaly A.
2002-07-01
New semianalytical solutions for evaluating the drawdown near horizontal and slanted wells with finite-length screens in unconfined aquifers are presented. These fully three-dimensional solutions consider instantaneous drainage or delayed yield and aquifer anisotropy. As a basis, the solution for the drawdown created by a point source in a uniform anisotropic unconfined aquifer is derived in the Laplace domain. Using superposition, the point-source solution is extended to the cases of horizontal and slanted wells. The previous solutions for vertical wells can be described as a special case of the new solutions. Numerical Laplace inversion allows effective evaluation of the drawdown in real time. Examples illustrate the effects of well geometry and the aquifer parameters on drawdown. Results can be used to generate type curves from observations in piezometers and partially or fully penetrating observation wells. The proposed solutions and software are useful for parameter identification, the design of remediation systems, drainage, and mine dewatering.
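The abstract does not state which numerical Laplace inversion algorithm is used; the Stehfest algorithm is a common choice for well-hydraulics problems of this kind and serves here only as an illustrative sketch (pure Python, validated against a transform pair with a known inverse).

```python
import math

def stehfest_weights(N):
    """Stehfest coefficients V_i for an even number of terms N."""
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * math.factorial(2 * k)
                  / (math.factorial(N // 2 - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        V.append((-1) ** (N // 2 + i) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) at time t > 0."""
    ln2 = math.log(2.0)
    V = stehfest_weights(N)
    return ln2 / t * sum(V[i - 1] * F(i * ln2 / t) for i in range(1, N + 1))

# Sanity check on a known pair: L{exp(-t)} = 1/(s+1)
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
```

In a drawdown application, F would be the Laplace-domain point-source (or screen-integrated) solution, and stehfest_invert would be called at each observation time to build type curves.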
Automation of surface observations program
NASA Technical Reports Server (NTRS)
Short, Steve E.
1988-01-01
At present, surface weather observing methods are still largely manual and labor intensive. Through the nationwide implementation of Automated Surface Observing Systems (ASOS), this situation can be improved. Two ASOS capability levels are planned. The first is a basic-level system which will automatically observe the weather parameters essential for aviation operations and will operate either with or without supplemental contributions by an observer. The second is a more fully automated, stand-alone system which will observe and report the full range of weather parameters and will operate primarily in the unattended mode. Approximately 250 systems are planned by the end of the decade. When deployed, these systems will generate the standard hourly and special long-line transmitted weather observations, as well as provide continuous weather information direct to airport users. Specific ASOS configurations will vary depending upon whether the operation is unattended, minimally attended, or fully attended. The major functions of ASOS are data collection, data processing, product distribution, and system control. The program phases of development, demonstration, production system acquisition, and operational implementation are described.
A Regev-Type Fully Homomorphic Encryption Scheme Using Modulus Switching
Chen, Zhigang; Wang, Jian; Song, Xinxia
2014-01-01
A critical challenge in a fully homomorphic encryption (FHE) scheme is to manage noise. The modulus switching technique is currently the most efficient noise management technique. When using the modulus switching technique to design and implement an FHE scheme, choosing concrete parameters is an important step, but to the best of our knowledge this step has drawn very little attention in the existing FHE literature. The contributions of this paper are twofold. On one hand, we propose a function giving a lower bound on the dimension used in the switching technique, depending on the specific LWE security level. On the other hand, as a case study, we modify the Brakerski FHE scheme (Crypto 2012) by using the modulus switching technique. We recommend concrete parameter values for our proposed scheme and provide a security analysis. Our results show that the modified FHE scheme is more efficient than the original Brakerski scheme at the same security level. PMID:25093212
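As a hedged illustration of what modulus switching does, the toy below scales a symmetric-key LWE ciphertext from a large modulus q down to a smaller p by rounding, preserving the encrypted bit while shrinking the ciphertext. All parameters and helper names are invented for this sketch, the dimensions are far below any real security level, and this is not the Brakerski scheme itself.

```python
import random

random.seed(1)
n, q, p = 8, 2**20, 2**12                       # toy dimensions/moduli only
s = [random.randint(0, 1) for _ in range(n)]    # binary secret key

def encrypt(m):
    """Toy symmetric LWE encryption of a bit m with small noise e."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-4, 4)
    b = (sum(ai * si for ai, si in zip(a, s)) + e + m * (q // 2)) % q
    return a, b

def mod_switch(ct, q, p):
    """Scale a ciphertext from modulus q down to p by rounding each entry."""
    a, b = ct
    scale = lambda x: round(x * p / q) % p
    return [scale(ai) for ai in a], scale(b)

def decrypt(ct, modulus):
    a, b = ct
    phase = (b - sum(ai * si for ai, si in zip(a, s))) % modulus
    return 1 if modulus // 4 < phase < 3 * modulus // 4 else 0

ct = encrypt(1)
ct_small = mod_switch(ct, q, p)   # decrypt(ct_small, p) recovers the bit
```

The rounding step adds only a small additive term (bounded by roughly half the Hamming weight of the key), which is why switching to a smaller modulus scales the accumulated noise down rather than destroying the plaintext.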
NASA Astrophysics Data System (ADS)
Eggert, F.; Camus, P. P.; Schleifer, M.; Reinauer, F.
2018-01-01
The energy-dispersive X-ray spectrometer (EDS or EDX) is a commonly used device for characterising the composition of investigated material in scanning and transmission electron microscopes (SEM and TEM). One major benefit compared to wavelength-dispersive X-ray spectrometers (WDS) is that EDS systems collect the entire spectrum simultaneously. The spectrum therefore contains not only all emitted characteristic X-ray lines but also the complete bremsstrahlung distribution. It is possible to get information about the specimen even from this radiation, which is usually perceived as a disturbing background, by comparing theoretical model knowledge about bremsstrahlung excitation and absorption in the specimen with the actually measured spectrum. The core aim of this investigation is to present a method for better bremsstrahlung fitting in cases of unknown geometry, by variation of the geometry parameters, and to utilise this knowledge for the evaluation of characteristic radiation as well. A method is described that allows the parameterisation of the true X-ray absorption conditions during spectrum acquisition. An 'effective tilt' angle parameter is determined by evaluating the bremsstrahlung shape of the measured SEM spectra. It is useful for bremsstrahlung background approximation, with exact calculations of the absorption edges below the characteristic peaks, as required for P/B-ZAF model based quantification methods. It can even be used as a variable input parameter for ZAF based quantification models. The analytical results are then much more reliable for the different absorption effects of irregular specimen surfaces because the unknown absorption dependency is taken into account. Finally, the method is also applied to the evaluation of TEM spectra.
In this case, the physical parameter being optimised is the sample thickness (mass thickness), which influences the emitted and measured spectrum through differing absorption in TEM measurements. The effects appear in the very low energy part of the spectrum and are much more visible with the most recent windowless TEM detectors. The thickness of the sample can thus be determined from the shape of the measured bremsstrahlung spectrum.
Analytical Incorporation of Velocity Parameters into Ice Sheet Elevation Change Rate Computations
NASA Astrophysics Data System (ADS)
Nagarajan, S.; Ahn, Y.; Teegavarapu, R. S. V.
2014-12-01
NASA, ESA and various other agencies have been collecting laser, optical and RADAR altimetry data through various missions to study elevation changes of the cryosphere. The laser altimetry collected by various airborne and spaceborne missions has provided multi-temporal coverage of Greenland and Antarctica since 1993. Though these missions have increased the data coverage, considering the dynamic nature of the ice surface, it is still sparse both spatially and temporally for accurate elevation change detection studies. The temporal and spatial gaps are usually filled by interpolation techniques. This presentation will demonstrate a method to improve the temporal interpolation. Considering their accuracy, repeat coverage and spatial distribution, laser scanning data have been widely used to compute elevation change rates of the Greenland and Antarctic ice sheets. A major problem with these approaches is that ice sheet velocity dynamics are not considered in the change rate computations. Though the correlation between velocity and elevation change rate was noticed by Hurkmans et al. (2012), the corrections for velocity changes were applied after computing elevation change rates by assuming a linear or higher-order polynomial relationship. This research will discuss the possibilities of parameterizing ice sheet dynamics as unknowns (dX and dY) in the adjustment mathematical model that computes elevation change (dZ) rates. It is a simultaneous computation of changes in all three directions of the ice surface. Also, the laser points between two time epochs in a crossover area have different distributions and counts. Therefore, a registration method that does not require point-to-point correspondence is required to recover the unknown elevation and velocity parameters.
This research will explore the possibilities of registering multi-temporal datasets using a volume minimization algorithm, which determines the unknown dX, dY and dZ that minimize the volume between two or more time-epoch point clouds. In order to make use of other existing data as well as to constrain the adjustment, InSAR velocities will be used as initial values for the parameters dX and dY. The presentation will discuss the results of the analytical incorporation of parameters and the volume-based registration method for a test site in Greenland.
Fritscher, Karl; Schuler, Benedikt; Link, Thomas; Eckstein, Felix; Suhm, Norbert; Hänni, Markus; Hengg, Clemens; Schubert, Rainer
2008-01-01
Fractures of the proximal femur are one of the principal causes of mortality among elderly persons. Traditional methods for determining femoral fracture risk rely on measuring bone mineral density (BMD). However, BMD alone is not sufficient to predict bone failure load for an individual patient, and additional parameters have to be determined for this purpose. In this work, an approach that uses statistical models of appearance to identify relevant regions and parameters for the prediction of biomechanical properties of the proximal femur is presented. By using Support Vector Regression, the proposed model-based approach is capable of predicting two different biomechanical parameters accurately and fully automatically in two different testing scenarios.
Surface wave chemical detector using optical radiation
Thundat, Thomas G.; Warmack, Robert J.
2007-07-17
A surface wave chemical detector comprising at least one surface wave substrate, each of said substrates having a surface wave and at least one measurable surface wave parameter; means for exposing said surface wave substrate to an unknown sample of at least one chemical to be analyzed, said substrate adsorbing said at least one chemical to be sensed if present in said sample; a source of radiation for radiating said surface wave substrate with different wavelengths of said radiation, said surface wave parameter being changed by said adsorbing; and means for recording signals representative of said surface wave parameter of each of said surface wave substrates responsive to said radiation of said different wavelengths, measurable changes of said parameter due to adsorbing said chemical defining a unique signature of a detected chemical.
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
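To make the underdetermined setting concrete: with more unknown parameters than sensors, H x = y has infinitely many solutions, which is why a reduced tuning vector is needed before a Kalman filter becomes well-posed. The toy below is our own illustration, not the paper's tuner-selection routine: it computes the minimum-norm solution x = H^T (H H^T)^{-1} y for 2 sensors and 3 unknowns, showing that only a projection of the true parameters is recoverable from the measurements alone.

```python
def solve_min_norm(H, y):
    """Minimum-norm solution of the underdetermined system H x = y
    (here 2 equations, 3 unknowns) via x = H^T (H H^T)^{-1} y."""
    # G = H H^T  (2x2 Gram matrix)
    G = [[sum(H[i][k] * H[j][k] for k in range(3)) for j in range(2)]
         for i in range(2)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    # z = G^{-1} y by Cramer's rule
    z = [(G[1][1] * y[0] - G[0][1] * y[1]) / det,
         (G[0][0] * y[1] - G[1][0] * y[0]) / det]
    # x = H^T z
    return [sum(H[i][k] * z[i] for i in range(2)) for k in range(3)]

H = [[1, 0, 1], [0, 1, 1]]   # 2 sensors, 3 unknown health parameters
y = [1.5, 0.75]              # measurements (generated from x_true = [0.5, -0.25, 1.0])
x_hat = solve_min_norm(H, y)  # [0.75, 0.0, 0.75]: fits y exactly, differs from x_true
```

The estimate reproduces the measurements exactly yet differs from the true parameters, which is precisely the estimation-error trade-off the tuner-selection methodology in the abstract is designed to minimize.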
González-Díaz, Humberto; Munteanu, Cristian R; Postelnicu, Lucian; Prado-Prado, Francisco; Gestal, Marcos; Pazos, Alejandro
2012-03-01
Lipid-Binding Proteins (LIBPs) or Fatty Acid-Binding Proteins (FABPs) play an important role in many diseases such as different types of cancer, kidney injury, atherosclerosis, diabetes, intestinal ischemia and parasitic infections. Thus, computational methods that can predict LIBPs based on 3D structure parameters became a goal of major importance for drug-target discovery, vaccine design and biomarker selection. In addition, the Protein Data Bank (PDB) contains 3000+ protein 3D structures with unknown function. This list, as well as new experimental outcomes in proteomics research, is a very interesting source from which to discover relevant proteins, including LIBPs. However, to the best of our knowledge, there are no general models to predict new LIBPs based on 3D structures. We developed new Quantitative Structure-Activity Relationship (QSAR) models based on 3D electrostatic parameters of 1801 different proteins, including 801 LIBPs. We calculated these electrostatic parameters with the MARCH-INSIDE software; they correspond to the entire protein or to specific protein regions named core, inner, middle, and surface. We used these parameters as inputs to develop a simple Linear Discriminant Analysis (LDA) classifier to discriminate the 3D structure of LIBPs from other proteins. We implemented this predictor in the web server named LIBP-Pred, freely available at , along with other important web servers of the Bio-AIMS portal. Users can carry out an automatic retrieval of protein structures from the PDB or upload custom protein structural models created with the LOMETS server from their disk. We demonstrated the PDB mining option by performing a predictive study of 2000+ proteins with unknown function. Interesting results regarding the discovery of new cancer biomarkers in humans and drug targets in parasites are discussed.
PV systems photoelectric parameters determining for field conditions and real operation conditions
NASA Astrophysics Data System (ADS)
Shepovalova, Olga V.
2018-05-01
In this work, research experience and reference documentation related to determining the photoelectric parameters of PV systems (PV array output parameters) have been generalized. A basic method is presented that makes it possible to determine photoelectric parameters with state-of-the-art reliability and repeatability. This method provides an effective tool for comparing PV systems and for evaluating the PV system parameters the end user will obtain in real operation against those stipulated in the reference documentation. The method takes into consideration all parameters that may affect photoelectric performance and that are supported by sufficiently valid procedures for testing their values. Test conditions, requirements for the equipment subject to tests, and test preparations have been established, and the test procedure for a fully equipped PV system in field tests and in real operation conditions has been described.
NASA Astrophysics Data System (ADS)
Lubey, D.; Ko, H.; Scheeres, D.
The classical orbit determination (OD) method of dealing with unknown maneuvers is to restart the OD process with post-maneuver observations. However, it is also possible to continue the OD process through such unknown maneuvers by representing them with an appropriate event representation. It has been shown in previous work (Ko & Scheeres, JGCD 2014) that any maneuver performed by a satellite transitioning between two arbitrary orbital states can be represented as an equivalent maneuver connecting those two states using Thrust-Fourier-Coefficients (TFCs). Event representation using TFCs rigorously provides a unique control law that can generate the desired secular behavior for a given unknown maneuver. This paper presents applications of this representation approach to the orbit prediction and maneuver detection problems across unknown maneuvers. The TFCs are appended to a sequential filter as an adjoint state to compensate for unknown perturbing accelerations, and the modified filter estimates the satellite state and thrust coefficients by processing OD across the time of an unknown maneuver. This modified sequential filter with TFCs is capable of fitting tracking data and maintaining an OD solution in the presence of unknown maneuvers. The modified filter is also effective in detecting a sudden change in TFC values, which indicates a maneuver. In order to illustrate that the event representation approach with TFCs is robust and sufficiently general to be easily adjustable, different types of measurement data are processed with the filter in a realistic LEO setting. Further, cases with mis-modeling of non-gravitational forces are included in our study to verify the versatility and efficiency of the presented algorithm. Simulation results show that the modified sequential filter with TFCs can detect and estimate the orbit and thrust parameters in the presence of unknown maneuvers with or without measurement data during maneuvers.
With no measurement data during maneuvers, the modified filter with TFCs uses an existing pre-maneuver orbit solution to compute a post-maneuver orbit solution by forcing the TFCs to compensate for the unknown maneuver. With observation data available during maneuvers, the maneuver start and stop times are determined.
Cadena, Edwin
2016-01-01
Abundant pan-trionychid (soft-shell) turtle specimens have been found in Eocene sequences of central Europe, particularly at two localities in Germany, the Messel Pit (a UNESCO World Natural Heritage Site) and Geiseltal, traditionally attributed to Trionyx messelianus or Rafetoides austriacus. Over the last two decades, new specimens of this taxon from these two localities have been discovered and fully prepared. However, they have remained unstudied, and their phylogenetic position within Pan-Trionychidae has been unknown. Five new specimens of Palaeoamyda messeliana nov. comb. from the Messel Pit and Geiseltal localities are fully described here. A revised diagnosis for the species is also presented, together with its inclusion in a phylogenetic analysis of Pan-Trionychidae, which shows that this species is sister to the extant Amyda cartilaginea, one of the most abundant pan-trionychid (soft-shell) turtles of Asia; both are members of the clade Chitrini. The specimens described here are among the best and most complete fossil pan-trionychid skeletons known so far.
NASA Astrophysics Data System (ADS)
Wang, Yihan; Lu, Tong; Wan, Wenbo; Liu, Lingling; Zhang, Songhe; Li, Jiao; Zhao, Huijuan; Gao, Feng
2018-02-01
To fully realize the potential of photoacoustic tomography (PAT) in preclinical and clinical applications, rapid measurements and robust reconstructions are needed. Sparse-view measurements have been adopted effectively to accelerate the data acquisition. However, since reconstruction from sparse-view sampling data is challenging, both an effective measurement scheme and an appropriate reconstruction must be taken into account. In this study, we present an iterative sparse-view PAT reconstruction scheme in which a virtual parallel-projection concept matched to the proposed measurement condition is introduced to help achieve the "compressive sensing" step of the reconstruction, while spatially adaptive filtering, which fully exploits the a priori information of mutually similar blocks existing in natural images, is introduced to effectively recover the partially unknown coefficients in the transformed domain. The sparse-view PAT images can therefore be reconstructed with higher quality than the results obtained by the universal back-projection (UBP) algorithm in the same sparse-view cases. The proposed approach has been validated by simulation experiments and exhibits desirable performance in image fidelity even from a small number of measuring positions.
Testing Gene-Gene Interactions in the Case-Parents Design
Yu, Zhaoxia
2011-01-01
The case-parents design has been widely used to detect genetic associations as it can prevent the spurious association that could occur in population-based designs. When examining the effect of an individual genetic locus on a disease, logistic regressions developed by conditioning on parental genotypes provide complete protection from spurious association caused by population stratification. However, when testing gene-gene interactions, it is unknown whether conditional logistic regressions are still robust. Here we evaluate the robustness and efficiency of several gene-gene interaction tests that are derived from conditional logistic regressions. We found that in the presence of SNP genotype correlation due to population stratification or linkage disequilibrium, tests with incorrectly specified main-genetic-effect models can lead to inflated type I error rates. We also found that a test with fully flexible main genetic effects always maintains the correct test size, and that its robustness is achieved with a negligible sacrifice of power. When testing gene-gene interactions is the focus, the test allowing fully flexible main effects is recommended. PMID:21778736
Saleem, Muhammad; Sharif, Kashif; Fahmi, Aliya
2018-04-27
Applications of the Pareto distribution are common in reliability, survival and financial studies. In this paper, a Pareto mixture distribution is considered to model a heterogeneous population comprising two subgroups. Each subgroup is characterized by the same functional form with distinct unknown shape and scale parameters. Bayes estimators are derived under flat and conjugate priors using the squared error loss function, and standard errors are also derived for the Bayes estimators. An interesting feature of this study is the preparation of the components of the Fisher information matrix.
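The two-component mixture in the abstract is more involved, but the conjugate-prior machinery for a single Pareto subgroup with known scale is standard: a Gamma prior on the shape parameter is conjugate, and the Bayes estimator under squared error loss is the posterior mean. A minimal sketch of that reduced case (the hyperparameters `alpha` and `beta` are illustrative, not taken from the paper):

```python
import numpy as np

def pareto_shape_posterior_mean(x, scale, alpha=1.0, beta=1.0):
    """Bayes estimator (posterior mean = squared-error-loss estimator) of
    the Pareto shape parameter with known scale, under a conjugate
    Gamma(alpha, beta) prior.  The posterior is Gamma(alpha + n, beta + T)
    with T = sum(log(x_i / scale))."""
    x = np.asarray(x, dtype=float)
    n = x.size
    T = np.sum(np.log(x / scale))
    return (alpha + n) / (beta + T)
```

For simulated Pareto data with shape 3 and scale 1, the estimator concentrates near 3 as the sample grows, as expected from posterior consistency.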
Borysov, Stanislav S.; Forchheimer, Daniel; Haviland, David B.
2014-10-29
Here we present a theoretical framework for the dynamic calibration of the higher eigenmode parameters (stiffness and optical lever inverse responsivity) of a cantilever. The method is based on the tip–surface force reconstruction technique and does not require any prior knowledge of the eigenmode shape or the particular form of the tip–surface interaction. The proposed calibration method requires only a single-point force measurement using a multimodal drive, and its accuracy is independent of the unknown physical amplitude of a higher eigenmode.
1981-12-01
preventing the generation of negative location estimators. Because of the invariant property of the EDF statistics, this transformation will...likelihood. If the parameter estimation method developed by Harter and Moore is used, care must be taken to prevent the location estimators from being...vs A2 Critical Values, Level .01, n=30... APPENDIX E Computer Programs... Program to Calculate the Cramer-von Mises Critical Values
NASA Technical Reports Server (NTRS)
Bakhshiyan, B. T.; Nazirov, R. R.; Elyasberg, P. E.
1980-01-01
The problem of selecting the optimal filtering algorithm and the optimal composition of the measurements is examined under the assumption that the precise values of the mathematical expectation and the covariance matrix of the errors are unknown. It is demonstrated that the optimal filtering algorithm may be used to refine certain parameters (for example, parameters of the gravitational field) after preliminary determination of the orbital elements by a simpler processing method (for example, least squares).
Power maximization of a point absorber wave energy converter using improved model predictive control
NASA Astrophysics Data System (ADS)
Milani, Farideh; Moghaddam, Reihaneh Kardehi
2017-08-01
This paper considers controlling and maximizing the absorbed power of wave energy converters under irregular waves. A model predictive controller that respects the physical constraints of the system is applied, and the irregular-wave behavior is predicted by a Kalman filter. Owing to the strong influence of the controller parameters on the absorbed power, these parameters are optimized with the imperialist competitive algorithm. The results illustrate the method's efficiency in maximizing the extracted power in the presence of an unknown excitation force that must be predicted by the Kalman filter.
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure, wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates of the unknown exponential model parameters is included as an integral part of the technique. A correction matrix is derived and applied to the nominal estimates to produce an improved set of model parameters. The solution cycle is repeated until a predetermined criterion is satisfied.
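The procedure described, a log-linear fit for the nominal estimates followed by iterated least-squares corrections from a Taylor-series linearization, is essentially Gauss-Newton iteration. A minimal sketch for a single-exponential model y = a*exp(b*t) (the model form and stopping tolerance are illustrative, not taken from the report):

```python
import numpy as np

def fit_exponential(t, y, tol=1e-10, max_iter=50):
    """Fit y ~ a*exp(b*t): a log-linear fit supplies the nominal
    estimates, then Gauss-Newton corrections (Taylor-series
    linearization + linear least squares) refine them."""
    # Initial nominal estimates from a linear fit of log(y) vs t
    b, log_a = np.polyfit(t, np.log(y), 1)
    a = np.exp(log_a)
    for _ in range(max_iter):
        f = a * np.exp(b * t)
        # Jacobian of the model with respect to (a, b)
        J = np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])
        # Correction vector from linear least squares on the residuals
        delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)
        a += delta[0]
        b += delta[1]
        if np.linalg.norm(delta) < tol:  # predetermined stopping criterion
            break
    return a, b
```

On clean decay data the log-linear initialization is already exact, so the Gauss-Newton loop converges immediately; with noisy data the iterated corrections do the refining.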
Reason, emotion and decision-making: risk and reward computation with feeling.
Quartz, Steven R
2009-05-01
Many models of judgment and decision-making posit distinct cognitive and emotional contributions to decision-making under uncertainty. Cognitive processes typically involve exact computations according to a cost-benefit calculus, whereas emotional processes typically involve approximate, heuristic processes that deliver rapid evaluations without mental effort. However, it remains largely unknown which specific parameters of uncertain decisions the brain encodes, the extent to which these parameters correspond to various decision-making frameworks, and how they map onto emotional and rational processes. Here, I review research suggesting that emotional processes encode in a precise quantitative manner the basic parameters of financial decision theory, indicating a reorientation of emotional and cognitive contributions to risky choice.
Sörberg Wallin, Alma; Falkstedt, Daniel; Allebeck, Peter; Melin, Bo; Janszky, Imre; Hemmingsson, Tomas
2015-04-01
Lower intelligence early in life is associated with increased risks for coronary heart disease (CHD) and mortality. Intelligence level might affect compliance with treatment, but its prognostic importance in patients with CHD is unknown. A cohort of 1923 Swedish men with a measure of intelligence from mandatory military conscription in 1969-1970 at age 18-20, who were diagnosed with CHD in 1991-2007, was followed to the end of 2008. Primary outcome: recurrent CHD event. Secondary outcomes: case fatality from the first event, cardiovascular and all-cause mortality. National registers provided information on CHD events, comorbidity, mortality and socioeconomic factors. The fully adjusted HRs for recurrent CHD for medium and low intelligence, compared with high intelligence, were 0.98 (95% CIs 0.83 to 1.16) and 1.09 (0.89 to 1.34), respectively. The risks of cardiovascular and all-cause mortality were increased with lower intelligence, but were attenuated in the fully adjusted models (fully adjusted HRs for cardiovascular mortality 1.92 (0.94 to 3.94) and 1.98 (0.89 to 4.37), respectively; for all-cause mortality 1.63 (1.00 to 2.65) and 1.62 (0.94 to 2.78), respectively). There was no increased risk of case fatality at the first event (fully adjusted ORs 1.06 (0.73 to 1.55) and 0.97 (0.62 to 1.50), respectively). Although we found lower intelligence to be associated with increased mortality in middle-aged men with CHD, there was no evidence for a possible effect on recurrence of CHD. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Villa, C A; Finlayson, S; Limpus, C; Gaus, C
2015-04-15
Biomonitoring of blood is commonly used to identify and quantify occupational or environmental exposure to chemical contaminants. Increasingly, this technique has been applied to wildlife contaminant monitoring, including for green turtles, allowing for the non-lethal evaluation of chemical exposure in their nearshore environment. The sources, composition, bioavailability and toxicity of metals in the marine environment are, however, often unknown and influenced by numerous biotic and abiotic factors. These factors can vary considerably across time and space making the selection of the most informative elements for biomonitoring challenging. This study aimed to validate an ICP-MS multi-element screening method for green turtle blood in order to identify and facilitate prioritisation of target metals for subsequent fully quantitative analysis. Multi-element screening provided semiquantitative results for 70 elements, 28 of which were also determined through fully quantitative analysis. Of the 28 comparable elements, 23 of the semiquantitative results had an accuracy between 67% and 112% relative to the fully quantified values. In lieu of any available turtle certified reference materials (CRMs), we evaluated the use of human blood CRMs as a matrix surrogate for quality control, and compared two commonly used sample preparation methods for matrix related effects. The results demonstrate that human blood provides an appropriate matrix for use as a quality control material in the fully quantitative analysis of metals in turtle blood. An example for the application of this screening method is provided by comparing screening results from blood of green turtles foraging in an urban and rural region in Queensland, Australia. Potential targets for future metal biomonitoring in these regions were identified by this approach. Copyright © 2014 Elsevier B.V. All rights reserved.
Salimi, Nima; Loh, Kar Hoe; Kaur Dhillon, Sarinder; Chong, Ving Ching
2016-01-01
Background. Fish species may be identified based on their unique otolith shape or contour. Several pattern recognition methods have been proposed to classify fish species through morphological features of the otolith contours. However, there has been no fully-automated species identification model with accuracy higher than 80%. The purpose of the current study is to develop a fully-automated model, based on the otolith contours, to identify fish species with high classification accuracy. Methods. Images of the right sagittal otoliths of 14 fish species from three families, namely Sciaenidae, Ariidae, and Engraulidae, were used to develop the proposed identification model. The short-time Fourier transform (STFT) was used, for the first time in the area of otolith shape analysis, to extract important features of the otolith contours. Discriminant analysis (DA), as a classification technique, was used to train and test the model based on the extracted features. Results. Performance of the model was demonstrated using species from the three families separately, as well as all species combined. Overall classification accuracy of the model was greater than 90% in all cases. In addition, the effects of the STFT variables on the performance of the identification model were explored in this study. Conclusions. The short-time Fourier transform could determine important features of the otolith outlines. The fully-automated model proposed in this study (STFT-DA) could predict the species of an unknown specimen with acceptable identification accuracy. The model codes can be accessed at http://mybiodiversityontologies.um.edu.my/Otolith/ and https://peerj.com/preprints/1517/. The current model has the flexibility to be extended to more species and families in future studies.
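The pipeline, a contour sampled as a 1-D signal, windowed Fourier magnitudes as features, and a supervised classifier, can be sketched schematically. This is not the paper's STFT-DA model: the window settings are arbitrary, and a nearest-centroid rule stands in for discriminant analysis purely to keep the sketch self-contained:

```python
import numpy as np

def stft_features(radius, win=64, hop=32, n_keep=6):
    """Short-time Fourier features of a closed contour sampled as
    radius vs. angle: magnitudes of the first few harmonics in each
    Hann-windowed segment, concatenated into one feature vector."""
    feats = []
    for start in range(0, len(radius) - win + 1, hop):
        seg = radius[start:start + win] * np.hanning(win)
        feats.extend(np.abs(np.fft.rfft(seg))[1:1 + n_keep])  # skip DC
    return np.array(feats)

def nearest_centroid(train_X, train_y, x):
    """Simplified stand-in for discriminant analysis: assign x to the
    class whose mean feature vector is closest."""
    classes = sorted(set(train_y))
    cents = [train_X[np.array(train_y) == c].mean(axis=0) for c in classes]
    return classes[int(np.argmin([np.linalg.norm(x - c) for c in cents]))]
```

On synthetic "species" whose contours differ in their dominant harmonic, the windowed spectra separate the classes cleanly, which is the intuition behind using STFT features on real otolith outlines.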
NASA Astrophysics Data System (ADS)
Essa, Mohammed Sh.; Chiad, Bahaa T.; Shafeeq, Omer Sh.
2017-09-01
Thin films of copper oxide (CuO) absorption layers have been deposited on glass substrates using a home-made Fully Computerized Spray Pyrolysis Deposition (FCSPD) system, at nozzle-to-substrate distances of 20,35 cm and with computerized spray modes (continuous spray and macro-controlled spray). The substrate temperature was kept at 450 °C, with a user-selectable temperature tolerance of ±5 °C. The molar concentration was fixed at 0.1 M and the 2D deposition platform speed at 4 mm/s. The system runs a program of more than 1000 instructions with a dedicated graphical user interface (GUI) that fully controls the deposition process and monitors and regulates the deposition temperature in real time every 200 ms. Changes in temperature were recorded during deposition, along with all deposition parameters. The films were characterized to evaluate the thermal distribution over the X-Y movable hot plate, the structure, and the optical energy gap. The thermal distribution was uniform over the 20 cm2 hot-plate area, and X-ray diffraction (XRD) measurements revealed that the films are polycrystalline and can be assigned to the monoclinic CuO structure. The optical band gap varies from 1.5 to 1.66 eV depending on the deposition parameters.
NASA Astrophysics Data System (ADS)
Lui, E. W.; Xu, W.; Pateras, A.; Qian, M.; Brandt, M.
2017-12-01
Recent progress has shown that Ti-6Al-4V fabricated by selective laser melting (SLM) can achieve a fully lamellar α + β microstructure using 60 µm layer thickness in the as-built state via in situ martensite decomposition by manipulating the processing parameters. The potential to broaden the processing window was explored in this study by increasing the layer thickness to the less commonly used 90 µm. Fully lamellar α + β microstructures were produced in the as-built state using inter-layer times in the range of 1-12 s. Microstructural features such as the α-lath thickness and morphology were sensitive to both build height and inter-layer time. The α-laths produced using the inter-layer time of 1 s were much coarser than those produced with the inter-layer time of 12 s. The fine fully lamellar α + β structure resulted in tensile ductility of 11% and yield strength of 980 MPa. The tensile properties can be further improved by minimizing the presence of process-induced defects.
A modified Leslie-Gower predator-prey interaction model and parameter identifiability
NASA Astrophysics Data System (ADS)
Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed
2018-01-01
In this work, bifurcation and a systematic approach to the estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is established by applying the fluctuation lemma. The system undergoes a Hopf bifurcation with respect to the intrinsic growth rate of predators (s) and the prey refuge (m), and the stability of the Hopf bifurcation is discussed by calculating the Lyapunov number. A sensitivity analysis of the model with respect to all variables is performed, which supports our theoretical study. To estimate the unknown parameters from data, an optimization procedure (a pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are compared with true, noise-free data, and the dynamics with the true parametric values are found to be similar to those with the estimated values. Numerical simulations are presented to substantiate the analytical findings.
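The pseudo-random search idea, sample candidate parameter sets at random, simulate the model, and keep the set minimizing the squared error against the observed trajectory, can be sketched on a much simpler stand-in model. The logistic prey equation, the forward-Euler integrator, and the search bounds below are all illustrative assumptions, not the paper's system:

```python
import numpy as np

def simulate_logistic(r, K, x0, dt, n):
    """Forward-Euler trajectory of the logistic model x' = r*x*(1 - x/K)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = x[i - 1] + dt * r * x[i - 1] * (1 - x[i - 1] / K)
    return x

def random_search(data, x0, dt, bounds, n_trials=4000, seed=0):
    """Pseudo-random search: sample (r, K) uniformly within bounds and
    keep the pair minimizing the sum of squared residuals to the data."""
    rng = np.random.default_rng(seed)
    best, best_sse = None, np.inf
    for _ in range(n_trials):
        r = rng.uniform(*bounds[0])
        K = rng.uniform(*bounds[1])
        sse = np.sum((simulate_logistic(r, K, x0, dt, len(data)) - data) ** 2)
        if sse < best_sse:
            best, best_sse = (r, K), sse
    return best
```

Run against a noise-free trajectory, the best-scoring sample lands close to the generating parameters, mirroring the paper's observation that the estimated and true parameter sets produce similar dynamics.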
Sun, Jing; Cao, Ling; Feng, Youlong; Tan, Li
2014-11-01
Compounds with similar structures often have similar pharmacological activities, so new derivatives of effective drugs are increasingly synthesized as illegal additives to evade statutory testing. This complicates enforcement against illegal addition; however, modified derivatives usually share similar product ions, which allows for precursor ion scanning. In this work, the precursor ion scanning mode of a triple quadrupole mass spectrometer was applied for the first time to screen illegally added drugs in complex matrices such as traditional Chinese patent medicines and health foods. Phosphodiesterase-5 inhibitors were used as experimental examples. Based on an analysis of the structures and mass spectral characteristics of these compounds, phosphodiesterase-5 inhibitors were classified, and their common product ions were identified by full product-ion scans of typical compounds. A high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method with precursor ion scanning was then established based on the optimization of the MS parameters. The effects of the mass parameters and the choice of fragment ions were also studied. The method was applied to actual samples and further refined. The results demonstrate that this method can meet the need for rapid screening of unknown derivatives of phosphodiesterase-5 inhibitors in complex matrices and prevent unknown derivatives from going undetected. The method shows advantages in sensitivity, specificity and efficiency, and is worth further investigation.
Chem Lab Simulation #3 and #4.
ERIC Educational Resources Information Center
Pipeline, 1983
1983-01-01
Two copy-protected chemistry simulations (for Apple II) are described. The first demonstrates Hess' law of heat reaction. The second illustrates how heat of vaporization can be used to determine an unknown liquid and shows how to find thermodynamic parameters in an equilibrium reaction. Both are self-instructing and use high-resolution graphics.…