Boundary conditions estimation on a road network using compressed sensing.
DOT National Transportation Integrated Search
2016-02-01
This report presents a new boundary condition estimation framework for transportation networks in which the state is modeled by a first-order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi equation, we pose th...
Improved first-order uncertainty method for water-quality modeling
Melching, C.S.; Anmangandla, S.
1992-01-01
Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output due to their simplicity. Each method has drawbacks: for Monte Carlo simulation, mainly computational time; for first-order analysis, mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, where the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation, using two orders of magnitude less computer time, regardless of the probability distributions assumed for the uncertain model parameters.
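The contrast drawn in the abstract above can be sketched in a few lines. The model Y = exp(-K·t) and all numbers below are illustrative stand-ins, not the Streeter-Phelps setup: the standard first-order method linearizes at the mean of the uncertain rate K, while the "advanced" method linearizes at the point whose output equals the level y0 of interest.

```python
import math
import random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma, t, y0 = 0.5, 0.1, 2.0, 0.45   # hypothetical rate K ~ N(mu, sigma)

def f(k):                                 # toy model output
    return math.exp(-k * t)

# Monte Carlo reference estimate of P(Y > y0).
random.seed(1)
n = 200_000
mc = sum(f(random.gauss(mu, sigma)) > y0 for _ in range(n)) / n

# Standard first-order: linearize at the mean; f is decreasing in K, so the
# negative derivative flips the inequality inside the normal CDF.
dfdk = -t * f(mu)
std_fo = phi((y0 - f(mu)) / (dfdk * sigma))

# Advanced first-order: linearize at k0 where the output equals y0 exactly.
k0 = -math.log(y0) / t
adv_fo = phi((k0 - mu) / sigma)

print(f"MC={mc:.4f}  standard={std_fo:.4f}  advanced={adv_fo:.4f}")
```

Because exp(-K·t) is monotone in K, the advanced linearization reproduces the exact exceedance probability here, which is why it tracks the Monte Carlo estimate closely while the mean-centered linearization does not.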
Estimating Distance in Real and Virtual Environments: Does Order Make a Difference?
Ziemer, Christine J.; Plumert, Jodie M.; Cremer, James F.; Kearney, Joseph K.
2010-01-01
This investigation examined how the order in which people experience real and virtual environments influences their distance estimates. Participants made two sets of distance estimates in one of the following conditions: 1) real environment first, virtual environment second; 2) virtual environment first, real environment second; 3) real environment first, real environment second; or 4) virtual environment first, virtual environment second. In Experiment 1, participants imagined how long it would take to walk to targets in real and virtual environments. Participants’ first estimates were significantly more accurate in the real than in the virtual environment. When the second environment was the same as the first environment (real-real and virtual-virtual), participants’ second estimates were also more accurate in the real than in the virtual environment. When the second environment differed from the first environment (real-virtual and virtual-real), however, participants’ second estimates did not differ significantly across the two environments. A second experiment in which participants walked blindfolded to targets in the real environment and imagined how long it would take to walk to targets in the virtual environment replicated these results. These subtle, yet persistent order effects suggest that memory can play an important role in distance perception. PMID:19525540
Quantitative measurement of protein digestion in simulated gastric fluid.
Herman, Rod A; Korjagin, Valerie A; Schafer, Barry W
2005-04-01
The digestibility of novel proteins in simulated gastric fluid is considered to be an indicator of reduced risk of allergenic potential in food, and estimates of digestibility for transgenic proteins expressed in crops are required for making a human-health risk assessment by regulatory authorities. The estimation of first-order rate constants for digestion under conditions of low substrate concentration was explored for two protein substrates (azocoll and DQ-ovalbumin). Data conformed to first-order kinetics, and half-lives were relatively insensitive to significant variations in both substrate and pepsin concentration when high purity pepsin preparations were used. Estimation of digestion efficiency using densitometric measurements of relative protein concentration based on SDS-PAGE corroborated digestion estimates based on measurements of dye or fluorescence release from the labeled substrates. The suitability of first-order rate constants for estimating the efficiency of the pepsin digestion of novel proteins is discussed. Results further support a kinetic approach as appropriate for comparing the digestibility of proteins in simulated gastric fluid.
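As a minimal illustration of the first-order kinetic treatment described above (hypothetical data, not the azocoll or DQ-ovalbumin measurements), a rate constant can be estimated from substrate decay by a log-linear fit and converted to a half-life:

```python
import math

# Hypothetical digestion time course following C(t) = C0 * exp(-k*t).
times = [0.0, 1.0, 2.0, 4.0, 8.0]            # minutes (assumed units)
k_true, c0 = 0.35, 100.0
conc = [c0 * math.exp(-k_true * t) for t in times]

# Ordinary least squares of ln(C) on t: the slope is -k.
n = len(times)
xbar = sum(times) / n
ybar = sum(math.log(c) for c in conc) / n
slope = (sum((t - xbar) * (math.log(c) - ybar) for t, c in zip(times, conc))
         / sum((t - xbar) ** 2 for t in times))
k_hat = -slope
half_life = math.log(2) / k_hat              # t_1/2 = ln(2) / k
print(f"k = {k_hat:.3f} per min, half-life = {half_life:.2f} min")
```

With noiseless data the fit recovers k exactly; with real densitometric or fluorescence data the same regression gives the first-order rate constant the abstract discusses.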
Paul P. Kormanik; H.D. Muse; S.J Sung
1991-01-01
Frequency distribution and heritability of first-order lateral root (FOLR) numbers in 1-0 seedlings were followed for 5 years for 115 different half-sib seedlots from the Georgia Forestry Commission's Arrowhead and Baldwin Seed Orchards. In 1986 and 1987, seedlings were permitted unrestricted growth under management conditions similar to those practiced in most...
Regnery, J; Wing, A D; Alidina, M; Drewes, J E
2015-08-01
This study developed relationships between the attenuation of emerging trace organic chemicals (TOrC) during managed aquifer recharge (MAR) as a function of retention time, system characteristics, and operating conditions using controlled laboratory-scale soil column experiments simulating MAR. The results revealed that MAR performance in terms of TOrC attenuation is primarily determined by key environmental parameters (i.e., redox, primary substrate). Soil columns with suboxic and anoxic conditions performed poorly (i.e., less than 30% attenuation of moderately degradable TOrC) in comparison to oxic conditions (on average between 70 and 100% attenuation for the same compounds) within a residence time of three days. Given this dependency on redox conditions, we investigated whether key parameter-dependent rate constants are more suitable for contaminant transport modeling to properly capture the dynamic TOrC attenuation under field-scale conditions. Laboratory-derived first-order removal kinetics were determined for 19 TOrC under three different redox conditions, and the rate constants were applied to MAR field data. Our findings suggest that simplified first-order rate constants will most likely not provide meaningful results if the target compounds exhibit redox-dependent biotransformation behavior or if the intention is to exactly capture the decline in concentration over time and distance at field-scale MAR. However, if the intention is to calculate the percent removal after an extended time period and subsurface travel distance, simplified first-order rate constants seem sufficient to provide a first estimate of TOrC attenuation during MAR.
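The back-of-envelope use of a first-order rate constant that the abstract above endorses, percent removal after a given residence time, is a one-liner. The rate constants below are invented for illustration, not values from the study's 19 TOrC:

```python
import math

def percent_removal(k_per_day, t_days):
    """Percent attenuation after time t for first-order decay C/C0 = exp(-k*t)."""
    return 100.0 * (1.0 - math.exp(-k_per_day * t_days))

# Hypothetical rate constants over a three-day residence time.
fast = percent_removal(0.8, 3.0)   # readily degradable under oxic conditions
slow = percent_removal(0.1, 3.0)   # poorly degradable under anoxic conditions
print(f"oxic-like: {fast:.1f}%  anoxic-like: {slow:.1f}%")
```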
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters even when the initial sets of parameter values deviated substantially from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about the design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front, near a time chosen by adding the inverse of a hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.
Rapid and accurate estimation of release conditions in the javelin throw.
Hubbard, M; Alaways, L W
1989-01-01
We have developed a system to measure initial conditions in the javelin throw rapidly enough to be used by the thrower for feedback in performance improvement. The system consists of three subsystems whose main tasks are: (A) acquisition of automatically digitized high-speed (200 Hz) video x, y position data for the first 0.1-0.2 s of the javelin flight after release; (B) estimation of five javelin release conditions from the x, y position data; and (C) graphical presentation to the thrower of these release conditions and a simulation of the subsequent flight, together with optimal conditions and flight for the same release velocity. The estimation scheme relies on a simulation model and is at least an order of magnitude more accurate than previously reported measurements of javelin release conditions. The system provides, for the first time ever in any throwing event, the ability to critique nearly instantly, in a precise and quantitative manner, the crucial factors in the throw which determine the range. This should be expected to lead to much greater control and consistency of throwing variables by athletes who use the system, and could even lead to an evolution of new throwing techniques.
Bayesian Image Segmentations by Potts Prior and Loopy Belief Propagation
NASA Astrophysics Data System (ADS)
Tanaka, Kazuyuki; Kataoka, Shun; Yasuda, Muneki; Waizumi, Yuji; Hsu, Chiou-Ting
2014-12-01
This paper presents a Bayesian image segmentation model based on a Potts prior and loopy belief propagation. The proposed Bayesian model involves several terms, including the pairwise interactions of Potts models, and the average vectors and covariance matrices of Gauss distributions in color image modeling. These terms are often referred to as hyperparameters in statistical machine learning theory. In order to determine these hyperparameters, we propose a new scheme for hyperparameter estimation based on conditional maximization of entropy in the Potts prior. The algorithm is derived from loopy belief propagation. In addition, we compare our conditional maximum entropy framework with the conventional maximum likelihood framework, and also clarify how first-order phase transitions in loopy belief propagation for Potts models influence our hyperparameter estimation procedures.
Space-time asymptotics of the two dimensional Navier-Stokes flow in the whole plane
NASA Astrophysics Data System (ADS)
Okabe, Takahiro
2018-01-01
We consider the space-time behavior of the two dimensional Navier-Stokes flow. Introducing some qualitative structure of initial data, we succeed in deriving the first-order asymptotic expansion of the Navier-Stokes flow without a moment condition on initial data in L^1(R^2) ∩ L^2_σ(R^2). Moreover, we characterize the necessary and sufficient condition for the rapid energy decay ‖u(t)‖_2 = o(t^(-1)) as t → ∞, motivated by Miyakawa-Schonbek [21]. By weighted estimates in Hardy spaces, we discuss the possibility of the second-order asymptotic expansion of the Navier-Stokes flow assuming the first-order moment condition on initial data. Moreover, observing that the Navier-Stokes flow u(t) lies in the Hardy space H^1(R^2) for t > 0, we consider the asymptotic expansions in terms of the Hardy norm. Finally we consider the rapid time decay ‖u(t)‖_2 = o(t^(-3/2)) as t → ∞ under the cyclic symmetry introduced by Brandolese [2].
NASA Astrophysics Data System (ADS)
Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid
2018-06-01
This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel which satisfies nonlinear boundary conditions, the standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is, therefore, to construct an iterative method by combining the reproducing kernel Hilbert space method with a shooting-like technique to solve the mentioned problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature. In this paper, we present error estimation for the reproducing kernel method for nonlinear boundary value problems, possibly for the first time. Some numerical results are given to demonstrate the applicability of the method.
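The shooting-like idea mentioned in the abstract above can be illustrated on a toy problem. The sketch below is not the paper's reproducing kernel method, nor its fourth-order beam equation; it only shows how a nonlinear boundary condition can be handled by iterating on an unknown initial slope: solve u'' = u on [0,1] with u(0) = 1 and the (invented) nonlinear condition u(1) + u'(1)^2 = 3, bisecting on s = u'(0).

```python
def rhs(u, v):
    """First-order system for u'' = u: (u', v') = (v, u)."""
    return v, u

def integrate(s, n=2000):
    """RK4 from x=0 to x=1 with u(0)=1, u'(0)=s; returns (u(1), u'(1))."""
    h = 1.0 / n
    u, v = 1.0, s
    for _ in range(n):
        k1u, k1v = rhs(u, v)
        k2u, k2v = rhs(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
        k3u, k3v = rhs(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
        k4u, k4v = rhs(u + h * k3u, v + h * k3v)
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return u, v

def residual(s):
    """How far the shot with slope s misses the nonlinear boundary condition."""
    u1, v1 = integrate(s)
    return u1 + v1 ** 2 - 3.0

lo, hi = 0.0, 1.0          # residual changes sign on this bracket
for _ in range(60):        # bisection on the unknown initial slope
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
s_star = 0.5 * (lo + hi)
print(f"u'(0) = {s_star:.6f}, boundary residual = {residual(s_star):.2e}")
```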
The isentropic quantum drift-diffusion model in two or three space dimensions
NASA Astrophysics Data System (ADS)
Chen, Xiuqing
2009-05-01
We investigate the isentropic quantum drift-diffusion model, a fourth order parabolic system, in space dimensions d = 2, 3. First, we establish the global weak solutions with large initial value and periodic boundary conditions. Then we show the semiclassical limit by delicate interpolation estimates and compactness argument.
NASA Astrophysics Data System (ADS)
Bailly, J. S.; Dartevelle, M.; Delenne, C.; Rousseau, A.
2017-12-01
Floodplain and major river bed topography govern many river biophysical processes during floods. Despite the growth of direct topographic measurement from LiDAR on riverine systems, there is still room to develop methods for large (e.g. deltas) or very local (e.g. ponds) riverine systems that take advantage of information coming from simple SAR or optical image processing on the floodplain, resulting from waterbody delineation during flood rise or recession and producing ordered contour lines. The next challenge is thus to exploit such data in order to estimate continuous topography on the floodplain by combining heterogeneous data: a topographic point dataset and a set of located, ordered contour lines of unknown elevation. This article compares two methods designed to estimate continuous floodplain topography by mixing ordinal contour lines and continuous topographic points. For both methods, a first estimation step is to assign an elevation to each contour line, and a second step is to estimate the continuous field from both the topographic points and the valued contour lines. The first proposed method is a stochastic method based on multi-Gaussian random fields and conditional simulation. The second is a deterministic method based on radial (thin-plate) spline functions used for approximate bivariate surface construction. Results are first shown and discussed for a set of synthetic case studies presenting various topographic point densities and topographic smoothness. Next, results are shown and discussed for an actual case study in the Montagua laguna, located in the north of Valparaiso, Chile.
NASA Astrophysics Data System (ADS)
Brown, T. G.; Lespez, L.; Sear, D. A.; Houben, P.; Klimek, K.
2016-12-01
Novel Application of Density Estimation Techniques in Muon Ionization Cooling Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohayai, Tanaz Angelina; Snopok, Pavel; Neuffer, David
The international Muon Ionization Cooling Experiment (MICE) aims to demonstrate muon beam ionization cooling for the first time and constitutes a key part of the R&D towards a future neutrino factory or muon collider. Beam cooling reduces the size of the phase space volume occupied by the beam. Non-parametric density estimation techniques allow very precise calculation of the muon beam phase-space density and its increase as a result of cooling. These density estimation techniques are investigated in this paper and applied in order to estimate the reduction in muon beam size in MICE under various conditions.
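The kind of non-parametric density estimation described in the abstract above can be sketched with a one-dimensional Gaussian kernel density estimator. The toy "beam" samples, bandwidth rule, and numbers below are illustrative assumptions, not MICE data or the authors' estimator:

```python
import math
import random

def kde(sample, x):
    """1-D Gaussian KDE at point x with Silverman's rule-of-thumb bandwidth."""
    n = len(sample)
    mean = sum(sample) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in sample) / (n - 1))
    h = 1.06 * sd * n ** -0.2
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2)
               for s in sample) / (n * h * math.sqrt(2 * math.pi))

# Toy one-dimensional "phase space" coordinate before and after cooling.
random.seed(0)
hot = [random.gauss(0.0, 2.0) for _ in range(5000)]    # larger beam
cold = [random.gauss(0.0, 1.0) for _ in range(5000)]   # cooled beam

# Cooling shrinks the occupied phase-space volume, so the peak density rises.
print(f"peak density before: {kde(hot, 0.0):.3f}, after: {kde(cold, 0.0):.3f}")
```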
Theoretical predictions of latitude dependencies in the solar wind
NASA Technical Reports Server (NTRS)
Winge, C. R., Jr.; Coleman, P. J., Jr.
1974-01-01
Results are presented which were obtained with the Winge-Coleman model for theoretical predictions of latitudinal dependencies in the solar wind. A first-order expansion is described which allows analysis of first-order latitudinal variations in the coronal boundary conditions and results in a second-order partial differential equation for the perturbation stream function. Latitudinal dependencies are analytically separated out in the form of Legendre polynomials and their derivatives, and are reduced to the solution of radial differential equations. This analysis is shown to supply an estimate of how large the coronal variation in latitude must be to produce an 11 km/sec/deg gradient in the radial velocity of the solar wind, assuming steady-state processes.
MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method
Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.
2003-01-01
A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
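The FOSM propagation step described above can be sketched independently of MODFLOW. The two-output toy "model" and input covariance below are invented for illustration; the pattern is the one the abstract names: finite-difference sensitivities J combined with input uncertainty as Cov(h) ≈ J · Cov(p) · Jᵀ.

```python
import math

def head(p):
    """Toy stand-in for the groundwater model: heads at two locations."""
    T, R = p                       # transmissivity-like and recharge-like inputs
    return [R / T, 2.0 * R / math.sqrt(T)]

p0 = [4.0, 1.0]                    # nominal input values (assumed)
cov_p = [[0.04, 0.0],              # input variance-covariance matrix (assumed)
         [0.0, 0.01]]

# Finite-difference Jacobian J[i][j] = d head_i / d p_j.
eps = 1e-6
h0 = head(p0)
J = [[0.0] * len(p0) for _ in h0]
for j in range(len(p0)):
    pp = list(p0)
    pp[j] += eps
    hp = head(pp)
    for i in range(len(h0)):
        J[i][j] = (hp[i] - h0[i]) / eps

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# First-order covariance of the outputs; its diagonal gives head variances.
cov_h = matmul(matmul(J, cov_p), [list(r) for r in zip(*J)])
std_h = [math.sqrt(cov_h[i][i]) for i in range(len(h0))]
print(f"head standard deviations: {std_h[0]:.4f}, {std_h[1]:.4f}")
```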
A spline-based parameter estimation technique for static models of elastic structures
NASA Technical Reports Server (NTRS)
Dutt, P.; Taasan, S.
1986-01-01
The problem of identifying the spatially varying coefficient of elasticity using an observed solution to the forward problem is considered. Under appropriate conditions this problem can be treated as a first order hyperbolic equation in the unknown coefficient. Some continuous dependence results are developed for this problem and a spline-based technique is proposed for approximating the unknown coefficient, based on these results. The convergence of the numerical scheme is established and error estimates obtained.
NASA Astrophysics Data System (ADS)
Sarna, Neeraj; Torrilhon, Manuel
2018-01-01
We define certain criteria, using the characteristic decomposition of the boundary conditions and energy estimates, which a set of stable boundary conditions for a linear initial boundary value problem, involving a symmetric hyperbolic system, must satisfy. We first use these stability criteria to show the instability of the Maxwell boundary conditions proposed by Grad (Commun Pure Appl Math 2(4):331-407, 1949). We then recognise a special block structure of the moment equations which arises due to the recursion relations and the orthogonality of the Hermite polynomials; the block structure will help us in formulating stable boundary conditions for an arbitrary order Hermite discretization of the Boltzmann equation. The formulation of stable boundary conditions relies upon an Onsager matrix which will be constructed such that the newly proposed boundary conditions stay close to the Maxwell boundary conditions at least in the lower order moments.
On the maximum principle for complete second-order elliptic operators in general domains
NASA Astrophysics Data System (ADS)
Vitolo, Antonio
This paper is concerned with the maximum principle for second-order linear elliptic equations in wide generality. By means of a geometric condition previously stressed by Berestycki-Nirenberg-Varadhan, Cabré was able to improve the classical ABP estimate, obtaining the maximum principle also in unbounded domains, such as infinite strips and open connected cones with closure different from the whole space. Here we introduce a new geometric condition that extends the result to a more general class of domains including the complements of hypersurfaces, such as the cut plane. The methods developed here allow us to deal with complete second-order equations, where the admissible first-order term, forced to be zero in a preceding result with Cafagna, depends on the geometry of the domain.
Abrahamson, Joseph P; Zelina, Joseph; Andac, M Gurhan; Vander Wal, Randy L
2016-11-01
The first-order approximation (FOA3) currently employed to estimate BC mass emissions underpredicts BC emissions due to inaccuracies in measuring the low smoke numbers (SNs) produced by modern high bypass ratio engines. The recently developed Formation and Oxidation (FOX) method removes the need for, and hence the uncertainty associated with, SNs, instead relying upon engine conditions to predict BC mass. Using the true engine operating conditions from proprietary engine cycle data, an improved FOX (ImFOX) predictive relation is developed. Still, the current methods are not optimized to estimate cruise emissions, nor do they account for the use of alternative jet fuels with reduced aromatic content. Here improved correlations are developed to predict engine conditions and BC mass emissions at ground level and cruise altitude. This new ImFOX is paired with a newly developed hydrogen relation to predict emissions from alternative fuels and fuel blends. The ImFOX is designed for rich-quench-lean style combustor technologies employed predominantly in the current aviation fleet.
Local energy flux estimates for unstable conditions using variance data in semiarid rangelands
Kustas, William P.; Blanford, J.H.; Stannard, D.I.; Daughtry, C.S.T.; Nichols, W.D.; Weltz, M.A.
1994-01-01
A network of meteorological stations was installed during the Monsoon '90 field campaign in the Walnut Gulch experimental watershed. The study area has a fairly complex surface. The vegetation cover is heterogeneous and sparse, and the terrain is mildly hilly, but dissected by ephemeral channels. Besides measurement of some of the standard weather data such as wind speed, air temperature, and solar radiation, these sites also contained instruments for estimating the local surface energy balance. The approach utilized measurements of net radiation (Rn), soil heat flux (G) and Monin-Obukhov similarity theory applied to first- and second-order turbulent statistics of wind speed and temperature for determining the sensible heat flux (H). The latent heat flux (LE) was solved as a residual in the surface energy balance equation, namely, LE = −(Rn + G + H). This procedure (VAR-RESID) for estimating the energy fluxes satisfied monetary constraints and the requirement for low maintenance and continued operation through the harsh environmental conditions experienced in semiarid regions. Comparison of energy fluxes using this approach with more traditional eddy correlation techniques showed differences were within 20% under unstable conditions. Similar variability in flux estimates over the study area was present in the eddy correlation data. Hence, estimates of H and LE using the VAR-RESID approach under unstable conditions were considered satisfactory. Also, with second-order statistics of vertical velocity collected at several sites, the local momentum roughness length was estimated. This is an important parameter used in modeling the turbulent transfer of momentum and sensible heat fluxes across the surface-atmosphere interface.
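The residual step of the VAR-RESID procedure above is straightforward to sketch. The free-convective flux-variance form of H below is one commonly quoted similarity expression, and every input value (and the similarity constant C1) is an illustrative assumption, not Monsoon '90 data; sign conventions for the balance also vary between papers.

```python
import math

# Free-convective flux-variance estimate of sensible heat flux:
#   H ~ rho*cp * (sigma_T / C1)^(3/2) * sqrt(k*g*z / T)
rho, cp = 1.1, 1005.0        # air density (kg/m^3), specific heat (J/kg/K)
k_von, g = 0.4, 9.81         # von Karman constant, gravity (m/s^2)
z, T = 4.0, 303.0            # measurement height (m), air temperature (K)
C1 = 0.95                    # similarity constant (commonly quoted value)
sigma_T = 0.6                # measured temperature standard deviation (K), assumed

H = rho * cp * (sigma_T / C1) ** 1.5 * math.sqrt(k_von * g * z / T)

# Latent heat flux as the surface energy balance residual (one common sign
# convention; the abstract writes the balance with all terms on one side).
Rn, G = 450.0, 60.0          # net radiation, soil heat flux (W/m^2), assumed
LE = Rn - G - H
print(f"H = {H:.0f} W/m^2, LE = {LE:.0f} W/m^2")
```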
Fellner, Klemens; Kovtunenko, Victor A
2016-01-01
A nonlinear Poisson-Boltzmann equation with inhomogeneous Robin type boundary conditions at the interface between two materials is investigated. The model describes the electrostatic potential generated by a vector of ion concentrations in a periodic multiphase medium with dilute solid particles. The key issue stems from interfacial jumps, which necessitate discontinuous solutions to the problem. Based on variational techniques, we derive the homogenisation of the discontinuous problem and establish a rigorous residual error estimate up to the first-order correction.
Cost of Crashes Related to Road Conditions, United States, 2006
Zaloshnja, Eduard; Miller, Ted R.
2009-01-01
This is the first study to estimate the cost of crashes related to road conditions in the U.S. To model the probability that road conditions contributed to the involvement of a vehicle in a crash, we used 2000–03 Large Truck Crash Causation Study (LTCCS) data, the only dataset that provides detailed information on whether road conditions contributed to crash occurrence. We applied the logistic regression results to a costed national crash dataset in order to calculate the probability that road conditions contributed to the involvement of a vehicle in each crash. In crashes where someone was moderately to seriously injured (AIS-2-6) in a vehicle that harmfully impacted a large tree or a medium or large non-breakaway pole, or if the first harmful event was collision with a bridge, we changed the calculated probability of being road-related to 1. We used the state distribution of costs of fatal crashes where road conditions contributed to crash occurrence or severity to estimate the respective state distribution of non-fatal crash costs. The estimated comprehensive cost of traffic crashes where road conditions contributed to crash occurrence or severity was $217.5 billion in 2006. This represented 43.6% of the total comprehensive crash cost. The large share of crash costs related to road design and conditions underlines the importance of these factors in highway safety. Road conditions are largely controllable. Road maintenance and upgrading can prevent crashes and reduce injury severity. PMID:20184840
1982-09-01
considered to be Markovian, and the fact that Ehrenberg has been openly critical of the use of first-order Markov processes in describing consumer behavior disinclines us to treat these data in this manner. We shall therefore interpret the p(i,i) as joint rather than conditional probabilities.
Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry
2018-06-19
Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
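The second step of the methodology above, ranking parameter subsets by AIC after calibration, can be sketched in its least-squares form AIC = 2p + n·ln(RSS/n). The residual sums of squares below are invented for illustration, not the oilseed rape model's fits:

```python
import math

def aic(rss, n_obs, n_params):
    """Akaike information criterion for a least-squares fit."""
    return 2 * n_params + n_obs * math.log(rss / n_obs)

n_obs = 120
# Hypothetical: residual sum of squares after re-estimating subsets of the
# most influential parameters (subset size -> RSS).
candidates = {3: 15.2, 5: 13.0, 11: 12.8}

scores = {p: aic(rss, n_obs, p) for p, rss in candidates.items()}
best = min(scores, key=scores.get)   # lowest AIC wins
print(f"AIC by subset size: {scores}; selected subset size: {best}")
```

Note the trade-off the criterion encodes: the 11-parameter fit has the smallest RSS but is penalized for its extra parameters, so a mid-sized subset is selected here.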
Lag-One Autocorrelation in Short Series: Estimation and Hypotheses Testing
ERIC Educational Resources Information Center
Solanas, Antonio; Manolov, Rumen; Sierra, Vicenta
2010-01-01
In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision in the estimation of autocorrelation is…
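For reference alongside the abstract above, the conventional lag-one autocorrelation estimator (the usual starting point among the estimators such studies review) is:

```python
def lag1_autocorr(x):
    """Conventional lag-1 autocorrelation estimate:
    r1 = sum_{t=1}^{n-1} (x_t - xbar)(x_{t+1} - xbar) / sum_{t=1}^{n} (x_t - xbar)^2
    Known to be biased in short series, which motivates the alternatives."""
    n = len(x)
    xbar = sum(x) / n
    num = sum((x[t] - xbar) * (x[t + 1] - xbar) for t in range(n - 1))
    den = sum((v - xbar) ** 2 for v in x)
    return num / den

print(lag1_autocorr([1, 2, 3, 4, 5]))        # trending series -> 0.4
print(lag1_autocorr([1, -1, 1, -1, 1, -1]))  # alternating series -> negative
```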
Impact of transverse and longitudinal dispersion on first-order degradation rate constant estimation
NASA Astrophysics Data System (ADS)
Stenback, Greg A.; Ong, Say Kee; Rogers, Shane W.; Kjartanson, Bruce H.
2004-09-01
A two-dimensional analytical model is employed for estimating the first-order degradation rate constant of hydrophobic organic compounds (HOCs) in contaminated groundwater under steady-state conditions. The model may utilize all aqueous concentration data collected downgradient of a source area, but does not require that any data be collected along the plume centerline. Using a least squares fit of the model to aqueous concentrations measured in monitoring wells, degradation rate constants were estimated at a former manufactured gas plant (FMGP) site in the Midwest U.S. The estimated degradation rate constants are 0.0014, 0.0034, 0.0031, 0.0019, and 0.0053 day^(-1) for acenaphthene, naphthalene, benzene, ethylbenzene, and toluene, respectively. These estimated rate constants were as low as one-half those estimated with the one-dimensional (centerline) approach of Buscheck and Alcantar [Buscheck, T.E., Alcantar, C.M., 1995. Regression techniques and analytical solutions to demonstrate intrinsic bioremediation. In: Hinchee, R.E., Wilson, J.T., Downey, D.C. (Eds.), Intrinsic Bioremediation, Battelle Press, Columbus, OH, pp. 109-116], which does not account for transverse dispersivity. Varying the transverse and longitudinal dispersivity values over one order of magnitude for toluene data obtained from the FMGP site resulted in nearly a threefold variation in the estimated degradation rate constant, highlighting the importance of reliable estimates of the dispersion coefficients for obtaining reasonable estimates of the degradation rate constants. These results have significant implications for decision making and site management, where overestimation of a degradation rate may result in remediation times and bioconversion factors that exceed expectations.
For a complex source area or non-steady-state plume, a superposition of analytical models that incorporate longitudinal and transverse dispersion and time may be used at sites where the centerline method would not be applicable.
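As a hedged illustration of the simpler one-dimensional centerline idea this abstract contrasts with, the sketch below fits a first-order decay constant by log-linear least squares to synthetic downgradient concentrations. The variable names and values are illustrative assumptions, and dispersion is deliberately neglected; this is not the paper's two-dimensional model.

```python
import numpy as np

def fit_first_order_rate(x, conc, velocity):
    # Fit C(x) = C0 * exp(-k * x / v) by log-linear least squares;
    # the slope of ln(C) versus x is -k/v.
    slope, _ = np.polyfit(x, np.log(conc), 1)
    return -slope * velocity  # k, in 1/day if v is m/day and x is m

# synthetic check: k = 0.005 /day, seepage velocity v = 0.1 m/day
x = np.linspace(0.0, 200.0, 20)
true_k, v = 0.005, 0.1
conc = 10.0 * np.exp(-true_k * x / v)
k_hat = fit_first_order_rate(x, conc, v)
```

With noiseless synthetic data the fit recovers the decay constant exactly; with field data the same regression gives the least-squares estimate the centerline approach relies on.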
Fourth order difference methods for hyperbolic IBVP's
NASA Technical Reports Server (NTRS)
Gustafsson, Bertil; Olsson, Pelle
1994-01-01
Fourth order difference approximations of initial-boundary value problems for hyperbolic partial differential equations are considered. We use the method of lines approach with both explicit and compact implicit difference operators in space. The explicit operator satisfies an energy estimate leading to strict stability. For the implicit operator we develop boundary conditions and give a complete proof of strong stability using the Laplace transform technique. We also present numerical experiments for the linear advection equation and Burgers' equation with discontinuities in the solution or in its derivative. The first equation is used for modeling contact discontinuities in fluid dynamics, the second one for modeling shocks and rarefaction waves. The time discretization is done with a third order Runge-Kutta TVD method. For solutions with discontinuities in the solution itself we add a filter based on second order viscosity. In the case of the nonlinear Burgers' equation we use a flux splitting technique that results in an energy estimate for certain difference approximations, in which case an entropy condition is also fulfilled. In particular we shall demonstrate that the unsplit conservative form produces a non-physical shock instead of the physically correct rarefaction wave. In the numerical experiments we compare our fourth order methods with a standard second order one and with a third order TVD-method. The results show that the fourth order methods are the only ones that give good results for all the considered test problems.
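A minimal sketch of the kind of explicit fourth-order spatial operator discussed above, on a periodic grid (the paper's boundary treatment and implicit operators are not reproduced here): the standard five-point central stencil for the first derivative, checked against a smooth solution.

```python
import numpy as np

def d1_fourth_order(f, h):
    # (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 h), periodic grid
    return (-np.roll(f, -2) + 8.0 * np.roll(f, -1)
            - 8.0 * np.roll(f, 1) + np.roll(f, 2)) / (12.0 * h)

# accuracy check on u(x) = sin(x), whose exact derivative is cos(x)
n = 64
x = 2.0 * np.pi * np.arange(n) / n
h = x[1] - x[0]
err = np.max(np.abs(d1_fourth_order(np.sin(x), h) - np.cos(x)))
```

The truncation error of this stencil scales as h⁴, so on 64 points the maximum error is already of order 10⁻⁶.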
Ebel, B.A.; Mirus, B.B.; Heppner, C.S.; VanderKwaak, J.E.; Loague, K.
2009-01-01
Distributed hydrologic models capable of simulating fully-coupled surface water and groundwater flow are increasingly used to examine problems in the hydrologic sciences. Several techniques are currently available to couple the surface and subsurface; the two most frequently employed approaches are first-order exchange coefficients (a.k.a., the surface conductance method) and enforced continuity of pressure and flux at the surface-subsurface boundary condition. The effort reported here examines the parameter sensitivity of simulated hydrologic response for the first-order exchange coefficients at a well-characterized field site using the fully coupled Integrated Hydrology Model (InHM). This investigation demonstrates that the first-order exchange coefficients can be selected such that the simulated hydrologic response is insensitive to the parameter choice, while simulation time is considerably reduced. Alternatively, the ability to choose a first-order exchange coefficient that intentionally decouples the surface and subsurface facilitates concept-development simulations to examine real-world situations where the surface-subsurface exchange is impaired. While the parameters comprising the first-order exchange coefficient cannot be directly estimated or measured, the insensitivity of the simulated flow system to these parameters (when chosen appropriately) combined with the ability to mimic actual physical processes suggests that the first-order exchange coefficient approach can be consistent with a physics-based framework. Copyright © 2009 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryding, Kristen E.; Skalski, John R.
1999-06-01
The purpose of this report is to illustrate the development of a stochastic model using coded wire-tag (CWT) release and age-at-return data, in order to regress first year ocean survival probabilities against coastal ocean conditions and climate covariates.
Output Feedback Distributed Containment Control for High-Order Nonlinear Multiagent Systems.
Li, Yafeng; Hua, Changchun; Wu, Shuangshuang; Guan, Xinping
2017-01-31
In this paper, we study the problem of output feedback distributed containment control for a class of high-order nonlinear multiagent systems under a fixed undirected graph and a fixed directed graph, respectively. Only the output signals of the systems can be measured. A novel reduced-order dynamic gain observer is constructed to estimate the unmeasured state variables of the system under a less conservative condition on the nonlinear terms than the traditional Lipschitz one. Via the backstepping method, output feedback distributed nonlinear controllers for the followers are designed. By means of the novel first virtual controllers, we separate the estimated state variables of different agents from each other. Consequently, the designed controllers are independent of the neighbors' estimated state variables, relying only on their output information, and the dynamics of each agent can be greatly different, which gives the design method a wider class of applications. Finally, a numerical simulation is presented to illustrate the effectiveness of the proposed method.
Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.
Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun
2018-06-04
Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance with limited snapshots or from difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established from the denoised FOCs vector and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated in the SMV model and are approximately estimated in a simple way. A necessary condition on the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources K must satisfy K ≤ (M⁴ − 2M³ + 7M² − 6M)/8. The proposed method suits any geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability O(M⁴), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.
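The identifiability bound quoted above is easy to evaluate; the sketch below tabulates it for small arrays, showing the O(M⁴) growth the abstract refers to.

```python
def max_identifiable_sources(m):
    # Upper bound from the stated necessary condition:
    # K <= (M^4 - 2 M^3 + 7 M^2 - 6 M) / 8, for an M-sensor array.
    return (m**4 - 2 * m**3 + 7 * m**2 - 6 * m) // 8

# e.g. 2 sensors -> 2 sources, 3 -> 9, 4 -> 27
bounds = {m: max_identifiable_sources(m) for m in range(2, 9)}
```

Even a 4-sensor array can, by this bound, identify up to 27 sources, which is the sense in which the fourth-order co-array greatly extends identifiability over the number of physical sensors.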
Inverse analysis and regularisation in conditional source-term estimation modelling
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.
2014-05-01
Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter and the uncertainty in the solution is determined based on the width of the credible intervals. The width of the credible intervals is significantly reduced with the inclusion of a smoothing prior and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals are not dependent upon a previous solution and better predict characteristics for higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
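A generic sketch of the zeroth-order Tikhonov idea used here, on a toy discretised first-kind problem (the kernel and signal are illustrative assumptions, not the CSE operators): the regularised solve trades a small bias for suppression of the noise amplification that makes the naive inversion useless.

```python
import numpy as np

def tikhonov(A, b, lam):
    # zeroth-order Tikhonov: minimise ||Ax - b||^2 + lam * ||x||^2,
    # i.e. solve (A^T A + lam I) x = A^T b
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# ill-posed toy problem: a smoothing (Gaussian) kernel matrix
n = 40
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01)
x_true = np.sin(np.pi * t)
rng = np.random.default_rng(1)
b = A @ x_true + 1e-4 * rng.standard_normal(n)  # slightly noisy data

x_naive = np.linalg.solve(A, b)   # noise amplified by ill-conditioning
x_reg = tikhonov(A, b, lam=1e-3)  # regularised reconstruction
```

Because the kernel's small singular values amplify even 10⁻⁴-level noise enormously, the regularised solution is far closer to the true profile than the direct solve.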
Quasi-projective synchronization of fractional-order complex-valued recurrent neural networks.
Yang, Shuai; Yu, Juan; Hu, Cheng; Jiang, Haijun
2018-08-01
In this paper, without separating the complex-valued neural networks into two real-valued systems, the quasi-projective synchronization of fractional-order complex-valued neural networks is investigated. First, two new fractional-order inequalities are established by using the theory of complex functions, Laplace transform and Mittag-Leffler functions, which generalize traditional inequalities with the first-order derivative in the real domain. Additionally, different from hybrid control schemes given in the previous work concerning the projective synchronization, a simple and linear control strategy is designed in this paper and several criteria are derived to ensure quasi-projective synchronization of the complex-valued neural networks with fractional-order based on the established fractional-order inequalities and the theory of complex functions. Moreover, the error bounds of quasi-projective synchronization are estimated. Especially, some conditions are also presented for the Mittag-Leffler synchronization of the addressed neural networks. Finally, some numerical examples with simulations are provided to show the effectiveness of the derived theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
Evaluation Of Statistical Models For Forecast Errors From The HBV-Model
NASA Astrophysics Data System (ADS)
Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.
2009-04-01
Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed before a model was constructed in which the mean values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted a) the median values to be close to the observed values; b) the forecast intervals to be narrow; c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated, and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
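The first-order autoregressive building block shared by the first two models can be sketched as follows; the simulated series and the moment estimator are generic illustrations, not the Langvatn data or the paper's conditioning on climate.

```python
import numpy as np

# AR(1) model for (transformed) forecast errors: e[t] = phi * e[t-1] + w[t]
rng = np.random.default_rng(42)
phi_true, n = 0.6, 5000
e = np.zeros(n)
for t in range(1, n):
    e[t] = phi_true * e[t - 1] + rng.standard_normal()

# least-squares / lag-1 moment estimate of the autoregressive parameter
phi_hat = np.dot(e[1:], e[:-1]) / np.dot(e[:-1], e[:-1])
```

Capturing this lag-1 dependence is exactly the "account for the auto-correlation" step the abstract identifies as the main challenge; ignoring it leaves the residuals serially correlated and the forecast intervals miscalibrated.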
Sabatini, Angelo Maria
2011-01-01
In this paper we present a quaternion-based Extended Kalman Filter (EKF) for estimating the three-dimensional orientation of a rigid body. The EKF exploits the measurements from an Inertial Measurement Unit (IMU) that is integrated with a tri-axial magnetic sensor. Magnetic disturbances and gyro bias errors are modeled and compensated by including them in the filter state vector. We employ the observability rank criterion based on Lie derivatives to verify the conditions under which the nonlinear system that describes the process of motion tracking by the IMU is observable, namely it may provide sufficient information for performing the estimation task with bounded estimation errors. The observability conditions are that the magnetic field, perturbed by first-order Gauss-Markov magnetic variations, and the gravity vector are not collinear and that the IMU is subject to some angular motions. Computer simulations and experimental testing are presented to evaluate the algorithm performance, including when the observability conditions are critical. PMID:22163689
NASA Astrophysics Data System (ADS)
Béranger, Sandra C.; Sleep, Brent E.; Lollar, Barbara Sherwood; Monteagudo, Fernando Perez
2005-01-01
An analytical, one-dimensional, multi-species, reactive transport model for simulating the concentrations and isotopic signatures of tetrachloroethylene (PCE) and its daughter products was developed. The simulation model was coupled to a genetic algorithm (GA) combined with a gradient-based (GB) method to estimate the first order decay coefficients and enrichment factors. In testing with synthetic data, the hybrid GA-GB method reduced the computational requirements for parameter estimation by a factor as great as 300. The isotopic signature profiles were observed to be more sensitive than the concentration profiles to estimates of both the first order decay constants and enrichment factors. Including isotopic data for parameter estimation significantly increased the GA convergence rate and slightly improved the accuracy of estimation of first order decay constants.
Error estimation for CFD aeroheating prediction under rarefied flow condition
NASA Astrophysics Data System (ADS)
Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian
2014-12-01
Both direct simulation Monte Carlo (DSMC) and Computational Fluid Dynamics (CFD) methods have become widely used for aerodynamic prediction when reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in aeroheating calculation as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter ɛ is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression of the Burnett equation. DSMC simulation of hypersonic flow over a cylinder in the transitional regime is performed to test the performance of the parameter ɛ, compared with two other parameters, Knρ and Ma·Knρ.
Spatial Decomposition of Translational Water–Water Correlation Entropy in Binding Pockets
2015-01-01
A number of computational tools available today compute the thermodynamic properties of water at surfaces and in binding pockets by using inhomogeneous solvation theory (IST) to analyze explicit-solvent simulations. Such methods enable qualitative spatial mappings of both energy and entropy around a solute of interest and can also be applied quantitatively. However, the entropy estimates of existing methods have, to date, been almost entirely limited to the first-order terms in the IST’s entropy expansion. These first-order terms account for localization and orientation of water molecules in the field of the solute but not for the modification of water–water correlations by the solute. Here, we present an extension of the Grid Inhomogeneous Solvation Theory (GIST) approach which accounts for water–water translational correlations. The method involves rewriting the two-point density of water in terms of a conditional density and utilizes the efficient nearest-neighbor entropy estimation approach. Spatial maps of this second order term, for water in and around the synthetic host cucurbit[7]uril and in the binding pocket of the enzyme Factor Xa, reveal mainly negative contributions, indicating solute-induced water–water correlations relative to bulk water; particularly strong signals are obtained for sites at the entrances of cavities or pockets. This second-order term thus enters with the same, negative, sign as the first order translational and orientational terms. Numerical and convergence properties of the methodology are examined. PMID:26636620
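The "efficient nearest-neighbor entropy estimation approach" mentioned above can be illustrated in its simplest, one-dimensional form; this is the generic Kozachenko-Leonenko estimator under stated assumptions (d = 1, first nearest neighbour), not the GIST implementation itself.

```python
import numpy as np

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def kl_entropy_1d(samples):
    # Kozachenko-Leonenko nearest-neighbour estimate for d = 1, k = 1:
    # H ~ mean(ln eps_i) + ln(c_1) + ln(N) + gamma, with c_1 = 2 and
    # eps_i the distance from sample i to its nearest neighbour.
    x = np.sort(np.asarray(samples, dtype=float))
    left = np.diff(x, prepend=-np.inf)   # gap to the left neighbour
    right = np.diff(x, append=np.inf)    # gap to the right neighbour
    eps = np.minimum(left, right)
    return float(np.mean(np.log(eps)) + np.log(2.0) + np.log(len(x)) + GAMMA)

rng = np.random.default_rng(7)
h_uniform = kl_entropy_1d(rng.random(4000))  # true value is 0 for U(0, 1)
```

The estimator converges to the true differential entropy (zero for the uniform distribution on [0, 1]) without ever binning the samples, which is what makes nearest-neighbour schemes attractive for the sparse, high-dimensional densities arising in solvation analysis.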
NASA Astrophysics Data System (ADS)
Graham, Wendy D.; Neff, Christina R.
1994-05-01
The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1, is applied to the Upper Floridan Aquifer in NE Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated, when the effects of random recharge are neglected.
NASA Astrophysics Data System (ADS)
Wells, Aaron Raymond
This research focuses on the Emory and Obed Watersheds in the Cumberland Plateau in Central Tennessee and the Lower Hatchie River Watershed in West Tennessee. A framework based on market and nonmarket valuation techniques was used to empirically estimate economic values for environmental amenities and negative externalities in these areas. The specific techniques employed include a variation of hedonic pricing and discrete choice conjoint analysis (i.e., choice modeling), in addition to geographic information systems (GIS) and remote sensing. Microeconomic models of agent behavior, including random utility theory and profit maximization, provide the principal theoretical foundation linking valuation techniques and econometric models. The generalized method of moments estimator for a first-order spatial autoregressive function and mixed logit models are the principal econometric methods applied within the framework. The dissertation is subdivided into three separate chapters written in a manuscript format. The first chapter provides the necessary theoretical and mathematical conditions that must be satisfied in order for a forest amenity enhancement program to be implemented. These conditions include utility, value, and profit maximization. The second chapter evaluates the effect of forest land cover and information about future land use change on respondent preferences and willingness to pay for alternative hypothetical forest amenity enhancement options. Land use change information and the amount of forest land cover significantly influenced respondent preferences, choices, and stated willingness to pay. Hicksian welfare estimates for proposed enhancement options ranged from $57.42 to $25.53, depending on the policy specification, information level, and econometric model. The third chapter presents economic values for negative externalities associated with channelization that affect the productivity and overall market value of forested wetlands.
Results of robust, generalized moments estimation of a double logarithmic first-order spatial autoregressive error model (inverse distance weights with spatial dependence up to 1,500 m) indicate that the implicit cost of damages to forested wetlands caused by channelization equaled -$5,438 ha⁻¹. Collectively, the results of this dissertation provide economic measures of the damages to and benefits of environmental assets, help private landowners and policy makers identify the amenity attributes preferred by the public, and improve the management of natural resources.
Liu, Wanting; Fan, Jie; Gan, Jun; Lei, Hui; Niu, Chaoyang; Chan, Raymond C K; Zhu, Xiongzhao
2017-09-01
Impairment in social functioning has been widely described in obsessive-compulsive disorder (OCD). However, several aspects of social cognition, such as theory of mind (ToM), have not been substantially investigated in this context. This study examined cognitive and affective ToM in 40 OCD patients and 38 age-, sex-, and education-matched healthy controls (HCs) with the computerized Yoni task and a battery of neurocognitive tests. OCD symptom severity was assessed with the Yale-Brown Obsessive-Compulsive Scale (Y-BOCS). Depressive and anxiety symptoms were also assessed. Compared to HCs, OCD patients performed worse on second-order affective condition trials, but not cognitive or physical condition trials, of the Yoni task; there were no group differences in any of the first-order condition domains. Second-order ToM performance of OCD patients was associated with estimated intelligence and working memory performance. After controlling for neurocognitive variables, the group difference in second-order affective condition performance remained significant. These findings indicate that the affective component of ToM may be selectively impaired in OCD patients and that the observed deficit is largely independent of other neurocognitive impairments and clinical characteristics. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol
2017-12-01
Exploratory preclinical, as well as clinical, trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of a classical first-order conditional estimation with interaction (FOCE-I) method and expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distribution from data sets having a low number of subjects. In this study, 100 data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data of theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed effect and random effect) estimates showed that all four methods performed equally at the lower IIV levels, while the FOCE-I method performed better than the other EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV.
Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
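The two comparison metrics named above have standard definitions, sketched here in a generic form (the replicate values are illustrative, not the study's estimates):

```python
import numpy as np

def ree(est, true):
    # relative estimation error, in percent, per replicate
    return 100.0 * (np.asarray(est) - true) / true

def rrmse(est, true):
    # relative root mean squared error, in percent, across replicates
    return 100.0 * np.sqrt(np.mean(((np.asarray(est) - true) / true) ** 2))

estimates = [0.9, 1.1, 1.0]  # three replicate estimates of a true value of 1.0
```

REE preserves the sign of each replicate's error and so exposes bias, while rRMSE aggregates bias and imprecision into a single percentage, which is why the study reports both.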
EVALUATING THE IMPORTANCE OF FACTORS IN ANY GIVEN ORDER OF FACTORING.
Humphreys, L G; Tucker, L R; Dachler, P
1970-04-01
A methodology has been described and illustrated for obtaining an evaluation of the importance of the factors in a particular order of factoring that does not require factoring beyond that order. For example, one can estimate the intercorrelations of the original measures with the perturbations of the first-order factor held constant or, the reverse, estimate the contribution to the intercorrelations of the original measures from the first-order factors alone. Similar operations are possible at higher orders.
NASA Astrophysics Data System (ADS)
Poulter, B.; Ciais, P.; Joetzjer, E.; Maignan, F.; Luyssaert, S.; Barichivich, J.
2015-12-01
Accurately estimating forest biomass and forest carbon dynamics requires new integrated remote sensing, forest inventory, and carbon cycle modeling approaches. Presently, there is an increasing and urgent need to reduce forest biomass uncertainty in order to meet the requirements of carbon mitigation treaties, such as Reducing Emissions from Deforestation and forest Degradation (REDD+). Here we describe a new parameterization and assimilation methodology used to estimate tropical forest biomass using the ORCHIDEE-CAN dynamic global vegetation model. ORCHIDEE-CAN simulates carbon uptake and allocation to individual trees using a mechanistic representation of photosynthesis, respiration and other first-order processes. The model is first parameterized using forest inventory data to constrain background mortality rates, i.e., self-thinning, and productivity. Satellite remote sensing data for forest structure, i.e., canopy height, is used to constrain simulated forest stand conditions using a look-up table approach to match canopy height distributions. The resulting forest biomass estimates are provided for spatial grids that match REDD+ project boundaries and aim to provide carbon estimates for the criteria described in the IPCC Good Practice Guidelines Tier 3 category. With the increasing availability of forest structure variables derived from high-resolution LIDAR, RADAR, and optical imagery, new methodologies and applications with process-based carbon cycle models are becoming more readily available to inform land management.
Experience in estimating neutron poison worths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, R.T.; Congdon, S.P.
1989-01-01
Gadolinia, ¹³⁵Xe, ¹⁴⁹Sm, control rod, and soluble boron are five neutron poisons that may appear in light water reactor assemblies. Reliable neutron poison worth estimation is useful for evaluating core operating strategies, fuel cycle economics, and reactor safety design. Based on physical presence, neutron poisons can be divided into two categories: local poisons and global poisons. Gadolinia and control rod are local poisons, and ¹³⁵Xe, ¹⁴⁹Sm, and soluble boron are global poisons. The first-order perturbation method is commonly used to estimate nuclide worths in fuel assemblies. It is well known, however, that the first-order perturbation method was developed for small perturbations, such as the perturbation due to weak absorbers, and that neutron poisons are not weak absorbers. The authors have developed an improved method to replace the first-order perturbation method, which yields very poor results, for estimating local poison worths. It has also been shown that the first-order perturbation method seems adequate to estimate worths for global poisons caused by flux compensation.
Stochastic Optimal Prediction with Application to Averaged Euler Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bell, John; Chorin, Alexandre J.; Crutchfield, William
Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.
Design of experiments for zeroth and first-order reaction rates.
Amo-Salas, Mariano; Martín-Martín, Raúl; Rodríguez-Aragón, Licesio J
2014-09-01
This work presents optimum designs for reaction rates experiments. In these experiments, time at which observations are to be made and temperatures at which reactions are to be run need to be designed. Observations are performed along time under isothermal conditions. Each experiment needs a fixed temperature and so the reaction can be measured at the designed times. For these observations under isothermal conditions over the same reaction a correlation structure has been considered. D-optimum designs are the aim of our work for zeroth and first-order reaction rates. Temperatures for the isothermal experiments and observation times, to obtain the most accurate estimates of the unknown parameters, are provided in these designs. D-optimum designs for a single observation in each isothermal experiment or for several correlated observations have been obtained. Robustness of the optimum designs for ranges of the correlation parameter and comparisons of the information gathered by different designs are also shown. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
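For a single-parameter version of the first-order model, the optimal observation time can be sketched directly; this toy calculation (uncorrelated single observation, known rate, illustrative k value) omits the correlation structure and temperature design that are the paper's actual subject.

```python
import numpy as np

# For y(t) = exp(-k t) with a single unknown k, the Fisher information of
# one observation is proportional to (dy/dk)^2 = (t * exp(-k t))^2, which
# calculus shows is maximised at t = 1/k; a grid search recovers this.
k = 0.5
t = np.linspace(0.01, 10.0, 2000)
info = (t * np.exp(-k * t)) ** 2
t_star = t[np.argmax(info)]  # D-optimal single observation time
```

The grid maximiser lands at t ≈ 1/k = 2, the time at which one measurement pins down the rate constant most tightly; full D-optimal designs generalise this to several correlated observations and to the temperature dimension.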
Tarazona, J V; Rodríguez, C; Alonso, E; Sáez, M; González, F; San Andrés, M D; Jiménez, B; San Andrés, M I
2015-01-22
This article describes the toxicokinetics of perfluorooctane sulfonate (PFOS) in birds under low repeated dosing, equivalent to 0.085 μg/kg per day, representing environmentally realistic exposure conditions. The best fitting was provided by a simple pseudo monocompartmental first-order kinetics model, regulated by two rates, with a pseudo first-order dissipation half-life of 230 days, accounting for real elimination as well as binding of PFOS to non-exchangeable structures. The calculated assimilation efficiency was 0.66 with confidence intervals of 0.64 and 0.68. The model calculations confirmed that the measured maximum concentrations were still far from the steady state situation, which for this dose regime, was estimated at a value of about 65 μg PFOS/L serum achieved after a theoretical 210 weeks continuous exposure. The results confirm a very different kinetics than that observed in single-dose experiments confirming clear dose-related differences in apparent elimination rates in birds, as described for humans and monkeys; suggesting that a capacity-limited saturable process should also be considered in the kinetic behavior of PFOS in birds. Pseudo first-order kinetic models are highly convenient and frequently used for predicting bioaccumulation of chemicals in livestock and wildlife; the study suggests that previous bioaccumulation models using half-lives obtained at high doses are expected to underestimate the biomagnification potential of PFOS. The toxicokinetic parameters presented here can be used for higher-tier bioaccumulation estimations of PFOS in chickens and as surrogate values for modeling PFOS kinetics in wild bird species. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
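The approach-to-steady-state arithmetic behind the abstract's numbers can be checked with the standard first-order accumulation formula (a generic kinetics identity, using only the half-life reported above):

```python
import math

def fraction_of_steady_state(t_days, t_half_days):
    # first-order accumulation under continuous dosing:
    # f = 1 - exp(-ln(2) * t / t_half)
    return 1.0 - math.exp(-math.log(2.0) * t_days / t_half_days)

# with the reported 230-day pseudo first-order half-life, the theoretical
# 210-week continuous exposure brings serum PFOS to ~99% of steady state
f_210w = fraction_of_steady_state(210 * 7, 230.0)
```

Conversely, exposures lasting only a few half-lives remain well below the plateau, which is consistent with the measured maximum concentrations being "still far from the steady state situation".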
Wu, Ning Ying; Conger, Anthony J; Dygdon, Judith A
2006-04-01
Two hundred fifty-one men and women participated in a study of the prediction of fear of heights, snakes, and public speaking by providing retrospective accounts of multimodal classical conditioning events involving those stimuli. The fears selected for study represent those believed by some to be innate (i.e., heights), prepared (i.e., snakes), and purely experientially learned (i.e., public speaking). This study evaluated the extent to which classical conditioning experiences in direct, observational, and verbal modes contributed to the prediction of the current level of fear severity. Subjects were asked to describe their current level of fear and to estimate their experience with fear response-augmenting events (first- and higher-order aversive pairings) and fear response-moderating events (first- and higher-order appetitive pairings, and pre- and post-conditioning neutral presentations) in direct, observational, and verbal modes. For each stimulus, fear was predictable from direct response-augmenting events and prediction was enhanced by the inclusion of response-moderating events. Furthermore, for each fear, maximum prediction was attained by the addition of variables tapping experiences in the observational and/or verbal modes. Conclusions are offered regarding the importance of including response-augmenting and response-moderating events in all three modes in both research and clinical applications of classical conditioning.
On the Possibility of Ill-Conditioned Covariance Matrices in the First-Order Two-Step Estimator
NASA Technical Reports Server (NTRS)
Garrison, James L.; Axelrod, Penina; Kasdin, N. Jeremy
1997-01-01
The first-order two-step nonlinear estimator, when applied to a problem of orbital navigation, is found to occasionally produce first step covariance matrices with very low eigenvalues at certain trajectory points. This anomaly is the result of the linear approximation to the first step covariance propagation. The study of this anomaly begins with expressing the propagation of the first and second step covariance matrices in terms of a single matrix. This matrix is shown to have a rank equal to the difference between the number of first step states and the number of second step states. Furthermore, under some simplifying assumptions, it is found that the basis of the column space of this matrix remains fixed once the filter has removed the large initial state error. A test matrix containing the basis of this column space and the partial derivative matrix relating first and second step states is derived. This square test matrix, which has dimensions equal to the number of first step states, numerically drops rank at the same locations that the first step covariance does. It is formulated in terms of a set of constant vectors (the basis) and a matrix which can be computed from a reference trajectory (the partial derivative matrix). A simple example problem involving dynamics which are described by two states and a range measurement illustrates the cause of this anomaly and the application of the aforementioned numerical test in more detail.
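Numerical rank drops of the kind described above are commonly detected with a singular-value test; a minimal sketch (the tolerance and matrices are illustrative, not the paper's):

```python
import numpy as np

def numerical_rank(M, tol=1e-10):
    """Numerical rank via singular values: count those above tol * largest.
    The same kind of test flags trajectory points where the first step
    covariance (or the paper's square test matrix) drops rank."""
    s = np.linalg.svd(np.asarray(M, dtype=float), compute_uv=False)
    return int(np.sum(s > tol * s[0]))
```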
On High-Order Radiation Boundary Conditions
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1995-01-01
In this paper we develop the theory of high-order radiation boundary conditions for wave propagation problems. In particular, we study the convergence of sequences of time-local approximate conditions to the exact boundary condition, and subsequently estimate the error in the solutions obtained using these approximations. We show that for finite times the Padé approximants proposed by Engquist and Majda lead to exponential convergence if the solution is smooth, but that good long-time error estimates cannot hold for spatially local conditions. Applications in fluid dynamics are also discussed.
Nonlinear spline wavefront reconstruction through moment-based Shack-Hartmann sensor measurements.
Viegers, M; Brunner, E; Soloviev, O; de Visser, C C; Verhaegen, M
2017-05-15
We propose a spline-based aberration reconstruction method through moment measurements (SABRE-M). The method uses first- and second-moment information from the focal spots of the Shack-Hartmann (SH) sensor to reconstruct the wavefront with bivariate simplex B-spline basis functions. Because it provides higher-order local wavefront estimates with quadratic and cubic basis functions, the proposed method can achieve the same accuracy with SH arrays that have a reduced number of subapertures and, correspondingly, larger lenses, which can be beneficial for application in low-light conditions. In numerical experiments the performance of SABRE-M is compared to that of the first-moment method SABRE for aberrations of different spatial orders and for different sizes of the SH array. The results show that SABRE-M is superior to SABRE, in particular for the higher order aberrations, and that SABRE-M can match the performance of SABRE on an SH grid of halved sampling.
A method for direct measurement of the first-order mass moments of human body segments.
Fujii, Yusaku; Shimada, Kazuhito; Maru, Koichi; Ozawa, Junichi; Lu, Rong-Sheng
2010-01-01
We propose a simple and direct method for measuring the first-order mass moment of a human body segment. With the proposed method, the first-order mass moment of the body segment can be directly measured by using only one precision scale and one digital camera. In the dummy mass experiment, the relative standard uncertainty of a single set of measurements of the first-order mass moment is estimated to be 1.7%. The measured value will be useful as a reference for evaluating the uncertainty of the body segment inertial parameters (BSPs) estimated using an indirect method.
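One common way to obtain a segment's first-order mass moment from a single scale reading is a reaction-board moment balance; a hedged sketch under that assumption (a simple pivot-and-scale configuration, not necessarily the authors' exact setup):

```python
def moment_change_from_scale(delta_scale_N, board_length_m, g=9.80665):
    """Change in first-order mass moment (kg*m) about the pivot when a body
    segment is repositioned on a board supported by a pivot and a scale:
    moment balance about the pivot gives delta_M = delta_F * L / g.
    (Hypothetical reaction-board setup for illustration.)"""
    return delta_scale_N * board_length_m / g
```

A scale-reading change of one kilogram-force (9.80665 N) on a 2 m board corresponds to a 2 kg·m change in the first-order mass moment.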
Enhancing second-order conditioning with lesions of the basolateral amygdala.
Holland, Peter C
2016-04-01
Because the occurrence of primary reinforcers in natural environments is relatively rare, conditioned reinforcement plays an important role in many accounts of behavior, including pathological behaviors such as the abuse of alcohol or drugs. As a result of pairing with natural or drug reinforcers, initially neutral cues acquire the ability to serve as reinforcers for subsequent learning. Accepting a major role for conditioned reinforcement in everyday learning is complicated by the often-evanescent nature of this phenomenon in the laboratory, especially when primary reinforcers are entirely absent from the test situation. Here, I found that under certain conditions, the impact of conditioned reinforcement could be extended by lesions of the basolateral amygdala (BLA). Rats received first-order Pavlovian conditioning pairings of 1 visual conditioned stimulus (CS) with food prior to receiving excitotoxic or sham lesions of the BLA, and first-order pairings of another visual CS with food after that surgery. Finally, each rat received second-order pairings of a different auditory cue with each visual first-order CS. As in prior studies, relative to sham-lesioned control rats, lesioned rats were impaired in their acquisition of second-order conditioning to the auditory cue paired with the first-order CS that was trained after surgery. However, lesioned rats showed enhanced and prolonged second-order conditioning to the auditory cue paired with the first-order CS that was trained before amygdala damage was made. Implications for an enhanced role for conditioned reinforcement by drug-related cues after drug-induced alterations in neural plasticity are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.
Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti
2012-04-07
Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
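A minimal sketch of the two-level ℓ1/ℓ2 mixed norm and its proximal operator, the building block of the first-order iterative schemes mentioned above (rows index sources, columns index time samples; illustrative only):

```python
import numpy as np

def l21_norm(X):
    """Two-level l1/l2 mixed norm: sum over sources (rows) of the l2 norm over time."""
    return float(np.sum(np.sqrt(np.sum(X ** 2, axis=1))))

def prox_l21(X, alpha):
    """Proximal operator of alpha * l21: row-wise group soft-thresholding.
    Rows with l2 norm below alpha are zeroed, promoting spatially focal
    sources with smooth temporal estimates."""
    norms = np.sqrt(np.sum(X ** 2, axis=1, keepdims=True))
    scale = np.maximum(1.0 - alpha / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```

Iterating a gradient step on the data-fit term followed by this prox (ISTA/FISTA style) yields the MxNE solution.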
The NEWS Water Cycle Climatology
NASA Astrophysics Data System (ADS)
Rodell, M.; Beaudoing, H. K.; L'Ecuyer, T.; Olson, W. S.
2012-12-01
NASA's Energy and Water Cycle Study (NEWS) program fosters collaborative research towards improved quantification and prediction of water and energy cycle consequences of climate change. In order to measure change, it is first necessary to describe current conditions. The goal of the first phase of the NEWS Water and Energy Cycle Climatology project was to develop "state of the global water cycle" and "state of the global energy cycle" assessments based on data from modern ground and space based observing systems and data integrating models. The project was a multi-institutional collaboration with more than 20 active contributors. This presentation will describe the results of the water cycle component of the first phase of the project, which include seasonal (monthly) climatologies of water fluxes over land, ocean, and atmosphere at continental and ocean basin scales. The requirement of closure of the water budget (i.e., mass conservation) at various scales was exploited to constrain the flux estimates via an optimization approach that will also be described. Further, error assessments were included with the input datasets, and we examine these in relation to inferred uncertainty in the optimized flux estimates in order to gauge our current ability to close the water budget within an expected uncertainty range.
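The closure constraint can be sketched as a variance-weighted least-squares adjustment with one linear constraint, a simplified stand-in for the project's full optimization (flux names, signs, and numbers below are illustrative):

```python
import numpy as np

def enforce_budget_closure(fluxes, variances, signs):
    """Minimally adjust flux estimates, weighted by their error variances, so
    the signed water budget closes exactly: signs @ fluxes_adj = 0.
    Closed form of variance-weighted least squares with one linear constraint."""
    f = np.asarray(fluxes, dtype=float)
    v = np.asarray(variances, dtype=float)
    s = np.asarray(signs, dtype=float)
    lam = (s @ f) / (s @ (v * s))   # Lagrange multiplier for the constraint
    return f - lam * v * s
```

For a land region one might take precipitation, evapotranspiration, and runoff with signs (+1, −1, −1); fluxes with larger error variances absorb more of the adjustment.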
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, N.; Koller, D.; Halpern, J.Y.
Conditional logics play an important role in recent attempts to investigate default reasoning. This paper investigates first-order conditional logic. We show that, as for first-order probabilistic logic, it is important not to confound statistical conditionals over the domain (such as "most birds fly") and subjective conditionals over possible worlds (such as "I believe that Tweety is unlikely to fly"). We then address the issue of ascribing semantics to first-order conditional logic. As in the propositional case, there are many possible semantics. To study the problem in a coherent way, we use plausibility structures. These provide us with a general framework in which many of the standard approaches can be embedded. We show that while these standard approaches are all the same at the propositional level, they are significantly different in the context of a first-order language. We show that plausibilities provide the most natural extension of conditional logic to the first-order case: we provide a sound and complete axiomatization that contains only the KLM properties and standard axioms of first-order modal logic. We show that most of the other approaches have additional properties, which result in an inappropriate treatment of an infinitary version of the lottery paradox.
NASA Technical Reports Server (NTRS)
Sirkis, James S. (Inventor); Sivanesan, Ponniah (Inventor); Venkat, Venki S. (Inventor)
2001-01-01
A Bragg grating sensor for measuring distributed strain and temperature at the same time comprises an optical fiber having a single mode operating wavelength region and, below a cutoff wavelength of the fiber, a multimode operating wavelength region. A saturated, higher order Bragg grating having first and second order Bragg conditions is fabricated in the optical fiber. The first-order Bragg resonance wavelength of the Bragg grating is within the single mode operating wavelength region of the optical fiber, and the second-order Bragg resonance wavelength is below the cutoff wavelength of the fiber, within the multimode operating wavelength region. The reflectivities of the saturated Bragg grating at the first and second order Bragg conditions differ by less than two orders of magnitude. In use, the first and second order Bragg conditions are simultaneously created in the sensor at the respective wavelengths, and a signal from the sensor is demodulated with respect to each of the wavelengths corresponding to the first and second order Bragg conditions. The two Bragg conditions have different responsivities to strain and temperature, thus allowing two equations for axial strain and temperature to be found in terms of the measured shifts in the primary and second order Bragg wavelengths. This system of equations can be solved for strain and temperature.
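The two-equation solve described at the end can be sketched as a 2x2 linear system; the responsivity coefficients below are illustrative placeholders, not calibrated values for this sensor:

```python
import numpy as np

def strain_and_temperature(d_lambda_1, d_lambda_2, K):
    """Solve [d_lambda_1, d_lambda_2]^T = K @ [strain, dT]^T, where the 2x2
    matrix K holds the distinct strain and temperature responsivities of the
    first- and second-order Bragg conditions (entries here are hypothetical)."""
    return np.linalg.solve(np.asarray(K, dtype=float),
                           np.array([d_lambda_1, d_lambda_2], dtype=float))
```

Because the two Bragg conditions respond differently, K is invertible and the measured wavelength shifts determine strain and temperature uniquely.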
Evaluation of statistical models for forecast errors from the HBV model
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur
2010-04-01
Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order auto-regressive model was constructed for the forecast errors; the parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order auto-regressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately; the errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that the distributions are less reliable than for Model 3. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
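A minimal sketch of the Models 1/2 ingredients: a Box-Cox transform, a least-squares AR(1) fit to the transformed errors, and a one-step-ahead bias correction (illustrative only, without the weather-class conditioning):

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox transformation applied to inflows before the AR(1) error model."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def fit_ar1(errors):
    """Least-squares estimate of the AR(1) coefficient of transformed forecast errors."""
    e = np.asarray(errors, dtype=float)
    return float(e[:-1] @ e[1:] / (e[:-1] @ e[:-1]))

def correct_forecast(forecast, last_error, phi):
    """One-step-ahead correction: the expected next error is phi * last_error."""
    return forecast + phi * last_error
```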
Constrained State Estimation for Individual Localization in Wireless Body Sensor Networks
Feng, Xiaoxue; Snoussi, Hichem; Liang, Yan; Jiao, Lianmeng
2014-01-01
Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health care, security, sports and entertainment. Accurate localization is a fundamental problem in realizing effective location-aware applications such as those above. In this paper the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. A priori knowledge about the geometry among the on-body nodes is incorporated into the traditional filtering system as an additional constraint. The analytical expression of state estimation with a linear constraint to exploit the additional information is derived. Furthermore, for nonlinear constraints, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint to the linear case. Examples comparing first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for higher order nonlinearity as presented in this paper outperforms the first-order solution, and that the constrained IMM-EKF obtains superior estimation to the IMM-EKF without constraint. Another Brownian motion individual localization example also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which gives better filtering performance than NILS without constraint. PMID:25390408
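The linear-constraint case has a well-known closed form, x_c = x̂ − P·Dᵀ(D·P·Dᵀ)⁻¹(D·x̂ − d); a sketch (the paper's specific on-body constraint geometry is not reproduced here):

```python
import numpy as np

def project_estimate(x_hat, P, D, d):
    """Project an unconstrained estimate x_hat with covariance P onto the
    linear constraint D x = d:
        x_c = x_hat - P D^T (D P D^T)^-1 (D x_hat - d)."""
    x_hat = np.asarray(x_hat, dtype=float)
    D = np.atleast_2d(np.asarray(D, dtype=float))
    gain = P @ D.T @ np.linalg.inv(D @ P @ D.T)
    return x_hat - gain @ (D @ x_hat - np.atleast_1d(d))
```

The projected estimate satisfies the constraint exactly while moving the unconstrained estimate as little as possible in the metric defined by P.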
Efficient estimation of Pareto model: Some modified percentile estimators.
Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali
2018-01-01
The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on the median, the geometric mean, and the expectation of the empirical cumulative distribution function of the first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. Performance of the different estimators is assessed in terms of total mean square error and total relative deviation. It is determined that the modified percentile estimator based on the expectation of the empirical cumulative distribution function of the first-order statistic provides efficient and precise parameter estimates compared to the other estimators considered. The simulation results were further confirmed using two real-life examples where maximum likelihood and moment estimators were also considered.
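A sketch of the classical (unmodified) percentile estimator that the proposed estimators build on, obtained by inverting the Pareto CDF F(x) = 1 − (b/x)^a at two sample quantiles (the quantile choices are illustrative):

```python
import numpy as np

def pareto_percentile_estimates(sample, p1=0.25, p2=0.75):
    """Classical percentile estimator for Pareto(shape a, scale b): invert
    F(x) = 1 - (b/x)**a at two sample quantiles. (The paper's modifications
    replace these with median-, geometric-mean- and first-order-statistic-based
    quantities.)"""
    x1, x2 = np.quantile(np.asarray(sample, dtype=float), [p1, p2])
    a = np.log((1.0 - p1) / (1.0 - p2)) / np.log(x2 / x1)
    b = x1 * (1.0 - p1) ** (1.0 / a)
    return float(a), float(b)
```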
Decomposition of conditional probability for high-order symbolic Markov chains.
Melnik, S S; Usatenko, O V
2017-07-01
The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
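The raw conditional probability function being decomposed can be estimated empirically by counting memory words; a minimal sketch for a finite alphabet:

```python
from collections import Counter

def conditional_probs(sequence, order):
    """Empirical conditional probability P(next symbol | previous `order`
    symbols) for a finite-alphabet sequence, by counting occurrences of
    (memory word, next symbol) pairs."""
    joint, marginal = Counter(), Counter()
    for i in range(len(sequence) - order):
        word = tuple(sequence[i:i + order])
        joint[word + (sequence[i + order],)] += 1
        marginal[word] += 1
    return {key: count / marginal[key[:-1]] for key, count in joint.items()}
```

The paper's decomposition then expresses this function as a sum of memory-function monomials of increasing order rather than estimating each word count directly.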
NASA Astrophysics Data System (ADS)
Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello
2017-11-01
State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method, in combination with a physical model parameter identification method, is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameters automatically at different aging stages, a multi-step model parameter identification method based on lexicographic optimization is designed specifically for electric vehicle operating conditions. As the battery available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current, as well as the average squared current, is modeled. The SOE at different operating conditions and different aging stages is estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for electric vehicle online applications.
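As a simplified stand-in for the model-based estimator (which uses a fractional-order battery model and an adaptive EKF), a plain energy-counting update illustrates the quantity being tracked; the parameters below are hypothetical:

```python
def update_soe(soe, voltage_V, current_A, dt_s, total_energy_Wh):
    """Plain energy-counting state-of-energy update: subtract the energy drawn
    over dt (discharge current positive) from the nominal total energy.
    Model-based estimators like AFEKF correct the drift this open-loop
    integration accumulates."""
    delta_Wh = voltage_V * current_A * dt_s / 3600.0
    return soe - delta_Wh / total_energy_Wh
```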
Hydrostatic Bearing Pad Maximum Load and Overturning Conditions for the 70-meter Antenna
NASA Technical Reports Server (NTRS)
Mcginness, H. D.
1985-01-01
The reflector diameters of the 64-m antennas were increased to 70 m. In order to evaluate the minimum film thickness of the hydrostatic bearing which supports the antenna weight, it is first necessary to have a good estimate of the maximum operational load on the most heavily loaded bearing pad. The maximum hydrostatic bearing load is shown to be sufficiently small, and the ratios of stabilizing to overturning moments are ample.
Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats
2015-05-01
Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartment model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
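A hedged sketch of the stochastic one-compartment idea via Euler-Maruyama integration, with linear (rather than the paper's nonlinear) elimination and illustrative parameters, showing the separate system-noise and measurement-noise terms that the method distinguishes:

```python
import numpy as np

def simulate_sde_onecomp(k_a, k_e, dose, sigma_w, sigma_e,
                         t_end=10.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of a one-compartment model with first-order
    input, linear elimination (a simplification of the paper's nonlinear
    elimination), system noise sigma_w on the dynamics, and additive
    measurement noise sigma_e."""
    rng = np.random.default_rng(seed)
    n = int(round(t_end / dt))
    c = np.zeros(n + 1)          # central-compartment concentration
    a = dose                     # amount remaining at the absorption site
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))                   # Wiener increment
        c[i + 1] = c[i] + (k_a * a - k_e * c[i]) * dt + sigma_w * dw
        a -= k_a * a * dt
    y = c + rng.normal(0.0, sigma_e, size=n + 1)            # noisy observations
    return c, y
```

Setting sigma_w = 0 recovers the deterministic ODE; fitting sigma_w and sigma_e separately is what lets the mixed effects framework attribute residual variability to model uncertainty versus measurement error.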
Kurtosis Approach Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubberud, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method, subject to higher-order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: Independent Component Analysis, Kurtosis, Higher-order statistics.
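The fourth-order statistic driving such separation criteria is the (excess) kurtosis; a minimal estimator sketch:

```python
import numpy as np

def excess_kurtosis(x):
    """Normalized fourth-order cumulant E[(x-mu)^4]/sigma^4 - 3; zero for a
    Gaussian signal, so its deviation from zero is what kurtosis-based
    separation criteria maximize."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    return float(np.mean(xc ** 4) / np.mean(xc ** 2) ** 2 - 3.0)
```

Sub-Gaussian sources (e.g. uniform, excess kurtosis −1.2) and super-Gaussian sources (positive excess kurtosis) both move the statistic away from the Gaussian value of zero.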
Pavlovian second-order conditioned analgesia.
Ross, R T
1986-01-01
Three experiments with rat subjects assessed conditioned analgesia in a Pavlovian second-order conditioning procedure by using inhibition of responding to thermal stimulation as an index of pain sensitivity. In Experiment 1, rats receiving second-order conditioning showed longer response latencies during a test of pain sensitivity in the presence of the second-order conditioned stimulus (CS) than rats receiving appropriate control procedures. Experiment 2 found that extinction of the first-order CS had no effect on established second-order conditioned analgesia. Experiment 3 evaluated the effects of post second-order conditioning pairings of morphine and the shock unconditioned stimulus (US). Rats receiving paired morphine-shock presentations showed significantly shorter response latencies during a hot-plate test of pain sensitivity in the presence of the second-order CS than did groups of rats receiving various control procedures; second-order analgesia was attenuated. These data extend the associative account of conditioned analgesia to second-order conditioning situations and are discussed in terms of the mediation of both first- and second-order analgesia by an association between the CS and a representation or expectancy of the US, which may directly activate endogenous pain inhibition systems.
Estimation of homogeneous nucleation flux via a kinetic model
NASA Technical Reports Server (NTRS)
Wilcox, C. F.; Bauer, S. H.
1991-01-01
The proposed kinetic model for condensation under homogeneous conditions, and the onset of unidirectional cluster growth in supersaturated gases, does not suffer from the conceptual flaws that characterize classical nucleation theory. When a full set of simultaneous rate equations is solved, a characteristic time emerges, for each cluster size, at which the production rate of a cluster and its rate of conversion to the next size (n + 1) are equal. Procedures for estimating the essential parameters are proposed; steady-state condensation fluxes J_kin^ss are evaluated. Since there are practical limits to the cluster size that can be incorporated in the set of simultaneous first-order differential equations, a code was developed for computing an approximate J_th^ss based on estimates of a 'constrained equilibrium' distribution, and identification of its minimum.
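The truncated set of simultaneous first-order rate equations described above can be sketched as a toy birth-death chain over cluster sizes. The rate constants, the size cutoff, and the initial condition below are illustrative assumptions, not the authors' model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy birth-death chain over cluster sizes n = 0..N_MAX-1 (hypothetical rates):
# a cluster grows by one unit at rate k_a[n] and shrinks at rate k_d[n].
N_MAX = 20
k_a = 1.0 * np.ones(N_MAX)   # growth rates (illustrative)
k_d = 0.5 * np.ones(N_MAX)   # decay rates (illustrative)

def rhs(t, N):
    dN = np.zeros_like(N)
    for n in range(N_MAX):
        gain = k_a[n - 1] * N[n - 1] if n > 0 else 0.0
        grow = k_a[n] * N[n] if n < N_MAX - 1 else 0.0   # no growth past cutoff
        decay = k_d[n] * N[n] if n > 0 else 0.0
        back = k_d[n + 1] * N[n + 1] if n < N_MAX - 1 else 0.0
        dN[n] = gain - grow - decay + back
    return dN

N0 = np.zeros(N_MAX)
N0[0] = 1.0                  # all population starts at the smallest size
sol = solve_ivp(rhs, (0.0, 5.0), N0)
```

Because the chain only moves population between sizes, the total population is conserved, which mirrors the practical need to truncate the system at a finite maximum cluster size.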
Revisiting Boundary Perturbation Theory for Inhomogeneous Transport Problems
Favorite, Jeffrey A.; Gonzalez, Esteban
2017-03-10
Adjoint-based first-order perturbation theory is applied again to boundary perturbation problems. Rahnema developed a perturbation estimate that gives an accurate first-order approximation of a flux or reaction rate within a radioactive system when the boundary is perturbed. When the response of interest is the flux or leakage current on the boundary, the Roussopoulos perturbation estimate has long been used. The Rahnema and Roussopoulos estimates differ in one term. Our paper shows that the Rahnema and Roussopoulos estimates can be derived consistently, using different responses, from a single variational functional (due to Gheorghiu and Rahnema), resolving any apparent contradiction. In analytic test problems, Rahnema’s estimate and the Roussopoulos estimate produce exact first derivatives of the response of interest when appropriately applied. We also present a realistic, nonanalytic test problem.
Nucleus-size pinning for determination of nucleation free-energy barriers and nucleus geometry
NASA Astrophysics Data System (ADS)
Sharma, Abhishek K.; Escobedo, Fernando A.
2018-05-01
Classical Nucleation Theory (CNT) has recently been used in conjunction with a seeding approach to simulate nucleation phenomena at small-to-moderate supersaturation conditions when large free-energy barriers ensue. In this study, the conventional seeding approach [J. R. Espinosa et al., J. Chem. Phys. 144, 034501 (2016)] is improved by a novel, more robust method to estimate nucleation barriers. Inspired by the interfacial pinning approach [U. R. Pedersen, J. Chem. Phys. 139, 104102 (2013)] used before to determine conditions where two phases coexist, the seed of the incipient phase is pinned to a preselected size to iteratively drive the system toward the conditions where the seed becomes a critical nucleus. The proposed technique is first validated by estimating the critical nucleation conditions for the disorder-to-order transition in hard spheres and then applied to simulate and characterize the highly non-trivial (prolate) morphology of the critical crystal nucleus in hard gyrobifastigia. A generalization of CNT is used to account for nucleus asphericity and predict nucleation free-energy barriers for gyrobifastigia. These predictions of nuclei shape and barriers are validated by independent umbrella sampling calculations.
Multi-assortment rhythmic production planning and control
NASA Astrophysics Data System (ADS)
Skolud, B.; Krenczyk, D.; Zemczak, M.
2015-11-01
A method for production planning in a repetitive manufacturing system, which allows for estimating whether work orders can be processed in due time, is presented. Two approaches are compared: the first is the one-piece flow elaborated at Toyota; the second, elaborated by the authors, consists in defining sufficient conditions that filter all solutions and provide a set of admissible solutions for both the client and the producer. In the paper, attention is focused on buffer allocation. Illustrative examples are presented.
Kim, Daewook; Kim, Dojin; Hong, Keum-Shik; Jung, Il Hyo
2014-01-01
The first objective of this paper is to prove the existence and uniqueness of global solutions for a Kirchhoff-type wave equation with nonlinear dissipation of the form Ku″ + M(|A^{1/2}u|²)Au + g(u′) = 0 under suitable assumptions on K, A, M(·), and g(·). Next, we derive decay estimates of the energy under some growth conditions on the nonlinear dissipation g. Lastly, numerical simulations are given in order to verify the analytical results.
A Numerical Study of Spray Injected in a Gas Turbine Lean Pre-Mixed Pre-Vaporized Combustor
NASA Astrophysics Data System (ADS)
Amoresano, Amedeo; Cameretti, Maria Cristina; Tuccillo, Raffaele
2015-04-01
The authors have performed a numerical study to investigate the spray evolution in a modern gas turbine combustor of the Lean Pre-Mixed Pre-vaporized type. The CFD tool is able to simulate the injection conditions, by isolating and studying some specific phenomena. The calculations have been performed by using a 3-D fluid dynamic code, the FLUENT flow solver, by choosing the injection models on the basis of a comparative analysis with some experimental data, in terms of droplet diameters, obtained by PDA technique. In a first phase of the investigation, the numerical simulation refers to non-evaporating flow conditions, in order to validate the estimation of the fundamental spray parameters. Next, the calculations employ boundary conditions close to those occurring in the actual combustor operation, in order to predict the fuel vapour distribution throughout the premixing chamber. The results obtained allow the authors to perform combustion simulation in the whole domain.
NASA Astrophysics Data System (ADS)
Li, Xiaoyu; Fan, Guodong; Pan, Ke; Wei, Guo; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello
2017-11-01
The design of a lumped parameter battery model preserving physical meaning is especially desired by automotive researchers and engineers due to the strong demand for battery system control, estimation, diagnosis and prognostics. In light of this, a novel simplified fractional order electrochemical model is developed for electric vehicle (EV) applications in this paper. In the model, a general fractional order transfer function is designed for the solid phase lithium ion diffusion approximation. The dynamic characteristics of the electrolyte concentration overpotential are approximated by a first-order resistance-capacitor transfer function in the electrolyte phase. The Ohmic resistances and electrochemical reaction kinetics resistance are simplified to a lumped Ohmic resistance parameter. Overall, the number of model parameters is reduced from 30 to 9, yet the accuracy of the model is still guaranteed. In order to address the dynamics of the phase-change phenomenon in the active particle during charging and discharging, variable solid-state diffusivity is taken into consideration in the model. Also, the observability of the model is subsequently analyzed on two types of lithium ion batteries. Results show the fractional order model with variable solid-state diffusivity agrees very well with experimental data at various current input conditions and is suitable for electric vehicle applications.
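The first-order resistance-capacitor element used above for the electrolyte concentration overpotential can be sketched as a discrete-time update. The values of R, C, the time step, and the current profile below are illustrative, not parameters identified in the paper:

```python
import numpy as np

# Discrete-time sketch of a first-order resistance-capacitor (RC) element,
# which models an overpotential v obeying dv/dt = (R*i - v)/tau.
# R, C, dt and the current profile are hypothetical values.
R, C, dt = 0.01, 2000.0, 1.0      # ohms, farads, seconds (illustrative)
tau = R * C                       # time constant of the RC element

def rc_overpotential(current):
    a = np.exp(-dt / tau)         # exact zero-order-hold discretization
    v, out = 0.0, []
    for i in current:
        v = a * v + R * (1.0 - a) * i
        out.append(v)
    return np.array(out)

v = rc_overpotential(np.full(100, 5.0))   # response to a 5 A current step
```

For a constant current i, the response relaxes exponentially toward the steady-state overpotential R*i, which is the qualitative behavior such a lumped element contributes to the cell voltage.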
Building unbiased estimators from non-gaussian likelihoods with application to shear estimation
Madhavacheril, Mathew S.; McDonald, Patrick; Sehgal, Neelima; ...
2015-01-15
We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong’s estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g| = 0.2.
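For intuition, the first-order recipe of expanding the log-likelihood around a fiducial point and correcting by the inverse Fisher information can be sketched on a toy Gaussian-mean problem. The data model and numbers here are illustrative, not the shear problem itself:

```python
import numpy as np

# Toy first-order (score-based) estimator: expand ln L around a fiducial g0
# and correct by the inverse Fisher information. Gaussian data, known sigma.
rng = np.random.default_rng(0)
g_true, sigma, n = 0.3, 1.0, 10000
data = rng.normal(g_true, sigma, n)

g0 = 0.0                                  # fiducial expansion point
score = np.sum(data - g0) / sigma**2      # d lnL / dg evaluated at g0
fisher = n / sigma**2                     # Fisher information
g_hat = g0 + score / fisher               # first-order estimator

# For a Gaussian mean this reduces exactly to the sample mean:
assert np.isclose(g_hat, data.mean())
```

In this linear toy case the estimator is exactly unbiased for any fiducial g0; in nonlinear problems, such as shear estimation, a residual bias quadratic in the true value remains, as the abstract notes.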
An investigation of using an RQP based method to calculate parameter sensitivity derivatives
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many of our modern day algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second order information about the Lagrangian at the current point, and (2) the estimates assume no change in the active set of constraints. The first of these two problems is addressed here and a new algorithm is proposed that does not require explicit calculation of second order information.
Russell F. Thurow; James T. Peterson; John W. Guzevich
2006-01-01
Despite the widespread use of underwater observation to census stream-dwelling fishes, the accuracy of snorkeling methods has rarely been validated. We evaluated the efficiency of day and night snorkel counts for estimating the abundance of bull trout Salvelinus confluentus in 215 sites within first- to third-order streams. We used a dual-gear...
Estimation in SEM: A Concrete Example
ERIC Educational Resources Information Center
Ferron, John M.; Hess, Melinda R.
2007-01-01
A concrete example is used to illustrate maximum likelihood estimation of a structural equation model with two unknown parameters. The fitting function is found for the example, as are the vector of first-order partial derivatives, the matrix of second-order partial derivatives, and the estimates obtained from each iteration of the Newton-Raphson…
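A minimal sketch of a Newton-Raphson iteration driven by first- and second-order partial derivatives, in the spirit of the example above; the quadratic objective here is illustrative, not the SEM fitting function:

```python
import numpy as np

# Newton-Raphson for minimizing F(theta) = 0.5 * theta' A theta - b' theta,
# using the gradient (first-order partials) and Hessian (second-order
# partials). A and b are illustrative stand-ins for an SEM fitting function.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def grad(theta):      # vector of first-order partial derivatives
    return A @ theta - b

def hess(theta):      # matrix of second-order partial derivatives
    return A

theta = np.zeros(2)
for _ in range(10):   # each iteration solves H step = g and updates theta
    step = np.linalg.solve(hess(theta), grad(theta))
    theta = theta - step
```

For a quadratic objective, a single Newton step lands on the stationary point; for a genuine ML fitting function, the same update is iterated until the gradient is sufficiently close to zero.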
Critical ignition conditions in exothermically reacting systems: first-order reactions
NASA Astrophysics Data System (ADS)
Filimonov, Valeriy Yu.
2017-10-01
In this paper, a comparative analysis of the thermal explosion (TE) critical conditions in the temperature-conversion degree and temperature-time planes was conducted. It was established that the ignition criteria are almost identical only at relatively small values of the Todes parameter. Otherwise, the results of a critical-conditions analysis in the temperature-conversion degree plane may be wrong. An asymptotic method of critical-conditions calculation for first-order reactions is proposed (taking into account the reactant consumption). The degeneration conditions of TE were determined. The calculation of critical conditions for a specific first-order reaction was made. Comparison of the analytical results with numerical calculations and experimental data shows good agreement.
Entropy Splitting for High Order Numerical Simulation of Vortex Sound at Low Mach Numbers
NASA Technical Reports Server (NTRS)
Mueller, B.; Yee, H. C.; Mansour, Nagi (Technical Monitor)
2001-01-01
A method of minimizing numerical errors and improving nonlinear stability and accuracy in low Mach number computational aeroacoustics (CAA) is proposed. The method consists of two levels. At the governing-equation level, we condition the Euler equations in two steps. The first step is to split the inviscid flux derivatives into a conservative and a non-conservative portion that satisfies a so-called generalized energy estimate. This involves the symmetrization of the Euler equations via a transformation of variables that are functions of the physical entropy. Owing to the large disparity of acoustic and stagnation quantities in low Mach number aeroacoustics, the second step is to reformulate the split Euler equations in perturbation form, with the new unknowns being the small changes of the conservative variables with respect to their large stagnation values. At the numerical-scheme level, a stable sixth-order central interior scheme with third-order boundary schemes that satisfy the discrete analogue of the integration-by-parts procedure used in the continuous energy estimate (summation-by-parts property) is employed.
Modeling an alkaline electrolysis cell through reduced-order and loss-estimate approaches
NASA Astrophysics Data System (ADS)
Milewski, Jaroslaw; Guandalini, Giulio; Campanari, Stefano
2014-12-01
The paper presents two approaches to the mathematical modeling of an Alkaline Electrolyzer Cell. The presented models were compared and validated against available experimental results taken from a laboratory test and against literature data. The first modeling approach is based on the analysis of estimated losses due to the different phenomena occurring inside the electrolytic cell, and requires careful calibration of several specific parameters (e.g. those related to the electrochemical behavior of the electrodes) some of which could be hard to define. An alternative approach is based on a reduced-order equivalent circuit, resulting in only two fitting parameters (electrodes specific resistance and parasitic losses) and calculation of the internal electric resistance of the electrolyte. Both models yield satisfactory results with an average error limited below 3% vs. the considered experimental data and show the capability to describe with sufficient accuracy the different operating conditions of the electrolyzer; the reduced-order model could be preferred thanks to its simplicity for implementation within plant simulation tools dealing with complex systems, such as electrolyzers coupled with storage facilities and intermittent renewable energy sources.
Deng, Zhimin; Tian, Tianhai
2014-07-29
Advances in systems biology have produced a large number of sophisticated mathematical models for describing the dynamic properties of complex biological systems. One of the major steps in developing mathematical models is to estimate unknown parameters of the model based on experimentally measured quantities. However, experimental conditions limit the amount of data that is available for mathematical modelling. The number of unknown parameters in mathematical models may be larger than the number of observations. The imbalance between the number of experimental data and the number of unknown parameters makes reverse-engineering problems particularly challenging. To address the issue of inadequate experimental data, we propose a continuous optimization approach for making reliable inference of model parameters. This approach first uses a spline interpolation to generate continuous functions of the system dynamics, as well as the first and second order derivatives of those continuous functions. The expanded dataset is the basis for inferring unknown model parameters using various continuous optimization criteria, including the error of simulation only, the error of both the simulation and the first derivative, or the error of the simulation as well as the first and second derivatives. We use three case studies to demonstrate the accuracy and reliability of the proposed new approach. Compared with the corresponding discrete criteria using experimental data at the measurement time points only, numerical results for the ERK kinase activation module show that the continuous absolute-error criteria using both the function and higher order derivatives generate estimates with better accuracy. This result is also supported by the second and third case studies for the G1/S transition network and the MAP kinase pathway, respectively. This suggests that the continuous absolute-error criteria lead to more accurate estimates than the corresponding discrete criteria.
We also study the robustness property of these three models to examine the reliability of estimates. Simulation results show that the models with estimated parameters using continuous fitness functions have better robustness properties than those using the corresponding discrete fitness functions. The inference studies and robustness analysis suggest that the proposed continuous optimization criteria are effective and robust for estimating unknown parameters in mathematical models.
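The spline-expansion step described above can be sketched as follows: interpolate sparse observations, then evaluate the function and its first and second derivatives on a dense grid for use in a continuous fitness criterion. The synthetic sine data below stand in for measured system dynamics:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Expand a sparse dataset into continuous functions plus derivatives.
t_obs = np.linspace(0.0, 2.0 * np.pi, 25)   # sparse "measurement" times
y_obs = np.sin(t_obs)                       # "measured" system dynamics

spline = CubicSpline(t_obs, y_obs)
t_dense = np.linspace(0.0, 2.0 * np.pi, 200)

f = spline(t_dense)        # interpolated dynamics
df = spline(t_dense, 1)    # first-order derivative
d2f = spline(t_dense, 2)   # second-order derivative

# A continuous absolute-error criterion would compare a candidate model's
# simulated trajectory (and its derivatives) against (f, df, d2f) on t_dense.
```

The dense triple (f, df, d2f) is what lets the fitting criterion penalize derivative mismatch, not just pointwise simulation error at the measurement times.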
An Analysis of Second-Order Autoshaping
ERIC Educational Resources Information Center
Ward-Robinson, Jasper
2004-01-01
Three mechanisms can explain second-order conditioning: (1) The second-order conditioned stimulus (CS2) could activate a representation of the first-order conditioned stimulus (CS1), thereby provoking the conditioned response (CR); The CS2 could enter into an excitatory association with either (2) the representation governing the CR, or (3) with a…
Genescà, Meritxell; Svensson, U Peter; Taraldsen, Gunnar
2015-04-01
Ground reflections cause problems when estimating the direction of arrival of aircraft noise. In traditional methods, based on the time differences between the microphones of a compact array, they may cause a significant loss of accuracy in the vertical direction. This study evaluates the use of first-order directional microphones, instead of omnidirectional, with the aim of reducing the amplitude of the reflected sound. Such a modification allows the problem to be treated as in free field conditions. Although further tests are needed for a complete evaluation of the method, the experimental results presented here show that under the particular conditions tested the vertical angle error is reduced ∼10° for both jet and propeller aircraft by selecting an appropriate directivity pattern. It is also shown that the final level of error depends on the vertical angle of arrival of the sound, and that the estimates of the horizontal angle of arrival are not influenced by the directivity pattern of the microphones nor by the reflective properties of the ground.
Infants' prospective control during object manipulation in an uncertain environment.
Gottwald, Janna M; Gredebäck, Gustaf
2015-08-01
This study investigates how infants use visual and sensorimotor information to prospectively control their actions. We gave 14-month-olds two objects of different weight and observed how high they were lifted, using a Qualisys Motion Capture System. In one condition, the two objects were visually distinct (different color condition) in another they were visually identical (same color condition). Lifting amplitudes of the first movement unit were analyzed in order to assess prospective control. Results demonstrate that infants lifted a light object higher than a heavy object, especially when vision could be used to assess weight (different color condition). When being confronted with two visually identical objects of different weight (same color condition), infants showed a different lifting pattern than what could be observed in the different color condition, expressed by a significant interaction effect between object weight and color condition on lifting amplitude. These results indicate that (a) visual information about object weight can be used to prospectively control lifting actions and that (b) infants are able to prospectively control their lifting actions even without visual information about object weight. We argue that infants, in the absence of reliable visual information about object weight, heighten their dependence on non-visual information (tactile, sensorimotor memory) in order to estimate weight and pre-adjust their lifting actions in a prospective manner.
Identification of transmissivity fields using a Bayesian strategy and perturbative approach
NASA Astrophysics Data System (ADS)
Zanini, Andrea; Tanda, Maria Giovanna; Woodbury, Allan D.
2017-10-01
The paper deals with the crucial problem of groundwater parameter estimation, which is the basis for efficient modeling and reclamation activities. A hierarchical Bayesian approach is developed: it uses Akaike's Bayesian Information Criterion in order to estimate the hyperparameters (related to the covariance model chosen) and to quantify the unknown noise variance. The transmissivity identification proceeds in two steps: the first, called empirical Bayesian interpolation, uses Y* (Y = lnT) observations to interpolate Y values on a specified grid; the second, called empirical Bayesian update, improves the previous Y estimate through the addition of hydraulic head observations. The relationship between the head and lnT has been linearized through a perturbative solution of the flow equation. In order to test the proposed approach, synthetic aquifers from the literature have been considered. The aquifers in question contain a variety of boundary conditions (both Dirichlet and Neumann type) and scales of heterogeneity (σY2 = 1.0 and σY2 = 5.3). The estimated transmissivity fields were compared to the true one. The joint use of Y* and head measurements improves the estimation of Y for both degrees of heterogeneity. Even if the variance of the strong transmissivity field can be considered high for the application of the perturbative approach, the results show the same order of approximation as the non-linear methods proposed in the literature. The procedure allows computing the posterior probability distribution of the target quantities and quantifying the uncertainty in the model prediction. Bayesian updating has advantages related to both the Monte-Carlo (MC) and non-MC approaches. In fact, like MC methods, Bayesian updating allows computing the direct posterior probability distribution of the target quantities, and like non-MC methods it has computational times in the order of seconds.
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
NASA Astrophysics Data System (ADS)
Wang, Chao; Yang, Chuan-sheng
2017-09-01
In this paper, we present a simplified parsimonious higher-order multivariate Markov chain model with a new convergence condition (TPHOMMCM-NCC). Moreover, an estimation method for the parameters in TPHOMMCM-NCC is given. Numerical experiments illustrate the effectiveness of TPHOMMCM-NCC.
The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence
NASA Astrophysics Data System (ADS)
Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo
2018-05-01
The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale is present in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the question that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with the approach adopted by us, which we call the relaxed filtering method and which takes into account the occurrence of the Hurst phenomenon, are larger than both the filtering method and the classical Lumley-Panofsky estimates. Finally, we found that there is no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly in the case of the error of the kinematic momentum flux in unstable conditions, and that of the kinematic sensible heat flux in stable conditions.
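The classical rescaled-range estimator H_R referred to above can be sketched as follows. The window sizes and the white-noise input are illustrative; for white noise the estimate should come out near 0.5, modulo the estimator's known small-sample bias:

```python
import numpy as np

# Classical rescaled-range (R/S) sketch for estimating the Hurst coefficient.
def rs_hurst(x, window_sizes):
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):   # non-overlapping windows
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())             # cumulative departures
            r = z.max() - z.min()                   # range of departures
            s = w.std()                             # window standard deviation
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    # The slope of log(R/S) against log(n) estimates H.
    return np.polyfit(log_n, log_rs, 1)[0]

rng = np.random.default_rng(1)
h = rs_hurst(rng.normal(size=8192), [16, 32, 64, 128, 256, 512])
```

Persistent series (H > 0.5) produce ranges that grow faster than sqrt(n), which is what makes the integral time scale ill-defined and inflates random-error estimates.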
Improved Spatial Registration and Target Tracking Method for Sensors on Multiple Missiles.
Lu, Xiaodong; Xie, Yuting; Zhou, Jun
2018-05-27
Motivated by the fact that current spatial registration methods are unsuitable for three-dimensional (3-D) sensors on high-dynamic platforms, this paper focuses on estimating the registration errors of cooperative missiles and the motion states of a maneuvering target. Two types of errors are discussed: sensor measurement biases and attitude biases. First, an improved Kalman filter in Earth-Centered Earth-Fixed coordinates (ECEF-KF) is proposed to estimate the deviations mentioned above, whose outcomes are further compensated into the error terms. Second, the Pseudo-Linear Kalman Filter (PLKF) and a nonlinear scheme, the Unscented Kalman Filter (UKF), with modified inputs are employed for target tracking. The convergence of the filtering results is monitored by position-judgement logic, and a low-pass first-order filter is selectively introduced before compensation to inhibit jitter in the estimates. In the simulation, the ECEF-KF enhancement is shown to improve the accuracy and robustness of the space alignment, while the conditional-compensation-based PLKF method is demonstrated to deliver the best performance in target tracking.
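The low-pass first-order filter used to inhibit jitter before compensation can be sketched as a one-line IIR update. The smoothing gain alpha below is an illustrative choice, not a tuned value from the paper:

```python
# Low-pass first-order (IIR) filter: each output is a weighted blend of the
# new sample and the previous output, smoothing jitter in the estimates.
def low_pass(samples, alpha=0.2):
    out, y = [], samples[0]
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y   # first-order update toward x
        out.append(y)
    return out

smoothed = low_pass([0.0, 1.0, 1.0, 1.0, 1.0])   # response to a step input
```

Smaller alpha gives heavier smoothing at the cost of a slower response, the trade-off behind introducing the filter only once the estimates have converged.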
NASA Astrophysics Data System (ADS)
Li, Liangliang; Huang, Yu; Chen, Goong; Huang, Tingwen
If a second order linear hyperbolic partial differential equation in one-space dimension can be factorized as a product of two first order operators, and if the two first order operators commute, with one boundary condition being of the van der Pol type and the other being linear, one can establish the occurrence of chaos when the parameters enter a certain regime [Chen et al., 2014]. However, if the commutativity of the two first order operators fails to hold, then the treatment in [Chen et al., 2014] no longer works, and significant new challenges arise in determining nonlinear boundary conditions that engender chaos. In this paper, we show that by incorporating a linear memory effect, a nonlinear van der Pol boundary condition can cause chaotic oscillations when the parameter enters a certain regime. Numerical simulations illustrating chaotic oscillations are also presented.
A model to estimate insulin sensitivity in dairy cows.
Holtenius, Paul; Holtenius, Kjell
2007-10-11
Impairment of the insulin regulation of energy metabolism is considered a key etiologic component of metabolic disturbances. Methods for studying insulin sensitivity are thus highly topical. There are clear indications that reduced insulin sensitivity contributes to the metabolic disturbances that occur especially among obese lactating cows. Direct measurements of insulin sensitivity are laborious and not suitable for epidemiological studies. We have therefore adopted an indirect method, originally developed for humans, to estimate insulin sensitivity in dairy cows. The method, the "Revised Quantitative Insulin Sensitivity Check Index" (RQUICKI), is based on plasma concentrations of glucose, insulin and free fatty acids (FFA), and it generates good linear correlations with different estimates of insulin sensitivity in human populations. We hypothesized that the RQUICKI method could be used as an index of insulin function in lactating dairy cows. We calculated RQUICKI in 237 apparently healthy dairy cows from 20 commercial herds. All cows included were in their first 15 weeks of lactation. RQUICKI was not affected by the homeorhetic adaptations in energy metabolism that occurred during the first 15 weeks of lactation. In a cohort of 24 experimental cows fed to obtain different body conditions at parturition, RQUICKI was lower in early lactation in cows with a high body condition score, suggesting disturbed insulin function in obese cows. The results indicate that RQUICKI might be used to identify lactating cows with disturbed insulin function.
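For reference, a commonly cited formulation of RQUICKI from the human literature is sketched below; the exact units and formulation used in this study are an assumption here:

```python
import math

# One common formulation (assumed, not quoted from the paper):
#   RQUICKI = 1 / [log10(glucose mg/dL) + log10(insulin uU/mL) + log10(FFA mmol/L)]
# Lower values are read as lower insulin sensitivity.
def rquicki(glucose_mg_dl, insulin_uu_ml, ffa_mmol_l):
    return 1.0 / (math.log10(glucose_mg_dl)
                  + math.log10(insulin_uu_ml)
                  + math.log10(ffa_mmol_l))

value = rquicki(90.0, 10.0, 0.5)   # plausible resting values, for illustration
```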
NASA Technical Reports Server (NTRS)
Trenchard, M. H. (Principal Investigator)
1980-01-01
Procedures and techniques for providing analyses of meteorological conditions at segments during the growing season were developed for the U.S./Canada Wheat and Barley Exploratory Experiment. The main product and analysis tool is the segment-level climagraph, which depicts the temporal evolution of meteorological variables for the current year compared with climatological normals. The variable values for a segment are estimates derived through objective analysis of values obtained at first-order stations in the region. The procedures and products documented represent a baseline for future Foreign Commodity Production Forecasting experiments.
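One simple form of such an objective analysis is inverse-distance weighting of the station values; the scheme below is an illustrative stand-in, since the experiment's actual analysis method is not described here:

```python
import math

# Inverse-distance-weighted (IDW) interpolation: spread first-order-station
# values onto a segment location, weighting nearby stations more heavily.
def idw(stations, values, target, power=2.0):
    weights = []
    for (x, y), v in zip(stations, values):
        d = math.hypot(target[0] - x, target[1] - y)
        if d == 0.0:
            return v                      # target coincides with a station
        weights.append((d ** -power, v))
    wsum = sum(w for w, _ in weights)
    return sum(w * v for w, v in weights) / wsum

# Segment midway between two stations gets the average of their values.
estimate = idw([(0.0, 0.0), (2.0, 0.0)], [10.0, 20.0], (1.0, 0.0))
```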
ERIC Educational Resources Information Center
DeSarbo, Wayne S.; Park, Joonwook; Scott, Crystal J.
2008-01-01
A cyclical conditional maximum likelihood estimation procedure is developed for the multidimensional unfolding of two- or three-way dominance data (e.g., preference, choice, consideration) measured on ordered successive category rating scales. The technical description of the proposed model and estimation procedure are discussed, as well as the…
NASA Astrophysics Data System (ADS)
Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.
2014-12-01
Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along rough faults; c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
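The summation-by-parts structure underpinning these energy estimates can be illustrated with the classical second-order SBP first-derivative operator (a minimal sketch; the paper's operators are 6th-order in the interior):

```python
# Classical 2nd-order SBP first-derivative operator on n nodes with spacing h.
def sbp_operator(n, h):
    H = [[0.0] * n for _ in range(n)]         # diagonal norm (quadrature) matrix
    for i in range(n):
        H[i][i] = h * (0.5 if i in (0, n - 1) else 1.0)
    D = [[0.0] * n for _ in range(n)]         # first-derivative operator
    D[0][0], D[0][1] = -1.0 / h, 1.0 / h
    D[n - 1][n - 2], D[n - 1][n - 1] = -1.0 / h, 1.0 / h
    for i in range(1, n - 1):
        D[i][i - 1], D[i][i + 1] = -0.5 / h, 0.5 / h
    return H, D

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n, h = 6, 0.1
H, D = sbp_operator(n, h)
Q = matmul(H, D)
# SBP property: Q + Q^T = diag(-1, 0, ..., 0, 1), the discrete analogue of
# integration by parts -- this is what makes discrete energy estimates possible.
S = [[Q[i][j] + Q[j][i] for j in range(n)] for i in range(n)]
```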
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
Compatible diagonal-norm staggered and upwind SBP operators
NASA Astrophysics Data System (ADS)
Mattsson, Ken; O'Reilly, Ossian
2018-01-01
The main motivation of the present study is to achieve a provably stable high-order accurate finite difference discretisation of linear first-order hyperbolic problems on a staggered grid. The use of a staggered grid makes it non-trivial to discretise advective terms. To overcome this difficulty we discretise the advective terms using upwind Summation-By-Parts (SBP) operators, while the remaining terms are discretised using staggered SBP operators. The upwind and staggered SBP operators (for each order of accuracy) are compatible, here meaning that they are based on the same diagonal norms, allowing energy estimates to be formulated. The boundary conditions are imposed using a penalty (SAT) technique, to guarantee linear stability. The resulting SBP-SAT approximations lead to fully explicit ODE systems. The accuracy and stability properties are demonstrated for linear hyperbolic problems in 1D, and for the 2D linearised Euler equations with constant background flow. The newly derived upwind and staggered SBP operators lead to significantly more accurate numerical approximations, compared with the exclusive usage of (previously derived) central-difference first-derivative SBP operators.
Alternative Statistical Frameworks for Student Growth Percentile Estimation
ERIC Educational Resources Information Center
Lockwood, J. R.; Castellano, Katherine E.
2015-01-01
This article suggests two alternative statistical approaches for estimating student growth percentiles (SGP). The first is to estimate percentile ranks of current test scores conditional on past test scores directly, by modeling the conditional cumulative distribution functions, rather than indirectly through quantile regressions. This would…
NASA Astrophysics Data System (ADS)
Zhang, Guoguang; Yu, Zitian; Wang, Junmin
2017-03-01
Yaw rate is a crucial signal for the motion control systems of ground vehicles, yet it may be contaminated by sensor bias. In order to correct the contaminated yaw rate signal and estimate the sensor bias, a robust gain-scheduling observer is proposed in this paper. First, a two-degree-of-freedom (2DOF) vehicle lateral and yaw dynamic model is presented, and a Luenberger-like observer is proposed. To make the observer more applicable to real vehicle driving operations, a 2DOF vehicle model with uncertainties in the tire cornering stiffness coefficients is employed. Further, a gain-scheduling approach and a robustness enhancement are introduced, leading to a robust gain-scheduling observer. A sensor bias detection mechanism is also designed. Case studies are conducted using an electric ground vehicle to assess the performance of signal correction and sensor bias estimation under different scenarios.
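A minimal sketch of bias estimation with a Luenberger-type observer, using a toy first-order yaw model and hand-picked gains rather than the paper's 2DOF model and gain-scheduling:

```python
import math

# Illustrative only: a Luenberger observer with an augmented bias state on a
# toy yaw model r' = -a*r + u (NOT the paper's 2DOF lateral/yaw model).
# The measurement is the yaw rate plus an unknown constant sensor bias.
a, dt = 2.0, 0.01            # toy yaw dynamics and sample period (assumed)
L1, L2 = 3.0, 1.0            # observer gains, chosen so error dynamics are stable
true_bias = 0.5              # constant sensor bias to be estimated

r, r_hat, b_hat = 0.0, 0.0, 0.0
for k in range(20000):
    u = math.sin(0.05 * k * dt)      # arbitrary steering-like input
    r += dt * (-a * r + u)           # true yaw rate
    y = r + true_bias                # biased sensor measurement
    innov = y - (r_hat + b_hat)      # innovation against predicted biased output
    r_hat += dt * (-a * r_hat + u + L1 * innov)
    b_hat += dt * (L2 * innov)       # bias modeled as a constant state
```

The augmented pair (yaw rate, bias) is observable here because the yaw state is dynamic while the bias is constant, so the observer can separate the two.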
A test of different menu labeling presentations.
Liu, Peggy J; Roberto, Christina A; Liu, Linda J; Brownell, Kelly D
2012-12-01
Chain restaurants will soon need to disclose calorie information on menus, but research on the impact of calorie labels on food choices is mixed. This study tested whether calorie information presented in different formats influenced calories ordered and perceived restaurant healthfulness. Participants in an online survey were randomly assigned to a menu with either (1) no calorie labels (No Calories); (2) calorie labels (Calories); (3) calorie labels ordered from low to high calories (Rank-Ordered Calories); or (4) calorie labels ordered from low to high calories that also had red/green circles indicating higher and lower calorie choices (Colored Calories). Participants ordered items for dinner, estimated calories ordered, and rated restaurant healthfulness. Participants in the Rank-Ordered Calories condition and those in the Colored Calories condition ordered fewer calories than the No Calories group. There was no significant difference in calories ordered between the Calories and No Calories groups. Participants in each calorie label condition were significantly more accurate in estimating calories ordered compared to the No Calories group. Those in the Colored Calories group perceived the restaurant as healthier. The results suggest that presenting calorie information in the modified Rank-Ordered or Colored Calories formats may increase menu labeling effectiveness. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Beaudoin, Yanick; Desbiens, André; Gagnon, Eric; Landry, René
2018-01-01
The navigation system of a satellite launcher is of paramount importance. In order to correct the trajectory of the launcher, the position, velocity and attitude must be known with the best possible precision. In this paper, the observability of four navigation solutions is investigated. The first is the INS/GPS pair. Then, attitude reference sensors, such as magnetometers, are added to the INS/GPS solution. The authors have already demonstrated that the reference trajectory could be used to improve the navigation performance. This approach is added to the two previously mentioned navigation systems. For each navigation solution, the observability is analyzed with different sensor error models. First, sensor biases are neglected. Then, sensor biases are modelled as random walks and as first-order Markov processes. The observability is tested with the rank and condition number of the observability matrix, the time evolution of the covariance matrix, and sensitivity to measurement outliers. The covariance matrix is exploited to evaluate the correlation between states in order to detect structural unobservability problems. Finally, when an unobservable subspace is detected, the result is verified with a theoretical analysis of the navigation equations. The results show that evaluating only the observability of a model does not guarantee the ability of the aiding sensors to correct the INS estimates within the mission time. The analysis of the covariance matrix time evolution can be a powerful tool to detect this situation; however, in some cases the problem is only revealed by a sensitivity to measurement outlier test. None of the tested solutions provides GPS position bias observability. For the considered mission, modelling the sensor biases as random walks or as Markov processes gives equivalent results. Relying on the reference trajectory can improve the precision of the roll estimates. However, in the context of a satellite launcher, the roll estimation error and gyroscope bias are only observable if attitude reference sensors are present.
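The rank test of the observability matrix mentioned above can be sketched for a generic discrete-time linear system (toy matrices, not the launcher's navigation model):

```python
# Observability test for x[k+1] = A x[k], y[k] = C x[k]:
# build O = [C; CA; ...; CA^(n-1)] and check its rank.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def observability_matrix(A, C):
    n = len(A)
    rows, block = [], C
    for _ in range(n):
        rows.extend(block)
        block = matmul(block, A)
    return rows

def rank(M, tol=1e-10):
    # Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = max(range(r, len(M)), key=lambda i: abs(M[i][col]), default=None)
        if piv is None or abs(M[piv][col]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][col]) > tol:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Toy 2-state system: a decaying state plus a constant sensor bias, measured
# together -- observable because the state decays while the bias does not.
A = [[0.98, 0.0], [0.0, 1.0]]
C = [[1.0, 1.0]]
O = observability_matrix(A, C)
```

If the state dynamics are replaced by the identity (both states constant), only their sum is seen and the rank drops, signalling structural unobservability.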
A stochastic hybrid systems based framework for modeling dependent failure processes
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the discrete system states are triggered by random shocks. The modeling is then based on Stochastic Hybrid Systems (SHS), whose state space comprises a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations is derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and the Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from the literature, and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework yields an accurate estimation of reliability at lower computational cost than traditional Monte Carlo-based methods. PMID:28231313
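The FOSM and Markov-inequality steps can be anchored with a generic sketch (a scalar limit state with given first two moments, not the paper's SHS moment equations):

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the complementary error function.
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def fosm_failure_prob(mu_g, sigma_g):
    # FOSM: for limit state g (failure when g < 0) with mean mu_g and
    # std sigma_g, the reliability index is beta = mu_g / sigma_g and
    # the failure probability is approximated by Phi(-beta).
    beta = mu_g / sigma_g
    return normal_cdf(-beta)

def markov_bound(mu_x, threshold):
    # Markov's inequality: P(X >= t) <= E[X]/t for nonnegative X --
    # a distribution-free upper bound usable when only the mean is known.
    return min(1.0, mu_x / threshold)

pf = fosm_failure_prob(mu_g=3.0, sigma_g=1.0)   # beta = 3
```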
Ramos, Inês I; Gregório, Bruno J R; Barreiros, Luísa; Magalhães, Luís M; Tóth, Ildikó V; Reis, Salette; Lima, José L F C; Segundo, Marcela A
2016-04-01
An automated oxygen radical absorbance capacity (ORAC) method based on programmable flow injection analysis was developed for the assessment of antioxidant reactivity. The method relies on real-time spectrophotometric monitoring (540 nm) of pyrogallol red (PGR) bleaching mediated by peroxyl radicals in the presence of antioxidant compounds within the first minute of reaction, providing information about their initial reactivity against this type of radical. The ORAC-PGR assay under a programmable flow format affords strict control of reaction conditions, namely reagent mixing, temperature and reaction timing, which are critical parameters for in situ generation of peroxyl radicals from 2,2'-azobis(2-amidinopropane) dihydrochloride (AAPH). The influence of reagent concentrations and programmable flow conditions on reaction development was studied, with application of 37.5 µM PGR and 125 mM AAPH in the flow cell, guaranteeing first-order kinetics towards peroxyl radicals and pseudo-zero-order kinetics towards PGR. The peroxyl-scavenging reactivity of antioxidants, bioactive compounds and phenolic-rich beverages was estimated using the proposed methodology. Recovery assays using synthetic saliva provided values of 90 ± 5% for reduced glutathione. The detection limit, calculated using the standard antioxidant compound Trolox, was 8 μM. RSD values were <3.4% and <4.9% for intra- and inter-assay precision, respectively. Compared to previous batch automated ORAC assays, the developed system also afforded high sampling frequency (29 h(-1)), low operating costs and low generation of waste. Copyright © 2015 Elsevier B.V. All rights reserved.
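The first-order kinetics mentioned above can be anchored with a tiny fitting sketch on a synthetic trace; the assay itself reports initial reactivity rather than this full-decay fit:

```python
import math

# Generic pseudo-first-order bleaching sketch: with the radical flux held
# constant, probe decay follows A(t) = A0 * exp(-k_obs * t), so k_obs is
# the (negated) slope of ln A versus t from a least-squares line fit.
def first_order_rate(times, absorbances):
    ys = [math.log(a) for a in absorbances]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, ys)) \
            / sum((t - tbar) ** 2 for t in times)
    return -slope  # k_obs

ts = [0, 10, 20, 30, 40, 50, 60]               # seconds
As = [1.0 * math.exp(-0.02 * t) for t in ts]   # synthetic trace, k = 0.02 s^-1
k = first_order_rate(ts, As)
```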
CFD determination of flow perturbation boundary conditions for seal rotordynamic modeling
NASA Astrophysics Data System (ADS)
Venkatesan, Ganesh
2002-09-01
A new approach has been developed and utilized to determine the flow field perturbations (i.e. disturbance due to rotor eccentricity and/or motion) upstream of and within a non-contacting seal. The results are proposed for use with bulk-flow perturbation and CFD-perturbation seal rotordynamic models, as well as in fully 3-D CFD models, to specify approximate boundary conditions for the first-order variables at the computational domain inlet. The perturbation quantities were evaluated by subtracting the numerical flow field solutions corresponding to the concentric rotor position from that for an eccentric rotor position. The disturbance pressure quantities predicted from the numerical solutions were validated by comparing with previous pressure measurements. A parametric study was performed to understand the influence of upstream chamber height, seal clearance, shaft speed, whirl speed, zeroth-order streamwise and swirl velocities, and downstream pressure on the distribution of the first-order quantities in the upstream chamber, seal inlet and seal exit regions. Radially bulk-averaged first-order quantities were evaluated in the upstream chamber, as well as at the seal inlet and exit. The results were finally presented in the form of generalized dimensionless boundary condition correlations so that they can be applied to seal rotordynamic models over a wide range of operating conditions and geometries. To examine the effect of the proposed, approximate first-order boundary conditions on the solutions of the fully 3-D CFD rotordynamic models, the first-order boundary condition correlations for the upstream chamber were used to adjust the circumferential distribution of domain inlet values. The benefit of the boundary condition expressions was assessed for two previously measured test cases, one for a gas seal and the other for a liquid seal. 
For the gas seal case, a significant improvement in the prediction of the cross-coupled stiffness, when including the proposed first-order inlet boundary values, was found. In the case of liquid seals the tangential impedance values obtained with boundary condition adjustments showed a very slight improvement for a range of whirl speeds over those obtained without them. The radial impedance values obtained with the new adjustments showed a significant improvement over those obtained without them.
NASA Astrophysics Data System (ADS)
Jiao, J.; Trautz, A.; Zhang, Y.; Illangasekera, T.
2017-12-01
Subsurface flow and transport characterization under data-sparse conditions is addressed by a new and computationally efficient inverse theory that simultaneously estimates parameters, state variables, and boundary conditions. Uncertainty in static data can be accounted for, while the parameter structure can be complex due to process uncertainty. The approach has been successfully extended to inverting transient and unsaturated flows as well as to contaminant source identification under unknown initial and boundary conditions. In one example, by sampling numerical experiments simulating two-dimensional steady-state flow in which a tracer migrates, a sequential inversion scheme first estimates the flow field and permeability structure before the evolution of the tracer plume and the dispersivities are jointly estimated. Compared to traditional inversion techniques, the theory does not use forward simulations to assess model-data misfits; thus, knowledge of the difficult-to-determine site boundary condition is not required. To test the general applicability of the theory, data generated during high-precision intermediate-scale experiments (i.e., at a scale intermediate between the field and column scales) in large synthetic aquifers can be used. The design of such experiments is not trivial, as laboratory conditions have to be selected to mimic natural systems in order to provide useful data, thus requiring a variety of sensors and data collection strategies. This paper presents the design of such an experiment in a synthetic, multi-layered aquifer with dimensions of 242.7 × 119.3 × 7.7 cm.
Estimates of RF-induced erosion at antenna-connected beryllium plasma-facing components in JET
Klepper, C. C.; Borodin, D.; Groth, M.; ...
2016-01-18
Radio-frequency (RF)-enhanced surface erosion of beryllium (Be) plasma-facing components is explored, for the first time, using the ERO code. The code was applied to model the measured RF-enhanced edge Be line emission at JET Be outboard limiters in the presence of high-power ion cyclotron resonance heating (ICRH) in L-mode discharges. In this first modelling study, the RF sheath effect from an ICRH antenna on a magnetically connected limiter region is simulated by adding a constant potential to the local sheath, in an attempt to match measured increases in local Be I and Be II emission of factors of 2-3. It was found that such increases are readily simulated with added potentials in the range of 100-200 V, which is compatible with expected values for potentials arising from rectification of sheath voltage oscillations from ICRH antennas in the scrape-off layer plasma. We also estimated absolute erosion values within the uncertainties in local plasma conditions.
Predictions of first passage times in sparse discrete fracture networks using graph-based reductions
NASA Astrophysics Data System (ADS)
Hyman, J.; Hagberg, A.; Srinivasan, G.; Mohd-Yusof, J.; Viswanathan, H. S.
2017-12-01
We present a graph-based methodology to reduce the computational cost of obtaining first passage times through sparse fracture networks. We derive graph representations of generic three-dimensional discrete fracture networks (DFNs) using the DFN topology and flow boundary conditions. Subgraphs corresponding to the union of the k shortest paths between the inflow and outflow boundaries are identified and transport on their equivalent subnetworks is compared to transport through the full network. The number of paths included in the subgraphs is based on the scaling behavior of the number of edges in the graph with the number of shortest paths. First passage times through the subnetworks are in good agreement with those obtained in the full network, both for individual realizations and in distribution. Accurate estimates of first passage times are obtained with an order of magnitude reduction of CPU time and mesh size using the proposed method.
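The union-of-k-shortest-paths reduction can be sketched with a small path enumerator (plain-dict weighted graph, fine for tiny examples; not a production DFN tool):

```python
import heapq

# Enumerate simple paths between source and target in cost order, then take
# the union of the first k as the reduced "backbone" subgraph.
def k_shortest_paths(graph, src, dst, k):
    paths, heap = [], [(0.0, [src])]
    while heap and len(paths) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == dst:
            paths.append((cost, path))
            continue
        for nxt, w in graph.get(node, {}).items():
            if nxt not in path:               # simple paths only
                heapq.heappush(heap, (cost + w, path + [nxt]))
    return paths

def union_subgraph(paths):
    nodes = set()
    for _, p in paths:
        nodes.update(p)
    return nodes

# Tiny network: "in"/"out" stand in for inflow/outflow boundaries.
graph = {
    "in": {"a": 1.0, "b": 2.0},
    "a":  {"out": 1.0, "b": 0.5},
    "b":  {"out": 1.0},
}
best = k_shortest_paths(graph, "in", "out", k=2)
```

For a real DFN-scale graph a dedicated k-shortest-paths algorithm (e.g. Yen's) would replace this brute-force enumeration; the subgraph-union step is the same.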
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA
Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
Sim, Kok Swee; NorHisham, Syafiq
2016-11-01
A technique based on a linear Least Squares Regression (LSR) model is applied to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. In order to test the accuracy of this technique for SNR estimation, a number of SEM images are initially corrupted with white noise. The autocorrelation functions (ACF) of the original and the corrupted SEM images are formed to serve as the reference point for estimating the SNR value of the corrupted image. The LSR technique is then compared with three existing techniques: nearest neighbourhood, first-order interpolation, and the combination of both. The actual and estimated SNR values of all these techniques are then calculated for comparison. The LSR technique is shown to attain the highest accuracy of the four, with a relatively small absolute difference between the actual and estimated SNR values. SCANNING 38:771-782, 2016. © 2016 Wiley Periodicals, Inc.
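The ACF-based idea can be sketched as follows; the least-squares line over the first few lags is a generic stand-in for the paper's LSR model (fit lags 1-5 and white noise are assumptions here):

```python
import math
import random

# White noise inflates only the zero-lag autocorrelation, so a least-squares
# line fitted through lags 1..5 and extrapolated back to lag 0 estimates the
# noise-free signal power; the lag-0 excess is then the noise power.
def acf(x, lag):
    n = len(x) - lag
    return sum(x[i] * x[i + lag] for i in range(n)) / n

def estimate_snr(x, fit_lags=range(1, 6)):
    ks = list(fit_lags)
    rs = [acf(x, k) for k in ks]
    kbar = sum(ks) / len(ks)
    rbar = sum(rs) / len(rs)
    slope = sum((k - kbar) * (r - rbar) for k, r in zip(ks, rs)) \
            / sum((k - kbar) ** 2 for k in ks)
    signal_power = rbar - slope * kbar       # fitted line evaluated at lag 0
    noise_power = acf(x, 0) - signal_power   # lag-0 excess attributed to noise
    return signal_power / noise_power

# Synthetic check: slow sinusoid (power 0.5) plus white noise (power 0.25).
random.seed(1)
x = [math.sin(2 * math.pi * 0.01 * i) + random.gauss(0, 0.5)
     for i in range(20000)]
snr = estimate_snr(x)                        # true SNR = 0.5 / 0.25 = 2
```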
ERIC Educational Resources Information Center
Andrews, Benjamin James
2011-01-01
The equity properties can be used to assess the quality of an equating. The degree to which expected scores conditional on ability are similar between test forms is referred to as first-order equity. Second-order equity is the degree to which conditional standard errors of measurement are similar between test forms after equating. The purpose of…
Targeted estimation of nuisance parameters to obtain valid statistical inference.
van der Laan, Mark J
2014-01-01
In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. 
As a particular special case, we also demonstrate the required targeting of the propensity score for the inverse probability of treatment weighted estimator using super-learning to fit the propensity score.
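As a rough illustration of the estimand discussed above, the sketch below computes an inverse-probability-of-treatment-weighted (IPTW) estimate of a treatment-specific mean on simulated data. A plain Newton-fitted logistic regression stands in for the data-adaptive super-learner, and every data-generating value is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
W = rng.normal(size=n)                               # baseline covariate
p = 1 / (1 + np.exp(-0.5 * W))                       # true propensity P(A=1|W)
A = rng.binomial(1, p)
Y = 1.0 + 0.5 * W + 2.0 * A + rng.normal(size=n)     # true E[Y(1)] = 3.0

# Fit the propensity score by logistic regression (Newton-Raphson),
# standing in for the super-learner discussed in the abstract.
X = np.column_stack([np.ones(n), W])
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-(X @ beta)))
    grad = X.T @ (A - mu)
    H = X.T @ (X * (mu * (1 - mu))[:, None])
    beta += np.linalg.solve(H, grad)
g = 1 / (1 + np.exp(-(X @ beta)))                    # estimated propensity score

# IPTW estimator of the treatment-specific mean E[Y(1)]
psi_iptw = np.mean(A * Y / g)
print(f"IPTW estimate of E[Y(1)]: {psi_iptw:.2f}")
```

The abstract's point is that a plain fit like this is generally insufficient: without additional targeting of the propensity estimator, the bias of the resulting real-valued estimator need not be second order.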
NASA Astrophysics Data System (ADS)
Peres, David Johnny; Cancelliere, Antonino
2016-04-01
Assessment of shallow landslide hazard is important for appropriate planning of mitigation measures. Generally, the return period of slope instability is used as a quantitative metric to map landslide triggering hazard over a catchment. The most commonly applied approach to estimate such a return period consists of coupling a physically based landslide triggering model (hydrological and slope stability) with rainfall intensity-duration-frequency (IDF) curves. Among the drawbacks of this approach, the following assumptions may be mentioned: (1) prefixed initial conditions, with no regard to their probability of occurrence, and (2) constant-intensity hyetographs. In our work we propose a Monte Carlo simulation approach to investigate the effects of the two above-mentioned assumptions. The approach is based on coupling a physically based hydrological and slope stability model with a stochastic rainfall time series generator. With this methodology, a long series of synthetic rainfall data can be generated and given as input to a physically based landslide triggering model, in order to compute the return period of landslide triggering as the mean inter-arrival time of a factor of safety less than one. In particular, we couple the Neyman-Scott rectangular pulses model for hourly rainfall generation with the TRIGRS v.2 unsaturated model for the computation of the transient response to individual rainfall events. Initial conditions are computed by a water table recession model that links the initial conditions of a given event to the final response of the preceding event, thus taking into account the variable inter-arrival time between storms. One thousand years of synthetic hourly rainfall are generated to estimate return periods of up to 100 years. Applications are first carried out to map landslide triggering hazard in the Loco catchment, located in a highly landslide-prone area of the Peloritani Mountains, Sicily, Italy.
Then a set of additional simulations is performed to compare the results obtained by the traditional IDF-based method with the Monte Carlo ones. Results indicate that both the variability of initial conditions and of intra-event rainfall intensity significantly affect return period estimation. In particular, the common assumption of an initial water table depth at the base of the pervious strata may in practice lead to an overestimation of the return period by up to one order of magnitude, while the assumption of constant-intensity hyetographs may yield an overestimation by a factor of two or three. Hence, it may be concluded that the analysed simplifications involved in the traditional IDF-based approach generally imply a non-conservative assessment of landslide triggering hazard.
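The core Monte Carlo computation described above — return period as the mean inter-arrival time of a factor of safety below one — can be sketched in a few lines. The snippet below is a toy stand-in for the NSRP + TRIGRS chain: exponential storm inter-arrivals and depths, and a crude infinite-slope factor of safety that falls as storm depth raises the water table. Every parameter is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

n_storms = 200_000
interarrival_h = rng.exponential(scale=100.0, size=n_storms)  # mean 100 h between storms
depth_mm = rng.exponential(scale=20.0, size=n_storms)         # mean 20 mm per storm

def factor_of_safety(depth_mm):
    # toy infinite-slope relation: FS declines with relative water-table rise
    c, tan_phi, tan_beta = 0.2, 0.7, 0.6       # cohesion term, friction, slope
    m = np.clip(depth_mm / 240.0, 0.0, 1.0)    # relative water-table rise
    return c + (1.0 - 0.5 * m) * tan_phi / tan_beta

triggered = factor_of_safety(depth_mm) < 1.0
record_years = interarrival_h.sum() / (24 * 365.25)
T_r = record_years / triggered.sum()           # mean inter-arrival of FS < 1 events
print(f"{record_years:.0f}-year synthetic record, return period ~ {T_r:.1f} y")
```

In the paper the factor of safety comes from a transient unsaturated-flow solution with carried-over initial conditions; here a single storm depth drives it, which is exactly the simplification the study is designed to avoid.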
Model Development for Risk Assessment of Driving on Freeway under Rainy Weather Conditions
Cai, Xiaonan; Wang, Chen; Chen, Shengdi; Lu, Jian
2016-01-01
Rainy weather conditions can have significantly negative impacts on freeway driving. However, due to a lack of historical data and monitoring facilities, many regions are unable to establish reliable risk assessment models to identify such impacts. Given this situation, this paper provides an alternative solution in which the risk assessment procedure is developed from a subjective driver questionnaire and its performance is validated using actual crash data. First, an ordered logit model was developed, based on questionnaire data collected from Freeway G15 in China, to estimate the relationship between drivers' perceived risk and factors including vehicle type, rain intensity, traffic volume, and location. Then, weighted driving risk for different conditions was obtained from the model and further divided into four levels of early warning (specified by colors) using a rank order cluster analysis. After that, a risk matrix was established to determine which warning color should be disseminated to drivers under a given condition. Finally, to validate the proposed procedure, actual crash data from Freeway G15 were compared with the safety predictions based on the risk matrix. The results show that the risk matrix obtained in the study is able to predict driving risk consistent with actual safety implications under rainy weather conditions. PMID:26894434
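The ordered-logit-to-warning-color pipeline described above can be sketched as follows. The coefficients, cutpoints, and score thresholds are hypothetical, not the values estimated from the Freeway G15 questionnaire:

```python
import numpy as np

# Hypothetical ordered logit: P(risk level <= j) = logistic(kappa_j - x @ beta)
beta = np.array([0.8, 1.2, 0.6, 0.4])    # vehicle type, rain intensity, volume, location
kappa = np.array([1.0, 2.5, 4.0])        # cutpoints separating 4 ordered risk levels

def level_probs(x):
    cum = 1.0 / (1.0 + np.exp(-(kappa - x @ beta)))  # P(level <= j), j = 1..3
    cum = np.concatenate([cum, [1.0]])
    return np.diff(cum, prepend=0.0)                 # P(level == j), j = 1..4

def warning_color(x):
    # weighted driving risk score mapped to an early-warning color (risk matrix)
    score = level_probs(x) @ np.array([1.0, 2.0, 3.0, 4.0])
    return ["green", "yellow", "orange", "red"][int(np.searchsorted([1.8, 2.4, 3.0], score))]

x_light = np.array([0.0, 0.2, 0.3, 0.5])  # car, light rain, low volume
x_heavy = np.array([1.0, 1.5, 1.0, 0.5])  # truck, heavy rain, high volume
print(warning_color(x_light), warning_color(x_heavy))
```

In the study the thresholds come from a rank order cluster analysis of the weighted risk scores rather than the fixed cut values used here.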
Absolute phase estimation: adaptive local denoising and global unwrapping.
Bioucas-Dias, Jose; Katkovnik, Vladimir; Astola, Jaakko; Egiazarian, Karen
2008-10-10
The paper attacks absolute phase estimation with a two-step approach: the first step applies an adaptive local denoising scheme to the modulo-2π noisy phase; the second step applies a robust phase unwrapping algorithm to the denoised modulo-2π phase obtained in the first step. The adaptive local modulo-2π phase denoising is a new algorithm based on local polynomial approximations. The zero-order and first-order approximations of the phase are calculated in sliding windows of varying size. The zero-order approximation is used for pointwise adaptive window size selection, whereas the first-order approximation is used to filter the phase in the obtained windows. For phase unwrapping, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [IEEE Trans. Image Process. 16, 698 (2007)] to the denoised wrapped phase. Simulations give evidence that the proposed algorithm yields state-of-the-art performance, enabling strong noise attenuation while preserving image details. (c) 2008 Optical Society of America
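A minimal 1-D sketch of the denoise-then-unwrap idea: a fixed-window moving average of the complex phasor stands in for the adaptive varying-window polynomial fit, and `np.unwrap` stands in for the PUMA algorithm. Window size and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
true_phase = np.linspace(0, 12 * np.pi, n)          # absolute phase, several wraps
noisy_wrapped = np.angle(np.exp(1j * (true_phase + 0.4 * rng.normal(size=n))))

# Step 1: local denoising of the modulo-2pi phase, done on the complex phasor
# so that wrap discontinuities do not corrupt the average.
w = 11
kernel = np.ones(w) / w
denoised_wrapped = np.angle(np.convolve(np.exp(1j * noisy_wrapped), kernel, mode="same"))

# Step 2: unwrap the denoised phase.
unwrapped = np.unwrap(denoised_wrapped)

err = unwrapped[w:-w] - true_phase[w:-w]            # ignore window edge effects
print(f"max abs error after denoise+unwrap: {np.abs(err - err.mean()).max():.3f} rad")
```

Denoising first is what makes the unwrapping step reliable: once the sample-to-sample phase noise is well below π, the integer wrap counts can be recovered without error.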
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.
Recharge and groundwater models: An overview
Sanford, W.
2002-01-01
Recharge is a fundamental component of groundwater systems, and in groundwater-modeling exercises recharge is either measured and specified or estimated during model calibration. The most appropriate way to represent recharge in a groundwater model depends upon both physical factors and study objectives. Where the water table is close to the land surface, as in humid climates or regions with low topographic relief, a constant-head boundary condition is used. Conversely, where the water table is relatively deep, as in drier climates or regions with high relief, a specified-flux boundary condition is used. In most modeling applications, mixed-type conditions are more effective, or a combination of the different types can be used. The relative distribution of recharge can be estimated from water-level data only, but flux observations must be incorporated in order to estimate rates of recharge. Flux measurements are based on either Darcian velocities (e.g., stream base-flow) or seepage velocities (e.g., groundwater age). In order to estimate the effective porosity independently, both types of flux measurements must be available. Recharge is often estimated more efficiently when automated inverse techniques are used. Other important applications are the delineation of areas contributing recharge to wells and the estimation of paleorecharge rates using carbon-14.
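The overview's point about needing both flux types can be made concrete: a Darcian (specific) discharge q and a seepage velocity v estimated independently give the effective porosity as n_e = q / v. The numbers below are illustrative, not from the paper:

```python
# Effective porosity from two independent flux estimates:
# q from a Darcian-velocity method (e.g. stream base-flow recharge estimate),
# v from a seepage-velocity method (e.g. carbon-14 age vs. depth).
q = 0.15                 # Darcian flux, m/yr (illustrative)
v = 0.75                 # seepage velocity, m/yr (illustrative)
n_e = q / v              # effective porosity
print(f"effective porosity n_e = {n_e:.2f}")
```

With only one kind of flux observation, q and n_e (or v and n_e) remain confounded, which is why the overview stresses having both.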
NASA Astrophysics Data System (ADS)
Pang, Liping; Goltz, Mark; Close, Murray
2003-01-01
In this note, we applied the temporal moment solutions of [Das and Kluitenberg, 1996. Soil Sci. Am. J. 60, 1724] for one-dimensional advective-dispersive solute transport with linear equilibrium sorption and first-order degradation under time-pulse sources to analyse soil column experimental data. Unlike most other moment solutions, these solutions consider the interplay of degradation and sorption. This permits estimation of a first-order degradation rate constant using the zeroth moment of column breakthrough data, as well as estimation of the retardation factor or sorption distribution coefficient of a degrading solute using the first moment. The method of temporal moments (MOM) was applied to analyse breakthrough data from a laboratory column study of atrazine, hexazinone and rhodamine WT transport in volcanic pumice sand, as well as experimental data from the literature. Transport and degradation parameters obtained using the MOM were compared to parameters obtained by fitting the breakthrough data with an advective-dispersive transport model with equilibrium sorption and first-order degradation, using the nonlinear least-squares curve-fitting program CXTFIT. The results derived from the literature data were also compared with estimates reported in the literature using different equilibrium models. The good agreement suggests that the MOM can provide an additional useful means of parameter estimation for transport involving equilibrium sorption and first-order degradation. We found that the MOM fitted breakthrough curves with tailing better than curve fitting did. However, the MOM analysis requires complete breakthrough curves and relatively frequent data collection to ensure the accuracy of the moments obtained from the breakthrough data.
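The moment computations themselves are simple numerical integrals of the breakthrough curve. The sketch below uses a synthetic Gaussian curve and the no-decay relation R = v·t_mean/L as a simplified stand-in for the Das and Kluitenberg solutions; all numbers are invented:

```python
import numpy as np

t = np.linspace(0.0, 50.0, 2001)                    # time, h
dt = t[1] - t[0]
C = np.exp(-((t - 20.0) ** 2) / (2 * 4.0 ** 2))     # synthetic breakthrough curve

M0 = np.sum(C) * dt                                 # zeroth temporal moment (mass)
M1 = np.sum(t * C) * dt                             # first temporal moment
t_mean = M1 / M0                                    # mean breakthrough time, h

v, L = 1.0, 10.0                                    # pore velocity (cm/h), column length (cm)
R = v * t_mean / L                                  # retardation factor, ignoring decay
print(f"t_mean = {t_mean:.2f} h, R = {R:.2f}")
```

The note's contribution is precisely that for a degrading solute the moment relations must be corrected for the interplay of decay and sorption, so the simple relation above would bias R; the principle of estimating mass loss from M0 and retardation from M1 is the same.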
A new phase correction method in NMR imaging based on autocorrelation and histogram analysis.
Ahn, C B; Cho, Z H
1987-01-01
A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first- and zero-order phase corrections, each applied by inverse multiplication of the estimated phase error. The first-order error is estimated from the phase of the autocorrelation calculated from the complex-valued phase-distorted image, while the zero-order correction factor is extracted from the histogram of the phase distribution of the first-order-corrected image. Since all correction procedures are performed in the spatial domain after completion of data acquisition, no prior adjustments or additional measurements are required. The algorithm is applicable to most phase-involved NMR imaging techniques, including inversion recovery imaging, quadrature modulated imaging, spectroscopic imaging, and flow imaging. Some experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging are shown to demonstrate the usefulness of the algorithm.
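A 1-D sketch of the autocorrelation idea: a real image distorted by a linear (first-order) plus constant (zero-order) phase error, where the lag-1 autocorrelation phase recovers the linear slope. The circular mean used for the zero-order term is a simpler stand-in for the paper's histogram peak, and all values are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 256
f = rng.random(n) + 0.1                        # real, positive "image"
alpha, phi0 = 0.07, 1.3                        # first- and zero-order phase errors
m = f * np.exp(1j * (alpha * np.arange(n) + phi0))

# First-order: the lag-1 autocorrelation of the distorted image has phase alpha,
# because the underlying image term f[x]*f[x-1] is real and positive.
alpha_hat = np.angle(np.sum(m[1:] * np.conj(m[:-1])))
m1 = m * np.exp(-1j * alpha_hat * np.arange(n))   # inverse multiplication

# Zero-order: circular mean of the remaining phase (histogram-peak stand-in).
phi0_hat = np.angle(np.sum(m1))
corrected = m1 * np.exp(-1j * phi0_hat)

residual = np.sum(corrected.imag ** 2) / np.sum(np.abs(corrected) ** 2)
print(f"alpha_hat={alpha_hat:.3f}, phi0_hat={phi0_hat:.3f}, residual imag power={residual:.2e}")
```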
No meditation-related changes in the auditory N1 during first-time meditation.
Barnes, L J; McArthur, G M; Biedermann, B A; de Lissa, P; Polito, V; Badcock, N A
2018-05-01
Recent studies link meditation expertise with enhanced low-level attention, measured through auditory event-related potentials (ERPs). In this study, we tested the reliability and validity of a recent finding that the N1 ERP in first-time meditators is smaller during meditation than non-meditation - an effect not present in long-term meditators. In the first experiment, we replicated the finding in first-time meditators. In two subsequent experiments, we discovered that this finding was not due to stimulus-related instructions, but was explained by an effect of the order of conditions. Extended exposure to the same tones has been linked with N1 decrement in other studies, and may explain the N1 decrement across our two conditions. We give examples of existing meditation and ERP studies that may include similar condition order effects. The role of condition order among first-time meditators in this study indicates the importance of counterbalancing meditation and non-meditation conditions in meditation studies that use event-related potentials. Copyright © 2018 Elsevier B.V. All rights reserved.
Constrained multiple indicator kriging using sequential quadratic programming
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Tercan, A. Erhan
2012-11-01
Multiple indicator kriging (MIK) is a nonparametric method used to estimate conditional cumulative distribution functions (CCDFs). Indicator estimates produced by MIK may not satisfy the order relations of a valid CCDF, which is nondecreasing and bounded between 0 and 1. In this paper a new method is presented that guarantees the order relations of the cumulative distribution functions estimated by multiple indicator kriging. The method is based on minimizing the sum of the kriging variances for each cutoff under unbiasedness and order-relations constraints, and solving the constrained indicator kriging system by sequential quadratic programming. A computer code was written in the Matlab environment to implement the developed algorithm, and the method is applied to thickness data.
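For contrast with the paper's constrained-SQP formulation, the snippet below shows the simpler classical post-correction of order-relation violations: clip the cutoff-by-cutoff estimates to [0, 1], then average an upward and a downward running-extremum pass to restore monotonicity. The raw values are invented:

```python
import numpy as np

def correct_order_relations(F):
    """Post-correct a CCDF estimated cutoff-by-cutoff so it is valid."""
    F = np.clip(np.asarray(F, dtype=float), 0.0, 1.0)
    up = np.maximum.accumulate(F)                 # nondecreasing envelope from below
    down = np.minimum.accumulate(F[::-1])[::-1]   # nondecreasing envelope from above
    return 0.5 * (up + down)                      # average of the two passes

raw = [0.10, 0.35, 0.30, 0.55, 0.50, 0.90, 1.05]  # violates order relations
fixed = correct_order_relations(raw)
print(np.round(fixed, 3))
```

The paper's approach differs in spirit: rather than repairing an invalid estimate after the fact, it builds the order relations into the kriging system itself.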
Kurtosis Approach for Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubberud, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation of post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and approximable by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by gradient descent subject to higher-order statistical constraints. The results of the simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.
Tarazona, J V; Rodríguez, C; Alonso, E; Sáez, M; González, F; San Andrés, M D; Jiménez, B; San Andrés, M I
2016-01-22
This article describes the toxicokinetics of perfluorooctane sulfonate (PFOS) in rabbits under low repeated dosing, equivalent to 0.085 μg/kg per day, and the observed differences between rabbits and chickens. The best fit for both species was provided by a simple pseudo-monocompartmental first-order kinetics model, regulated by two rates and accounting for real elimination as well as binding of PFOS to non-exchangeable structures. Elimination was more rapid in rabbits, with a pseudo first-order dissipation half-life of 88 days compared to the 230 days observed for chickens. By contrast, the calculated assimilation efficiency for rabbits was almost 1, very close to full absorption, significantly higher than the 0.66 (confidence interval 0.64-0.68) observed for chickens. The results confirm very different kinetics from those observed in single-dose experiments, with clear dose-related differences in apparent elimination rates in rabbits, as previously described for humans and other mammals, suggesting the role of a capacity-limited saturable process that yields different kinetic behaviours for PFOS under high-dose versus environmentally relevant low-dose exposure conditions. The model calculations confirmed that the measured maximum concentrations were still far from the steady-state situation, and that the different kinetics between birds and mammals may play a significant role in the biomagnification assessment and potential exposure for humans and predators. For the same dose regime, the steady-state concentration was estimated at about 36 μg PFOS/L serum for rabbits, slightly above one-half of the 65 μg PFOS/L serum estimated for chickens.
The toxicokinetic parameters presented here can be used for higher-tier bioaccumulation estimations of PFOS in rabbits and chickens, as a starting point for human health exposure assessments and as surrogate values for modeling PFOS kinetics in wild mammals and birds in exposure assessments of predatory species. Published by Elsevier Ireland Ltd.
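The pseudo first-order arithmetic in the abstract can be checked directly: a half-life t½ gives the elimination rate k = ln 2 / t½, and the steady-state level scales as assimilated dose over elimination rate. The half-lives and assimilation efficiencies below are the study's; the distribution volume is unknown here, so only the species ratio is computed:

```python
import math

dose = 0.085                      # μg/kg per day (from the study)
k_rabbit = math.log(2) / 88.0     # 1/day, from the 88-day half-life
k_chicken = math.log(2) / 230.0   # 1/day, from the 230-day half-life
a_rabbit, a_chicken = 1.0, 0.66   # assimilation efficiencies (from the study)

# C_ss = a * D / (k * V); with V unknown, compare the species via the ratio a/k.
ratio = (a_rabbit / k_rabbit) / (a_chicken / k_chicken)
print(f"k_rabbit = {k_rabbit:.4f}/d, k_chicken = {k_chicken:.4f}/d, "
      f"rabbit/chicken steady-state ratio ~ {ratio:.2f}")
```

The computed ratio (~0.58) is roughly consistent with the reported steady-state estimates of 36 vs. 65 μg PFOS/L serum (~0.55), assuming comparable distribution volumes.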
A-posteriori error estimation for second order mechanical systems
NASA Astrophysics Data System (ADS)
Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter
2012-06-01
One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. Where safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is very important. In this work, an a-posteriori error estimator for linear first-order systems is extended to error estimation for mechanical second-order systems. Due to the special second-order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be applied to moment-matching-based, Gramian-matrix-based or modal model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.
Near Real-time GNSS-based Ionospheric Model using Expanded Kriging in the East Asia Region
NASA Astrophysics Data System (ADS)
Choi, P. H.; Bang, E.; Lee, J.
2016-12-01
Many applications which utilize radio waves (e.g. navigation, communications, and radio science) are influenced by the ionosphere. The technology to provide global ionospheric maps (GIM) showing ionospheric Total Electron Content (TEC) has advanced through the processing of GNSS data. However, the GIMs have limited spatial resolution (e.g. 2.5° in latitude and 5° in longitude), because they are generated using globally distributed and thus relatively sparse GNSS reference station networks. This study presents a near real-time, high-spatial-resolution TEC model over East Asia using ionospheric observables from both International GNSS Service (IGS) and local GNSS networks together with an expanded kriging method. New signals from multi-constellation GNSS (e.g., GPS L5, Galileo E5) were also used to generate high-precision TEC estimates. The newly proposed estimation method is based on the universal kriging interpolation technique, but integrates TEC data from previous epochs with those from the current epoch to improve TEC estimation performance by increasing ionospheric observability. To propagate previous measurements to the current epoch, we implemented a Kalman filter whose dynamic model was derived from a first-order Gauss-Markov process characterizing temporal ionospheric changes under nominal ionospheric conditions. Along with the TEC estimates at grid points, the method generates confidence bounds on the estimates from the resulting estimation covariance. We also suggest classifying the confidence bounds into several categories so that users can recognize the quality level of the TEC estimates according to the requirements of their applications. This paper examines the performance of the proposed method by obtaining estimation results for both nominal and disturbed ionospheric conditions, and compares these results to those provided by the GIM of the NASA Jet Propulsion Laboratory.
In addition, the estimation results based on the expanded kriging method are compared to the results from the universal kriging method for both nominal and disturbed ionospheric conditions.
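The time-propagation machinery described above — a Kalman filter with first-order Gauss-Markov dynamics — can be sketched for a single scalar state. The correlation time, steady-state variance, and noise levels below are illustrative, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(7)

tau, dt = 3600.0, 30.0            # GM correlation time and epoch interval (s)
phi = np.exp(-dt / tau)           # state transition over one epoch
sigma_p = 2.0                     # steady-state process std (TECU)
Q = sigma_p**2 * (1 - phi**2)     # discrete process noise keeping Var = sigma_p^2
R = 1.0**2                        # measurement noise variance (TECU^2)

# simulate a Gauss-Markov truth and noisy TEC-like measurements
n = 2000
x = np.empty(n); x[0] = 0.0
for k in range(1, n):
    x[k] = phi * x[k - 1] + np.sqrt(Q) * rng.normal()
z = x + np.sqrt(R) * rng.normal(size=n)

xh, P = 0.0, sigma_p**2
est = np.empty(n)
for k in range(n):
    xh, P = phi * xh, phi**2 * P + Q                # predict (carry previous epoch)
    K = P / (P + R)                                 # Kalman gain
    xh, P = xh + K * (z[k] - xh), (1 - K) * P      # update with current measurement
    est[k] = xh

print(f"raw RMS {np.std(z - x):.2f} TECU -> filtered RMS {np.std(est - x):.2f} TECU")
```

The posterior variance P is what supplies the confidence bounds the abstract proposes to report alongside the gridded TEC estimates.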
NASA Astrophysics Data System (ADS)
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate estimation of battery parameters to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent-circuit battery model is employed to describe the battery's dynamic characteristics. Then, the recursive least-squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, ensuring filter stability and reducing convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment provides a robust and accurate battery model and on-line parameter estimation.
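A stripped-down version of the first-order (Thevenin) equivalent-circuit model with an EKF on SOC alone is sketched below. All circuit parameters and the linear OCV curve are illustrative, and the RC-branch voltage is taken as known in the filter for brevity; the paper additionally estimates the model parameters on-line and adds the proportional-integral correction:

```python
import numpy as np

rng = np.random.default_rng(3)

R0, R1, C1 = 0.05, 0.02, 2000.0        # ohmic resistance and RC branch (illustrative)
Qcap = 2.0 * 3600                      # capacity, A*s (2 Ah)
dt = 1.0
a1 = np.exp(-dt / (R1 * C1))

def ocv(soc):                          # hypothetical linear OCV(SOC) curve
    return 3.4 + 0.8 * soc

# simulate a constant-current discharge with noisy terminal-voltage readings
n = 1800
i = np.full(n, 1.0)                    # 1 A discharge
soc = np.empty(n); u1 = np.empty(n)
soc[0], u1[0] = 1.0, 0.0
for k in range(1, n):
    soc[k] = soc[k - 1] - dt * i[k - 1] / Qcap            # coulomb counting (truth)
    u1[k] = a1 * u1[k - 1] + R1 * (1 - a1) * i[k - 1]     # RC-branch voltage
v_meas = ocv(soc) - u1 - R0 * i + 0.01 * rng.normal(size=n)

# EKF on the SOC state (u1 assumed known from the model for brevity)
x, P, Qn, Rn = 0.9, 0.01, 1e-8, 0.01**2   # deliberately wrong initial SOC
for k in range(n):
    x -= dt * i[k] / Qcap                  # predict: coulomb counting
    P += Qn
    H = 0.8                                # d OCV / d SOC for the linear curve
    y = v_meas[k] - (ocv(x) - u1[k] - R0 * i[k])   # voltage innovation
    K = P * H / (H * P * H + Rn)
    x += K * y
    P *= (1 - K * H)

print(f"true SOC {soc[-1]:.3f}, EKF SOC {x:.3f}")
```

The voltage measurement pulls the deliberately wrong initial SOC back toward the truth within a few updates, which is the practical advantage of the EKF over open-loop coulomb counting.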
Iterative initial condition reconstruction
NASA Astrophysics Data System (ADS)
Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias
2017-07-01
Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z = 0, we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤ 0.35 h Mpc⁻¹. The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤ 0.2 h Mpc⁻¹, and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z = 0 and by a factor of 2.5 at z = 0.6, improving standard BAO reconstruction by 70% at z = 0 and 30% at z = 0.6, and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.
NASA Technical Reports Server (NTRS)
Barker, R. E., Jr.; Campbell, K. W.
1985-01-01
The applicability of classical nucleation theory to second (and higher) order thermodynamic transitions in the Ehrenfest sense has been investigated and expressions have been derived upon which the qualitative and quantitative success of the basic approach must ultimately depend. The expressions describe the effect of temperature undercooling, hydrostatic pressure, and tensile stress upon the critical parameters, the critical nucleus size, and critical free energy barrier, for nucleation in a thermodynamic transition of any general order. These expressions are then specialized for the case of first and second order transitions. The expressions for the case of undercooling are then used in conjunction with literature data to estimate values for the critical quantities in a system undergoing a pseudo-second order transition (the glass transition in polystyrene). Methods of estimating the interfacial energy gamma in systems undergoing a first and second order transition are also discussed.
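The undercooling expressions discussed above reduce, for a first-order transition, to the standard classical-nucleation criticals r* = 2γ/ΔG_v and ΔG* = 16πγ³/(3ΔG_v²), with the linear approximation ΔG_v ≈ ΔH_v·ΔT/T_m. The numerical values below are illustrative (roughly copper-like), not taken from the paper:

```python
import math

gamma = 0.18            # interfacial energy, J/m^2 (illustrative)
dH_v = 1.8e9            # volumetric heat of fusion, J/m^3 (illustrative)
T_m = 1358.0            # melting temperature, K
dT = 200.0              # undercooling, K

dG_v = dH_v * dT / T_m                              # driving force, J/m^3
r_star = 2 * gamma / dG_v                           # critical nucleus radius, m
dG_star = 16 * math.pi * gamma**3 / (3 * dG_v**2)   # critical free energy barrier, J

print(f"r* = {r_star * 1e9:.2f} nm, dG* = {dG_star:.2e} J")
```

Both criticals shrink rapidly with undercooling (r* as 1/ΔT, ΔG* as 1/ΔT²), which is why deeper undercooling makes nucleation dramatically easier; the paper's generalization extends this machinery to transitions of higher Ehrenfest order.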
Paul P. Kormanik; Shi-Jean S. Sung; Taryn L. Kormanik; Stanley J. Zarnoch; Scott Schlarbaum
1997-01-01
Heritability estimates (h2) were calculated for first-order lateral root (FOLR) numbers on a family plot mean basis for 5 Quercus species: Q. alba, Q. falcata, Q. michauxii, Q. pagoda, and Q. rubra. All species were grown with the...
Modeling Spatial Dependence of Rainfall Extremes Across Multiple Durations
NASA Astrophysics Data System (ADS)
Le, Phuong Dong; Leonard, Michael; Westra, Seth
2018-03-01
Determining the probability of a flood event in a catchment given that another flood has occurred in a nearby catchment is useful in the design of infrastructure such as road networks that have multiple river crossings. These conditional flood probabilities can be estimated by calculating conditional probabilities of extreme rainfall and then transforming rainfall to runoff through a hydrologic model. Each catchment's hydrological response times are unlikely to be the same, so in order to estimate these conditional probabilities one must consider the dependence of extreme rainfall both across space and across critical storm durations. To represent these types of dependence, this study proposes a new approach for combining extreme rainfall across different durations within a spatial extreme value model using max-stable process theory. This is achieved in a stepwise manner. The first step defines a set of common parameters for the marginal distributions across multiple durations. The parameters are then spatially interpolated to develop a spatial field. Storm-level dependence is represented through the max-stable process for rainfall extremes across different durations. The dependence model shows a reasonable fit between the observed pairwise extremal coefficients and the theoretical pairwise extremal coefficient function across all durations. The study demonstrates how the approach can be applied to develop conditional maps of the return period and return level across different durations.
Modeling of membrane processes for air revitalization and water recovery
NASA Technical Reports Server (NTRS)
Lange, Kevin E.; Foerg, Sandra L.; Dall-Bauman, Liese A.
1992-01-01
Gas-separation and reverse-osmosis membrane models are being developed in conjunction with membrane testing at NASA JSC. The completed gas-separation membrane model extracts effective component permeabilities from multicomponent test data, and predicts the effects of flow configuration, operating conditions, and membrane dimensions on module performance. Variable feed- and permeate-side pressures are considered. The model has been applied to test data for hollow-fiber membrane modules with simulated cabin-air feeds. Results are presented for a membrane designed for air drying applications. Extracted permeabilities are used to predict the effect of operating conditions on water enrichment in the permeate. A first-order reverse-osmosis model has been applied to test data for spiral wound membrane modules with a simulated hygiene water feed. The model estimates an effective local component rejection coefficient under pseudosteady-state conditions. Results are used to define requirements for a detailed reverse-osmosis model.
Code of Federal Regulations, 2011 CFR
2011-01-01
... with its geologic setting, in order to estimate the pre-waste-emplacement ground-water flow conditions.... • Preliminary estimates of ground-water travel times along the likely flow paths from the repository to... hydrochemical conditions of the host rock, of the surrounding geohydrologic units, and along likely ground-water...
Sim, K S; Norhisham, S
2016-11-01
A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. The estimation of the SNR value by the NLLSR method is compared with three existing methods: nearest neighbourhood, first-order interpolation, and the combination of the two. Samples of SEM images with different textures, contrasts and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. It is shown that the NLLSR method produces better estimation accuracy than the other three existing methods. According to the SNR results obtained from the experiment, the NLLSR method yields an SNR error difference of less than approximately 1% compared with the other three existing methods. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Estimation of methane emission rate changes using age-defined waste in a landfill site.
Ishii, Kazuei; Furuichi, Toru
2013-09-01
Long term methane emissions from landfill sites are often predicted by first-order decay (FOD) models, in which the default coefficients of the methane generation potential and the methane generation rate given by the Intergovernmental Panel on Climate Change (IPCC) are usually used. However, previous studies have demonstrated the large uncertainty in these coefficients because they are derived from a calibration procedure under ideal steady-state conditions, not actual landfill site conditions. In this study, the coefficients in the FOD model were estimated by a new approach to predict more precise long term methane generation by considering region-specific conditions. In the new approach, age-defined waste samples, which had been under actual landfill site conditions, were collected in Hokkaido, Japan (a cold region), and the time series of the age-defined waste samples' methane generation potential was used to estimate the coefficients in the FOD model. The degradation coefficients were 0.0501/y and 0.0621/y for paper and food waste, and the methane generation potentials were 214.4 mL/g-wet waste and 126.7 mL/g-wet waste for paper and food waste, respectively. These coefficients were compared with the default coefficients given by the IPCC. Although the degradation coefficient for food waste was smaller than the default value, the other coefficients were within the range of the default coefficients. With these new coefficients to calculate methane generation, the long term methane emissions from the landfill site were estimated at 1.35×10⁴ m³ CH₄, which corresponds to approximately 2.53% of the total carbon dioxide emissions in the city (5.34×10⁵ t CO₂/y). Copyright © 2013 Elsevier Ltd. All rights reserved.
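An FOD generation curve using the degradation rates and methane generation potentials estimated in the study can be sketched as below; the deposited masses are illustrative, not from the paper:

```python
import math

waste = {                     # component: (k, 1/y; L0, mL CH4 per g wet waste; tonnes)
    "paper": (0.0501, 214.4, 1000.0),
    "food":  (0.0621, 126.7, 1000.0),
}

def ch4_rate(t_years):
    """Total CH4 generation rate (m^3/y) t years after a single-year deposit."""
    total = 0.0
    for k, L0, tonnes in waste.values():
        ml_per_g = L0 * k * math.exp(-k * t_years)   # FOD: dL/dt = L0 * k * e^(-kt)
        total += ml_per_g * tonnes                    # 1e6 g/t and 1e-6 m^3/mL cancel
    return total

print(f"year 1: {ch4_rate(1):.0f} m^3/y, year 20: {ch4_rate(20):.0f} m^3/y")
```

The study's point is that the values of k and L0 driving this curve should reflect regional conditions (here, a cold climate) rather than the IPCC steady-state defaults.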
Nonlinear self-reflection of intense ultra-wideband femtosecond pulses in optical fiber
NASA Astrophysics Data System (ADS)
Konev, Leonid S.; Shpolyanskiy, Yuri A.
2013-05-01
We simulated the propagation of few-cycle femtosecond pulses in a fused silica fiber based on a set of first-order equations for forward and backward waves that generalizes the widely used equation of the unidirectional approximation. The appearance of a weak reflected field under conditions default to the unidirectional approach is observed numerically. It arises because the initial field distribution is not matched with the nonlinear medium response. In addition, an extra field propagating forward along with the input pulse is revealed. The analytical solution of a simplified set of equations, valid over distances of a few wavelengths, confirms the generation of reflected and forward-propagating parts of the backward wave. It allowed us to find matched conditions under which the reflected field is eliminated and to estimate the amplitude of the backward wave from the medium properties. The amplitude is of the order of the nonlinear contribution to the refractive index divided by the linear refractive index. It is small for fused silica, so the conclusions obtained in the unidirectional approach remain valid. The backward wave should be proportionally larger in media with a stronger nonlinear response. We did not observe in the simulations any additional self-reflection unrelated to non-matched boundary conditions.
NASA Astrophysics Data System (ADS)
Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.
2015-12-01
Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and consequently are based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g. winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model, and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity.
It has been shown to improve the estimate of the source location by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it can operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.
Csermely, Gyula; Susánszky, Éva; Czeizel, Andrew E; Veszprémi, Béla
2014-08-01
In epidemiological studies estimating risk factors for specific congenital abnormalities, birth order (parity) is generally considered a confounder. The aim of this study was to analyze the possible association of first and high (four or more) birth order with the risk of congenital abnormalities in a population-based, case-matched control data set. The large dataset of the Hungarian Case-Control Surveillance of Congenital Abnormalities included 21,494 cases with different isolated congenital abnormalities and their 34,311 matched controls. First, the distribution of birth order in 24 congenital abnormality groups was compared with that of their matched controls. In the second step, the possible association of first and high birth order with the risk of congenital abnormalities was estimated. Finally, some subgroups of neural-tube defects, congenital heart defects and abdominal wall defects were evaluated separately. A higher risk of spina bifida aperta/cystica, esophageal atresia/stenosis and clubfoot was observed in the offspring of primiparous mothers. Of the 24 congenital abnormality groups, 14 had mothers with a larger proportion of high birth order. Ear defects, congenital heart defects, cleft lip ± palate and obstructive defects of the urinary tract showed a linear trend from a lower proportion of first-born cases to a larger proportion of high birth order. Birth order showed a U-shaped distribution for neural-tube defects and clubfoot, i.e. both first and high birth order had larger proportions in cases than in their matched controls. Birth order is a contributing factor in the origin of some isolated congenital abnormalities. The higher risk of certain congenital abnormalities in pregnant women with first or high birth order is worth considering in clinical practice, e.g. in ultrasound scanning. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Kowalski, Amanda
2016-01-02
Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member's injury to induce variation in an individual's own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from -0.76 to -1.49, which are an order of magnitude larger than previous estimates.
The effect of presentation rate on implicit sequence learning in aging.
Foster, Chris M; Giovanello, Kelly S
2017-02-01
Implicit sequence learning is thought to be preserved in aging when the to-be-learned associations are first-order; however, when associations are second-order, older adults (OAs) tend to show deficits compared with young adults (YAs). Two experiments were conducted using a first-order (Experiment 1) and a second-order (Experiment 2) serial reaction time task. Stimuli were presented at a constant rate of either 800 milliseconds (fast) or 1200 milliseconds (slow). Results indicate that both age groups learned first-order dependencies equally in both conditions. OAs and YAs also learned second-order dependencies, but the learning of lag-2 information was significantly affected by the rate of presentation for both groups: OAs showed significant lag-2 learning in the slow condition, while YAs showed significant lag-2 learning in the fast condition. The sensitivity of implicit sequence learning to the rate of presentation supports the idea that OAs' and YAs' different processing speeds affect the ability to build complex associations across time and intervening events.
Combining the Hanning windowed interpolated FFT in both directions
NASA Astrophysics Data System (ADS)
Chen, Kui Fu; Li, Yan Feng
2008-06-01
The interpolated fast Fourier transform (IFFT) has been proposed as a way to eliminate the picket fence effect (PFE) of the fast Fourier transform. The modulus-based IFFT, cited in most relevant references, makes use of only the 1st and 2nd highest spectral lines. An approach using three principal spectral lines is proposed. This new approach combines both directions of the complex-spectrum-based IFFT with the Hanning window. The optimal weight to minimize the estimation variance is established from a first-order Taylor series expansion of the noise interference. A numerical simulation is carried out, and the results are compared with the Cramer-Rao bound. It is demonstrated that the proposed approach has a lower estimation variance than the two-spectral-line approach. The improvement depends on how far the sampling deviates from the coherent condition; at best the variance is reduced by 2/7. However, it is also shown that the estimation variance of the IFFT with the Hanning window is significantly higher than that without windowing.
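For the Hanning window, the classical two-spectral-line modulus interpolation that the three-line method improves upon reduces to a closed-form bin correction, delta = (2a - 1) / (a + 1), where a is the ratio of the two highest bin magnitudes. A self-contained sketch (our own illustration of the baseline, not the paper's three-line estimator):

```python
import cmath, math

def hann_two_line_freq(x, fs):
    """Estimate a sinusoid's frequency from a Hann-windowed DFT using the
    two-spectral-line modulus interpolation delta = (2a - 1) / (a + 1)."""
    N = len(x)
    w = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]
    xw = [xi * wi for xi, wi in zip(x, w)]
    X = [abs(sum(xw[n] * cmath.exp(-2j * math.pi * k * n / N)
                 for n in range(N)))
         for k in range(N // 2)]
    k = max(range(1, N // 2 - 1), key=X.__getitem__)  # highest spectral line
    if X[k + 1] >= X[k - 1]:          # true frequency lies right of bin k
        a = X[k + 1] / X[k]
        delta = (2 * a - 1) / (a + 1)
    else:                             # true frequency lies left of bin k
        a = X[k - 1] / X[k]
        delta = -(2 * a - 1) / (a + 1)
    return (k + delta) * fs / N

# Off-bin test tone: 20.3 bins with fs/N = 1 Hz per bin.
fs, n_samp, f_true = 128.0, 128, 20.3
x = [math.sin(2 * math.pi * f_true * n / fs) for n in range(n_samp)]
f_est = hann_two_line_freq(x, fs)
```

With a clean tone the residual error is far below a bin; the paper's contribution is reducing the variance of such estimates under noise by adding the third line with an optimal weight.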
Robust Target Tracking with Multi-Static Sensors under Insufficient TDOA Information.
Shin, Hyunhak; Ku, Bonhwa; Nelson, Jill K; Ko, Hanseok
2018-05-08
This paper focuses on underwater target tracking based on a multi-static sonar network composed of passive sonobuoys and an active ping. In the multi-static sonar network, the location of the target can be estimated using TDOA (Time Difference of Arrival) measurements. However, since the sensor network may obtain insufficient and inaccurate TDOA measurements due to ambient noise and other harsh underwater conditions, target tracking performance can be significantly degraded. We propose a robust target tracking algorithm designed to operate in such a scenario. First, track management with track splitting is applied to reduce performance degradation caused by insufficient measurements. Second, the target location is estimated by a fusion of multiple TDOA measurements using a Gaussian Mixture Model (GMM). In addition, the target trajectory is refined by a stack-based data association method that uses multiple-frame measurements for a more accurate trajectory estimate. The effectiveness of the proposed method is verified through simulations.
NASA Astrophysics Data System (ADS)
Gar Alalm, Mohamed; Tawfik, Ahmed; Ookawara, Shinichi
2017-03-01
In this study, the solar photo-Fenton reaction in a compound parabolic collector reactor was assessed for removal of phenol from aqueous solution. The effects of irradiation time, initial concentration, initial pH, and Fenton reagent dosage were investigated. H2O2 and aromatic intermediates (catechol, benzoquinone, and hydroquinone) were quantified during the reaction to study the pathways of the oxidation process. Complete degradation of phenol was achieved after 45 min of irradiation when the initial concentration was 100 mg/L. However, increasing the initial concentration up to 500 mg/L inhibited the degradation efficiency. The dosages of H2O2 and Fe2+ significantly affected the degradation efficiency of phenol. The observed optimum pH for the reaction was 3.1. Phenol degradation at different concentrations was fitted to pseudo-first-order kinetics according to the Langmuir-Hinshelwood model. A cost estimation for a large-scale reactor was performed. The total cost of the most economical condition with maximum degradation of phenol is 2.54 €/m³.
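The pseudo-first-order fit mentioned above linearizes C(t) = C0·exp(-k_app·t) as ln(C0/C) = k_app·t. A minimal sketch of such a fit (a generic least-squares slope through the origin, not the study's software):

```python
import math

def pseudo_first_order_k(times, concentrations):
    """Apparent rate constant k_app from the linearized model
    ln(C0/C) = k_app * t (least-squares slope through the origin)."""
    c0 = concentrations[0]
    ys = [math.log(c0 / c) for c in concentrations]
    return sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)

# Synthetic check: data generated with k = 0.05 1/min are recovered exactly.
ts = [0.0, 15.0, 30.0, 45.0]
cs = [100.0 * math.exp(-0.05 * t) for t in ts]
k_app = pseudo_first_order_k(ts, cs)
```

On real kinetic data one would also inspect the linearity of ln(C0/C) versus t to confirm that the pseudo-first-order assumption holds over the whole run.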
Structure of turbulent flow over regular arrays of cubical roughness
NASA Astrophysics Data System (ADS)
Coceal, O.; Dobre, A.; Thomas, T. G.; Belcher, S. E.
The structure of turbulent flow over large roughness consisting of regular arrays of cubical obstacles is investigated numerically under constant pressure gradient conditions. Results are analysed in terms of first- and second-order statistics, by visualization of instantaneous flow fields and by conditional averaging. The accuracy of the simulations is established by detailed comparisons of first- and second-order statistics with wind-tunnel measurements. Coherent structures in the log region are investigated. Structure angles are computed from two-point correlations, and quadrant analysis is performed to determine the relative importance of Q2 and Q4 events (ejections and sweeps) as a function of height above the roughness. Flow visualization shows the existence of low-momentum regions (LMRs) as well as vortical structures throughout the log layer. Filtering techniques are used to reveal instantaneous examples of the association of the vortices with the LMRs, and linear stochastic estimation and conditional averaging are employed to deduce their statistical properties. The conditional averaging results reveal the presence of LMRs and regions of Q2 and Q4 events that appear to be associated with hairpin-like vortices, but a quantitative correspondence between the sizes of the vortices and those of the LMRs is difficult to establish; a simple estimate of the ratio of the vortex width to the LMR width gives a value that is several times larger than the corresponding ratio over smooth walls. The shape and inclination of the vortices and their spatial organization are compared to recent findings over smooth walls. Characteristic length scales are shown to scale linearly with height in the log region. 
Whilst there are striking qualitative similarities with smooth walls, there are also important differences in detail regarding: (i) structure angles and sizes and their dependence on distance from the rough surface; (ii) the flow structure close to the roughness; (iii) the roles of inflows into and outflows from cavities within the roughness; (iv) larger vortices on the rough wall compared to the smooth wall; (v) the effect of the different generation mechanism at the wall in setting the scales of structures.
Blind channel estimation and deconvolution in colored noise using higher-order cumulants
NASA Astrophysics Data System (ADS)
Tugnait, Jitendra K.; Gummadavelli, Uma
1994-10-01
Existing approaches to blind channel estimation and deconvolution (equalization) focus exclusively on channel or inverse-channel impulse response estimation. It is well known that the quality of the deconvolved output also depends crucially on the noise statistics. Typically it is assumed that the noise is white and the signal-to-noise ratio is known. In this paper we remove these restrictions. Both the channel impulse response and the noise model are estimated from a higher-order (e.g., fourth-order) cumulant function and the (second-order) correlation function of the received data via a least-squares cumulant/correlation matching criterion. It is assumed that the noise higher-order cumulant function vanishes (e.g., Gaussian noise, as is the case for digital communications). Consistency of the proposed approach is established under certain mild sufficient conditions. The approach is illustrated via simulation examples involving blind equalization of digital communications signals.
NASA Technical Reports Server (NTRS)
Bey, Kim S.; Oden, J. Tinsley
1993-01-01
A priori error estimates are derived for hp-versions of the finite element method for discontinuous Galerkin approximations of a model class of linear, scalar, first-order hyperbolic conservation laws. These estimates are derived in a mesh-dependent norm in which the coefficients depend upon both the local mesh size h(sub K) and a number p(sub K) which can be identified with the spectral order of the local approximations over each element.
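A priori hp estimates of this kind take the typical form below (a schematic from the general hp-DG literature for first-order hyperbolic problems; the precise norms, constants and exponents are those defined in the report):

```latex
\| u - u_{hp} \| \;\le\; C\,
\frac{h_K^{\,\mu_K - 1/2}}{p_K^{\,s - 1/2}}\,\| u \|_{H^s},
\qquad \mu_K = \min(p_K + 1,\, s),
```

so refining the mesh (smaller h(sub K)) and raising the local spectral order p(sub K) both drive the error down, with the rate limited by the solution regularity s.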
Probabilistic Analysis of a Composite Crew Module
NASA Technical Reports Server (NTRS)
Mason, Brian H.; Krishnamurthy, Thiagarajan
2011-01-01
An approach for conducting reliability-based analysis (RBA) of a Composite Crew Module (CCM) is presented. The goal is to identify and quantify the benefits of probabilistic design methods for the CCM and future space vehicles. The coarse finite element model from a previous NASA Engineering and Safety Center (NESC) project is used as the baseline deterministic analysis model to evaluate the performance of the CCM using a strength-based failure index. The first step in the probabilistic analysis process is the determination of the uncertainty distributions for key parameters in the model. Analytical data from water landing simulations are used to develop an uncertainty distribution, but such data were unavailable for other load cases. The uncertainty distributions for the other load scale factors and the strength allowables are generated based on assumed coefficients of variation. Probability of first-ply failure is estimated using three methods: the first order reliability method (FORM), Monte Carlo simulation, and conditional sampling. Results for the three methods were consistent. The reliability is shown to be driven by first ply failure in one region of the CCM at the high altitude abort load set. The final predicted probability of failure is on the order of 10⁻¹¹ due to the conservative nature of the factors of safety on the deterministic loads.
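The Monte Carlo method named above can be illustrated with a generic sketch (an assumed scalar limit state with illustrative normal distributions, not the CCM's finite element failure index):

```python
import random

def mc_failure_probability(n, seed=0):
    """Crude Monte Carlo estimate of P(g < 0) for the limit state
    g = R - S, with capacity R ~ N(3, 1) and demand S ~ N(0, 1)
    (both distributions are illustrative assumptions)."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n)
                   if rng.gauss(3.0, 1.0) - rng.gauss(0.0, 1.0) < 0.0)
    return failures / n

pf = mc_failure_probability(100_000)  # near Phi(-3/sqrt(2)) ~ 0.017
```

This also shows why crude Monte Carlo struggles at the 10⁻¹¹ level reported above: resolving such a small probability directly would require on the order of 10¹² samples, which is why FORM and conditional sampling are used alongside it.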
Estimation of two ordered mean residual lifetime functions.
Ebrahimi, N
1993-06-01
In many statistical studies involving failure data, biometric mortality data, and actuarial data, mean residual lifetime (MRL) function is of prime importance. In this paper we introduce the problem of nonparametric estimation of a MRL function on an interval when this function is bounded from below by another such function (known or unknown) on that interval, and derive the corresponding two functional estimators. The first is to be used when there is a known bound, and the second when the bound is another MRL function to be estimated independently. Both estimators are obtained by truncating the empirical estimator discussed by Yang (1978, Annals of Statistics 6, 112-117). In the first case, it is truncated at a known bound; in the second, at a point somewhere between the two empirical estimates. Consistency of both estimators is proved, and a pointwise large-sample distribution theory of the first estimator is derived.
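The known-bound case described above is simple to state: compute Yang's empirical MRL and truncate it at the bound. A minimal sketch (function names are ours):

```python
def empirical_mrl(data, t):
    """Yang's empirical mean residual lifetime at t:
    the average of X - t over observations X > t."""
    tail = [x - t for x in data if x > t]
    return sum(tail) / len(tail) if tail else 0.0

def mrl_known_bound(data, t, bound):
    """First estimator: the empirical MRL truncated at a known lower bound."""
    return max(empirical_mrl(data, t), bound)

lifetimes = [1.0, 2.0, 3.0, 4.0]
m_hat = empirical_mrl(lifetimes, 2.0)          # (1 + 2) / 2 = 1.5
m_bounded = mrl_known_bound(lifetimes, 2.0, 2.0)
```

In the second case treated by the paper, the known `bound` is replaced by a second, independently estimated empirical MRL, with the truncation point chosen between the two empirical estimates.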
Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates
NASA Astrophysics Data System (ADS)
Carbogno, Christian; Scheffler, Matthias
In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, along with the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
First-Order System Least-Squares for the Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Bochev, P.; Cai, Z.; Manteuffel, T. A.; McCormick, S. F.
1996-01-01
This paper develops a least-squares approach to the solution of the incompressible Navier-Stokes equations in primitive variables. As with our earlier work on Stokes equations, we recast the Navier-Stokes equations as a first-order system by introducing a velocity flux variable and associated curl and trace equations. We show that the resulting system is well-posed, and that an associated least-squares principle yields optimal discretization error estimates in the H(sup 1) norm in each variable (including the velocity flux) and optimal multigrid convergence estimates for the resulting algebraic system.
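Schematically, the velocity-flux reformulation described above reads as follows (a sketch with ν the viscosity; the precise scalings and the curl/trace constraints follow the paper):

```latex
\mathbf{U} - \nabla\mathbf{u} = \mathbf{0},
\qquad
-\nu\,\nabla\cdot\mathbf{U} + (\mathbf{u}\cdot\nabla)\mathbf{u} + \nabla p = \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0,
```

augmented by the curl and trace equations on the new flux variable (each row of U is a gradient, so its curl vanishes, and tr U = ∇·u = 0 encodes incompressibility). The least-squares principle then minimizes the sum of the squared residual norms of this first-order system over suitable spaces.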
Predictions of first passage times in sparse discrete fracture networks using graph-based reductions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyman, Jeffrey De'Haven; Hagberg, Aric Arild; Mohd-Yusof, Jamaludin
2017-07-10
Here, we present a graph-based methodology to reduce the computational cost of obtaining first passage times through sparse fracture networks. We also derive graph representations of generic three-dimensional discrete fracture networks (DFNs) using the DFN topology and flow boundary conditions. Subgraphs corresponding to the union of the k shortest paths between the inflow and outflow boundaries are identified and transport on their equivalent subnetworks is compared to transport through the full network. The number of paths included in the subgraphs is based on the scaling behavior of the number of edges in the graph with the number of shortest paths. First passage times through the subnetworks are in good agreement with those obtained in the full network, both for individual realizations and in distribution. We obtain accurate estimates of first passage times with an order of magnitude reduction of CPU time and mesh size using the proposed method.
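The subgraph construction can be sketched with brute-force path enumeration on a toy graph (an illustration of the idea only; the authors' method uses scalable k-shortest-path algorithms on large DFN graphs):

```python
def simple_paths(graph, src, dst, seen=None):
    """All simple paths in an adjacency-dict graph (brute force, small graphs)."""
    seen = (seen or []) + [src]
    if src == dst:
        yield seen
        return
    for nxt in graph[src]:
        if nxt not in seen:
            yield from simple_paths(graph, nxt, dst, seen)

def k_shortest_subnetwork(graph, src, dst, k):
    """Edge set of the union of the k shortest (fewest-hop) simple paths."""
    paths = sorted(simple_paths(graph, src, dst), key=len)[:k]
    return {e for p in paths for e in zip(p, p[1:])}

# Toy 'DFN' graph: two inflow-to-outflow routes of different lengths.
dfn = {'in': ['a', 'b'], 'a': ['out'], 'b': ['c'], 'c': ['out'], 'out': []}
sub1 = k_shortest_subnetwork(dfn, 'in', 'out', 1)
sub2 = k_shortest_subnetwork(dfn, 'in', 'out', 2)
```

Transport is then simulated only on the fractures whose edges appear in the subnetwork, which is what yields the reported reduction in mesh size and CPU time.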
NASA Astrophysics Data System (ADS)
Khellat, M. R.; Mirjalili, A.
2017-03-01
We first consider the idea of renormalization group-induced estimates, in the context of optimization procedures, for the Brodsky-Lepage-Mackenzie approach to generate higher-order contributions to QCD perturbative series. Secondly, we develop the deviation pattern approach (DPA), in which, through a series of comparisons between lower-order RG-induced estimates and the corresponding analytical calculations, one can modify higher-order RG-induced estimates. Finally, using the normal estimation procedure and the DPA, we obtain estimates of the αs⁴ corrections to the Bjorken sum rule of polarized deep-inelastic scattering and to the non-singlet contribution to the Adler function.
NASA Astrophysics Data System (ADS)
Arora, B. S.; Morgan, J.; Ord, S. M.; Tingay, S. J.; Hurley-Walker, N.; Bell, M.; Bernardi, G.; Bhat, N. D. R.; Briggs, F.; Callingham, J. R.; Deshpande, A. A.; Dwarakanath, K. S.; Ewall-Wice, A.; Feng, L.; For, B.-Q.; Hancock, P.; Hazelton, B. J.; Hindson, L.; Jacobs, D.; Johnston-Hollitt, M.; Kapińska, A. D.; Kudryavtseva, N.; Lenc, E.; McKinley, B.; Mitchell, D.; Oberoi, D.; Offringa, A. R.; Pindor, B.; Procopio, P.; Riding, J.; Staveley-Smith, L.; Wayth, R. B.; Wu, C.; Zheng, Q.; Bowman, J. D.; Cappallo, R. J.; Corey, B. E.; Emrich, D.; Goeke, R.; Greenhill, L. J.; Kaplan, D. L.; Kasper, J. C.; Kratzenberg, E.; Lonsdale, C. J.; Lynch, M. J.; McWhirter, S. R.; Morales, M. F.; Morgan, E.; Prabu, T.; Rogers, A. E. E.; Roshi, A.; Shankar, N. Udaya; Srivani, K. S.; Subrahmanyan, R.; Waterson, M.; Webster, R. L.; Whitney, A. R.; Williams, A.; Williams, C. L.
2015-08-01
We compare first-order (refractive) ionospheric effects seen by the MWA with the ionosphere as inferred from GPS data. The first-order ionosphere manifests itself as a bulk position shift of the observed sources across an MWA field of view. These effects can be computed from global ionosphere maps provided by GPS analysis centres, namely the Centre for Orbit Determination in Europe (CODE). However, for precision radio astronomy applications, data from local GPS networks need to be incorporated into ionospheric modelling. For GPS observations, the ionospheric parameters are biased by GPS receiver instrument delays, among other effects, known as receiver differential code biases (DCBs). The receiver DCBs need to be estimated for any non-CODE GPS station used for ionosphere modelling. In this work, single-GPS-station-based ionospheric modelling is performed at a time resolution of 10 min. Also, the receiver DCBs are estimated for selected Geoscience Australia GPS receivers, located at the Murchison Radio Observatory, Yarragadee, Mount Magnet and Wiluna. The ionospheric gradients estimated from GPS are compared with those inferred from the MWA. The ionospheric gradients at all the GPS stations show a correlation with the gradients observed with the MWA. The ionosphere estimates obtained using GPS measurements show promise in terms of providing calibration information for the MWA.
Measuring Fisher Information Accurately in Correlated Neural Populations
Kohn, Adam; Pouget, Alexandre
2015-01-01
Neural responses are known to be variable. In order to understand how this neural variability constrains behavioral performance, we need to be able to measure the reliability with which a sensory stimulus is encoded in a given population. However, such measures are challenging for two reasons: first, they must take into account noise correlations, which can have a large influence on reliability; second, they need to be as efficient as possible, since the number of trials available in a set of neural recordings is usually limited by experimental constraints. Traditionally, cross-validated decoding has been used as a reliability measure, but it only provides a lower bound on reliability and underestimates reliability substantially in small datasets. We show that, if the number of trials per condition is larger than the number of neurons, there is an alternative, direct estimate of reliability which consistently leads to smaller errors and is much faster to compute. The superior performance of the direct estimator is evident both for simulated data and for neuronal population recordings from macaque primary visual cortex. Furthermore, we propose generalizations of the direct estimator which measure changes in stimulus encoding across conditions and the impact of correlations on encoding and decoding, typically denoted by I_shuffle and I_diag, respectively. PMID:26030735
On entropy change measurements around first order phase transitions in caloric materials.
Caron, Luana; Ba Doan, Nguyen; Ranno, Laurent
2017-02-22
In this work we discuss the measurement protocols for indirect determination of the isothermal entropy change associated with first order phase transitions in caloric materials. The magneto-structural phase transitions giving rise to giant magnetocaloric effects in Cu-doped MnAs and FeRh are used as case studies to exemplify how badly designed protocols may affect isothermal measurements and lead to incorrect entropy change estimations. Isothermal measurement protocols which allow correct assessment of the entropy change around first order phase transitions in both direct and inverse cases are presented.
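For context, the indirect isothermal entropy change determination discussed here integrates the Maxwell relation ΔS(T, H_max) = ∫₀^H_max (∂M/∂T)_H dH over magnetization isotherms. A minimal numerical sketch (a hypothetical data layout with two isotherms and equally spaced fields; real protocols, as the paper stresses, must also control the measurement history across the first-order transition):

```python
def isothermal_entropy_change(m_low_T, m_high_T, dT, dH):
    """Entropy change via the Maxwell relation dS = int (dM/dT)_H dH,
    using a finite difference in T and the trapezoidal rule in H."""
    dmdt = [(hi - lo) / dT for lo, hi in zip(m_low_T, m_high_T)]
    return sum(0.5 * (a + b) * dH for a, b in zip(dmdt, dmdt[1:]))

# Toy isotherms M(T, H) = (1 + T) * H sampled at H = 0, 0.25, ..., 1.
H = [0.0, 0.25, 0.5, 0.75, 1.0]
ds = isothermal_entropy_change([1.0 * h for h in H],
                               [2.0 * h for h in H], dT=1.0, dH=0.25)
```

For this linear toy model the integral of ∂M/∂T = H from 0 to 1 is exactly 0.5, which the trapezoidal rule reproduces; with hysteretic first-order transitions, the measurement protocol behind each isotherm is what determines whether this number is physically meaningful.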
NASA Astrophysics Data System (ADS)
Hilger, A. M.; Schroeder, D. M.; Corr, H. F. J.; Blankenship, D. D.; Paden, J. D.
2017-12-01
Recent observational studies and models have shown that ocean forcing, bed topography, and basal conditions are major controls of the behavior of the Amundsen Sea Embayment of the West Antarctic Ice Sheet. This region contains Thwaites Glacier and Pine Island Glacier, the two most rapidly changing glaciers in Antarctica. Because they are adjacent, interactions between these two glaciers could potentially cause further destabilization as either glacier retreats. Accordingly, it is important to understand the basal conditions on the Thwaites-Pine Island boundary in order to accurately model the present and future behavior of these glaciers. Previous airborne geophysical surveys in this area have provided dense radar sounding coverage using multiple radar sounding systems, including the UTIG HiCARS system and the BAS PASIN system used in the 2004 AGASEA survey. Because the boundary region between Thwaites and Pine Island Glacier is at the respective boundaries of the UTIG and BAS surveys, accurate characterization of the basal conditions requires a synthesis of the data produced by the BAS and HiCARS systems. To this end, we present estimates of bed reflectivity spanning both glacier catchments. These estimates were produced using empirically determined attenuation rates. To improve the consistency of these attenuation rates, we fit across a two-dimensional area, rather than a one-dimensional line as in previous work. These estimates also include cross-calibration to account for the radar sounding systems' differing power and center frequency. This will provide the first cross-survey map of basal reflectivity spanning the entire Amundsen Sea Embayment.
ERIC Educational Resources Information Center
Lecce, Serena; Bianco, Federica; Demicheli, Patrizia; Cavallini, Elena
2014-01-01
This study investigated the relation between theory of mind (ToM) and metamemory knowledge using a training methodology. Sixty-two 4- to 5-year-old children were recruited and randomly assigned to one of two training conditions: A first-order false belief (ToM) and a control condition. Intervention and control groups were equivalent at pretest for…
Holakooie, Mohammad Hosein; Ojaghi, Mansour; Taheri, Asghar
2016-01-01
This paper investigates sensorless indirect field-oriented control (IFOC) of a single-sided linear induction motor (SLIM) with a full-order Luenberger observer. The dynamic equations of the SLIM are first elaborated to derive the full-order Luenberger observer under some simplifying assumptions. The observer gain matrix is derived by a conventional procedure, so that the observer poles are proportional to the SLIM poles to ensure the stability of the system over a wide range of linear speeds. The operation of the observer is strongly influenced by the adaptive scheme. A fuzzy logic controller (FLC) is proposed as the adaptive scheme to estimate linear speed using a speed tuning signal. The parameters of the FLC are tuned off-line through a chaotic optimization algorithm (COA). The performance of the proposed observer is verified by both numerical simulation and real-time hardware-in-the-loop (HIL) implementation. Moreover, a detailed comparative study among the proposed and other speed observers is presented under different operating conditions. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Observations of eruption clouds from Sakura-zima volcano, Kyushu, Japan from Skylab 4
Friedman, J.D.; Heiken, G.; Randerson, D.; McKay, D.S.
1976-01-01
Hasselblad and Nikon stereographic photographs taken from Skylab between 9 June 1973 and 1 February 1974 give synoptic plan views of several entire eruption clouds emanating from Sakura-zima volcano in Kagoshima Bay, Kyushu, Japan. Analytical plots of these stereographic pairs, studied in combination with meteorological data, indicate that the eruption clouds did not penetrate the tropopause and thus did not create a stratospheric dust veil of long residence time. A horizontal eddy diffusivity of the order of 10⁶ cm² s⁻¹ and a vertical eddy diffusivity of the order of 10⁵ cm² s⁻¹ were calculated from the observed plume dimensions and from available meteorological data. These observations are the first direct evidence that explosive eruption at an estimated energy level of about 10¹⁸ ergs per paroxysm may be too small, under atmospheric conditions similar to those prevailing over Sakura-zima, for volcanic effluents to penetrate low-level tropospheric temperature inversions and, consequently, the tropopause over northern middle latitudes. The maximum elevation of the volcanic clouds was determined to be 3.4 km. The cumulative thermal energy release in the rise of volcanic plumes for 385 observed explosive eruptions was estimated to be 10²⁰ to 10²¹ ergs (10¹³ to 10¹⁴ J), but the entire thermal energy release associated with pyroclastic activity may be of the order of 2.5 × 10²² ergs (2.5 × 10¹⁵ J). Estimation of the kinetic energy component of explosive eruptions via satellite observation and meteorological consideration of eruption clouds is thus useful in volcanology as an alternative technique to confirm the kinetic energy estimates made by ground-based geological and geophysical methods, and to aid in construction of physical models of potential and historical tephra-fallout sectors, with implications for volcano-hazard prediction. © 1976.
Kim, Eun Sook; Wang, Yan
2017-01-01
Population heterogeneity in growth trajectories can be detected with growth mixture modeling (GMM). It is common that researchers compute composite scores of repeated measures and use them as multiple indicators of growth factors (baseline performance and growth), assuming measurement invariance between latent classes. Considering that the assumption of measurement invariance does not always hold, we investigate the impact of measurement noninvariance on class enumeration and parameter recovery in GMM through a Monte Carlo simulation study (Study 1). In Study 2, we examine the class enumeration and parameter recovery of second-order growth mixture modeling (SOGMM), which incorporates measurement models at the first-order level. Thus, SOGMM estimates growth trajectory parameters with reliable sources of variance, that is, common factor variance of repeated measures, and allows heterogeneity in measurement parameters between latent classes. The class enumeration rates are examined with information criteria such as AIC, BIC, sample-size-adjusted BIC, and hierarchical BIC under various simulation conditions. The results of Study 1 showed that the parameter estimates of baseline performance and growth factor means were biased to the degree of measurement noninvariance even when the correct number of latent classes was extracted. In Study 2, the class enumeration accuracy of SOGMM depended on information criteria, class separation, and sample size. The estimates of baseline performance and growth factor mean differences between classes were generally unbiased, but the size of measurement noninvariance was underestimated. Overall, SOGMM is advantageous in that it yields unbiased estimates of growth trajectory parameters and more accurate class enumeration compared to GMM by incorporating measurement models. PMID:28928691
NASA Astrophysics Data System (ADS)
Vafadar, Bahareh; Bones, Philip J.
2012-10-01
There is a strong motivation to reduce the amount of acquired data necessary to reconstruct clinically useful MR images, since less data means faster acquisition sequences, less time for the patient to remain motionless in the scanner and better time resolution for observing temporal changes within the body. We recently introduced an improvement in image quality for reconstructing parallel MR images by incorporating a data ordering step with compressed sensing (CS) in an algorithm named 'PECS'. That method requires a prior estimate of the image to be available. We are extending the algorithm to explore ways of utilizing the data ordering step without requiring a prior estimate. The method presented here first reconstructs an initial image x1 by compressed sensing (with sparsity enhanced by SVD), then derives from x1 a data ordering R′1, which ranks the voxels of x1 according to their value. A second reconstruction is then performed which incorporates minimization of the ℓ1 norm of the estimate after ordering by R′1, resulting in a new reconstruction x2. Preliminary results are encouraging.
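Why ordering helps can be seen in a toy sketch (illustrative names and values, not the PECS implementation): ranking voxels by the values of an initial estimate x1 places similar-valued voxels adjacently, so a similar second image becomes far more compressible under an ℓ1-type measure of its first differences.

```python
# Toy demonstration of the data-ordering idea: the l1 norm of first
# differences (a sparsity surrogate) is computed AFTER rearranging the
# image by the rank order derived from an initial estimate x1.

def ordering_from(x1):
    """Indices that sort x1 ascending: the rank-order map R."""
    return sorted(range(len(x1)), key=lambda i: x1[i])

def l1_after_ordering(x, R):
    """l1 norm of first differences of x rearranged by ordering R."""
    xo = [x[i] for i in R]
    return sum(abs(xo[i + 1] - xo[i]) for i in range(len(xo) - 1))

x1 = [3.0, 0.1, 2.9, 0.2, 3.1]     # initial CS reconstruction (assumed)
R = ordering_from(x1)              # ranks voxels of x1 by value
x2 = [3.0, 0.0, 3.0, 0.0, 3.0]     # candidate second reconstruction
# Under R, similar-valued voxels become adjacent, so the measure drops.
ordered = l1_after_ordering(x2, R)
unordered = l1_after_ordering(x2, list(range(len(x2))))
```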
SINFAC - SYSTEMS IMPROVED NUMERICAL FLUIDS ANALYSIS CODE
NASA Technical Reports Server (NTRS)
Costello, F. A.
1994-01-01
The Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to the April 1983 revision of SINDA, a general thermal analyzer program. The purpose of the additional routines is to allow for the modeling of active heat transfer loops. The modeler can simulate the steady-state and pseudo-transient operations of 16 different heat transfer loop components including radiators, evaporators, condensers, mechanical pumps, reservoirs and many types of valves and fittings. In addition, the program contains a property analysis routine that can be used to compute the thermodynamic properties of 20 different refrigerants. SINFAC can simulate the response to transient boundary conditions. SINFAC was first developed as a method for computing the steady-state performance of two-phase systems. It was then modified using CNFRWD, SINDA's explicit time-integration scheme, to accommodate transient thermal models. However, SINFAC cannot simulate pressure drops due to time-dependent fluid acceleration, transient boil-out, or transient fill-up, except in the accumulator. SINFAC also requires the user to be familiar with SINDA. The solution procedure used by SINFAC is similar to that which an engineer would use to solve a system manually. The solution to a system requires the determination of all of the outlet conditions of each component, such as the flow rate, pressure, and enthalpy. To obtain these values, the user first estimates the inlet conditions to the first component of the system, then computes the outlet conditions from the data supplied by the manufacturer of the first component. The user then estimates the temperature at the outlet of the third component and computes the corresponding flow resistance of the second component. With the flow resistance of the second component, the user computes the conditions downstream, namely the inlet conditions of the third component. The computations follow for the rest of the system, back to the first component.
On the first pass, the user finds that the calculated outlet conditions of the last component do not match the estimated inlet conditions of the first. The user then modifies the estimated inlet conditions of the first component in an attempt to match the calculated values. The user-estimated values are called State Variables. The differences between the user-estimated values and the calculated values are called the Error Variables. The procedure systematically changes the State Variables until all of the Error Variables are less than the user-specified iteration limits. The solution procedure is referred to as SCX; the X is to imply experimental. It consists of two phases, the Systems phase and the Controller phase. In the first phase, SCX fixes the controller positions and modifies the other State Variables by the Newton-Raphson method. Once the Newton-Raphson method has solved the problem for the fixed controller positions, SCX calculates new controller positions based on Newton's method, treating each sensor-controller pair independently but allowing all to change in one iteration. SINFAC is available by license for a period of ten (10) years to approved licensees. The licensed program product includes the source code for the additional routines to SINDA, the SINDA object code, command procedures, sample data and supporting documentation. Additional documentation may be purchased at the price below. SINFAC was created for use on a DEC VAX under VMS. Source code is written in FORTRAN 77, requires 180K of memory, and should be fully transportable. The program was developed in 1988.
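The Systems-phase idea, adjusting a State Variable by Newton-Raphson until the Error Variable is within tolerance, can be sketched as follows; the one-variable loop model is a stand-in for illustration, not SINFAC's component models:

```python
# Sketch of the SCX Systems phase: the State Variable (a guessed inlet
# condition) is corrected by Newton-Raphson until the Error Variable
# (calculated outlet minus estimated inlet) falls below tolerance.

def loop_outlet(inlet):
    # toy "march around the loop": the components transform the state
    return 0.5 * inlet + 10.0

def solve_state(guess, tol=1e-8, max_iter=50):
    for _ in range(max_iter):
        err = loop_outlet(guess) - guess          # Error Variable
        if abs(err) < tol:
            return guess
        h = 1e-6                                  # finite-difference step
        derr = (loop_outlet(guess + h) - (guess + h) - err) / h
        guess -= err / derr                       # Newton-Raphson update
    return guess
```

For this linear toy loop the consistent state is the fixed point inlet = 0.5·inlet + 10, i.e. 20.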
Kowalski, Amanda
2015-01-01
Efforts to control medical care costs depend critically on how individuals respond to prices. I estimate the price elasticity of expenditure on medical care using a censored quantile instrumental variable (CQIV) estimator. CQIV allows estimates to vary across the conditional expenditure distribution, relaxes traditional censored model assumptions, and addresses endogeneity with an instrumental variable. My instrumental variable strategy uses a family member’s injury to induce variation in an individual’s own price. Across the conditional deciles of the expenditure distribution, I find elasticities that vary from −0.76 to −1.49, which are an order of magnitude larger than previous estimates. PMID:26977117
Increasing Accuracy in Computed Inviscid Boundary Conditions
NASA Technical Reports Server (NTRS)
Dyson, Roger
2004-01-01
A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme, because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: it involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions, thereby achieving arbitrarily high orders of time accuracy in special cases.
The boundary conditions can now include a potentially infinite number of time derivatives of surface-normal velocity (consistent with no flow through the boundary) up to arbitrarily high order. The corrections for the first-order spatial derivatives of pressure are calculated by use of the first-order time derivatives of velocity. The corrected first-order spatial derivatives are used to calculate the second-order time derivatives of velocity, which, in turn, are used to calculate the corrections for the second-order pressure derivatives. The process as described is repeated, progressing through increasing orders of derivatives, until the desired accuracy is attained.
A time series model: First-order integer-valued autoregressive (INAR(1))
NASA Astrophysics Data System (ADS)
Simarmata, D. M.; Novkaniza, F.; Widyaningsih, Y.
2017-07-01
Nonnegative integer-valued time series arise in many applications. The first-order integer-valued autoregressive model, INAR(1), is constructed with the binomial thinning operator to model nonnegative integer-valued time series. INAR(1) depends on the process one period before. The parameter of the model can be estimated by Conditional Least Squares (CLS). The specification of INAR(1) follows that of the first-order autoregressive model, AR(1). Forecasting in INAR(1) uses the median or a Bayesian forecasting methodology. The median forecasting methodology finds the least integer s at which the cumulative distribution function (CDF) is greater than or equal to 0.5. The Bayesian forecasting methodology produces h-step-ahead forecasts by generating the model parameter and the innovation-term parameter using Adaptive Rejection Metropolis Sampling within Gibbs sampling (ARMS), then finding the least integer s at which the CDF is greater than or equal to u, where u is a value drawn from the Uniform(0,1) distribution. INAR(1) is applied to monthly pneumonia cases in Penjaringan, Jakarta Utara, from January 2008 to April 2016.
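The recursion X_t = a ∘ X_{t-1} + e_t, where a ∘ X is binomial thinning (counting successes in X independent Bernoulli(a) trials) and e_t is a Poisson innovation, together with its CLS estimator, can be sketched as follows; parameter values are assumed for the demo, and this is an illustration, not the authors' code:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's product-of-uniforms sampler, adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def thin(a, x, rng):
    # binomial thinning a ∘ x: survivors of x Bernoulli(a) trials
    return sum(1 for _ in range(x) if rng.random() < a)

def simulate_inar1(a, lam, n, seed=1):
    rng = random.Random(seed)
    xs = [poisson(lam, rng)]
    for _ in range(n - 1):
        xs.append(thin(a, xs[-1], rng) + poisson(lam, rng))
    return xs

def cls_estimate(xs):
    # Conditional Least Squares: regress X_t on X_{t-1}
    x, y = xs[:-1], xs[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    a_hat = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    lam_hat = my - a_hat * mx          # innovation mean estimate
    return a_hat, lam_hat

xs = simulate_inar1(a=0.5, lam=2.0, n=5000)
a_hat, lam_hat = cls_estimate(xs)
```

With 5000 observations the CLS estimates should land close to the true a = 0.5 and lam = 2.0, and every simulated count stays nonnegative by construction.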
Robust learning for optimal treatment decision with NP-dimensionality
Shi, Chengchun; Song, Rui; Lu, Wenbin
2016-01-01
In order to identify important variables that are involved in making optimal treatment decisions, Lu, Zhang and Zeng (2013) proposed a penalized least squares regression framework for a fixed number of predictors, which is robust against misspecification of the conditional mean model. Two problems arise: (i) in a world of explosively big data, effective methods are needed to handle ultra-high-dimensional data sets, for example, where the dimension of the predictors is of non-polynomial (NP) order in the sample size; (ii) both the propensity score and the conditional mean models need to be estimated from data under NP dimensionality. In this paper, we propose a robust procedure for estimating the optimal treatment regime under NP dimensionality. In both steps, penalized regressions are employed with a non-concave penalty function, where the conditional mean model of the response given the predictors may be misspecified. The asymptotic properties, such as weak oracle properties, selection consistency and oracle distributions, of the proposed estimators are investigated. In addition, we study the limiting distribution of the estimated value function for the obtained optimal treatment regime. The empirical performance of the proposed estimation method is evaluated by simulations and an application to a depression dataset from the STAR*D study. PMID:28781717
Ordering structured populations in multiplayer cooperation games
Peña, Jorge; Wu, Bin; Traulsen, Arne
2016-01-01
Spatial structure greatly affects the evolution of cooperation. While in two-player games the condition for cooperation to evolve depends on a single structure coefficient, in multiplayer games the condition might depend on several structure coefficients, making it difficult to compare different population structures. We propose a solution to this issue by introducing two simple ways of ordering population structures: the containment order and the volume order. If one population structure is greater than another in the containment or the volume order, then the first can be considered a stronger promoter of cooperation. We provide conditions for establishing the containment order, give general results on the volume order, and illustrate our theory by comparing different models of spatial games and associated update rules. Our results hold for a large class of population structures and can be easily applied to specific cases once the structure coefficients have been calculated or estimated. PMID:26819335
Wang, Yan-Yu; Wang, Yi; Zou, Ying-Min; Ni, Ke; Tian, Xue; Sun, Hong-Wei; Lui, Simon S Y; Cheung, Eric F C; Suckling, John; Chan, Raymond C K
2017-11-06
Although Theory of Mind (ToM) impairment has been observed in patients with a wide range of mental disorders, the similarity and uniqueness of these deficits across diagnostic groups has not been thoroughly investigated. We recruited 35 participants with schizophrenia (SCZ), 35 with bipolar disorder (BD), 35 with major depressive disorder (MDD), and 35 healthy controls in this study. All participants were matched in age, gender proportion and IQ estimates. The Yoni task, capturing both the cognitive and affective components of ToM at the first- and second-order levels, was administered. Repeated-measures ANOVA and MANOVA were conducted to compare group differences in ToM performance. A network was then constructed with ToM performances, psychotic and depressive symptoms, and executive function as nodes, exploring the clinical correlates of ToM. Overall, ToM impairments were observed in all patient groups compared with healthy controls, with patients with SCZ performing worse than those with BD. In second-order conditions, patients with SCZ and MDD showed deficits in both cognitive and affective conditions, while patients with BD performed significantly more poorly in cognitive conditions. Network analysis showed that second-order affective ToM performance was associated with psychotic and depressive symptoms as well as executive dysfunction, while second-order affective ToM performance and negative symptoms showed relatively high centrality in the network. Patients with SCZ, MDD and BD exhibited different types and severities of impairment in ToM sub-components. Impairment in higher-order affective ToM appears to be closely related to clinical symptoms in both psychotic and affective disorders. Copyright © 2017. Published by Elsevier B.V.
Estimation of Stresses in a Dry Sand Layer Tested on Shaking Table
NASA Astrophysics Data System (ADS)
Sawicki, Andrzej; Kulczykowski, Marek; Jankowski, Robert
2012-12-01
Theoretical analysis of shaking table experiments, simulating the earthquake response of a dry sand layer, is presented. The aim of such experiments is to study seismic-induced compaction of soil and the resulting settlements. In order to determine the soil compaction, the cyclic stresses and strains must be calculated first. These stresses are caused by the cyclic horizontal acceleration at the base of the soil layer, so it is important to determine the stress field as a function of the base acceleration. This is particularly important for a proper interpretation of shaking table tests, where the base acceleration is controlled but the stresses are hard to measure and can only be deduced. Preliminary experiments have shown that small accelerations do not lead to essential settlements, whilst large accelerations cause phenomena typical of limit states, including a visible appearance of slip lines. All these problems should be well understood for rational planning of experiments, and their analysis is presented in this paper. First, some heuristic considerations on the dynamics of the experimental system are presented. Then, an analysis of the boundary conditions, expressed as resultants of the respective stresses, is shown. A particular form of boundary conditions has been chosen which satisfies the macroscopic boundary conditions and the equilibrium equations. Considerations are then presented in order to obtain a statically admissible stress field that does not exceed the Coulomb-Mohr yield condition. Such an approach leads to determination of the limit base accelerations that do not cause a plastic state in the soil. It is shown that larger accelerations lead to an increase in the lateral stresses, and a method that may replace complex plasticity analyses is proposed. It is the lateral stress coefficient K0 that controls the statically admissible stress field during the shaking table experiments.
Lecce, Serena; Bianco, Federica; Demicheli, Patrizia; Cavallini, Elena
2014-01-01
This study investigated the relation between theory of mind (ToM) and metamemory knowledge using a training methodology. Sixty-two 4- to 5-year-old children were recruited and randomly assigned to one of two training conditions: a first-order false-belief (ToM) condition or a control condition. Intervention and control groups were equivalent at pretest for age, parents' education, verbal ability, inhibition, and ToM. Results showed that after the intervention, children in the ToM group improved in their first-order false-belief understanding significantly more than children in the control condition. Crucially, the positive effect of the ToM intervention was stable over 2 months and generalized to more complex ToM tasks and metamemory. © 2014 The Authors. Child Development © 2014 Society for Research in Child Development, Inc.
Lee, Eunyoung; Cumberbatch, Jewel; Wang, Meng; Zhang, Qiong
2017-03-01
Anaerobic co-digestion has the potential to improve biogas production, but limited kinetic information is available for co-digestion. This study introduced regression-based models to estimate the kinetic parameters for the co-digestion of microalgae and Waste Activated Sludge (WAS). The models were developed using the ratios of co-substrates and the kinetic parameters for the single substrates as indicators. The models were applied to the modified first-order kinetics and the Monod model to determine the rates of hydrolysis and methanogenesis for the co-digestion. The results showed that the model using a hyperbola function was better for estimating the first-order kinetic coefficients, while the model using an inverse tangent function closely estimated the Monod kinetic parameters. The models can be used for estimating kinetic parameters not only for microalgae-WAS co-digestion but also for other substrate combinations such as microalgae-swine manure and WAS-aquatic plants. Copyright © 2016 Elsevier Ltd. All rights reserved.
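The abstract names the functional families (hyperbola, inverse tangent) but not the fitted equations, so the following is purely hypothetical: a hyperbola-shaped blend of the two single-substrate first-order constants as a function of the co-substrate ratio, shown only to illustrate the ratio-based modeling idea.

```python
# Hypothetical hyperbola-type interpolation between single-substrate
# first-order constants; r is the microalgae mass fraction
# (0 = pure WAS, 1 = pure microalgae) and c is an assumed shape constant.

def k_codigestion(r, k_algae, k_was, c=1.0):
    w = r / (r + c * (1.0 - r))        # hyperbolic weight in [0, 1]
    return w * k_algae + (1.0 - w) * k_was

# Endpoints recover the single-substrate constants by construction.
k_pure_was = k_codigestion(0.0, k_algae=0.20, k_was=0.10)
k_pure_algae = k_codigestion(1.0, k_algae=0.20, k_was=0.10)
```

Any usable mixing model should satisfy exactly this endpoint property: at r = 0 or r = 1 the co-digestion constant must reduce to the corresponding single-substrate value.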
NASA Technical Reports Server (NTRS)
Broussard, John R.
1987-01-01
Relationships between observers, Kalman filters and dynamic compensators using feedforward control theory are investigated. In particular, the relationship, if any, between the dynamic compensator state and linear functions of a discrete plant state is investigated. It is shown that, in steady state, a dynamic compensator driven by the plant output can be expressed as the sum of two terms. The first term is a linear combination of the plant state. The second term depends on the plant and measurement noise, and the plant control. Thus, the state of the dynamic compensator can be expressed as an estimator of the first term with additive error given by the second term. Conditions under which a dynamic compensator is a Kalman filter are presented, and reduced-order optimal estimators are investigated.
Rakocevic, Miroslava; Matsunaga, Fabio Takeshi
2018-04-05
Dynamics in branch and leaf growth parameters, such as the phyllochron, duration of leaf expansion, leaf life span and bud mortality, determine tree architecture and canopy foliage distribution. We aimed to estimate leaf growth parameters in adult Arabica coffee plants based on leaf supporter axis order and position along the vertical profile, considering their modifications related to seasonal growth, air [CO2] and water availability. Growth and mortality of leaves and terminal buds of adult Arabica coffee trees were followed in two independent field experiments in two sub-tropical climate regions of Brazil, Londrina-PR (Cfa) and Jaguariúna-SP (Cwa). In the Cwa climate, coffee trees were grown under a FACE (free air CO2 enrichment) facility, where half of them were irrigated. Plants were observed at a 15-30 d frequency for 1 year. Leaf growth parameters were estimated on five axis orders and expressed as functions of accumulated thermal time (°Cd per leaf). The phyllochron and duration of leaf expansion increased with axis order, from the second to the fourth. The phyllochron and life span during the reduced vegetative seasonal growth were greater than during active growth. It took more thermal time for leaves from the first- to fourth-order axes to expand their blades under irrigation compared with rainfed conditions. The compensation effects of high [CO2] for low water availability were observed on leaf retention on the second and third axis orders, and on duration of leaf expansion on the first- and fourth-order axes. Second-degree polynomials modelled the leaf growth parameter distribution in the vertical tree profile, and linear regressions modelled the proportion of terminal bud mortality. Leaf growth parameters in coffee plants were determined by axis order. The duration of leaf expansion contributed to phyllochron determination.
Leaf growth parameters varied according to the position of the supporter axis along the vertical profile, suggesting an effect of axis age and micro-environmental light modulations.
Dassis, M; Rodríguez, D H; Ieno, E N; Denuncio, P E; Loureiro, J; Davis, R W
2014-02-01
Bio-energetic models used to characterize an animal's energy budget require accurate estimates of different variables such as the resting metabolic rate (RMR) and the heat increment of feeding (HIF). In this study, we estimated the in-air RMR of wild juvenile South American fur seals (SAFS; Arctocephalus australis) temporarily held in captivity by measuring oxygen consumption while at rest in a postabsorptive condition. HIF, which is an increase in metabolic rate associated with digestion, assimilation and nutrient interconversion, was estimated as the difference in resting metabolic rate between the postabsorptive condition and the first 3.5 h postprandial. As the data were hierarchically structured, linear mixed-effect models were used to compare RMR measures under both physiological conditions. Results indicated a significant increase (61%) in the postprandial RMR compared to the postabsorptive condition, estimated at 17.93 ± 1.84 and 11.15 ± 1.91 mL O2 min⁻¹ kg⁻¹, respectively. These values constitute the first estimates of RMR and HIF in this species, and should be considered in the energy budgets of juvenile SAFS foraging at sea. Copyright © 2013 Elsevier Inc. All rights reserved.
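The reported 61% rise follows directly from the two RMR values; a quick arithmetic check:

```python
# Relative postprandial increase in RMR over the postabsorptive baseline
# (values in mL O2 per min per kg, taken from the abstract above).
rmr_postabsorptive = 11.15
rmr_postprandial = 17.93
increase = (rmr_postprandial - rmr_postabsorptive) / rmr_postabsorptive
# increase is about 0.608, i.e. the 61% reported
```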
Reanalysis of in situ permeability measurements in the Barbados décollement
Bekins, B.A.; Matmon, D.; Screaton, E.J.; Brown, K.M.
2011-01-01
A cased and sealed borehole in the Northern Barbados accretionary complex was the site of the first attempts to measure permeability in situ along a plate-boundary décollement. Three separate efforts at Hole 949C yielded permeability estimates for the décollement spanning four orders of magnitude. An analysis of problems encountered during installation of the casing and seals provides insights into how the borehole conditions may have led to the wide range of results. During the installation, sediments from the surrounding formation repeatedly intruded into the borehole and casing. Stress analysis shows that the weak sediments were deforming plastically and that the radial and tangential stresses around the borehole were significantly lower than lithostatic. This perturbed stress state may explain why the test pressure records showed indications of hydrofracture at pressures below lithostatic, and why permeabilities rose rapidly as the estimated effective stress dropped below 0.8 MPa. Even after the borehole was sealed, the plastic deformation of the formation and the relatively large gap of the wire-wrapped screen allowed sediment to flow into the casing. Force equilibrium calculations predict that sediment would have filled the borehole to 10 cm above the top of the screen by the time slug tests were conducted 1.5 years after the borehole was sealed. Reanalysis of the slug test results under these conditions yields permeability estimates several orders of magnitude higher than the original analysis, which assumed an open casing. Overall, the results based only on the tests with no sign of hydrofracture yield a permeability range of 10⁻¹⁴–10⁻¹⁵ m² and a rate of increase in permeability with decreasing effective stress consistent with laboratory tests on samples from the décollement zone.
First-order kinetic gas generation model parameters for wet landfills.
Faour, Ayman A; Reinhart, Debra R; You, Huaxin
2007-01-01
Landfill gas collection data from wet landfill cells were analyzed and first-order gas generation model parameters were estimated for the US EPA landfill gas emissions model (LandGEM). Parameters were determined through statistical comparison of predicted and actual gas collection. The US EPA LandGEM model appeared to fit the data well, provided it is preceded by a lag phase, which on average was 1.5 years. The first-order reaction rate constant, k, and the methane generation potential, Lo, were estimated for a set of landfills with short-term waste placement and long-term gas collection data. Mean and 95% confidence parameter estimates for these data sets were found using mixed-effects model regression followed by bootstrap analysis. The mean values for the specific methane volume produced during the lag phase (Vsto), Lo, and k were 33 m³/Megagram (Mg), 76 m³/Mg, and 0.28 year⁻¹, respectively. Parameters were also estimated for three full-scale wet landfills where waste was placed over many years. The k and Lo values estimated for these landfills were 0.21 year⁻¹ and 115 m³/Mg, 0.11 year⁻¹ and 95 m³/Mg, and 0.12 year⁻¹ and 87 m³/Mg, respectively. A group of data points from wet landfill cells with short-term data was also analyzed. A conservative set of parameter estimates, based on the upper 95% confidence interval parameters, is a k of 0.3 year⁻¹ and an Lo of 100 m³/Mg if design is optimized and the lag is minimized.
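Using the mean wet-landfill parameters reported above (k = 0.28 year⁻¹, Lo = 76 m³/Mg, 1.5-year lag), a simplified single-placement first-order decay model can be sketched as follows; this illustrates the decay form only, not the LandGEM code, which sums over yearly waste placements:

```python
from math import exp

# Simplified first-order gas generation for a single waste placement:
# no methane during the lag, then an exponentially decaying rate whose
# time integral approaches Lo * M (total methane potential).

def methane_rate(t, M, k=0.28, L0=76.0, lag=1.5):
    """m3 CH4 per year at t years after placing M Mg of waste."""
    if t < lag:
        return 0.0
    return k * L0 * M * exp(-k * (t - lag))

# Numerical check: the cumulative yield over 40 years after the lag
# should be close to Lo * M = 76 m3 for 1 Mg of waste.
dt = 0.001
total = sum(methane_rate(1.5 + i * dt, M=1.0) * dt for i in range(40000))
```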
Simulation of substrate degradation in composting of sewage sludge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Jun; Gao Ding, E-mail: gaod@igsnrr.ac.c; Chen Tongbin
2010-10-15
To simulate the substrate degradation kinetics of the composting process, this paper develops a mathematical model with a first-order reaction assumption and heat/mass balance equations. A pilot-scale composting test with a mixture of sewage sludge and wheat straw was conducted in an insulated reactor. The BVS (biodegradable volatile solids) degradation process, matrix mass, MC (moisture content), DM (dry matter) and VS (volatile solids) were simulated numerically from the model and experimental data. The numerical simulation offered a method for simulating k (the first-order rate constant) and estimating k20 (the first-order rate constant at 20 °C). After comparison with experimental values, the relative error of the simulated value of the mass of the compost at maturity was 0.22%, MC 2.9%, DM 4.9% and VS 5.2%, which means that the simulation is a good fit. The k of the sewage sludge was simulated, and k20 and k20s (the first-order rate coefficient of the slow fraction of BVS at 20 °C) of the sewage sludge were estimated as 0.082 and 0.015 d⁻¹, respectively.
Wagner, Brian J.; Gorelick, Steven M.
1986-01-01
A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite-difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for the dispersion coefficient and the zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
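The core idea, nonlinear least-squares fitting of a simulated model to noisy observations, can be illustrated with a one-parameter Gauss-Newton fit of a first-order decay rate; the synthetic data and setup are far simpler than the paper's transport model:

```python
from math import exp

# Fit a first-order decay rate k to noisy "measurements" by nonlinear
# least squares, using a one-parameter Gauss-Newton iteration.

def model(k, ts):
    return [exp(-k * t) for t in ts]

ts = [0.0, 1.0, 2.0, 3.0, 4.0]
obs = [1.02, 0.59, 0.38, 0.21, 0.14]    # synthetic data, true k near 0.5

k = 0.1                                  # deliberately poor initial guess
for _ in range(50):
    r = [o - m for o, m in zip(obs, model(k, ts))]   # residuals
    J = [-t * exp(-k * t) for t in ts]               # d(model)/dk
    k += sum(Ji * ri for Ji, ri in zip(J, r)) / sum(Ji * Ji for Ji in J)
```

Each step is the scalar Gauss-Newton update k ← k + (JᵀJ)⁻¹Jᵀr; for this well-conditioned one-parameter problem it converges to the least-squares minimizer within a few iterations.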
Estimated Accuracy of Three Common Trajectory Statistical Methods
NASA Technical Reports Server (NTRS)
Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.
2011-01-01
Three well-known trajectory statistical methods (TSMs), namely the concentration field (CF), concentration-weighted trajectory (CWT), and potential source contribution function (PSCF) methods, were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In the works of other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimate of the accuracy of source reconstruction and have found optimum conditions for reconstructing source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real-world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank-order correlation coefficient between the spatial distributions of the known virtual and the reconstructed sources was taken as a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of the correlation coefficient were obtained. All the TSMs considered here showed similar, close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size depends on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75.
The boundaries of the interval with the most probable correlation values are 0.6 0.9 for the decay time of 240 h and 0.5 0.95 for the decay time of 12 h. The best results of source reconstruction can be expected for the trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy they are computationally simple and fast. Using the TSMs in optimum conditions and taking into account the range of uncertainties, one can obtain a first hint on potential source areas.
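The accuracy measure used above, Spearman's rank-order correlation between the known and reconstructed source maps, can be sketched in a few lines. The flattened source grids below are hypothetical stand-ins, not the paper's ensemble.

```python
def ranks(values):
    """Assign 1-based ranks, averaging over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0          # average rank for the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

known = [0.0, 1.0, 2.0, 3.0, 4.0]        # flattened virtual source strengths
recon = [0.1, 0.9, 2.5, 2.7, 4.4]        # a hypothetical TSM reconstruction
print(round(spearman(known, recon), 2))  # 1.0: ranks agree perfectly
```

Because the reconstruction preserves the ordering of the source strengths, rho reaches 1.0 even though the absolute values differ, which is exactly why a rank-based measure suits this comparison.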
FIRST ORDER ESTIMATES OF ENERGY REQUIREMENTS FOR POLLUTION CONTROL
This report presents estimates of the energy demand attributable to environmental control of pollution from 'stationary point sources.' This class of pollution source includes powerplants, factories, refineries, municipal waste water treatment plants, etc., but excludes 'mobile s...
A Preliminary Assessment of the S-3A SRAL Performances in SAR Mode
NASA Astrophysics Data System (ADS)
Dinardo, Salvatore; Scharroo, Remko; Bonekamp, Hans; Lucas, Bruno; Loddo, Carolina; Benveniste, Jerome
2016-08-01
The present work aims to assess and characterize the performance of the S3-A SRAL altimeter in closed-loop tracking mode and in open-ocean conditions. We have processed the Sentinel-3 SAR data products from L0 to L2 using an adaptation of the ESRIN GPOD CryoSat-2 processor SARvatore. During the Delay-Doppler processing, we chose to activate the range zero-padding option. The L2 altimetric geophysical parameters to be validated are the sea surface height above the ellipsoid (SSH), sea level anomaly (SLA), significant wave height (SWH), and wind speed (U10), all estimated at 20 Hz. The orbit files are the POD MOE, while the geo-corrections are extracted from the RADS database. To assess the accuracy of the wave and wind products, we used ocean wave and wind speed model output (wind speed at 10 m above the sea surface) from the ECMWF. We made a first-order approximation of the sea state bias as -4.7% of the SWH. To assess the precision performance of SRAL SAR mode, we compute the level of instrumental noise (range, wave height, and wind speed) for different sea state conditions.
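The first-order sea state bias (SSB) approximation quoted above is a one-line scaling of the significant wave height; a minimal sketch, with the 2 m sea state chosen purely for illustration:

```python
# First-order sea state bias: SSB = -4.7% of significant wave height (SWH).
def sea_state_bias(swh_m):
    return -0.047 * swh_m

print(round(sea_state_bias(2.0), 3))   # -0.094 m correction for 2 m seas
```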
The Simple Lamb Wave Analysis to Characterize Concrete Wide Beams by the Practical MASW Test
Lee, Young Hak; Oh, Taekeun
2016-01-01
In recent years, Lamb wave analysis by the multi-channel analysis of surface waves (MASW) has proven to be an effective nondestructive evaluation technique for concrete structures, supporting condition assessment and dimension identification through elastic wave velocities and their reflections from boundaries. This study proposes an effective Lamb wave analysis through the practical application of MASW to concrete wide beams in an easy and simple manner, in order to identify the dimension and elastic wave velocity (R-wave) for condition assessment (e.g., the estimation of elastic properties). This is done by identifying the zero-order antisymmetric (A0) and first-order symmetric (S1) modes among multimodal Lamb waves. The MASW data were collected on eight concrete wide beams and compared to the actual depth and to the pressure (P-) wave velocities collected for the same specimens. Information is extracted from multimodal Lamb wave dispersion curves to obtain the elastic stiffness parameters and the thickness of the concrete structures. Due to the simple and cost-effective procedure associated with the MASW processing technique, the characteristics of several fundamental modes in the experimental Lamb wave dispersion curves could be measured. Available reference data are in good agreement with the parameters determined by our analysis scheme. PMID:28773562
NASA Astrophysics Data System (ADS)
Lowman, L.; Barros, A. P.
2014-12-01
Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern individual mechanisms. Landscapes are modified via surface and fluvial erosion and exhumation, each of which takes place over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale dependent, often valid for a single point-wise location or averaging over large areal extents and over periods with intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows for estimating erosion rates using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e., individual storms) over a 14-year period between 1998 and 2011. We demonstrate an application of the Bayesian hierarchical modeling framework for five basins and one intermontane basin located in the central Andes between 5°S and 20°S. Using remotely sensed data of current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high-resolution (3 arc-second) digital elevation map (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations, and 1-2 orders of magnitude larger than most millennial and million-year timescale estimates from thermochronology and cosmogenic nuclides.
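The stream power erosion law underlying the Bayesian model can be sketched directly. In the common form E = K * A^m * S^n, E is the erosion rate, A the drainage area (a proxy for discharge/precipitation), and S the channel slope; the erodibility K and exponents m, n below are illustrative assumptions, not the values estimated in the study.

```python
# Stream power erosion law (SPEL) sketch: E = K * A**m * S**n.
# K, m, n are placeholder values for illustration only.
def stream_power_erosion(area_m2, slope, K=1e-11, m=0.5, n=1.0):
    return K * area_m2 ** m * slope ** n

e_gentle = stream_power_erosion(1e8, 0.05)   # same basin, gentle channel
e_steep = stream_power_erosion(1e8, 0.20)    # same basin, steep channel
print(round(e_steep / e_gentle, 6))          # 4.0: erosion linear in slope when n = 1
```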
NASA Astrophysics Data System (ADS)
Herrera, L.; Hoyos Ortiz, C. D.
2017-12-01
The spatio-temporal evolution of the Atmospheric Boundary Layer (ABL) in the Aburrá Valley, a narrow, highly complex mountainous terrain located in the Colombian Andes, is studied using different datasets including radiosondes and remote sensors from the meteorological network of the Aburrá Valley Early Warning System. Different techniques are developed in order to estimate the Mixed Layer Height (MLH) based on the variance of ceilometer backscattering profiles. The Medellín metropolitan area, home to 4.5 million people, is located at the base and on the hillsides of the valley. The generally large aerosol load within the valley from anthropogenic emissions allows the use of ceilometer retrievals of the MLH, especially under stable atmospheric conditions (late at night and early in the morning). Convective atmospheres, however, favor aerosol dispersion, which in turn increases the uncertainty associated with the estimation of the Convective Boundary Layer using ceilometer retrievals. A multi-sensor technique is also developed based on Richardson Number estimations using a Radar Wind Profiler combined with a Microwave Radiometer. Results of this technique appear to be more accurate throughout the diurnal cycle. ABL retrievals are available from October 2014 to April 2017. The diurnal cycle of the ABL exhibits monomodal behavior, highly influenced by the evolution of the potential temperature profile and the turbulent fluxes near the surface. On the other hand, the backscattering diurnal cycle presents a bimodal structure, showing that the amount of aerosol particles in the lower troposphere is strongly influenced by anthropogenic emissions, by dispersion conditioned by topography, and by the ABL dynamics, which condition the available vertical height for the pollutants to interact and disperse. Nevertheless, the amount, distribution, or type of atmospheric aerosols does not appear to have a first-order influence on the MLH variations or evolution. 
Results also show that intra-annual and interannual variations of cloudiness and surface incident radiation strongly condition the ABL expansion rate leading to oscillatory patterns. March (July) is the month with the lowest (highest) ABL mean. In March, the ABL at the base of the Valley is less than the height of surrounding mountains, leading to particulate matter accumulation.
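The variance-based MLH estimate described above can be illustrated in miniature: over a set of ceilometer backscatter profiles, the range gate where the temporal variance peaks marks the fluctuating aerosol-gradient transition at the ABL top. The two synthetic profiles below are illustrative; the operational technique is considerably more involved.

```python
# Sketch: MLH as the height of maximum temporal variance of backscatter.
heights = [100 * k for k in range(1, 11)]   # m above ground, 10 range gates
p1 = [1.0] * 5 + [0.1] * 5                  # aerosol-laden layer up to gate 5
p2 = [1.0] * 6 + [0.1] * 4                  # ABL top slightly higher later on
profiles = [p1, p2]

def mlh_from_variance(profiles, heights):
    n = len(profiles)
    var = []
    for k in range(len(heights)):
        col = [p[k] for p in profiles]
        mean = sum(col) / n
        var.append(sum((v - mean) ** 2 for v in col) / n)
    return heights[max(range(len(var)), key=var.__getitem__)]

print(mlh_from_variance(profiles, heights))  # 600: gate where the top fluctuates
```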
NASA Astrophysics Data System (ADS)
Harlow, R. C.; Blockley, E. W.; Brooks, I. M.; Essery, R.; Milton, S.; Renfrew, I.; Vosper, S.
2016-12-01
Engel, Aaron J; Bashford, Gregory R
2015-08-01
Ultrasound-based shear wave elastography (SWE) is a technique used for non-invasive characterization and imaging of soft tissue mechanical properties. Robust estimation of shear wave propagation speed is essential for imaging of soft tissue mechanical properties. In this study, we propose to estimate shear wave speed by inversion of the first-order wave equation following directional filtering. This approach relies on the estimation of first-order derivatives, which allows accurate estimates using smaller smoothing filters than when estimating second-order derivatives. The performance was compared to three current methods used to estimate shear wave propagation speed: direct inversion of the wave equation (DIWE), time-to-peak (TTP), and cross-correlation (CC). The shear wave speeds of three homogeneous phantoms of different elastic moduli (gelatin by weight of 5%, 7%, and 9%) were measured with each method. The proposed method was shown to produce shear speed estimates comparable to the conventional methods (standard deviations of 0.13 m/s, 0.05 m/s, and 0.12 m/s), but with simpler processing and usually less time (by factors of 1, 13, and 20 relative to DIWE, CC, and TTP, respectively). The proposed method was able to produce a 2-D speed estimate from a single direction of wave propagation in about four seconds using an off-the-shelf PC, showing the feasibility of performing real-time or near real-time elasticity imaging with dedicated hardware.
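The core idea of the estimator can be sketched in a few lines: a wave obeying the first-order wave equation u_t + c*u_x = 0 gives c = -u_t / u_x, using only first derivatives. Below, central finite differences are applied to a synthetic rightward-travelling Gaussian pulse; the speed, grid, and pulse shape are illustrative choices, not the authors' setup.

```python
import math

c_true = 2.0                        # m/s, assumed shear wave speed
dx, dt = 0.001, 0.0001              # spatial and temporal sampling steps

def u(x, t):                        # travelling pulse, u(x, t) = f(x - c*t)
    return math.exp(-((x - 0.05 - c_true * t) / 0.01) ** 2)

x0, t0 = 0.060, 0.0025              # evaluation point on the pulse flank
ut = (u(x0, t0 + dt) - u(x0, t0 - dt)) / (2 * dt)   # first-order time derivative
ux = (u(x0 + dx, t0) - u(x0 - dx, t0)) / (2 * dx)   # first-order space derivative
c_est = -ut / ux                    # inversion of u_t + c*u_x = 0
print(round(c_est, 2))              # close to 2.0, matching c_true
```

Only first derivatives appear, which is why smaller smoothing filters suffice than for methods needing second derivatives.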
Design of a compensation for an ARMA model of a discrete time system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Mainemer, C. I.
1978-01-01
The design of an optimal dynamic compensator for a multivariable discrete-time system is studied, along with the design of compensators that achieve minimum-variance control strategies for single-input single-output systems. In the first problem, the initial conditions of the plant are random variables with known first- and second-order moments, and the cost is the expected value of the standard cost, quadratic in the states and controls. The compensator is based on the minimum-order Luenberger observer and is found optimally by minimizing a performance index. Necessary and sufficient conditions for optimality of the compensator are derived. The second problem is solved in three different ways: two working directly in the frequency domain and one working in the time domain. The first- and second-order moments of the initial conditions are irrelevant to the solution. Necessary and sufficient conditions are derived for the compensator to minimize the variance of the output.
Proper orthogonal decomposition-based spectral higher-order stochastic estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baars, Woutijn J., E-mail: wbaars@unimelb.edu.au; Tinney, Charles E.
A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter is based on known methods for characterizing nonlinear systems by way of Volterra series, in which both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This essentially reduces to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off of seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to fill this gap, as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.
A state space based approach to localizing single molecules from multi-emitter images.
Vahid, Milad R; Chao, Jerry; Ward, E Sally; Ober, Raimund J
2017-01-28
Single molecule super-resolution microscopy is a powerful tool that enables imaging at sub-diffraction-limit resolution. In this technique, subsets of stochastically photoactivated fluorophores are imaged over a sequence of frames and accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Available localization methods typically first determine the regions of the image that contain emitting fluorophores through a process referred to as detection. Then, the locations of the fluorophores are estimated accurately in an estimation step. We propose a novel localization method which combines the detection and estimation steps. The method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix, and determines the locations of intensity peaks in the image as the pole locations of the resulting system. The locations of the most significant peaks correspond to the locations of single molecules in the original image. Although the accuracy of the location estimates is reasonably good, we demonstrate that, by using the estimates as the initial conditions for a maximum likelihood estimator, refined estimates can be obtained that have a standard deviation close to the Cramér-Rao lower bound-based limit of accuracy. We validate our method using both simulated and experimental multi-emitter images.
Painting recognition with smartphones equipped with inertial measurement unit
NASA Astrophysics Data System (ADS)
Masiero, Andrea; Guarnieri, Alberto; Pirotti, Francesco; Vettore, Antonio
2015-06-01
Recently, several works have been proposed in the literature to take advantage of the diffusion of smartphones to improve people's experience during museum visits. The rationale is that of substituting traditional written/audio guides with interactive electronic guides usable on a mobile phone. Augmented reality systems are usually considered to make the use of such electronic guides more effective for the user. The main goal of such an augmented reality system (i.e., providing the user with the information of his/her interest) is usually achieved by properly executing the following three tasks: recognizing the object of interest to the user, retrieving the most relevant information about it, and properly presenting the retrieved information. This paper focuses on the first task: we consider the problem of painting recognition by means of measurements provided by a smartphone. We assume that the user acquires one image of the painting of interest with the standard camera of the device. This image is compared with a set of reference images of the museum objects in order to recognize the object of interest to the user. Since comparing images taken in different conditions can lead to unsatisfactory recognition results, the acquired image is typically transformed to improve the results of the recognition system: first, the system estimates the homography between properly matched features in the two images. Then, the user image is transformed according to the estimated homography. Finally, it is compared with the reference one. This work proposes a novel method to exploit inertial measurement unit (IMU) measurements to improve the system performance, in particular in terms of computational load reduction: IMU measurements are exploited to reduce both the computational burden required to estimate the transformation to be applied to the user image, and the number of reference images to be compared with it.
Pant, Jeevan K; Krishnan, Sridhar
2018-03-15
To present a new compressive sensing (CS)-based method for the acquisition of ECG signals and for robust estimation of heart-rate variability (HRV) parameters from compressively sensed measurements with a high compression ratio. CS is used in the biosensor to compress the ECG signal. Estimation of the locations of QRS segments is carried out by applying two algorithms to the compressed measurements. The first algorithm reconstructs the ECG signal by enforcing a block-sparse structure on the first-order difference of the signal, so that the transient QRS segments are significantly emphasized in the first-order difference of the signal. Multiple block divisions of the signal are carried out with various block lengths, and the multiple reconstructed signals are combined to enhance the robustness of the localization of the QRS segments. The second algorithm removes errors in the locations of QRS segments by applying low-pass filtering and morphological operations. The proposed CS-based method is found to be effective for the reconstruction of ECG signals by enforcing transient QRS structures on the first-order difference of the signal. It is demonstrated to be robust not only to high compression ratios but also to various artefacts present in ECG signals acquired using on-body wireless sensors. HRV parameters computed using the QRS locations estimated from signals reconstructed with a compression ratio as high as 90% are comparable with those computed using QRS locations estimated by the Pan-Tompkins algorithm. The proposed method is useful for the realization of long-term HRV monitoring systems using CS-based low-power wireless on-body biosensors.
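Why the first algorithm works on the first-order difference can be seen in miniature: for an ECG-like signal, the sharp QRS transients dominate the difference signal, making it nearly block-sparse. The waveform below is a synthetic stand-in, not real ECG data.

```python
import math

# Slow baseline oscillation plus one idealized QRS spike at sample 40.
signal = [0.1 * math.sin(2 * math.pi * k / 40) for k in range(80)]
signal[40] += 1.0

# First-order difference: energy concentrates at the QRS transient.
diff = [signal[k + 1] - signal[k] for k in range(len(signal) - 1)]
peak = max(range(len(diff)), key=lambda k: abs(diff[k]))
print(peak)   # 39: the step onto the spike dominates the difference signal
```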
Order of stimulus presentation influences children's acquisition in receptive identification tasks.
Petursdottir, Anna Ingeborg; Aguilar, Gabriella
2016-03-01
Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition in receptive identification tasks under 2 conditions: when the sample was presented before the comparisons (sample first) and when the comparisons were presented before the sample (comparison first). Participants included 4 typically developing kindergarten-age boys. Stimuli, which included birds and flags, were presented on a computer screen. Acquisition in the 2 conditions was compared in an adapted alternating-treatments design combined with a multiple baseline design across stimulus sets. All participants took fewer trials to meet the mastery criterion in the sample-first condition than in the comparison-first condition. © 2015 Society for the Experimental Analysis of Behavior.
Sun, Caixia; Cang, Tao; Wang, Zhiwei; Wang, Xinquan; Yu, Ruixian; Wang, Qiang; Zhao, Xueping
2015-05-01
The health risk to humans of pesticide application on minor crops, such as strawberry, requires quantification. Here, the dissipation and residual levels of three fungicides (pyraclostrobin, myclobutanil, and difenoconazole) were studied for strawberry under greenhouse conditions using high-performance liquid chromatography (HPLC)-tandem mass spectrometry after Quick, Easy, Cheap, Effective, Rugged, and Safe extraction. This method was validated using blank samples, with all mean recoveries of the three fungicides exceeding 80%. The residues of all three fungicides dissipated following first-order kinetics. The half-lives of pyraclostrobin, myclobutanil, and difenoconazole were 1.69, 3.30, and 3.65 days following a single application and 1.73, 5.78, and 6.30 days following two applications, respectively. Dietary risk was assessed by comparing the estimated daily intake of the three fungicides against the acceptable daily intake. The results indicate that the potential health risk of the three fungicides in strawberry is not significant when good agricultural practices (GAP) are followed under greenhouse conditions.
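First-order dissipation kinetics relate the reported half-lives to rate constants via C(t) = C0 * exp(-k*t) and t_half = ln(2)/k. The sketch below back-computes the rate constant from the reported 1.69-day half-life of pyraclostrobin after a single application; the 7-day horizon is an illustrative choice.

```python
import math

def rate_constant(t_half_days):
    """k = ln(2) / t_half for first-order decay."""
    return math.log(2) / t_half_days

def half_life(k):
    """t_half = ln(2) / k, the inverse relation."""
    return math.log(2) / k

k_pyr = rate_constant(1.69)                    # per day, pyraclostrobin
print(round(half_life(k_pyr), 2))              # 1.69: round trip recovers the half-life
print(round(math.exp(-k_pyr * 7) * 100, 1))    # ~5.7% of the residue remains at day 7
```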
NASA Astrophysics Data System (ADS)
van Gent, P. L.; Schrijer, F. F. J.; van Oudheusden, B. W.
2018-04-01
Pseudo-tracking refers to the construction of imaginary particle paths from PIV velocity fields and the subsequent estimation of the particle (material) acceleration. In view of the variety of existing and possible alternative ways to perform the pseudo-tracking method, it is not straightforward to select a suitable combination of numerical procedures for its implementation. To address this situation, this paper extends the theoretical framework for the approach. The developed theory is verified by applying various implementations of pseudo-tracking to a simulated PIV experiment. The findings of the investigations allow us to formulate the following insights and practical recommendations: (1) the velocity errors along the imaginary particle track are primarily a function of velocity measurement errors and spatial velocity gradients; (2) the particle path may best be calculated with second-order accurate numerical procedures while ensuring that the CFL condition is met; (3) least-square fitting of a first-order polynomial is a suitable method to estimate the material acceleration from the track; and (4) a suitable track length may be selected on the basis of the variation in material acceleration with track length.
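Recommendation (3) above amounts to ordinary least squares: fit a first-order polynomial u(t) = a*t + b to the velocity sampled along the imaginary particle track, and take the slope a as the material-acceleration estimate. The noisy track below is synthetic, with an assumed true acceleration of 3.0 m/s^2.

```python
def linear_fit_slope(ts, us):
    """Least-squares slope of a first-order polynomial fit u(t) = a*t + b."""
    n = len(ts)
    mt, mu = sum(ts) / n, sum(us) / n
    num = sum((t - mt) * (u - mu) for t, u in zip(ts, us))
    den = sum((t - mt) ** 2 for t in ts)
    return num / den

ts = [0.00, 0.01, 0.02, 0.03, 0.04]                  # s, samples along the track
noise = [0.002, -0.001, 0.0, 0.001, -0.002]          # stand-in for PIV velocity errors
us = [5.0 + 3.0 * t + e for t, e in zip(ts, noise)]  # m/s, velocity along the track
print(round(linear_fit_slope(ts, us), 1))            # 2.9, near the true 3.0 m/s^2
```

The least-squares fit averages down the measurement noise along the track, which is why it is preferred over differencing adjacent samples.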
LANDSAT-4/5 image data quality analysis
NASA Technical Reports Server (NTRS)
Malaret, E.; Bartolucci, L. A.; Lozano, D. F.; Anuta, P. E.; Mcgillem, C. D.
1984-01-01
A LANDSAT Thematic Mapper (TM) quality evaluation study was conducted to identify geometric and radiometric sensor errors in the post-launch environment. The study began with the launch of LANDSAT-4. Several error conditions were found, including band-to-band misregistration and detector-to-detector radiometric calibration errors. A similar analysis was made for the LANDSAT-5 Thematic Mapper and compared with results for LANDSAT-4. Remaining band-to-band misregistration was found to be within tolerances, and detector-to-detector calibration errors were not severe. More coherent noise signals were observed in TM-5 than in TM-4, although the amplitude was generally less. The scan direction differences observed in TM-4 were still evident in TM-5. The largest effect was in Band 4, where nearly a one-digital-count difference was observed. Resolution estimation was carried out using roads in TM-5 for the primary focal plane bands rather than field edges as in TM-4. Estimates using roads gave better resolution. Thermal IR band calibration studies were conducted and new nonlinear calibration procedures were defined for TM-5. The overall conclusion is that there are no first-order errors in TM-5 and any remaining problems are second or third order.
Nonlinear estimation theory applied to orbit determination
NASA Technical Reports Server (NTRS)
Choe, C. Y.
1972-01-01
The development of an approximate nonlinear filter using the Martingale theory and appropriate smoothing properties is considered. Both the first order and the second order moments were estimated. The filter developed can be classified as a modified Gaussian second order filter. Its performance was evaluated in a simulated study of the problem of estimating the state of an interplanetary space vehicle during both a simulated Jupiter flyby and a simulated Jupiter orbiter mission. In addition to the modified Gaussian second order filter, the modified truncated second order filter was also evaluated in the simulated study. Results obtained with each of these filters were compared with numerical results obtained with the extended Kalman filter and the performance of each filter is determined by comparison with the actual estimation errors. The simulations were designed to determine the effects of the second order terms in the dynamic state relations, the observation state relations, and the Kalman gain compensation term. It is shown that the Kalman gain-compensated filter which includes only the Kalman gain compensation term is superior to all of the other filters.
Equations of condition for high order Runge-Kutta-Nystrom formulae
NASA Technical Reports Server (NTRS)
Bettis, D. G.
1974-01-01
Derivation of the equations of condition of order eight for a general system of second-order differential equations approximated by the basic Runge-Kutta-Nystrom algorithm. For this general case, the number of equations of condition is considerably larger than for the special case where the first derivative is not present. Specifically, it is shown that, for orders two through eight, the number of equations for each order is 1, 1, 1, 2, 3, 5, and 9 for the special case and is 1, 1, 2, 5, 13, 34, and 95 for the general case.
NASA Astrophysics Data System (ADS)
Somu, Vijaya Bhaskar
Apparent ionospheric reflection heights estimated using the zero-to-zero and peak-to-peak methods to measure skywave delay relative to the groundwave were compared for 108 first and 124 subsequent strokes observed at LOG in 2009. For either metric there was a considerable decrease in average reflection height for subsequent strokes relative to first strokes. Median uncertainties in daytime reflection heights did not exceed 0.7 km. The standard errors in mean reflection heights were less than 3% of the mean value. Apparent changes in reflection height (estimated using the peak-to-peak method) within individual flashes for 54 daytime and 11 nighttime events at distances ranging from 50 km to 330 km were compared. For daytime conditions, the majority of the flashes showed a monotonic decrease in reflection height. For nighttime flashes, the monotonic decrease was found to be considerably less frequent. The apparent ionospheric reflection height tends to increase with return-stroke peak current. In order to increase the sample size for nighttime conditions, additional data for 43 nighttime flashes observed at LOG in 2014 were analyzed. The "fast-break-point" method of measuring skywave delay (McDonald et al., 1979) was additionally used. The 2014 results for return strokes are generally consistent with the 2009 results. The 2014 data were also used for estimating ionospheric reflection heights for elevated sources (6 CIDs and 3 PB pulses) using the double-skywave feature. The results were compared with reflection heights estimated for corresponding return strokes (if any), and fairly good agreement was generally found. It has been shown, using two different FDTD simulation codes, that the observed differences in reflection height cannot be explained by the difference in the frequency content of first and subsequent return-stroke currents. 
FDTD simulations showed that within 200 km the reflection heights estimated using the peak-to-peak method are close to the hOE parameter of the ionospheric profile for both daytime and nighttime conditions and for both first and second skywaves. The TL model was used to estimate the radial extent of elves produced by the interaction of LEMP with the ionosphere as a function of return-stroke peak current. For a peak current of 100 kA and a speed equal to one-half of the speed of light, the expected radius of elves is 157 km. Skywaves associated with 24 return strokes in 6 lightning flashes triggered at CB in 2015 and recorded at LOG (at a distance of 45 km from CB) were not found for any of the strokes recorded. In contrast, natural-lightning strokes do produce skywaves at comparable distances. One possible reason is the difference in the higher-frequency content (field waveforms for triggered lightning are narrower than those for natural lightning).
Bioreactors with immobilized lipases: state of the art.
Balcão, V M; Paiva, A L; Malcata, F X
1996-05-01
This review attempts to provide an updated compilation of studies reported in the literature pertaining to reactors containing lipases in immobilized forms, in a way that helps the reader direct a bibliographic search and develop an integrated perspective of the subject. Highlights are given to industrial applications of lipases (including control and economic considerations), as well as to methods of immobilization and configurations of reactors in which lipases are used. Features associated with immobilized lipase kinetics such as enzyme activities, adsorption properties, optimum operating conditions, and estimates of the lumped parameters in classical kinetic formulations (Michaelis-Menten model for enzyme action and first-order model for enzyme decay) are presented in the text in a systematic tabular form.
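The two lumped kinetic forms tabulated in the review can be sketched directly: Michaelis-Menten for lipase action, v = Vmax*S/(Km + S), and first-order decay of enzyme activity, A(t) = A0*exp(-kd*t). All parameter values below are illustrative, not drawn from the compiled studies.

```python
import math

def michaelis_menten_rate(S, Vmax, Km):
    """Reaction rate v = Vmax * S / (Km + S)."""
    return Vmax * S / (Km + S)

def residual_activity(t, A0, kd):
    """First-order enzyme decay: A(t) = A0 * exp(-kd * t)."""
    return A0 * math.exp(-kd * t)

print(michaelis_menten_rate(S=5.0, Vmax=2.0, Km=5.0))          # 1.0: half of Vmax at S = Km
print(round(residual_activity(t=10.0, A0=1.0, kd=0.0693), 2))  # ~0.5 after one half-life
```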
Modified physiologically equivalent temperature—basics and applications for western European climate
NASA Astrophysics Data System (ADS)
Chen, Yung-Chang; Matzarakis, Andreas
2018-05-01
A new thermal index, the modified physiologically equivalent temperature (mPET) has been developed for universal application in different climate zones. The mPET has been improved against the weaknesses of the original physiologically equivalent temperature (PET) by enhancing evaluation of the humidity and clothing variability. The principles of mPET and differences between original PET and mPET are introduced and discussed in this study. Furthermore, this study has also evidenced the usability of mPET with climatic data in Freiburg, which is located in Western Europe. Comparisons of PET, mPET, and Universal Thermal Climate Index (UTCI) have shown that mPET gives a more realistic estimation of human thermal sensation than the other two thermal indices (PET, UTCI) for the thermal conditions in Freiburg. Additionally, a comparison of physiological parameters between mPET model and PET model (Munich Energy Balance Model for Individual, namely MEMI) is proposed. The core temperatures and skin temperatures of PET model vary more violently to a low temperature during cold stress than the mPET model. It can be regarded as that the mPET model gives a more realistic core temperature and mean skin temperature than the PET model. Statistical regression analysis of mPET based on the air temperature, mean radiant temperature, vapor pressure, and wind speed has been carried out. The R square (0.995) has shown a well co-relationship between human biometeorological factors and mPET. The regression coefficient of each factor represents the influence of the each factor on changing mPET (i.e., ±1 °C of T a = ± 0.54 °C of mPET). The first-order regression has been considered predicting a more realistic estimation of mPET at Freiburg during 2003 than the other higher order regression model, because the predicted mPET from the first-order regression has less difference from mPET calculated from measurement data. 
Statistical tests confirm that mPET can effectively evaluate the influences of all human-biometeorological factors on thermal environments. Moreover, a first-order regression function can also predict the thermal evaluations of mPET from human-biometeorological factors in Freiburg.
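The first-order regression described above can be sketched as an ordinary least-squares fit of mPET on the four biometeorological factors. The data and all coefficients below are synthetic stand-ins (only the 0.54 °C-per-°C Ta sensitivity mirrors the value quoted in the abstract); this is not the study's dataset or model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic biometeorological inputs (assumed ranges, not Freiburg data):
Ta = rng.uniform(-5, 30, n)      # air temperature (deg C)
Tmrt = rng.uniform(-10, 60, n)   # mean radiant temperature (deg C)
VP = rng.uniform(5, 25, n)       # vapor pressure (hPa)
v = rng.uniform(0.5, 5, n)       # wind speed (m/s)

# Generate "mPET" from assumed first-order coefficients plus noise.
mpet = 2.0 + 0.54*Ta + 0.3*Tmrt + 0.1*VP - 0.6*v + rng.normal(0, 0.5, n)

# First-order (linear) regression via least squares.
X = np.column_stack([np.ones(n), Ta, Tmrt, VP, v])
coef, *_ = np.linalg.lstsq(X, mpet, rcond=None)
# coef[1] recovers the assumed Ta sensitivity of ~0.54 deg C per deg C
```

Each fitted coefficient then plays the role described in the abstract: the marginal change in mPET per unit change in that factor.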
Doeschl-Wilson, Andrea B.; Villanueva, Beatriz; Kyriazakis, Ilias
2012-01-01
Reliable phenotypes are paramount for meaningful quantification of genetic variation and for estimating individual breeding values on which genetic selection is based. In this paper, we assert that genetic improvement of host tolerance to disease, although desirable, may first of all be handicapped by the difficulty of obtaining unbiased tolerance estimates at the phenotypic level. In contrast to resistance, which can be inferred from appropriate measures of within-host pathogen burden, tolerance is more difficult to quantify, as it refers to the change in performance with respect to changes in pathogen burden. For this reason, tolerance phenotypes have only been specified at the level of a group of individuals, where such phenotypes can be estimated using regression analysis. However, few studies have addressed the potential bias in these estimates resulting from confounding effects between resistance and tolerance. Using a simulation approach, we demonstrate (i) how these group tolerance estimates depend on within-group variation and co-variation in resistance, tolerance, and vigor (performance in a pathogen-free environment); and (ii) how tolerance estimates are affected by changes in pathogen virulence over the time course of infection and by the timing of measurements. We found that in order to obtain reliable group tolerance estimates, it is important to account for individual variation in vigor, if present, and to ensure that all individuals are at the same stage of infection when measurements are taken. The latter requirement makes estimation of tolerance based on cross-sectional field data challenging, as individuals become infected at different time points and the individual onset of infection is unknown. Repeated individual measurements of within-host pathogen burden and performance would not only be valuable for inferring the infection status of individuals in field conditions, but would also provide tolerance estimates that capture the entire time course of infection.
PMID:23412990
Bioinstrumentation for evaluation of workload in payload specialists - Results of ASSESS II
NASA Technical Reports Server (NTRS)
Wegmann, H. M.; Herrmann, R.; Winget, C. M.
1979-01-01
Results of the medical experiment on payload specialist workloads conducted as part of the ASSESS II airborne simulation of Spacelab conditions are reported. Subjects were fitted with temperature probes and ECG, EEG and EOG electrodes, and hormone and electrolyte excretion was monitored in order to evaluate the changes in circadian rhythms, sleep patterns and stress responses brought about by mission schedules over the ten days of the experiment. Internal dissociations of circadian rhythms, sleep disturbances and increased stress levels were observed, especially during the first three days of the experiment, indicating a considerable workload to be imposed upon the payload specialists. An intensive premission simulation is suggested as a means of estimating overall workloads and allowing payload specialist adaptation to mission conditions. The bioinstrumentation which was developed and applied to the airborne laboratory is concluded to be a practical and reliable tool in the assessment of payload specialist workloads.
U.S. ENVIRONMENTAL PROTECTION AGENCY'S LANDFILL GAS EMISSION MODEL (LANDGEM)
The paper discusses EPA's available software for estimating landfill gas emissions. This software is based on a first-order decomposition rate equation using empirical data from U.S. landfills. The software provides a relatively simple approach to estimating landfill gas emissi...
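The first-order decomposition rate equation behind LandGEM can be sketched as follows. This is a simplified annual-increment version (the actual software subdivides each year into 0.1-year increments); the decay rate k = 0.05/yr and methane potential L0 = 170 m³/Mg are common default values, and the waste acceptance figures are made up for illustration:

```python
import math

def landgem_annual(waste_by_year, k=0.05, L0=170.0, horizon=40):
    """Simplified LandGEM sketch: first-order decay of each year's waste.

    waste_by_year: Mg of waste accepted in years 0, 1, 2, ...
    k: first-order decay rate (1/yr); L0: methane potential (m^3 CH4/Mg).
    Returns annual CH4 generation (m^3/yr) for years 0..horizon-1.
    """
    q = []
    for t in range(horizon):
        total = 0.0
        for i, m in enumerate(waste_by_year):
            age = t - i
            if age >= 0:  # waste placed in year i has decayed for `age` years
                total += k * L0 * m * math.exp(-k * age)
        q.append(total)
    return q

# Hypothetical landfill: 10,000 Mg/yr accepted for 10 years, then closed.
q = landgem_annual([10_000] * 10)
# Generation rises while waste is accepted, peaks at closure (year 9),
# then decays exponentially.
```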
Accurate and efficient calculation of response times for groundwater flow
NASA Astrophysics Data System (ADS)
Carr, Elliot J.; Simpson, Matthew J.
2018-03-01
We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L²/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
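The heuristic scaling t ∝ L²/D can be checked numerically. The sketch below is not the authors' moment-based method (which avoids transient solves entirely); it simply times an explicit finite-difference solve of a 1D diffusion problem until the midpoint is within 1% of steady state, so that doubling the domain length should quadruple the response time:

```python
import numpy as np

def response_time(L, D, nx=101, tol=0.01):
    # Solve u_t = D u_xx on [0, L] with u(0)=1, u(L)=0, u(x,0)=0,
    # and report the time at which the midpoint comes within `tol`
    # (relative) of its steady-state value (a linear profile).
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D                     # stable explicit time step
    u = np.zeros(nx)
    u[0] = 1.0                               # fixed boundary values
    steady = 1.0 - np.arange(nx) * dx / L    # exact steady state
    mid = nx // 2
    t = 0.0
    while abs(u[mid] - steady[mid]) > tol * steady[mid]:
        u[1:-1] += dt * D * (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
        t += dt
    return t

t1 = response_time(L=1.0, D=1.0)
t2 = response_time(L=2.0, D=1.0)
# t2/t1 is ~4, consistent with response time proportional to L^2/D.
```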
Vehicle States Observer Using Adaptive Tire-Road Friction Estimator
NASA Astrophysics Data System (ADS)
Kwak, Byunghak; Park, Youngjin
The vehicle stability control system is a new technology that can enhance vehicle stability and handling in emergency situations. This system requires information on the yaw rate, sideslip angle, and road friction in order to control the traction and braking forces at the individual wheels. This paper proposes an observer for the vehicle stability control system, consisting of a state observer for vehicle motion estimation and a road condition estimator for identifying the coefficient of road friction. The state observer uses a 2-degree-of-freedom bicycle model and estimates the system variables with a Kalman filter. The road condition estimator uses the same vehicle model and identifies the tire-road friction coefficient by the recursive least squares method. The two estimators make use of each other's information. We show the effectiveness and feasibility of the proposed scheme under various road conditions through computer simulations of a fifteen-degree-of-freedom nonlinear vehicle model.
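The recursive least squares (RLS) identification used for the friction coefficient can be sketched for a scalar model. The regressor, noise level, forgetting factor, and true coefficient below are illustrative assumptions, not values or equations from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true = 0.7        # hypothetical tire-road friction coefficient

# Recursive least squares for a scalar model y = mu * x + noise,
# a simplified stand-in for the paper's road condition estimator.
mu_hat, P, lam = 0.0, 1e3, 0.98   # estimate, covariance, forgetting factor
for _ in range(500):
    x = rng.uniform(0.5, 1.5)                # regressor (e.g., a load term)
    y = mu_true * x + rng.normal(0, 0.05)    # noisy "measured" friction force
    K = P * x / (lam + x * P * x)            # RLS gain
    mu_hat += K * (y - mu_hat * x)           # update the estimate
    P = (P - K * x * P) / lam                # update the covariance
# mu_hat converges to the neighborhood of mu_true
```

The forgetting factor lam < 1 discounts old samples, which is what lets the estimator track a friction coefficient that changes as the road surface changes.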
NASA Astrophysics Data System (ADS)
Barragán, Rosa María; Núñez, José; Arellano, Víctor Manuel; Nieva, David
2016-03-01
Exploration and exploitation of geothermal resources require the estimation of important physical characteristics of reservoirs, including temperatures, pressures, and in situ two-phase conditions, in order to evaluate possible uses and/or investigate changes due to exploitation. Since at relatively high temperatures (>150 °C) reservoir fluids usually attain chemical equilibrium in contact with hot rocks, different models based on the chemistry of fluids have been developed that allow deep conditions to be estimated. In both water-dominated and steam-dominated reservoirs, the chemistry of steam has been useful for working out reservoir conditions. In this context, three methods based on the Fischer-Tropsch (FT) and combined H2S-H2 (HSH) mineral-gas reactions have been developed for estimating temperatures and the quality of the in situ two-phase mixture prevailing in the reservoir. For these methods, the mineral buffers considered to control the H2S-H2 composition of fluids are the pyrite-magnetite buffer (FT-HSH1), the pyrite-hematite buffer (FT-HSH2), and the pyrite-pyrrhotite buffer (FT-HSH3). Currently, these models yield estimates of both temperature and steam fraction in the two-phase fluid graphically, by using a blank diagram with a background theoretical solution as reference. Large errors are thus involved, since the isotherms are highly nonlinear functions while reservoir steam fractions are read from a logarithmic scale. In order to facilitate the use of the three FT-HSH methods and minimize visual interpolation errors, the EQUILGAS program, which numerically solves the equations of the FT-HSH methods, was developed. In this work the FT-HSH methods and the EQUILGAS program are described. Illustrative examples for Mexican fields are also given to help users decide which method may be more suitable for a specific data set.
Genetic analysis of partial egg production records in Japanese quail using random regression models.
Abou Khadiga, G; Mahmoud, B Y F; Farahat, G S; Emam, A M; El-Full, E A
2017-08-01
The main objectives of this study were to detect the most appropriate random regression model (RRM) to fit the data of monthly egg production in 2 lines (selected and control) of Japanese quail and to test the consistency of different criteria of model choice. Data from 1,200 female Japanese quails for the first 5 months of egg production from 4 consecutive generations of an egg line selected for egg production in the first month (EP1) were analyzed. Eight RRMs with different orders of Legendre polynomials were compared to determine the proper model for analysis. All criteria of model choice suggested that the adequate model included the second-order Legendre polynomials for fixed effects, and the third-order for additive genetic effects and permanent environmental effects. Predictive ability of the best model was the highest among all models (ρ = 0.987). According to the best model fitted to the data, estimates of heritability were relatively low to moderate (0.10 to 0.17) and showed a descending pattern from the first to the fifth month of production. A similar pattern was observed for permanent environmental effects, with greater estimates in the first (0.36) and second (0.23) months of production than the heritability estimates. Genetic correlations between separate production periods were higher (0.18 to 0.93) than their phenotypic counterparts (0.15 to 0.87). The superiority of the selected line over the control was observed through significant (P < 0.05) linear contrast estimates. Significant (P < 0.05) estimates of the covariate effect (age at sexual maturity) showed a decreasing pattern, with greater impact on egg production in earlier ages (first and second months) than later ones. A methodology based on random regression animal models can be recommended for genetic evaluation of egg production in Japanese quail. © 2017 Poultry Science Association Inc.
Simple models of the hydrofracture process
NASA Astrophysics Data System (ADS)
Marder, M.; Chen, Chih-Hung; Patzek, T.
2015-12-01
Hydrofracturing to recover natural gas and oil relies on the creation of a fracture network with pressurized water. We analyze the creation of the network in two ways. First, we assemble a collection of analytical estimates for pressure-driven crack motion in simple geometries, including crack speed as a function of length, energy dissipated by fluid viscosity and used to break rock, and the conditions under which a second crack will initiate while a first is running. Second, we develop a pseudo-three-dimensional numerical model that couples fluid motion with solid mechanics and can generate branching crack structures not specified in advance. One of our main conclusions is that the typical spacing between fractures must be on the order of a meter, and this conclusion arises in two separate ways. First, it arises from analysis of gas production rates, given the diffusion constants for gas in the rock. Second, it arises from the number of fractures that should be generated given the scale of the affected region and the amounts of water pumped into the rock.
Decentralized Quasi-Newton Methods
NASA Astrophysics Data System (ADS)
Eisen, Mark; Mokhtari, Aryan; Ribeiro, Alejandro
2017-05-01
We introduce the decentralized Broyden-Fletcher-Goldfarb-Shanno (D-BFGS) method as a variation of the BFGS quasi-Newton method for solving decentralized optimization problems. The D-BFGS method is of interest in problems that are not well conditioned, making first order decentralized methods ineffective, and in which second order information is not readily available, making second order decentralized methods impossible. D-BFGS is a fully distributed algorithm in which nodes approximate curvature information of themselves and their neighbors through the satisfaction of a secant condition. We additionally provide a formulation of the algorithm in asynchronous settings. Convergence of D-BFGS is established formally in both the synchronous and asynchronous settings and strong performance advantages relative to first order methods are shown numerically.
Holgado-Tello, Fco P; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A
2016-01-01
The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity.
PMID:27378991
Estimating distributions with increasing failure rate in an imperfect repair model.
Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R
2002-03-01
A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
Parameters estimation using the first passage times method in a jump-diffusion model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khaldi, K., E-mail: kkhaldi@umbb.dz; LIMOSE Laboratory, Boumerdes University, 35000; Meddahi, S., E-mail: samia.meddahi@gmail.com
2016-06-02
This paper makes two contributions: (1) it presents a new method, the first passage time (FPT) method generalized to all passage times (GPT method), for estimating the parameters of a stochastic jump-diffusion process; and (2) it compares, on a time series model of gold share prices, the empirical estimation and forecast results obtained with the GPT method against those obtained with the method of moments and with the FPT method applied to the Merton jump-diffusion (MJD) model.
Monochloramine Cometabolism by Mixed-Culture Nitrifiers ...
The current research investigated monochloramine cometabolism by nitrifying mixed cultures grown under drinking water relevant conditions and harvested from sand-packed reactors before conducting suspended growth batch kinetic experiments. Three batch reactors were used in each experiment: (1) a positive control to estimate ammonia kinetic parameters, (2) a negative control to account for abiotic reactions, and (3) a cometabolism reactor to estimate cometabolism kinetic constants. Kinetic parameters were estimated in AQUASIM with a simultaneous fit to all experimental data. Cometabolism kinetics were best described by a first order model. Monochloramine cometabolism kinetics were similar to those of ammonia metabolism, and monochloramine cometabolism was a significant loss mechanism (30% of the observed monochloramine loss). These results demonstrated that monochloramine cometabolism occurred in mixed cultures similar to those found in drinking water distribution systems; thus, cometabolism may be a significant contribution to monochloramine loss during nitrification episodes in drinking water distribution systems.
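A first-order loss model of the kind found to best describe the cometabolism kinetics can be sketched and fit as follows. The rate constant and initial concentration are hypothetical, and the fit here is a simple log-linear regression rather than the AQUASIM simultaneous fit used in the study:

```python
import numpy as np

# First-order decay model C(t) = C0 * exp(-k t), the form the study
# found best described monochloramine cometabolism kinetics.
k_true, C0 = 0.12, 2.0          # hypothetical rate (1/h) and dose (mg/L)
t = np.linspace(0, 24, 13)      # sampling times (h)
C = C0 * np.exp(-k_true * t)    # noise-free synthetic concentrations

# Estimate k from a log-linear least-squares fit: ln C = ln C0 - k t.
slope, intercept = np.polyfit(t, np.log(C), 1)
k_est = -slope                  # recovers k_true
```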
Wing box transonic-flutter suppression using piezoelectric self-sensing actuators attached to skin
NASA Astrophysics Data System (ADS)
Otiefy, R. A. H.; Negm, H. M.
2010-12-01
The main objective of this research is to study the capability of piezoelectric (PZT) self-sensing actuators to suppress the transonic wing box flutter, which is a flow-structure interaction phenomenon. The unsteady general frequency modified transonic small disturbance (TSD) equation is used to model the transonic flow about the wing. The wing box structure and piezoelectric actuators are modeled using the equivalent plate method, which is based on the first order shear deformation plate theory (FSDPT). The piezoelectric actuators are bonded to the skin. The optimal electromechanical coupling conditions between the piezoelectric actuators and the wing are collected from previous work. Three main different control strategies, a linear quadratic Gaussian (LQG) which combines the linear quadratic regulator (LQR) with the Kalman filter estimator (KFE), an optimal static output feedback (SOF), and a classic feedback controller (CFC), are studied and compared. The optimum actuator and sensor locations are determined using the norm of feedback control gains (NFCG) and norm of Kalman filter estimator gains (NKFEG) respectively. A genetic algorithm (GA) optimization technique is used to calculate the controller and estimator parameters to achieve a target response.
Vehicle tracking using fuzzy-based vehicle detection window with adaptive parameters
NASA Astrophysics Data System (ADS)
Chitsobhuk, Orachat; Kasemsiri, Watjanapong; Glomglome, Sorayut; Lapamonpinyo, Pipatphon
2018-04-01
In this paper, a fuzzy-based vehicle tracking system is proposed. The proposed system consists of two main processes: vehicle detection and vehicle tracking. In the first process, the Gradient-based Adaptive Threshold Estimation (GATE) algorithm is adopted to provide a suitable threshold value for Sobel edge detection. The estimated threshold adapts to the diverse illumination conditions encountered throughout the day, which leads to greater vehicle detection performance compared to a fixed, user-defined threshold. In the second process, this paper proposes a novel vehicle tracking algorithm, namely Fuzzy-based Vehicle Analysis (FBA), to reduce the false estimation in vehicle tracking caused by the uneven edges of large vehicles and by vehicles changing lanes. The proposed FBA algorithm employs the average edge density and the Horizontal Moving Edge Detection (HMED) algorithm to alleviate those problems, adopting fuzzy rule-based algorithms to rectify the vehicle tracking. The experimental results demonstrate that the proposed system provides high vehicle detection accuracy of about 98.22%. In addition, it also offers a low false detection rate of about 3.92%.
On the existence of touch points for first-order state inequality constraints
NASA Technical Reports Server (NTRS)
Seywald, Hans; Cliff, Eugene M.
1993-01-01
The appearance of touch points in state constrained optimal control problems with general vector-valued control is studied. Under the assumption that the Hamiltonian is regular, touch points for first-order state inequalities are shown to exist only under very special conditions. In many cases of practical importance these conditions can be used to exclude touch points a priori without solving an optimal control problem. The results are demonstrated on a simple example.
NEWS Climatology Project: The State of the Water Cycle at Continental to Global Scales
NASA Technical Reports Server (NTRS)
Rodell, Matthew; LEcuyer, Tristan; Beaudoing, Hiroko Kato; Olson, Bill
2011-01-01
NASA's Energy and Water Cycle Study (NEWS) program fosters collaborative research towards improved quantification and prediction of water and energy cycle consequences of climate change. In order to measure change, it is first necessary to describe current conditions. The goal of the NEWS Water and Energy Cycle Climatology project is to develop "state of the global water cycle" and "state of the global energy cycle" assessments based on data from modern ground and space based observing systems and data integrating models. The project is a multi-institutional collaboration with more than 20 active contributors. This presentation will describe results of the first stage of the water budget analysis, whose goal was to characterize the current state of the water cycle on mean monthly, continental scales. We examine our success in closing the water budget within the expected uncertainty range and the effects of forcing budget closure as a method for refining individual flux estimates.
Monochloramine cometabolism by Nitrosomonas europaea under drinking water conditions.
Maestre, Juan P; Wahman, David G; Speitel, Gerald E
2013-09-01
Chloramine is widely used in United States drinking water systems as a secondary disinfectant, which may promote the growth of nitrifying bacteria because ammonia is present. At the onset of nitrification, both nitrifying bacteria and their products exert a monochloramine demand, decreasing the residual disinfectant concentration in water distribution systems. This work investigated another potentially significant mechanism for residual disinfectant loss: monochloramine cometabolism by ammonia-oxidizing bacteria (AOB). Monochloramine cometabolism was studied with the pure culture AOB Nitrosomonas europaea (ATCC 19718) in batch kinetic experiments under drinking water conditions. Three batch reactors were used in each experiment: a positive control to estimate the ammonia kinetic parameters, a negative control to account for abiotic reactions, and a cometabolism reactor to estimate the cometabolism kinetic constants. Kinetic parameters were estimated in AQUASIM with a simultaneous fit to all experimental data. The cometabolism reactors showed a more rapid monochloramine decay than in the negative controls, demonstrating that cometabolism occurs. Cometabolism kinetics were best described by a pseudo first order model with a reductant term to account for ammonia availability. Monochloramine cometabolism kinetics were similar to those of ammonia metabolism, and monochloramine cometabolism was a significant loss mechanism (30-60% of the observed monochloramine decay). These results suggest that monochloramine cometabolism should occur in practice and may be a significant contribution to monochloramine decay during nitrification episodes in drinking water distribution systems. Copyright © 2013 Elsevier Ltd. All rights reserved.
Estimating soil moisture exceedance probability from antecedent rainfall
NASA Astrophysics Data System (ADS)
Cronkite-Ratcliff, C.; Kalansky, J.; Stock, J. D.; Collins, B. D.
2016-12-01
The first storms of the rainy season in coastal California, USA, add moisture to soils but rarely trigger landslides. Previous workers proposed that antecedent rainfall, the cumulative seasonal rain from October 1 onwards, had to exceed specific amounts in order to trigger landsliding. Recent monitoring of soil moisture upslope of historic landslides in the San Francisco Bay Area shows that storms can cause positive pressure heads once soil moisture values exceed a threshold of volumetric water content (VWC). We propose that antecedent rainfall could be used to estimate the probability that VWC exceeds this threshold. A major challenge to estimating the probability of exceedance is that rain gauge records are frequently incomplete. We developed a stochastic model to impute (infill) missing hourly precipitation data. This model uses nearest neighbor-based conditional resampling of the gauge record using data from nearby rain gauges. Using co-located VWC measurements, imputed data can be used to estimate the probability that VWC exceeds a specific threshold for a given antecedent rainfall. The stochastic imputation model can also provide an estimate of uncertainty in the exceedance probability curve. Here we demonstrate the method using soil moisture and precipitation data from several sites located throughout Northern California. Results show a significant variability between sites in the sensitivity of VWC exceedance probability to antecedent rainfall.
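An empirical exceedance-probability curve of the kind described can be sketched as follows, with entirely synthetic soil-moisture data. The VWC threshold, noise level, and assumed rainfall-VWC relation are illustrative, not values from the monitoring sites:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic example: VWC rises with antecedent rainfall plus scatter.
rain = rng.uniform(0, 500, 2000)                       # antecedent rainfall (mm)
vwc = 0.15 + 0.0004 * rain + rng.normal(0, 0.03, rain.size)
threshold = 0.30                                       # hypothetical VWC threshold

# Empirical probability that VWC exceeds the threshold, per rainfall bin.
bins = np.linspace(0, 500, 11)                         # ten 50-mm bins
idx = np.digitize(rain, bins) - 1
p_exceed = np.array([(vwc[idx == i] > threshold).mean() for i in range(10)])
# p_exceed rises from near 0 (dry season start) toward 1 (wet season)
```

In practice the same binned-exceedance computation would be repeated over many stochastic imputations of the gauge record, and the spread across imputations gives the uncertainty band on the curve.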
Variational estimate method for solving autonomous ordinary differential equations
NASA Astrophysics Data System (ADS)
Mungkasi, Sudi
2018-04-01
In this paper, we propose a method for solving first-order autonomous ordinary differential equation problems using a variational estimate formulation. The variational estimate is constructed with a Lagrange multiplier which is chosen optimally, so that the formulation leads to an accurate solution to the problem. The variational estimate is an integral form, which can be computed using computer software. As the variational estimate is an explicit formula, the solution is easy to compute. This is a great advantage of the variational estimate formulation.
Ab initio spectroscopy and ionic conductivity of water under Earth mantle conditions.
Rozsa, Viktor; Pan, Ding; Giberti, Federico; Galli, Giulia
2018-06-18
The phase diagram of water at extreme conditions plays a critical role in Earth and planetary science, yet remains poorly understood. Here we report a first-principles investigation of the liquid at high temperature, between 11 GPa and 20 GPa, a region where numerous controversial results have been reported over the past three decades. Our results are consistent with the recent estimates of the water melting line below 1,000 K and show that on the 1,000-K isotherm the liquid is rapidly dissociating and recombining through a bimolecular mechanism. We found that short-lived ionic species act as charge carriers, giving rise to an ionic conductivity that at 11 GPa and 20 GPa is six and seven orders of magnitude larger, respectively, than at ambient conditions. Conductivity calculations were performed entirely from first principles, with no a priori assumptions on the nature of charge carriers. Despite frequent dissociative events, we observed that hydrogen bonding persists at high pressure, up to at least 20 GPa. Our computed Raman spectra, which are in excellent agreement with experiment, show no distinctive signatures of the hydronium and hydroxide ions present in our simulations. Instead, we found that infrared spectra are sensitive probes of molecular dissociation, exhibiting a broad band below the OH stretching mode ascribable to vibrations of complex ions.
[Using fractional polynomials to estimate the safety threshold of fluoride in drinking water].
Pan, Shenling; An, Wei; Li, Hongyan; Yang, Min
2014-01-01
The aim of this study was to examine the dose-response relationship between fluoride content in drinking water and the prevalence of dental fluorosis on a national scale, and thereby to determine the safety threshold of fluoride in drinking water. Meta-regression analysis was applied to the 2001-2002 national endemic fluorosis survey data of key wards. First, a fractional polynomial (FP) was adopted to establish a fixed-effect model and determine the best FP structure; then restricted maximum likelihood (REML) was adopted to estimate the between-study variance, and the best random-effect model was established. The best FP structure was a first-order logarithmic transformation. Based on the best random-effect model, the benchmark dose (BMD) of fluoride in drinking water and its lower limit (BMDL) were calculated as 0.98 mg/L and 0.78 mg/L, respectively. Fluoride in drinking water could explain only 35.8% of the variability in prevalence; among other influencing factors, ward type was significant, while temperature conditions and altitude were not. The fractional polynomial-based meta-regression method is simple and practical and provides a good fit; based on it, the safety threshold of fluoride in drinking water in our country is determined to be 0.8 mg/L.
NASA Astrophysics Data System (ADS)
Zhao, You-Qun; Li, Hai-Qing; Lin, Fen; Wang, Jian; Ji, Xue-Wu
2017-07-01
Accurate estimation of the road friction coefficient for active safety control systems has become increasingly important. Most previous studies on road friction estimation have used only vehicle longitudinal or lateral dynamics and often ignored load transfer, which tends to produce inaccurate estimates of the actual road friction coefficient. A novel method considering the load transfer between the front and rear axles is proposed to estimate the road friction coefficient based on a braking dynamics model of a two-wheeled vehicle. Sliding mode control is used to build the ideal braking torque controller, whose control target is to make the actual wheel slip ratios of the front and rear wheels track the ideal wheel slip ratios. In order to eliminate the chattering problem of the sliding mode controller, an integral switching surface is used to design the sliding surface. A second-order linear extended state observer is designed to observe the road friction coefficient based on the wheel speeds and braking torques of the front and rear wheels. The proposed road friction coefficient estimation scheme is evaluated by simulation in ADAMS/Car. The results show that the estimated values agree well with the actual values under different road conditions. The observer can estimate the road friction coefficient accurately in real time and resist external disturbance. The proposed research provides a novel method to estimate the road friction coefficient with strong robustness and improved accuracy.
Surround-Masking Affects Visual Estimation Ability
Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.
2017-01-01
Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved in the removal of unessential visual input contribute functionally to number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround masking results may help to explain the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845
Characteristics of the Martian atmosphere surface layer
NASA Technical Reports Server (NTRS)
Clow, G. D.; Haberle, R. M.
1990-01-01
Elements of various terrestrial boundary layer models are extended to Mars in order to estimate sensible heat, latent heat, and momentum fluxes within the Martian atmospheric surface ('constant flux') layer. The atmospheric surface layer consists of an interfacial sublayer immediately adjacent to the ground and an overlying fully turbulent surface sublayer where wind-shear production of turbulence dominates buoyancy production. Within the interfacial sublayer, sensible and latent heat are transported by non-steady molecular diffusion into small-scale eddies which intermittently burst through this zone. Both the thickness of the interfacial sublayer and the characteristics of the turbulent eddies penetrating through it depend on whether airflow is aerodynamically smooth or aerodynamically rough, as determined by the roughness Reynolds number. Within the overlying surface sublayer, similarity theory can be used to express the mean vertical wind speed, temperature, and water vapor profiles in terms of a single parameter, the Monin-Obukhov stability parameter. To estimate the molecular viscosity and thermal conductivity of a CO2-H2O gas mixture under Martian conditions, parameterizations were developed using data from the TPRC Data Series and the first-order Chapman-Cowling expressions; the required collision integrals were approximated using the Lennard-Jones potential. Parameterizations for specific heat and binary diffusivity were also determined. The Brutsaert model for sensible and latent heat transport within the interfacial sublayer for both aerodynamically smooth and rough airflow was experimentally tested under similar conditions, validating its application to Martian conditions. For the surface sublayer, the definition of the Monin-Obukhov length was modified to properly account for the buoyancy forces arising from water vapor gradients in the Martian atmospheric boundary layer.
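For the surface sublayer, the similarity profiles mentioned above take a standard form. A minimal sketch of the mean wind profile follows; the stable-side correction psi_m = -5 z/L is a common terrestrial parameterization assumed here for illustration, not the specific form used in the paper.

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def wind_profile(z, u_star, z0, L=np.inf):
    """Mean wind speed from Monin-Obukhov similarity theory:
        u(z) = (u*/kappa) * [ln(z/z0) - psi_m(z/L)],
    where L is the Monin-Obukhov length. Here psi_m = -5*z/L for stable
    stratification (z/L > 0) and psi_m = 0 for neutral conditions;
    the unstable branch is omitted for brevity. Illustrative only."""
    zeta = 0.0 if np.isinf(L) else z / L
    psi_m = -5.0 * zeta if zeta > 0 else 0.0
    return (u_star / KAPPA) * (np.log(z / z0) - psi_m)
```

The analogous profiles for temperature and water vapor use the same stability parameter z/L, which is why a single parameter suffices for the sublayer.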
It was found that under most Martian conditions, the interfacial and surface sublayers offer roughly comparable resistance to sensible heat and water vapor transport and are thus both important in determining the associated fluxes.
Second-order numerical solution of time-dependent, first-order hyperbolic equations
NASA Technical Reports Server (NTRS)
Shah, Patricia L.; Hardin, Jay
1995-01-01
A finite difference scheme is developed to find approximate solutions of two similar hyperbolic equations, namely first-order plane-wave and spherical-wave problems. Finite difference approximations are made for both the space and time derivatives. The result is a conditionally stable scheme that yields the exact solution when the Courant number is set to one.
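For the constant-coefficient plane-wave (advection) case, the behavior described above can be reproduced with the classic second-order Lax-Wendroff scheme, which is stable for Courant numbers |nu| <= 1 and exact at nu = 1. This is a sketch of that standard scheme, not necessarily the authors' exact discretization.

```python
import numpy as np

def lax_wendroff_step(u, nu):
    """One step of the Lax-Wendroff scheme for u_t + c u_x = 0 on a
    periodic grid; nu = c*dt/dx is the Courant number. Second order in
    space and time, stable for |nu| <= 1, exact for nu = 1."""
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, 1)    # u_{j-1}
    return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2 * u + um)

# At nu = 1 the update algebraically reduces to u_j^{n+1} = u_{j-1}^n,
# a pure grid shift, i.e. the exact solution of the advection equation.
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.exp(-100.0 * (x - 0.5) ** 2)
u = lax_wendroff_step(u0, 1.0)
```

Substituting nu = 1 into the update shows the cancellation directly: the centered and diffusive terms combine to leave only u_{j-1}.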
NASA Astrophysics Data System (ADS)
Lowman, Lauren E. L.; Barros, Ana P.
2014-06-01
Prior studies evaluated the interplay between climate and orography by investigating the sensitivity of relief to precipitation using the stream power erosion law (SPEL) for specified erosion rates. Here we address the inverse problem, inferring realistic spatial distributions of erosion rates for present-day topography and contemporaneous climate forcing. In the central Andes, similarities in the altitudinal distribution and density of first-order stream outlets and precipitation suggest a direct link between climate and fluvial erosion. Erosion rates are estimated with a Bayesian physical-statistical model based on the SPEL applied at spatial scales that capture joint hydrogeomorphic and hydrometeorological patterns within five river basins and one intermontane basin in Peru and Bolivia. Topographic slope and area data were generated from a high-resolution (~90 m) digital elevation map, and mean annual precipitation was derived from 14 years of the Tropical Rainfall Measuring Mission 3B42 v.7 product and adjusted with rain gauge data. Estimated decadal-scale erosion rates vary between 0.68 and 11.59 mm/yr, with basin averages of 2.1-8.5 mm/yr. Even accounting for uncertainty in precipitation and simplifying assumptions, these values are 1-2 orders of magnitude larger than most millennial and million-year timescale estimates in the central Andes obtained using various geological dating techniques (e.g., thermochronology and cosmogenic nuclides), but they are consistent with other decadal-scale estimates using landslide mapping and sediment flux observations. The results also reveal a pattern of spatially dependent erosion consistent with basin hypsometry. The modeling framework provides a means of remotely estimating erosion rates and associated uncertainties under current climate conditions over large regions.
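The forward and inverse use of the stream power erosion law can be sketched in a few lines. The exponents m and n below are generic textbook values, and the erodibility and units are purely illustrative; the paper's Bayesian model treats these quantities statistically rather than deterministically.

```python
def spel_erosion_rate(K, A, S, m=0.5, n=1.0):
    """Stream power erosion law E = K * A**m * S**n, with drainage area A,
    channel slope S (dimensionless), and erodibility coefficient K.
    Exponents m, n are common reference values, not the paper's."""
    return K * A**m * S**n

def invert_erodibility(E, A, S, m=0.5, n=1.0):
    """The 'inverse problem' in its simplest deterministic form:
    recover K from an observed erosion rate and the topographic data."""
    return E / (A**m * S**n)
```

In the paper, topographic A and S come from the DEM and the climate forcing enters through precipitation-dependent discharge, with uncertainties propagated through the Bayesian framework.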
Arredondo, J Tulio; Johnson, Douglas A
2011-11-01
The study of proportional relationships between size, shape, and function of part of or the whole organism is traditionally known as allometry. Examination of correlative changes in the size of interbranch distances (IBDs) at different root orders may help to identify root branching rules. Root morphological and functional characteristics in three range grasses {bluebunch wheatgrass [Pseudoroegneria spicata (Pursh) Löve], crested wheatgrass [Agropyron desertorum (Fisch. ex Link) Schult.×A. cristatum (L.) Gaert.], and cheatgrass (Bromus tectorum L.)} were examined in response to a soil nutrient gradient. Interbranch distances along the main root axis and the first-order laterals as well as other morphological and allocation root traits were determined. A model of nutrient diffusivity parameterized with root length and root diameter for the three grasses was used to estimate root functional properties (exploitation efficiency and exploitation potential). The results showed a significant negative allometric relationship between the main root axis and first-order lateral IBD (P ≤ 0.05), but only for bluebunch wheatgrass. The main root axis IBD was positively related to the number and length of roots, estimated exploitation efficiency of second-order roots, and specific root length, and was negatively related to estimated exploitation potential of first-order roots. Conversely, crested wheatgrass and cheatgrass, which rely mainly on root proliferation responses, exhibited fewer allometric relationships. Thus, the results suggested that species such as bluebunch wheatgrass, which display slow root growth and architectural root plasticity rather than opportunistic root proliferation and rapid growth, exhibit correlative allometry between the main axis IBD and morphological, allocation, and functional traits of roots.
NASA Astrophysics Data System (ADS)
Niknia, I.; Trevizoli, P. V.; Govindappa, P.; Christiaanse, T. V.; Teyber, R.; Rowe, A.
2018-05-01
First-order transition materials (FOMs) usually exhibit magnetocaloric effects over a narrow temperature range, which complicates their use in an active magnetic regenerator (AMR) refrigerator. In addition, the magnetocaloric effect in first-order materials can vary with the field and temperature history of the material. This study examines the behavior of a MnFe(P,Si) FOM sample in an AMR cycle using a numerical model and experimental measurements. For certain operating conditions, multiple points of equilibrium (MPE) exist for a fixed hot rejection temperature. Stable and unstable points of equilibrium (PEs) are identified, and the impacts of heat loads, operating conditions, and configuration losses on the number of PEs are discussed. It is shown that the existence of multiple PEs can affect the performance of an AMR significantly for certain operating conditions. In addition, the points where MPEs exist appear to be linked to the device itself, not just the material, suggesting the need to layer a regenerator in a way that avoids MPE conditions and to layer with a specific device in mind.
Challenging the Southern Boundary of Active Rock Glaciers in West Greenland
NASA Astrophysics Data System (ADS)
Langley, K.; Abermann, J.
2017-12-01
Rock glaciers are permafrost features abundant in mountainous environments and are characterized as `steadily creeping perennially frozen and ice-rich debris on non-glacierised mountain slopes'. Previous studies investigated both the climatic significance and the dynamics of rock glaciers in Greenland; however, no studies exist as far south as the Godthåbsfjord area. We recently found evidence of an active rock glacier near Nuuk, around 250 km farther south than the previously suggested southern limit of activity. It shows no signs of pioneer vegetation, which supports its likely dynamic activity. The rock glacier covers an area of ca. 1 km² and its lowest point is at an elevation of about 250 m a.s.l. Here we present the results of a two-year field campaign designed to (I) confirm or reject active rock glacier occurrence in the Godthåbsfjord area with innovative methods, (II) study its dynamic regime, and (III) investigate the climatic boundary conditions necessary for active rock glacier occurrence in the Sub-Arctic. We use a number of methods to determine the state of the rock glacier. Movement of the landform is assessed using repeat GPS surveying of marked stones and feature tracking based on ortho-photos and DEMs from repeat UAV deployments. Bottom temperature of snow cover (BTS) measurements give an independent first-order estimate of permafrost occurrence. An air temperature sensor deployed near the snout and recording hourly gives a first-order estimate of the temperature gradients between Nuuk and the rock glacier, allowing us to assess the climatic boundary conditions required for rock glacier occurrence. BTS measurements show a clear drop in temperatures over the rock glacier compared to the surrounding areas, suggesting an active landform with a well-demarcated thermal regime. We will assess this independently with the repeat GPS and UAV surveys and will thus be able to confirm or reject the hypothesis of activity by the end of summer 2017.
Contractual conditions, working conditions and their impact on health and well-being.
Robone, Silvana; Jones, Andrew M; Rice, Nigel
2011-10-01
Given changes in the labour market in past decades, it is of interest to evaluate whether and how contractual and working conditions affect health and psychological well-being in society today. We consider the effects of contractual and working conditions on self-assessed health and psychological well-being using twelve waves (1991/1992-2002/2003) of the British Household Panel Survey. For self-assessed health, the dependent variable is categorical, and we estimate non-linear dynamic panel ordered probit models, while for psychological well-being, we estimate a dynamic linear specification. The results show that both contractual and working conditions have an influence on health and psychological well-being and that the impact is different for men and women.
Irano, Natalia; Bignardi, Annaiza Braga; El Faro, Lenira; Santana, Mário Luiz; Cardoso, Vera Lúcia; Albuquerque, Lucia Galvão
2014-03-01
The objective of this study was to estimate genetic parameters for milk yield, stayability, and the occurrence of clinical mastitis in Holstein cows, as well as to study the genetic relationships between them, in order to provide subsidies for the genetic evaluation of these traits. Records from 5,090 Holstein cows, with calvings from 1991 to 2010, were used in the analysis. Two standard multivariate analyses were carried out: one containing accumulated 305-day milk yield in the first lactation (MY1), stayability (STAY) until the third lactation, and clinical mastitis (CM); and the other considering accumulated 305-day milk yield (Y305), STAY, and CM, including the first three lactations as repeated measures for Y305 and CM. The covariance components were obtained by a Bayesian approach. The heritability estimates obtained by the multivariate analysis with MY1 were 0.19, 0.28, and 0.13 for MY1, STAY, and CM, respectively, whereas with the multivariate analysis using Y305 the estimates were 0.19, 0.31, and 0.14. The genetic correlations between MY1 and STAY, MY1 and CM, and STAY and CM were 0.38, 0.12, and -0.49, respectively. The genetic correlations between Y305 and STAY, Y305 and CM, and STAY and CM were 0.66, -0.25, and -0.52, respectively.
Calibrating First-Order Strong Lensing Mass Estimates in Clusters of Galaxies
NASA Astrophysics Data System (ADS)
Reed, Brendan; Remolian, Juan; Sharon, Keren; Li, Nan; SPT Clusters Cooperation
2018-01-01
We investigate methods to reduce the statistical and systematic errors inherent in using the Einstein radius as a first-order mass estimate in strong-lensing galaxy clusters. By finding an empirical universal calibration function, we aim to enable a first-order mass estimate for large cluster data sets in a fraction of the time and effort of full-scale strong-lensing mass modeling. We use 74 simulated clusters from the Argonne National Laboratory in a lens redshift slice of [0.159, 0.667] with various source redshifts in the range [1.23, 2.69]. From the simulated density maps, we calculate the exact mass enclosed within the Einstein radius. We find that the mass inferred from the Einstein radius alone produces an error width of ~39% with respect to the true mass. We explore an array of polynomial and exponential correction functions with dependence on cluster redshift and projected radii of the lensed images, aiming to reduce the statistical and systematic uncertainty. We find that the error on the mass inferred from the Einstein radius can be reduced significantly by using a universal correction function. Our study has implications for current and future large galaxy cluster surveys aiming to measure cluster mass and the mass-concentration relation.
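The quantity being calibrated is the mass enclosed within the Einstein radius under the standard point-lens approximation. A self-contained sketch follows; angular-diameter distances are taken as given inputs (in practice they come from a cosmology calculation), and the constants are rounded.

```python
import numpy as np

# Physical constants (SI, rounded)
G = 6.674e-11                   # m^3 kg^-1 s^-2
C = 2.998e8                     # m s^-1
MPC = 3.086e22                  # m per megaparsec
MSUN = 1.989e30                 # kg per solar mass
ARCSEC = np.pi / (180 * 3600)   # radians per arcsecond

def einstein_mass(theta_e_arcsec, d_l_mpc, d_s_mpc, d_ls_mpc):
    """First-order strong-lensing mass estimate: the mass enclosed within
    the Einstein radius, M_E = Sigma_cr * pi * (D_l * theta_E)^2, with the
    critical surface density Sigma_cr = c^2 D_s / (4 pi G D_l D_ls).
    Angular-diameter distances are inputs in Mpc; returns solar masses."""
    theta = theta_e_arcsec * ARCSEC
    d_l, d_s, d_ls = (d * MPC for d in (d_l_mpc, d_s_mpc, d_ls_mpc))
    sigma_cr = C**2 * d_s / (4 * np.pi * G * d_l * d_ls)
    return sigma_cr * np.pi * (d_l * theta) ** 2 / MSUN
```

The calibration functions discussed in the abstract would then rescale this estimate as a function of lens redshift and image radii.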
A comparison of zero-order, first-order, and monod biotransformation models
Bekins, B.A.; Warren, E.; Godsy, E.M.
1998-01-01
Under some conditions, a first-order kinetic model is a poor representation of biodegradation in contaminated aquifers. Although it is well known that the assumption of first-order kinetics is valid only when the substrate concentration, S, is much less than the half-saturation constant, Ks, this assumption is often made without verification of this condition. We present a formal error analysis showing that the relative error in the first-order approximation is S/Ks and in the zero-order approximation the error is Ks/S. We then examine the problems that arise when the first-order approximation is used outside the range for which it is valid. A series of numerical simulations comparing results of first- and zero-order rate approximations to Monod kinetics for a real data set illustrates that if concentrations observed in the field are higher than Ks, it may be better to model degradation using a zero-order rate expression. Compared with Monod kinetics, extrapolation of a first-order rate to lower concentrations under-predicts the biotransformation potential, while extrapolation to higher concentrations may grossly over-predict the transformation rate. A summary of solubilities and Monod parameters for aerobic benzene, toluene, and xylene (BTX) degradation shows that the a priori assumption of first-order degradation kinetics at sites contaminated with these compounds is not valid. In particular, of six published values of Ks for toluene, only one is greater than 2 mg/L, indicating that when toluene is present at concentrations greater than about a part per million, the assumption of first-order kinetics may be invalid.
Finally, we apply an existing analytical solution for steady-state one-dimensional advective transport with Monod degradation kinetics to a field data set.
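The error analysis above has a compact algebraic form: for the Monod rate v = vmax*S/(Ks + S), the relative errors of the first- and zero-order approximations reduce exactly to S/Ks and Ks/S. A short sketch verifies this (all parameter values illustrative):

```python
def monod(S, vmax=1.0, Ks=2.0):
    """Monod kinetics: rate = vmax * S / (Ks + S)."""
    return vmax * S / (Ks + S)

def approximation_errors(S, vmax=1.0, Ks=2.0):
    """Relative errors of the first-order (vmax/Ks * S) and zero-order
    (vmax) approximations with respect to the Monod rate. Algebraically
    these equal S/Ks and Ks/S, respectively."""
    v = monod(S, vmax, Ks)
    first_order = (vmax / Ks) * S
    zero_order = vmax
    return (first_order - v) / v, (zero_order - v) / v

# For S << Ks the first-order error S/Ks is small; for S >> Ks the
# zero-order error Ks/S is small; at S = Ks both are off by 100%.
e1, e0 = approximation_errors(S=0.2, Ks=2.0)
```

This makes the paper's practical rule transparent: whichever ratio (S/Ks or Ks/S) is small tells you which approximation is defensible.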
Evaluation of the kinetic oxidation of aqueous volatile organic compounds by permanganate.
Mahmoodlu, Mojtaba G; Hassanizadeh, S Majid; Hartog, Niels
2014-07-01
The use of permanganate solutions for in-situ chemical oxidation (ISCO) is a well-established groundwater remediation technology, particularly for targeting chlorinated ethenes. The kinetics of oxidation reactions is an important ISCO remediation design aspect that affects the efficiency and oxidant persistence. The overall rate of the ISCO reaction between oxidant and contaminant is typically described using a second-order kinetic model, while the second-order rate constant is determined experimentally by means of a pseudo-first-order approach. However, earlier studies of chlorinated hydrocarbons have yielded a wide range of values for the second-order rate constants. Also, there is limited insight into the kinetics of permanganate reactions with fuel-derived groundwater contaminants such as toluene and ethanol. In this study, batch experiments were carried out to investigate and compare the oxidation kinetics of aqueous trichloroethylene (TCE), ethanol, and toluene in an aqueous potassium permanganate solution. The overall second-order rate constants were determined directly by fitting a second-order model to the data, instead of the typically used pseudo-first-order approach. The second-order reaction rate constants (M⁻¹ s⁻¹) for TCE, toluene, and ethanol were 8.0×10⁻¹, 2.5×10⁻⁴, and 6.5×10⁻⁴, respectively. Results showed that the inappropriate use of the pseudo-first-order approach in several previous studies produced biased estimates of the second-order rate constants. In our study, this error was expressed as a function of the extent (P/N) to which the reactant concentrations deviated from the stoichiometric ratio of each oxidation reaction. The error associated with the inappropriate use of the pseudo-first-order approach is negatively correlated with the P/N ratio and reached up to 25% of the estimated second-order rate constant in some previous studies of TCE oxidation.
Based on our results, a similar relation is valid for the other volatile organic compounds studied.
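The bias from forcing a pseudo-first-order fit onto second-order data can be reproduced numerically. The sketch below simulates second-order kinetics with only a modest oxidant excess (P/N near 1), then extracts a pseudo-first-order rate constant the conventional way; the reaction parameters are arbitrary illustrations, not the measured rate constants.

```python
import numpy as np

def simulate_second_order(k, c0, ox0, dt=1.0, n=2000):
    """Explicit Euler integration of second-order kinetics
    d[C]/dt = -k [C][Ox], d[Ox]/dt = -k [C][Ox] (1:1 stoichiometry
    assumed for simplicity). Returns time and contaminant arrays."""
    c, ox = c0, ox0
    cs = [c0]
    for _ in range(n):
        r = k * c * ox
        c, ox = c - r * dt, ox - r * dt
        cs.append(c)
    return np.arange(n + 1) * dt, np.array(cs)

# Pseudo-first-order estimate: slope of ln[C] vs t divided by the
# *initial* oxidant concentration. With only a 2:1 oxidant excess the
# oxidant is substantially depleted, so the estimate is biased low
# relative to the true second-order constant k = 0.05.
t, c = simulate_second_order(k=0.05, c0=1.0, ox0=2.0)
k_pseudo = -np.polyfit(t, np.log(c), 1)[0] / 2.0
```

Rerunning with a large oxidant excess (and a correspondingly smaller time step) drives k_pseudo toward the true k, which is the condition the pseudo-first-order approach implicitly assumes.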
Rogue waves generation in a left-handed nonlinear transmission line with series varactor diodes
NASA Astrophysics Data System (ADS)
Onana Essama, B. G.; Atangana, J.; Biya Motto, F.; Mokhtari, B.; Cherkaoui Eddeqaqi, N.; Kofane, Timoleon C.
2014-07-01
We investigate the electromagnetic wave behavior and its characterization using the collective variables technique. Second-order dispersion and first- and second-order nonlinearities, which act strongly in a left-handed nonlinear transmission line with series varactor diodes, are taken into account. Four frequency ranges have been found. The first gives the so-called energetic soliton, due to a perfect combination of second-order dispersion and first-order nonlinearity. The second frequency range presents a dispersive soliton, leading to the collapse of the electromagnetic wave in the third frequency range. The fourth range, however, exhibits physical conditions able to provoke the generation of wave trains containing particular waves: rogue waves. Moreover, we demonstrate that the number of rogue waves increases with frequency. The soliton thereafter gains relative stability when second-order nonlinearity comes into play with specific values in the fourth frequency range. Furthermore, the stability conditions of the electromagnetic wave at high frequencies are also discussed.
Seidler, Tomasz; Stadnicka, Katarzyna; Champagne, Benoît
2014-05-13
The linear [χ(1)] and second-order nonlinear [χ(2)] optical susceptibilities of the 2-methyl-4-nitroaniline (MNA) crystal are calculated within the local field theory, which consists of first computing the molecular properties, accounting for the dressing effects of the surroundings, and then taking into account the local field effects. Several aspects of these calculations are tackled with the aim of monitoring the convergence of the χ(1) and χ(2) predictions with respect to experiment by accounting for the effects of (i) the dressing field within successive approximations, (ii) the first-order ZPVA corrections, and (iii) the geometry. With respect to the reference CCSD-based results, besides double hybrid functionals, the most reliable exchange-correlation functionals are LC-BLYP for the static χ(1) and CAM-B3LYP (and M05-2X, to a lesser extent) for the dynamic χ(1), but they strongly underestimate χ(2). Double hybrids perform better for χ(2) but not necessarily for χ(1), and, moreover, their performance is very similar to that of MP2, which is known to slightly overestimate β, with respect to high-level coupled-cluster calculations, and therefore χ(2). Other XC functionals with less HF exchange perform poorly, with overestimations/underestimations of χ(1)/χ(2), whereas the HF method leads to underestimations of both. The first-order ZPVA corrections, estimated at the B3LYP level, are usually small but not negligible. Indeed, after ZPVA corrections, the molecular polarizabilities and first hyperpolarizabilities increase by 2% and 5%, respectively, whereas their impact is magnified in the macroscopic responses, with enhancements of χ(1) by up to 5% and of χ(2) by as much as 10%-12% at λ = 1064 nm. The geometry also plays a key role in predicting accurate susceptibilities, particularly for push-pull π-conjugated compounds such as MNA.
So, the geometry optimized using periodic boundary conditions is characterized by an overestimated bond length alternation, which gives larger molecular properties and even larger macroscopic responses, because of the amplification effect of the local field factors. Our best estimates, based on experimental geometries, the charge dressing field, ZPVA corrections, and CCSD molecular properties, lead to an overestimation of χ(1) by 12% in the static limit and 7% at λ = 1064 nm. For χ(2), the difference with respect to experiment is satisfactory and of the order of one standard deviation.
A Maximum Entropy Test for Evaluating Higher-Order Correlations in Spike Counts
Onken, Arno; Dragoi, Valentin; Obermayer, Klaus
2012-01-01
Evaluating the importance of higher-order correlations of neural spike counts has been notoriously hard. A large number of samples is typically required in order to estimate higher-order correlations and the resulting information-theoretic quantities. In typical electrophysiology data sets with many experimental conditions, however, the number of samples in each condition is rather small. Here we describe a method that allows one to quantify evidence for higher-order correlations in exactly these cases. We construct a family of reference distributions: maximum entropy distributions, which are constrained only by marginals and by linear correlations as quantified by the Pearson correlation coefficient. We devise a Monte Carlo goodness-of-fit test, which tests - for a given divergence measure of interest - whether the experimental data lead to rejection of the null hypothesis that they were generated by one of the reference distributions. Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. Subsequently, we apply our method to spike count data recorded with multielectrode arrays from the primary visual cortex of anesthetized cat during an adaptation experiment. Using mutual information as a divergence measure, we find that there are spike count bin sizes at which the maximum entropy hypothesis can be rejected for a substantial number of neuronal pairs. These results demonstrate that higher-order correlations can matter when estimating information-theoretic quantities in V1. They also show that our test is able to detect their presence in typical in-vivo data sets, where the number of samples is too small to estimate higher-order correlations directly. PMID:22685392
NASA Astrophysics Data System (ADS)
Sun, Yong; Ma, Zilin; Tang, Gongyou; Chen, Zheng; Zhang, Nong
2016-07-01
Since the main power source of a hybrid electric vehicle (HEV) is the power battery, the predicted performance of the power battery, especially state-of-charge (SOC) estimation, has attracted great attention in the HEV field. However, SOC estimates are often not sufficiently precise, which degrades the running performance of the HEV. A variable-structure extended Kalman filter (VSEKF)-based estimation method, which can be used to analyze the SOC of a lithium-ion battery under a fixed driving condition, is presented. First, a general lower-order battery equivalent circuit model (GLM), which includes a column accumulation model, an open-circuit voltage model, and the SOC output model, is established, and the off-line and online model parameters are calculated with hybrid pulse power characterization (HPPC) test data. Next, a VSEKF estimation method for SOC, which integrates the ampere-hour (Ah) integration method and the extended Kalman filter (EKF) method, is executed with different adaptive weighting coefficients, determined according to the different values of open-circuit voltage obtained in the corresponding charging or discharging processes. According to the experimental analysis, faster convergence and more accurate simulation results are obtained with the VSEKF method. The error rate of SOC estimation with the VSEKF method falls in the range of 5% to 10%, compared with 20% to 30% for the EKF method and the Ah integration method. In summary, the accuracy of SOC estimation for the lithium-ion battery cell and pack obtained using the VSEKF method is significantly improved compared with the Ah integration method and the EKF method, and the VSEKF method can be widely used for SOC estimation in the lithium-ion packs of HEVs under practical driving conditions.
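The Ah-integration component of the hybrid estimator above is plain Coulomb counting. A minimal sketch follows; the sign convention (discharge current positive) and all values are assumptions for illustration.

```python
def soc_ah_integration(soc0, currents, dt, capacity_ah):
    """Ampere-hour (Coulomb counting) SOC estimate:
        SOC(t) = SOC0 - (1/Q) * integral of I dt,
    with discharge current positive. currents in A, dt in s, capacity
    in Ah. Open-loop: current-sensor bias and an uncertain SOC0 make
    the estimate drift, which is why it is blended with a filter."""
    q = capacity_ah * 3600.0  # Ah -> coulombs
    soc = soc0
    trace = [soc]
    for i in currents:
        soc -= i * dt / q
        trace.append(soc)
    return trace
```

Per the abstract, the VSEKF corrects this open-loop estimate with an EKF voltage-based update, weighting the two through adaptive coefficients tied to the open-circuit voltage.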
Belke, T W
2000-05-01
Six male Wistar rats were exposed to different orders of reinforcement schedules to investigate whether estimates from Herrnstein's (1970) single-operant matching law equation would vary systematically with schedule order. Reinforcement schedules were arranged in orders of increasing and decreasing reinforcement rate. Subsequently, all rats were exposed to a single reinforcement schedule within a session to determine within-session changes in responding. For each condition, the operant was lever pressing and the reinforcing consequence was the opportunity to run for 15 s. Estimates of k and R_O were higher when reinforcement schedules were arranged in order of increasing reinforcement rate. Within a session on a single reinforcement schedule, response rates increased between the beginning and the end of a session. A positive correlation between the difference in parameters between schedule orders and the difference in response rates within a session suggests that the within-session change in response rates may be related to the difference in the asymptotes. These results call into question the validity of parameter estimates from Herrnstein's (1970) equation when reinforcer efficacy changes within a session.
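Herrnstein's (1970) single-operant hyperbola and the estimation of its two parameters can be sketched with a least-squares fit. The data points below are synthetic, generated from assumed parameter values purely to show the fitting step, not the rats' data.

```python
import numpy as np
from scipy.optimize import curve_fit

def herrnstein(r, k, r_o):
    """Herrnstein's single-operant equation: response rate
    B = k*r / (r + R_O), where r is the obtained reinforcement rate,
    k the asymptotic response rate, and R_O the rate of background
    (extraneous) reinforcement."""
    return k * r / (r + r_o)

# Hypothetical reinforcement rates (per hour) and noiseless response
# rates generated from assumed k = 60, R_O = 30.
r = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
b = herrnstein(r, k=60.0, r_o=30.0)
(k_hat, ro_hat), _ = curve_fit(herrnstein, r, b, p0=[50.0, 20.0])
```

The abstract's finding is that k_hat and ro_hat depend on schedule order, which this curve assumes away: the fit presumes reinforcer efficacy (and hence the asymptote k) is constant within and across sessions.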
The manual describes two microcomputer programs written to estimate the performance of electrostatic precipitators (ESPs): the first, to estimate the electrical conditions for round discharge electrodes in the ESP; and the second, a modification of the EPA/SRI ESP model, to estim...
NASA Astrophysics Data System (ADS)
Lafitte, Pauline; Melis, Ward; Samaey, Giovanni
2017-07-01
We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.
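The two-step structure (a few damped inner steps, then an outer extrapolation) can be sketched for a scalar stiff relaxation problem. The first-order outer extrapolation below stands in for the paper's arbitrary-order Runge-Kutta outer method, and all step sizes are illustrative.

```python
def projective_euler_step(u, rhs, dt_inner, n_inner, dt_outer):
    """One projective forward Euler step: take n_inner small inner steps
    to damp the stiff components, estimate du/dt from the last two inner
    iterates, then extrapolate over the remainder of the outer interval."""
    for _ in range(n_inner):
        u_prev = u
        u = u + dt_inner * rhs(u)
    du = (u - u_prev) / dt_inner            # time-derivative estimate
    return u + (dt_outer - n_inner * dt_inner) * du

# Stiff relaxation u' = -(u - 1)/eps toward the equilibrium u = 1,
# standing in for the stiff BGK source term.
eps = 1e-3
rhs = lambda u: -(u - 1.0) / eps
u = 0.0
for _ in range(10):
    # dt_outer = 0.05 is 50x the stiff time scale eps, yet the step is
    # stable because the inner steps (dt_inner ~ eps) damp the fast mode
    # before the derivative is extrapolated.
    u = projective_euler_step(u, rhs, dt_inner=0.9 * eps,
                              n_inner=3, dt_outer=0.05)
```

This mirrors the paper's point that the outer step is limited by the slow (hyperbolic/CFL) dynamics rather than by the stiffness of the source term.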
Higher derivative field theories: degeneracy conditions and classes
NASA Astrophysics Data System (ADS)
Crisostomi, Marco; Klein, Remko; Roest, Diederik
2017-06-01
We provide a full analysis of ghost free higher derivative field theories with coupled degrees of freedom. Assuming the absence of gauge symmetries, we derive the degeneracy conditions in order to evade the Ostrogradsky ghosts, and analyze which (non)trivial classes of solutions this allows for. It is shown explicitly how Lorentz invariance avoids the propagation of "half" degrees of freedom. Moreover, for a large class of theories, we construct the field redefinitions and/or (extended) contact transformations that put the theory in a manifestly first order form. Finally, we identify which class of theories cannot be brought to first order form by such transformations.
Ferroelectricity in corundum derivatives
NASA Astrophysics Data System (ADS)
Ye, Meng; Vanderbilt, David
2016-04-01
The search for new ferroelectric (FE) materials holds promise for broadening our understanding of FE mechanisms and extending the range of application of FE materials. Here we investigate a class of A B O3 and A2B B'O6 materials that can be derived from the X2O3 corundum structure by mixing two or three ordered cations on the X site. Most such corundum derivatives have a polar structure, but it is unclear whether the polarization is reversible, which is a requirement for a FE material. In this paper, we propose a method to study the FE reversal path of materials in the corundum derivative family. We first categorize the corundum derivatives into four classes and show that only two of these allow for the possibility of FE reversal. We then calculate the energy profile and energy barrier of the FE reversal path using first-principles density functional methods with a structural constraint. Furthermore, we identify several empirical measures that can provide a rule of thumb for estimating the energy barriers. Finally, the conditions under which the magnetic ordering is compatible with ferroelectricity are determined. These results lead us to predict several potentially new FE materials.
NASA Astrophysics Data System (ADS)
Gomila, Rodrigo; Arancibia, Gloria; Nehler, Mathias; Bracke, Rolf; Stöckhert, Ferdinand
2016-04-01
Fault zones and their related structural permeability play a leading role in the migration of fluids through the continental crust. A first approximation to understanding the structural permeability conditions of a fault-related fracture mesh, and to estimating its hydraulic properties (i.e. palaeopermeability and fracture porosity), is 2D analysis of its veinlets, usually carried out in thin section. These estimations are based on geometrical parameters of the veinlets, such as average fracture density, length and aperture, which can be statistically modelled assuming penny-shaped fractures of constant radius and aperture within an anisotropic fracture system. In this model, permeability is related to fracture connectivity, fracture length and the cube of the fracture apertures; the estimated values therefore carry inaccuracies inherent to the method. Studying the real (3D) spatial distribution of the veinlets of the fault-related fracture mesh, which is feasible with micro-CT analysis, is thus a first-order step both to unravel the real structural permeability conditions of a fault zone and to validate previous estimations made from 2D thin-section analyses. This contribution presents preliminary results on a fault-related fracture mesh and its 3D spatial distribution in the damage zone of the Jorgillo Fault (JF), an ancient subvertical left-lateral strike-slip fault exposed in the Atacama Fault System in northern Chile. The JF is a ca. 20 km long NNW-striking strike-slip fault with sinistral displacement of ca. 4 km. The methodology consisted of drilling vertically oriented plugs, 5 mm in diameter, at different distances from the boundary between the JF core and its damage zone. Each specimen was then scanned with an x-ray micro-CT scanner (ProCon X-Ray CTalpha) in order to assess the fracture mesh.
X-rays were generated in a transmission-target x-ray tube with acceleration voltages ranging from 90 to 120 kV and target currents from 40 to 60 μA. The focal spot size on the diamond/tungsten target was about 5 μm. The x-ray beam was filtered with a 1 mm aluminium plate before passing through the sample. 1200 x-ray images were taken during a full rotation of the sample using an amorphous silicon flat-panel detector with 1516x1900 pixels, resulting in a voxel resolution of about 8 μm in the 3D data reconstructed from the images. Future work will be aimed at segmenting the fault-related fracture mesh in the images and then estimating its hydraulic properties at the time of fracture sealing. Acknowledgements: This work is a contribution to the CONICYT-BMBF International Scientific Collaborative Research Program Project PCCI130025/FKZ01DN14033 and the FONDAP-CONICYT Project 15090013.
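For context, the aperture-cubed dependence mentioned above comes from the parallel-plate "cubic law"; a minimal sketch (the apertures and scanline length below are illustrative assumptions, not data from the Jorgillo Fault):

```python
def cubic_law_permeability(apertures_m, scanline_length_m):
    """Parallel-plate ('cubic law') permeability estimate for a set of
    parallel fractures crossed by a scanline of length L:

        k = sum(a_i**3) / (12 * L)   [m^2]
    """
    return sum(a**3 for a in apertures_m) / (12.0 * scanline_length_m)

# e.g. ten veinlets of 50 micron aperture along a 5 mm plug diameter
k = cubic_law_permeability([50e-6] * 10, 5e-3)   # ~2.1e-11 m^2
```

The cubic dependence on aperture is why small errors in 2D aperture measurements propagate strongly into palaeopermeability estimates, motivating the 3D micro-CT validation described above.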
Numerical scheme approximating solution and parameters in a beam equation
NASA Astrophysics Data System (ADS)
Ferdinand, Robert R.
2003-12-01
We present a mathematical model which describes vibration in a metallic beam about its equilibrium position. This model takes the form of a nonlinear second-order (in time) and fourth-order (in space) partial differential equation with boundary and initial conditions. A finite-element Galerkin approximation scheme is used to estimate model solution. Infinite-dimensional model parameters are then estimated numerically using an inverse method procedure which involves the minimization of a least-squares cost functional. Numerical results are presented and future work to be done is discussed.
The discrete one-sided Lipschitz condition for convex scalar conservation laws
NASA Technical Reports Server (NTRS)
Brenier, Yann; Osher, Stanley
1986-01-01
Physical solutions to convex scalar conservation laws satisfy a one-sided Lipschitz condition (OSLC) that enforces both the entropy condition and the boundedness of their variation. Consistency with this condition is therefore desirable for a numerical scheme; it was proved for both the Godunov and the Lax-Friedrichs scheme--and, in a weakened version, for the Roe scheme--all of them being only first order accurate. A new, fully second order scheme that is consistent with the OSLC is introduced here. The modified equation is considered and shows interesting features. Another second order scheme is then considered and numerical results are discussed.
Modeling and simulation of continuous wave velocity radar based on third-order DPLL
NASA Astrophysics Data System (ADS)
Di, Yan; Zhu, Chen; Hong, Ma
2015-02-01
The second-order digital phase-locked loop (DPLL) widely used in traditional continuous-wave (CW) velocity radar performs poorly in highly dynamic conditions; a third-order DPLL can improve the performance. First, the echo signal model of CW radar is given. Second, theoretical derivations of the tracking performance under different velocity conditions are given. Finally, a simulation model of CW radar is established with the Simulink tool, and the tracking performance of the two kinds of DPLL under different acceleration and jerk conditions is studied with this model. The results show that the third-order DPLL has better performance in high dynamic conditions. This model provides a platform for further research on CW radar.
Litskas, V D; Karamanlis, X N; Batzias, G C; Tsiouris, S E
2013-10-01
Eprinomectin (EPM) is a veterinary drug currently licensed in many countries for the treatment of endo- and ecto-parasites in cattle. Despite notable evidence of its high toxicity to terrestrial and aquatic ecosystems, its environmental behavior and fate are currently unknown. In the present research, the dissipation of EPM was studied in three soils and in cattle manure by using the OECD 307 guideline and the recently developed European Medicines Agency (EMA/CVMP/ERA/430327) guideline, respectively. The procedure presented by the FOrum for Co-ordination of pesticide models and their USe (FOCUS) was adopted for estimating the EPM degradation kinetics in soil and cattle manure. The EPM dissipation in soil was best described by the SFO (Simple First Order) and the HS (Hockey Stick) models, under aerobic and anaerobic conditions, respectively. The EPM dissipation in cattle manure was best described by the FOMC (First Order Multi Compartment) model. The Dissipation Time for 50% of the initial EPM mass (DT50) ranged from 38 to 53 days under aerobic and from 691 to 1491 days under anaerobic conditions; the DT50 for EPM in cattle manure was 333 days. Therefore, EPM can be characterized as moderately to highly persistent in soil, depending on soil type, oxygen content (aerobic or anaerobic conditions) and microbial activity. Moreover, EPM resists dissipation in cattle manure, resulting in a high load in soil after manure application to agricultural land (or direct defecation in grassland). Consequently, the high potential for EPM accumulation in soil and cattle manure should be considered when assessing the environmental risk of the drug. © 2013.
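For reference, DT50 values of the kind reported above follow from the fitted kinetic parameters via standard FOCUS formulas; a small sketch (the rate constants below are illustrative, not the paper's fitted values):

```python
import math

def dt50_sfo(k):
    """Single First-Order: C(t) = C0*exp(-k*t)  ->  DT50 = ln(2)/k."""
    return math.log(2) / k

def dt50_fomc(alpha, beta):
    """First-Order Multi-Compartment (Gustafson-Holden):
    C(t) = C0 / (1 + t/beta)**alpha  ->  DT50 = beta*(2**(1/alpha) - 1)."""
    return beta * (2 ** (1 / alpha) - 1)

d_sfo = dt50_sfo(0.0154)        # ~45 days, within the reported aerobic 38-53 day range
d_fomc = dt50_fomc(1.0, 10.0)   # reduces to DT50 = beta when alpha = 1
```

The biphasic HS (hockey stick) model uses the SFO formula piecewise, with a rate change at a breakpoint time.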
McElreath, Richard; Bell, Adrian V; Efferson, Charles; Lubell, Mark; Richerson, Peter J; Waring, Timothy
2008-11-12
The existence of social learning has been confirmed in diverse taxa, from apes to guppies. In order to advance our understanding of the consequences of social transmission and evolution of behaviour, however, we require statistical tools that can distinguish among diverse social learning strategies. In this paper, we advance two main ideas. First, social learning is diverse, in the sense that individuals can take advantage of different kinds of information and combine them in different ways. Examining learning strategies for different information conditions illuminates the more detailed design of social learning. We construct and analyse an evolutionary model of diverse social learning heuristics, in order to generate predictions and illustrate the impact of design differences on an organism's fitness. Second, in order to eventually escape the laboratory and apply social learning models to natural behaviour, we require statistical methods that do not depend upon tight experimental control. Therefore, we examine strategic social learning in an experimental setting in which the social information itself is endogenous to the experimental group, as it is in natural settings. We develop statistical models for distinguishing among different strategic uses of social information. The experimental data strongly suggest that most participants employ a hierarchical strategy that uses both average observed pay-offs of options as well as frequency information, the same model predicted by our evolutionary analysis to dominate a wide range of conditions.
Second-order singular perturbative theory for gravitational lenses
NASA Astrophysics Data System (ADS)
Alard, C.
2018-03-01
The extension of the singular perturbative approach to the second order is presented in this paper. The general expansion to the second order is derived, and the second-order expansion is considered as a small correction to the first-order expansion. Using this approach, it is demonstrated that in practice the second-order expansion is reducible to a first-order expansion via a redefinition of the first-order perturbative fields. Even though the second-order correction is small in typical applications, the reducibility of the second-order expansion to the first-order expansion indicates a potential degeneracy issue. In general, this degeneracy is hard to break. A useful and simple second-order approximation is the thin source approximation, which offers a direct estimation of the correction. The practical application of the corrections derived in this paper is illustrated using an elliptical NFW lens model. The second-order perturbative expansion provides a noticeable improvement, even for the simplest case of the thin source approximation. To conclude, it is clear that for accurate modelling of gravitational lenses using the perturbative method, the second-order perturbative expansion should be considered. In particular, an evaluation of the degeneracy due to the second-order term should be performed, for which the thin source approximation is particularly useful.
Quantifying soil carbon loss and uncertainty from a peatland wildfire using multi-temporal LiDAR
Reddy, Ashwan D.; Hawbaker, Todd J.; Wurster, F.; Zhu, Zhiliang; Ward, S.; Newcomb, Doug; Murray, R.
2015-01-01
Peatlands are a major reservoir of global soil carbon, yet account for just 3% of global land cover. Human impacts like draining can hinder the ability of peatlands to sequester carbon and expose their soils to fire under dry conditions. Estimating soil carbon loss from peat fires can be challenging due to uncertainty about pre-fire surface elevations. This study uses multi-temporal LiDAR to obtain pre- and post-fire elevations and estimate soil carbon loss caused by the 2011 Lateral West fire in the Great Dismal Swamp National Wildlife Refuge, VA, USA. We also determine how LiDAR elevation error affects uncertainty in our carbon loss estimate by randomly perturbing the LiDAR point elevations and recalculating elevation change and carbon loss, iterating this process 1000 times. Using LiDAR, we calculated a total loss of 1.10 Tg C across the 25 km2 burned area. The fire burned an average of 47 cm deep, equivalent to 44 kg C/m2, a value larger than that of the 1997 Indonesian peat fires (29 kg C/m2). Carbon loss estimated via the First-Order Fire Effects Model (FOFEM) was 0.06 Tg C. Propagating the LiDAR elevation error to the carbon loss estimates, we calculated a standard deviation of 0.00009 Tg C, equivalent to 0.008% of total carbon loss. We conclude that LiDAR elevation error is not a significant contributor to uncertainty in soil carbon loss under severe fire conditions with substantial peat consumption. However, uncertainties may be more substantial when soil elevation loss is of a similar or smaller magnitude than the reported LiDAR error.
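The error-propagation step can be sketched as follows: perturb both DEMs with Gaussian noise at an assumed LiDAR vertical error, recompute the loss, and take the spread over many repeats. The grid, error magnitude and carbon density below are illustrative stand-ins, not the study's data (94 kg C/m3 simply reproduces 44 kg C/m2 over 47 cm).

```python
import numpy as np

rng = np.random.default_rng(42)

def carbon_loss(z_pre, z_post, cell_area_m2, kg_c_per_m3):
    """Soil C loss (kg C) implied by an elevation drop between two DEMs."""
    dz = np.maximum(z_pre - z_post, 0.0)   # burned depth per cell
    return np.sum(dz) * cell_area_m2 * kg_c_per_m3

# toy 10,000-cell grid standing in for the LiDAR DEMs
n = 10_000
z_pre = rng.uniform(1.0, 2.0, n)
z_post = z_pre - 0.47                       # ~47 cm mean burn depth
loss0 = carbon_loss(z_pre, z_post, cell_area_m2=1.0, kg_c_per_m3=94.0)

# propagate LiDAR vertical error by Monte Carlo perturbation of both surfaces
sigma_z = 0.15                              # assumed per-point error (m)
losses = np.array([
    carbon_loss(z_pre + rng.normal(0, sigma_z, n),
                z_post + rng.normal(0, sigma_z, n), 1.0, 94.0)
    for _ in range(200)
])
rel_uncertainty = losses.std() / loss0
```

Because independent per-cell errors average out over many cells, the relative uncertainty of the summed loss is small, which is consistent with the study's conclusion.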
Estimation of the number of periodic solutions of first-order differential equations
NASA Astrophysics Data System (ADS)
Ivanov, Gennady; Alferov, Gennady; Gorovenko, Polina; Sharlay, Artem
2018-05-01
The paper deals with first-order differential equations whose right-hand side is periodic in time and continuous in the set of arguments. V.A. Pliss obtained the first results for a particular class of equations and showed that a number of theorems cannot be extended further. In this paper, we reduce the restrictions on the degree of smoothness of the right-hand side of the equation and obtain upper and lower bounds on the number of possible periodic solutions.
NASA Technical Reports Server (NTRS)
Botez, D.
1982-01-01
A highly accurate analytical expression for the effective refractive index in InGaAsP/InP DH lasers emitting in the 1.2-1.6 micron range is presented, as a function of emission wavelength and active layer thickness. This closed-form expression is used to derive simple wavelength-independent expressions for the first-order mode cutoff conditions of various lateral waveguides. The mode cutoff conditions are compared to experimental data from mode-stabilized 1.3 and 1.55 micron DH lasers.
NASA Astrophysics Data System (ADS)
Lin, Zhi; Zhang, Qinghai
2017-09-01
We propose high-order finite-volume schemes for numerically solving the steady-state advection-diffusion equation with nonlinear Robin boundary conditions. Although the original motivation comes from a mathematical model of blood clotting, the nonlinear boundary conditions may also apply to other scientific problems. The main contribution of this work is a generic algorithm for generating third-order, fourth-order, and even higher-order explicit ghost-filling formulas to enforce nonlinear Robin boundary conditions in multiple dimensions. Under the framework of finite volume methods, this appears to be the first algorithm of its kind. Numerical experiments on boundary value problems show that the proposed fourth-order formula can be much more accurate and efficient than a simple second-order formula. Furthermore, the proposed ghost-filling formulas may also be useful for solving other partial differential equations.
Estimation of coefficients and boundary parameters in hyperbolic systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Murphy, K. A.
1984-01-01
Semi-discrete Galerkin approximation schemes are considered in connection with inverse problems for the estimation of spatially varying coefficients and boundary condition parameters in second order hyperbolic systems typical of those arising in 1-D surface seismic problems. Spline based algorithms are proposed for which theoretical convergence results along with a representative sample of numerical findings are given.
Estimating daily forest carbon fluxes using a combination of ground and remotely sensed data
NASA Astrophysics Data System (ADS)
Chirici, Gherardo; Chiesi, Marta; Corona, Piermaria; Salvati, Riccardo; Papale, Dario; Fibbi, Luca; Sirca, Costantino; Spano, Donatella; Duce, Pierpaolo; Marras, Serena; Matteucci, Giorgio; Cescatti, Alessandro; Maselli, Fabio
2016-02-01
Several studies have demonstrated that Monteith's approach can efficiently predict forest gross primary production (GPP), while the modeling of net ecosystem production (NEP) is more critical, requiring the additional simulation of forest respirations. The NEP of different forest ecosystems in Italy is here simulated through the combined use of a remote sensing driven parametric model (modified C-Fix) and a biogeochemical model (BIOME-BGC). The outputs of the two models, which simulate forests in quasi-equilibrium conditions, are combined to estimate the carbon fluxes of actual conditions using information on the existing woody biomass. The estimates derived from the methodology have been tested against daily reference GPP and NEP data collected through the eddy correlation technique at five study sites in Italy. The first test concerned the theoretical validity of the simulation approach at both annual and daily time scales and was performed using optimal model drivers (i.e., collected or calibrated over the site measurements). Next, the test was repeated to assess the operational applicability of the methodology when driven by spatially extended data sets (i.e., data derived from existing wall-to-wall digital maps). Good estimation accuracy was generally obtained for GPP and NEP when using optimal model drivers. The use of spatially extended data sets degrades the accuracy to a varying degree, which is duly characterized. The model drivers with the most influence on the flux modeling strategy are, in increasing order of importance, forest type, soil features, meteorology, and forest woody biomass (growing stock volume).
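Monteith's approach mentioned above predicts GPP as the product of a maximum light-use efficiency, environmental scalars and absorbed radiation; a one-line sketch (parameter values purely illustrative, not those of modified C-Fix):

```python
def gpp_monteith(par_mj_m2_day, fapar, eps_max_gc_mj=1.2,
                 t_scalar=1.0, w_scalar=1.0):
    """Monteith light-use-efficiency GPP (g C m-2 day-1):
    GPP = eps_max * f(T) * f(W) * fAPAR * PAR, where the scalars
    (0-1) down-regulate for temperature and water stress."""
    return eps_max_gc_mj * t_scalar * w_scalar * fapar * par_mj_m2_day

gpp = gpp_monteith(par_mj_m2_day=8.0, fapar=0.6)   # 1.2 * 0.6 * 8 g C m-2 day-1
```

NEP would additionally subtract the autotrophic and heterotrophic respiration terms, which is the part the abstract describes as requiring the biogeochemical model.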
An extended stochastic method for seismic hazard estimation
NASA Astrophysics Data System (ADS)
Abd el-aal, A. K.; El-Eraki, M. A.; Mostafa, S. I.
2015-12-01
In this contribution, we develop an extended stochastic technique for seismic hazard assessment. The technique builds on the stochastic method of Boore (2003, "Simulation of ground motion using the stochastic method", Appl. Geophys. 160:635-676). The essential aim of the extended stochastic technique is to simulate ground motion in order to minimize the consequences of future earthquakes. The first step of this technique is to define the seismic sources which most affect the study area. Then, the maximum expected magnitude is defined for each of these seismic sources, followed by estimation of the ground motion using an empirical attenuation relationship. Finally, site amplification is implemented in calculating the peak ground acceleration (PGA) at each site of interest. We tested and applied this technique at the cities of Cairo, Suez, Port Said, Ismailia, Zagazig and Damietta to predict the ground motion, and at Cairo, Zagazig and Damietta to estimate the maximum peak ground acceleration under actual soil conditions. In addition, 0.5, 1, 5, 10 and 20% damping median response spectra are estimated using the extended stochastic simulation technique. The highest calculated acceleration value at bedrock conditions is found at Suez city, 44 cm s-2; acceleration values decrease towards the north of the study area, reaching 14.1 cm s-2 at Damietta city. These results are in agreement with previous seismic hazard studies in northern Egypt. This work can be used for seismic risk mitigation and earthquake engineering purposes.
Application of first order kinetics to characterize MTBE natural attenuation in groundwater
NASA Astrophysics Data System (ADS)
Metcalf, Meredith J.; Stevens, Graham J.; Robbins, Gary A.
2016-04-01
Methyl tertiary butyl ether (MTBE) was a gasoline oxygenate that became widely used in reformulated gasoline as a means to reduce air pollution in the 1990s. Unfortunately, many of the underground storage tanks containing reformulated gasoline experienced subsurface releases, which soon became a health concern given the increasing number of public and private water supplies containing MTBE. Many states, including Connecticut, responded by banning the use of MTBE as an additive. Although MTBE dissipates by natural attenuation, it has continued to be prevalent in groundwater long after the Connecticut ban in 2004. This study estimated the rate of natural attenuation in groundwater following the Connecticut ban by evaluating MTBE concentrations two years prior to and two years after the ban at eighty-three monitoring wells from twenty-two retail gasoline stations where MTBE contamination was observed. Sites chosen for this study had not undergone active remediation, ensuring no artificial influence on the natural attenuation processes that control the migration and dissipation of MTBE. Results indicate that MTBE has dissipated in the natural environment at more than 80% of the sites and at approximately 82% of the individual monitoring wells. In general, dissipation approximated first order kinetics. Dissipation half-lives, calculated using concentration data from the two-year period after the ban, ranged from approximately three weeks to just over seven years, with an average half-life of 7.3 months and little variability in the estimates across different site characteristics. The accuracy of first order estimates in predicting further MTBE dissipation was tested by comparing predicted concentrations with those observed after the two-year post-ban period; the predicted concentrations closely match the observed concentrations, which supports the use of first order kinetics for predictions of this nature.
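A first-order dissipation rate of the kind used here can be recovered from a monitoring record by log-linear regression; a self-contained sketch on synthetic data built with the study's average 7.3-month half-life (the monitoring record itself is made up):

```python
import math

def fit_first_order(times, concs):
    """Least-squares fit of ln C = ln C0 - k*t; returns (C0, k, half-life)."""
    n = len(times)
    y = [math.log(c) for c in concs]
    xm, ym = sum(times) / n, sum(y) / n
    k = -sum((t - xm) * (yi - ym) for t, yi in zip(times, y)) / \
        sum((t - xm) ** 2 for t in times)
    c0 = math.exp(ym + k * xm)
    return c0, k, math.log(2) / k

# synthetic record with a 7.3-month half-life (k = ln2/7.3 per month)
k_true = math.log(2) / 7.3
t = [0, 6, 12, 18, 24]                       # months since ban
c = [100 * math.exp(-k_true * ti) for ti in t]
c0, k, t_half = fit_first_order(t, c)
```

Extrapolating the fitted exponential beyond the calibration window is exactly the prediction test described in the abstract.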
Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.
Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh
2014-07-01
This study develops a procedure related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty associated with model structures of varying complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performs two-stage Monte-Carlo simulations to ensure predictive accuracy by obtaining behavior parameter sets, and then estimates CV values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavior parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model, WWQM) were compared based on data collected from a free water surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure: in this case, the simpler representation of reality (the first-order K-C model) results in higher uncertainty in the model's predictions. The CV-GLUE procedure is suggested to be a useful tool not only for designing constructed wetlands but also for other aspects of environmental management. Copyright © 2014 Elsevier Ltd. All rights reserved.
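A stripped-down version of the GLUE screening step is sketched below: sample parameter sets, keep the "behavioural" ones whose likelihood measure (here Nash-Sutcliffe efficiency, an assumed choice) exceeds a threshold, and summarise predictive uncertainty as the CV of their outputs. The toy first-order decay model and threshold are illustrative, not the paper's wetland models.

```python
import numpy as np

rng = np.random.default_rng(0)

def glue_cv(model, obs, prior_sampler, n_samples=5000, threshold=0.8):
    """GLUE-style screening: keep parameter sets whose Nash-Sutcliffe
    efficiency exceeds a threshold, then summarise predictive uncertainty
    as the coefficient of variation (CV) of the behavioural outputs."""
    behavioural = []
    ss_obs = np.sum((obs - obs.mean()) ** 2)
    for _ in range(n_samples):
        sim = model(prior_sampler())
        nse = 1.0 - np.sum((sim - obs) ** 2) / ss_obs
        if nse >= threshold:
            behavioural.append(sim)
    behavioural = np.array(behavioural)
    cv = behavioural.std(axis=0) / behavioural.mean(axis=0)
    return behavioural, cv

# toy first-order decay 'wetland' model: C_out = C_in * exp(-k * tau)
tau = np.linspace(0.5, 5.0, 20)          # residence times
obs = 100 * np.exp(-0.4 * tau)           # synthetic observations (k_true = 0.4)
model = lambda k: 100 * np.exp(-k * tau)
sets, cv = glue_cv(model, obs, lambda: rng.uniform(0.1, 0.8))
```

The characteristic CV of the paper summarises such pointwise CV values into a single uncertainty measure per model structure.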
Fatigue Analysis of Rotating Parts. A Case Study for a Belt Driven Pulley
NASA Astrophysics Data System (ADS)
Sandu, Ionela; Tabacu, Stefan; Ducu, Catalin
2017-10-01
The present study is focused on the life estimation of a rotating part, namely the coolant pump pulley, as a component of an engine assembly. The goal of the paper is to develop a model, supported by numerical analysis, capable of predicting the lifetime of the part. Starting from the functional drawing, CAD model and technical specifications of the part, a numerical model was developed, and MATLAB code was used to build a tool that applies the load over the selected area. The numerical analysis was performed in two steps. The first simulation concerned the inertia relief due to rotational motion about the shaft (of the pump); results from this simulation were saved, and the stress-strain state was used as the initial condition for the analysis with the load applied. The lifetime of a good part was estimated, and a defect was then introduced in order to investigate its influence on the working requirements. The defect was found to have little influence with respect to the prescribed lifetime.
Computational methods for the identification of spatially varying stiffness and damping in beams
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rosen, I. G.
1986-01-01
A numerical approximation scheme for the estimation of functional parameters in Euler-Bernoulli models for the transverse vibration of flexible beams with tip bodies is developed. The method permits the identification of spatially varying flexural stiffness and Voigt-Kelvin viscoelastic damping coefficients which appear in the hybrid system of ordinary and partial differential equations and boundary conditions describing the dynamics of such structures. An inverse problem is formulated as a least squares fit to data subject to constraints in the form of a vector system of abstract first order evolution equations. Spline-based finite element approximations are used to finite dimensionalize the problem. Theoretical convergence results are given and numerical studies carried out on both conventional (serial) and vector computers are discussed.
Kosaka, Ryo; Fukuda, Kyohei; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi
2013-01-01
In order to monitor the condition of a patient using a left ventricular assist system (LVAS), blood flow should be measured; however, a reliable determination of blood-flow rate has not been established. The purpose of the present study is to develop a noninvasive blood-flow meter for an axial flow blood pump, using a curved cannula with zero compensation. The flow meter exploits the centrifugal force generated by the flow in the curved cannula. Two strain gauges served as sensors: the first was attached to the curved area to measure static pressure and centrifugal force, and the second was attached to the straight area to measure static pressure. The flow rate was determined from the difference between the outputs of the two gauges. The zero compensation was constructed based on the consideration that the flow rate can be estimated during the initial driving condition and the ventricular suction condition without using the flow meter. A mock circulation loop was constructed in order to evaluate the measurement performance of the developed flow meter with zero compensation. As a result, the zero compensation worked effectively for the initial calibration and for the zero-drift of the measured flow rate. We confirmed that the developed flow meter using a curved cannula with zero compensation was able to measure the flow rate continuously, noninvasively and accurately.
NASA Astrophysics Data System (ADS)
Lenka, Bichitra Kumar; Banerjee, Soumitro
2018-03-01
We discuss the asymptotic stability of autonomous linear and nonlinear fractional order systems in which the state equations contain the same or different fractional orders lying between 0 and 2. First, we use the Laplace transform method to derive sufficient conditions which ensure asymptotic stability of linear fractional order systems. Then, using the obtained results and a linearization technique, a stability theorem is presented for autonomous nonlinear fractional order systems. Finally, we design a control strategy for stabilization of autonomous nonlinear fractional order systems and apply the results to the chaotic fractional order Lorenz system in order to verify its effectiveness.
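For the commensurate linear case D^alpha x = A x, the classical Matignon-type condition used in such analyses is that every eigenvalue of A satisfies |arg(lambda)| > alpha*pi/2; a minimal check (the matrix below is an arbitrary example, not from the paper):

```python
import numpy as np

def is_asymptotically_stable(A, alpha):
    """Matignon-type condition for D^alpha x = A x (0 < alpha < 2):
    asymptotically stable iff |arg(lambda)| > alpha*pi/2 for every
    eigenvalue lambda of A."""
    eig = np.linalg.eigvals(np.asarray(A, dtype=float))
    return bool(np.all(np.abs(np.angle(eig)) > alpha * np.pi / 2))

A = [[0.0, 1.0], [-1.0, -1.0]]   # eigenvalues (-1 ± i*sqrt(3))/2, |arg| = 2*pi/3
stable_low = is_asymptotically_stable(A, 0.9)   # 2*pi/3 > 0.45*pi -> stable
stable_high = is_asymptotically_stable(A, 1.5)  # 2*pi/3 < 0.75*pi -> unstable
```

Note how the same matrix can be stable for a low fractional order and unstable for a higher one, since the stable sector shrinks as alpha grows; the incommensurate (different-orders) case treated in the paper requires the more general Laplace-domain conditions.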
Absorbing boundary conditions for second-order hyperbolic equations
NASA Technical Reports Server (NTRS)
Jiang, Hong; Wong, Yau Shu
1989-01-01
A uniform approach to construct absorbing artificial boundary conditions for second-order linear hyperbolic equations is proposed. The nonlocal boundary condition is given by a pseudodifferential operator that annihilates travelling waves. It is obtained through the dispersion relation of the differential equation by requiring that the initial-boundary value problem admits the wave solutions travelling in one direction only. Local approximation of this global boundary condition yields an nth-order differential operator. It is shown that the best approximations must be in the canonical forms which can be factorized into first-order operators. These boundary conditions are perfectly absorbing for wave packets propagating at certain group velocities. A hierarchy of absorbing boundary conditions is derived for transonic small perturbation equations of unsteady flows. These examples illustrate that the absorbing boundary conditions are easy to derive, and the effectiveness is demonstrated by the numerical experiments.
Gervais, Gaël; Bichon, Emmanuelle; Antignac, Jean-Philippe; Monteau, Fabrice; Leroy, Gaëla; Barritaud, Lauriane; Chachignon, Mathilde; Ingrand, Valérie; Roche, Pascal; Le Bizec, Bruno
2011-06-01
The detection and structural elucidation of by-products of micropollutant treatment are major issues in estimating the efficiency of drinking-water production processes against contamination by endocrine-disrupting compounds. This issue has mainly been investigated at the laboratory scale and at high concentrations; however, the by-products generated after chlorination can be influenced by the dilution factor found in real conditions. The present study proposes a new methodology borrowed from metabolomics, using liquid chromatography coupled to high-resolution mass spectrometry, to reveal potential chlorination by-products of ethinylestradiol (EE2) in spiked real water samples at the part-per-billion level (5 μg L(-1)). Conventional targeted measurements first demonstrated that chlorination with sodium hypochlorite (0.8 mg L(-1)) led to removals of ethinylestradiol over 97%. The differential global profiling approach then revealed eight chlorination by-products of EE2, six of them described for the first time. Among these eight halogenated compounds, five were structurally identified, demonstrating the potential of this new methodology applied to environmental samples. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, WenXiang; Mou, WeiHua; Wang, FeiXue
2012-03-01
With the introduction of triple-frequency signals in GNSS, multi-frequency ionosphere correction technology has been developing rapidly. Some references indicate that the triple-frequency second order ionosphere correction is worse than the dual-frequency first order correction because of its larger noise amplification factor. On the assumption that the variances of the three pseudoranges are equal, other references presented a triple-frequency first order ionosphere correction, which proved worse or better than the dual-frequency first order correction in different situations. In practice, the PN code rate, carrier-to-noise ratio, DLL parameters and multipath effect of each frequency are not the same, so the three pseudorange variances are unequal. Taking this into account, a new unequal-weighted triple-frequency first order ionosphere correction algorithm, which minimizes the variance of the ionosphere-free pseudorange combination, is proposed in this paper. It is found that conventional dual-frequency first order correction algorithms and the equal-weighted triple-frequency first order correction algorithm are special cases of the new algorithm. A new pseudorange variance estimation method based on the three-carrier combination is also introduced. Theoretical analysis shows that the new algorithm is optimal, and an experiment with COMPASS G3 satellite observations demonstrates that the ionosphere-free pseudorange combination variance of the new algorithm is smaller than that of traditional multi-frequency correction algorithms.
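The optimisation described above can be sketched as a constrained least-variance problem: choose weights w that preserve geometry (sum(w) = 1), cancel the first-order ionosphere term (sum(w_i/f_i^2) = 0) and minimise w'Sigma w, solved in closed form with Lagrange multipliers. The frequencies below are GPS L1/L2/L5 and the noise values are illustrative assumptions (the paper uses COMPASS observations):

```python
import numpy as np

def iono_free_weights(freqs_hz, sigmas_m):
    """Weights minimising the variance of sum(w_i * P_i) subject to
    sum(w) = 1 (geometry preserved) and sum(w_i / f_i^2) = 0 (first-order
    ionosphere removed). Closed-form Lagrange solution; the ionosphere
    constraint is scaled by f1^2 for numerical conditioning."""
    f = np.asarray(freqs_hz, dtype=float)
    W = np.diag(1.0 / np.asarray(sigmas_m, dtype=float) ** 2)   # Sigma^-1
    C = np.vstack([np.ones_like(f), (f[0] / f) ** 2])           # constraints
    d = np.array([1.0, 0.0])
    return W @ C.T @ np.linalg.solve(C @ W @ C.T, d)

f = [1575.42e6, 1227.60e6, 1176.45e6]             # GPS L1/L2/L5 (Hz)
w_equal = iono_free_weights(f, [0.3, 0.3, 0.3])   # equal-noise case
w_dual = iono_free_weights(f, [0.3, 0.3, 1e6])    # L5 effectively excluded
```

Inflating one sigma recovers the classical dual-frequency coefficients f1^2/(f1^2-f2^2) and -f2^2/(f1^2-f2^2), illustrating the abstract's point that the conventional corrections are special cases of the unequal-weighted algorithm.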
Sheng, Li; Wang, Zidong; Zou, Lei; Alsaadi, Fuad E
2017-10-01
In this paper, the event-based finite-horizon H∞ state estimation problem is investigated for a class of discrete time-varying stochastic dynamical networks with state- and disturbance-dependent noises [also called (x,v)-dependent noises]. An event-triggered scheme is proposed to decrease the frequency of the data transmission between the sensors and the estimator, where the signal is transmitted only when certain conditions are satisfied. The purpose of the problem addressed is to design a time-varying state estimator in order to estimate the network states through available output measurements. By employing the completing-the-square technique and the stochastic analysis approach, sufficient conditions are established to ensure that the error dynamics of the state estimation satisfies a prescribed H∞ performance constraint over a finite horizon. The desired estimator parameters can be designed via solving coupled backward recursive Riccati difference equations. Finally, a numerical example is exploited to demonstrate the effectiveness of the developed state estimation scheme.
Spatial patterns of mixing in the Solomon Sea
NASA Astrophysics Data System (ADS)
Alberty, M. S.; Sprintall, J.; MacKinnon, J.; Ganachaud, A.; Cravatte, S.; Eldin, G.; Germineaud, C.; Melet, A.
2017-05-01
The Solomon Sea is a marginal sea in the southwest Pacific that connects subtropical and equatorial circulation, constricting transport of South Pacific Subtropical Mode Water and Antarctic Intermediate Water through its deep, narrow channels. Marginal sea topography inhibits internal waves from propagating out and into the open ocean, making these regions hot spots for energy dissipation and mixing. Data from two hydrographic cruises and from Argo profiles are employed to indirectly infer mixing from observations for the first time in the Solomon Sea. Thorpe and finescale methods indirectly estimate the rate of dissipation of kinetic energy (ɛ) and indicate that it is maximum in the surface and thermocline layers and decreases by 2-3 orders of magnitude by 2000 m depth. Estimates of diapycnal diffusivity from the observations and a simple diffusive model agree in magnitude but have different depth structures, likely reflecting the combined influence of both diapycnal mixing and isopycnal stirring. Spatial variability of ɛ is large, spanning at least 2 orders of magnitude within isopycnal layers. Seasonal variability of ɛ reflects regional monsoonal changes in large-scale oceanic and atmospheric conditions with ɛ increased in July and decreased in March. Finally, tide power input and topographic roughness are well correlated with mean spatial patterns of mixing within intermediate and deep isopycnals but are not clearly correlated with thermocline mixing patterns.
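The Thorpe method mentioned above can be illustrated with a short sketch on synthetic data (this is the textbook procedure, not the paper's processing chain; the proportionality constant and bulk buoyancy frequency below are simplifying assumptions).

```python
# Thorpe-scale sketch: sort a density profile into its gravitationally
# stable order; the distance each sample must move is the Thorpe
# displacement d_i. L_T = rms(d), and epsilon ~ c * L_T^2 * N^3 with N a
# bulk buoyancy frequency from the sorted profile (c ~ 0.64 is an assumed
# illustrative constant).
import math

def thorpe_dissipation(depths, density, g=9.81, rho0=1025.0, c=0.64):
    """Return (displacements, Thorpe scale L_T, dissipation estimate)."""
    n = len(density)
    order = sorted(range(n), key=lambda i: density[i])        # stable order
    disp = [depths[order[i]] - depths[i] for i in range(n)]   # Thorpe displacements
    LT = math.sqrt(sum(d * d for d in disp) / n)
    rho_sorted = [density[i] for i in order]
    N2 = (g / rho0) * (rho_sorted[-1] - rho_sorted[0]) / (depths[-1] - depths[0])
    return disp, LT, c * LT**2 * N2**1.5

# Synthetic 5-point profile (kg m^-3) with one overturn between 1 and 2 m:
depths = [0.0, 1.0, 2.0, 3.0, 4.0]
density = [1025.0, 1025.2, 1025.1, 1025.3, 1025.4]
disp, LT, eps = thorpe_dissipation(depths, density)
print([round(d, 1) for d in disp], round(LT, 3))
```

A gravitationally stable profile yields zero displacements everywhere, so the method only registers dissipation where overturns are resolved.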
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Yue; Xu, Ke; Jiang, Weilin
2015-07-03
Hysteretic behavior was studied in a series of Fe thin films, grown by molecular beam epitaxy, having different grain sizes and grown on different substrates. Major and minor loops and first order reversal curves (FORCs) were collected to investigate magnetization mechanisms and domain behavior under different magnetic histories. The minor loop coefficient and major loop coercivity increase with decreasing grain size due to higher defect concentration resisting domain wall movement. First order reversal curves allowed estimation of the contribution of irreversible and reversible susceptibilities and switching field distribution. The differences in shape of the major loops and first order reversal curves are described using a classical Preisach model with distributions of hysterons of different switching fields, providing a powerful visualization tool to help understand the magnetization switching behavior of Fe films as manifested in various experimental magnetization measurements.
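The classical Preisach picture invoked above can be sketched in a few lines: a population of bistable "hysterons", each with its own up/down switching fields, whose summed state reproduces history-dependent magnetization. This is a generic illustration with an arbitrary random switching-field distribution, not the distributions fitted in the study.

```python
# Minimal classical Preisach sketch: hysteron i switches up when
# H >= alpha_i and down when H <= beta_i (beta_i <= alpha_i); otherwise it
# remembers its state. The normalized moment is the mean of the states.
import random

class Preisach:
    def __init__(self, hysterons):
        self.hysterons = list(hysterons)     # list of (alpha, beta) pairs
        self.state = [-1] * len(self.hysterons)

    def apply(self, H):
        for i, (alpha, beta) in enumerate(self.hysterons):
            if H >= alpha:
                self.state[i] = +1
            elif H <= beta:
                self.state[i] = -1           # in between: keep state (memory)
        return sum(self.state) / len(self.state)

random.seed(0)
hysterons = [(random.uniform(0.0, 1.0), random.uniform(-1.0, 0.0))
             for _ in range(500)]            # assumed switching-field spread
loop = Preisach(hysterons)
m_dn = loop.apply(-2.0)     # negative saturation
m_asc = loop.apply(0.5)     # ascending branch at H = 0.5
m_up = loop.apply(2.0)      # positive saturation
m_desc = loop.apply(0.5)    # descending branch at the same field
print(m_dn, round(m_asc, 2), m_up, round(m_desc, 2))
```

The two evaluations at the same field H = 0.5 differ depending on the field history, which is exactly the branch structure a FORC measurement maps out.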
Information-geometric measures as robust estimators of connection strengths and external inputs.
Tatsuno, Masami; Fellous, Jean-Marc; Amari, Shun-Ichi
2009-08-01
Information geometry has been suggested to provide a powerful tool for analyzing multineuronal spike trains. Among several advantages of this approach, a significant property is the close link between information-geometric measures and neural network architectures. Previous modeling studies established that the first- and second-order information-geometric measures corresponded to the number of external inputs and the connection strengths of the network, respectively. This relationship was, however, limited to a symmetrically connected network, and the number of neurons used in the parameter estimation of the log-linear model needed to be known. Recently, simulation studies of biophysical model neurons have suggested that information geometry can estimate the relative change of connection strengths and external inputs even with asymmetric connections. Inspired by these studies, we analytically investigated the link between the information-geometric measures and the neural network structure with asymmetrically connected networks of N neurons. We focused on the information-geometric measures of orders one and two, which can be derived from the two-neuron log-linear model, because unlike higher-order measures, they can be easily estimated experimentally. Considering the equilibrium state of a network of binary model neurons that obey stochastic dynamics, we analytically showed that the corrected first- and second-order information-geometric measures provided robust and consistent approximation of the external inputs and connection strengths, respectively. These results suggest that information-geometric measures provide useful insights into the neural network architecture and that they will contribute to the study of system-level neuroscience.
Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua
2012-01-01
Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m−1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm3 voxel volume with isotropic resolution; 13.5 mm3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m−1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation.
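The core idea — fit a tract with a 2nd-order polynomial, then read curvature off the fitted coefficients — can be sketched as follows. This is a 2-D synthetic illustration, not the study's 3-D MRI pipeline; the circular-arc "tract" and its radius are assumptions chosen so the true κ is known.

```python
# Sketch: least-squares quadratic fit y = c0 + c1*x + c2*x^2, then
# kappa = |y''| / (1 + y'^2)^(3/2) evaluated from the fitted coefficients.
import math

def fit_quadratic(xs, ys):
    """Solve the 3x3 normal equations by Gaussian elimination."""
    Sx = [sum(x**k for x in xs) for k in range(5)]
    A = [[Sx[i + j] for j in range(3)] for i in range(3)]
    b = [sum(y * x**i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                      # forward elimination, pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    c = [0.0] * 3
    for i in (2, 1, 0):                       # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c

def curvature(c, x):
    d1 = c[1] + 2 * c[2] * x                  # y'
    d2 = 2 * c[2]                             # y''
    return abs(d2) / (1 + d1**2) ** 1.5

# Synthetic "tract": arc of a circle of radius 0.1 m => true kappa = 10 m^-1.
R = 0.1
ts = [i * 0.02 for i in range(-10, 11)]
xs = [R * math.sin(t) for t in ts]
ys = [R * (1 - math.cos(t)) for t in ts]
c = fit_quadratic(xs, ys)
print(round(curvature(c, 0.0), 2))
```

With added coordinate noise, the raw point-by-point curvature of such a tract is wildly unstable, while the fitted-polynomial estimate stays near the true value — the behavior the simulations above quantify.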
NASA Astrophysics Data System (ADS)
Gaztanaga, Enrique; Fosalba, Pablo
1998-12-01
In Paper I of this series, we introduced the spherical collapse (SC) approximation in Lagrangian space as a way of estimating the cumulants xi_J of density fluctuations in cosmological perturbation theory (PT). Within this approximation, the dynamics is decoupled from the statistics of the initial conditions, so we are able to present here the cumulants for generic non-Gaussian initial conditions, which can be estimated to arbitrary order including the smoothing effects. The SC model turns out to recover the exact leading-order non-linear contributions up to terms involving non-local integrals of the J-point functions. We argue that for the hierarchical ratios S_J, these non-local terms are subdominant and tend to compensate each other. The resulting predictions show a non-trivial time evolution that can be used to discriminate between models of structure formation. We compare these analytic results with non-Gaussian N-body simulations, which turn out to be in very good agreement up to scales where sigma<~1.
1990-11-01
(Q + aa')^(-1) = Q^(-1) - Q^(-1)aa'Q^(-1) / (1 + a'Q^(-1)a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... 2. The First-Order Moving Average Model... 3. Some Approaches to the Iterative... the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and
Phantom-derived estimation of effective dose equivalent from X rays with and without a lead apron.
Mateya, C F; Claycamp, H G
1997-06-01
Organ dose equivalents were measured in a humanoid phantom in order to estimate the effective dose equivalent (H(E)) and effective dose (E) from low-energy x rays in the presence or absence of a protective lead apron. Plane-parallel irradiation conditions were approximated using direct x-ray beams of 76 and 104 kVp, and the resulting dosimetry data were adjusted to model exposure conditions in fluoroscopy settings. Values of H(E) and E estimated under shielded conditions were compared with the results of several recent studies that used combinations of measured and calculated dosimetry to model exposures to radiologists. While the estimates of H(E) and E without the lead apron were within 0.2 to 20% of expected values, estimates based on personal monitors worn at the (phantom) waist (underneath the apron) underestimated either H(E) or E, while monitors placed at the neck (above the apron) significantly overestimated both quantities. Also, the experimentally determined H(E) and E were 1.4 to 3.3 times greater than would be estimated using recently reported "two-monitor" algorithms for the estimation of effective dose quantities. The results suggest that accurate estimation of either H(E) or E from personal monitors under conditions of partial-body exposure remains problematic and is likely to require the use of multiple monitors.
Ions lost on their first orbit can impact Alfvén eigenmode stability
Heidbrink, William W.; Fu, Guo -Yong; Van Zeeland, Michael A.
2015-08-13
Some neutral-beam ions are deflected onto loss orbits by Alfvén eigenmodes on their first bounce orbit. Here, the resonance condition for these ions differs from the usual resonance condition for a confined fast ion. Estimates indicate that particles on single-pass loss orbits transfer enough energy to the wave to alter mode stability.
Characterising primary productivity measurements across a dynamic western boundary current region
NASA Astrophysics Data System (ADS)
Everett, Jason D.; Doblin, Martina A.
2015-06-01
Determining the magnitude of primary production (PP) in a changing ocean is a major research challenge. Thousands of estimates of marine PP exist globally, but there remain significant gaps in data availability, particularly in the Southern Hemisphere. In situ PP estimates are generally single-point measurements and therefore we rely on satellite models of PP in order to scale up over time and space. To reduce the uncertainty around the model output, these models need to be assessed against in situ measurements before use. This study examined the vertically-integrated productivity in four water-masses associated with the East Australian Current (EAC), the major western boundary current (WBC) of the South Pacific. We calculated vertically integrated PP from shipboard 14C PP estimates and then compared them to estimates from four commonly used satellite models (ESQRT, VGPM, VGPM-Eppley, VGPM-Kameda) to assess their utility for this region. Vertical profiles of the water-column show each water-mass had distinct temperature-salinity signatures. The depth of the fluorescence-maximum (fmax) increased from onshore (river plume) to offshore (EAC) as light penetration increased. Depth integrated PP was highest in river plumes (792±181 mg C m-2 d-1) followed by the EAC (534±116 mg C m-2 d-1), continental shelf (140±47 mg C m-2 d-1) and cyclonic eddy waters (121±4 mg C m-2 d-1). Surface carbon assimilation efficiency was greatest in the EAC (301±145 mg C (mg Chl-a)-1 d-1) compared to other water masses. All satellite primary production models tested underestimated EAC PP and overestimated continental shelf PP. The ESQRT model had the highest skill and lowest bias of the tested models, providing the best first-order estimates of PP on the continental shelf, including at a coastal time-series station, Port Hacking, which showed considerable inter-annual variability (155-2957 mg C m-2 d-1). 
This work provides the first estimates of depth integrated PP associated with the East Australian Current in temperate Australia. The ongoing intensification of all WBCs makes it critical to understand the variability in PP at the regional scale. More accurate predictions in the EAC region will require vertically-resolved in situ productivity and bio-optical measurements across multiple time scales to allow development of other models which simulate dynamic ocean conditions.
Condition of Tidal Wetlands of Washington, Oregon and California - 2002
The National Coastal Assessment (NCA) of US EPA conducted the first probability based assessment of the condition of estuarine intertidal wetland resources of the West Coast of the U.S. in 2002. The study results constitute a baseline estimate of condition of coastal resources t...
Guidoux, Romain; Duclos, Martine; Fleury, Gérard; Lacomme, Philippe; Lamaudière, Nicolas; Manenq, Pierre-Henri; Paris, Ludivine; Ren, Libo; Rousset, Sylvie
2014-12-01
This paper introduces a function dedicated to the estimation of total energy expenditure (TEE) in daily activities based on data from accelerometers integrated into smartphones. The use of mass-market sensors such as accelerometers offers a promising solution for the general public due to the growing smartphone market over the last decade. The quality of the TEE estimation function was evaluated using data from intensive numerical experiments based, first, on 12 volunteers equipped with a smartphone and two research sensors (Armband and Actiheart) in controlled conditions (CC) and, then, on 30 other volunteers in free-living conditions (FLC). The TEE given by these two sensors in both conditions, and the TEE estimated from metabolic equivalent tasks (MET) in CC, served as references during the creation and evaluation of the function. The mean absolute gap in TEE between the function and the three references was 7.0%, 16.4% and 2.7% in CC, and 17.0% and 23.7% relative to Armband and Actiheart, respectively, in FLC. This is the first step in the definition of a new feedback mechanism that promotes self-management and daily efficiency evaluation of physical activity as part of an information system dedicated to the prevention of chronic diseases. Copyright © 2014 Elsevier Inc. All rights reserved.
Diffusive Transport and Structural Properties of Liquid Iron Alloys at High Pressure
NASA Astrophysics Data System (ADS)
Posner, E.; Rubie, D. C.; Steinle-Neumann, G.; Frost, D. J.
2017-12-01
Diffusive transport properties of liquid iron alloys at high pressures (P) and temperatures (T) place important kinetic constraints on processes related to the origin and evolution of planetary cores. Earth's core composition is largely controlled by the extent of chemical equilibration achieved between liquid metal bodies and a silicate magma ocean during core formation, which can be estimated using chemical diffusion data. In order to estimate the time and length scales of metal-silicate chemical equilibration, we have measured chemical diffusion rates of Si, O and Cr in liquid iron over the P-T range of 1-18 GPa and 1873-2643 K using a multi-anvil apparatus. We have also performed first-principles molecular dynamic simulations of comparable binary liquid compositions, in addition to pure liquid Fe, over a much wider P-T range (1 bar-330 GPa, 2200-5500 K) in order to both validate the simulation results with experimental data at conditions accessible in the laboratory and to extend our dataset to conditions of the Earth's core. Over the entire P-T range studied using both methods, diffusion coefficients are described consistently and well using an exponential function of the homologous temperature relation. Si, Cr and Fe diffusivities of approximately 5 × 10-9 m2 s-1 are constant along the melting curve from ambient to core pressures, while oxygen diffusion is 2-3 times faster. Our results indicate that in order for the composition of the Earth's core to represent chemical equilibrium, impactor cores must have broken up into liquid droplet sizes no larger than a few tens of cm. Structural properties, analyzed using partial radial distribution functions from the molecular dynamics simulations, reveal a pressure-induced structural change in liquid Fe0.96O0.04 at densities of 8 g cm-3, in agreement with previous experimental studies. 
For densities above 8 g cm-3, the liquid is essentially close packed with a local CsCl-like (B2) packing of Fe around O under conditions of the Earth's core.
NASA Technical Reports Server (NTRS)
Franklin, F. A.; Lecar, M.; Lin, D. N. C.; Papaloizou, J.
1980-01-01
Conditions leading to the truncation, at the 2:1 resonance, of a disk of infrequently colliding particles surrounding the primary of a binary system are studied numerically and analytically. Attention is given to the case in which the mass ratio q is sufficiently small (less than about 0.1) and the radius of the disk centered on the primary is sufficiently large, so that first-order orbit-orbit resonances between ring material and the secondary can lie within it. Collisions are found to be less frequent than once per q^(-2/3) orbital periods (the period of the forced eccentricity at the 2:1 resonance), and truncation occurs and Kirkwood gaps are produced only if the particle eccentricity is less than some critical value, estimated to be of order q^(5/9), or approximately 0.02 for the sun-Jupiter case with q equal to 10^-3.
Strategic sophistication of individuals and teams. Experimental evidence
Sutter, Matthias; Czermak, Simon; Feri, Francesco
2013-01-01
Many important decisions require strategic sophistication. We examine experimentally whether teams act more strategically than individuals. We let individuals and teams make choices in simple games, and also elicit first- and second-order beliefs. We find that teams play the Nash equilibrium strategy significantly more often, and their choices are more often a best response to stated first-order beliefs. Distributional preferences make equilibrium play less likely. Using a mixture model, the estimated probability of playing strategically is 62% for teams, but only 40% for individuals. A model of noisy introspection reveals that teams differ from individuals in higher-order beliefs.
PHARMACOKINETIC PROFILES OF PERFLUOROOCTANOIC ACID IN MICE AFTER CHRONIC EXPOSURE
Perfluorooctanoic acid (PFOA) is highly persistent in humans, with serum half-life estimates of 2.3 to 3.8 years. In the mouse, elimination of PFOA appears to be first-order after a single oral administration, with serum half-life estimates of 16 days for females and 22 days for ...
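First-order elimination, as described above, means serum concentration decays exponentially with a rate constant set by the half-life. A minimal sketch using the half-lives quoted in the abstract (the 30-day time point is an arbitrary illustration):

```python
# First-order (exponential) elimination: C(t) = C0 * exp(-k*t),
# with k = ln(2) / t_half.
import math

def remaining_fraction(t_days, half_life_days):
    k = math.log(2) / half_life_days
    return math.exp(-k * t_days)

# Female vs. male mice (16- vs. 22-day serum half-lives from the abstract):
for sex, t_half in [("female", 16.0), ("male", 22.0)]:
    print(sex, round(remaining_fraction(30.0, t_half), 3))
```

By the same formula, the multi-year human half-lives quoted above imply that a substantial fraction of a PFOA burden persists for many years after exposure ends.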
ERIC Educational Resources Information Center
Barker, James L.; And Others
This U.S. Environmental Protection Agency report presents estimates of the energy demand attributable to environmental control of pollution from stationary point sources. This class of pollution source includes powerplants, factories, refineries, municipal waste water treatment plants, etc., but excludes mobile sources such as trucks, and…
NASA Astrophysics Data System (ADS)
Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.
2012-04-01
Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of RUL. The ANFIS is trained and integrated in a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated via the real-world data from a seeded fault test for a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and the particle-filter-based predictor where the fault growth model is a first-order model that is trained via the ANFIS.
Space Vehicle Guidance, Navigation, Control, and Estimation Operations Technologies
2018-03-29
angular position around the ellipse, and the out-of-plane amplitude and angular position. These elements are explicitly relatable to the six rectangular... (quasi) second order relative orbital elements are explored. One theory uses the expanded solution form and introduces several instantaneous ellipses... In each case, the theory quantifies distortion of the first order relative orbital elements when including second order effects. The new variables are
DOE Office of Scientific and Technical Information (OSTI.GOV)
Revel, G. M.; Castellini, P.; Chiariotti, P.
2010-05-28
The present work deals with the problems and potential of laser vibrometer measurements inside helicopter cabins in running conditions. The paper describes the results of a systematic measurement campaign performed on an Agusta A109MKII mock-up. The aim is to evaluate the applicability of the Scanning Laser Doppler Vibrometer (SLDV) to tests in simulated flying conditions and to understand how the performance of the technique is affected when the laser head is placed inside the cabin and is thus subjected to interfering inputs. First, a brief description of the test cases and the measuring set-ups is given. Comparative tests between SLDV and accelerometers are presented, analyzing the achievable performance for this specific application. Results obtained with the SLDV placed inside the helicopter cabin during operative excitation conditions are compared with those obtained with the laser outside the mock-up, the latter being considered 'reference measurements'. Finally, in order to estimate the uncertainty of the measured signals, a study linking the admissible percentage of noise content in vibrometer signals to laser-head vibration levels is introduced.
Zink, V; Štípková, M; Lassen, J
2011-10-01
The aim of this study was to estimate genetic parameters for fertility traits and linear type traits in the Czech Holstein dairy cattle population. Phenotypic data regarding 12 linear type traits, measured in first lactation, and 3 fertility traits, measured in each of first and second lactation, were collected from 2005 to 2009 in the progeny testing program of the Czech-Moravian Breeders Corporation. The number of animals for each linear type trait was 59,467, except for locomotion, where 53,436 animals were recorded. The 3-generation pedigree file included 164,125 animals. (Co)variance components were estimated using AI-REML in a series of bivariate analyses, which were implemented via the DMU package. Fertility traits included days from calving to first service (CF1), days open (DO1), and days from first to last service (FL1) in first lactation, and days from calving to first service (CF2), days open (DO2), and days from first to last service (FL2) in second lactation. The number of animals with fertility data varied between traits and ranged from 18,915 to 58,686. All heritability estimates for reproduction traits were low, ranging from 0.02 to 0.04. Heritability estimates for linear type traits ranged from 0.03 for locomotion to 0.39 for stature. Estimated genetic correlations between fertility traits and linear type traits were generally neutral or positive, whereas genetic correlations between body condition score and CF1, DO1, FL1, CF2 and DO2 were mostly negative, with the greatest correlation between BCS and CF2 (-0.51). Genetic correlations with locomotion were greatest for CF1 and CF2 (-0.34 for both). Results of this study show that cows that are genetically extreme for angularity, stature, and body depth tend to perform poorly for fertility traits. At the same time, cows that are genetically predisposed for low body condition score or high locomotion score are generally inferior in fertility. Copyright © 2011 American Dairy Science Association. 
Published by Elsevier Inc. All rights reserved.
Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S
2015-11-10
In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed on a stratified random sample of the first sample. To estimate infection prevalence, we assumed conditional independence between the diagnostic tests and developed method-of-moments estimators based on expectations of the proportions of people with positive and negative results on both tests, which are functions of the tests' sensitivity, specificity, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as well as the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence. Copyright © 2015 John Wiley & Sons, Ltd.
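The closed-form idea can be illustrated in the simplest setting: both tests applied to one simple random sample, 100% specificity, and conditional independence (the paper's two-stage stratified design is not reproduced here, and the counts below are hypothetical). Then p1 = pi*s1, p2 = pi*s2, and p12 = pi*s1*s2, which invert to pi = p1*p2/p12, s1 = p12/p2, s2 = p12/p1.

```python
# Method-of-moments prevalence estimator for two conditionally independent
# tests with perfect specificity, from the observed positive proportions.

def mom_prevalence(n, n1_pos, n2_pos, n_both_pos):
    p1, p2, p12 = n1_pos / n, n2_pos / n, n_both_pos / n
    return p1 * p2 / p12, p12 / p2, p12 / p1   # (pi, sens1, sens2)

# Hypothetical counts consistent with pi=0.2, s1=0.9, s2=0.8 in n=1000:
pi_hat, s1_hat, s2_hat = mom_prevalence(1000, 180, 160, 144)
print(round(pi_hat, 3), round(s1_hat, 3), round(s2_hat, 3))
```

Standard errors for such estimators can be obtained by a parametric bootstrap, as the study does: resample counts from the fitted model and re-apply the estimator.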
Virginia L. McDaniel; Roger W. Perry; Nancy E. Koerth; James M. Guldin
2016-01-01
Accurate fuel load and consumption predictions are important to estimate fire effects and air pollutant emissions. The FOFEM (First Order Fire Effects Model) is a commonly used model developed in the western United States to estimate fire effects such as fuel consumption, soil heating, air pollutant emissions, and tree mortality. However, the accuracy of the model in...
Second-Order Conditioning of Human Causal Learning
ERIC Educational Resources Information Center
Jara, Elvia; Vila, Javier; Maldonado, Antonio
2006-01-01
This article provides the first demonstration of a reliable second-order conditioning (SOC) effect in human causal learning tasks. It demonstrates the human ability to infer relationships between a cause and an effect that were never paired together during training. Experiments 1a and 1b showed a clear and reliable SOC effect, while Experiments 2a…
CONDITION OF ESTUARIES AND BAYS OF HAWAII FOR 2002: A STATISTICAL SUMMARY
The National Coastal Assessment (NCA) of US EPA conducted the first probabilistic assessment of the condition of estuarine resources of the main islands of Hawaii in 2002. The study provided condition estimates for both the estuaries and bays of the Hawaiian Island chain, as wel...
zeldovich-PLT: Zel'dovich approximation initial conditions generator
NASA Astrophysics Data System (ADS)
Eisenstein, Daniel; Garrison, Lehman
2016-05-01
zeldovich-PLT generates Zel'dovich approximation (ZA) initial conditions (i.e. first-order Lagrangian perturbation theory) for cosmological N-body simulations, optionally applying particle linear theory (PLT) corrections.
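The Zel'dovich approximation itself is compact enough to sketch in one dimension: particles move ballistically along an initial displacement field scaled by the linear growth factor. This is a single-sine-mode toy illustration, not zeldovich-PLT (and omits the PLT corrections entirely); the amplitude and growth-factor values are arbitrary.

```python
# 1-D Zel'dovich approximation: x(q, t) = q + D(t) * psi(q), with growth
# factor D and displacement field psi(q) = amp * sin(2*pi*q / box).
import math

def za_positions(n, amp, D, box=1.0):
    """Displace n equally spaced particles by the single-mode field psi."""
    k = 2 * math.pi / box
    qs = [(i + 0.5) * box / n for i in range(n)]
    return qs, [q + D * amp * math.sin(k * q) for q in qs]

qs, x_early = za_positions(16, amp=0.02, D=0.1)   # nearly unperturbed grid
qs, x_late = za_positions(16, amp=0.02, D=1.0)    # displacements grown 10x
print(round(max(abs(x - q) for q, x in zip(qs, x_late)), 4))
```

Because displacements are linear in D, growing the mode by a factor of ten scales every particle's offset by exactly ten, and as long as D*amp*k < 1 the map q → x stays monotonic (no shell crossing), which is the regime where the approximation is useful for initial conditions.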
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul; Don, Wai-Sun
1993-01-01
The conventional method of imposing time-dependent boundary conditions for Runge-Kutta (RK) time advancement reduces the formal accuracy of the space-time method to first order locally, and second order globally, independently of the spatial operator. This counterintuitive result is analyzed in this paper. Two methods of eliminating the problem are proposed for the linear constant-coefficient case: (1) impose the exact boundary condition only at the end of the complete RK cycle, (2) impose consistent intermediate boundary conditions derived from the physical boundary condition and its derivatives. The first method, while retaining the RK accuracy in all cases, results in a scheme with a much reduced CFL condition, rendering the RK scheme less attractive. The second method retains the same allowable time step as the periodic problem; however, it is a general remedy only for the linear case. For non-linear hyperbolic equations the second method is effective only for RK schemes of third-order accuracy or less. Numerical studies are presented to verify the efficacy of each approach.
Uniform gradient estimates on manifolds with a boundary and applications
NASA Astrophysics Data System (ADS)
Cheng, Li-Juan; Thalmaier, Anton; Thompson, James
2018-04-01
We revisit the problem of obtaining uniform gradient estimates for Dirichlet and Neumann heat semigroups on Riemannian manifolds with boundary. As applications, we obtain isoperimetric inequalities, using Ledoux's argument, and uniform quantitative gradient estimates, firstly for C^2_b functions with boundary conditions and then for the unit spectral projection operators of Dirichlet and Neumann Laplacians.
Domain Decomposition Algorithms for First-Order System Least Squares Methods
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.
Estimating gene function with least squares nonnegative matrix factorization.
Wang, Guoli; Ochs, Michael F
2007-01-01
Nonnegative matrix factorization is a machine learning algorithm that has extracted information from data in a number of fields, including imaging and spectral analysis, text mining, and microarray data analysis. One limitation of the method for linking genes through microarray data in order to estimate gene function is the high variance observed in transcription levels between different genes. Least squares nonnegative matrix factorization uses estimates of the uncertainties on the mRNA levels for each gene in each condition to guide the algorithm to a local minimum in normalized chi2, rather than in a Euclidean distance or divergence between the reconstructed data and the data itself. Herein, application of this method to microarray data is demonstrated in order to predict gene function.
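A minimal sketch of uncertainty-weighted NMF in this spirit, assuming the standard multiplicative-update form with per-entry chi-square weights 1/σ²; the data, uncertainties, and sizes below are synthetic, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic nonnegative data with known rank-2 structure plus noise,
# and per-entry uncertainties sigma (all values invented).
m, n, k = 30, 20, 2
A_true = rng.random((m, k))
H_true = rng.random((k, n))
sigma = 0.05 + 0.05 * rng.random((m, n))
X = np.clip(A_true @ H_true + sigma * rng.standard_normal((m, n)), 1e-9, None)

W = 1.0 / sigma**2            # chi-square weights from the uncertainties

# Multiplicative updates for weighted NMF:
# minimize sum_ij W_ij * (X_ij - (A H)_ij)^2 over nonnegative A, H.
A = rng.random((m, k))
H = rng.random((k, n))
for _ in range(500):
    A *= ((W * X) @ H.T) / ((W * (A @ H)) @ H.T + 1e-12)
    H *= (A.T @ (W * X)) / (A.T @ (W * (A @ H)) + 1e-12)

# At a good fit the residual chi-square per entry is of order one.
chi2_per_dof = np.sum(W * (X - A @ H)**2) / (m * n)
```

The multiplicative form keeps both factors nonnegative automatically, which is why it is the usual starting point for weighted NMF variants.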
Crack opening area estimates in pressurized through-wall cracked elbows under bending
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franco, C.; Gilles, P.; Pignol, M.
1997-04-01
One of the most important aspects of the leak-before-break approach is the estimation of the crack opening area corresponding to potential through-wall cracks at critical locations during plant operation. In order to provide a reasonable lower bound to the leak area under such loading conditions, numerous experimental and numerical programs have been developed in the USA, U.K. and FRG and widely discussed in the literature. This paper aims to extend these investigations to a class of pipe elbows characteristic of PWR main coolant piping. The paper is divided into three main parts. First, a new simplified estimation scheme for leakage area is described, based on the reference stress method. This approach, developed mainly in the U.K. and more recently in France, provides a convenient way to account for the non-linear behavior of the material. Second, the method is carried out for circumferential through-wall cracks located in PWR elbows subjected to internal pressure. Finite element crack area results are presented and comparisons are made with our predictions. Finally, in the third part, the discussion is extended to elbows under combined pressure and in-plane bending moment.
Estimation and Analysis of Nonlinear Stochastic Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Marcus, S. I.
1975-01-01
The algebraic and geometric structures of certain classes of nonlinear stochastic systems were exploited in order to obtain useful stability and estimation results. The class of bilinear stochastic systems (or linear systems with multiplicative noise) was discussed. The stochastic stability of bilinear systems driven by colored noise was considered. Approximate methods for obtaining sufficient conditions for the stochastic stability of bilinear systems evolving on general Lie groups were discussed. Two classes of estimation problems involving bilinear systems were considered. It was proved that, for systems described by certain types of Volterra series expansions or by certain bilinear equations evolving on nilpotent or solvable Lie groups, the optimal conditional mean estimator consists of a finite dimensional nonlinear set of equations. The theory of harmonic analysis was used to derive suboptimal estimators for bilinear systems driven by white noise which evolve on compact Lie groups or homogeneous spaces.
2007-07-01
result, estimation of the lifetime of hydrogen cyanide in deeper ocean waters is likely to be difficult. 2.4 Sulfur Mustard (HS) The principal active ...single first-order reaction rather than consecutive first-order reactions.159 A recent study determined the activation energy of 18.5 kcal mole-1 for...on the chloride ion activity. Despite the relative rapidity of the hydrolysis reaction, 1,1'-thiobis[2-chloroethane] has been found to persist in
NASA Technical Reports Server (NTRS)
Wang, R.; Demerdash, N. A.
1990-01-01
The effects of finite element grid geometries and associated ill-conditioning were studied in single medium and multi-media (air-iron) three dimensional magnetostatic field computation problems. The sensitivities of these 3D field computations to finite element grid geometries were investigated. It was found that in single medium applications the unconstrained magnetic vector potential curl-curl formulation in conjunction with first order finite elements produce global results which are almost totally insensitive to grid geometries. However, it was found that in multi-media (air-iron) applications first order finite element results are sensitive to grid geometries and consequent elemental shape ill-conditioning. These sensitivities were almost totally eliminated by means of the use of second order finite elements in the field computation algorithms. Practical examples are given in this paper to demonstrate these aspects mentioned above.
Deepika; Kaur, Sandeep; Narayan, Shiv
2018-06-01
This paper proposes a novel fractional order sliding mode control approach to address the issues of stabilization as well as tracking of an N-dimensional extended chained form of fractional order non-holonomic system. Firstly, the hierarchical fractional order terminal sliding manifolds are selected to achieve the desired objectives in finite time. Then, a sliding mode control law is formulated which provides robustness against various system uncertainties or external disturbances. In addition, a novel fractional order uncertainty estimator is deduced mathematically to estimate and mitigate the effects of uncertainties, which also excludes the requirement of their upper bounds. Due to the omission of discontinuous control action, the proposed algorithm ensures a chatter-free control input. Moreover, the finite time stability of the closed loop system has been proved analytically through the well-known Mittag-Leffler and fractional Lyapunov theorems. Finally, the proposed methodology is validated with MATLAB simulations on two examples, including an application to a fractional order non-holonomic wheeled mobile robot, and its performance is also compared with an existing control approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
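Fractional-order operators like those in such controllers are usually discretized with the Grünwald-Letnikov formula. A minimal stand-alone sketch (not the paper's controller), checked against the closed form D^{1/2} t = t^{1/2}/Γ(3/2):

```python
import math

def gl_fractional_derivative(f_vals, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative
    of uniformly sampled f_vals with step h (zero history assumed)."""
    n = len(f_vals)
    # Coefficients c_j = (-1)^j * binom(alpha, j), built recursively.
    c = [1.0]
    for j in range(1, n):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return [sum(c[j] * f_vals[i - j] for j in range(i + 1)) / h**alpha
            for i in range(n)]

h = 0.001
t = [i * h for i in range(1001)]          # f(t) = t on [0, 1]
approx = gl_fractional_derivative(t, 0.5, h)
# Exact half-derivative of t at t = 1 is 1 / Gamma(3/2) ~ 1.128
```

The recursion for the binomial coefficients avoids evaluating gamma functions at every term, and the scheme is first-order accurate in h.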
Order-of-magnitude physics of neutron stars. Estimating their properties from first principles
NASA Astrophysics Data System (ADS)
Reisenegger, Andreas; Zepeda, Felipe S.
2016-03-01
We use basic physics and simple mathematics accessible to advanced undergraduate students to estimate the main properties of neutron stars. We set the stage and introduce relevant concepts by discussing the properties of "everyday" matter on Earth, degenerate Fermi gases, white dwarfs, and scaling relations of stellar properties with polytropic equations of state. Then, we discuss various physical ingredients relevant for neutron stars and how they can be combined in order to obtain a couple of different simple estimates of their maximum mass, beyond which they would collapse, turning into black holes. Finally, we use the basic structural parameters of neutron stars to briefly discuss their rotational and electromagnetic properties.
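The flavor of these estimates can be reproduced in a few lines. The sketch below drops all prefactors of order unity, in the spirit of the article, so the numbers are order-of-magnitude only.

```python
# Order-of-magnitude neutron-star properties from fundamental constants.
hbar  = 1.054571817e-34   # J s
c     = 2.99792458e8      # m / s
G     = 6.67430e-11       # m^3 kg^-1 s^-2
m_n   = 1.67492750e-27    # neutron mass, kg
M_sun = 1.989e30          # kg

# Maximum-mass scale (Chandrasekhar-like): M_max ~ (hbar*c/G)^(3/2) / m_n^2
M_max = (hbar * c / G) ** 1.5 / m_n ** 2

# Radius scale where nonrelativistic neutron degeneracy pressure balances
# gravity: R ~ hbar^2 / (G * m_n^(8/3) * M^(1/3)), here at M = 1.4 M_sun
R = hbar ** 2 / (G * m_n ** (8.0 / 3.0) * (1.4 * M_sun) ** (1.0 / 3.0))
```

This gives a maximum mass of roughly two solar masses and a radius of a few kilometers; the order-unity prefactors that were dropped push the radius toward the familiar ~10 km.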
Blow-up for a three dimensional Keller-Segel model with consumption of chemoattractant
NASA Astrophysics Data System (ADS)
Jiang, Jie; Wu, Hao; Zheng, Songmu
2018-04-01
We investigate blow-up properties for the initial-boundary value problem of a Keller-Segel model with consumption of chemoattractant when the spatial dimension is three. Through a kinetic reformulation of the Keller-Segel system, we first derive some higher-order estimates and obtain certain blow-up criteria for the local classical solutions. These blow-up criteria generalize the results in [4,5] from the whole space R3 to the case of bounded smooth domain Ω ⊂R3. Lower global blow-up estimate on ‖ n ‖ L∞ (Ω) is also obtained based on our higher-order estimates. Moreover, we prove local non-degeneracy for blow-up points.
Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data
NASA Technical Reports Server (NTRS)
Sree, David
1992-01-01
Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of the correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order-spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques to improve the spectral estimates from randomly sampled data were applied. Studies show that reliable spectral estimates can be obtained at frequencies up to about five times the mean sampling rate.
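The correlation-based slotting technique can be sketched on a synthetic Poisson-sampled sine whose frequency lies above the mean-rate Nyquist limit, which is exactly the regime where random sampling pays off; all parameter values here are invented for illustration.

```python
import math, random

random.seed(1)

# Poisson-sampled sine: mean rate 50 samples/s but a 30 Hz signal,
# above the mean-rate Nyquist limit of 25 Hz.
f0, rate, T = 30.0, 50.0, 200.0

t, now = [], 0.0
while now < T:
    now += random.expovariate(rate)     # exponential inter-arrival times
    t.append(now)
u = [math.sin(2.0 * math.pi * f0 * ti) for ti in t]

# Slotted autocorrelation: accumulate lagged products into slots of
# width dtau and average within each slot.
dtau, nslots = 0.002, 100
sums = [0.0] * nslots
counts = [0] * nslots
for i in range(len(t)):
    for j in range(i, len(t)):
        lag = t[j] - t[i]
        if lag >= nslots * dtau:
            break                        # times are sorted, so stop here
        k = int(lag / dtau)
        sums[k] += u[i] * u[j]
        counts[k] += 1

acf = [s / c if c else 0.0 for s, c in zip(sums, counts)]
acf = [a / acf[0] for a in acf]          # normalize by the zero-lag slot
# For a sine, the normalized ACF should track cos(2*pi*f0*lag):
# near +1 at one full period (lag ~ 1/30 s) and near -1 at half a period.
```

The power spectral density estimate then follows by Fourier transforming the slotted autocorrelation, which is where the beyond-mean-rate frequency range comes from.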
Akashi, Kinya; Nishimura, Noriyuki; Ishida, Yoshinori; Yokota, Akiho
2004-10-08
Wild watermelon (Citrullus lanatus sp.) has the ability to tolerate severe drought/high light stress conditions despite carrying out normal C3-type photosynthesis. Here, mRNA differential display was employed to isolate drought-responsive genes in the leaves of wild watermelon. One of the isolated genes, CLMT2, shared significant homology with type-2 metallothionein (MT) sequences from other plants. The second-order rate constant for the reaction between a recombinant CLMT2 protein and hydroxyl radicals was estimated to be 1.2 x 10(11) M(-1) s(-1), demonstrating that CLMT2 had an extraordinarily high activity for detoxifying hydroxyl radicals. Moreover, hydroxyl radical-catalyzed degradation of watermelon genomic DNA was effectively suppressed by CLMT2 in vitro. This is the first demonstration of a plant MT with antioxidant properties. The results suggest that CLMT2 induction contributes to the survival of wild watermelon under severe drought/high light stress conditions. Copyright 2004 Elsevier Inc.
Attractors for non-dissipative irrotational von Karman plates with boundary damping
NASA Astrophysics Data System (ADS)
Bociu, Lorena; Toundykov, Daniel
Long-time behavior of solutions to a von Karman plate equation is considered. The system has an unrestricted first-order perturbation and a nonlinear damping acting through free boundary conditions only. This model differs from those previously considered (e.g. in the extensive treatise (Chueshov and Lasiecka, 2010 [11])) because the semi-flow may be of a non-gradient type: the unique continuation property is not known to hold, and there is no strict Lyapunov function on the natural finite-energy space. Consequently, global bounds on the energy, let alone the existence of an absorbing ball, cannot be a priori inferred. Moreover, the free boundary conditions are not recognized by weak solutions and some helpful estimates available for clamped, hinged or simply-supported plates cannot be invoked. It is shown that this non-monotone flow can converge to a global compact attractor with the help of viscous boundary damping and appropriately structured restoring forces acting only on the boundary or its collar.
Sim, K S; Lim, M S; Yeap, Z X
2016-07-01
A new technique to quantify the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images is proposed, known as the autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, the SEM image is corrupted with noise. The autocorrelation functions of the original image and the noisy image are formed. The signal spectrum based on the autocorrelation function of the image is formed. ACLDR is then used as an SNR estimator to quantify the signal spectrum of the noisy image. The SNR values of the original image and the quantified image are calculated. The ACLDR is then compared with three existing techniques: nearest neighbourhood, first-order linear interpolation, and nearest neighbourhood combined with first-order linear interpolation. It is shown that the ACLDR model achieves higher accuracy in SNR estimation. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
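The Levinson-Durbin recursion at the core of the ACLDR model solves the Toeplitz normal equations built from an autocorrelation sequence. A minimal sketch of just that recursion (the SNR-estimation wrapper is specific to the paper), checked on the analytic autocorrelation of an AR(1) process:

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz system for AR coefficients via Levinson-Durbin.
    Returns ([1, a_1, ..., a_p], final prediction error)."""
    a = [0.0] * (order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + sum(a[k] * r[m - k] for k in range(1, m))
        k_m = -acc / err                     # reflection coefficient
        new_a = a[:]
        new_a[m] = k_m
        for k in range(1, m):                # Levinson update step
            new_a[k] = a[k] + k_m * a[m - k]
        a = new_a
        err *= (1.0 - k_m * k_m)
    return a, err

# Autocorrelation of a unit-variance AR(1) process with coefficient 0.7
# is r_k = 0.7**k; an order-2 fit should recover a_1 = -0.7, a_2 = 0.
rho = 0.7
r = [rho ** k for k in range(6)]
coeffs, err = levinson_durbin(r, 2)
```

The recursion runs in O(p²) time instead of the O(p³) of a general solver, which is why it underlies most autocorrelation-based spectrum estimators.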
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Gottlieb, David; Abarbanel, Saul
1993-01-01
We present a systematic method for constructing boundary conditions (numerical and physical) of the required accuracy, for compact (Pade-like) high-order finite-difference schemes for hyperbolic systems. First, a proper summation-by-parts formula is found for the approximate derivative. A 'simultaneous approximation term' (SAT) is then introduced to treat the boundary conditions. This procedure leads to time-stable schemes even in the system case. An explicit construction of the fourth-order compact case is given. Numerical studies are presented to verify the efficacy of the approach.
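A simplified explicit analogue of the SBP-SAT construction can be sketched for the advection equation: a second-order (rather than compact fourth-order) summation-by-parts derivative plus a penalty term weakly imposing the inflow condition, with the penalty parameter τ = -1 assumed.

```python
import numpy as np

# u_t + u_x = 0 on [0, 1], inflow at x = 0, advecting Gaussian data.
N = 200
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

def D1(u):
    """SBP first derivative: central in the interior, one-sided at ends."""
    du = np.empty_like(u)
    du[1:-1] = (u[2:] - u[:-2]) / (2.0 * h)
    du[0] = (u[1] - u[0]) / h
    du[-1] = (u[-1] - u[-2]) / h
    return du

def exact(xv, tv):
    return np.exp(-200.0 * (xv - 0.25 - tv) ** 2)

def rhs(u, tv):
    r = -D1(u)
    # SAT penalty: tau * Hinv_00 * (u_0 - g(t)), tau = -1, Hinv_00 = 2/h
    r[0] -= (2.0 / h) * (u[0] - exact(0.0, tv))
    return r

u = exact(x, 0.0)
tcur, dt = 0.0, 0.2 * h
while tcur < 0.5 - 1e-12:                  # classical RK4 to t = 0.5
    step = min(dt, 0.5 - tcur)
    k1 = rhs(u, tcur)
    k2 = rhs(u + 0.5 * step * k1, tcur + 0.5 * step)
    k3 = rhs(u + 0.5 * step * k2, tcur + 0.5 * step)
    k4 = rhs(u + step * k3, tcur + step)
    u = u + (step / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    tcur += step

err = np.max(np.abs(u - exact(x, 0.5)))    # small => stable, accurate run
```

Because the boundary condition enters weakly through the penalty rather than by overwriting the boundary value, the semi-discretization admits an energy estimate, which is the time-stability property the paper establishes for the compact case.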
Robust Angle Estimation for MIMO Radar with the Coexistence of Mutual Coupling and Colored Noise.
Wang, Junxiang; Wang, Xianpeng; Xu, Dingjie; Bi, Guoan
2018-03-09
This paper deals with joint estimation of direction-of-departure (DOD) and direction-of-arrival (DOA) in bistatic multiple-input multiple-output (MIMO) radar with the coexistence of unknown mutual coupling and spatial colored noise by developing a novel robust covariance tensor-based angle estimation method. In the proposed method, a third-order tensor is first formulated for capturing the multidimensional nature of the received data. Then, taking advantage of the temporal uncorrelated characteristic of colored noise and the banded complex symmetric Toeplitz structure of the mutual coupling matrices, a novel fourth-order covariance tensor is constructed for eliminating the influence of both spatial colored noise and mutual coupling. After a robust signal subspace estimate is obtained by using the higher-order singular value decomposition (HOSVD) technique, the rotational invariance technique is applied to achieve the DODs and DOAs. Compared with the existing HOSVD-based subspace methods, the proposed method can provide superior angle estimation performance and automatically paired DODs and DOAs. Results from numerical experiments are presented to verify the effectiveness of the proposed method.
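The HOSVD step can be sketched in isolation: the factor matrices are the left singular vectors of the mode-n unfoldings, and the core tensor is the tensor multiplied by their transposes along each mode. The random toy tensor below merely stands in for the paper's covariance tensor.

```python
import numpy as np

rng = np.random.default_rng(2)

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

T = rng.standard_normal((6, 7, 8))
# Factor matrices: left singular vectors of each unfolding.
U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(3)]

# Core tensor: multiply T by the transposed factors along each mode.
S = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
# Since the factors are orthogonal, the reconstruction is exact.
T_rec = np.einsum('abc,ia,jb,kc->ijk', S, U[0], U[1], U[2])
err = np.abs(T - T_rec).max()
```

In subspace methods one would truncate the factor matrices to the signal rank before reconstruction, which is where the denoising comes from.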
PySeqLab: an open source Python package for sequence labeling and segmentation.
Allam, Ahmed; Krauthammer, Michael
2017-11-01
Text and genomic data are composed of sequential tokens, such as words and nucleotides, that give rise to higher order syntactic constructs. In this work, we aim at providing a comprehensive Python library implementing conditional random fields (CRFs), a class of probabilistic graphical models, for robust prediction of these constructs from sequential data. Python Sequence Labeling (PySeqLab) is an open source package for performing supervised learning in structured prediction tasks. It implements CRF models, that is, discriminative models, from (i) first-order to higher-order linear-chain CRFs, and from (ii) first-order to higher-order semi-Markov CRFs (semi-CRFs). Moreover, it provides multiple learning algorithms for estimating model parameters such as (i) stochastic gradient descent (SGD) and its multiple variations, (ii) structured perceptron with multiple averaging schemes supporting exact and inexact search using the 'violation-fixing' framework, (iii) the search-based probabilistic online learning algorithm (SAPO) and (iv) an interface for the Broyden-Fletcher-Goldfarb-Shanno (BFGS) and limited-memory BFGS algorithms. Viterbi and Viterbi A* are used for inference and decoding of sequences. Using PySeqLab, we built models (classifiers) and evaluated their performance in three different domains: (i) biomedical natural language processing (NLP), (ii) predictive DNA sequence analysis and (iii) human activity recognition (HAR). State-of-the-art performance comparable to machine-learning-based systems was achieved in the three domains without feature engineering or the use of knowledge sources. PySeqLab is available through https://bitbucket.org/A_2/pyseqlab with tutorials and documentation. ahmed.allam@yale.edu or michael.krauthammer@yale.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
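The Viterbi decoder used for inference in such first-order chain models can be sketched as follows; this is the generic algorithm, not PySeqLab's implementation, and the two-state toy model with all its probabilities is invented for illustration.

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most likely state path under a first-order chain model (applies
    to HMMs and, with feature scores, to linear-chain CRFs)."""
    V = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in states:
            # Best predecessor for state s at this position
            prev = max(states, key=lambda p: V[-1][p] + log_trans[p][s])
            row[s] = V[-1][prev] + log_trans[prev][s] + log_emit[s][o]
            ptr[s] = prev
        V.append(row)
        back.append(ptr)
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):            # backtrack through the pointers
        path.append(ptr[path[-1]])
    return path[::-1]

lg = math.log
states = ['H', 'L']
log_start = {'H': lg(0.5), 'L': lg(0.5)}
log_trans = {'H': {'H': lg(0.9), 'L': lg(0.1)},
             'L': {'H': lg(0.1), 'L': lg(0.9)}}
log_emit = {'H': {'A': lg(0.8), 'B': lg(0.2)},
            'L': {'A': lg(0.2), 'B': lg(0.8)}}
path = viterbi(list('AABBB'), states, log_start, log_trans, log_emit)
# With sticky transitions, the decoder follows the emissions: H H L L L
```

Higher-order and semi-Markov variants extend the same dynamic program over longer state histories or variable-length segments.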
Critical role for mesoscale eddy diffusion in supplying oxygen to hypoxic ocean waters
NASA Astrophysics Data System (ADS)
Gnanadesikan, Anand; Bianchi, Daniele; Pradal, Marie-Aude
2013-10-01
Estimates of the oceanic lateral eddy diffusion coefficient Aredi vary by more than an order of magnitude, ranging from less than a few hundred m2/s to thousands of m2/s. This uncertainty has first-order implications for the intensity of oceanic hypoxia, which is poorly simulated by the current generation of Earth System Models. Using a satellite-based estimate of oxygen consumption in hypoxic waters to estimate the required diffusion coefficient for these waters gives a value of order 1000 m2/s. Varying Aredi across a suite of Earth System Models yields a broadly consistent result given a thermocline diapycnal diffusion coefficient of 1 × 10-5 m2/s.
Stability analysis of fractional-order Hopfield neural networks with time delays.
Wang, Hu; Yu, Yongguang; Wen, Guoguang
2014-07-01
This paper investigates the stability for fractional-order Hopfield neural networks with time delays. Firstly, the fractional-order Hopfield neural networks with hub structure and time delays are studied. Some sufficient conditions for stability of the systems are obtained. Next, two fractional-order Hopfield neural networks with different ring structures and time delays are developed. By studying the developed neural networks, the corresponding sufficient conditions for stability of the systems are also derived. It is shown that the stability conditions are independent of time delays. Finally, numerical simulations are given to illustrate the effectiveness of the theoretical results obtained in this paper. Copyright © 2014 Elsevier Ltd. All rights reserved.
Fleming, Kate M; White, Ian R
2007-01-01
Objective To determine the effect of birth order on the risk of perinatal death in twin pregnancies. Design Retrospective cohort study. Setting England, Northern Ireland, and Wales, 1994-2003. Participants 1377 twin pregnancies with one intrapartum stillbirth or neonatal death from causes other than congenital abnormality and one surviving infant. Main outcome measures The risk of perinatal death in the first and second twin estimated with conditional logistic regression. Results There was no association between birth order and the risk of death overall (odds ratio 1.0, 95% confidence interval 0.9 to 1.1). However, there was a highly significant interaction with gestational age (P<0.001). There was no association between birth order and the risk of death among infants born before 36 weeks' gestation but there was an increased risk of death among second twins born at term (2.3, 1.7 to 3.2, P<0.001), which was stronger for deaths caused by intrapartum anoxia or trauma (3.4, 2.2 to 5.3). Among term births, there was a trend (P=0.1) towards a greater risk of the second twin dying from anoxia among those delivered vaginally (4.1, 1.8 to 9.5) compared with those delivered by caesarean section (1.8, 0.9 to 3.6). Conclusions In this cohort, compared with first twins, second twins born at term were at increased risk of perinatal death related to delivery. Vaginally delivered second twins had a fourfold risk of death caused by intrapartum anoxia. PMID:17337456
Saien, Javad; Fallah Vahed Bazkiaei, Marzieh
2018-07-01
Aqueous solutions of p-nitrophenol (PNP) were treated with UV-activated potassium periodate (UV/KPI) in an efficient photo-reactor. Either periodate or UV alone had little effect; however, their combination led to significant degradation and mineralization. The response surface methodology was employed for design of experiments and optimization. The optimum conditions for treatment of 30 mg/L of the substrate were determined as [KPI] = 386.3 mg/L, pH = 6.2 and T = 34.6°C, under which 79.5% degradation was achieved after 60 min. Use of 25 and 40 kHz ultrasound waves enhanced the degradation to 88.3% and 92.3%, respectively. The intermediates were identified by gas chromatography-mass spectroscopy analysis, allowing a reaction pathway to be proposed. The presence of the conventional water anions bicarbonate, chloride, sulfate and nitrate reduced the efficiency. Meanwhile, the kinetic study showed that PNP degradation follows a pseudo-first-order reaction, and the activation energy was determined. The irradiation energy consumption required for one order of magnitude of degradation was estimated as 11.18 kWh/m3. Accordingly, comparison with previously reported processes showed the superiority of PNP treatment with the employed process.
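The pseudo-first-order analysis can be sketched on synthetic data: ln(C0/C) plotted against time should be linear with slope k. The rate constant below is chosen only so that the fit reproduces roughly the reported 79.5% degradation at 60 min; it is not a value taken from the study.

```python
import math

# Illustrative rate constant: 79.5% degradation at 60 min implies
# k = -ln(1 - 0.795) / 60 (per-minute units), about 0.026 1/min.
k_true = -math.log(1.0 - 0.795) / 60.0
C0 = 30.0                                  # mg/L initial PNP
times = [0, 10, 20, 30, 40, 50, 60]        # min
conc = [C0 * math.exp(-k_true * t) for t in times]

# Least-squares slope of ln(C0/C) vs t with the intercept forced to 0.
y = [math.log(C0 / c) for c in conc]
k_est = sum(t * yi for t, yi in zip(times, y)) / sum(t * t for t in times)

deg_60 = 1.0 - math.exp(-k_est * 60.0)     # implied 60-min degradation
```

With measured concentrations, the linearity of ln(C0/C) vs t is itself the test of the pseudo-first-order assumption, and repeating the fit at several temperatures gives the Arrhenius activation energy.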
Leidl, Dana M; Lay, Belinda P P; Chakouch, Cassandra; Westbrook, R Frederick; Holmes, Nathan M
2018-04-12
The present series of experiments pursued our recent findings that consolidation of a second-order fear memory requires neuronal activity, but not de novo protein synthesis, in the basolateral amygdala complex (BLA). It used a modified second-order conditioning protocol in which rats were exposed to S1-shock pairings in stage 1 and pairings of the serial S2-S1 compound and shock in stage 2. Experiment 1 showed that responding (freezing) to S2 in this protocol is conditional on its compounding with S1 in stage 2 (Experiment 1), and therefore, the result of associative formation. The remaining experiments then showed that the protein synthesis requirement for consolidation of new learning about S2 varied with the training afforded S1. When S1 was trained in stage 1 and present in stage 2, consolidation of the new S2 fear memory was unaffected by pre- or post-stage 2 infusions of the protein synthesis inhibitor, cycloheximide, into the BLA (Experiments 2 and 5). This result was observed independently of the number of S1-shock pairings in stage 1 (even a single pairing produced the result), and alongside demonstrations that cycloheximide infusions disrupt consolidation of a first-order fear memory (Experiments 2 and 5). However, when S1 was not conditioned in stage 1 (Experiment 3) or was omitted from conditioning in stage 2 (Experiment 4), consolidation of the new S2 fear memory was disrupted by post-stage 2 cycloheximide infusions into the BLA. These results were taken to imply that the consolidation of a higher-order fear memory exploits molecular events associated with consolidation of a reactivated first-order fear memory; hence it occurs independently of de novo protein synthesis in the BLA. Alternatively, the nature of the association formed in higher-order conditioning may be such as to not require de novo protein synthesis for its consolidation. Copyright © 2018 Elsevier Inc. All rights reserved.
A preconditioned formulation of the Cauchy-Riemann equations
NASA Technical Reports Server (NTRS)
Phillips, T. N.
1983-01-01
A preconditioning of the Cauchy-Riemann equations which results in a second-order system is described. This system is shown to have a unique solution if the boundary conditions are chosen carefully. This choice of boundary condition enables the solution of the first-order system to be retrieved. A numerical solution of the preconditioned equations is obtained by the multigrid method.
Asymptotic stability estimates near an equilibrium point
NASA Astrophysics Data System (ADS)
Dumas, H. Scott; Meyer, Kenneth R.; Palacián, Jesús F.; Yanguas, Patricia
2017-07-01
We use the error bounds for adiabatic invariants found in the work of Chartier, Murua and Sanz-Serna [3] to bound the solutions of a Hamiltonian system near an equilibrium over exponentially long times. Our estimates depend only on the linearized system and not on the higher order terms as in KAM theory, nor do we require any steepness or convexity conditions as in Nekhoroshev theory. We require that the equilibrium point where our estimate applies satisfy a type of formal stability called Lie stability.
Koch, Lisa K; Cunze, Sarah; Werblow, Antje; Kochmann, Judith; Dörge, Dorian D; Mehlhorn, Heinz; Klimpel, Sven
2016-03-01
Climatic changes raise the risk of re-emergence of arthropod-borne virus outbreaks globally. These viruses are transmitted by arthropod vectors, often mosquitoes. Due to increasing worldwide trade and tourism, these vector species are often accidentally introduced into many countries beyond their former distribution range. Aedes albopictus, a well-known disease vector, was detected for the first time in Germany in 2007, but appears not to have become established to date. However, the species is known to occur in other temperate regions and a risk for establishment in Germany remains, especially in the face of predicted climate change. Thus, the goal of the study was to estimate the potential distribution of Ae. albopictus in Germany. We used ecological niche modeling in order to estimate the potential habitat suitability for this species under current and projected future climatic conditions. According to our model, there are already two areas in western and southern Germany that appear suitable for Ae. albopictus under current climatic conditions. One of these areas lies in Baden-Wuerttemberg, the other in North-Rhine Westphalia in the Ruhr region. Furthermore, projections under future climatic conditions show an increase of the modeled habitat suitability throughout Germany. Ae. albopictus is supposed to be better acclimated to colder temperatures than other tropical vectors and thus, might become, triggered by climate change, a serious threat to public health in Germany. Our modeling results can help optimizing the design of monitoring programs currently in place in Germany.
A wearable system for the seismocardiogram assessment in daily life conditions.
Di Rienzo, Marco; Meriggi, Paolo; Rizzo, Francesco; Vaini, Emanuele; Faini, Andrea; Merati, Giampiero; Parati, Gianfranco; Castiglioni, Paolo
2011-01-01
Seismocardiogram (SCG) is the recording of the minute body accelerations induced by the heart activity, and reflects mechanical aspects of heart contraction and blood ejection. So far, most of the available systems for SCG assessment are designed to be used in a laboratory or in controlled behavioral and environmental conditions. In this paper we propose a modified version of a textile-based wearable device for the unobtrusive recording of ECG, respiration and accelerometric data (the MagIC system), to assess the 3D sternal SCG in daily life. SCG is characterized by an extremely low magnitude of the accelerations (on the order of 10(-3) g), and is masked by major body accelerations induced by locomotion. Thus, in daily life recordings, SCG can be measured whenever the subject is still. We observed that about 30 seconds of motionless behavior are sufficient for a stable estimate of the average SCG waveform, independently of the subject's posture. Since it is likely that during spontaneous behavior the subject may stay still for at least 30 seconds several times in a day, it is expected that the SCG could be repeatedly estimated and tracked over time through a prolonged data recording. These observations represent the first testing of the system in the assessment of SCG outside a laboratory environment, and open the possibility of performing SCG studies in a wide range of everyday conditions without interfering with the subject's activity tasks.
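Why ~30 seconds of stillness suffices follows from ensemble averaging: uncorrelated noise shrinks by √N over N averaged beats. A synthetic sketch with an invented beat template, noise level, and an assumed heart rate of 60 bpm:

```python
import math, random

random.seed(4)

# Synthetic SCG beat: a Gaussian bump stands in for the real waveform.
fs = 250                          # sampling rate, Hz (assumed)
beat_len = fs                     # one beat per second, i.e. 60 bpm
template = [math.exp(-((i - 50) / 10.0) ** 2) for i in range(beat_len)]

def average_beats(n_beats, noise_sd):
    """Ensemble-average n_beats noisy copies of the beat template."""
    avg = [0.0] * beat_len
    for _ in range(n_beats):
        for i in range(beat_len):
            avg[i] += (template[i] + random.gauss(0.0, noise_sd)) / n_beats
    return avg

# 30 s of stillness at 60 bpm gives ~30 beats; residual noise should
# drop from 0.5 to roughly 0.5 / sqrt(30) ~ 0.09 (in template units).
avg30 = average_beats(30, 0.5)
resid = [a - b for a, b in zip(avg30, template)]
rms = math.sqrt(sum(r * r for r in resid) / beat_len)
```

In practice each beat would be segmented by aligning to the ECG R-peaks before averaging; the √N noise reduction is the same.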
McCartt, Anne T; Leaf, William A; Farmer, Charles M; Eichelberger, Angela H
2013-01-01
To examine the effects of changes to Washington State's ignition interlock laws: moving issuance of interlock orders from courts to the driver licensing department in July 2003 and extending the interlock order requirement to first-time offenders with blood alcohol concentrations (BACs) below 0.15 percent ("first simple driving under the influence [DUI]") in June 2004. Trends in conviction types, interlock installation rates, and 2-year cumulative recidivism rates were examined for first-time convictions (simple, high-BAC, test refusal DUI; deferred prosecution; alcohol-related negligent driving) stemming from DUI arrests between January 1999 and June 2006. Regression analyses examined recidivism effects of the law changes and interlock installation rates. To examine general deterrent effects, trends in single-vehicle late-night crashes in Washington were compared with trends in California and Oregon. After the 2004 law change, the proportion of simple DUIs declined somewhat, though the proportion of negligent driving convictions (no interlock order requirement) continued an upward trend. Interlock installation rates for first simple DUIs were 3 to 6 percent in the year before the law change and one third after. Recidivism declined by an estimated 12 percent (e.g., expected 10.6% without law change vs. 9.3% among offenders arrested between April and June 2006, the last study quarter) among first simple DUI offenders and an estimated 11 percent (expected 10.2% vs. 9.1%) among all first-time offenders. There was an estimated 0.06 percentage point decrease in the recidivism rate for each percentage point increase in the proportion of first simple DUI offenders with interlocks. If installation rates had been 100 vs. 34 percent for first simple DUI offenders arrested between April and June 2006, and if the linear relationship between rates of recidivism and installations continued, recidivism could have been reduced from 9.3 to 5.3 percent. 
With installation rates of 100 vs. 24 percent for all first offenders, their recidivism rate could have fallen from 9.1 to 3.2 percent. Although installation rates increased somewhat after the 2003 law change, recidivism rates were not significantly affected, perhaps due to the short follow-up period before the 2004 law change. The 2004 law change was associated with an 8.3 percent reduction in single-vehicle late-night crash risk. Mandating interlock orders for all first DUI convictions was associated with reductions in recidivism, even with low interlock use rates, and reductions in crashes. Additional gains are likely achievable with higher rates. Jurisdictions should seek to increase use rates and reconsider permitting reductions in DUI charges to other traffic offenses without interlock order requirements.
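The linear extrapolation behind the quoted "reduced from 9.3 to 5.3 percent" projection for first simple DUI offenders can be checked directly from the stated slope:

```python
# Each percentage point of additional interlock installation is
# associated with a 0.06 point drop in the 2-year recidivism rate.
slope = 0.06            # recidivism points per installation point
observed = 9.3          # percent recidivism at the 34 percent install rate
projected = observed - slope * (100 - 34)   # hypothetical 100 percent installs
# projected comes out at ~5.3 percent, matching the figure in the text
```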
Online Estimation of Model Parameters of Lithium-Ion Battery Using the Cubature Kalman Filter
NASA Astrophysics Data System (ADS)
Tian, Yong; Yan, Rusheng; Tian, Jindong; Zhou, Shijie; Hu, Chao
2017-11-01
Online estimation of state variables, including state-of-charge (SOC), state-of-energy (SOE) and state-of-health (SOH), is crucial for the safe operation of lithium-ion batteries. In order to improve the estimation accuracy of these state variables, a precise battery model needs to be established. As the lithium-ion battery is a nonlinear time-varying system, the model parameters vary significantly with many factors, such as ambient temperature, discharge rate and depth of discharge. This paper presents an online estimation method of model parameters for lithium-ion batteries based on the cubature Kalman filter. The commonly used first-order resistor-capacitor equivalent circuit model is selected as the battery model, based on which the model parameters are estimated online. Experimental results show that the presented method can accurately track parameter variations under different scenarios.
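A minimal sketch of the first-order resistor-capacitor equivalent circuit the paper builds on; parameter values here are illustrative, not the paper's estimates:

```python
import math

def simulate_rc_model(current, dt, ocv, r0, r1, c1):
    """Simulate the terminal voltage of a first-order RC equivalent-circuit
    battery model: v = OCV - i*R0 - v1, where v1 is the voltage across the
    RC pair. `current` is a sequence of discharge currents (A, positive =
    discharge); dt is the sample period in seconds."""
    tau = r1 * c1
    a = math.exp(-dt / tau)            # exact discretization of the RC branch
    v1 = 0.0                           # polarization voltage, starts relaxed
    voltages = []
    for i in current:
        v1 = a * v1 + r1 * (1.0 - a) * i
        voltages.append(ocv - r0 * i - v1)
    return voltages

# Constant 2 A discharge: polarization settles toward i*R1, so the terminal
# voltage relaxes toward OCV - i*(R0 + R1).
v = simulate_rc_model([2.0] * 500, dt=1.0, ocv=3.7, r0=0.05, r1=0.02, c1=100.0)
```

In the paper's setting the cubature Kalman filter would estimate R0, R1 and C1 online from measured current and voltage; this sketch only shows the forward model those estimates feed.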
Parameter estimation and order selection for an empirical model of VO2 on-kinetics.
Alata, O; Bernard, O
2007-04-27
In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we provide, on simulated data, the performance of simulated annealing for estimating model parameters and the performance of information criteria for selecting the order. These simulated data are generated with both single-exponential and double-exponential models, and corrupted by white Gaussian noise. The performances are given at various signal-to-noise ratios (SNRs). Considering parameter estimation, results show that the confidence in the estimated parameters improves as the SNR of the response to be fitted increases. Considering model selection, results show that information criteria are appropriate statistical criteria for selecting the number of exponentials.
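The empirical model described above, an offset plus delayed exponential terms, can be sketched as follows; the amplitudes, delays and time constants are illustrative, not fitted values:

```python
import math

def vo2_model(t, baseline, components):
    """Empirical VO2 on-kinetics: an offset plus delayed mono-exponential
    terms. `components` holds (amplitude, delay, tau) tuples; one tuple gives
    the order-1 model usually assumed for low-intensity exercise, two tuples
    the order-2 model for high intensity."""
    vo2 = baseline
    for amplitude, delay, tau in components:
        if t > delay:                  # each term is inactive before its delay
            vo2 += amplitude * (1.0 - math.exp(-(t - delay) / tau))
    return vo2

# Order-2 (double-exponential) response: fast primary phase plus an
# illustrative slow component starting at t = 90 s.
resp = [vo2_model(t, 0.5, [(1.5, 10.0, 25.0), (0.4, 90.0, 120.0)])
        for t in range(600)]
```

Order selection then amounts to asking whether the second tuple is statistically justified, which is what the paper's information-criterion tests address.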
State estimation of spatio-temporal phenomena
NASA Astrophysics Data System (ADS)
Yu, Dan
This dissertation addresses the state estimation problem of spatio-temporal phenomena which can be modeled by partial differential equations (PDEs), such as pollutant dispersion in the atmosphere. After discretizing the PDE, the dynamical system has a large number of degrees of freedom (DOF). State estimation using a Kalman filter (KF) is computationally intractable, and hence a reduced order model (ROM) needs to be constructed first. Moreover, the nonlinear terms, external disturbances or unknown boundary conditions can be modeled as unknown inputs, which leads to an unknown input filtering problem. Furthermore, the performance of the KF can be improved by placing sensors at feasible locations. Therefore, the sensor scheduling problem of placing multiple mobile sensors is of interest. The first part of the dissertation focuses on model reduction for large-scale systems with a large number of inputs/outputs. A commonly used model reduction algorithm, the balanced proper orthogonal decomposition (BPOD) algorithm, is not computationally tractable for large systems with a large number of inputs/outputs. Inspired by the BPOD and randomized algorithms, we propose a randomized proper orthogonal decomposition (RPOD) algorithm and a computationally optimal RPOD (RPOD*) algorithm, which construct an ROM to capture the input-output behaviour of the full order model while reducing the computational cost of BPOD by orders of magnitude. It is shown that the proposed RPOD* algorithm can construct the ROM in real time, and the performance of the proposed algorithms is demonstrated on different advection-diffusion equations. Next, we consider the state estimation problem of linear discrete-time systems with unknown inputs, which can be treated as a wide-sense stationary process with rational power spectral density, while no other prior information needs to be known.
We propose an autoregressive (AR) model based unknown input realization technique which allows us to recover the input statistics from the output data by solving an appropriate least squares problem, then fit an AR model to the recovered input statistics and construct an innovations model of the unknown inputs using the eigensystem realization algorithm. Several examples show that the proposed algorithm outperforms the augmented two-stage Kalman filter (ASKF) and the unbiased minimum-variance (UMV) algorithm. Finally, we propose a framework to place multiple mobile sensors to optimize the long-term performance of the KF in the estimation of the state of a PDE. The major challenges are that placing multiple sensors is an NP-hard problem, and the optimization problem is non-convex in general. In this dissertation, first, we construct an ROM using the RPOD* algorithm, and then reduce the feasible sensor locations to a subset using the ROM. The Information Space Receding Horizon Control (I-RHC) approach and a modified Monte Carlo Tree Search (MCTS) approach are applied to solve the sensor scheduling problem using the subset. Various applications are provided to demonstrate the performance of the proposed approach.
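A standard stand-in for the "fit an AR model to the recovered input statistics" step is solving the Yule-Walker equations; the sketch below is ours, not the dissertation's exact algorithm:

```python
def yule_walker(acov, order):
    """Fit AR(order) coefficients from an autocovariance sequence
    acov[0..order] by solving the Yule-Walker (Toeplitz) system R a = r,
    with R[i][j] = acov[|i-j|] and r[i] = acov[i+1]."""
    n = order
    A = [[acov[abs(i - j)] for j in range(n)] for i in range(n)]
    b = [acov[i + 1] for i in range(n)]
    # Gaussian elimination with partial pivoting (fine for small orders).
    for col in range(n):
        piv = max(range(col, n), key=lambda row: abs(A[row][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for j in range(col, n):
                A[row][j] -= f * A[col][j]
            b[row] -= f * b[col]
    a = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = b[row] - sum(A[row][j] * a[j] for j in range(row + 1, n))
        a[row] = s / A[row][row]
    return a

# For an AR(1) process x_k = 0.8 x_{k-1} + w_k, acov[k] is proportional to
# 0.8**k, so the recovered coefficient is 0.8.
coeff = yule_walker([0.8 ** k for k in range(3)], order=1)
```

In the dissertation's pipeline, the coefficients recovered this way would feed the eigensystem realization algorithm to build the innovations model of the unknown inputs.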
Adaptive Estimation and Heuristic Optimization of Nonlinear Spacecraft Attitude Dynamics
2016-09-15
…sources ranging from inertial measurement units to star sensors are used to construct observations for attitude estimation algorithms. The sensor…parameters. A single vector measurement will provide two independent parameters, as a unit vector constraint removes a DOF, making the problem underdetermined.
Optimal heavy tail estimation - Part 1: Order selection
NASA Astrophysics Data System (ADS)
Mudelsee, Manfred; Bermejo, Miguel A.
2017-12-01
The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
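The estimator such an order selection feeds is typically the Hill estimator; the sketch below shows it on exact Pareto quantiles (the fixed choice k = 200 is arbitrary here, which is precisely the problem the paper's data-adaptive selector addresses):

```python
import math

def hill_estimate(data, k):
    """Hill estimator of the tail exponent alpha from the k largest values:
    alpha_hat = k / sum(log(x_(i) / x_(k))), x sorted in decreasing order."""
    x = sorted(data, reverse=True)
    return k / sum(math.log(x[i] / x[k]) for i in range(k))

# Exact Pareto(alpha = 1.5) quantiles stand in for a heavy-tailed sample:
# survival function P(X > x) = x**(-alpha), so the quantile transform is
# x = (1 - u)**(-1/alpha).
alpha = 1.5
n = 10_000
sample = [(1.0 - (i + 0.5) / n) ** (-1.0 / alpha) for i in range(n)]
alpha_hat = hill_estimate(sample, k=200)   # close to 1.5
```

With real data the estimate depends strongly on k, trading bias (k too large) against variance (k too small), which is why the order selection studied in the paper matters.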
A mechanical model of metatarsal stress fracture during distance running.
Gross, T S; Bunch, R P
1989-01-01
A model of metatarsal mechanics has been proposed as a link between the high incidence of second and third metatarsal stress fractures and the large stresses measured beneath the second and third metatarsal heads during distance running. Eight discrete piezoelectric vertical stress transducers were used to record the forefoot stresses of 21 male distance runners. Plantar forces were estimated based upon load-bearing area estimates derived from footprints. The highest forces were estimated beneath the second and first metatarsal heads (341.1 N and 279.1 N, respectively). Considering the toe as a hinged cantilever and the metatarsal as a proximally attached rigid cantilever allowed estimation of metatarsal midshaft bending strain, shear, and axial forces. Bending strain was estimated to be greatest in the second metatarsal (6662 με), a value 6.9 times greater than the estimated first metatarsal strain. Predicted third, fourth, and fifth metatarsal strains ranged between 4832 and 5241 με. Shear force estimates were also greatest in the second metatarsal (203.0 N). Axial forces were highest in the first metatarsal (593.2 N) due to large hallux forces relative to the remaining toes. Although a first-order model, these data highlight the structural demands placed upon the second metatarsal, a location of high metatarsal stress fracture incidence during distance running.
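The rigid-cantilever bending estimate reduces to epsilon = M*c/(E*I) with moment M = F*L. The sketch below uses illustrative stand-in values chosen only to land in a physiologically plausible range; they are not the study's anthropometric inputs:

```python
def cantilever_bending_strain(force_n, length_m, half_depth_m, e_pa, i_m4):
    """Midshaft bending strain of a bone modeled as a proximally fixed rigid
    cantilever: epsilon = M*c/(E*I), with bending moment M = F*L, section
    half-depth c, elastic modulus E, and second moment of area I."""
    moment = force_n * length_m               # N*m
    return moment * half_depth_m / (e_pa * i_m4)

# Hypothetical second-metatarsal-like case: 100 N head force on a 35 mm lever
# arm, c = 4 mm, E = 17 GPa (typical cortical bone), I = 120 mm^4.
strain = cantilever_bending_strain(100.0, 0.035, 0.004, 17e9, 120e-12)
microstrain = strain * 1e6                    # ~6.9e3, same ballpark as above
```

The point of the sketch is only that plausible forefoot loads and metatarsal geometry produce midshaft strains of a few thousand microstrain, consistent with the magnitudes the study reports.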
NASA Technical Reports Server (NTRS)
Langel, R. A.; Estes, R. H.
1983-01-01
Data from MAGSAT were analyzed as a function of the Dst index to determine the first degree/order spherical harmonic description of the near-Earth external field and its corresponding induced field. The analysis was done separately for data from dawn and dusk. The MAGSAT data were compared with POGO data. A local time variation of the external field persists even during very quiet magnetic conditions; both a diurnal and an 8-hour period are present. A crude estimate of the Sq current in the 45 deg geomagnetic latitude range is obtained for 1966 to 1970. The current strength, located in the ionosphere and induced in the Earth, is typical of earlier determinations from surface data, although its maximum is displaced in local time from previous results.
Characterization and physical properties of hydrate bearing sediments
NASA Astrophysics Data System (ADS)
Terzariol, M.; Santamarina, C.
2016-12-01
The amount of carbon trapped in hydrates is estimated to be larger than in conventional oil and gas reservoirs; thus, methane hydrate is a promising energy resource. The high water pressure and the relatively low temperature needed for hydrate stability restrict the distribution of methane hydrates to continental shelves and permafrost regions. Stability conditions add inherent complexity to coring, sampling, handling, testing and data interpretation, and have profound implications for potential production strategies. Thus, a novel technology was developed for handling, transferring, and testing natural hydrate-bearing sediments without depressurization, in order to preserve the sediment structure. Results from the first deployment of these tools on natural samples from the Nankai Trough, Japan, will also be summarized. Finally, to avoid the consequences of poor sampling, a new multi-sensor in-situ characterization tool will be introduced.
Virus elimination in activated sludge systems: from batch tests to mathematical modeling.
Haun, Emma; Ulbricht, Katharina; Nogueira, Regina; Rosenwinkel, Karl-Heinz
2014-01-01
A virus tool based on Activated Sludge Model No. 3 for modeling virus elimination in activated sludge systems was developed and calibrated with the results from laboratory-scale batch tests and from measurements in a municipal wastewater treatment plant (WWTP). The somatic coliphages were used as an indicator for human pathogenic enteric viruses. The extended model was used to simulate the virus concentration in batch tests and in a municipal full-scale WWTP under steady-state and dynamic conditions. The experimental and modeling results suggest that both adsorption and inactivation processes, modeled as reversible first-order reactions, contribute to virus elimination in activated sludge systems. The model should be a useful tool to estimate the number of viruses entering water bodies from the discharge of treated effluents.
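A toy sketch of the two processes the extended model represents, reversible first-order adsorption plus first-order inactivation, integrated with explicit Euler; the rate constants are illustrative, not the paper's calibrated values:

```python
def simulate_virus_removal(c_free0, k_ads, k_des, k_inact, dt, steps):
    """Free phages adsorb reversibly onto sludge flocs (k_ads, k_des, both
    1/d) while free and adsorbed phages are inactivated (k_inact, 1/d); all
    reactions are first order. Returns final (free, adsorbed) concentrations
    after explicit-Euler integration with step dt (days)."""
    c_free, c_ads = c_free0, 0.0
    for _ in range(steps):
        d_free = (-k_ads * c_free + k_des * c_ads - k_inact * c_free) * dt
        d_ads = (k_ads * c_free - k_des * c_ads - k_inact * c_ads) * dt
        c_free += d_free
        c_ads += d_ads
    return c_free, c_ads

# 10 days at dt = 0.01 d: adsorption only shifts phages between compartments,
# so the total pool decays at exactly the inactivation rate.
free, ads = simulate_virus_removal(1.0, k_ads=0.5, k_des=0.1, k_inact=0.05,
                                   dt=0.01, steps=1000)
```

This separation is the point of the batch-test calibration: adsorption alone removes phages from the water phase quickly, but only inactivation removes them from the system.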
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghoos, K., E-mail: kristel.ghoos@kuleuven.be; Dekeyser, W.; Samaey, G.
2016-10-01
The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise and Robbins Monro. Also, practical procedures to estimate the errors in complex codes are proposed. Moreover, first results with more complex models show that an order of magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
First estimates of the global and regional incidence of neonatal herpes infection
Looker, K. J.; Magaret, A. S.; May, M. T.; Turner, K. M. E.; Vickerman, P.; Newman, L. M.; Gottlieb, S. L.
2017-01-01
Background Neonatal herpes is a rare but potentially devastating condition (60% fatality without treatment). Transmission usually occurs during delivery from mothers with herpes simplex virus type 1 (HSV-1) or HSV-2 genital infection. The global burden has never been quantified. We developed a novel methodology for burden estimation and present the first WHO global and regional estimates of the annual number of neonatal herpes cases during 2010–2015. Methods Previous estimates of HSV-1 and HSV-2 prevalence and incidence in women aged 15–49 years were applied to 2010–2015 birth rates to estimate infections during pregnancy. Published risks of neonatal HSV transmission were then applied according to whether maternal infection was incident or prevalent with HSV-1 or HSV-2 to estimate neonatal herpes cases. Findings Globally the overall rate of neonatal herpes was estimated to be ~10 cases per 100,000 births, equivalent to a best estimate of ~14,000 cases annually (HSV-1: ~4,000; HSV-2: ~10,000). We estimated that most neonatal herpes cases occurred in Africa, due to high maternal HSV-2 infection and high birth rates. HSV-1 contributed more cases than HSV-2 in the Americas, Europe and Western Pacific. High rates of genital HSV-1 infection and moderate HSV-2 prevalence meant the Americas had the highest overall rate. However, our estimates are highly sensitive to the core assumptions, and considerable uncertainty exists for many settings given sparse underlying data. Interpretation These neonatal herpes estimates mark the first attempt to quantify the global burden of this rare but serious condition. Better primary data collection on neonatal herpes is critically needed to reduce uncertainty and refine future estimates. This is particularly important in resource-poor settings where we may have underestimated cases.
Nevertheless, these first estimates suggest development of new HSV prevention measures such as vaccines could have additional benefits beyond reducing genital ulcer disease and HSV-associated HIV transmission, through prevention of neonatal herpes. Funding World Health Organization PMID:28153513
Barcelona, M J; Xie, G
2001-08-15
Permeable reactive barriers (PRB) are being used to engineer favorable field conditions for in-situ remediation efforts. Two redox adjustment barriers were installed to facilitate a 10-month research effort on the fate and transport of MTBE (methyl tert-butyl ether) at a site called the Michigan Integrated Remediation Technology Laboratory (MIRTL). Thirty kilograms of whey were injected as a slurry into an unconfined aquifer to establish an upgradient reductive zone to reduce O2 concentration in the vicinity of a contaminant injection source. To minimize the impact of contaminant release, 363 kg of oxygen release compound (ORC) were placed in the aquifer as a downgradient oxidative barrier. Dissolved oxygen and other chemical species were monitored in the field to evaluate the effectiveness of this technology. A transient one-dimensional advective-dispersive-reaction (ADR) model was proposed to simulate the dissolved oxygen transport. The equations were solved with commonly encountered PRB initial and constant/variable boundary conditions. No similar previous solution was found in the literature. The in-situ lifetimes, based on variable source loading, were estimated to be 1,661 and 514 days for the whey barrier and ORC barrier, respectively. Estimates based on either maximum O2 consumption/production or measured O2 curves were found to under- or overestimate the lifetime of the barriers. The pseudo-first-order rate constant of whey depletion was estimated to be 0.303/d with a dissolution rate of 0.04/d. The oxygen release rate constant in the ORC barrier was estimated to be 0.03/d. This paper provides a means to design and predict the performance of reactive redox barriers, especially when only limited field data are available.
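For orientation, the generic first-order relationship behind the reported rate constants can be sketched as below; note that the paper's 1,661- and 514-day barrier lifetimes come from the full advective-dispersive-reaction solution with variable source loading, not from this simple expression:

```python
import math

def first_order_time_to_fraction(k_per_day, fraction):
    """Time (days) for a pool governed by first-order kinetics,
    C(t) = C0 * exp(-k * t), to decay to a given fraction of its
    initial mass."""
    return -math.log(fraction) / k_per_day

# Whey depletion with the reported pseudo-first-order constant 0.303/d:
half_life = first_order_time_to_fraction(0.303, 0.5)   # ~2.3 days
```

The contrast between this short intrinsic half-life and the 1,661-day estimated barrier lifetime illustrates why the slow dissolution rate (0.04/d) and transport terms, not the reaction constant alone, control barrier longevity.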
NASA Astrophysics Data System (ADS)
Martinez, M.; Rocha, B.; Li, M.; Shi, G.; Beltempo, A.; Rutledge, R.; Yanishevsky, M.
2012-11-01
The National Research Council Canada (NRC) has worked on the development of structural health monitoring (SHM) test platforms for assessing the performance of sensor systems for load monitoring applications. The first SHM platform consists of a 5.5 m cantilever aluminum beam that provides an optimal scenario for evaluating the ability of a load monitoring system to measure bending, torsion and shear loads. The second SHM platform contains an added level of structural complexity, consisting of aluminum skins with bonded/riveted stringers, typical of an aircraft lower wing structure. These two load monitoring platforms are well characterized and documented, providing loading conditions similar to those encountered during service. In this study, a micro-electro-mechanical system (MEMS) for acquiring data from triads of gyroscopes, accelerometers and magnetometers is described. The system was used to compute changes in angles at discrete stations along the platforms. The angles obtained from the MEMS were used to compute a second-, third- or fourth-order polynomial surface from which displacements at every point could be computed. The use of a new Kalman filter was evaluated for angle estimation, from which displacements in the structure were computed. The outputs of the newly developed algorithms were then compared to the displacements obtained from the linear variable displacement transducers connected to the platforms. The displacement curves were subsequently post-processed either analytically, or with the help of a finite element model of the structure, to estimate strains and loads. The estimated strains were compared with baseline strain gauge instrumentation installed on the platforms. This new approach for load monitoring was able to provide accurate estimates of applied strains and shear loads.
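A bare-bones scalar Kalman filter of the kind evaluated for angle estimation can be sketched as follows; this is a generic random-walk-plus-measurement-noise model, not the platform's actual (more elaborate) filter:

```python
def kalman_angle(measurements, process_var, meas_var, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for a slowly varying angle observed with
    noisy sensor readings. The angle is modeled as a random walk with
    variance `process_var` per step; `meas_var` is the sensor noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += process_var                  # predict: uncertainty grows
        k = p / (p + meas_var)            # Kalman gain
        x += k * (z - x)                  # update with measurement residual
        p *= (1.0 - k)                    # posterior variance shrinks
        estimates.append(x)
    return estimates

# Constant true angle of 5 degrees observed without noise: starting from a
# poor prior (x0 = 0), the estimate converges toward 5.
est = kalman_angle([5.0] * 50, process_var=1e-4, meas_var=0.25)
```

In the platform setting, angle estimates like these at discrete stations are what the polynomial surface fit consumes to recover displacements.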
Improved tilt sensing in an LGS-based tomographic AO system based on instantaneous PSF estimation
NASA Astrophysics Data System (ADS)
Veran, Jean-Pierre
2013-12-01
Laser guide star (LGS)-based tomographic AO systems, such as Multi-Conjugate AO (MCAO), Multi-Object AO (MOAO) and Laser Tomography AO (LTAO), require natural guide stars (NGSs) to sense tip-tilt (TT) and possibly other low order modes, to get rid of the LGS tilt indetermination problem. For example, NFIRAOS, the first-light facility MCAO system for the Thirty Meter Telescope, requires three NGSs, in addition to six LGSs: two to measure TT and one to measure TT and defocus. In order to improve sky coverage, these NGSs are selected in a so-called technical field (2 arcmin in diameter for NFIRAOS), which is much larger than the on-axis science field (17x17 arcsec for NFIRAOS), on which the AO correction is optimized. Most of the time, the NGSs are far off-axis and thus poorly corrected by the high-order AO loop, resulting in spots with low contrast and high speckle noise. Accurately finding the position of such spots is difficult, even with advanced methods such as matched filtering or correlation, because these methods rely on the knowledge of an average spot image, which is quite different from the instantaneous spot image, especially in the case of poor correction. This results in poor tilt estimation, which, ultimately, impacts sky coverage. We propose to improve the estimation of the position of the NGS spots by using, for each frame, a current estimate of the instantaneous spot profile instead of an average profile. This estimate can be readily obtained by tracing wavefront errors in the direction of the NGS through the turbulence volume. The latter is already computed by the tomographic process from the LGS measurements as part of the high-order AO loop. Computing such a wavefront estimate has actually already been proposed for the purpose of driving a deformable mirror (DM) in each NGS WFS, to optically correct the NGS spot, which does lead to improved centroiding accuracy.
Our approach, however, is much simpler, because it does not require the complication of extra DMs, which would need to be driven in open-loop. Instead, it can be purely implemented in software, does not increase the real-time computational burden significantly, and can still provide a significant improvement in tilt measurement accuracy, and therefore in sky-coverage. In this paper, we illustrate the benefit of this new tilt measurement strategy in the specific case of NFIRAOS, under various observing conditions, in comparison with the more traditional approaches that ignore the instantaneous variations of the NGS spot profiles.
Rosa, B F J V; Dias-Silva, M V D; Alves, R G
2013-02-01
This study describes the structure of the Chironomidae community associated with bryophytes in a first-order stream located in a biological reserve of the Atlantic Forest, during two seasons. Samples of bryophytes adhering to rocks along a 100-m stretch of the stream were removed with a metal blade, and 200-mL pots were filled with the samples. The numerical density (individuals per gram of dry weight), Shannon's diversity index, Pielou's evenness index, the dominance index (DI), and estimated richness were calculated for each collection period (dry and rainy). Linear regression analysis was employed to test for a correlation between rainfall and the density and richness of individuals. The high numerical density and richness of Chironomidae taxa observed are probably related to the peculiar conditions of the bryophyte habitat. The retention of larvae during periods of higher rainfall contributed to the high density and richness of Chironomidae larvae. The rarefaction analysis showed higher richness in the rainy season, related to the greater retention of food particles. The data from this study show that bryophytes provide stable habitats for the colonization by and refuge of Chironomidae larvae, mainly under conditions of faster water flow and higher precipitation.
Maximum relative speeds of living organisms: Why do bacteria perform as fast as ostriches?
NASA Astrophysics Data System (ADS)
Meyer-Vernet, Nicole; Rospars, Jean-Pierre
2016-12-01
Self-locomotion is central to animal behaviour and survival. It is generally analysed by focusing on preferred speeds and gaits under particular biological and physical constraints. In the present paper we focus instead on the maximum speed and we study its order-of-magnitude scaling with body size, from bacteria to the largest terrestrial and aquatic organisms. Using data for about 460 species of various taxonomic groups, we find a maximum relative speed of the order of magnitude of ten body lengths per second over a 10^20-fold mass range of running and swimming animals. This result implies a locomotor time scale of the order of one tenth of a second, virtually independent of body size, anatomy and locomotion style, whose ubiquity requires an explanation building on basic properties of motile organisms. From first-principle estimates, we relate this generic time scale to other basic biological properties, using in particular the recent generalisation of the muscle specific tension to molecular motors. Finally, we go a step further by relating this time scale to still more basic quantities, such as environmental conditions on Earth in addition to fundamental physical and chemical constants.
The E-Step of the MGROUP EM Algorithm. Program Statistics Research Technical Report No. 93-37.
ERIC Educational Resources Information Center
Thomas, Neal
Mislevy (1984, 1985) introduced an EM algorithm for estimating the parameters of a latent distribution model that is used extensively by the National Assessment of Educational Progress. Second order asymptotic corrections are derived and applied along with more common first order asymptotic corrections to approximate the expectations required by…
Aylward, Lesa L; Brunet, Robert C; Starr, Thomas B; Carrier, Gaétan; Delzell, Elizabeth; Cheng, Hong; Beall, Colleen
2005-08-01
Recent studies demonstrating a concentration dependence of elimination of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) suggest that previous estimates of exposure for occupationally exposed cohorts may have underestimated actual exposure, resulting in a potential overestimate of the carcinogenic potency of TCDD in humans based on the mortality data for these cohorts. Using a database on U.S. chemical manufacturing workers potentially exposed to TCDD compiled by the National Institute for Occupational Safety and Health (NIOSH), we evaluated the impact of using a concentration- and age-dependent elimination model (CADM) (Aylward et al., 2005) on estimates of serum lipid area under the curve (AUC) for the NIOSH cohort. These data were used previously by Steenland et al. (2001) in combination with a first-order elimination model with an 8.7-year half-life to estimate cumulative serum lipid concentration (equivalent to AUC) for these workers for use in cancer dose-response assessment. Serum lipid TCDD measurements taken in 1988 for a subset of the cohort were combined with the NIOSH job exposure matrix and work histories to estimate dose rates per unit of exposure score. We evaluated the effect of choices in regression model (regression on untransformed vs. ln-transformed data and inclusion of a nonzero regression intercept) as well as the impact of choices of elimination models and parameters on estimated AUCs for the cohort. Central estimates for dose rate parameters derived from the serum-sampled subcohort were applied with the elimination models to time-specific exposure scores for the entire cohort to generate AUC estimates for all cohort members. Use of the CADM resulted in improved model fits to the serum sampling data compared to the first-order models. Dose rates varied by a factor of 50 among different combinations of elimination model, parameter sets, and regression models. 
Use of a CADM results in increases of up to five-fold in AUC estimates for the more highly exposed members of the cohort compared to estimates obtained using the first-order model with 8.7-year half-life. This degree of variation in the AUC estimates for this cohort would affect substantially the cancer potency estimates derived from the mortality data from this cohort. Such variability and uncertainty in the reconstructed serum lipid AUC estimates for this cohort, depending on elimination model, parameter set, and regression model, have not been described previously and are critical components in evaluating the dose-response data from the occupationally exposed populations.
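For reference, the simple first-order elimination model that the CADM replaces can be written as below; the 8.7-year half-life is the value cited in the abstract, and everything else is illustrative:

```python
import math

def first_order_body_burden(c0, years, half_life_years=8.7):
    """Serum lipid TCDD concentration under simple first-order elimination,
    C(t) = C0 * exp(-k * t) with k = ln(2) / half-life. The CADM makes the
    elimination rate depend on concentration and age, which this
    one-parameter model cannot capture."""
    k = math.log(2) / half_life_years
    return c0 * math.exp(-k * years)

# Under first-order kinetics, half of a 100 ppt burden remains after 8.7 years.
remaining = first_order_body_burden(100.0, 8.7)
```

Because the CADM eliminates TCDD faster at the high concentrations seen shortly after occupational exposure, back-calculating from a 1988 serum sample with this constant-rate model understates the historical peak, which is the mechanism behind the up-to-five-fold AUC differences reported above.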
NASA Astrophysics Data System (ADS)
Weber, Robin; Carrassi, Alberto; Guemas, Virginie; Doblas-Reyes, Francisco; Volpi, Danila
2014-05-01
Full Field Initialisation (FFI) and Anomaly Initialisation (AI) are two schemes used to initialise seasonal-to-decadal (s2d) prediction. FFI initialises the model on the best estimate of the actual climate state and minimises the initial error. However, due to inevitable model deficiencies, the trajectories drift away from the observations towards the model's own attractor, inducing a bias in the forecast. AI has been devised to tackle the impact of drift through the addition of this bias onto the observations, in the hope of gaining an initial state closer to the model attractor. Its goal is to forecast climate anomalies. The large variety of experimental setups, global coupled models, and observational networks adopted world-wide have led to varying results with regard to the relative performance of AI and FFI. Our research is motivated first by a comparison of these two initialisation approaches under varying observational errors, observational distributions, and model errors. We also propose and compare two advanced schemes for s2d prediction. Least Square Initialisation (LSI) intends to propagate observational information of partially initialised systems to the whole model domain, based on standard practices in data assimilation and using the covariance of the model anomalies. Exploring the Parameters Uncertainty (EPU) is an online drift correction technique applied during the forecast run after initialisation. It is designed to estimate, and subtract, the bias in the forecast related to parametric error. Experiments are carried out using idealised coupled dynamics in order to facilitate better control and robust statistical inference. Results show that an improvement of FFI will necessitate refinements in the observations, whereas improvements in AI are subject to model advances.
A successful approximation of the model attractor using AI is guaranteed only when the differences between model and nature probability distribution functions (PDFs) are limited to the first order. Significant higher-order differences can lead to an initial conditions distribution for AI that is less representative of the model PDF, degrading the initialisation skill. Finally, both advanced schemes lead to significantly improved skill scores, encouraging their implementation for models of higher complexity.
Multisensory Stimulation Can Induce an Illusion of Larger Belly Size in Immersive Virtual Reality
Normand, Jean-Marie; Giannopoulos, Elias; Spanlang, Bernhard; Slater, Mel
2011-01-01
Background Body change illusions have been of great interest in recent years for the understanding of how the brain represents the body. Appropriate multisensory stimulation can induce an illusion of ownership over a rubber or virtual arm, simple types of out-of-the-body experiences, and even ownership with respect to an alternate whole body. Here we use immersive virtual reality to investigate whether the illusion of a dramatic increase in belly size can be induced in males through (a) first person perspective position, (b) synchronous visual-motor correlation between real and virtual arm movements, and (c) self-induced synchronous visual-tactile stimulation in the stomach area. Methodology Twenty-two participants entered a virtual reality (VR) delivered through a stereo head-tracked wide field-of-view head-mounted display. They saw from a first person perspective a virtual body substituting their own that had an inflated belly. For four minutes they repeatedly prodded their real belly with a rod that had a virtual counterpart that they saw in the VR. There was a synchronous condition, where their prodding movements were synchronous with what they felt and saw, and an asynchronous condition, where this was not the case. The experiment was repeated twice for each participant in counter-balanced order. Responses were measured by questionnaire, and also by a comparison of before and after self-estimates of belly size produced by direct visual manipulation of the virtual body seen from the first person perspective. Conclusions The results show that first person perspective of a virtual body that substitutes for the own body in virtual reality, together with synchronous multisensory stimulation, can temporarily produce changes in body representation towards a larger belly size.
This was demonstrated by (a) questionnaire results, (b) the difference between the self-estimated belly size, judged from a first person perspective, after and before the experimental manipulation, and (c) significant positive correlations between these two measures. We discuss this result in the general context of body ownership illusions, and suggest applications including treatment for body size distortion illnesses. PMID:21283823
Beam-plasma instability in inhomogeneous magnetic field and second order cyclotron resonance effects
NASA Astrophysics Data System (ADS)
Trakhtengerts, V. Y.; Hobara, Y.; Demekhov, A. G.; Hayakawa, M.
1999-03-01
A new analytical approach to cyclotron instability of electron beams with sharp gradients in velocity space (step-like distribution function) is developed taking into account magnetic field inhomogeneity and nonstationary behavior of the electron beam velocity. Under these conditions, the conventional hydrodynamic instability of such beams is drastically modified and second order resonance effects become important. It is shown that the optimal conditions for the instability occur for nonstationary quasimonochromatic wavelets whose frequency changes in time. The theory developed permits one to estimate the wave amplification and spatio-temporal characteristics of these wavelets.
NASA Astrophysics Data System (ADS)
Akhavan Niaki, Farbod
The objective of this research is first to investigate the applicability and advantage of statistical state estimation methods for predicting tool wear in machining nickel-based superalloys over deterministic methods, and second to study the effects of cutting tool wear on the quality of the part. Nickel-based superalloys are among those classes of materials that are known as hard-to-machine alloys. These materials combine retention of their strength at high temperature with high resistance to corrosion and creep. These unique characteristics make them an ideal candidate for harsh environments like the combustion chambers of gas turbines. However, the same characteristics that make nickel-based alloys suitable for aggressive conditions introduce difficulties when machining them. High strength and low thermal conductivity accelerate cutting tool wear and increase the possibility of in-process tool breakage. A blunt tool typically deteriorates the surface integrity and damages the quality of the machined part by inducing high tensile residual stresses, generating micro-cracks, altering the microstructure or leaving a poor roughness profile behind. As a consequence, the expensive superalloy part would have to be scrapped. The current dominant solution in industry is to sacrifice the productivity rate by replacing the tool in the early stages of its life or to choose conservative cutting conditions in order to lower the wear rate and preserve workpiece quality. Thus, monitoring the state of the cutting tool and estimating its effects on part quality is a critical task for increasing productivity and profitability in machining superalloys. This work aims first to introduce a probabilistic-based framework for estimating tool wear in milling and turning of superalloys and second to study the detrimental effects of the functional state of the cutting tool, in terms of wear and wear rate, on part quality. 
In the milling operation, the mechanisms of tool failure were first identified and, based on the rapid catastrophic failure of the tool, a Bayesian inference method (i.e., Markov Chain Monte Carlo, MCMC) was used for parameter calibration of tool wear using a power mechanistic model. The calibrated model was then used in the state space probabilistic framework of a Kalman filter to estimate the tool flank wear. Furthermore, an on-machine laser measuring system was utilized and fused into the Kalman filter to improve the estimation accuracy. In the turning operation the behavior of progressive wear was investigated as well. Due to the nonlinear nature of wear in turning, an extended Kalman filter was designed for tracking progressive wear, and the results of the probabilistic-based method were compared with a deterministic technique, where significant improvement (more than 60% increase in estimation accuracy) was achieved. To fulfill the second objective of this research in understanding the underlying effects of wear on part quality in cutting nickel-based superalloys, a comprehensive study on surface roughness, dimensional integrity and residual stress was conducted. The estimated results derived from a probabilistic filter were used for finding the proper correlations between wear, surface roughness and dimensional integrity, along with a finite element simulation for predicting the residual stress profile for sharp and worn cutting tool conditions. The output of this research provides the essential information on condition monitoring of the tool and its effects on product quality. The low-cost Hall effect sensor used in this work to capture spindle power in the context of the stochastic filter can effectively estimate tool wear in both milling and turning operations, while the estimated wear can be used to generate knowledge of the state of workpiece surface integrity. 
Therefore, the true functionality and efficiency of the tool in superalloy machining can be evaluated without additional high-cost sensing.
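As a rough illustration of the stochastic-filtering idea described above, the sketch below runs a scalar extended Kalman filter on synthetic wear data. The power-law wear model, its constants, and the noise levels are invented for the example; they are not the calibrated model or sensor data from this work.

```python
import numpy as np

# Minimal scalar extended Kalman filter tracking nonlinear tool wear.
# The power-law wear law (rate = a * w**b) and all constants are
# illustrative assumptions, not the study's calibrated model.

def ekf_wear(measurements, a=0.05, b=0.5, dt=1.0, q=1e-4, r=1e-2, w0=0.1, p0=1.0):
    w, p = w0, p0
    estimates = []
    for z in measurements:
        # Predict: w_pred = w + a*w^b*dt; Jacobian F = 1 + a*b*w^(b-1)*dt
        w_pred = w + a * w**b * dt
        F = 1.0 + a * b * w**(b - 1.0) * dt
        p_pred = F * p * F + q
        # Update with a direct (identity) wear measurement
        K = p_pred / (p_pred + r)
        w = w_pred + K * (z - w_pred)
        p = (1.0 - K) * p_pred
        w = max(w, 1e-6)  # keep wear physically non-negative
        estimates.append(w)
    return estimates

# Synthetic run: simulate true wear, add measurement noise, filter.
rng = np.random.default_rng(0)
true_w = [0.1]
for _ in range(49):
    true_w.append(true_w[-1] + 0.05 * true_w[-1]**0.5)
meas = [w + rng.normal(0, 0.1) for w in true_w]
est = ekf_wear(meas)
rmse_raw = float(np.sqrt(np.mean((np.array(meas) - np.array(true_w))**2)))
rmse_ekf = float(np.sqrt(np.mean((np.array(est) - np.array(true_w))**2)))
```

With the filter's noise settings matched to the simulated noise, the filtered estimate tracks the true wear more closely than the raw measurements, which is the qualitative advantage the abstract reports for the probabilistic approach.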
Development and Application of a Cohesive Sediment Transport Model in Coastal Louisiana
NASA Astrophysics Data System (ADS)
Sorourian, S.; Nistor, I.
2017-12-01
The Louisiana coast has suffered from rapid land loss due to the combined effects of an increasing rate of eustatic sea-level rise, insufficient riverine sediment input, and subsidence. The sediment in this region is dominated by cohesive sediments (up to 80% clay). This study presents a new model for calculating the suspended sediment concentration (SSC) of cohesive sediments. Several new concepts are incorporated into the proposed model, which is capable of estimating the spatial and temporal variation in the concentration of cohesive sediment. First, the model incorporates the effect of electrochemical forces between cohesive sediment particles. Second, the wave friction factor is expressed in terms of the median particle size diameter in order to enhance the accuracy of the estimation of bed shear stress. Third, the erosion rate of cohesive sediments is expressed in time-dependent form. Simulated SSC profiles are compared with field data collected from Vermilion Bay, Louisiana. The results of the proposed model agree well with the experimental data once a steady-state condition is achieved. The new numerical model provides a better estimate of the suspended sediment concentration profile than the initial model developed by Mehta and Li (2003). Among the proposed developments, the formulation of a time-dependent erosion rate shows the most accurate results. Coupling of the present model with the Finite-Volume, primitive equation Community Ocean Model (FVCOM) would shed light on the fate of fine-grained sediments in order to increase overall retention and restoration of the Louisiana coastal plain.
Tomorrow's Transportation Market : Developing an Innovative, Seamless Transportation System
DOT National Transportation Integrated Search
2013-04-17
With the cost of congestion in the United States estimated to be on the order of $121 billion, transportation planners are under increasing pressure to improve conditions and meet projected demand increases. Harnessing emerging technologies to develo...
First and second order stereology of hyaline cartilage: Application on mice femoral cartilage.
Noorafshan, Ali; Niazi, Behnam; Mohamadpour, Masoomeh; Hoseini, Leila; Hoseini, Najmeh; Owji, Ali Akbar; Rafati, Ali; Sadeghi, Yasaman; Karbalay-Doust, Saied
2016-11-01
Stereological techniques could be considered in research on cartilage to obtain quantitative data. The present study aimed to explain the application of first- and second-order stereological methods to the articular cartilage of mice, with the methods applied to mice exposed to cadmium (Cd). The distal femoral articular cartilage of BALB/c mice (control and Cd-treated) was removed. Then, the volume and surface area of the cartilage and the number of chondrocytes were estimated using the Cavalieri and optical disector techniques on isotropic uniform random sections. The pair-correlation function [g(r)] and the cross-correlation function were calculated to express the spatial arrangement of chondrocytes-chondrocytes and chondrocytes-matrix (chondrocyte clustering/dispersing), respectively. The mean±standard deviation of the cartilage volume, surface area, and thickness were 1.4±0.1 mm³, 26.2±5.4 mm², and 52.8±6.7 μm, respectively. Besides, the mean number of chondrocytes was 680±200 (×10³). The cartilage volume, cartilage surface area, and number of chondrocytes were reduced by 25%, 27%, and 27%, respectively, in the Cd-treated mice in comparison to the control animals (p<0.03). Estimates of g(r) for the cells and matrix against the dipole distances, r, have been plotted. This plot showed that the chondrocytes and the matrix were neither dispersed nor clustered in the two study groups. Application of design-based stereological methods, together with evaluation of the spatial arrangement of the cartilage components, carries potential advantages for investigating the cartilage in different joint conditions. Chondrocyte clustering/dispersing and cellularity can be evaluated in cartilage assessment in normal or abnormal situations. Copyright © 2016 Elsevier GmbH. All rights reserved.
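The Cavalieri volume estimate underlying the first-order measurements above is simple to state: multiply the spacing between systematic uniform random sections by the summed section areas. The numbers below are invented for illustration; they are not the mice data.

```python
# Cavalieri volume estimation: V = T * sum(section areas), where T is the
# distance between systematic uniform random sections through the object.
# Spacing and areas here are made-up example values.

T = 0.05  # section spacing, mm
areas = [0.21, 0.35, 0.48, 0.52, 0.44, 0.30, 0.18]  # section areas, mm^2
volume = T * sum(areas)  # estimated volume, mm^3
```

The same systematic sections can then be subsampled with optical disectors to count cells, giving the number estimate reported alongside the volume.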
NASA Technical Reports Server (NTRS)
Megier, J. (Principal Investigator)
1976-01-01
The author has identified the following significant results. Some qualitative results were obtained out of the experiment of reflectance measurements under greenhouse conditions. An effort was made to correlate phenological stages, production, and radiometric measurements. It was found that the first order effect of exposure variability to sun irradiation is responsible for different rice productivity classes. Effects of rice variety and fertilization become second order, because they are completely masked by the first order effects.
Wave simulation for the design of an innovative quay wall: the case of Vlorë Harbour
NASA Astrophysics Data System (ADS)
Antonini, Alessandro; Archetti, Renata; Lamberti, Alberto
2017-01-01
Sea states and environmental conditions are basic data for the design of marine structures. Hindcast wave data have been applied here with the aim of identifying the proper design conditions for an innovative quay wall concept. In this paper, the results of a computational fluid dynamics model are used to optimise the new absorbing quay wall of Vlorë Harbour (Republic of Albania) and define the design loads under extreme wave conditions. The design wave states at the harbour entrance have been estimated by analysing 31 years of hindcast wave data simulated through the application of WaveWatch III. Due to the particular geography and topography of the Bay of Vlorë, wave conditions generated from the north-west are transferred to the harbour entrance with the application of a 2-D spectral wave module, whereas southern wave states, which are also the most critical for the port structures, are defined by means of a wave generation model, according to the available wind measurements. Finally, the identified extreme events have been used, through the NewWave approach, as boundary conditions for the numerical analysis of the interaction between the quay wall and the extreme events. The results show that the proposed method, based on numerical modelling at different scales from macro to meso to micro, allows for the identification of the best site-specific solutions, even for a location devoid of any wave measurement. In this light, the objectives of the paper are two-fold: first, to show the application of sea-state estimation from hindcast wave data in order to properly define the design wave conditions for a new harbour structure; and second, to present a new approach for investigating an innovative absorbing quay wall based on CFD modelling and the NewWave theory.
NASA Astrophysics Data System (ADS)
Kulkarni, Rishikesh; Rastogi, Pramod
2018-05-01
A new approach is proposed for multiple phase estimation from a multicomponent exponential phase signal recorded in multi-beam digital holographic interferometry. It is capable of providing multidimensional measurements in a simultaneous manner from a single recording of the exponential phase signal encoding multiple phases. Each phase within a small window around each pixel is approximated with a first-order polynomial function of the spatial coordinates. The problem of accurate estimation of the polynomial coefficients, and in turn the unwrapped phases, is formulated as a state space analysis wherein the coefficients and signal amplitudes are set as the elements of a state vector. The state estimation is performed using the extended Kalman filter. An amplitude discrimination criterion is utilized in order to unambiguously estimate the coefficients associated with the individual signal components. The performance of the proposed method is stable over a wide range of the ratio of signal amplitudes. The pixelwise phase estimation approach of the proposed method allows it to handle fringe patterns that may contain invalid regions.
Genetic parameters for test day somatic cell score in Brazilian Holstein cattle.
Costa, C N; Santos, G G; Cobuci, J A; Thompson, G; Carvalheira, J G V
2015-12-29
Selection for lower somatic cell count has been included in the breeding objectives of several countries in order to increase resistance to mastitis. Genetic parameters of somatic cell scores (SCS) were estimated from the first-lactation test-day (TD) records of Brazilian Holstein cows using random-regression models with Legendre polynomials (LP) of order 3-5. The data consisted of 87,711 TD records produced by 10,084 cows, sired by 619 bulls, with calvings from 1993 to 2007. Heritability estimates varied from 0.06 to 0.14, decreasing from the beginning of lactation up to 60 days in milk (DIM) and increasing thereafter to the end of lactation. Genetic correlations between adjacent DIM were very high (>0.83) but decreased to negative values, obtained with the LP of order four, between DIM at the extremes of lactation. Despite the favorable trend, genetic changes in SCS were not significant and did not differ among LP. There was little benefit in fitting an LP of order >3 to model animal genetic and permanent environment effects for SCS. Estimates of variance components found in this study may be used for breeding value estimation for SCS and selection for mastitis resistance in Holstein cattle in Brazil.
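A random-regression model of this kind evaluates Legendre polynomials on days in milk standardized to [-1, 1]. The sketch below builds such a covariate matrix for an order-3 fit; the DIM range of 5-305 is an assumption for illustration, not taken from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

# Legendre covariate matrix for a random-regression test-day model: days in
# milk (DIM) are mapped to [-1, 1] and the first `order` Legendre
# polynomials P0..P_{order-1} are evaluated at each DIM. The DIM range
# below is an illustrative assumption.

def legendre_covariates(dim, order=3, dim_min=5, dim_max=305):
    x = 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min) - 1.0
    # np.eye(order)[j] selects the coefficient vector for P_j
    return np.column_stack([legendre.legval(x, np.eye(order)[j])
                            for j in range(order)])

Z = legendre_covariates([5, 155, 305])
# Column 0 is the constant P0 = 1; column 1 is P1 = x, i.e. -1, 0, 1 at the
# range endpoints and midpoint; column 2 is P2 = (3x^2 - 1)/2.
```

Each animal's genetic and permanent-environment effects are then modeled as random regressions on these columns, which is what makes the heritability vary along the lactation curve.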
Estimate of B(B¯→Xsγ) at O(αs²)
NASA Astrophysics Data System (ADS)
Misiak, M.; Asatrian, H. M.; Bieri, K.; Czakon, M.; Czarnecki, A.; Ewerth, T.; Ferroglia, A.; Gambino, P.; Gorbahn, M.; Greub, C.; Haisch, U.; Hovhannisyan, A.; Hurth, T.; Mitov, A.; Poghosyan, V.; Ślusarczyk, M.; Steinhauser, M.
2007-01-01
Combining our results for various O(αs²) corrections to the weak radiative B-meson decay, we are able to present the first estimate of the branching ratio at the next-to-next-to-leading order in QCD. We find B(B¯→Xsγ) = (3.15±0.23)×10⁻⁴ for Eγ > 1.6 GeV in the B¯-meson rest frame. The four types of uncertainty (nonperturbative, 5%; parametric, 3%; higher-order, 3%; and mc-interpolation ambiguity, 3%) have been added in quadrature to obtain the total error.
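The quoted total error follows from adding the four relative uncertainties in quadrature; the short check below reproduces it from the abstract's own numbers.

```python
from math import sqrt

# Combine independent relative uncertainties in quadrature:
# nonperturbative 5%, parametric 3%, higher-order 3%, mc-interpolation 3%,
# applied to the central value 3.15e-4.

def quadrature(*rel_errors):
    return sqrt(sum(e * e for e in rel_errors))

central = 3.15e-4
rel = quadrature(0.05, 0.03, 0.03, 0.03)   # ~0.072 total relative error
absolute = central * rel                   # ~0.23e-4, the quoted uncertainty
```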
Attitude Representations for Kalman Filtering
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Bauer, Frank H. (Technical Monitor)
2001-01-01
The four-component quaternion has the lowest dimensionality possible for a globally nonsingular attitude representation, it represents the attitude matrix as a homogeneous quadratic function, and its dynamic propagation equation is bilinear in the quaternion and the angular velocity. The quaternion is required to obey a unit norm constraint, though, so Kalman filters often employ a quaternion for the global attitude estimate and a three-component representation for small errors about the estimate. We consider these mixed attitude representations for both a first-order Extended Kalman filter and a second-order filter, as well as for quaternion-norm-preserving attitude propagation.
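One way to see the norm-preserving propagation mentioned above: for constant angular velocity, the bilinear kinematics qdot = ½ Ω(ω) q have a closed-form solution that is an orthogonal map of the quaternion, so the unit norm is preserved exactly. The sketch below uses a scalar-last component ordering, which is an arbitrary convention chosen for the example.

```python
import numpy as np

# Norm-preserving discrete quaternion propagation under constant angular
# velocity. Since Omega(w)^2 = -|w|^2 I, the matrix exponential
# exp(0.5*Omega*dt) has the closed form cos(.)I + sin(.)/|w| * Omega.
# Scalar-last quaternion convention q = [x, y, z, w] assumed here.

def omega_matrix(w):
    wx, wy, wz = w
    return np.array([[0.0,  wz, -wy,  wx],
                     [-wz, 0.0,  wx,  wy],
                     [wy,  -wx, 0.0,  wz],
                     [-wx, -wy, -wz, 0.0]])

def propagate(q, w, dt):
    n = np.linalg.norm(w)
    if n == 0:
        return q
    half = 0.5 * n * dt
    M = np.cos(half) * np.eye(4) + (np.sin(half) / n) * omega_matrix(w)
    return M @ q  # M is orthogonal, so |q| is preserved exactly

q = np.array([0.0, 0.0, 0.0, 1.0])   # identity attitude
w = np.array([0.0, 0.0, 0.1])        # rad/s about the z axis
for _ in range(1000):
    q = propagate(q, w, 0.01)
# After 10 s at 0.1 rad/s the rotation angle is 1 rad, so the quaternion
# half-angle is 0.5 rad, and the norm stays at 1 to machine precision.
```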
NASA Astrophysics Data System (ADS)
Marchenko, Artem; Duarte, Vasco
Agile teams want to deliver maximum business value. That's easy if the on-site Customer assigns business value to each story. But how does the customer do that? How can you estimate business value? This workshop is run as a game, where teams have to make tough business decisions for their "organizations". Teams have to decide which orders to take and what to deliver first in order to earn more. The session gives the participants basic business value estimation techniques, but the main point is to make people live through the business situation and to help them feel the consequences of various choices.
Analysis of First-Term Attrition of Non-Prior Service High-Quality U.S. Army Male Recruits
1989-12-13
the estimators. Under broad conditions (Hanushek, 1977), the maximum likelihood estimators are: (a) consistent, (b) asymptotically efficient, and... Diseases, Vol. 24, 1971, pp. 125-158. Hanushek, Eric A., and John E. Jackson, Statistical Methods for Social Scientists, Academic Press, New York, 1977
A physiologically-based pharmacokinetic (PBPK) model is being developed to estimate the dosimetry of toluene in rats inhaling the VOC under various experimental conditions. The effects of physical activity are currently being estimated utilizing a three-step process. First, we d...
Estimating Function Approaches for Spatial Point Processes
NASA Astrophysics Data System (ADS)
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotic optimal estimating function theories, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation and estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. 
Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. However, the original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we study the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.
Chlorine decay and bacterial inactivation kinetics in drinking water in the tropics.
Thøgersen, J; Dahi, E
1996-09-01
The decay of free chlorine (Cl2) and combined chlorine (mostly monochloramine: NH2Cl) and the inactivation of bacteria was examined in Dar es Salaam, Tanzania. Batch experiments, pilot-scale pipe experiments and full-scale pipe experiments were carried out to establish the kinetics for both decay and inactivation, and to compare the two disinfectants for use under tropical conditions. The decay of both disinfectants closely followed first order kinetics, with respect to the concentration of both disinfectant and disinfectant-consuming substances. Bacterial densities exhibited a kinetic pattern consisting of first order inactivation with respect to the density of the bacteria and the concentration of the disinfectant, and first order growth with respect to the bacterial density. The disinfection kinetic model takes the decaying concentration of the disinfectant into account. The decay rate constant for free chlorine was 114 l g⁻¹ h⁻¹, while the decay rate constant for combined chlorine was 1.84 l g⁻¹ h⁻¹ (1.6% of the decay rate for free chlorine). The average concentration of disinfectant-consuming substances in the water phase was 2.6 mg Cl2/l for free chlorine and 5.6 mg NH2Cl/l for combined chlorine. The decay rate constant and the concentration of disinfectant-consuming substances when water was pumped through pipes depended on whether or not chlorination was continuous. Combined chlorine especially could clean the pipes of disinfectant-consuming substances. The inactivation rate constant, λ, was estimated at 3.06×10⁴ l g⁻¹ h⁻¹. Based on the inactivation rate constant, and a growth rate constant determined in a previous study, the critical concentration of free chlorine was found to be 0.08 mg Cl2/l. The critical concentration is a value below which growth rates dominate over inactivation.
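The reported second-order rate constants imply pseudo-first-order half-lives once multiplied by the mean concentration of disinfectant-consuming substances; the sketch below works these out, treating that concentration as constant, which is a simplifying assumption.

```python
from math import log

# Pseudo-first-order half-lives implied by the reported second-order rate
# constants k (l g^-1 h^-1) and mean concentrations S of disinfectant-
# consuming substances (g/l). With S held constant, C(t) = C0*exp(-k*S*t)
# and t_half = ln(2) / (k * S).

def half_life(k, s):
    return log(2) / (k * s)

t_free = half_life(114.0, 2.6e-3)   # free chlorine: roughly 2.3 h
t_comb = half_life(1.84, 5.6e-3)    # combined chlorine: roughly 67 h
```

The two-orders-of-magnitude gap in half-life is the quantitative reason monochloramine persists far longer in the distribution pipes than free chlorine.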
The pharmacokinetics of letrozole: association with key body mass metrics.
Jin, Seok-Joon; Jung, Jin Ah; Cho, Sang-Heon; Kim, Un-Jib; Choe, Sangmin; Ghim, Jong-Lyul; Noh, Yook-Hwan; Park, Hyun-Jung; Kim, Jung-Chul; Jung, Jin-A; Lim, Hyeong-Seok; Bae, Kyun-Seop
2012-08-01
The aim was to characterize the pharmacokinetics (PK) of letrozole by noncompartmental and mixed-effects modeling analyses and to explore the effect of body composition on the PK. The PK data of 52 normal healthy male subjects with intensive PK sampling from two separate studies were included in this analysis. Subjects were given a single oral administration of 2.5 mg letrozole (Femara®), an antiestrogenic aromatase inhibitor used to treat breast cancer. Letrozole concentrations were measured using validated high-performance liquid chromatography with tandem mass spectrometry. PK analysis was performed using NONMEM® 7.2 with the first-order conditional estimation with interaction method. The association of body composition (body mass index, soft lean mass, fat-free mass, body fat mass), CYP2A6 genotype (*1/*1, *1/*4), and CYP3A5 genotype (*1/*1, *1/*3, *3/*3) with the PK of letrozole was tested. A two-compartment model with mixed first- and zero-order absorption and first-order elimination best described the letrozole concentration-time profile. Body weight and body fat mass were significant covariates for the central volume of distribution and the peripheral volume of distribution (Vp), respectively. In another model built using more readily available body composition measures, body mass index was also a significant covariate of Vp. However, no significant association was shown between CYP2A6 and CYP3A5 genetic polymorphisms and the PK of letrozole in this study. Our results indicate that body weight, body fat mass, and body mass index are associated with the volume of distribution of letrozole. This study provides an initial step toward the development of individualized letrozole therapy based on body composition.
Perinatal mortality in second- vs firstborn twins: a matter of birth size or birth order?
Luo, Zhong-Cheng; Ouyang, Fengxiu; Zhang, Jun; Klebanoff, Mark
2014-08-01
Second-born twins on average weigh less than first-born twins and have been reported to be at an elevated risk of perinatal mortality. Whether the risk differences depend on their relative birth size is unknown. The present study aimed to evaluate the association of birth order with perinatal mortality by birth order-specific weight difference in twin pregnancies. In a retrospective cohort study of 258,800 twin pregnancies without reported congenital anomalies using the US matched multiple birth data 1995-2000 (the largest available multiple-birth dataset), conditional logistic regression was applied to estimate the odds ratio (OR) of perinatal death adjusted for fetus-specific characteristics (sex, presentation, and birthweight for gestational age). Comparing second vs first twins, the risks of perinatal death were similar if they had similar birthweights (within 5%) and were increasingly higher if second twins weighed progressively less (adjusted ORs were 1.37, 1.90, and 3.94 if they weighed 5.0-14.9%, 15.0-24.9%, and ≥25.0% less, respectively), and progressively lower if they weighed increasingly more (adjusted ORs were 0.67, 0.63, and 0.36 if they weighed 5.0-14.9%, 15.0-24.9%, and ≥25.0% more, respectively) (all P < .001). The perinatal mortality rates were not significantly different in cesarean deliveries or preterm (<37 weeks) vaginal deliveries but were significantly higher in second twins in term vaginal deliveries (3.1 vs 1.8 per 1000; adjusted OR, 2.15; P < .001). Perinatal mortality risk differences in second vs first twins depend on their relative birth size. Vaginal delivery at term is associated with a substantially greater risk of perinatal mortality in second twins. Copyright © 2014 Mosby, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Florian, Ehmele; Michael, Kunz
2016-04-01
Several major flood events have occurred in Germany in the past 15-20 years, especially in the eastern parts along the rivers Elbe and Danube. Examples include the major floods of 2002 and 2013, with an estimated loss of about 2 billion Euros each. The last major flood events in the State of Baden-Württemberg in southwest Germany occurred in the years 1978 and 1993/1994 along the rivers Rhine and Neckar, with an estimated total loss of about 150 million Euros (converted) each. Flood hazard originates from a combination of different meteorological, hydrological and hydraulic processes. Currently there is no defined methodology available for evaluating and quantifying the flood hazard and related risk for larger areas or whole river catchments instead of single gauges. In order to estimate the probable maximum loss for higher return periods (e.g. 200 years, PML200), a stochastic model approach is designed, since observational data are limited in time and space. In our approach, precipitation is linearly composed of three elements: background precipitation, orographically-induced precipitation, and a convectively-driven part. We use the linear theory of orographic precipitation formation for the stochastic precipitation model (SPM), which is based on fundamental statistics of relevant atmospheric variables. For an adequate number of historic flood events, the corresponding atmospheric conditions and parameters are determined in order to calculate a probability density function (pdf) for each variable. This method covers all theoretically possible scenarios, including those that may not yet have happened. This work is part of the FLORIS-SV (FLOod RISk Sparkassen Versicherung) project and establishes the first step of a complete modelling chain of the flood risk. On the basis of the generated stochastic precipitation event set, hydrological and hydraulic simulations will be performed to estimate discharge and water level. 
The resulting stochastic flood event set will be used to quantify the flood risk and to estimate probable maximum loss (e.g. PML200) for a given property (buildings, industry) portfolio.
Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W
2007-07-01
Markov chains are a natural and well understood tool for describing one-dimensional patterns in time or space. We show how to infer kth order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
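A minimal version of the evidence-based order selection can be sketched with Dirichlet priors on each context's transition distribution: integrating out the parameters gives a closed-form marginal likelihood per context, and the model order with the highest total evidence wins. This is only the core idea, with a uniform Dirichlet(alpha = 1) prior assumed; it is not the paper's full type-theory treatment.

```python
from collections import Counter
from math import lgamma

# Bayesian evidence for a k-th order Markov chain over a finite alphabet.
# Each length-k context gets an independent Dirichlet(alpha) prior on its
# next-symbol distribution; the marginal likelihood per context is a ratio
# of gamma functions in the transition counts.

def log_evidence(seq, k, alphabet, alpha=1.0):
    A = len(alphabet)
    counts = Counter()
    ctx_totals = Counter()
    for i in range(k, len(seq)):
        ctx = tuple(seq[i - k:i])
        counts[(ctx, seq[i])] += 1
        ctx_totals[ctx] += 1
    le = 0.0
    for ctx, n in ctx_totals.items():
        le += lgamma(A * alpha) - lgamma(A * alpha + n)
        for a in alphabet:
            le += lgamma(alpha + counts[(ctx, a)]) - lgamma(alpha)
    return le

# A strictly alternating sequence: a first-order model should have far
# higher evidence than a memoryless (order-0) model.
seq = [0, 1] * 200
ev0 = log_evidence(seq, 0, [0, 1])
ev1 = log_evidence(seq, 1, [0, 1])
```

Because the evidence automatically penalizes the extra parameters of higher orders, comparing `log_evidence` across k is a self-contained model-order selection rule.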
Chemical transformation of 3-bromo-2,2-bis(bromomethyl)-propanol under basic conditions.
Ezra, Shai; Feinstein, Shimon; Bilkis, Itzhak; Adar, Eilon; Ganor, Jiwchar
2005-01-15
The mechanism of the spontaneous decomposition of 3-bromo-2,2-bis(bromomethyl)propanol (TBNPA) and the kinetics of the reaction of the parent compound and two subsequent products were determined in aqueous solution at temperatures from 30 to 70 °C and pH from 7.0 to 9.5. TBNPA is decomposed by a sequence of reactions that form 3,3-bis(bromomethyl)oxetane (BBMO), 3-bromomethyl-3-hydroxymethyloxetane (BMHMO), and 2,6-dioxaspiro[3.3]heptane (DOH), releasing one bromide ion at each stage. The pseudo-first-order rate constant of the decomposition of TBNPA increases linearly with the pH. The apparent activation energy of this transformation (98±2 kJ/mol) was calculated from the change of the effective second-order rate constant with temperature. The pseudo-activation energies of BBMO and BMHMO were estimated to be 109 and 151 kJ/mol, respectively. Good agreement was found between the rate coefficients derived from changes in the organic molecule concentrations and those determined from the changes in the Br⁻ concentrations. TBNPA is the most abundant semivolatile organic pollutant in the aquitard studied, and together with its byproducts it poses an environmental hazard. The TBNPA half-life is estimated to be about 100 years. This implies that high concentrations of TBNPA will persist in the aquifer long after the elimination of all its sources.
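The activation energy here comes from the Arrhenius dependence of the rate constant on temperature, Ea = R ln(k2/k1) / (1/T1 - 1/T2). The rate constants in the sketch below are made up so that the result lands near the reported 98 kJ/mol; they are not the paper's measured data.

```python
from math import log, exp

R = 8.314  # gas constant, J mol^-1 K^-1

# Arrhenius estimate of activation energy from rate constants measured at
# two temperatures (in deg C). The k values used below are illustrative.

def activation_energy(k1, t1_c, k2, t2_c):
    T1, T2 = t1_c + 273.15, t2_c + 273.15
    return R * log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

# Rate-constant ratio implied by Ea = 98 kJ/mol between 30 and 70 deg C:
T1, T2 = 303.15, 343.15
ratio = exp(98e3 / R * (1.0 / T1 - 1.0 / T2))   # ~93-fold faster at 70 deg C
ea = activation_energy(1.0e-4, 30.0, 1.0e-4 * ratio, 70.0)  # recovers 98 kJ/mol
```

The roughly 93-fold speed-up over a 40 °C span is why a reaction with a century-scale half-life at aquifer temperature can still be measured conveniently in the 30-70 °C laboratory experiments.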
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; Zhang, Jinming
2008-01-01
The method of maximum likelihood is typically applied to item response theory (IRT) models when the ability parameter is estimated while conditioning on the true item parameters. In practice, the item parameters are unknown and need to be estimated first from a calibration sample. Lewis (1985) and Zhang and Lu (2007) proposed the expected response…
ERIC Educational Resources Information Center
Ebersbach, Mirjam; Luwel, Koen; Verschaffel, Lieven
2015-01-01
Children's estimation skills on a bounded and unbounded number line task were assessed in the light of their familiarity with numbers. Kindergartners, first graders, and second graders (N = 120) estimated the position of numbers on a 1--100 number line, marked with either two reference points (i.e., 1 and 10: unbounded condition) or three…
Methods for estimating water consumption for thermoelectric power plants in the United States
Diehl, Timothy H.; Harris, Melissa; Murphy, Jennifer C.; Hutson, Susan S.; Ladd, David E.
2013-01-01
Heat budgets were constructed for the first four generation-type categories; data at solar thermal plants were insufficient for heat budgets. These heat budgets yielded estimates of the amount of heat transferred to the condenser. The ratio of evaporation to the heat discharged through the condenser was estimated using existing heat balance models that are sensitive to environmental data; this feature allows estimation of consumption under different climatic conditions. These two estimates were multiplied to yield an estimate of consumption at each power plant.
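The two-step estimate described above, a heat budget giving the heat rejected through the condenser, then an evaporation ratio applied to that heat, can be sketched roughly as follows. All numbers and the fixed latent heat are illustrative assumptions, not values from the report.

```python
def condenser_heat(fuel_heat_mj, electric_output_mj, other_losses_mj):
    """Heat-budget residual: heat rejected through the condenser (MJ)."""
    return fuel_heat_mj - electric_output_mj - other_losses_mj

def water_consumption_kg(cond_heat_mj, evap_ratio):
    """Evaporative consumption: the fraction of condenser heat removed by
    evaporation, divided by the latent heat of vaporization (~2.26 MJ/kg)."""
    latent_mj_per_kg = 2.26
    return cond_heat_mj * evap_ratio / latent_mj_per_kg

# Hypothetical plant: 100 MJ fuel heat, 35 MJ electricity, 5 MJ stack losses.
q_cond = condenser_heat(100.0, 35.0, 5.0)            # 60 MJ to the condenser
consumed = water_consumption_kg(q_cond, evap_ratio=0.8)
```

In the actual method the evaporation ratio is not a constant but comes from heat-balance models sensitive to environmental data, which is what allows consumption to be estimated under different climatic conditions.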
Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B
1998-01-01
Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.
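A minimal sketch of modal-decomposition reduction for a generic linear system x' = Ax, keeping only the k least-damped (slowest) modes; this is illustrative only and not the paper's bioheat transfer model.

```python
import numpy as np

def modal_reduce(A, k):
    """Project x' = A x onto its k slowest eigenmodes (modal decomposition)."""
    w, V = np.linalg.eig(A)
    idx = np.argsort(w.real)[::-1][:k]  # least-damped modes first
    Vk = V[:, idx]                      # retained mode shapes
    return np.linalg.pinv(Vk) @ A @ Vk  # reduced-order system matrix

# Two decoupled decay modes: the reduced model keeps the slow one (-1)
# and discards the fast one (-10).
A = np.array([[-1.0, 0.0], [0.0, -10.0]])
A_red = modal_reduce(A, 1)
```

Unlike balanced realization, this truncation needs no knowledge of sensor or actuator placement, which is the robustness advantage noted in the abstract.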
Junghöfer, Markus; Rehbein, Maimu Alissa; Maitzen, Julius; Schindler, Sebastian; Kissler, Johanna
2017-01-01
Humans have a remarkable capacity for rapid affective learning. For instance, using first-order US such as odors or electric shocks, magnetoencephalography (MEG) studies of multi-CS conditioning demonstrate enhanced early (<150 ms) and mid-latency (150–300 ms) visual evoked responses to affectively conditioned faces, together with changes in stimulus evaluation. However, particularly in social contexts, human affective learning is often mediated by language, a class of complex higher-order US. To elucidate mechanisms of this type of learning, we investigate how face processing changes following verbal evaluative multi-CS conditioning. Sixty neutral expression male faces were paired with phrases about aversive crimes (30) or neutral occupations (30). Post conditioning, aversively associated faces evoked stronger magnetic fields in a mid-latency interval between 220 and 320 ms, localized primarily in left visual cortex. Aversively paired faces were also rated as more arousing and more unpleasant, evaluative changes occurring both with and without contingency awareness. However, no early MEG effects were found, implying that verbal evaluative conditioning may require conceptual processing and does not engage rapid, possibly sub-cortical, pathways. Results demonstrate the efficacy of verbal evaluative multi-CS conditioning and indicate both common and distinct neural mechanisms of first- and higher-order multi-CS conditioning, thereby informing theories of associative learning. PMID:28008078
Correlations in polymer blends: Simulations, perturbation theory, and coarse-grained theory
NASA Astrophysics Data System (ADS)
Chung, Jun Kyung
A thermodynamic perturbation theory of symmetric polymer blends is developed that properly accounts for the correlation in the spatial arrangement of monomers. By expanding the free energy of mixing in powers of a small parameter alpha which controls the incompatibility of the two monomer species, we show that the perturbation theory has the form of the original Flory-Huggins theory, to first order in alpha. However, the lattice coordination number in the original theory is replaced by an effective coordination number. A random walk model for the effective coordination number is found to describe Monte Carlo simulation data very well. We also propose a way to estimate the Flory-Huggins chi parameter by extrapolating the perturbation theory to the limit of a hypothetical system of infinitely long chains. The first-order perturbation theory yields an accurate estimate of chi to first order in alpha. Going to second order, however, turns out to be more involved, and an unambiguous determination of the coefficient of the alpha^2 term is not possible at the moment. Lastly, we test the predictions of a renormalized one-loop theory of fluctuations using two coarse-grained models of symmetric polymer blends at the critical composition. It is found that the theory accurately describes the correlation effect for relatively small values of chiN. In addition, the universality assumption of coarse-grained models is examined and we find results that are supportive of it.
Stability of power systems coupled with market dynamics
NASA Astrophysics Data System (ADS)
Meng, Jianping
The Ph.D. thesis presented here spans two relatively independent topics. The first part, Chapter 2, is self-contained and is dedicated to studies of new algorithms for power system state estimation. The second part, encompassing the remaining chapters, is dedicated to stability analysis of power systems coupled with market dynamics. The first part of this thesis presents improved Newton's methods employing efficient vectorized calculations of higher order derivatives in power system state estimation problems. The improved algorithms are proposed based on an exact Newton's method using the second order terms. By efficiently computing an exact gain matrix, combined with a special optimal multiplier method, the new algorithms show more reliable convergence compared with the existing methods of normal equations, orthogonal decomposition, and Hachtel's sparse tableau. Our methods are able to handle ill-conditioned problems, yet show minimal penalty in computational cost for well-conditioned cases. These claims are illustrated through the standard IEEE 118 and 300 bus test examples. The second part of the thesis focuses on stability analysis of market/power systems. The work presented is motivated by an emerging problem. As the frequency of market-based dispatch updates increases, there will inevitably be interaction between the dynamics of the markets determining the generator dispatch commands and the physical response of generators and network interconnections, necessitating the development of stability analysis for such coupled systems. We begin with numeric tests using different market models, with detailed machine/exciter/turbine/governor dynamics, in the New England 39 bus test system. A progression of modeling refinements is introduced, including such non-ideal effects as time delays. Electricity market parameter identification algorithms are also studied based on real-time data from the PJM electricity market. 
Finally our power market model is augmented by optimal power flow constraints, allowing study of the so-called congestion problem. These studies show that understanding of potential modes of instability in such coupled systems is of crucial importance both in designing suitable rules for power markets, and in designing physical generator controls that are complementary to market-based dispatch.
NASA Astrophysics Data System (ADS)
Abdollahi, Soheila; Bahmanabadi, Mahmud; Pezeshkian, Yousef; Mortazavi Moghaddam, Saba
2016-03-01
The first phase of the Alborz Observatory Array (Alborz-I) consists of 20 plastic scintillation detectors, each with a surface area of 0.25 m2, spread over an area of 40 × 40 m2 and built to study Extensive Air Showers around the knee at the Sharif University of Technology campus. The first stage of the project, including construction and operation of a prototype system, has now been completed, and the electronics that will be used in the array instrument have been tested under field conditions. In order to achieve a realistic estimate of the array performance, a large number of simulated CORSIKA showers have been used. In the present work, theoretical results obtained in the study of different array layouts and trigger conditions are described. Using Monte Carlo simulations of showers, the rate of detected events per day and the trigger probability functions, i.e., the probability for an extensive air shower to trigger a ground-based array as a function of the shower core distance to the center of the array, are presented for energies above 1 TeV and zenith angles up to 60°. Moreover, the angular resolution of the Alborz-I array is obtained.
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated by using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial, and smoothing spline. The LOOCV pricing errors show that interpolation using the fourth-order polynomial provides the best fit to option prices, with the lowest error.
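The LOOCV pricing-error criterion can be sketched as follows: each option price is predicted from a polynomial fitted to all the other strikes, and the root-mean-square of the held-out errors ranks the interpolants. This is an illustrative reconstruction of the selection idea, not the authors' exact implementation.

```python
import numpy as np

def loocv_pricing_error(strikes, prices, degree):
    """Root-mean-square leave-one-out pricing error for a polynomial fit
    of option prices across strikes."""
    strikes = np.asarray(strikes, dtype=float)
    prices = np.asarray(prices, dtype=float)
    n = len(strikes)
    sq_errs = []
    for i in range(n):
        mask = np.arange(n) != i                       # hold out strike i
        coef = np.polyfit(strikes[mask], prices[mask], degree)
        sq_errs.append((np.polyval(coef, strikes[i]) - prices[i]) ** 2)
    return float(np.sqrt(np.mean(sq_errs)))
```

On a smooth, convex price curve the degree-4 fit will typically produce a lower LOOCV error than the degree-2 fit, mirroring the ranking reported above.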
M.D. O' Connor; C.H. Perry; W. McDavitt
2007-01-01
According to the State of California, most of the North Coast's watersheds are impaired by sediment. This study quantified sediment yield from watersheds under different management conditions. Temporary sedimentation basins were installed in 30 randomly chosen first-order streams in two watersheds in Humboldt County, California. Most treatment sites were clearcuts, but two...
Genetic parameters of legendre polynomials for first parity lactation curves.
Pool, M H; Janss, L L; Meuwissen, T H
2000-11-01
Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for the permanent environment. Test-day records from cows registered between 1990 and 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment parts. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for the permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
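The Legendre basis used in such random regression test-day models can be evaluated with the standard three-term recurrence, mapping days in milk (DIM) onto [-1, 1]; a small sketch (the DIM range below is an assumption, not taken from the paper):

```python
def legendre_basis(t, t_min, t_max, order):
    """Legendre polynomials P_0..P_order evaluated at days in milk t,
    with [t_min, t_max] mapped linearly onto [-1, 1]."""
    x = 2.0 * (t - t_min) / (t_max - t_min) - 1.0
    p = [1.0, x]
    for n in range(1, order):
        # Bonnet recurrence: (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return p[:order + 1]

# Given a fitted coefficient covariance matrix K, the (co)variance between
# two days t and s on the lactation curve is phi(t)' K phi(s), where phi
# is the basis vector returned above.
```

This is why the polynomial order directly controls how flexibly the variance curve over DIM can be shaped.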
Nonparametric estimation of plant density by the distance method
Patil, S.A.; Burnham, K.P.; Kovner, J.L.
1979-01-01
A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
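For intuition, the classical point-to-nearest-plant estimator for a completely random (Poisson) pattern is shown below. The paper's nonparametric order-statistics estimator relaxes exactly this Poisson assumption, so treat this as the parametric baseline, not the proposed method.

```python
import math

def density_estimate(sq_dists):
    """Plant density from squared point-to-nearest-plant distances under
    complete spatial randomness: lambda_hat = (n - 1) / (pi * sum(r_i^2)).
    Under a Poisson pattern of intensity lambda, r^2 is exponentially
    distributed with mean 1/(pi*lambda), which makes this estimator unbiased."""
    n = len(sq_dists)
    return (n - 1) / (math.pi * sum(sq_dists))
```

For aggregated or regular populations this parametric form is biased, which motivates the nonparametric construction discussed in the abstract.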
Linear theory for filtering nonlinear multiscale systems with model error
Berry, Tyrus; Harlim, John
2014-01-01
In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. 
In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure, simultaneously produce accurate filtering and equilibrium statistical prediction. In contrast, an offline estimation technique based on a linear regression, which fits the parameters to a training dataset without using the filter, yields filter estimates which are worse than the observations or even divergent when the slow variables are not fully observed. This finding does not imply that all offline methods are inherently inferior to the online method for nonlinear estimation problems; it only suggests that an ideal estimation technique should estimate all parameters simultaneously, whether online or offline. PMID:25002829
Bayesian deconvolution of fMRI data using bilinear dynamical systems.
Makni, Salima; Beckmann, Christian; Smith, Steve; Woolrich, Mark
2008-10-01
In Penny et al. [Penny, W., Ghahramani, Z., Friston, K.J. 2005. Bilinear dynamical systems. Philos. Trans. R. Soc. Lond. B Biol. Sci. 360(1457) 983-993], a particular case of the Linear Dynamical Systems (LDSs) was used to model the dynamic behavior of the BOLD response in functional MRI. This state-space model, called bilinear dynamical system (BDS), is used to deconvolve the fMRI time series in order to estimate the neuronal response induced by the different stimuli of the experimental paradigm. The BDS model parameters are estimated using an expectation-maximization (EM) algorithm proposed by Ghahramani and Hinton [Ghahramani, Z., Hinton, G.E. 1996. Parameter Estimation for Linear Dynamical Systems. Technical Report, Department of Computer Science, University of Toronto]. In this paper we introduce modifications to the BDS model in order to explicitly model the spatial variations of the haemodynamic response function (HRF) in the brain using a non-parametric approach. While in Penny et al. [Penny, W., Ghahramani, Z., Friston, K.J. 2005. Bilinear dynamical systems. Philos. Trans. R. Soc. Lond. B Biol. Sci. 360(1457) 983-993] the relationship between neuronal activation and fMRI signals is formulated as a first-order convolution with a kernel expansion using basis functions (typically two or three), in this paper, we argue in favor of a spatially adaptive GLM in which a local non-parametric estimation of the HRF is performed. Furthermore, in order to overcome the overfitting problem typically associated with simple EM estimates, we propose a full Variational Bayes (VB) solution to infer the BDS model parameters. We demonstrate the usefulness of our model which is able to estimate both the neuronal activity and the haemodynamic response function in every voxel of the brain. We first examine the behavior of this approach when applied to simulated data with different temporal and noise features. 
As an example we will show how this method can be used to improve interpretability of estimates from an independent component analysis (ICA) analysis of fMRI data. We finally demonstrate its use on real fMRI data in one slice of the brain.
Economic valuation of the Emas waterfall, Mogi-Guaçu River, SP, Brazil.
Peixer, Janice; Giacomini, Henrique C; Petrere, Miguel
2011-12-01
The Emas waterfall in the Mogi-Guaçu River is regionally recognized as an important fishing spot and tourist destination. The first reports of professional and sport fishing there date back to the 1930s, which is also when tourism began. The present paper provides an environmental valuation of this place and an assessment of the differences among the major groups of people using the area. During 2006 we interviewed 33 professional fishers, 107 sport fishers, 45 tourists and 103 excursionists in order to estimate the Willingness to Pay (WTP) of each category and to analyze the influence of socioeconomic factors by means of logistic regressions and ANCOVAs. The WTP of professional fishers was significantly influenced by age and education, and the WTP of sport fishers was influenced by family income. The variables that influenced the tourists' and excursionists' WTP were sex and education. The total annual aggregated value to maintain the waterfall in its current condition was estimated at US$11,432,128, with US$55,424,283 needed to restore it.
Variability estimation of urban wastewater biodegradable fractions by respirometry.
Lagarde, Fabienne; Tusseau-Vuillemin, Marie-Hélène; Lessard, Paul; Héduit, Alain; Dutrop, François; Mouchel, Jean-Marie
2005-11-01
This paper presents a methodology for assessing the variability of biodegradable chemical oxygen demand (COD) fractions in urban wastewaters. Thirteen raw wastewater samples from combined and separate sewers feeding the same plant were characterised, and two optimisation procedures were applied in order to evaluate the variability in biodegradable fractions and related kinetic parameters. Through an overall optimisation on all the samples, a unique kinetic parameter set was obtained with a three-substrate model including an adsorption stage. This method required powerful numerical treatment, but reduced the identifiability problem relative to the usual sample-to-sample optimisation. The results showed that the fractionation of samples collected in the combined sewer was much more variable (standard deviation of 70% of the mean values) than that of the separate sewer samples, and that the slowly biodegradable COD fraction was the largest fraction (45% of the total COD on average). Because these samples were collected under various rain conditions, the standard deviations obtained here for the combined sewer biodegradable fractions can be used as a first estimate of the variability of this type of sewer system.
Wen, Yu-Guan; Shang, De-Wei; Xie, He-Zhi; Wang, Xi-Pei; Ni, Xiao-Jia; Zhang, Ming; Lu, Wei; Qiu, Chang; Liu, Xia; Li, Fang-Fang; Li, Xuan; Luo, Fu-Tian
2013-03-01
The aim of the study was to better understand the population pharmacokinetic (PK) characteristics of blonanserin in healthy Chinese subjects. Data from two studies with 50 subjects were analyzed to investigate the population PK characteristics of blonanserin at single doses (4, 8, and 12 mg) under fasting, at multiple doses (4 mg bid or 8 mg qd for 7 days), and under fed conditions (single dose, 8 mg). Blonanserin plasma concentrations were determined using high-performance liquid chromatography tandem mass spectrometry (LC-MS/MS). A nonlinear mixed-effects model was developed to describe the blonanserin concentration-time profiles. A two-compartment model with first-order absorption was built to describe the time course of blonanserin. The population-predicted apparent clearance (CL/F), apparent central volume of distribution (V(1)/F), and first-order absorption rate constant (Ka) of blonanserin under fasting were 1230 L/h, 9500 L, and 3.02 h(-1), respectively. Food intake decreased the Ka of blonanserin to 0.78 h(-1). The relative bioavailability between fasting and fed states estimated by the final model was 55%. No clinically significant safety issues were identified. This is the first study assessing the PK profile of blonanserin with population PK methods. The results can be used for simulation in further clinical trials and to optimize individual dosage regimens using a Bayesian methodology in patients. Copyright © 2013 John Wiley & Sons, Ltd.
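A one-compartment simplification of the reported model (the paper fitted a two-compartment model, so this is only an illustrative reduction) gives the familiar first-order-absorption profile, using the fasting population estimates CL/F = 1230 L/h, V1/F = 9500 L and Ka = 3.02 h^-1:

```python
import math

def conc(t_h, dose_mg, cl_f=1230.0, v_f=9500.0, ka=3.02):
    """One-compartment, first-order absorption concentration (mg/L):
    C(t) = D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), with ke = CL/V.
    Defaults are the fasting population estimates from the abstract."""
    ke = cl_f / v_f
    return (dose_mg * ka / (v_f * (ka - ke))
            * (math.exp(-ke * t_h) - math.exp(-ka * t_h)))

# Food effect from the abstract: Ka drops from 3.02 to 0.78 h^-1, which
# slows absorption and lowers early concentrations for the same 8 mg dose.
fasting_1h = conc(1.0, 8.0)
fed_1h = conc(1.0, 8.0, ka=0.78)
```

Profiles like this are the building block for the Bayesian dose-individualization mentioned at the end of the abstract.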
Second-order infinitesimal bendings of surfaces of revolution with flattening at the poles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabitov, I Kh
We study infinitesimal bendings of surfaces of revolution with flattening at the poles. We begin by considering the minimal possible smoothness class C^1, both for surfaces and for deformation fields. Conditions are formulated for a given harmonic of a first-order infinitesimal bending to be extendable to a second-order infinitesimal bending. We finish by stating a criterion for second-order nonrigidity of closed surfaces of revolution in the analytic class. We also give the first concrete example of such a nonrigid surface. Bibliography: 15 entries.
Proofreading of DNA polymerase: a new kinetic model with higher-order terminal effects
NASA Astrophysics Data System (ADS)
Song, Yong-Shun; Shu, Yao-Gen; Zhou, Xin; Ou-Yang, Zhong-Can; Li, Ming
2017-01-01
The fidelity of DNA replication by DNA polymerase (DNAP) has long been an important issue in biology. While numerous experiments have revealed details of the molecular structure and working mechanism of DNAP, which consists of both a polymerase site and an exonuclease (proofreading) site, there have been only a few theoretical studies on the fidelity issue. The first model which explicitly considered both sites was proposed in the 1970s, and its basic idea was widely accepted by later models. However, none of these models systematically investigated the dominant factor in DNAP fidelity, i.e. the higher-order terminal effects through which the polymerization pathway and the proofreading pathway coordinate to achieve high fidelity. In this paper, we propose a new and comprehensive kinetic model of DNAP based on some recent experimental observations, which includes previous models as special cases. We present a rigorous and unified treatment of the corresponding steady-state kinetic equations of any-order terminal effects, and derive analytical expressions for fidelity in terms of kinetic parameters under bio-relevant conditions. These expressions offer new insights on how the higher-order terminal effects contribute substantially to the fidelity in an order-by-order way, and also show that the polymerization-and-proofreading mechanism is dominated by only very few key parameters. We then apply these results to calculate the fidelity of some real DNAPs, which is in good agreement with previous intuitive estimates given by experimentalists.
NASA Astrophysics Data System (ADS)
Prat, Olivier; Nelson, Brian; Stevens, Scott; Seo, Dong-Jun; Kim, Beomgeun
2015-04-01
The processing of radar-only precipitation via the reanalysis from the National Mosaic and Multi-Sensor Quantitative Precipitation Estimation (NMQ/Q2) system, based on the WSR-88D Next-generation Radar (NEXRAD) network over the Continental United States (CONUS), is complete for the period from 2001 to 2012. This important milestone constitutes a unique opportunity to study precipitation processes at 1-km spatial and 5-min temporal resolution. However, in order to be suitable for hydrological, meteorological and climatological applications, the radar-only product needs to be bias-adjusted and merged with in-situ rain gauge information. Several in-situ datasets are available to assess the biases of the radar-only product and to adjust for those biases to provide a multi-sensor QPE. The rain gauge networks that are used, such as the Global Historical Climatology Network-Daily (GHCN-D), the Hydrometeorological Automated Data System (HADS), the Automated Surface Observing Systems (ASOS), and the Climate Reference Network (CRN), have different spatial densities and temporal resolutions. The challenges related to incorporating non-homogeneous networks over a vast area and for a long-term record are enormous. Among them are the difficulties of incorporating surface measurements of differing resolution and quality to adjust gridded estimates of precipitation, as well as the choice of adjustment technique. The objective of this work is threefold. First, we investigate how the different in-situ networks can impact the precipitation estimates as a function of spatial density, sensor type, and temporal resolution. Second, we assess conditional and unconditional biases of the radar-only QPE at various time scales (daily, hourly, 5-min) using in-situ precipitation observations. 
Finally, after assessing the bias and applying reduction or elimination techniques, we are using a unique in-situ dataset merging the different RG networks (CRN, ASOS, HADS, GHCN-D) to adjust the radar-only QPE product via an Inverse Distance Weighting (IDW) approach. In addition, we also investigate alternate adjustment techniques such as the kriging method and its variants (Simple Kriging: SK; Ordinary Kriging: OK; Conditional Bias-Penalized Kriging: CBPK). From this approach, we also hope to generate estimates of uncertainty for the gridded bias-adjusted QPE. Further comparison with a suite of lower resolution QPEs derived from ground based radar measurements (Stage IV) and satellite products (TMPA, CMORPH, PERSIANN) is also provided in order to give a detailed picture of the improvements and remaining challenges.
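The gauge-based adjustment step can be illustrated with a bare-bones IDW interpolator of gauge/radar bias factors. This is a sketch only; the actual NMQ/Q2 reanalysis adjustment and the kriging variants (SK, OK, CBPK) mentioned above are considerably more elaborate, and the gauge values below are hypothetical.

```python
def idw(x, y, stations, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from (xs, ys, value) triples."""
    num = den = 0.0
    for xs, ys, v in stations:
        d2 = (x - xs) ** 2 + (y - ys) ** 2
        if d2 == 0.0:
            return v  # collocated with a gauge: reproduce its value exactly
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

# Multiplicative bias factors (gauge/radar ratio) at three hypothetical
# gauges, interpolated to a grid cell and applied to the radar-only value.
biases = [(0.0, 0.0, 1.2), (10.0, 0.0, 0.8), (0.0, 10.0, 1.0)]
adjusted = 5.0 * idw(3.0, 3.0, biases)  # 5.0 mm/h radar-only estimate
```

Kriging replaces these purely distance-based weights with weights derived from a fitted spatial covariance model, which is also what yields the uncertainty estimates sought for the gridded product.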
Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation
NASA Astrophysics Data System (ADS)
Li, C.
2012-07-01
POS, integrating GPS and INS (Inertial Navigation Systems), allows rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, INS not only has systematic error, but is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades, where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of the two vanishing points (VX, VY), and a way to set initial weights for the adjustment solution of single-image vanishing points is presented. The vanishing points are solved and their error distributions estimated by an iterative method with variable weights, using the cofactor matrix and error-ellipse theory. Thirdly, under the condition of known error ellipses of the two vanishing points (VX, VY), and on the basis of the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. The Monte Carlo methods utilized for this random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
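For three orthogonal directions, the principal point is the orthocenter of the vanishing-point triangle; under that assumption, VZ can be recovered from VX, VY and the principal point, and its error distribution simulated by Monte Carlo perturbation. The sketch below illustrates this geometric step with isotropic Gaussian noise standing in for the error ellipses; it is not the paper's full adjustment.

```python
import random

def third_vp(vx, vy, pp):
    """VZ from VX, VY and the principal point pp (orthocenter property):
    VZ lies on the line through vy perpendicular to (pp - vx), and on the
    line through vx perpendicular to (pp - vy)."""
    d1 = (-(pp[1] - vx[1]), pp[0] - vx[0])   # direction perpendicular to pp-vx
    d2 = (-(pp[1] - vy[1]), pp[0] - vy[0])   # direction perpendicular to pp-vy
    det = d1[0] * (-d2[1]) + d2[0] * d1[1]
    rx, ry = vx[0] - vy[0], vx[1] - vy[1]
    t = (rx * (-d2[1]) + d2[0] * ry) / det   # solve vy + t*d1 = vx + s*d2
    return (vy[0] + t * d1[0], vy[1] + t * d1[1])

def vz_samples(vx, vy, pp, sigma, n=1000, seed=1):
    """Monte Carlo: jitter VX and VY with isotropic Gaussian noise (a stand-in
    for their error ellipses) and collect the resulting VZ samples."""
    rng = random.Random(seed)
    def jitter(p):
        return (p[0] + rng.gauss(0.0, sigma), p[1] + rng.gauss(0.0, sigma))
    return [third_vp(jitter(vx), jitter(vy), pp) for _ in range(n)]
```

The spread of the returned samples approximates the error distribution of VZ; fitting an ellipse to them plays the role of the error-ellipse evaluation described above.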
Information theoretic analysis of canny edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2011-06-01
In general edge detection evaluation, edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission, and display processes that impact the quality of the acquired image and thus the resulting edge image. We propose a new information-theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. An edge detection algorithm is considered to achieve high performance only if the information rate from the scene to the edge approaches the maximum possible. Thus, by holding the initial conditions of the visual communication system constant, different edge detection algorithms can be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift-variant, we perform the estimation for a set of different system environment conditions using simulations. In this paper we first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we assess the Canny operator using information-theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.
Multitaper spectral analysis of atmospheric radar signals
NASA Astrophysics Data System (ADS)
Anandan, V.; Pan, C.; Rajalakshmi, T.; Ramachandra Reddy, G.
2004-11-01
Multitaper spectral analysis using sinusoidal tapers has been carried out on the backscattered signals received from the troposphere and lower stratosphere by the Gadanki Mesosphere-Stratosphere-Troposphere (MST) radar under various signal-to-noise ratio conditions. The comparison is made between sinusoidal tapers of order three and the single Hanning and rectangular tapers, to understand the relative merits of processing under each scheme. Power spectra plots show that echoes are better identified with the multitaper estimate, especially in regions of weak signal-to-noise ratio. Further analysis is carried out to obtain the three lowest-order moments from the three estimation techniques. The results show that multitaper analysis gives a better signal-to-noise ratio, or higher detectability. The spectral estimates from the multitaper and single tapers are also examined for consistency of measurement. Results show that the multitaper estimate is more consistent in Doppler measurements than the single-taper estimates. Doppler width measurements with the different approaches were studied, and the results show that the multitaper technique performs better in terms of temporal resolution and estimation accuracy.
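A minimal sketch of the sinusoidal (sine) multitaper estimate, assuming the Riedel-Sidorenko taper family; the radar-specific processing (coherent integration, noise subtraction, moment estimation) is omitted.

```python
import numpy as np

def sine_multitaper_psd(x, n_tapers=3):
    # Average of eigenspectra using the sinusoidal taper family;
    # averaging K tapered periodograms reduces the variance of the
    # estimate relative to a single Hanning or rectangular taper.
    N = len(x)
    n = np.arange(1, N + 1)
    psd = np.zeros(N // 2 + 1)
    for k in range(1, n_tapers + 1):
        taper = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * k * n / (N + 1))
        psd += np.abs(np.fft.rfft(taper * x)) ** 2
    return psd / n_tapers
```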
Estimating the Contrail Impact on Climate Using the UK Met Office Model
NASA Astrophysics Data System (ADS)
Rap, A.; Forster, P. M.
2008-12-01
With air travel predicted to increase over the coming century, the emissions associated with air traffic are expected to have a significant warming effect on climate. According to current best estimates, an important contribution comes from contrails. However, as reported by the IPCC fourth assessment report, these best estimates still carry high uncertainty. The development and validation of contrail parameterizations in global climate models is therefore very important. This study develops a contrail parameterization within the UK Met Office Climate Model. Using this new parameterization, we estimate that for 2002 air traffic the global mean annual contrail coverage is approximately 0.11%, a value which is in good agreement with several other estimates. The corresponding contrail radiative forcing (RF) is calculated to be approximately 4 and 6 mWm-2 in all-sky and clear-sky conditions, respectively. These values lie within the lower end of the RF range reported by the latest IPCC assessment. The relatively high cloud masking effect on contrails produced by our parameterization compared with other studies is investigated, and a possible cause for this difference is suggested. The effect of the diurnal variation of air traffic on both contrail coverage and contrail RF is also investigated. The new parameterization is also employed in thirty-year slab-ocean model runs to give one of the first insights into contrail effects on daily temperature range and the climate impact of contrails.
Climate change impact on North Sea wave conditions: a consistent analysis of ten projections
NASA Astrophysics Data System (ADS)
Grabemann, Iris; Groll, Nikolaus; Möller, Jens; Weisse, Ralf
2015-02-01
Long-term changes in the mean and extreme wind wave conditions as they may occur in the course of anthropogenic climate change can influence and endanger human coastal and offshore activities. A set of ten wave climate projections derived from time slice and transient simulations of future conditions is analyzed to estimate the possible impact of anthropogenic climate change on mean and extreme wave conditions in the North Sea. This set includes different combinations of IPCC SRES emission scenarios (A2, B2, A1B, and B1), global and regional models, and initial states. A consistent approach is used to provide a more robust assessment of expected changes and uncertainties. While the spatial patterns and the magnitude of the climate change signals vary, some robust features among the ten projections emerge: mean and severe wave heights tend to increase in the eastern parts of the North Sea towards the end of the twenty-first century in nine of the ten projections, but the magnitude of the increase in extreme waves varies on the order of decimeters between these projections. For the western parts of the North Sea, more than half of the projections suggest a decrease in mean and extreme wave heights. Comparing the different sources of uncertainty due to models, scenarios, and initial conditions, the influence of the emission scenario on the climate change signal appears to be comparatively less important. Furthermore, the transient projections show strong multi-decadal fluctuations, and changes towards the end of the twenty-first century might partly be associated with internal variability rather than with systematic changes.
Analysis of twelve-month degradation in three polycrystalline photovoltaic modules
NASA Astrophysics Data System (ADS)
Lai, T.; Potter, B. G.; Simmons-Potter, K.
2016-09-01
Polycrystalline silicon photovoltaic (PV) modules have the advantage of lower manufacturing cost compared to their monocrystalline counterparts, but generally exhibit both lower initial module efficiencies and more significant early-stage efficiency degradation than similar monocrystalline PV modules. For both technologies, noticeable deterioration in power conversion efficiency typically occurs over the first two years of usage. Estimating PV lifetime by examining the performance degradation behavior under given environmental conditions is therefore a continuing goal of experimental research and economic analysis. In the present work, accelerated lifecycle testing (ALT) of three polycrystalline PV technologies was performed in a full-scale, industrial-standard environmental chamber equipped with single-sun irradiance capability, providing an illumination uniformity of 98% over a 2 x 1.6 m area. In order to investigate environmental aging effects, time-dependent PV performance (I-V characteristics) was evaluated over a recurring, compressed day-night cycle that simulated local daily solar insolation for the southwestern United States, followed by dark (night) periods. During a total test time of just under 4 months, corresponding to a year-equivalent exposure for a fielded module, the temperature and humidity varied from 3°C to 40°C and 5% to 85%, based on annual weather profiles for Tucson, AZ. Removing the temperature de-rating effect that was clearly seen in the data enabled the computation of normalized efficiency degradation with time and environmental exposure. Results confirm the impact of environmental conditions on long-term module performance. Overall, more than 2% efficiency degradation in the first year of usage was observed for all three polycrystalline Si solar modules. The average 5-year degradation of each PV technology was estimated from the determined degradation rates.
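Removing the temperature de-rating can be sketched as dividing the measured efficiency by a linear temperature-coefficient model, so that the residual trend reflects degradation alone. The coefficient value and reference temperature below are typical assumed numbers, not the study's fitted values.

```python
def normalized_efficiency(eta_measured, t_module_c, gamma=-0.0045, t_ref_c=25.0):
    # Divide out the linear temperature de-rating so the residual trend in
    # the normalized value reflects degradation; gamma is an assumed typical
    # power temperature coefficient for poly-Si (per degree C).
    return eta_measured / (1.0 + gamma * (t_module_c - t_ref_c))
```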
NASA Astrophysics Data System (ADS)
Gallet, Jean-Charles; Merkouriadi, Ioanna; Liston, Glen E.; Polashenski, Chris; Hudson, Stephen; Rösel, Anja; Gerland, Sebastian
2017-10-01
Snow plays a crucial and conflicting role over sea ice, reflecting the incoming solar energy while reducing heat transfer, so that its temporal and spatial variability is important to estimate. During the Norwegian Young Sea ICE (N-ICE2015) campaign, snow physical properties and variability were examined, and results from April until mid-June 2015 are presented here. Overall, snow thickness on second-year ice was about 20 cm greater than climatology, averaging 55 ± 27 cm, compared with 32 ± 20 cm on first-year ice. The average density was 350-400 kg m-3 in spring, with higher values in June due to melting. Larger variability in snow water equivalent was observed due to flooding in March. However, the snow structure was quite homogeneous in spring owing to warmer weather and fewer storms passing over the field camp. The snow consisted mostly of wind slab, faceted, and depth hoar crystals with occasional fresh snow. These observations highlight the more dynamic evolution of snow properties over sea ice compared to previous observations, due to more variable sea ice and weather conditions in this area. The snowpack was isothermal as early as 10 June, with the first onset of melt clearly identified in early June. Based on our observations, we estimate that snow could be accurately represented by a three- to four-layer modeling approach, in order to better capture the high variability of snow thickness and density together with the rapid metamorphism of the snow in springtime.
First Chance Outreach. Del Rio First Chance Early Childhood Program.
ERIC Educational Resources Information Center
Hanna, Cornelia B.; Levermann, D.
In order to help handicapped children function in regular school programs by the time they enter first grade, the First Chance Early Childhood Program provides precise intervention into the development of children aged 3 to 5 with clearly identified handicapping conditions. Using English and/or Spanish, program staff test and measure the referred…
Estimators of wheel slip for electric vehicles using torque and encoder measurements
NASA Astrophysics Data System (ADS)
Boisvert, M.; Micheau, P.
2016-08-01
For the purpose of regenerative braking control in hybrid and electrical vehicles, recent studies have suggested controlling the slip ratio of the electric-powered wheel. A slip tracking controller requires an accurate slip estimation in the overall range of the slip ratio (from 0 to 1), contrary to the conventional slip limiter (ABS) which calls for an accurate slip estimation in the critical slip area, estimated at around 0.15 in several applications. Considering that it is not possible to directly measure the slip ratio of a wheel, the problem is to estimate the latter from available online data. To estimate the slip of a wheel, both wheel speed and vehicle speed must be known. Several studies provide algorithms that allow obtaining a good estimation of vehicle speed. On the other hand, there is no proposed algorithm for the conditioning of the wheel speed measurement. Indeed, the noise included in the wheel speed measurement reduces the accuracy of the slip estimation, a disturbance increasingly significant at low speed and low torque. Herein, two different extended Kalman observers of slip ratio were developed. The first calculates the slip ratio with data provided by an observer of vehicle speed and of propeller wheel speed. The second observer uses an original nonlinear model of the slip ratio as a function of the electric motor. A sinus tracking algorithm is included in the two observers, in order to reject harmonic disturbances of wheel speed measurement. Moreover, mass and road uncertainties can be compensated with a coefficient adapted online by an RLS. The algorithms were implemented and tested with a three-wheel recreational hybrid vehicle. Experimental results show the efficiency of both methods.
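The slip ratio itself is conventionally computed from wheel and vehicle speeds as below. This sketch of the signed definition (negative in braking, positive in traction) and the low-speed floor is an assumption about conventions, not the authors' Kalman observer.

```python
def slip_ratio(wheel_omega, wheel_radius, vehicle_speed, eps=0.1):
    # Signed slip: negative while braking (wheel slower than vehicle),
    # positive in traction. eps (m/s) floors the denominator so the
    # estimate does not blow up at low speed, the regime where the paper
    # notes measurement noise dominates.
    v_wheel = wheel_omega * wheel_radius
    denom = max(v_wheel, vehicle_speed, eps)
    return (v_wheel - vehicle_speed) / denom
```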
Assessment of Cropland Water and Nitrogen Balance from Climate Change in Korea Peninsular
NASA Astrophysics Data System (ADS)
Lim, C. H.; Song, C.; Kim, T.; Lee, W. K.; Jeon, S. W.
2015-12-01
Crop growth depends on cropland productivity, whose changes are driven by climate-induced changes in the water and nitrogen balance. In this study, we estimated the change in the cropland water and nitrogen balance in the Korean peninsula using meteorological data observed over the last 30 years (1984-2013), together with soil, topography, and management data for cropland. To estimate the water and nitrogen variables, we used the GIS-based EPIC model, a major crop model in the agro-ecosystem modelling field. Among the many water and nitrogen variables, we selected evapotranspiration, runoff, precipitation, nitrification, N loss, N content, and denitrification for this analysis, as these are associated with the cropland water and nitrogen balance. First, we found changes in the water balance across the Korean peninsula, with South Korea in a better condition than North Korea: in North Korea, evapotranspiration and precipitation were lower than in South Korea, but runoff was higher. We also obtained results on climate-driven changes in the nitrogen balance. Spatially, South and North Korea showed similar nitrogen-balance conditions over the whole period, but temporally the balance showed negative trends over time, caused by climate change. Overall, for the water and nitrogen balance over the last 30 years in the Korean peninsula, South Korea was in a better condition than North Korea. Changes in the water and nitrogen balance imply that agricultural management actions, such as irrigation and fertilization, must be adapted. In the future, climate change will strongly affect the cropland water and nitrogen balance in mid-latitude areas, so this sector must be prepared for wise adaptation to climate change.
NASA Astrophysics Data System (ADS)
Vijayaraghavan, Krishna
2014-11-01
This paper presents two novel observer concepts. First, it develops a globally exponentially stable nonlinear observer for noise-free dissipative nonlinear systems. Second, for a dissipative nonlinear system with measurement noise, the paper develops an observer that guarantees a desired performance, namely an upper limit on the ratio of the square of the weighted L2 norm of the error to the square of the weighted L2 norm of the measurement noise. The necessary and sufficient conditions for both observers are reformulated as algebraic Riccati equations (AREs) so that standard solvers can be utilised. In addition, the paper presents necessary and sufficient conditions to be satisfied by the nonlinear system in order to ensure that the ARE (and hence the observer design problem) has a solution. The use of the methodology developed in this paper is demonstrated through illustrative examples. In the literature, there is no previous observer for dissipative systems that provides both necessary and sufficient conditions; results for noisy systems either rely on linearising the system about a state trajectory (requiring initial estimates to be close to the actual states) or apply to specialised systems and cannot be extended to dissipative systems.
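For the linear special case, the ARE machinery the paper relies on is available in standard solvers. The sketch below, with an assumed stable 2x2 system and assumed noise weights, computes a Kalman-like observer gain from the filter ARE and checks that the error dynamics are Hurwitz; the nonlinear dissipative terms of the paper's observers are not represented.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Observer for dx/dt = A x, y = C x + noise, via the filter ARE
#   A P + P A' - P C' R^-1 C P + Q = 0,   gain L = P C' R^-1.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])   # assumed example system
C = np.array([[1.0, 0.0]])
Q = np.eye(2)            # process-noise weight (assumed)
R = np.array([[0.5]])    # measurement-noise weight (assumed)

# Duality: the filter ARE for (A, C) is the control ARE for (A', C').
P = solve_continuous_are(A.T, C.T, Q, R)
L = P @ C.T @ np.linalg.inv(R)

# A - L C must be Hurwitz for the estimation error to decay.
assert np.all(np.linalg.eigvals(A - L @ C).real < 0)
```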
Stability Analysis of Distributed Order Fractional Chen System
Aminikhah, H.; Refahi Sheikhani, A.; Rezazadeh, H.
2013-01-01
We first investigate sufficient and necessary conditions of stability of nonlinear distributed order fractional system and then we generalize the integer-order Chen system into the distributed order fractional domain. Based on the asymptotic stability theory of nonlinear distributed order fractional systems, the stability of distributed order fractional Chen system is discussed. In addition, we have found that chaos exists in the double fractional order Chen system. Numerical solutions are used to verify the analytical results. PMID:24489508
Discretizing singular point sources in hyperbolic wave propagation problems
Petersson, N. Anders; O'Reilly, Ossian; Sjogreen, Bjorn; ...
2016-06-01
Here, we develop high order accurate source discretizations for hyperbolic wave propagation problems in first order formulation that are discretized by finite difference schemes. By studying the Fourier series expansions of the source discretization and the finite difference operator, we derive sufficient conditions for achieving design accuracy in the numerical solution. Only half of the conditions in Fourier space can be satisfied through moment conditions on the source discretization, and we develop smoothness conditions for satisfying the remaining accuracy conditions. The resulting source discretization has compact support in physical space, and is spread over as many grid points as the number of moment and smoothness conditions. In numerical experiments we demonstrate high order of accuracy in the numerical solution of the 1-D advection equation (both in the interior and near a boundary), the 3-D elastic wave equation, and the 3-D linearized Euler equations.
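The moment conditions alone can be illustrated in 1-D: choose weights on the grid points nearest the source location so that the discrete moments match those of the Dirac delta. The stencil placement and normalization below are assumptions for illustration, and the paper's additional smoothness conditions are omitted.

```python
import numpy as np

def point_source_weights(xs, h, n_cond=4):
    # Weights w_j at the n_cond grid points nearest xs such that
    #   h * sum_j w_j * x_j**m = xs**m   for m = 0 .. n_cond-1,
    # i.e. the discrete source matches the first n_cond moments of
    # delta(x - xs) on a uniform grid of spacing h.
    j0 = int(np.floor(xs / h)) - n_cond // 2 + 1   # leftmost stencil index
    xj = (j0 + np.arange(n_cond)) * h
    V = np.vander(xj, n_cond, increasing=True).T   # row m holds xj**m
    b = xs ** np.arange(n_cond)
    return j0, np.linalg.solve(V, b) / h           # /h gives unit integral
```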
Optimal Control Problems with Switching Points. Ph.D. Thesis, 1990 Final Report
NASA Technical Reports Server (NTRS)
Seywald, Hans
1991-01-01
The main idea of this report is to give an overview of the problems and difficulties that arise in solving optimal control problems with switching points. A brief discussion of existing optimality conditions is given and a numerical approach for solving the multipoint boundary value problems associated with the first-order necessary conditions of optimal control is presented. Two real-life aerospace optimization problems are treated explicitly. These are altitude maximization for a sounding rocket (Goddard Problem) in the presence of a dynamic pressure limit, and range maximization for a supersonic aircraft flying in the vertical, also in the presence of a dynamic pressure limit. In the second problem singular control appears along arcs with active dynamic pressure limit, which in the context of optimal control, represents a first-order state inequality constraint. An extension of the Generalized Legendre-Clebsch Condition to the case of singular control along state/control constrained arcs is presented and is applied to the aircraft range maximization problem stated above. A contribution to the field of Jacobi Necessary Conditions is made by giving a new proof for the non-optimality of conjugate paths in the Accessory Minimum Problem. Because of its simple and explicit character, the new proof may provide the basis for an extension of Jacobi's Necessary Condition to the case of the trajectories with interior point constraints. Finally, the result that touch points cannot occur for first-order state inequality constraints is extended to the case of vector valued control functions.
NASA Astrophysics Data System (ADS)
Foster, L. K.; Clark, B. R.; Duncan, L. L.; Tebo, D. T.; White, J.
2017-12-01
Several historical groundwater models exist within the Coastal Lowlands Aquifer System (CLAS), which spans the Gulf Coastal Plain in Texas, Louisiana, Mississippi, Alabama, and Florida. The largest of these models, called the Gulf Coast Regional Aquifer System Analysis (RASA) model, has been brought into a new framework using the Newton formulation for MODFLOW-2005 (MODFLOW-NWT) and serves as the starting point of a new investigation underway by the U.S. Geological Survey to improve understanding of the CLAS and provide predictions of future groundwater availability within an uncertainty quantification (UQ) framework. The use of an UQ framework will not only provide estimates of water-level observation worth, hydraulic parameter uncertainty, boundary-condition uncertainty, and uncertainty of future potential predictions, but it will also guide the model development process. Traditionally, model development proceeds from dataset construction to the process of deterministic history matching, followed by deterministic predictions using the model. This investigation will combine the use of UQ with existing historical models of the study area to assess in a quantitative framework the effect model package and property improvements have on the ability to represent past-system states, as well as the effect on the model's ability to make certain predictions of water levels, water budgets, and base-flow estimates. Estimates of hydraulic property information and boundary conditions from the existing models and literature, forming the prior, will be used to make initial estimates of model forecasts and their corresponding uncertainty, along with an uncalibrated groundwater model run within an unconstrained Monte Carlo analysis. First-Order Second-Moment (FOSM) analysis will also be used to investigate parameter and predictive uncertainty, and guide next steps in model development prior to rigorous history matching by using PEST++ parameter estimation code.
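FOSM propagation itself is compact: linearize the model about the mean parameter vector and push the prior covariance through the Jacobian. The finite-difference Jacobian and function signature below are illustrative assumptions, not the PEST++ implementation.

```python
import numpy as np

def fosm(predict, p_mean, p_cov, dp=1e-6):
    # First-Order Second-Moment: linearize predict() about the mean
    # parameters and propagate the prior covariance, cov_y ≈ J Σ Jᵀ.
    y0 = np.atleast_1d(predict(p_mean))
    J = np.empty((y0.size, p_mean.size))
    for i in range(p_mean.size):
        p = p_mean.astype(float).copy()
        p[i] += dp                                  # one-sided perturbation
        J[:, i] = (np.atleast_1d(predict(p)) - y0) / dp
    return y0, J @ p_cov @ J.T                      # prediction and its covariance
```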
Brubacher, Sonja P; Roberts, Kim P; Powell, Martine
2012-01-01
Children (N = 157) 4 to 8 years old participated 1 time (single) or 4 times (repeated) in an interactive event. Across each condition, half were questioned a week later about the only or a specific occurrence of the event (depth first) and then about what usually happens. Half were prompted in the reverse order (breadth first). Children with repeated experience who first were asked about what usually happens reported more event-related information overall than those asked about an occurrence first. All children used episodic language when describing an occurrence; however, children with repeated-event experience used episodic language less often when describing what usually happens than did those with a single experience. Accuracy rates did not differ between conditions. Implications for theories of repeated-event memory are discussed.
Decision Support for Transportation Planning in Joint COA Development.
1996-06-01
COA generation is interwoven with COA evaluation. SOCAP demonstrates its ability to aid in feasibility estimation by producing output for the Dynamic Analysis and Replanning Tool (DART) transportation feasibility estimator. The output of SOCAP is first used by an intermediate Force Module Enhancer and Requirements Generator (FMERG), which elaborates the major force list produced by SOCAP in order to add supporting units and their transportation
P.B., Mohite; R.B., Pandhare; S.G., Khanage
2012-01-01
Purpose: Lamivudine is a cytidine analogue and zidovudine is a thymidine analogue; both are used as antiretroviral agents. Both drugs are available in tablet dosage forms, with doses of 150 mg for LAM and 300 mg for ZID, respectively. Method: The method employed is based on first-order derivative spectroscopy. Wavelengths of 279 nm and 300 nm were selected for the estimation of lamivudine and zidovudine, respectively, from the first-order derivative spectra. The concentration of both drugs was determined by the proposed method. The results of the analysis have been validated statistically and by recovery studies as per ICH guidelines. Result: Both drugs obey Beer's law in the concentration range 10-50 μg mL-1, with regression coefficients of 0.9998 and 0.9999, intercepts of -0.0677 and -0.0043, and slopes of 0.0457 and 0.0391 for LAM and ZID, respectively. The accuracy and reproducibility results are close to 100%, with RSD below 2%. Conclusion: A simple, accurate, precise, sensitive, and economical procedure for the simultaneous estimation of lamivudine and zidovudine in tablet dosage form has been developed. PMID:24312779
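The zero-crossing principle behind first-order derivative spectroscopy can be sketched with synthetic Gaussian bands (the band centers and widths below are invented stand-ins, not the real LAM/ZID spectra): at the wavelength where one component's derivative crosses zero, the mixture derivative is proportional to the other component alone.

```python
import numpy as np

wl = np.linspace(250.0, 330.0, 801)   # wavelength grid, nm (0.1 nm step)

def gauss_band(center, width):
    # Unit-concentration absorbance band; hypothetical shape.
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

lam_unit = gauss_band(271.0, 9.0)     # stand-in for lamivudine
zid_unit = gauss_band(266.0, 12.0)    # stand-in for zidovudine

def estimate_lam(absorbance):
    # First-derivative zero-crossing method: at the ZID band maximum the
    # ZID derivative is zero, so the mixture derivative there depends on
    # the LAM concentration alone and calibrates it directly.
    d_mix = np.gradient(absorbance, wl)
    d_lam = np.gradient(lam_unit, wl)
    idx = int(np.argmax(zid_unit))    # ZID derivative zero-crossing
    return d_mix[idx] / d_lam[idx]
```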
ERIC Educational Resources Information Center
Sinharay, Sandip
2015-01-01
The maximum likelihood estimate (MLE) of the ability parameter of an item response theory model with known item parameters was proved to be asymptotically normally distributed under a set of regularity conditions for tests involving dichotomous items and a unidimensional ability parameter (Klauer, 1990; Lord, 1983). This article first considers…
Undergraduate Financial Aid and Subsequent Giving Behavior. Discussion Paper.
ERIC Educational Resources Information Center
Dugan, Kelly; Mullin, Charles H.; Siegfried, John J.
Data on 2,822 Vanderbilt University graduates were used to investigate alumni giving behavior during the 8 years after graduation. A two-stage model accounting for individual truncation was used first to estimate the likelihood of making a contribution and second to estimate the average gift size conditional on contributing. The type of financial…
Congdon, Peter
2006-12-01
This paper considers the development of estimates of mental illness prevalence for small areas and applications in explaining psychiatric outcomes and in assessing service provision. Estimates of prevalence are based on a logistic regression analysis of two national studies that provides model based estimates of relative morbidity risk by demographic, socio-economic and ethnic group for major psychiatric conditions; household/marital and area status also figure in the regression. Relative risk estimates are used, along with suitably disaggregated census populations, to make prevalence estimates for 354 English local authorities (LAs). Two applications are considered: the first involves analysis of variations in schizophrenia referrals and suicide mortality over English LAs that takes account of prevalence differences, and the second involves assessing hospital referral and bed use in relation to prevalence (for ages 16-74) for a case study area, Waltham Forest in NE London.
Individual differences in first- and second-order temporal judgment.
Corcoran, Andrew W; Groot, Christopher; Bruno, Aurelio; Johnston, Alan; Cropper, Simon J
2018-01-01
The ability of subjects to identify and reproduce brief temporal intervals is influenced by many factors, whether stimulus-based, task-based or subject-based. The current study examines the role individual differences play in subsecond and suprasecond timing judgments, using the schizotypy personality scale as a test-case approach for quantifying a broad range of individual differences. In two experiments, 129 (Experiment 1) and 141 (Experiment 2) subjects completed the O-LIFE personality questionnaire prior to performing a modified temporal-bisection task. In the bisection task, subjects responded to two identical instantiations of a luminance grating presented in a 4° window, 4° above fixation, for 1.5 s (Experiment 1) or 3 s (Experiment 2). Subjects initiated presentation with a button-press, and released the button when they considered the stimulus to be half-way through (750/1500 ms). Subjects were then asked to indicate their 'most accurate estimate' of the two intervals. In this way we measure both performance on the task (a first-order measure) and the subjects' knowledge of their performance (a second-order measure). In Experiment 1 the effect of grating drift and feedback on performance was also examined. Experiment 2 focused on the static/no-feedback condition. For the group data, Experiment 1 showed a significant effect of presentation order in the baseline condition (no feedback), which disappeared when feedback was provided. Moving the stimulus had no effect on perceived duration. Experiment 2 showed no effect of stimulus presentation order. This elimination of the subsecond order effect came at the expense of accuracy, as the mid-point of the suprasecond interval was generally underestimated. Response precision increased as a proportion of total duration, reducing the variance below that predicted by Weber's law. This result is consistent with a breakdown of the scalar properties of time perception in the early suprasecond range.
All subjects showed good insight into their own performance, though that insight did not necessarily correlate with the veridical bisection point. In terms of personality, we found evidence of significant differences in performance along the Unusual Experiences subscale, of most theoretical interest here, in the subsecond condition only. There was also significant correlation with Impulsive Nonconformity and Cognitive Disorganisation in the sub- and suprasecond conditions, respectively. Overall, these data support a partial dissociation of timing mechanisms at very short and slightly longer intervals. Further, these results suggest that perception is not the only critical mitigator of confidence in temporal experience, since individuals can effectively compensate for differences in perception at the level of metacognition in early suprasecond time. Though there are individual differences in performance, these are perhaps less than expected from previous reports and indicate an effective timing mechanism dealing with brief durations independent of the influence of significant personality trait differences.
First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity
NASA Technical Reports Server (NTRS)
Cai, Z.; Manteuffel, T. A.; McCormick, S. F.
1996-01-01
Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H(exp 1) product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity with estimates that are uniform in the Lame constants.
NASA Astrophysics Data System (ADS)
Zhong, Chongquan; Lin, Yaoyao
2017-11-01
In this work, a model-reference-adaptive-control-based estimation algorithm is proposed for online multi-parameter identification of surface-mounted permanent magnet synchronous machines. By taking the dq-axis equations of a practical motor as the reference model and the dq-axis estimation equations as the adjustable model, a standard model-reference-adaptive-system-based estimator is established. Additionally, the Popov hyperstability principle is used in the design of the adaptive law to guarantee accurate convergence. In order to reduce oscillation in the identification results, this work introduces a first-order low-pass digital filter to improve the precision of the parameter estimation. The proposed scheme was applied to an SPM synchronous motor control system without any additional circuits and implemented on a DSP TMS320LF2812. The experimental results demonstrate the effectiveness of the proposed method.
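The first-order low-pass digital filter mentioned above has the standard one-pole form; the smoothing constant alpha is a tuning assumption (for sample period dt and time constant tau, alpha = dt / (tau + dt)).

```python
def lowpass(samples, alpha):
    # One-pole IIR low-pass: y[n] = alpha * x[n] + (1 - alpha) * y[n-1].
    # Initializing y at the first sample avoids a start-up transient.
    y = samples[0]
    out = []
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out
```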
NASA Astrophysics Data System (ADS)
Aubert, J.; Fournier, A.
2011-10-01
Over the past decades, direct three-dimensional numerical modelling has been successfully used to reproduce the main features of the geodynamo. Here we report on efforts to solve the associated inverse problem, aiming at inferring the underlying properties of the system from the sole knowledge of surface observations and the first principle dynamical equations describing the convective dynamo. To this end we rely on twin experiments. A reference model time sequence is first produced and used to generate synthetic data, restricted here to the large-scale component of the magnetic field and its rate of change at the outer boundary. Starting from a different initial condition, a second sequence is next run and attempts are made to recover the internal magnetic, velocity and buoyancy anomaly fields from the sparse surficial data. In order to reduce the vast underdetermination of this problem, we use stochastic inversion, a linear estimation method determining the most likely internal state compatible with the observations and some prior knowledge, and we also implement a sequential evolution algorithm in order to invert time-dependent surface observations. The prior is the multivariate statistics of the numerical model, which are directly computed from a large number of snapshots stored during a preliminary direct run. The statistics display strong correlation between different harmonic degrees of the surface observations and internal fields, provided they share the same harmonic order, a natural consequence of the linear coupling of the governing dynamical equations and of the leading influence of the Coriolis force. Synthetic experiments performed with a weakly nonlinear model yield an excellent quantitative retrieval of the internal structure. In contrast, the use of a strongly nonlinear (and more realistic) model results in less accurate static estimations, which in turn fail to constrain the unobserved small scales in the time integration of the evolution scheme. 
Evaluating the quality of forecasts of the system evolution against the reference solution, we show that our scheme can improve predictions based on linear extrapolations on forecast horizons shorter than the system e-folding time. Still, in the perspective of forthcoming data assimilation activities, our study underlines the need for advanced estimation techniques able to cope with the moderate to strong nonlinearities present in the geodynamo.
Quantification of atmospheric methane oxidation in glacier forefields: Initial survey results
NASA Astrophysics Data System (ADS)
Nauer, Philipp A.; Schroth, Martin H.; Pinto, Eric A.; Zeyer, Josef
2010-05-01
The oxidation of CH4 by methanotrophic bacteria is the only known terrestrial sink for atmospheric CH4. Aerobic methanotrophs are active in soils and sediments under various environmental conditions. However, little is known about the activity and abundance of methanotrophs in pioneering ecosystems and their role in succession. In alpine environments, receding glaciers offer a unique opportunity to investigate soil development and ecosystem succession. In an initial survey during summer and autumn 2009 we probed several locations in the forefields of four glaciers in the Swiss Alps to quantify the turnover of atmospheric methane in recently exposed soils. Three glacier forefields (the Stein, Steinlimi and Tiefen) are situated on siliceous bedrock, while one (the Griessen) is situated on calcareous bedrock. We sampled soil air from different depths to generate CH4 concentration profiles for qualitative analysis. At selected locations we applied surface Gas Push-Pull Tests (GPPT) to estimate first-order rate coefficients of CH4 oxidation. The test consists of a controlled injection of the reactants CH4 and O2 and the tracer Ar into the soil, followed by extraction of the gas mixture from the same location. A top-closed steel cylinder previously emplaced in the soil encloses the injected gas mixture to ensure sufficient reaction times. Rate coefficients can be derived from differences between the reactant and tracer breakthrough curves. In one GPPT we employed 13C-CH4 and measured the evolution of δ13C of extracted CO2. To confirm rate coefficients obtained by GPPTs, we estimated effective soil diffusivity from soil core samples and fitted a diffusion-consumption model to our profile data. A qualitative analysis of the concentration profiles showed little activity in the forefields on siliceous bedrock, with only one out of fifteen locations exhibiting substantially lower CH4 concentrations in the soil compared to the atmosphere.
The surface GPPTs with conventional CH4 at the active location were not sensitive enough to derive meaningful first-order rate coefficients of CH4 oxidation. The more sensitive GPPT with 13C-CH4 resulted in a coefficient of 0.025 h⁻¹, close to the value of 0.011 h⁻¹ estimated from the corresponding concentration profile. Activities in the forefield on calcareous bedrock were substantially higher, with decreased CH4 concentrations in the soil at three out of five locations. Estimated first-order rate coefficients from GPPT and profile at one selected location were 0.6 h⁻¹ and 1.3 h⁻¹, respectively, one to two orders of magnitude higher than values from the siliceous forefield. Additional analysis by quantitative PCR revealed substantially lower numbers of pmoA gene copies per g soil at the active location in the siliceous forefield compared to the selected location in the calcareous forefield. Reasons for these differences in activity and abundance are still unknown and will be the subject of further investigation in an upcoming field campaign. The GPPT in combination with δ13C analysis of extracted CO2 proved a viable approach for sensitively quantifying low CH4 turnover.
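The rate estimation behind the GPPT can be illustrated with a hedged sketch (all numbers synthetic, not the field data): assuming reactant and tracer undergo identical transport, first-order CH4 consumption makes the log of the CH4/Ar concentration ratio decline linearly in time, with slope -k.

```python
import numpy as np

def gppt_rate_coefficient(t_hours, c_ch4, c_ar):
    """Estimate a first-order CH4 oxidation rate coefficient (1/h) from
    paired reactant (CH4) and tracer (Ar) breakthrough curves.  Assuming
    identical transport, ln(C_CH4/C_Ar) = const - k*t, so k is the
    negative slope of the log concentration ratio versus time."""
    log_ratio = np.log(np.asarray(c_ch4) / np.asarray(c_ar))
    slope, _intercept = np.polyfit(t_hours, log_ratio, 1)
    return -slope

# Synthetic curves: both gases share transport/dilution, CH4 also reacts
t = np.linspace(0.0, 8.0, 20)              # hours since injection
dilution = np.exp(-0.30 * t)               # common transport losses
k_true = 0.025                             # 1/h, same order as the 13C-GPPT
c_ar = 1.0 * dilution
c_ch4 = 1.0 * dilution * np.exp(-k_true * t)
k_est = gppt_rate_coefficient(t, c_ch4, c_ar)  # recovers k_true
```

Because the tracer experiences the same dilution as the reactant, taking the ratio cancels transport effects and isolates the reaction term.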
MICROBIAL TRANSFORMATION RATE CONSTANTS OF STRUCTURALLY DIVERSE MAN-MADE CHEMICALS
To assist in estimating microbially mediated transformation rates of man-made chemicals from their chemical structures, all second order rate constants that have been measured under conditions that make the values comparable have been extracted from the literature and combined wi...
NASA Astrophysics Data System (ADS)
Medialdea, Alicia; Bateman, Mark D.; Evans, David J.; Roberts, David H.; Chiverrell, Richard C.; Clark, Chris D.
2017-04-01
BRITICE-CHRONO is a NERC-funded consortium project of more than 40 researchers aiming to establish the retreat patterns of the last British and Irish Ice Sheet. For this purpose, optically stimulated luminescence (OSL) dating, among other dating techniques, has been used to establish an accurate chronology. More than 150 samples from glacial environments have been dated and provide key information for modelling the ice retreat. Nevertheless, luminescence dating of glacial sediments has proven challenging: first, glacial sediments were often affected by incomplete bleaching, and secondly, quartz grains within the sampled sediments often showed complex luminescence behaviour, characterized by dim signals and low reproducibility. Specific statistical approaches have been used to overcome the former, enabling the estimated ages to be based on the grain populations most likely to have been well bleached. This work presents how issues surrounding complex luminescence behaviour were overcome in order to obtain accurate OSL ages. The study was performed on two samples of bedded sand originating from an ice-walled lake plain in Lincolnshire, UK. Quartz extracts from each sample were artificially bleached and irradiated to known doses. Dose recovery tests were carried out under different conditions to study the effects of preheat temperature, thermal quenching, the contribution of slow components, a hot bleach after measurement cycles, and IR stimulation. Measurements were performed on different luminescence readers to assess the possible contribution of instrument reproducibility. These showed that great variability can be observed not only among the studied samples but also within a specific site and even within a single sample. To determine an accurate chronology and assign realistic uncertainties to the estimated ages, this variability must be taken into account.
Tight acceptance criteria, based on doses measured from natural (unexposed) aliquots, were applied. These yielded reproducible dose distributions from which accurate ages could be estimated.
How can streamflow and climate-landscape data be used to estimate baseflow mean response time?
NASA Astrophysics Data System (ADS)
Zhang, Runrun; Chen, Xi; Zhang, Zhicai; Soulsby, Chris; Gao, Man
2018-02-01
Mean response time (MRT) is a metric describing the propagation of catchment hydraulic behavior that reflects both hydro-climatic conditions and catchment characteristics. To provide a comprehensive understanding of catchment response over longer time scales, the MRT function for baseflow generation was derived using an instantaneous unit hydrograph (IUH) model that describes the subsurface response to effective rainfall inputs. IUH parameters were estimated based on the "match test" between the autocorrelation functions (ACFs) derived from the filtered baseflow time series and from the IUH parameters, under the GLUE framework. Regionalization of MRT was conducted using these estimates and hydroclimate-landscape indices in 22 sub-basins of the Jinghe River Basin (JRB) in the Loess Plateau of northwest China. Results indicate strong equifinality in the determination of the best parameter sets, but the median values of the MRT estimates are relatively stable in the acceptable range of the parameters. MRTs vary markedly over the studied sub-basins, ranging from tens of days to more than a year. Climate, topography and geomorphology were identified as three first-order controls on recharge-baseflow response processes. Human activities involving the cultivation of permanent crops may lengthen the baseflow MRT and hence increase the dynamic storage. Cross validation suggests the model can be used to estimate MRTs in ungauged catchments in similar regions throughout the Loess Plateau. The proposed method provides a systematic approach for MRT estimation and regionalization in terms of hydroclimate and catchment characteristics, which is helpful for sustainable water resources utilization and ecological protection in the Loess Plateau.
Xie, Yuanlong; Tang, Xiaoqi; Song, Bao; Zhou, Xiangdong; Guo, Yixuan
2018-04-01
In this paper, data-driven adaptive fractional order proportional integral (AFOPI) control is presented for a permanent magnet synchronous motor (PMSM) servo system perturbed by measurement noise and data dropouts. The proposed method directly exploits the closed-loop process data for the AFOPI controller design under unknown noise distribution and data missing probability. Firstly, the AFOPI controller tuning problem is constructed as a parameter identification problem using modified lp-norm virtual reference feedback tuning (VRFT). Then, iteratively reweighted least squares is integrated into the lp-norm VRFT to give a consistent compensation solution for the AFOPI controller. The measurement noise and data dropouts are estimated and eliminated by periodic feedback compensation, so that the AFOPI controller is updated online to accommodate time-varying operating conditions. Moreover, convergence and stability are guaranteed by mathematical analysis. Finally, the effectiveness of the proposed method is demonstrated in both simulations and experiments on a practical PMSM servo system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Raman and Brillouin scattering studies of bulk 2H-WSe2
NASA Astrophysics Data System (ADS)
Akintola, K.; Andrews, G. T.; Curnoe, S. H.; Koehler, M. R.; Keppens, V.
2015-10-01
Raman and Brillouin spectroscopy were used to probe optic and acoustic phonons in bulk 2H-WSe2. Raman spectra collected under different polarization conditions allowed assignment of spectral peaks to various first- and second-order processes. In contrast to some previous studies, a Raman peak at ~259 cm⁻¹ was found not to be due to the A1g mode but to a second-order process involving phonons at either the M or K point of the Brillouin zone. Resonance effects due to excitons were also observed in the Raman spectra. Brillouin spectra of 2H-WSe2 contain a single peak doublet arising from a Rayleigh surface mode propagating with a velocity of 1340 ± 20 m s⁻¹. This value is comparable to that estimated from density functional theory calculations and also to those for the transition metal diselenides 2H-TaSe2 and 2H-NbSe2. Unlike these two materials, however, peaks arising from scattering via the elasto-optic mechanism were not observed in Brillouin spectra of WSe2 despite its lower opacity.
The intrinsic B-mode polarisation of the Cosmic Microwave Background
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fidler, Christian; Pettinari, Guido W.; Crittenden, Robert
2014-07-01
We estimate the B-polarisation induced in the Cosmic Microwave Background by the non-linear evolution of density perturbations. Using the second-order Boltzmann code SONG, our analysis incorporates, for the first time, all physical effects at recombination. We also include novel contributions from the redshift part of the Boltzmann equation and from the bolometric definition of the temperature in the presence of polarisation. The remaining line-of-sight terms (lensing and time-delay) have previously been studied and must be calculated non-perturbatively. The intrinsic B-mode polarisation is present independent of the initial conditions and might contaminate the signal from primordial gravitational waves. We find this contamination to be comparable to a primordial tensor-to-scalar ratio of r ≅ 10⁻⁷ at the angular scale ℓ ≅ 100, where the primordial signal peaks, and r ≅ 5 × 10⁻⁵ at ℓ ≅ 700, where the intrinsic signal peaks. Therefore, we conclude that the intrinsic B-polarisation from second-order effects is not likely to contaminate future searches of primordial gravitational waves.
Oliviero, T; Verkerk, R; Van Boekel, M A J S; Dekker, M
2014-11-15
Broccoli belongs to the Brassicaceae, a plant family consisting of widely eaten vegetables containing high concentrations of glucosinolates. Enzymatic hydrolysis of glucosinolates by endogenous myrosinase (MYR) can form isothiocyanates with health-promoting activities. The effect of water content (WC) and temperature on MYR inactivation in broccoli was investigated. Broccoli was freeze-dried to obtain batches with WC between 10% and 90% (aw from 0.10 to 0.96). These samples were incubated for various times at different temperatures (40-70°C) and MYR activity was measured. The initial MYR inactivation rates were estimated with a first-order reaction kinetic model. MYR inactivation rate constants were lowest in the driest samples (10% WC) at all studied temperatures. Samples with 67% and 90% WC showed initial inactivation rate constants all of the same order of magnitude. Samples with 31% WC showed intermediate initial inactivation rate constants. These results are useful for optimising drying processes to produce dried broccoli with optimal MYR retention for human health. Copyright © 2014 Elsevier Ltd. All rights reserved.
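The first-order inactivation fit used above can be sketched as follows (synthetic data with a hypothetical rate constant, not the paper's measurements): residual activity follows A(t) = A0·exp(-kt), so k is the negative slope of ln(A/A0) versus time.

```python
import numpy as np

def first_order_k(t_min, activity, a0):
    """Fit A(t) = A0*exp(-k*t); k is the negative slope of ln(A/A0) vs t."""
    slope, _ = np.polyfit(t_min, np.log(np.asarray(activity) / a0), 1)
    return -slope

# Hypothetical incubation series at one temperature and water content
t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])     # minutes
a = 100.0 * np.exp(-0.05 * t)                  # synthetic residual activity
k_est = first_order_k(t, a, 100.0)             # recovers 0.05 per minute
```

Repeating such a fit for each temperature/WC batch yields the inactivation rate constants compared in the study.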
Interface conditions for domain decomposition with radical grid refinement
NASA Technical Reports Server (NTRS)
Scroggs, Jeffrey S.
1991-01-01
Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via a domain decomposition. The method is derived and justified via singular perturbation techniques.
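A minimal sketch of such a first-stage, first-order method (for linear advection, where the Godunov scheme reduces to simple upwinding; all parameters are illustrative):

```python
import numpy as np

def upwind_advection(u0, a, dx, dt, steps):
    """First-order upwind scheme for u_t + a*u_x = 0 with a > 0; this is
    what the Godunov scheme reduces to for linear advection."""
    u = np.asarray(u0, dtype=float).copy()
    c = a * dt / dx                     # CFL number; need c <= 1 for stability
    for _ in range(steps):
        u[1:] -= c * (u[1:] - u[:-1])   # one-sided (upwind) difference
        # u[0] is held fixed as an inflow boundary condition
    return u

x = np.linspace(0.0, 1.0, 101)
u0 = np.where(x < 0.5, 1.0, 0.0)        # step (Riemann-type) initial data
u = upwind_advection(u0, a=1.0, dx=0.01, dt=0.005, steps=40)
# The step propagates right at speed a, smeared by numerical viscosity
```

The numerical viscosity that smears the step is exactly why such first approximations are then corrected by the internal-layer stages of the decomposition.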
Enhanced economic connectivity to foster heat stress-related losses.
Wenz, Leonie; Levermann, Anders
2016-06-01
Assessing global impacts of unexpected meteorological events in an increasingly connected world economy is important for estimating the costs of climate change. We show that since the beginning of the 21st century, the structural evolution of the global supply network has been such as to foster an increase of climate-related production losses. We compute first- and higher-order losses from heat stress-induced reductions in productivity under changing economic and climatic conditions between 1991 and 2011. Since 2001, the economic connectivity has augmented in such a way as to facilitate the cascading of production loss. The influence of this structural change has dominated over the effect of the comparably weak climate warming during this decade. Thus, particularly under future warming, the intensification of international trade has the potential to amplify climate losses if no adaptation measures are taken.
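A simple Leontief-style sketch (a hypothetical three-sector dependence matrix, not the study's actual supply-network model or data) illustrates how first-order losses can cascade into higher-order ones along supply links:

```python
import numpy as np

# Hypothetical dependence matrix: D[i, j] = fraction of sector i's output
# lost per unit of output lost in supplier sector j (illustrative values).
D = np.array([[0.0, 0.3, 0.1],
              [0.2, 0.0, 0.4],
              [0.1, 0.2, 0.0]])

direct = np.array([5.0, 0.0, 0.0])   # first-order heat-stress loss, sector 0

# Cascading: total = direct + D@direct + D@D@direct + ... ; when the series
# converges this equals the Leontief-style closed form below.
total = np.linalg.solve(np.eye(3) - D, direct)
higher_order = total - direct        # loss added by network propagation
```

Strengthening the entries of the dependence matrix (denser connectivity) inflates the higher-order term even when the direct loss is unchanged, which is the structural effect the study quantifies.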
Verification and Calibration of a Reduced Order Wind Farm Model by Wind Tunnel Experiments
NASA Astrophysics Data System (ADS)
Schreiber, J.; Nanos, E. M.; Campagnolo, F.; Bottasso, C. L.
2017-05-01
In this paper an adaptation of the FLORIS approach is considered that models the wind flow and power production within a wind farm. In preparation to the use of this model for wind farm control, this paper considers the problem of its calibration and validation with the use of experimental observations. The model parameters are first identified based on measurements performed on an isolated scaled wind turbine operated in a boundary layer wind tunnel in various wind-misalignment conditions. Next, the wind farm model is verified with results of experimental tests conducted on three interacting scaled wind turbines. Although some differences in the estimated absolute power are observed, the model appears to be capable of identifying with good accuracy the wind turbine misalignment angles that, by deflecting the wake, lead to maximum power for the investigated layouts.
Population pharmacokinetics of aripiprazole in healthy Korean subjects.
Jeon, Ji-Young; Chae, Soo-Wan; Kim, Min-Gul
2016-04-01
Aripiprazole is widely used to treat schizophrenia and bipolar disorder. This study aimed to develop a combined population pharmacokinetic model for aripiprazole in healthy Korean subjects and to identify the significant covariates in the pharmacokinetic variability of aripiprazole. Aripiprazole plasma concentrations and demographic data were collected retrospectively from previous bioequivalence studies conducted at Chonbuk National University Hospital. Informed consent was obtained from subjects for cytochrome P450 (CYP) genotyping. The population pharmacokinetic parameters of aripiprazole were estimated using nonlinear mixed-effect modeling with the first-order conditional estimation with interaction (FOCE-I) method. The effects of age, sex, weight, height, and CYP genotype were assessed as covariates. A total of 1,508 samples from 88 subjects in three bioequivalence studies were collected. A two-compartment model was adopted, and the final population model showed that CYP2D6 genotype polymorphism, height and weight significantly affect aripiprazole disposition. The bootstrap and visual predictive check results showed that the accuracy of the pharmacokinetic model was acceptable. A population pharmacokinetic model of aripiprazole was thus developed for Korean subjects, with CYP2D6 genotype polymorphism, weight, and height included as significant factors affecting aripiprazole disposition. The population pharmacokinetic parameters estimated in the present study may be useful for individualizing clinical dosages and for studying the concentration-effect relationship of the drug.
Modeling coupled sorption and transformation of 17β-estradiol-17-sulfate in soil-water systems
NASA Astrophysics Data System (ADS)
Bai, Xuelian; Shrestha, Suman L.; Casey, Francis X. M.; Hakk, Heldur; Fan, Zhaosheng
2014-11-01
Animal manure is the primary source of exogenous free estrogens in the environment, which are known endocrine-disrupting chemicals that can disrupt the reproductive systems of organisms. Conjugated estrogens can act as precursors to free estrogens, which may increase the total estrogenicity in the environment. In this study, a comprehensive model was used to simultaneously simulate the coupled sorption and transformation of a sulfate estrogen conjugate, 17β-estradiol-17-sulfate (E2-17S), in various soil-water systems (non-sterile/sterile; topsoil/subsoil). The simulated processes included multiple transformation pathways (i.e., hydroxylation, hydrolysis, and oxidation) and mass transfer between the aqueous, reversibly sorbed, and irreversibly sorbed phases of all soils for E2-17S and its metabolites. The conceptual model was built from a series of linear sorption and first-order transformation expressions. The model was solved inversely using a finite-difference scheme to estimate process parameters, and a global optimization method was applied for the inverse analysis along with variable model restrictions to estimate 36 parameters. The model provided a satisfactory simultaneous fit (R²adj = 0.93 and d = 0.87) of all the experimental data and reliable parameter estimates. This modeling study improved the understanding of the fate and transport of estrogen conjugates under various soil-water conditions.
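A stripped-down sketch of coupled linear sorption and first-order transformation (hypothetical parameters; the paper's model additionally includes irreversible sorption and multiple metabolites): with instantaneous linear sorption, the sorbed storage retards the effective first-order decay of the aqueous phase.

```python
import numpy as np

def aqueous_decay(c0, kd, k, rho_b_over_theta, dt, steps):
    """Aqueous concentration under instantaneous linear sorption (S = Kd*C)
    plus first-order transformation of the aqueous phase.  Sorbed storage
    retards the effective decay: dC/dt = -k*C/R, R = 1 + (rho_b/theta)*Kd."""
    R = 1.0 + rho_b_over_theta * kd     # retardation factor
    c = np.empty(steps)
    c_now = c0
    for i in range(steps):
        c_now -= dt * k * c_now / R     # explicit Euler step
        c[i] = c_now
    return c

# Hypothetical parameters (not the paper's fitted values)
conc = aqueous_decay(c0=1.0, kd=2.0, k=0.5,
                     rho_b_over_theta=0.25, dt=0.01, steps=1000)
```

Larger Kd slows the apparent disappearance from solution even though the intrinsic transformation rate k is unchanged, which is why sorption and transformation must be estimated jointly.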
Linear Covariance Analysis and Epoch State Estimators
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Carpenter, J. Russell
2014-01-01
This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
Matrix form of Legendre polynomials for solving linear integro-differential equations of high order
NASA Astrophysics Data System (ADS)
Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.
2017-04-01
This paper presents an effective approximate solution of high-order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis functions to estimate the unknown function. Matrix operations on Legendre polynomials are used to transform the FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are applied to transfer the matrix equation into a system of linear algebraic equations, which is then solved by Gaussian elimination. The accuracy and validity of the method are discussed by solving two numerical examples and through comparisons with wavelet-based methods.
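The Gauss-Legendre quadrature step can be illustrated in isolation (a generic sketch, not the paper's implementation): n-point Gauss-Legendre quadrature integrates polynomials up to degree 2n - 1 exactly after mapping the nodes from [-1, 1] to the integration interval.

```python
import numpy as np

def gauss_legendre_integral(f, a, b, n):
    """Approximate the integral of f over [a, b] using n-point Gauss-Legendre
    quadrature, mapping the reference nodes from [-1, 1] to [a, b]."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(weights * f(x))

# 2-point Gauss-Legendre is exact for polynomials of degree <= 3:
approx = gauss_legendre_integral(lambda x: x**3 + x, 0.0, 1.0, n=2)  # 3/4 exactly
```

Within the method described above, such quadrature evaluates the integral terms of the FVIDE at the collocation points.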
Optimization of the lithium/thionyl chloride battery
NASA Technical Reports Server (NTRS)
White, Ralph E.
1989-01-01
A 1-D mathematical model for the lithium/thionyl chloride primary cell is used in conjunction with a parameter estimation technique to estimate the electro-kinetic parameters of this electrochemical system. The electro-kinetic parameters include the anodic transfer coefficient and exchange current density of the lithium oxidation, α_a,1 and i_o,1,ref; the cathodic transfer coefficient and the effective exchange current density of the thionyl chloride reduction, α_c,2 and a°i_o,2,ref; and a morphology parameter, ξ. The parameter estimation is performed on simulated data first in order to gain confidence in the method. Data reported in the literature for a high-rate discharge of an experimental lithium/thionyl chloride cell are then used for the analysis.
Putti, Fernando Ferrari; Filho, Luis Roberto Almeida Gabriel; Gabriel, Camila Pires Cremasco; Neto, Alfredo Bonini; Bonini, Carolina Dos Santos Batista; Rodrigues Dos Reis, André
2017-06-01
This study aimed to develop a fuzzy mathematical model to estimate the impacts of global warming on the vitality of Laelia purpurata growing under different Brazilian environmental conditions. To develop the mathematical model, temperature, humidity and shading conditions were considered as the intrinsic factors determining plant vitality. The fuzzy model results could accurately predict the optimal conditions for cultivation of Laelia purpurata at several sites in Brazil. Based on the fuzzy model results, we found that higher temperatures and a lack of proper shading can reduce the vitality of the orchids. The fuzzy mathematical model precisely detected the damaging effect of higher temperatures on plant vitality as a consequence of global warming. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Rapa, Giulia; Groppo, Chiara; Rolfo, Franco; Petrelli, Maurizio; Mosca, Pietro; Perugini, Diego
2017-11-01
The pressure, temperature, and timing (P-T-t) conditions at which CO2 was produced during the Himalayan prograde metamorphism have been constrained, focusing on the most abundant calc-silicate rock type in the Himalaya. A detailed petrological modeling of a clinopyroxene + scapolite + K-feldspar + plagioclase + quartz ± calcite calc-silicate rock allowed the identification and full characterization - for the first time - of different metamorphic reactions leading to the simultaneous growth of titanite and CO2 production. The results of thermometric determinations (Zr-in-Ttn thermometry) and U-Pb geochronological analyses suggest that, in the studied lithology, most titanite grains grew during two nearly consecutive episodes of titanite formation: a near-peak event at 730-740 °C, 10 kbar, 30-26 Ma, and a peak event at 740-765 °C, 10.5 kbar, 25-20 Ma. Both episodes of titanite growth are correlated with specific CO2-producing reactions and constrain the timing, duration and P-T conditions of the main CO2-producing events, as well as the amounts of CO2 produced (1.4-1.8 wt% of CO2). A first-order extrapolation of such CO2 amounts to the orogen scale provides metamorphic CO2 fluxes ranging between 1.4 and 19.4 Mt/yr; these values are of the same order of magnitude as the present-day CO2 fluxes degassed from spring waters located along the Main Central Thrust. We suggest that these metamorphic CO2 fluxes should be considered in any future attempts of estimating the global budget of non-volcanic carbon fluxes from the lithosphere.
Modelling cometabolic biotransformation of organic micropollutants in nitrifying reactors.
Fernandez-Fontaina, E; Carballa, M; Omil, F; Lema, J M
2014-11-15
Cometabolism is the ability of microorganisms to degrade non-growth substrates in the presence of primary substrates, and it is the main removal mechanism behind the biotransformation of organic micropollutants in wastewater treatment plants. In this paper, cometabolic Monod-type kinetics, linking biotransformation of micropollutants with primary substrate degradation, were applied to a highly enriched nitrifying activated sludge (NAS) reactor operated under different operational conditions (hydraulic retention time (HRT) and nitrifying activity). A dynamic model of the bioreactor was built taking into account biotransformation, sorption and volatilization. The micropollutant transformation capacity (Tc), the half-saturation constant (Ksc) and the solid-liquid partitioning coefficient (Kd) of several organic micropollutants were estimated at 25 °C using an optimization algorithm to fit experimental data to the proposed model with the cometabolic Monod-type biotransformation kinetics. The cometabolic Monod-type kinetic model was validated under different HRTs (1.0-3.7 d) and nitrification rates (0.12-0.45 g N/g VSS d), describing more accurately the fate of those compounds affected by the biological activity of nitrifiers (ibuprofen, naproxen, erythromycin and roxithromycin) compared to the commonly applied pseudo-first-order micropollutant biotransformation kinetics, which do not link biotransformation of micropollutants to consumption of primary substrate. Furthermore, in contrast to the pseudo-first-order biotransformation constant (k(biol)), the proposed cometabolic kinetic coefficients are independent of operational conditions such as the applied nitrogen loading rate. Also, the influence of the kinetic parameters on the biotransformation efficiency of NAS reactors, defined as the relative amount of the total inlet micropollutant load being biotransformed, was assessed considering different HRTs and nitrification rates. Copyright © 2014 Elsevier Ltd. All rights reserved.
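One common way to write such cometabolic Monod-type kinetics can be sketched as follows (a hedged illustration with hypothetical parameter values, not the paper's calibrated model): the micropollutant removal rate is tied to the primary-substrate (nitrification) rate through a transformation capacity Tc and a half-saturation constant Ksc.

```python
import numpy as np

def cometabolic_rate(c, r_substrate, tc, ksc):
    """Monod-type cometabolic rate: micropollutant removal proportional to
    the primary-substrate degradation rate r_substrate, scaled by the
    transformation capacity Tc and the saturation term C/(Ksc + C)."""
    return tc * r_substrate * c / (ksc + c)

# Forward-Euler integration of dC/dt = -rate at a fixed nitrification rate
dt, steps = 0.01, 300                    # 3 d simulated, 0.01 d time step
c = 10.0                                 # micropollutant conc. (arbitrary units)
r_n = 0.3                                # g N/(g VSS d), within the tested range
tc, ksc = 2.0, 5.0                       # hypothetical Tc and Ksc values
for _ in range(steps):
    c -= dt * cometabolic_rate(c, r_n, tc, ksc)
# Unlike pseudo-first-order kinetics, the removal rate here scales with r_n
```

Because the rate is proportional to r_substrate, the fitted Tc and Ksc stay constant when the nitrogen loading changes, which is the advantage claimed over k(biol).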
NASA Astrophysics Data System (ADS)
Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria
2016-11-01
Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model are time varying due to operating condition variations and battery aging. Existing co-estimation methods address this model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference may arise from the integrated framework and compromise numerical stability and accuracy. This paper therefore proposes decoupling model identification from state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel joint estimator based on the extended Kalman filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively compresses the filter order, which leads to substantial improvements in computational efficiency and numerical stability. Lab-scale experiments on a vanadium redox flow battery show that the proposed method is accurate, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with some existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
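The RLS stage of such a decoupled scheme can be sketched generically (a toy linear regression, not the battery model itself; the forgetting factor and data are illustrative):

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least squares update with forgetting factor lam:
    theta = parameter estimate, P = covariance, phi = regressor, y = output."""
    phi = phi.reshape(-1, 1)
    gain = P @ phi / (lam + phi.T @ P @ phi)
    theta = theta + (gain * (y - phi.T @ theta)).ravel()
    P = (P - gain @ phi.T @ P) / lam
    return theta, P

# Toy streaming regression y = 2*x1 - x2 + noise (true parameters [2, -1])
rng = np.random.default_rng(0)
theta, P = np.zeros(2), 1e3 * np.eye(2)
for _ in range(500):
    phi = rng.normal(size=2)
    y = 2.0 * phi[0] - 1.0 * phi[1] + 0.01 * rng.normal()
    theta, P = rls_step(theta, P, phi, y)
# theta converges close to [2, -1]; the forgetting factor lets the estimate
# track slow parameter drift, e.g. from battery aging
```

In the decoupled design, the identified parameters feed the EKF-based SOC/capacity estimator without the state estimate feeding back into the identification, avoiding cross interference.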
Self-Organized Bistability Associated with First-Order Phase Transitions
NASA Astrophysics Data System (ADS)
di Santo, Serena; Burioni, Raffaella; Vezzani, Alessandro; Muñoz, Miguel A.
2016-06-01
Self-organized criticality elucidates the conditions under which physical and biological systems tune themselves to the edge of a second-order phase transition, with scale invariance. Motivated by the empirical observation of bimodal distributions of activity in neuroscience and other fields, we propose and analyze a theory for the self-organization to the point of phase coexistence in systems exhibiting a first-order phase transition. It explains the emergence of regular avalanches with attributes of scale invariance that coexist with huge anomalous ones, with realizations in many fields.
NASA Technical Reports Server (NTRS)
Ferguson, T. V.; Havskjold, G. L.; Rojas, L.
1988-01-01
A laser two-focus velocimeter was used in an open-loop water test facility in order to map the flowfield downstream of the SSME's high-pressure oxidizer turbopump first-stage turbine nozzle; attention was given to the effects of the upstream strut-downstream nozzle configuration on the flow at the rotor inlet, in order to estimate dynamic loads on the first-stage rotor blades. Velocity and flow angles were plotted as a function of circumferential position, and were found to clearly display the periodic behavior of the wake flow field. The influence of the upstream centerbody-supporting struts on the vane nozzle wake pattern was evident.
Reduced order models for prediction of groundwater quality impacts from CO₂ and brine leakage
Zheng, Liange; Carroll, Susan; Bianchi, Marco; ...
2014-12-31
A careful assessment of the risk associated with geologic CO₂ storage is critical to the deployment of large-scale storage projects. A potential risk is the deterioration of groundwater quality caused by the leakage of CO₂ and brine from deep subsurface reservoirs. In probabilistic risk assessment studies, numerical modeling is the primary tool employed to assess risk. However, the application of traditional numerical models to fully evaluate the impact of CO₂ leakage on groundwater can be computationally complex, demanding large processing times and resources, and involving large uncertainties. As an alternative, reduced order models (ROMs) can be used as highly efficient surrogates for the complex process-based numerical models. In this study, we represent the complex hydrogeological and geochemical conditions in a heterogeneous aquifer and the subsequent risk by developing and using two separate ROMs. The first ROM is derived from a model that accounts for the heterogeneous flow and transport conditions in the presence of complex leakage functions for CO₂ and brine. The second ROM is obtained from models that feature similar, but simplified, flow and transport conditions and allow for a more complex representation of all relevant geochemical reactions. To quantify possible impacts on groundwater aquifers, the basic risk metric is taken as the aquifer volume in which the water quality may be affected by an underlying CO₂ storage project. The integration of the two ROMs provides an estimate of the impacted aquifer volume taking into account uncertainties in flow, transport and chemical conditions. These two ROMs can be linked in a comprehensive system-level model for quantitative risk assessment of the deep storage reservoir, wellbore leakage, and shallow aquifer impacts, to assess the collective risk of CO₂ storage projects.
NASA Astrophysics Data System (ADS)
Summa, D.; Di Girolamo, P.; Stelitano, D.; Cacciani, M.
2013-12-01
The planetary boundary layer (PBL) includes the portion of the atmosphere which is directly influenced by the presence of the earth's surface. Aerosol particles trapped within the PBL can be used as tracers to study the boundary-layer vertical structure and time variability. As a result, elastic backscatter signals collected by lidar systems can be used to determine the height and the internal structure of the PBL. The present analysis considers three different methods to estimate the PBL height. The first method is based on the determination of the first-order derivative of the logarithm of the range-corrected elastic lidar signals. Estimates of the PBL height for specific case studies obtained through this approach are compared with simultaneous estimates from the potential temperature profiles measured by radiosondes launched during lidar operation. Additional estimates of the boundary layer height are based on the determination of the first-order derivative of the range-corrected rotational Raman lidar signals. This latter approach proves to be applicable also in the afternoon-evening decaying phase of the PBL, when the effectiveness of the approach based on the elastic lidar signals may be compromised or altered by the presence of the residual layer. Results from these different approaches are compared and discussed in the paper, with a specific focus on selected case studies collected by the University of Basilicata Raman lidar system BASIL during the Convective and Orographically-induced Precipitation Study (COPS).
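The first method reduces to locating the steepest negative gradient of the logarithm of the range-corrected signal, d/dz ln(P·z²). A minimal sketch on a synthetic profile (the sigmoid aerosol layer and the altitude grid are assumptions for illustration, not BASIL data):

```python
import numpy as np

def pbl_height(z, P):
    """Estimate the PBL height as the altitude of the minimum (steepest
    negative) first-order derivative of ln(range-corrected signal)."""
    rcs = np.log(P * z**2)          # range-corrected signal, log scale
    grad = np.gradient(rcs, z)      # first-order derivative in z
    return z[np.argmin(grad)]

# Synthetic profile: well-mixed aerosol layer with a smooth transition
# to the cleaner free troposphere around ~1.5 km
z = np.linspace(100.0, 4000.0, 400)                       # altitude, m
beta = 1.0 + 9.0 / (1.0 + np.exp((z - 1500.0) / 100.0))  # backscatter
P = beta / z**2                                           # lidar signal
print(pbl_height(z, P))
```

On real signals the derivative is usually smoothed first; this sketch omits noise handling.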
NASA Astrophysics Data System (ADS)
Lin, S. T.; Liou, T. S.
2017-12-01
Numerical simulation of groundwater flow in anisotropic aquifers usually suffers from a lack of accuracy in calculating groundwater flux across grid blocks. Conventional two-point flux approximation (TPFA) can only obtain the flux normal to the grid interface and completely neglects the component parallel to it. Furthermore, the hydraulic gradient in a grid block estimated from TPFA can only poorly represent the hydraulic condition near the intersection of grid blocks. These disadvantages are further exacerbated when the principal axes of hydraulic conductivity, the global coordinate system, and the grid boundary are not parallel to one another. In order to refine the estimation of the in-grid hydraulic gradient, several multiple-point flux approximation (MPFA) methods have been developed for two-dimensional groundwater flow simulations. For example, the MPFA-O method uses the hydraulic head at the junction node as an auxiliary variable, which is then eliminated using the head and flux continuity conditions. In this study, a three-dimensional MPFA method is developed for numerical simulation of groundwater flow in three-dimensional and strongly anisotropic aquifers. This new MPFA method first discretizes the simulation domain into hexahedrons, each of which is further decomposed into a number of tetrahedrons. The 2D MPFA-O method is then extended to these tetrahedrons, using the unknown head at the intersection of hexahedrons as an auxiliary variable, along with the head and flux continuity conditions, to solve for the head at the center of each hexahedron. Numerical simulations using this new MPFA method have been successfully compared with those obtained from a modified version of TOUGH2.
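The TPFA flux that the abstract criticizes can be written in a few lines: a harmonic-mean conductivity times the normal head gradient across one interface. By construction it carries no information about flow parallel to the interface, the component that MPFA schemes are designed to recover. Parameter values below are hypothetical:

```python
def tpfa_flux(K1, K2, h1, h2, dx, area=1.0):
    """Two-point flux approximation across one grid interface:
    harmonic-mean hydraulic conductivity times the normal head
    gradient. The tangential flow component is ignored."""
    K_h = 2.0 * K1 * K2 / (K1 + K2)   # harmonic mean of cell conductivities
    return -K_h * area * (h2 - h1) / dx

# Uniform K and a linear head drop recover the Darcy flux q = -K dh/dx
print(tpfa_flux(K1=1e-4, K2=1e-4, h1=10.0, h2=9.0, dx=5.0))  # 2e-05 m/s
```

For anisotropic K whose principal axes are not aligned with the grid, the true flux has an extra cross term that this two-point stencil cannot represent, which is the motivation for the multi-point construction.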
Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun
2017-09-19
In order to reduce the computational complexity and improve the pitch/roll estimation accuracy of a low-cost attitude heading reference system (AHRS) under conditions of magnetic distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm combines two-step geometrically-intuitive correction (TGIC) with the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme is used to make the current estimate of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of the measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable to attitude estimation under various dynamic conditions.
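The TGIC scheme itself is specific to the paper, but the reason pitch/roll can be made immune to magnetic distortion is generic: those two angles are observable from the accelerometer's gravity vector alone, and the magnetometer is needed only for yaw. A standard static-case sketch (not the paper's algorithm):

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    """Pitch and roll (radians) from a measured gravity vector in the
    static case. No magnetometer input, so magnetic distortion cannot
    affect these two angles."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

# A level sensor measures gravity along +z: zero pitch and roll
print(pitch_roll_from_accel(0.0, 0.0, 9.81))
```

Under dynamics, the accelerometer also senses linear acceleration, which is why a Kalman filter blending gyroscope propagation with such corrections is used in practice.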
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benkovitz, C.M.
Sulfur emissions from volcanoes are located in areas of volcanic activity, are extremely variable in time, and can be released anywhere from ground level to the stratosphere. Previous estimates of global sulfur emissions from all sources by various authors have included estimates for emissions from volcanic activity. In general, these global estimates of sulfur emissions from volcanoes are given as global totals for an "average" year. A project has been initiated at Brookhaven National Laboratory to compile inventories of sulfur emissions from volcanoes. In order to complement the GEIA inventories of anthropogenic sulfur emissions, which represent conditions circa specific years, sulfur emissions from volcanoes are being estimated for the years 1985 and 1990.
Hydrologic Engineering in Planning,
1981-04-01
…through abstraction of losses 3) Transform precipitation excess to streamflow 4) Estimate other contributions in order to obtain the total runoff…similar to those of surface entry, transmission ability and storage capacity and are illustrated in Figure 4.3. The initial losses are the losses that… [Figure residue: plots of uniform losses and soil transmission rate versus time under wet and dry antecedent conditions, for average soil characteristics.]
Evidential analysis of difference images for change detection of multitemporal remote sensing images
NASA Astrophysics Data System (ADS)
Chen, Yin; Peng, Lijuan; Cremers, Armin B.
2018-03-01
In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer theory of evidence (DST). In most unsupervised change detection methods, the probability distribution of the difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we first develop an evidence-theory-based EM method (EEM) which incorporates spatial contextual information in EM by iteratively fusing the belief assignments of neighboring pixels to the central pixel. Second, an evidential labeling method in the sense of maximizing the a posteriori probability (MAP) is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of a difference image. Then it iteratively fuses class conditional information and spatial contextual information, and updates labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.
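The mixture-model baseline that EEM improves upon can be sketched as plain EM for a two-component Gaussian mixture fitted to difference-image values ("unchanged" vs. "changed" pixels). The spatial belief fusion of EEM is omitted here, and the synthetic data are hypothetical:

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """EM for a two-component Gaussian mixture (no spatial term)."""
    mu = np.array([x.min(), x.max()], dtype=float)       # spread-out init
    sig = np.array([x.std(), x.std()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: per-pixel responsibilities for each component
        pdf = w / (sig * np.sqrt(2 * np.pi)) * \
            np.exp(-(x[:, None] - mu) ** 2 / (2 * sig ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means, standard deviations
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return w, mu, sig

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 800),    # "unchanged" pixels
                    rng.normal(6, 1, 200)])   # "changed" pixels
w, mu, sig = em_two_gaussians(x)
print(mu.round(1))
```

EEM replaces the purely per-pixel E-step with a fusion of the belief assignments of each pixel's neighbors, which is what suppresses the spurious alarms mentioned in the abstract.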
Stochastic modeling of macrodispersion in unsaturated heterogeneous porous media. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeh, T.C.J.
1995-02-01
Spatial heterogeneity of geologic media leads to uncertainty in predicting both flow and transport in the vadose zone. In this work an efficient and flexible, combined analytical-numerical Monte Carlo approach is developed for the analysis of steady-state flow and transient transport processes in highly heterogeneous, variably saturated porous media. The approach is also used to investigate the validity of linear, first-order analytical stochastic models. With the Monte Carlo analysis, accurate estimates of the ensemble conductivity, head, velocity, and concentration mean and covariance are obtained; the statistical moments describing displacement of solute plumes, solute breakthrough at a compliance surface, and time of first exceedance of a given solute flux level are analyzed; and the cumulative probability density functions for solute flux across a compliance surface are investigated. The results of the Monte Carlo analysis show that for very heterogeneous flow fields, and particularly in anisotropic soils, the linearized analytical predictions of soil water tension and soil moisture flux become erroneous. Analytical, linearized Lagrangian transport models also overestimate both the longitudinal and the transverse spreading of the mean solute plume in very heterogeneous soils and in dry soils. A combined analytical-numerical conditional simulation algorithm is also developed to estimate the impact of in-situ soil hydraulic measurements on reducing the uncertainty of concentration and solute flux predictions.
NASA Astrophysics Data System (ADS)
Simoncini, E.; Delgado-Bonal, A.; Martin-Torres, F. J.
2012-12-01
Although atmospheric disequilibrium was proposed during the 1960s as a sign of habitability of Earth and, in general, of a planet [1, 2], no such calculation had been carried out until now. In order to provide a first evaluation of Earth's atmospheric disequilibrium, we have developed a new formulation to account for the thermodynamic conditions of a wide range of planetary atmospheres, from terrestrial planets to icy satellites to hot exoplanets. Using this new formulation, we estimate the departure of different planetary atmospheres from their equilibrium conditions, computing the dissipation of free energy due to all chemical processes [3]. In particular, we focus on the effect of our proposed changes on O2/CO2 chemistry (comparing the atmosphere of the satellite Io and Earth's mesosphere), N2 (Venus, Earth and Titan) and H2O stability on terrestrial planets and exoplanets. Our results have an impact on the definition of the Habitable Zone by considering appropriate physical-chemical conditions of planetary atmospheres. References [1] J. E. Lovelock, A physical basis for life detection experiments. Nature, 207, 568-570 (1965). [2] J. E. Lovelock, Thermodynamics and the recognition of alien biospheres. Proc. R. Soc. Lond., B. 189, 167-181 (1975). [3] Simoncini E., Delgado-Bonal A., Martin-Torres F.J., Accounting thermodynamic conditions in chemical models of planetary atmospheres. Submitted to Astrophysical Journal.
NASA Technical Reports Server (NTRS)
Tessler, A.; Annett, M. S.; Gendron, G.
2001-01-01
A {1,2}-order theory for laminated composite and sandwich plates is extended to include thermoelastic effects. The theory incorporates all three-dimensional strains and stresses. Mixed-field assumptions are introduced which include linear in-plane displacements, parabolic transverse displacement and shear strains, and a cubic distribution of the transverse normal stress. Least-squares strain compatibility conditions and exact traction boundary conditions are enforced to yield higher polynomial degree distributions for the transverse shear strains and transverse normal stress through the plate thickness. The principle of virtual work is used to derive a 10th-order system of equilibrium equations and associated Poisson boundary conditions. The predictive capability of the theory is demonstrated using a closed-form analytic solution for a simply-supported rectangular plate subjected to a linearly varying temperature field across the thickness. Several thin and moderately thick laminated composite and sandwich plates are analyzed. Numerical comparisons are made with corresponding solutions of the first-order shear deformation theory and three-dimensional elasticity theory. These results, which closely approximate the three-dimensional elasticity solutions, demonstrate that through-the-thickness deformations, even in relatively thin and especially in thick composite and sandwich laminates, can be significant under severe thermal gradients. The {1,2}-order kinematic assumptions ensure an overall accurate theory that is in general superior and, in some cases, equivalent to the first-order theory.
NASA Astrophysics Data System (ADS)
Fernández, Eduardo F.; Almonacid, Florencia; Sarmah, Nabin; Mallick, Tapas; Sanchez, Iñigo; Cuadra, Juan M.; Soria-Moya, Alberto; Pérez-Higueras, Pedro
2014-09-01
A model based on easily obtained atmospheric parameters and on a simple linear mathematical expression has been developed at the Centre of Advanced Studies in Energy and Environment in southern Spain. The model predicts the maximum power of an HCPV module as a function of direct normal irradiance, air temperature and air mass. At present, the proposed model has only been validated in southern Spain, and its performance in locations with different atmospheric conditions remains unknown. In order to address this issue, several HCPV modules have been measured at two locations with climate conditions different from those of southern Spain: the Environment and Sustainability Institute in southern UK and the National Renewable Energy Center in northern Spain. Results show that the model gives an adequate match between actual and estimated data, with an RMSE lower than 3.9%, at locations with different climate conditions.
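The published coefficients of the model are not reproduced here, but the "simple linear expression" idea can be sketched as an ordinary least-squares fit of maximum power against DNI, air temperature, and air mass. All coefficients and the synthetic data below are hypothetical:

```python
import numpy as np

# Generate synthetic operating conditions and module power with
# hypothetical linear coefficients plus measurement noise
rng = np.random.default_rng(2)
dni = rng.uniform(700, 1000, 200)    # direct normal irradiance, W/m^2
t_air = rng.uniform(10, 35, 200)     # air temperature, deg C
am = rng.uniform(1.0, 3.0, 200)      # air mass, dimensionless
p_max = 0.12 * dni - 0.5 * t_air - 8.0 * am + 20.0 \
    + rng.normal(0, 1.0, 200)        # module maximum power, W

# Least-squares fit of P_max = a*DNI + b*T_air + c*AM + d
A = np.column_stack([dni, t_air, am, np.ones_like(dni)])
coef, *_ = np.linalg.lstsq(A, p_max, rcond=None)
rmse = np.sqrt(np.mean((A @ coef - p_max) ** 2))
print(coef.round(2), round(rmse, 2))
```

The appeal of such a model is exactly what the abstract emphasizes: all three predictors are routinely available, so no spectral or cell-temperature measurements are needed.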
NASA Technical Reports Server (NTRS)
1978-01-01
The author has identified the following significant results. LACIE acreage estimates were in close agreement with SRS estimates, and an operational system with a 14 day LANDSAT data turnaround could have produced an accurate acreage estimate (one which satisfied the 90/90 criterion) 1 1/2 to 2 months before harvest. Low yield estimates, resulting from agromet conditions not taken into account in the yield models, caused production estimates to be correspondingly low. However, both yield and production estimates satisfied the LACIE 90/90 criterion for winter wheat in the yardstick region.
Covariate-adjusted Spearman's rank correlation with probability-scale residuals.
Liu, Qi; Li, Chun; Wanga, Valentine; Shepherd, Bryan E
2018-06-01
It is desirable to adjust Spearman's rank correlation for covariates, yet existing approaches have limitations. For example, the traditionally defined partial Spearman's correlation does not have a sensible population parameter, and the conditional Spearman's correlation defined with copulas cannot be easily generalized to discrete variables. We define population parameters for both partial and conditional Spearman's correlation through concordance-discordance probabilities. The definitions are natural extensions of Spearman's rank correlation in the presence of covariates and are general for any orderable random variables. We show that they can be neatly expressed using probability-scale residuals (PSRs). This connection allows us to derive simple estimators. Our partial estimator for Spearman's correlation between X and Y adjusted for Z is the correlation of PSRs from models of X on Z and of Y on Z, which is analogous to the partial Pearson's correlation derived as the correlation of observed-minus-expected residuals. Our conditional estimator is the conditional correlation of PSRs. We describe estimation and inference, and highlight the use of semiparametric cumulative probability models, which allow preservation of the rank-based nature of Spearman's correlation. We conduct simulations to evaluate the performance of our estimators and compare them with other popular measures of association, demonstrating their robustness and efficiency. We illustrate our method in two applications, a biomarker study and a large survey.
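A minimal sketch of the partial estimator: compute PSRs from working models of X on Z and of Y on Z, then correlate them. For illustration, a normal linear working model is assumed here, so the PSR is 2·Φ(residual/σ) − 1; the paper advocates semiparametric cumulative probability models, which preserve the rank-based character:

```python
import numpy as np
from math import erf, sqrt

def psr_normal(v, z):
    """Probability-scale residuals 2*F(v|z) - 1 under a normal linear
    working model of v on z (an illustrative assumption)."""
    Z = np.column_stack([np.ones_like(z), z])
    beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
    resid = v - Z @ beta
    s = resid.std()
    # Phi(r/s) = 0.5 * (1 + erf(r / (s*sqrt(2))))
    return np.array([2 * 0.5 * (1 + erf(r / (s * sqrt(2)))) - 1
                     for r in resid])

def partial_spearman(x, y, z):
    """Correlation of the two PSR vectors: covariate-adjusted Spearman."""
    return np.corrcoef(psr_normal(x, z), psr_normal(y, z))[0, 1]

# X and Y are associated only through Z, so the adjusted correlation
# should be near zero even though the raw correlation is not
rng = np.random.default_rng(3)
z = rng.normal(size=2000)
x = z + rng.normal(size=2000)
y = z + rng.normal(size=2000)
print(round(partial_spearman(x, y, z), 2))
```

Replacing the normal working model with a cumulative probability model would follow the paper's recommended approach for discrete or skewed variables.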
NASA Astrophysics Data System (ADS)
Infante Corona, J. A.; Lakhankar, T.; Khanbilvardi, R.; Pradhanang, S. M.
2013-12-01
Stream flow estimation and flood prediction influenced by snowmelt processes have been studied for the past couple of decades because of their destructive potential and the associated economic losses and fatalities. Snow cover that was once fairly stable throughout the season is now observed to vary on shorter (daily and hourly) time scales, and rapid snowmelt can contribute to, or even cause, floods. Therefore, good estimates of snowpack properties on the ground are necessary for accurate prediction of these destructive events. The snow thermal model (SNTHERM) is a one-dimensional model that analyzes snowpack properties given the climatological conditions of a particular area. Gridded data from both in-situ meteorological observations and remote sensing will be produced using interpolation methods, from which snow water equivalent (SWE) and snowmelt estimates can be obtained. The Soil and Water Assessment Tool (SWAT) is a hydrological model capable of predicting runoff quantity and quality for a watershed given its main physical and hydrological properties. The results from SNTHERM will be used as input to SWAT in order to simulate runoff under snowmelt conditions. This project attempts to improve river discharge estimation by considering both excess rainfall runoff and the snowmelt process. A better estimation of snowpack properties and their evolution is expected. The coupled use of SNTHERM and SWAT, based on in-situ meteorological and remotely sensed data, will improve the temporal and spatial resolution of snowpack characterization and river discharge estimates, and thus flood prediction.
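SNTHERM solves a full one-dimensional energy balance; as a much simpler stand-in, the classic degree-day (temperature-index) model illustrates how daily melt estimates of the kind fed into a runoff model can be generated. The melt factor and temperature series below are hypothetical:

```python
def degree_day_melt(swe, t_mean, c_m=3.0, t_base=0.0):
    """Daily snowmelt (mm water equivalent) from a temperature-index
    model, M = c_m * (T - T_base), capped by the available SWE."""
    melt = max(0.0, c_m * (t_mean - t_base))
    melt = min(melt, swe)
    return melt, swe - melt

swe = 50.0                       # mm water equivalent on the ground
temps = [-2.0, 1.0, 4.0, 6.0]    # daily mean air temperature, deg C
for t in temps:
    melt, swe = degree_day_melt(swe, t)
print(swe)  # 50 - (0 + 3 + 12 + 18) = 17.0 mm remaining
```

An energy-balance model such as SNTHERM replaces the single melt factor with explicit radiation, turbulent, and conductive fluxes, which is what allows sub-daily melt dynamics to be resolved.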
Method and System for Temporal Filtering in Video Compression Systems
NASA Technical Reports Server (NTRS)
Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim
2011-01-01
Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining the first motion vector between the first pixel position in a first image and a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector from the first pixel position in the first image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of the fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, and is conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side information and the Slepian-Wolf code bits.
The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.
Kinetics of Methylmercury Production Revisited
Olsen, Todd A.; Muller, Katherine A.; Painter, Scott L.; ...
2018-01-27
Laboratory measurements of the biologically mediated methylation of mercury (Hg) to the neurotoxin monomethylmercury (MMHg) often exhibit kinetics that are inconsistent with first-order kinetic models. Using time-resolved measurements of filter-passing Hg and MMHg during methylation/demethylation assays, a multisite kinetic sorption model, and reanalyses of previous assays, we show in this paper that competing kinetic sorption reactions can lead to time-varying availability and apparent non-first-order kinetics in Hg methylation and MMHg demethylation. The new model, employing multisite kinetic sorption for Hg and MMHg, can describe the range of behaviors for time-resolved methylation/demethylation data reported in the literature, including those that exhibit non-first-order kinetics. Additionally, we show that neglecting competing sorption processes can confound analyses of methylation/demethylation assays, resulting in rate constant estimates that are systematically biased low. Finally, simulations of MMHg production and transport in a hypothetical periphyton biofilm bed illustrate the implications of our new model and demonstrate that methylmercury production may be significantly different than projected by single-rate first-order models.
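The effect of competing kinetic sorption can be sketched with a single sorption site (the paper's model is multisite, and all rate constants here are hypothetical): only aqueous Hg is methylated, so exchange with the sorbed pool makes the apparent MMHg production deviate from single-rate first-order behavior.

```python
def simulate(hg_aq=1.0, hg_s=0.0, km=0.1, k_ads=0.5, k_des=0.05,
             dt=0.01, t_end=24.0):
    """Euler integration of methylation with one kinetic sorption site.
    km: methylation rate of aqueous Hg; k_ads/k_des: sorption exchange.
    Returns final aqueous Hg, sorbed Hg, and cumulative MMHg produced."""
    mmhg = 0.0
    for _ in range(int(t_end / dt)):
        d_aq = (-km * hg_aq - k_ads * hg_aq + k_des * hg_s) * dt
        d_s = (k_ads * hg_aq - k_des * hg_s) * dt
        mmhg += km * hg_aq * dt   # only the aqueous pool is methylated
        hg_aq += d_aq
        hg_s += d_s
    return hg_aq, hg_s, mmhg

aq, s, mmhg = simulate()
print(round(aq + s + mmhg, 3))  # total mass is conserved: 1.0
```

Fitting a single first-order rate constant to the MMHg curve from such a system underestimates km, which is the bias the abstract describes.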
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly based on its definition. In this article, the iterative proportional fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints, as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
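The core IPF iteration is easiest to see on a two-way table: alternately rescale rows and columns until both target marginals are matched. The article applies the same idea to a multivariate facies probability constrained by bivariate marginals; the table and marginals below are hypothetical:

```python
import numpy as np

def ipf(table, row_marg, col_marg, iters=100):
    """Iterative proportional fitting on a 2D table: alternately scale
    rows and columns toward the target marginal totals."""
    t = table.astype(float).copy()
    for _ in range(iters):
        t *= (row_marg / t.sum(axis=1))[:, None]  # match row marginals
        t *= col_marg / t.sum(axis=0)             # match column marginals
    return t

init = np.ones((2, 3))  # uninformative initial estimate
t = ipf(init,
        row_marg=np.array([40.0, 60.0]),
        col_marg=np.array([20.0, 30.0, 50.0]))
print(t.round(1))
```

From a uniform start the fitted table is the independence table; a non-uniform start preserves the initial interaction structure while enforcing the imposed marginals, which is the property the facies application relies on.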
Context-Aided Sensor Fusion for Enhanced Urban Navigation
Martí, Enrique David; Martín, David; García, Jesús; de la Escalera, Arturo; Molina, José Manuel; Armingol, José María
2012-01-01
The deployment of Intelligent Vehicles in urban environments requires reliable positioning estimates for urban navigation. The inherent complexity of this kind of environment fosters the development of novel systems which should provide reliable and precise solutions to the vehicle. This article details an advanced GNSS/IMU fusion system based on a context-aided Unscented Kalman filter for navigation in urban conditions. The constrained non-linear filter is conditioned by a contextual knowledge module which reasons about sensor quality and driving context in order to adapt the filter to the situation, while at the same time carrying out a continuous estimation and correction of INS drift errors. An exhaustive analysis has been carried out with available data in order to characterize the behavior of the available sensors and take it into account in the developed solution. The performance is then analyzed with an extensive dataset containing representative situations. The proposed solution suits the use of fusion algorithms for deploying Intelligent Transport Systems in urban environments. PMID:23223080
The Spanish version of the Emotional Labour Scale (ELS): a validation study.
Picardo, Juan M; López-Fernández, Consuelo; Hervás, María José Abellán
2013-10-01
To validate the Spanish version of the Emotional Labour Scale (ELS), an instrument widely used to understand how professionals who work with people manage emotional labour in their daily work. An observational, cross-sectional, multicentre survey design was used. Nursing students and their clinical tutors (n=211) completed the self-reported ELS at the end of the clinical practice period. First-order and second-order confirmatory factor analyses (CFA) were estimated in order to test the factor structure of the scale. The CFA results confirm a factor structure with six first-order factors (duration, frequency, intensity, variety, surface acting and deep acting) and two larger second-order factors, named Demands (duration, frequency, intensity and variety) and Acting (surface acting and deep acting), establishing the validity of the Spanish version of the ELS. Copyright © 2012 Elsevier Ltd. All rights reserved.
Absence of first-order unbinding transitions of fluid and polymerized membranes
NASA Technical Reports Server (NTRS)
Grotehans, Stefan; Lipowsky, Reinhard
1990-01-01
Unbinding transitions of fluid and polymerized membranes are studied by renormalization-group (RG) methods. Two different RG schemes are used and found to give rather consistent results. The fixed-point structure of both RGs exhibits a complex behavior as a function of the decay exponent tau for the fluctuation-induced interaction of the membranes. For tau greater than a critical value tau(S2), interacting membranes can undergo first-order transitions even in the strong-fluctuation regime. The estimates obtained for tau(S2) imply, however, that both fluid and polymerized membranes unbind in a continuous way in the absence of lateral tension.
Concurrently adjusting interrelated control parameters to achieve optimal engine performance
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-12-01
Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
Analysis of Fluid Gauge Sensor for Zero or Microgravity Conditions using Finite Element Method
NASA Technical Reports Server (NTRS)
Deshpande, Manohar D.; Doiron, Terence A.
2007-01-01
In this paper the Finite Element Method (FEM) is presented for mass/volume gauging of a fluid in a tank under zero- or microgravity conditions. In this approach, the mutual capacitances between electrodes embedded inside the tank are first measured. Assuming trial medium properties, the mutual capacitances are also estimated using the FEM approach. Using a suitable nonlinear optimization, the assumed properties are then updated by minimizing the mean square error between the estimated and measured capacitance values. Numerical results are presented to validate the approach.
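The optimization loop described above can be sketched as follows. This is a deliberately simplified, hypothetical one-parameter stand-in for the FEM forward solver (not the paper's code): an assumed relative permittivity is scanned until the mean square error between estimated and measured capacitances is minimized.

```python
# Hypothetical inversion sketch: forward_capacitance() stands in for the FEM
# capacitance estimate, and a coarse parameter scan plays the role of the
# nonlinear optimizer updating the assumed medium property.

def forward_capacitance(eps_r, geometry_factor):
    """Stand-in for the FEM solver: capacitance scales with permittivity here."""
    return geometry_factor * eps_r

def mean_square_error(eps_r, measured):
    est = [forward_capacitance(eps_r, g) for g, _ in measured]
    return sum((e - m) ** 2 for e, (_, m) in zip(est, measured)) / len(measured)

# Synthetic "measurements" generated with eps_r = 2.5 (e.g. a partially filled tank)
geometries = [1e-11, 2e-11, 3e-11]
measured = [(g, forward_capacitance(2.5, g)) for g in geometries]

# Coarse scan followed by selection (a real code would use Gauss-Newton, etc.)
mse, eps_est = min((mean_square_error(e / 100.0, measured), e / 100.0)
                   for e in range(100, 501))
```

The scan recovers the permittivity that generated the synthetic data; in practice the optimizer would also update fluid-height or volume-fraction parameters.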
Nonlinear Estimation With Sparse Temporal Measurements
2016-09-01
Kalman filter, the extended Kalman filter (EKF) and unscented Kalman filter (UKF) are commonly used in practical application. The Kalman filter is an...optimal estimator for linear systems; the EKF and UKF are sub-optimal approximations of the Kalman filter. The EKF uses a first-order Taylor series...propagated covariance is compared for similarity with a Monte Carlo propagation. The similarity of the covariance matrices is shown to predict filter
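The first-order Taylor propagation and its comparison against Monte Carlo, as mentioned in the snippet, can be illustrated in one dimension (illustrative numbers and map, not the report's system):

```python
import random

# Propagate a Gaussian state through a nonlinear map f(x) = x**2 using the
# EKF's first-order Taylor approximation, then compare with a Monte Carlo
# estimate of the propagated mean and variance.

def f(x):
    return x * x

def f_jacobian(x):
    return 2.0 * x

mean, var = 3.0, 0.04          # prior state estimate and (co)variance, 1-D

# First-order (EKF-style) propagation: var' ~= J * var * J
ekf_mean = f(mean)
ekf_var = f_jacobian(mean) * var * f_jacobian(mean)

# Monte Carlo propagation for comparison
random.seed(0)
samples = [f(random.gauss(mean, var ** 0.5)) for _ in range(200_000)]
mc_mean = sum(samples) / len(samples)
mc_var = sum((s - mc_mean) ** 2 for s in samples) / len(samples)
```

For this map the exact propagated variance is 4*mean**2*var + 2*var**2 = 1.4432, so the first-order value 1.44 is close but biased, which is the kind of covariance similarity the report examines.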
Swept sine testing of rotor-bearing system for damping estimation
NASA Astrophysics Data System (ADS)
Chandra, N. Harish; Sekhar, A. S.
2014-01-01
Many types of rotating components commonly operate above the first or second critical speed and are subjected to frequent run-ups and shutdowns. The present study focuses on developing FRFs of rotor-bearing systems for damping estimation from swept-sine excitation. The principle of active vibration control states that, with an increase in angular acceleration, the amplitude of vibration due to unbalance will reduce and the FRF envelope will shift towards the right (higher frequency). Estimating the frequency response function (FRF) with tracking filters or co-quad analyzers has been shown to introduce error into the FRF estimate. Using the Fast Fourier Transform (FFT) algorithm and stationary wavelet transform (SWT) decomposition, this FRF distortion can be reduced. To obtain theoretical clarity, the shifting of the FRF envelope is incorporated into conventional FRF expressions and validated against the FRF estimated using the Fourier Transform approach. The half-power bandwidth method is employed to extract damping ratios from the FRF estimates. In deriving half-power points for both response types (acceleration and displacement), the damping ratio (ζ) is estimated with different approximations: the classical definition (neglecting damping-ratio terms of order higher than 2), a third-order approximation (neglecting terms of order higher than 4) and an exact expression (no assumptions on damping ratio). The use of the stationary wavelet transform to denoise noise-corrupted FRF data is explained. Finally, experiments are performed on a test rotor excited with different sweep rates to estimate the damping ratio.
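The classical half-power bandwidth estimate mentioned above can be sketched on a synthetic single-degree-of-freedom FRF (illustrative parameters, not the paper's rotor data): locate the half-power points around the resonance peak and take ζ ≈ (f2 − f1) / (2 fn).

```python
import math

# Synthesize a 1-DOF displacement FRF magnitude with known damping, find the
# half-power (peak / sqrt(2)) crossing frequencies, and recover the damping
# ratio with the classical small-damping formula.

fn, zeta_true = 50.0, 0.02            # natural frequency [Hz], damping ratio

def frf_mag(f):
    r = f / fn
    return 1.0 / math.sqrt((1 - r * r) ** 2 + (2 * zeta_true * r) ** 2)

freqs = [fn * (0.9 + i * 1e-5) for i in range(20000)]   # 45 .. ~55 Hz grid
mags = [frf_mag(f) for f in freqs]
peak = max(mags)
half_power = peak / math.sqrt(2.0)

# frequencies where the FRF magnitude is above the half-power level
above = [f for f, m in zip(freqs, mags) if m >= half_power]
f1, f2 = min(above), max(above)
zeta_est = (f2 - f1) / (2.0 * fn)     # classical half-power estimate
```

The recovered ζ differs from the true value only by higher-order damping terms, which is exactly the error the third-order and exact formulations in the paper address.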
Time-to-contact estimation of accelerated stimuli is based on first-order information.
Benguigui, Nicolas; Ripoll, Hubert; Broderick, Michael P
2003-12-01
The goal of this study was to test whether 1st-order information, which does not account for acceleration, is used (a) to estimate the time to contact (TTC) of an accelerated stimulus after the occlusion of a final part of its trajectory and (b) to indirectly intercept an accelerated stimulus with a thrown projectile. Both tasks require the production of an action on the basis of predictive information acquired before the arrival of the stimulus at the target and allow the experimenter to make quantitative predictions about the participants' use (or nonuse) of 1st-order information. The results show that participants do not use information about acceleration and that they commit errors that rely quantitatively on 1st-order information even when acceleration is psychophysically detectable. In the indirect interceptive task, action is planned about 200 ms before the initiation of the movement, at which time the 1st-order TTC attains a critical value. ((c) 2003 APA, all rights reserved)
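The quantitative prediction behind "1st-order information" can be stated numerically (illustrative values, not the study's stimuli): for an approaching stimulus at distance d with speed v and acceleration a, the true TTC solves d = v·t + a·t²/2, while a first-order strategy ignores a and predicts TTC = d / v.

```python
import math

# Compare the true time-to-contact of an accelerating stimulus with the
# first-order (constant-velocity) estimate that ignores acceleration.

def true_ttc(d, v, a):
    if a == 0:
        return d / v
    return (-v + math.sqrt(v * v + 2 * a * d)) / a

def first_order_ttc(d, v):
    return d / v

d, v, a = 20.0, 8.0, 2.0       # metres, m/s, m/s^2 (hypothetical values)
err = first_order_ttc(d, v) - true_ttc(d, v, a)
```

Here the first-order estimate is 2.5 s against a true TTC of 2.0 s, i.e. a late response for accelerating stimuli, the signed error pattern the study uses to diagnose nonuse of acceleration.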
The Prevalence of Age-Related Eye Diseases and Visual Impairment in Aging: Current Estimates
Klein, Ronald; Klein, Barbara E. K.
2013-01-01
Purpose. To examine prevalence of five age-related eye conditions (age-related cataract, AMD, open-angle glaucoma, diabetic retinopathy [DR], and visual impairment) in the United States. Methods. Review of published scientific articles and unpublished research findings. Results. Cataract, AMD, open-angle glaucoma, DR, and visual impairment prevalences are high in four different studies of these conditions, especially in people over 75 years of age. There are disparities among racial/ethnic groups with higher age-specific prevalence of DR, open-angle glaucoma, and visual impairment in Hispanics and blacks compared with whites, higher prevalence of age-related cataract in whites compared with blacks, and higher prevalence of late AMD in whites compared with Hispanics and blacks. The estimates are based on old data and do not reflect recent changes in the distribution of age and race/ethnicity in the United States population. There are no epidemiologic estimates of prevalence for many visually-impairing conditions. Conclusions. Ongoing prevalence surveys designed to provide reliable estimates of visual impairment, AMD, age-related cataract, open-angle glaucoma, and DR are needed. It is important to collect objective data on these and other conditions that affect vision and quality of life in order to plan for health care needs and identify areas for further research. PMID:24335069
NASA Astrophysics Data System (ADS)
Cassiani, G.; Gallotti, L.; Ventura, V.; Andreotti, G.
2003-04-01
The identification of flow and transport characteristics in the vadose zone is a fundamental step towards understanding the dynamics of contaminated sites and the resulting risk of groundwater pollution. Borehole radar has gained popularity for the monitoring of moisture content changes, thanks to its apparent simplicity and its high-resolution characteristics. However, cross-hole radar requires closely spaced (a few meters), plastic-cased boreholes, which are rarely available as a standard feature at sites of practical interest. Unlike cross-hole applications, Vertical Radar Profiles (VRP) require only one borehole, with practical and financial benefits. High-resolution, time-lapse VRPs have been acquired at a crude-oil-contaminated site in Trecate, Northern Italy, on a few existing boreholes originally developed for remediation via bioventing. The dynamic water table conditions, with yearly oscillations of roughly 5 m, from 6 to 11 m bgl, offer a good opportunity to observe a field-scale drainage-imbibition process via VRP. Arrival time inversion has been carried out using a regularized tomographic algorithm, in order to overcome the noise introduced by first-arrival picking. Interpretation of the vertical profiles in terms of moisture content has been based on standard models (Topp et al., 1980; Roth et al., 1990). The sedimentary sequence manifests itself as a cyclic pattern in moisture content over most of the profiles. We performed preliminary Richards' equation simulations with time-varying water table boundary conditions, in order to estimate the unsaturated flow parameters, and the results have been compared with laboratory evidence from cores.
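The Topp et al. (1980) model cited above for converting radar-derived dielectric constant to moisture content is a third-order polynomial; a minimal sketch of its use (standard published coefficients; the example permittivity values are illustrative):

```python
# Topp et al. (1980) empirical relation: volumetric water content theta as a
# cubic polynomial in the apparent dielectric constant kappa.

def topp_moisture(kappa):
    """Volumetric water content from apparent dielectric constant."""
    return (-5.3e-2 + 2.92e-2 * kappa
            - 5.5e-4 * kappa ** 2 + 4.3e-6 * kappa ** 3)

theta_dry = topp_moisture(4.0)     # dry sand, kappa ~ 4
theta_wet = topp_moisture(25.0)    # near-saturated sediment, kappa ~ 25
```

Applied down a VRP velocity profile, this maps the inverted radar slownesses into the moisture content profiles whose cyclic pattern the abstract describes.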
Karray, Sahar; Smaoui-Damak, Wafa; Rebai, Tarek; Hamza-Chaffai, Amel
2015-11-01
The gametogenic cycle of the cockle Cerastoderma glaucum was analyzed using both qualitative and semi-quantitative methods. The condition index and glycogen concentrations were determined in order to provide information on energy storage. The cockles were collected monthly from the Bayyadha site, located 15 km south of Sfax City (Gulf of Gabès), between January 2007 and January 2008. From a histological point of view, we applied two approaches: (i) a qualitative method describing the various stages of gamete development for males and females during a cycle of 13 months, and (ii) a semi-quantitative method estimating the surfaces of different tissues. The results showed evidence of three periods of reproduction in this population. A comparison between the surfaces occupied by the three organs showed that the foot and gonad surfaces are larger than the surface of the adductor muscle. This could suggest that these two organs are more involved in the process of glycogen reserve storage. The results of the glycogen concentrations in the different tissues (gonad, adductor muscle, and "remainders") show that, during the second and third periods of reproduction, glycogen was stored in the adductor muscle and in the remainder during sexual rest, and in the gonad during the gametogenesis phases in order to supply the reproductive effort. On the contrary, in the first period of reproduction, the low concentrations of glycogen recorded in the gonad coincided with its high degree of development. This could be related to the environmental conditions (low temperature and food) recorded during this period.
NASA Astrophysics Data System (ADS)
Rahimi, Mina; Essaid, Hedeff I.; Wilson, John T.
2015-12-01
The role of temporally varying surface water-groundwater (SW-GW) exchange in nitrate removal by streambed denitrification was examined along a reach of Leary Weber Ditch (LWD), Indiana, a small, first-order, low-relief agricultural watershed within the Upper Mississippi River basin, using data collected in 2004 and 2005. Stream stage, GW heads (H), and temperatures (T) were continuously monitored in streambed piezometers and stream bank wells for two transects across LWD, accompanied by synoptic measurements of stream stage, H, T, and nitrate (NO3) concentrations along the reach. The H and T data were used to develop and calibrate vertical, two-dimensional models of streambed water flow and heat transport across and along the axis of the stream. Model-estimated SW-GW exchange varied seasonally and in response to high-streamflow events due to dynamic interactions between SW stage and GW H. Comparison of 2004 and 2005 conditions showed that small changes in precipitation amount and intensity, evapotranspiration, and/or nearby GW levels within a low-relief watershed can readily impact SW-GW interactions. The calibrated LWD flow models and observed stream and streambed NO3 concentrations were used to predict temporal variations in streambed NO3 removal in response to dynamic SW-GW exchange. NO3 removal rates underwent slow seasonal changes, but also underwent rapid changes in response to high-flow events. These findings suggest that increased temporal variability of SW-GW exchange in low-order, low-relief watersheds may be a factor contributing to their more efficient removal of NO3.
THE EFFECT OF CHLORINE DEMAND ON INACTIVATION RATE CONSTANT
Ct (disinfectant concentration multiplied by exposure time) values are used by the US EPA to evaluate the efficacy of disinfection of microorganisms under various drinking water treatment conditions. First-order decay is usually assumed for the degradation of a disi...
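The first-order decay assumption mentioned above has a simple closed form: with C(t) = C0·exp(−k·t), the delivered Ct over an exposure time T is the integral C0·(1 − exp(−k·T))/k, which falls below the naive C0·T whenever chlorine demand drives k above zero (illustrative values below, not data from the record):

```python
import math

# First-order disinfectant decay and the resulting delivered Ct.

def concentration(c0, k, t):
    return c0 * math.exp(-k * t)

def delivered_ct(c0, k, t):
    """Integral of C(t) from 0 to t under first-order decay."""
    if k == 0:
        return c0 * t
    return c0 * (1.0 - math.exp(-k * t)) / k

c0, k, t = 1.0, 0.05, 30.0      # mg/L, 1/min, min (hypothetical values)
naive_ct = c0 * t               # Ct if no decay is assumed
actual_ct = delivered_ct(c0, k, t)
```

For these numbers the delivered Ct is about half the naive value, which is why accounting for demand-driven decay matters when crediting disinfection.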
Thorndahl, S; Willems, P
2008-01-01
Failure of urban drainage systems may occur due to surcharge or flooding at specific manholes in the system, or due to overflows from combined sewer systems to receiving waters. To quantify the probability or return period of failure, standard approaches make use of the simulation of design storms or long historical rainfall series in a hydrodynamic model of the urban drainage system. In this paper, an alternative probabilistic method is investigated: the first-order reliability method (FORM). To apply this method, a long rainfall time series was divided into rainstorms (rain events), and each rainstorm was conceptualized as a synthetic rainfall hyetograph of Gaussian shape with the parameters rainstorm depth, duration and peak intensity. Probability distributions were calibrated for these three parameters and used as the basis of the failure probability estimation, together with a hydrodynamic simulation model to determine the failure conditions for each set of parameters. The method takes into account the uncertainties involved in the rainstorm parameterization. Comparison is made between the failure probability results of the FORM method, the standard method using long-term simulations, and alternative methods based on random sampling (Monte Carlo direct sampling and importance sampling). It is concluded that, without crucial influence on the modelling accuracy, FORM is very applicable as an alternative to traditional long-term simulations of urban drainage systems.
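The Gaussian-shaped synthetic hyetograph described above can be sketched as follows (parameter names and the depth-normalization choice are ours, not the paper's notation):

```python
import math

# Gaussian-shaped synthetic hyetograph parameterized by storm depth, duration
# and (implied) peak intensity; the scale factor enforces the total depth.

def gaussian_hyetograph(depth_mm, duration_min, n=600):
    """Return (intensities, dt) on a uniform grid with sum(intens)*dt == depth."""
    dt = duration_min / n
    t0, sigma = duration_min / 2.0, duration_min / 6.0
    raw = [math.exp(-0.5 * ((i * dt - t0) / sigma) ** 2) for i in range(n)]
    scale = depth_mm / (sum(raw) * dt)          # enforce the storm depth
    return [scale * r for r in raw], dt

intens, dt = gaussian_hyetograph(depth_mm=12.0, duration_min=60.0)
total_depth = sum(intens) * dt
peak_intensity = max(intens)
```

Each historical rain event is reduced to this three-parameter shape, and the calibrated distributions of (depth, duration, peak intensity) then drive the FORM reliability analysis.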
Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models
Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.
2011-01-01
We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) a simulation study and (b) a lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict the right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than the second-order (covariance) structure.
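The decomposition-then-regression idea can be sketched with a synthetic stand-in for the mechanistic model (assumes NumPy; the toy model, design, and linear score regression are ours, simpler than the paper's nonlinear statistical model):

```python
import numpy as np

# Run a toy "computer model" over a design of parameter values, decompose the
# output matrix with an SVD, fit a first-order (linear) model for the leading
# mode scores as functions of the parameter, and emulate at a new value.

params = np.linspace(0.5, 2.0, 12)                     # experimental design
t = np.linspace(0.0, 1.0, 50)
outputs = np.array([p * np.sin(2 * np.pi * t) + p ** 2 * t for p in params])

U, s, Vt = np.linalg.svd(outputs, full_matrices=False)
scores = outputs @ Vt.T                                # one row per model run

# first-order model: score_k(p) ~ a_k + b_k * p, fitted by least squares
X = np.column_stack([np.ones_like(params), params])
coef, *_ = np.linalg.lstsq(X, scores[:, :2], rcond=None)

# emulate the model at a new parameter value using the two leading modes
p_new = 1.3
pred = (np.array([1.0, p_new]) @ coef) @ Vt[:2]
truth = p_new * np.sin(2 * np.pi * t) + p_new ** 2 * t
rel_err = np.linalg.norm(pred - truth) / np.linalg.norm(truth)
```

The emulator is cheap to evaluate, so it can replace the mechanistic model inside a parameter-estimation loop; the residual error here comes from the toy model's quadratic term, which a linear score model cannot capture exactly.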
Pang, Liping; Close, Murray; Goltz, Mark; Noonan, Mike; Sinton, Lester
2005-04-01
Filtration of Bacillus subtilis spores and the F-RNA phage MS2 (MS2) on a field scale in a coarse alluvial gravel aquifer was evaluated from the authors' previously published data. An advection-dispersion model coupled with first-order attachment kinetics was used in this study to interpret microbial concentration vs. time breakthrough curves (BTC) at sampling wells. Based on attachment rates (katt) determined by applying the model to the breakthrough data, filter factors (f) were calculated and compared with f values estimated from the slopes of log(cmax/co) vs. distance plots. These two independent approaches resulted in nearly identical filter factors, suggesting that both approaches are useful in determining reductions in microbial concentrations over transport distance. Applying the graphic approach to analyse spatial data, we have also estimated the f values for different aquifers using information provided by other published field studies. The results show that values of f, in units of log(cmax/co) m^-1, are consistently on the order of 10^-2 for clean coarse gravel aquifers, 10^-3 for contaminated coarse gravel aquifers, and generally 10^-1 for sandy fine gravel aquifers and river and coastal sand aquifers. For each aquifer category, the f values for bacteriophages and bacteria are of the same order of magnitude. The f values estimated in this study indicate that every one-log reduction in microbial concentration in groundwater requires a few tens of meters of travel in clean coarse gravel aquifers, but a few hundreds of meters in contaminated coarse gravel aquifers. In contrast, a one-log reduction generally requires only a few meters of travel in sandy fine gravel aquifers and sand aquifers.
Considering the highest concentration in human effluent is on the order of 10^4 pfu/l for enteroviruses and 10^6 cfu/100 ml for faecal coliform bacteria, a 7-log reduction in microbial concentration would comply with the drinking water standards for the downgradient wells under natural gradient conditions. Based on the results of this study, a 7-log reduction would require 125-280 m of travel in clean coarse gravel aquifers, 1.7-3.9 km in contaminated coarse gravel aquifers, 33-61 m in clean sandy fine gravel aquifers, 33-129 m in contaminated sandy fine gravel aquifers, and 37-44 m in contaminated river and coastal sand aquifers. These recommended setback distances are for a worst-case scenario, assuming direct discharge of raw effluent into the saturated zone of an aquifer. Filtration theory was applied to calculate collision efficiency (alpha) from model-derived attachment rates (katt), and the results are compared with those reported in the literature. The calculated alpha values vary by two orders of magnitude, depending on whether collision efficiency is estimated from the effective particle size (d10) or the mean particle size (d50). Collision efficiency values for MS2 are similar to those previously reported in the literature (e.g. DeBorde et al., 1999) [DeBorde, D.C., Woessner, W.W., Kiley, Q.T., Ball, P., 1999. Rapid transport of viruses in a floodplain aquifer. Water Res. 33 (10), 2229-2238]. However, the collision efficiency values calculated for Bacillus subtilis spores were unrealistic, suggesting that filtration theory is not appropriate for theoretically estimating filtration capacity for poorly sorted coarse gravel aquifer media. This is not surprising, as filtration theory was developed for uniform sand filters and does not consider particle size distribution. Thus, we do not recommend the use of filtration theory to estimate the filter factor or setback distances. Either of the methods applied in this work (BTC or concentration vs. distance analyses), which take into account aquifer heterogeneities and site-specific conditions, appears to be most useful in determining filter factors and setback distances.
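The arithmetic behind the quoted setback distances is direct: with a filter factor f in log-reduction per metre, the travel distance for a target reduction is target / f. A sketch using f values chosen within the order-of-magnitude ranges implied by the abstract (the specific f values below are illustrative):

```python
# Setback distance from a filter factor f [log-reduction per metre of travel].

def setback_distance(log_reduction, f):
    return log_reduction / f

# 7-log target; f ~ 10^-2 /m for clean coarse gravel, ~10^-3 /m contaminated,
# ~10^-1 /m for sandy/fine-gravel and sand aquifers (illustrative picks).
clean_gravel = setback_distance(7.0, 0.025)      # -> 280 m, end of quoted range
contaminated = setback_distance(7.0, 0.0025)     # -> 2.8 km, within 1.7-3.9 km
sand = setback_distance(7.0, 0.18)               # ~39 m, within 33-129 m
```

This is why the recommended distances span from tens of metres in sand aquifers to kilometres in contaminated coarse gravel.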
A Size-Distance Scaling Demonstration Based on the Holway-Boring Experiment
ERIC Educational Resources Information Center
Gallagher, Shawn P.; Hoefling, Crystal L.
2013-01-01
We explored size-distance scaling with a demonstration based on the classic Holway-Boring experiment. Undergraduate psychology majors estimated the sizes of two glowing paper circles under two conditions. In the first condition, the environment was dark and, with no depth cues available, participants ranked the circles according to their angular…
Reformulating the Schrödinger equation as a Shabat-Zakharov system
NASA Astrophysics Data System (ADS)
Boonserm, Petarpa; Visser, Matt
2010-02-01
We reformulate the second-order Schrödinger equation as a set of two coupled first-order differential equations, a so-called "Shabat-Zakharov system" (sometimes called a "Zakharov-Shabat" system). There is considerable flexibility in this approach, and we emphasize the utility of introducing an "auxiliary condition" or "gauge condition" that is used to cut down the degrees of freedom. Using this formalism, we derive the explicit (but formal) general solution to the Schrödinger equation. The general solution depends on three arbitrarily chosen functions, and a path-ordered exponential matrix. If one considers path ordering to be an "elementary" process, then this represents complete quadrature, albeit formal, of the second-order linear ordinary differential equation.
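In schematic form (the gauge choice and notation here are one standard construction of the kind the abstract describes, not necessarily the authors' exact conventions), choosing a constant reference wavenumber k0 and imposing the stated auxiliary condition turns the second-order equation ψ'' + k²(x)ψ = 0 into a coupled first-order system:

```latex
\psi(x) = a(x)\,e^{+ik_0 x} + b(x)\,e^{-ik_0 x},
\qquad
a'(x)\,e^{+ik_0 x} + b'(x)\,e^{-ik_0 x} = 0
\quad \text{(gauge condition)},
```

which, substituted into the Schrödinger equation, yields

```latex
a'(x) = \frac{k_0^2 - k^2(x)}{2ik_0}\left[a(x) + b(x)\,e^{-2ik_0 x}\right],
\qquad
b'(x) = -\frac{k_0^2 - k^2(x)}{2ik_0}\left[a(x)\,e^{+2ik_0 x} + b(x)\right].
```

Solving this linear first-order system formally produces the path-ordered exponential matrix mentioned in the abstract.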
Design of a Modular Monolithic Implicit Solver for Multi-Physics Applications
NASA Technical Reports Server (NTRS)
Carton De Wiart, Corentin; Diosady, Laslo T.; Garai, Anirban; Burgess, Nicholas; Blonigan, Patrick; Ekelschot, Dirk; Murman, Scott M.
2018-01-01
The design of a modular multi-physics high-order space-time finite-element framework is presented, together with its extension to allow monolithic coupling of different physics. One of the main objectives of the framework is to perform efficient high-fidelity simulations of capsule/parachute systems. This problem requires simulating multiple physics including, but not limited to, the compressible Navier-Stokes equations, the dynamics of a moving body with mesh deformations and adaptation, the linear shell equations, non-reflecting boundary conditions and wall modeling. The solver is based on high-order space-time finite-element methods. Continuous, discontinuous and C1-discontinuous Galerkin methods are implemented, allowing one to discretize various physical models. Tangent and adjoint sensitivity analyses are also targeted in order to conduct gradient-based optimization, error estimation, mesh adaptation, and flow control, adding another layer of complexity to the framework. The decisions made to tackle these challenges are presented. The discussion focuses first on the "single-physics" solver and later on its extension to the monolithic coupling of different physics. The implementation of different physics modules relevant to the capsule/parachute system is also presented. Finally, examples of coupled computations are presented, paving the way to the simulation of the full capsule/parachute system.
Palmer, Jeremy C; Car, Roberto; Debenedetti, Pablo G
2013-01-01
We investigate the metastable phase behaviour of the ST2 water model under deeply supercooled conditions. The phase behaviour is examined using umbrella sampling (US) and well-tempered metadynamics (WT-MetaD) simulations to compute the reversible free energy surface parameterized by density and bond-orientation order. We find that free energy surfaces computed with both techniques clearly show two liquid phases in coexistence, in agreement with our earlier US and grand canonical Monte Carlo calculations [Y. Liu, J. C. Palmer, A. Z. Panagiotopoulos and P. G. Debenedetti, J Chem Phys, 2012, 137, 214505; Y. Liu, A. Z. Panagiotopoulos and P. G. Debenedetti, J Chem Phys, 2009, 131, 104508]. While we demonstrate that US and WT-MetaD produce consistent results, the latter technique is estimated to be more computationally efficient by an order of magnitude. As a result, we show that WT-MetaD can be used to study the finite-size scaling behaviour of the free energy barrier separating the two liquids for systems containing 192, 300 and 400 ST2 molecules. Although our results are consistent with the expected N^(2/3) scaling law, we conclude that larger systems must be examined to provide conclusive evidence of a first-order phase transition and associated second critical point.
Order of Access to Semantic Content and Self Schema.
ERIC Educational Resources Information Center
Mueller, John H.; And Others
Self-referenced content is generally remembered better and faster than information encoded in other ways. To examine how self-relevant information is organized in memory, three experiments were conducted, comparing the effects of target-first or word-first methodology. In the target-first condition, subjects (N=15) saw one of the two questions,…
Model-Based Estimation of Knee Stiffness
Pfeifer, Serge; Vallery, Heike; Hardegger, Michael; Riener, Robert; Perreault, Eric J.
2013-01-01
During natural locomotion, the stiffness of the human knee is modulated continuously and subconsciously according to the demands of activity and terrain. Given modern actuator technology, powered transfemoral prostheses could theoretically provide a similar degree of sophistication and function. However, experimentally quantifying knee stiffness modulation during natural gait is challenging. Alternatively, joint stiffness could be estimated in a less disruptive manner using electromyography (EMG) combined with kinetic and kinematic measurements to estimate muscle force, together with models that relate muscle force to stiffness. Here we present the first step in that process, where we develop such an approach and evaluate it in isometric conditions, where experimental measurements are more feasible. Our EMG-guided modeling approach allows us to consider conditions with antagonistic muscle activation, a phenomenon commonly observed in physiological gait. Our validation shows that model-based estimates of knee joint stiffness coincide well with experimental data obtained using conventional perturbation techniques. We conclude that knee stiffness can be accurately estimated in isometric conditions without applying perturbations, which presents an important step towards our ultimate goal of quantifying knee stiffness during gait. PMID:22801482
First-Order System Least-Squares for Second-Order Elliptic Problems with Discontinuous Coefficients
NASA Technical Reports Server (NTRS)
Manteuffel, Thomas A.; McCormick, Stephen F.; Starke, Gerhard
1996-01-01
The first-order system least-squares methodology represents an alternative to standard mixed finite element methods. Among its advantages is the fact that the finite element spaces approximating the pressure and flux variables are not restricted by the inf-sup condition and that the least-squares functional itself serves as an appropriate error measure. This paper studies the first-order system least-squares approach for scalar second-order elliptic boundary value problems with discontinuous coefficients. Ellipticity of an appropriately scaled least-squares bilinear form is shown to hold independently of the size of the jumps in the coefficients, leading to adequate finite element approximation results. The occurrence of singularities at interface corners and cross-points is discussed, and a weighted least-squares functional is introduced to handle such cases. Numerical experiments are presented for two test problems to illustrate the performance of this approach.
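The reformulation the abstract refers to can be sketched in its standard form (generic symbols, not taken from the paper):

```latex
% Scalar second-order elliptic problem with coefficient a(x):
%   -\nabla \cdot (a \nabla p) = f \quad \text{in } \Omega .
% Introduce the flux as a new unknown to obtain a first-order system:
\mathbf{u} = -a\,\nabla p, \qquad \nabla \cdot \mathbf{u} = f,
% and minimize the (appropriately scaled) least-squares functional
G(\mathbf{u}, p) \;=\;
  \bigl\| a^{-1/2}\mathbf{u} + a^{1/2}\nabla p \bigr\|_{0}^{2}
  \;+\; \bigl\| \nabla \cdot \mathbf{u} - f \bigr\|_{0}^{2}.
```

Because $G$ is minimized rather than a saddle point sought, the discrete spaces for $\mathbf{u}$ and $p$ need not satisfy an inf-sup condition, and the value of $G$ at the computed solution doubles as an a posteriori error measure.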
High-Order Non-Reflecting Boundary Conditions for the Linearized Euler Equations
2008-09-01
Lo, Yuan-Chieh; Hu, Yuh-Chung; Chang, Pei-Zen
2018-01-01
Thermal characteristic analysis is essential for machine tool spindles because sudden failures may occur due to unexpected thermal issues. This article presents a lumped-parameter Thermal Network Model (TNM) and its parameter estimation scheme, including hardware and software, to characterize both the steady-state and transient thermal behavior of machine tool spindles. For the hardware, the authors develop a Bluetooth Temperature Sensor Module (BTSM) accompanied by three types of temperature-sensing probes (magnetic, screw, and probe). Experimental tests show that it achieves a precision of ±(0.1 + 0.0029|t|) °C, a resolution of 0.00489 °C, a power consumption of 7 mW, and a size of Ø40 mm × 27 mm. For the software, the heat transfer characteristics of the machine tool spindle as functions of rotating speed are derived from heat transfer theory and empirical formulas. The predictive TNM of the spindle was developed through grey-box estimation and experimental results. Even under complicated operating conditions, such as various speeds and different initial conditions, the experiments validate that the present modeling methodology provides a robust and reliable tool for temperature prediction, with 99.5% agreement in terms of normalized mean square error, and that the approach is transferable to other spindles with a similar structure. To realize edge computing in smart manufacturing, a reduced-order TNM is constructed by a Model Order Reduction (MOR) technique and implemented in a real-time embedded system. PMID:29473877
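The lumped-parameter TNM idea can be sketched as a small RC-style network of thermal nodes; the node count, heat capacities, conductances, and heat input below are hypothetical, not the article's identified parameters:

```python
import numpy as np

def step(T, C, G, Q, dt):
    """One explicit Euler step of the lumped thermal network
    C dT/dt = Q - G @ T, where G is the conductance (Laplacian-like) matrix
    and Q the external heat input per node [W]. Temperatures are relative
    to ambient."""
    return T + dt * (Q - G @ T) / C

# Two-node example: node 0 = bearing (frictional heating), node 1 = housing
# (coupled to ambient). All values are illustrative.
C = np.array([50.0, 500.0])        # heat capacities [J/K]
g01, g1a = 2.0, 5.0                # conductances [W/K]: bearing-housing, housing-ambient
G = np.array([[ g01, -g01],
              [-g01,  g01 + g1a]])
Q = np.array([30.0, 0.0])          # heat input [W], e.g. friction loss at a given speed
T = np.zeros(2)
for _ in range(200000):            # integrate 10000 s, well past the slowest time constant
    T = step(T, C, G, Q, dt=0.05)
# Steady state satisfies G @ T = Q, i.e. T = np.linalg.solve(G, Q).
```

Grey-box estimation, as in the article, would fit C, G, and Q(speed) to measured temperature traces rather than assuming them; model order reduction then shrinks the network for the embedded target.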
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise P.; Petti, David A.; Demkowicz, Paul A.
2016-04-07
Safety tests were conducted on fuel compacts from AGR-1, the first irradiation experiment of the Advanced Gas Reactor (AGR) Fuel Development and Qualification program, at temperatures ranging from 1600 to 1800 °C to determine fission product release at temperatures that bound reactor accident conditions. The PARFUME (PARticle FUel ModEl) code was used to predict the release of the fission products silver, cesium, strontium, and krypton from fuel compacts containing tristructural isotropic (TRISO) coated particles during 15 of these safety tests. Comparisons between PARFUME predictions and post-irradiation examination results of the safety tests were conducted on two types of AGR-1 compacts: compacts containing only intact particles and compacts containing one or more particles whose SiC layers failed during safety testing. In both cases, PARFUME globally over-predicted the experimental release fractions by several orders of magnitude: more than three (intact) and two (failed SiC) orders of magnitude for silver, more than three and up to two orders of magnitude for strontium, and up to two and more than one order of magnitude for krypton. The release of cesium from intact particles was also largely over-predicted (by up to five orders of magnitude), but its release from particles with failed SiC was only over-predicted by a factor of about 3. These over-predictions can be largely attributed to an over-estimation of the diffusivities used in the modeling of fission product transport in TRISO-coated particles. The integral release nature of the data makes it difficult to estimate the individual over-estimations in the kernel or each coating layer. Nevertheless, a tentative assessment of correction factors to these diffusivities was performed to enable a better match between the modeling predictions and the safety testing results. The method could only be successfully applied to silver and cesium.
In the case of strontium, correction factors could not be assessed because potential release during the safety tests could not be distinguished from matrix content released during irradiation. Furthermore, in the case of krypton, all the coating layers are partly retentive, and the available data did not allow the level of retention in individual layers to be determined, preventing derivation of any correction factors.
NASA Technical Reports Server (NTRS)
Schatten, K. H.; Scherrer, P. H.; Svalgaard, L.; Wilcox, J. M.
1978-01-01
On physical grounds it is suggested that the sun's polar field strength near a solar minimum is closely related to the following cycle's solar activity. Four methods of estimating the sun's polar magnetic field strength near solar minimum are employed to provide an estimate of cycle 21's yearly mean sunspot number at solar maximum of 140 plus or minus 20. This estimate is considered a first-order attempt to predict the cycle's activity using one physically important parameter.
Los Alamos National Laboratory Economic Analysis Capability Overview
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boero, Riccardo; Edwards, Brian Keith; Pasqualini, Donatella
Los Alamos National Laboratory has developed two types of models to compute the economic impact of infrastructure disruptions. FastEcon is a fast-running model that estimates first-order economic impacts of large-scale events such as hurricanes and floods and can be used to identify the amount of economic activity that occurs in a specific area. LANL's Computable General Equilibrium (CGE) model estimates more comprehensive static and dynamic economic impacts of a broader array of events and captures the interactions between sectors and industries when estimating economic impacts.
Sliding mode observers for automotive alternator
NASA Astrophysics Data System (ADS)
Chen, De-Shiou
Estimator development for synchronous rectification of the automotive alternator is a desirable approach for estimating the alternator's back electromotive forces (EMFs) without a direct mechanical sensor of the rotor position. Recent theoretical studies show that the back EMF may be estimated from the system's phase current model by sensing electrical variables (AC phase currents and DC bus voltage) of the synchronous rectifier. Observer designs for back EMF estimation have previously been developed for constant engine speed. In this work, we are interested in nonlinear observer design for back EMF estimation in the realistic case of variable engine speed. An initial back EMF estimate can be obtained from a first-order sliding mode observer (SMO) based on the phase current model. A fourth-order nonlinear asymptotic observer (NAO), complemented by the dynamics of the back EMF with time-varying frequency and amplitude, is then incorporated into the observer design for chattering reduction. Since the cost of the required phase current sensors may be prohibitive, the dissertation pursues the most practical approach for real implementations: measuring the DC current of the synchronous rectifier. It is shown that the DC link current consists of sequential "windows" carrying partial information about the phase currents; hence, the cascaded NAO is responsible not only for chattering reduction but also for completing the estimation process. Stability analyses of the proposed estimators are presented for most linear and time-varying cases. The stability of the NAO without speed information is substantiated by both numerical and experimental results. Prospective estimation algorithms for the case of battery current measurements are investigated. Theoretical study indicates that convergence of the proposed linear asymptotic observer (LAO) may be ensured by high-gain inputs.
Since the order of the LAO/NAO for the battery current case is one order higher than for the link current measurement case, it is difficult to find moderate values of the input gains for real-time sampled-data systems. Technical difficulties in implementing such high-order discrete-time nonlinear estimators are discussed, and directions for further investigation are provided.
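A first-order SMO of the kind described, recovering an unknown back EMF from a single-phase current model L di/dt = v - R i - e, can be sketched as follows; R, L, the gains, and the EMF waveform are all hypothetical:

```python
import math

# Plant:    L di/dt     = v - R*i     - e          (e = unknown back EMF)
# Observer: L di_hat/dt = v - R*i_hat - z,  z = -K*sign(i - i_hat)
# Once sliding (i_hat -> i), the low-pass-filtered injection z approximates e.
R, L = 0.1, 1e-3       # ohm, henry (hypothetical)
K = 30.0               # switching gain; must exceed the EMF magnitude
dt = 1e-6              # integration / sampling step [s]
tau = 2e-4             # low-pass filter time constant for the equivalent injection

i = i_hat = e_hat = 0.0
worst = 0.0            # worst EMF tracking error over the final stretch
N = 200000             # simulate 0.2 s
for n in range(N):
    t = n * dt
    e = 10.0 * math.sin(2.0 * math.pi * 100.0 * t)  # true EMF, unknown to observer
    v = 12.0                                        # applied phase voltage
    z = -K if i > i_hat else K                      # z = -K*sign(i - i_hat)
    di = (v - R * i - e) / L
    di_hat = (v - R * i_hat - z) / L
    i += dt * di
    i_hat += dt * di_hat
    e_hat += (dt / tau) * (z - e_hat)               # filtered equivalent injection
    if n > N - 20000:
        worst = max(worst, abs(e_hat - e))
```

The residual error is chattering ripple plus the filter's phase lag, which is exactly the limitation that motivates cascading the NAO for chattering reduction in the dissertation.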
Srikantan, Chitra; Suraishkumar, G K; Srivastava, Smita
2018-06-01
The study demonstrates for the first time that light influences the adsorption equilibrium and kinetics of a dye by a root culture system. The azo dye (Reactive Red 120) adsorption by the hairy roots of H. annuus followed a pseudo-first-order kinetic model, and the adsorption equilibrium parameters were best estimated using the Langmuir isotherm. The maximum dye adsorption capacity of the roots increased 6-fold, from 0.26 mg g⁻¹ under complete dark conditions to 1.51 mg g⁻¹ under a 16/8 h light/dark photoperiod. Similarly, the adsorption rate of the dye and removal (%) also increased in the presence of light, irrespective of the initial concentration of the dye (20-110 mg L⁻¹). The degradation of the azo dye upon adsorption by the hairy roots of H. annuus was also confirmed. In addition, a strategy for simultaneous dye removal and increased production of alpha-tocopherol (an industrially relevant compound) by H. annuus hairy root cultures has been proposed and demonstrated.
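The two models named in the abstract can be written out directly; the q_max values are the abstract's reported capacities, while the rate constant k1 and Langmuir constant K_L below are hypothetical (the fitted values are not given in the abstract):

```python
import math

def pseudo_first_order(t, q_e, k1):
    """Adsorbed amount q(t) [mg/g] under pseudo-first-order kinetics:
    q = q_e * (1 - exp(-k1 * t))."""
    return q_e * (1.0 - math.exp(-k1 * t))

def langmuir(C_e, q_max, K_L):
    """Langmuir isotherm: equilibrium uptake [mg/g] vs. equilibrium
    concentration C_e [mg/L]."""
    return q_max * K_L * C_e / (1.0 + K_L * C_e)

q_max_dark, q_max_light = 0.26, 1.51   # mg/g, the abstract's reported capacities
k1 = 0.05                               # 1/h, hypothetical
K_L = 0.04                              # L/mg, hypothetical

# At a given equilibrium concentration, the light/dark uptake ratio equals the
# roughly 6-fold capacity ratio reported in the study:
q_light = langmuir(50.0, q_max_light, K_L)
q_dark = langmuir(50.0, q_max_dark, K_L)
```

In practice q_e, k1, q_max, and K_L are obtained by fitting these forms (or their linearized versions) to time-course and isotherm data at each light condition.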