Ward, B Douglas; Mazaheri, Yousef
2006-12-15
The blood oxygenation level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments in response to input stimuli is temporally delayed and distorted due to the blurring effect of the voxel hemodynamic impulse response function (IRF). Knowledge of the IRF, obtained during the same experiment, or as the result of a separate experiment, can be used to dynamically obtain an estimate of the input stimulus function. Reconstruction of the input stimulus function allows the fMRI experiment to be evaluated as a communication system. The input stimulus function may be considered as a "message" which is being transmitted over a noisy "channel", where the "channel" is characterized by the voxel IRF. Following reconstruction of the input stimulus function, the received message is compared with the transmitted message on a voxel-by-voxel basis to determine the transmission error rate. Reconstruction of the input stimulus function provides insight into actual brain activity during task activation with less temporal blurring, and may be considered as a first step toward estimation of the true neuronal input function.
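A minimal sketch of the reconstruction step described above, assuming a known voxel IRF: a regularized (Wiener-style) frequency-domain deconvolution recovers an estimate of the input stimulus function, which can then be thresholded and compared with the transmitted "message" to get an error rate. The gamma-shaped IRF, noise level, and all names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def make_irf(t, tau=1.2, n=3.0):
    """Toy gamma-variate hemodynamic IRF (illustrative only)."""
    h = (t / tau) ** n * np.exp(-t / tau)
    return h / h.sum()

dt = 0.5                          # sampling interval (s)
t = np.arange(0, 30, dt)
irf = make_irf(t)

# Simulate a boxcar stimulus convolved with the IRF, plus noise
stim = ((t > 5) & (t < 10)) | ((t > 18) & (t < 23))
bold = np.convolve(stim.astype(float), irf)[: len(t)]
bold += 0.02 * np.random.randn(len(t))

# Wiener-style deconvolution in the frequency domain
H = np.fft.rfft(irf, n=len(t))
Y = np.fft.rfft(bold)
lam = 1e-2                        # regularization; trades noise vs. blur
stim_hat = np.fft.irfft(Y * np.conj(H) / (np.abs(H) ** 2 + lam), n=len(t))

# Threshold to recover a binary "message", as in the communication-system view
recovered = stim_hat > 0.5 * stim_hat.max()
error_rate = np.mean(recovered != stim)
print(f"transmission error rate: {error_rate:.3f}")
```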
Fuzzy Neuron: Method and Hardware Realization
NASA Technical Reports Server (NTRS)
Krasowski, Michael J.; Prokop, Norman F.
2014-01-01
This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.
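A minimal single-input sketch of the scheme the abstract describes: fuzzy membership functions spanning the input space feed a linear combiner whose coefficients adapt online. The triangular memberships, LMS-style update, and the toy target system are illustrative assumptions, not the NASA hardware realization.

```python
import numpy as np

def tri_memberships(x, centers, width):
    """Normalized triangular membership degrees of scalar x for each center."""
    mu = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    s = mu.sum()
    return mu / s if s > 0 else mu

centers = np.linspace(-1, 1, 9)        # memberships spanning the input space
width = centers[1] - centers[0]
w = np.zeros_like(centers)             # combiner coefficients, learned online
eta = 0.2                              # learning rate

target = lambda x: np.sin(np.pi * x)   # unknown system to be estimated

for _ in range(5000):                  # "background" online learning loop
    x = np.random.uniform(-1, 1)
    mu = tri_memberships(x, centers, width)
    y_hat = w @ mu                     # combiner output
    err = target(x) - y_hat
    w += eta * err * mu                # coefficient adjustment

for x in np.linspace(-0.8, 0.8, 5):
    y_hat = w @ tri_memberships(x, centers, width)
    print(f"x={x:+.2f}  y={target(x):+.3f}  y_hat={y_hat:+.3f}")
```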
An open tool for input function estimation and quantification of dynamic PET FDG brain scans.
Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro
2016-08-01
Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. The main contribution of this article is the development of an open-source, free to use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.
INFANT HEALTH PRODUCTION FUNCTIONS: WHAT A DIFFERENCE THE DATA MAKE
Reichman, Nancy E.; Corman, Hope; Noonan, Kelly; Dave, Dhaval
2008-01-01
We examine the extent to which infant health production functions are sensitive to model specification and measurement error. We focus on the importance of typically unobserved but theoretically important variables (typically unobserved variables, TUVs), other non-standard covariates (NSCs), input reporting, and characterization of infant health. The TUVs represent wantedness, taste for risky behavior, and maternal health endowment. The NSCs include father characteristics. We estimate the effects of prenatal drug use, prenatal cigarette smoking, and first trimester prenatal care on birth weight, low birth weight, and a measure of abnormal infant health conditions. We compare estimates using self-reported inputs versus input measures that combine information from medical records and self-reports. We find that TUVs and NSCs are significantly associated with both inputs and outcomes, but that excluding them from infant health production functions does not appreciably affect the input estimates. However, using self-reported inputs leads to overestimated effects of inputs, particularly prenatal care, on outcomes, and using a direct measure of infant health does not always yield input estimates similar to those when using birth weight outcomes. The findings have implications for research, data collection, and public health policy. PMID:18792077
Smallwood, D. O.
1996-01-01
It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as a SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
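A minimal sketch of the two equivalent views at a single frequency, on synthetic cross-spectral data (two inputs, one output; all values invented): the multiple coherence computed directly from the partitioned cross-spectral density matrix matches the value read off the Cholesky factor, whose last diagonal entry is the residual (incoherent) output power.

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a cross-spectral density (CSD) matrix from synthetic FFT "records"
n_rec = 500
X = rng.standard_normal((n_rec, 2)) + 1j * rng.standard_normal((n_rec, 2))
y = X @ np.array([1.0, 0.5]) + 0.3 * (rng.standard_normal(n_rec)
                                      + 1j * rng.standard_normal(n_rec))
Z = np.column_stack([X, y])                    # inputs first, output last
G = (Z.conj().T @ Z) / n_rec                   # CSD matrix of inputs + output

# Multiple coherence of the output on the inputs, from the partitioned CSD
Gxx, Gxy, Gyy = G[:2, :2], G[:2, 2], G[2, 2].real
coh_multiple = (Gxy.conj() @ np.linalg.solve(Gxx, Gxy)).real / Gyy
print(f"multiple coherence: {coh_multiple:.3f}")

# Equivalent view via Cholesky of the full CSD matrix: the last diagonal
# entry of L carries the residual output power after removing the inputs.
L = np.linalg.cholesky(G)
print(f"1 - residual/total:  {1 - np.abs(L[2, 2])**2 / Gyy:.3f}")
```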
Hori, Yuki; Ihara, Naoki; Teramoto, Noboru; Kunimi, Masako; Honda, Manabu; Kato, Koichi; Hanakawa, Takashi
2015-01-01
Measurement of arterial input function (AIF) for quantitative positron emission tomography (PET) studies is technically challenging. The present study aimed to develop a method based on a standard arterial input function (SIF) to estimate input function without blood sampling. We performed 18F-fluorodeoxyglucose studies accompanied by continuous blood sampling for measurement of AIF in 11 rats. Standard arterial input function was calculated by averaging AIFs from eight anesthetized rats, after normalization with body mass (BM) and injected dose (ID). Then, the individual input function was estimated using two types of SIF: (1) SIF calibrated by the individual's BM and ID (estimated individual input function, EIFNS) and (2) SIF calibrated by a single blood sampling as proposed previously (EIF1S). No significant differences in area under the curve (AUC) or cerebral metabolic rate for glucose (CMRGlc) were found across the AIF-, EIFNS-, and EIF1S-based methods using repeated measures analysis of variance. In the correlation analysis, AUC or CMRGlc derived from EIFNS was highly correlated with those derived from AIF and EIF1S. Preliminary comparison between AIF and EIFNS in three awake rats supported the idea that the method might be applicable to behaving animals. The present study suggests that the EIFNS method might serve as a noninvasive substitute for individual AIF measurement. PMID:25966947
Ming, Y; Peiwen, Q
2001-03-01
Understanding ultrasonic motor performance as a function of input parameters, such as voltage amplitude, driving frequency, and the preload on the rotor, is key to many applications and to the control of ultrasonic motors. This paper presents performance estimation of the piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency, and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With a distributed spring-rigid body contact model between the stator and rotor, a two-dimensional analytical model of the rotary traveling wave ultrasonic motor is constructed. The steady rotation speed and stall torque are then deduced. Using MATLAB and an iterative algorithm, we estimate rotation speed and stall torque versus the input parameters. The corresponding experiments are completed with an optoelectronic tachometer and standard weights. Both estimation and experimental results reveal the pattern of performance variation as a function of the input parameters.
Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Hiroyuki; Yamamoto, Yuka; Hatakeyama, Tetsuhiro; Nishiyama, Yoshihiro
2018-05-01
CBF, OEF, and CMRO2 images can be quantitatively assessed using PET. Their image calculation requires arterial input functions, which require an invasive procedure. The aim of the present study was to develop a non-invasive approach with image-derived input functions (IDIFs) using an image from an ultra-rapid 15O2 and C15O2 protocol. Our technique consists of using a formula to express the input using a tissue curve with rate constants. For multiple tissue curves, the rate constants were estimated so as to minimize the differences of the inputs derived from the multiple tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as an IDIF. The method was tested in human subjects (n = 24). The estimated IDIFs were well reproduced against the measured ones. The difference in the calculated CBF, OEF, and CMRO2 values between the two methods was small (<10%) against the invasive method, and the values showed tight correlations (r = 0.97). The simulation showed that errors associated with the assumed parameters were less than ~10%. Our results demonstrate that IDIFs can be reconstructed from tissue curves, suggesting the possibility of using a non-invasive technique to assess CBF, OEF, and CMRO2.
Peak-Seeking Control Using Gradient and Hessian Estimates
NASA Technical Reports Server (NTRS)
Ryan, John J.; Speyer, Jason L.
2010-01-01
A peak-seeking control method is presented which utilizes a linear time-varying Kalman filter. Performance function coordinate and magnitude measurements are used by the Kalman filter to estimate the gradient and Hessian of the performance function. The gradient and Hessian are used to command the system toward a local extremum. The method is naturally applied to multiple-input multiple-output systems. Applications of this technique to a single-input single-output example and a two-input one-output example are presented.
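A 1-D illustrative sketch of the idea, not the flight implementation: a Kalman filter with a time-varying measurement matrix estimates the gradient g and Hessian h of a performance function from coordinate and magnitude measurements, and the estimates drive a damped Newton step toward the extremum. The plant, noise levels, and step clipping are assumptions.

```python
import numpy as np

f = lambda x: -(x - 2.0) ** 2 + 1.0          # performance function, peak at x=2

x_prev = 0.0
f_prev = f(x_prev)
s = np.array([0.0, -1.0])                    # state: [gradient, Hessian]
P = np.eye(2) * 10.0                         # state covariance
Q = np.eye(2) * 1e-4                         # random-walk process noise
R = 1e-4                                     # measurement noise variance

x = 0.5
for _ in range(40):
    fx = f(x) + np.sqrt(R) * np.random.randn()
    dx = x - x_prev
    # Measurement: f(x) - f(x_prev) ~ g*dx + 0.5*h*dx^2, linear in [g, h]
    H = np.array([dx, 0.5 * dx * dx])
    P = P + Q                                # time update (random-walk state)
    K = P @ H / (H @ P @ H + R)              # Kalman gain
    s = s + K * ((fx - f_prev) - H @ s)
    P = (np.eye(2) - np.outer(K, H)) @ P
    x_prev, f_prev = x, fx
    g, h = s
    step = -g / h if h < -1e-6 else 0.1      # Newton step toward the extremum
    x = x + float(np.clip(step, -1.0, 1.0))  # clipped command for robustness

print(f"estimated peak location: {x:.3f} (true 2.000)")
```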
Econometric analysis of fire suppression production functions for large wildland fires
Thomas P. Holmes; David E. Calkin
2013-01-01
In this paper, we use operational data collected for large wildland fires to estimate the parameters of economic production functions that relate the rate of fireline construction with the level of fire suppression inputs (handcrews, dozers, engines and helicopters). These parameter estimates are then used to evaluate whether the productivity of fire suppression inputs...
NASA Astrophysics Data System (ADS)
Teeples, Ronald; Glyer, David
1987-05-01
Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.
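A compressed sketch of the translog machinery mentioned above, on synthetic data (all parameter values invented). The paper jointly estimates the cost function with its input share equations as a system; for brevity this sketch fits only the two-input cost equation by least squares and then recovers a share via Shephard's lemma, S1 = d ln C / d ln p1.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
lp1 = rng.normal(0.0, 0.5, n)          # log input prices
lp2 = rng.normal(0.0, 0.5, n)

# "True" parameters (shares sum to one: a1 + a2 = 1; rows of g sum to zero)
a0, a1, a2 = 1.0, 0.6, 0.4
g11, g12, g22 = 0.10, -0.10, 0.10
lnC = (a0 + a1 * lp1 + a2 * lp2
       + 0.5 * g11 * lp1**2 + g12 * lp1 * lp2 + 0.5 * g22 * lp2**2
       + 0.02 * rng.standard_normal(n))

# Least-squares fit of the translog coefficients
X = np.column_stack([np.ones(n), lp1, lp2, 0.5 * lp1**2, lp1 * lp2, 0.5 * lp2**2])
coef, *_ = np.linalg.lstsq(X, lnC, rcond=None)
print("estimated [a0 a1 a2 g11 g12 g22]:", np.round(coef, 3))

# Implied share of input 1 at mean prices (Shephard's lemma)
S1 = coef[1] + coef[3] * lp1.mean() + coef[4] * lp2.mean()
print(f"implied mean share of input 1: {S1:.3f} (true {a1})")
```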
Uncertainty importance analysis using parametric moment ratio functions.
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2014-02-01
This article presents a new importance analysis framework, called parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed, and the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction on the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a set of samples is needed for implementing the proposed importance analysis by the proposed estimators, thus the computational cost is free of input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
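A brute-force illustration of a variance ratio function on a toy model (this is not the paper's single-sample estimators, which avoid re-sampling): scale down the variance of one input and watch the model output variance respond.

```python
import numpy as np

rng = np.random.default_rng(0)
model = lambda x1, x2: x1**2 + np.sin(3 * x2) + x1 * x2   # toy nonlinear model

def output_var(scale1, n=200_000):
    """Output variance when the std of X1 is scaled by `scale1`."""
    x1 = rng.normal(0.0, 1.0 * scale1, n)
    x2 = rng.normal(0.0, 1.0, n)
    return model(x1, x2).var()

v0 = output_var(1.0)
for s in (0.9, 0.7, 0.5):
    print(f"Var(X1) x {s**2:.2f}  ->  output variance ratio {output_var(s)/v0:.3f}")
```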
Quantitative myocardial perfusion from static cardiac and dynamic arterial CT
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Branch, Kelley R.; Alessio, Adam M.
2018-05-01
Quantitative myocardial blood flow (MBF) estimation by dynamic contrast enhanced cardiac computed tomography (CT) requires multi-frame acquisition of contrast transit through the blood pool and myocardium to inform the arterial input and tissue response functions. Both the input and the tissue response functions for the entire myocardium are sampled with each acquisition. However, the long breath holds and frequent sampling can result in significant motion artifacts and relatively high radiation dose. To address these limitations, we propose and evaluate a new static cardiac and dynamic arterial (SCDA) quantitative MBF approach where (1) the input function is well sampled using either prediction from pre-scan timing bolus data or measured from dynamic thin slice 'bolus tracking' acquisitions, and (2) the whole-heart tissue response data is limited to one contrast enhanced CT acquisition. A perfusion model uses the dynamic arterial input function to generate a family of possible myocardial contrast enhancement curves corresponding to a range of MBF values. Combined with the timing of the single whole-heart acquisition, these curves generate a lookup table relating myocardial contrast enhancement to quantitative MBF. We tested the SCDA approach in 28 patients that underwent a full dynamic CT protocol both at rest and vasodilator stress conditions. Using the measured input function plus single (enhanced CT only) or double (enhanced and contrast-free baseline CTs) myocardial acquisitions yielded MBF estimates with root mean square (RMS) error of 1.2 ml/min/g and 0.35 ml/min/g, and radiation dose reductions of 90% and 83%, respectively. The prediction of the input function based on timing bolus data and the static acquisition had an RMS error compared to the measured input function of 26.0%, which led to MBF estimation errors greater than threefold higher than using the measured input function. SCDA presents a new, simplified approach for quantitative perfusion imaging with an acquisition strategy offering substantial radiation dose and computational complexity savings over dynamic CT.
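A toy sketch of the lookup-table idea (the model and all numbers are assumptions, not the paper's implementation): a sampled arterial input function generates a family of myocardial enhancement values over a grid of MBF values, and the single static acquisition is inverted by table lookup. A washout-free early-uptake model, C_t(T) = (MBF/60) × ∫AIF, keeps the table monotone for this example.

```python
import numpy as np

dt = 0.5
t = np.arange(0, 40, dt)                          # seconds
aif = 300.0 * np.exp(-((t - 12.0) / 4.0) ** 2)    # toy AIF in HU

def enhancement(mbf, t_acq):
    """Myocardial enhancement (HU) at t_acq for flow mbf (ml/min/g)."""
    uptake = (mbf / 60.0) * np.cumsum(aif) * dt   # early uptake, no washout
    return np.interp(t_acq, t, uptake)

t_acq = 20.0                                      # timing of the static scan
mbf_grid = np.linspace(0.2, 5.0, 200)
table = np.array([enhancement(m, t_acq) for m in mbf_grid])
assert np.all(np.diff(table) > 0)                 # lookup must be monotone

measured = enhancement(2.5, t_acq)                # pretend static measurement
print(f"MBF from lookup: {np.interp(measured, table, mbf_grid):.2f} ml/min/g")
```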
Identification of modal parameters including unmeasured forces and transient effects
NASA Astrophysics Data System (ADS)
Cauberghe, B.; Guillaume, P.; Verboven, P.; Parloo, E.
2003-08-01
In this paper, a frequency-domain method to estimate modal parameters from short data records with known input (measured) forces and unknown input forces is presented. The method can be used for an experimental modal analysis, an operational modal analysis (output-only data) and the combination of both. A traditional experimental and operational modal analysis in the frequency domain starts respectively, from frequency response functions and spectral density functions. To estimate these functions accurately sufficient data have to be available. The technique developed in this paper estimates the modal parameters directly from the Fourier spectra of the outputs and the known input. Instead of using Hanning windows on these short data records the transient effects are estimated simultaneously with the modal parameters. The method is illustrated, tested and validated by Monte Carlo simulations and experiments. The presented method to process short data sequences leads to unbiased estimates with a small variance in comparison to the more traditional approaches.
NASA Astrophysics Data System (ADS)
Muinul Islam, Muhammad; Tsujikawa, Tetsuya; Mori, Tetsuya; Kiyono, Yasushi; Okazawa, Hidehiko
2017-06-01
A noninvasive method to estimate the input function directly from H215O brain PET data for measurement of cerebral blood flow (CBF) was proposed in this study. The image-derived input function (IDIF) method extracted the time-activity curves (TACs) of the major cerebral arteries at the skull base from the dynamic PET data. The extracted primordial IDIF showed almost the same radioactivity as the arterial input function (AIF) from sampled blood at the plateau in the later phase, but significantly lower radioactivity in the initial arterial phase compared with the AIF-TAC. To correct the initial part of the IDIF, a dispersion function was applied, and the two constants for the correction were determined by fitting to the individual AIF in 15 patients with unilateral arterial steno-occlusive lesions. The areas under the curves (AUC) from the two input functions showed good agreement, with a mean AUC(IDIF)/AUC(AIF) ratio of 0.92 ± 0.09. The final products of CBF and arterial-to-capillary vascular volume (V0) obtained from the IDIF and AIF showed no difference and had high correlation coefficients.
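The abstract does not specify the exact two-constant correction, so the sketch below assumes one common choice: an exponential dispersion kernel with scale `a` and time constant `tau`, fitted to a sampled AIF, then inverted analytically as AIF ≈ (IDIF + τ·dIDIF/dt)/a. All data and names here are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 1.0
t = np.arange(0, 120, dt)
aif = t**2 * np.exp(-t / 6.0)
aif /= aif.max()                                     # toy "true" AIF

def disperse(_, a, tau):
    """IDIF model: scaled AIF convolved with (1/tau) exp(-t/tau)."""
    kern = np.exp(-t / tau) / tau * dt
    return a * np.convolve(aif, kern)[: len(t)]

idif = disperse(t, 0.8, 8.0) + 0.01 * np.random.randn(len(t))

# Fit the two constants against the (sampled) AIF, then invert the kernel:
(a_hat, tau_hat), _ = curve_fit(disperse, t, idif, p0=(1.0, 5.0))
aif_rec = (idif + tau_hat * np.gradient(idif, dt)) / a_hat
print(f"fitted a={a_hat:.2f}, tau={tau_hat:.1f}; "
      f"AUC ratio rec/true = {np.trapz(aif_rec, t)/np.trapz(aif, t):.3f}")
```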
Katoh, Chietsugu; Yoshinaga, Keiichiro; Klein, Ran; Kasai, Katsuhiko; Tomiyama, Yuuki; Manabe, Osamu; Naya, Masanao; Sakakibara, Mamoru; Tsutsui, Hiroyuki; deKemp, Robert A; Tamaki, Nagara
2012-08-01
Myocardial blood flow (MBF) estimation with (82)Rubidium ((82)Rb) positron emission tomography (PET) is technically difficult because of the high spillover between regions of interest, especially due to the long positron range. We sought to develop a new algorithm to reduce the spillover in image-derived blood activity curves, using non-uniform weighted least-squares fitting. Fourteen volunteers underwent imaging with both 3-dimensional (3D) (82)Rb and (15)O-water PET at rest and during pharmacological stress. Whole left ventricular (LV) (82)Rb MBF was estimated using a one-compartment model, including a myocardium-to-blood spillover correction to estimate the corresponding blood input function Ca(t)(whole). Regional K1 values were calculated using this uniform global input function, which simplifies equations and enables robust estimation of MBF. To assess the robustness of the modified algorithm, inter-operator repeatability of 3D (82)Rb MBF was compared with a previously established method. Whole LV correlation of (82)Rb MBF with (15)O-water MBF was better (P < .01) with the modified spillover correction method (r = 0.92 vs r = 0.60). The modified method also yielded significantly improved inter-operator repeatability of regional MBF quantification (r = 0.89) versus the established method (r = 0.82) (P < .01). A uniform global input function can suppress LV spillover into the image-derived blood input function, resulting in improved precision for MBF quantification with 3D (82)Rb PET.
Zanderigo, Francesca; D'Agostino, Alexandra E; Joshi, Nandita; Schain, Martin; Kumar, Dileep; Parsey, Ramin V; DeLorenzo, Christine; Mann, J John
2018-02-08
Inhibition of the isoform A of monoamine oxidase (MAO-A), a mitochondrial enzyme catalyzing deamination of monoamine neurotransmitters, is useful in treatment of depression and anxiety disorders. [11C]harmine, a MAO-A PET radioligand, has been used to study mood disorders and antidepressant treatment. However, [11C]harmine binding test-retest characteristics have to date only been partially investigated. Furthermore, since MAO-A is ubiquitously expressed, no reference region is available, thus requiring arterial blood sampling during PET scanning. Here, we investigate [11C]harmine binding measurements test-retest properties; assess effects of using a minimally invasive input function estimation on binding quantification and repeatability; and explore binding potentials estimation using a reference region-free approach. Quantification of [11C]harmine distribution volume (VT) via kinetic models and graphical analyses was compared based on absolute test-retest percent difference (TRPD), intraclass correlation coefficient (ICC), and identifiability. The optimal procedure was also used with a simultaneously estimated input function in place of the measured curve. Lastly, an approach for binding potentials quantification in absence of a reference region was evaluated. [11C]harmine VT estimates quantified using arterial blood and kinetic modeling showed average absolute TRPD values of 7.7 to 15.6%, and ICC values between 0.56 and 0.86, across brain regions. Using simultaneous estimation (SIME) of input function resulted in VT estimates close to those obtained using arterial input function (r = 0.951, slope = 1.073, intercept = -1.037), with numerically but not statistically higher test-retest difference (range 16.6 to 22.0%), but with overall poor ICC values, between 0.30 and 0.57. Prospective studies using [11C]harmine are possible given its test-retest repeatability when binding is quantified using arterial blood. Results with SIME of input function show potential for simplifying data acquisition by replacing arterial catheterization with one arterial blood sample at 20 min post-injection. Estimation of [11C]harmine binding potentials remains a challenge that warrants further investigation.
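For concreteness, a minimal sketch of the two repeatability metrics used above, computed on invented test-retest VT values (the one-way ICC form shown is one standard variant; the paper's exact ICC formulation is not specified here).

```python
import numpy as np

test   = np.array([10.2, 8.5, 12.1, 9.7, 11.3])   # VT, scan 1 (synthetic)
retest = np.array([11.0, 8.1, 13.0, 9.2, 10.6])   # VT, scan 2 (synthetic)

# Absolute test-retest percent difference (TRPD)
trpd = 100 * np.abs(test - retest) / ((test + retest) / 2)
print(f"mean absolute TRPD: {trpd.mean():.1f}%")

# One-way ICC with 2 repeats: between-subject vs within-subject variance
m = (test + retest) / 2
ms_between = 2 * m.var(ddof=1)                    # k=2 repeats per subject
ms_within = ((test - retest) ** 2 / 2).mean()
icc = (ms_between - ms_within) / (ms_between + ms_within)
print(f"ICC(1,1): {icc:.2f}")
```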
NASA Astrophysics Data System (ADS)
Naghibolhosseini, Maryam; Long, Glenis
2011-11-01
The distortion product otoacoustic emission (DPOAE) input/output (I/O) function may provide a potential tool for evaluating cochlear compression. Hearing loss causes an increase in the level of the sound that is just audible for the person, which affects cochlear compression and thus the dynamic range of hearing. Although the slope of the I/O function is highly variable when the total DPOAE is used, separating the nonlinear-generator component from the reflection component reduces this variability. We separated the two components using least squares fit (LSF) analysis of logarithmic sweeping tones, and confirmed that the separated generator component provides more consistent I/O functions than the total DPOAE. In this paper we estimated the slope of the I/O functions of the generator components at different sound levels using LSF analysis. An artificial neural network (ANN) was used to estimate psychophysical thresholds using the estimated slopes of the I/O functions. DPOAE I/O functions determined in this way may help to estimate hearing thresholds and cochlear health.
Desired Accuracy Estimation of Noise Function from ECG Signal by Fuzzy Approach
Vahabi, Zahra; Kermani, Saeed
2012-01-01
Unknown noise and artifacts present in medical signals are estimated and then removed with a non-linear fuzzy filter. An adaptive neuro-fuzzy inference system, which has a non-linear structure, is presented for predicting the noise function from preceding samples. This paper describes a neuro-fuzzy method to estimate the unknown noise of the electrocardiogram signal. An adaptive neural network is combined with a fuzzy system to construct a fuzzy predictor. The system's parameters, such as the number of membership functions (MFs) for each input and output, the training epochs, the type of MFs for each input and output, and the learning algorithm, are determined from the training data. Finally, simulated experimental results are presented for validation. PMID:23717810
ENHANCED RECOVERY METHODS FOR 85KR AGE-DATING GROUNDWATER: ROYAL WATERSHED, MAINE
Potential widespread use of 85Kr, having a constant input function in the northern hemisphere, for groundwater age-dating would advance watershed investigations. The current input function of tritium is not sufficient to estimate young modern recharge waters. While tri...
Adaptive noise reduction circuit for a sound reproduction system
NASA Technical Reports Server (NTRS)
Engebretson, A. Maynard (Inventor); O'Connell, Michael P. (Inventor)
1995-01-01
A noise reduction circuit for a hearing aid having an adaptive filter for producing a signal which estimates the noise components present in an input signal. The circuit includes a second filter for receiving the noise-estimating signal and modifying it as a function of a user's preference or as a function of an expected noise environment. The circuit also includes a gain control for adjusting the magnitude of the modified noise-estimating signal, thereby allowing for the adjustment of the magnitude of the circuit response. The circuit also includes a signal combiner for combining the input signal with the adjusted noise-estimating signal to produce a noise reduced output signal.
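A conceptual sketch of the signal path the patent describes, using a standard LMS adaptive filter (all parameter values and the reference-noise setup are assumptions for illustration): the filter estimates the noise component, a gain control adjusts the estimate's magnitude, and a combiner subtracts it from the input.

```python
import numpy as np

rng = np.random.default_rng(0)
n, fs = 4000, 8000.0
speech = np.sin(2 * np.pi * 440 * np.arange(n) / fs)      # stand-in "signal"
noise_ref = rng.standard_normal(n)                        # reference noise
noise_in = np.convolve(noise_ref, [0.6, 0.3, 0.1])[:n]    # noise at the mic
x = speech + noise_in                                     # input signal

taps, mu, gain = 8, 0.01, 0.9                             # gain: user control
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps, n):
    ref = noise_ref[i - taps:i][::-1]
    noise_est = w @ ref                # adaptive filter's noise estimate
    e = x[i] - noise_est               # error drives adaptation
    w += 2 * mu * e * ref              # LMS update
    out[i] = x[i] - gain * noise_est   # combine input with adjusted estimate

print(f"noise power before: {np.var(noise_in):.3f}, "
      f"after: {np.var(out - speech):.3f}")
```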
Reconstruction of an input function from a dynamic PET water image using multiple tissue curves
NASA Astrophysics Data System (ADS)
Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro
2016-08-01
Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with H215O or C15O2, its calculation requires an arterial input function, which generally requires invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image-derived input function (IDIF) from a dynamic H215O PET image as a completely non-invasive approach. Our technique consisted of using a formula to express the input using a tissue curve with a rate constant parameter. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences of the reproduced inputs expressed by the extracted tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as an IDIF. The method was tested in human subjects (n = 29) and was compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of multiple tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs were well reproduced against the measured ones. The difference between the calculated CBF values obtained using the two methods was small (around <8%) and the calculated CBF values showed a tight correlation (r = 0.97). The simulation showed that errors associated with the assumed parameters were <10%, and that the optimal number of tissue curves to be used was around 500. Our results demonstrate that an IDIF can be reconstructed directly from tissue curves obtained through H215O PET imaging. This suggests the possibility of using a completely non-invasive technique to assess CBF in patho-physiological studies.
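A toy sketch of the reconstruction principle (two tissue regions instead of the ~500 the paper recommends; all data synthetic): each tissue curve expresses the input through the one-tissue model, Ca(t) = (dCt/dt + k2·Ct)/K1, and the rate constants are chosen so that the inputs reproduced from different regions agree. One K1 is fixed here to set the overall scale, an assumption for this sketch.

```python
import numpy as np
from scipy.optimize import minimize

dt = 2.0
t = np.arange(0, 180, dt)
ca_true = t * np.exp(-t / 20.0)
ca_true /= ca_true.max()                         # toy input function

def tissue(K1, k2):
    """Forward one-tissue model: Ct = K1 * (Ca conv exp(-k2 t))."""
    return K1 * np.convolve(ca_true, np.exp(-k2 * t) * dt)[: len(t)]

regions = [tissue(0.5, 0.10), tissue(0.3, 0.05)]  # "measured" tissue TACs

def input_from(ct, K1, k2):
    return (np.gradient(ct, dt) + k2 * ct) / K1

def cost(p):
    k2a, K1b, k2b = np.abs(p)
    ia = input_from(regions[0], 0.5, k2a)        # K1 of region A fixed (scale)
    ib = input_from(regions[1], K1b, k2b)
    return np.sum((ia - ib) ** 2)                # inputs must agree

res = minimize(cost, x0=[0.2, 0.2, 0.2], method="Nelder-Mead")
k2a, K1b, k2b = np.abs(res.x)
idif = 0.5 * (input_from(regions[0], 0.5, k2a)
              + input_from(regions[1], K1b, k2b))
print(f"correlation with true input: {np.corrcoef(idif, ca_true)[0, 1]:.3f}")
```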
Mino, H
2007-01-01
The aim was to estimate the parameters, namely the impulse response (IR) functions, of linear time-invariant systems generating intensity processes in shot-noise-driven doubly stochastic Poisson processes (SND-DSPPs), in which multivariate presynaptic spike trains and postsynaptic spike trains can be assumed to be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of multivariate input processes of the linear systems and the corresponding counting process (output process) is derived utilizing the expectation-maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated on the basis of the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
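A minimal synthetic sketch of an optimal estimator analysis with the histogram technique discussed above: the optimal estimator is the conditional mean of the target given the model inputs, and the irreducible error is the residual variance around it. The toy model and noise level are assumptions; note the small spurious binning contribution on top of the true noise variance, the effect the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.uniform(0, 1, n)                      # model input parameter
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)   # target quantity

nbins = 50
bins = np.clip((x * nbins).astype(int), 0, nbins - 1)
# Conditional mean of y in each input bin: the "optimal estimator"
sums = np.bincount(bins, weights=y, minlength=nbins)
counts = np.bincount(bins, minlength=nbins)
cond_mean = sums / counts

irreducible = np.mean((y - cond_mean[bins]) ** 2)
print(f"estimated irreducible error: {irreducible:.4f} (true 0.0100)")
```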
The human motor neuron pools receive a dominant slow‐varying common synaptic input
Negro, Francesco; Yavuz, Utku Şükrü
2016-01-01
Key points: Motor neurons in a pool receive both common and independent synaptic inputs, although the proportion and role of their common synaptic input is debated. Classic correlation techniques between motor unit spike trains do not measure the absolute proportion of common input and have limitations as a result of the non-linearity of motor neurons. We propose a method that for the first time allows an accurate quantification of the absolute proportion of low frequency common synaptic input (<5 Hz) to motor neurons in humans. We applied the proposed method to three human muscles and determined experimentally that they receive a similar large amount (>60%) of common input, irrespective of their different functional and control properties. These results increase our knowledge about the role of common and independent input to motor neurons in force control. Abstract: Motor neurons receive both common and independent synaptic inputs. This observation is classically based on the presence of a significant correlation between pairs of motor unit spike trains. The functional significance of different relative proportions of common input across muscles, individuals and conditions is still debated. One of the limitations in our understanding of correlated input to motor neurons is that it has not been possible so far to quantify the absolute proportion of common input with respect to the total synaptic input received by the motor neurons. Indeed, correlation measures of pairs of output spike trains only allow for relative comparisons. In the present study, we report for the first time an approach for measuring the proportion of common input in the low frequency bandwidth (<5 Hz) to a motor neuron pool in humans. This estimate is based on a phenomenological model and the theoretical fitting of the experimental values of coherence between the permutations of groups of motor unit spike trains. We demonstrate the validity of this theoretical estimate with several simulations. Moreover, we applied this method to three human muscles: the abductor digiti minimi, tibialis anterior and vastus medialis. Despite these muscles having different functional roles and control properties, as confirmed by the results of the present study, we estimate that their motor pools receive a similar and large (>60%) proportion of common low frequency oscillations with respect to their total synaptic input. These results suggest that the central nervous system provides a large amount of common input to motor neuron pools, in a similar way to that for muscles with different functional and control properties. PMID:27151459
Optimal control of nonlinear continuous-time systems in strict-feedback form.
Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani
2015-10-01
This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.
Estimation and classification by sigmoids based on mutual information
NASA Technical Reports Server (NTRS)
Baram, Yoram
1994-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sun-spot process are demonstrated.
Existence conditions for unknown input functional observers
NASA Astrophysics Data System (ADS)
Fernando, T.; MacDougall, S.; Sreeram, V.; Trinh, H.
2013-01-01
This article presents necessary and sufficient conditions for the existence and design of an unknown input Functional observer. The existence of the observer can be verified by computing a nullspace of a known matrix and testing some matrix rank conditions. The existence of the observer does not require the satisfaction of the observer matching condition (i.e. Equation (16) in Hou and Muller 1992, 'Design of Observers for Linear Systems with Unknown Inputs', IEEE Transactions on Automatic Control, 37, 871-875), is not limited to estimating scalar functionals and allows for arbitrary pole placement. The proposed observer always exists when a state observer exists for the unknown input system, and furthermore, the proposed observer can exist even in some instances when an unknown input state observer does not exist.
School District Inputs and Biased Estimation of Educational Production Functions.
ERIC Educational Resources Information Center
Watts, Michael
1985-01-01
In 1979, Eric Hanushek pointed out a potential problem in estimating educational production functions, particularly at the precollege level. He observed that it is frequently inappropriate to include school-system variables in equations using the individual student as the unit of observation. This study offers limited evidence supporting this…
Image-derived input function with factor analysis and a-priori information.
Simončič, Urban; Zanotti-Fregonara, Paolo
2015-02-01
Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, whole-blood TAC was estimated from postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of IDIFs was assessed against full arterial sampling by comparing the area under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood area under the curves were accurately estimated (mean error 1.0±4.3%). The relative Logan-V(T) error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate V(T) results compared with Logan. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can be used for accurate quantification of Logan-V(T) values.
O'Sullivan, F; Kirrane, J; Muzi, M; O'Sullivan, J N; Spence, A M; Mankoff, D A; Krohn, K A
2010-03-01
Kinetic quantitation of dynamic positron emission tomography (PET) studies via compartmental modeling usually requires the time-course of the radio-tracer concentration in the arterial blood as an arterial input function (AIF). For human and animal imaging applications, significant practical difficulties are associated with direct arterial sampling and as a result there is substantial interest in alternative methods that require no blood sampling at the time of the study. A fixed population template input function derived from prior experience with directly sampled arterial curves is one possibility. Image-based extraction, including requisite adjustment for spillover and recovery, is another approach. The present work considers a hybrid statistical approach based on a penalty formulation in which the information derived from a priori studies is combined in a Bayesian manner with information contained in the sampled image data in order to obtain an input function estimate. The absolute scaling of the input is achieved by an empirical calibration equation involving the injected dose together with the subject's weight, height and gender. The technique is illustrated in the context of (18)F-fluorodeoxyglucose (FDG) PET studies in humans. A collection of 79 arterially sampled FDG blood curves are used as a basis for a priori characterization of input function variability, including scaling characteristics. Data from a series of 12 dynamic cerebral FDG PET studies in normal subjects are used to evaluate the performance of the penalty-based AIF estimation technique. The focus of evaluations is on quantitation of FDG kinetics over a set of 10 regional brain structures. As well as the new method, a fixed population template AIF and a direct AIF estimate based on segmentation are also considered. Kinetics analyses resulting from these three AIFs are compared with those resulting from arterially sampled AIFs. The proposed penalty-based AIF extraction method is found to achieve significant improvements over the fixed template and the segmentation methods. As well as achieving acceptable kinetic parameter accuracy, the quality of fit of the region of interest (ROI) time-course data based on the extracted AIF matches results based on arterially sampled AIFs. In comparison, significant deviation in the estimation of FDG flux and degradation in ROI data fit are found with the template and segmentation methods. The proposed AIF extraction method is recommended for practical use.
Estimation and Simulation of Slow Crack Growth Parameters from Constant Stress Rate Data
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Weaver, Aaron S.
2003-01-01
Closed form, approximate functions for estimating the variances and degrees-of-freedom associated with the slow crack growth parameters n, D, B, and A(sup *) as measured using constant stress rate ('dynamic fatigue') testing were derived by using propagation of errors. Estimates made with the resulting functions and slow crack growth data for a sapphire window were compared to the results of Monte Carlo simulations. The functions for estimation of the variances of the parameters were derived both with and without logarithmic transformation of the initial slow crack growth equations. The transformation was performed to make the functions both more linear and more normal. Comparison of the Monte Carlo results and the closed form expressions derived with propagation of errors indicated that linearization is not required for good estimates of the variances of parameters n and D by the propagation of errors method. However, good estimates of the variances of the parameters B and A(sup *) could only be made when the starting slow crack growth equation was transformed and the coefficients of variation of the input parameters were not too large. This was partially a result of the skewed distributions of B and A(sup *). Parametric variation of the input parameters was used to determine an acceptable range for using closed form approximate equations derived from propagation of errors.
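A compact illustration of the propagation-of-errors versus Monte Carlo comparison for the parameter n alone (all numbers invented): in the dynamic-fatigue relation log(strength) = (1/(n+1))·log(stress rate) + log D, the slope s gives n = 1/s − 1, so propagation of errors yields Var(n) ≈ Var(s)/s⁴, which is checked against repeated simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_true, logD = 20.0, 2.0
log_rates = np.repeat(np.log10([0.1, 1.0, 10.0, 100.0]), 10)  # 10 specimens/rate

def fit_n():
    """Simulate one dynamic-fatigue data set; return n and its PoE variance."""
    log_sf = (logD + log_rates / (n_true + 1)
              + 0.01 * rng.standard_normal(log_rates.size))
    slope, intercept = np.polyfit(log_rates, log_sf, 1)
    resid = log_sf - (slope * log_rates + intercept)
    dof = log_rates.size - 2
    var_slope = resid @ resid / dof / np.sum((log_rates - log_rates.mean())**2)
    return 1.0 / slope - 1.0, var_slope / slope**4

n_hat, var_poe = fit_n()
mc = np.array([fit_n()[0] for _ in range(2000)])   # Monte Carlo reference
print(f"n = {n_hat:.1f}; Var(n): propagation {var_poe:.2f}, "
      f"Monte Carlo {mc.var():.2f}")
```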
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, W. Payton; Hokr, Milan; Shao, Hua; ...
2016-10-19
We investigated the transit time distribution (TTD) of discharge collected from fractures in the Bedrichov Tunnel, Czech Republic, using lumped parameter models and multiple environmental tracers. We utilize time series of δ18O, δ2H, and 3H along with CFC measurements from individual fractures in the Bedrichov Tunnel to investigate the TTD, and the uncertainty in estimated mean travel time, in several fracture networks of varying length and discharge. We also compare several TTDs, including the dispersion distribution, the exponential distribution, and a developed TTD which includes the effects of matrix diffusion. The effect of seasonal recharge is explored by comparing several seasonal weighting functions to derive the historical recharge concentration. We identify best-fit mean ages for each TTD by minimizing the error-weighted, multi-tracer χ2 residual for each seasonal weighting function. We use this methodology to test the ability of each TTD and seasonal input function to fit the observed tracer concentrations, and the effect of choosing different TTD and seasonal recharge functions on the mean age estimation. We find that the estimated mean transit time is a function of both the assumed TTD and the seasonal weighting function. Best fits as measured by the χ2 value were achieved for the dispersion model using the seasonal input function developed here for two of the three modeled sites, while at the third site, equally good fits were achieved with the exponential model and the dispersion model with our seasonal input function. The average mean transit time for all TTDs and seasonal input functions converged to similar values at each location. The sensitivity of the estimated mean transit time to the seasonal weighting function was equal to that of the TTD. These results indicate that understanding seasonality of recharge is at least as important as the uncertainty in the flow path distribution in fracture networks, and that unique identification of the TTD and mean transit time is difficult given the uncertainty in the recharge function. However, the mean transit time appears to be relatively robust to the structural model uncertainty. The results presented here should be applicable to other studies using environmental tracers to constrain flow and transport properties in fractured rock systems.
Yong-Feng Gao; Xi-Ming Sun; Changyun Wen; Wei Wang
2017-07-01
This paper is concerned with the problem of adaptive tracking control for a class of uncertain nonlinear systems with nonsymmetric input saturation and immeasurable states. A radial basis function neural network (NN) is employed to approximate the unknown functions, and an NN state observer is designed to estimate the immeasurable states. To analyze the effect of input saturation, an auxiliary system is employed. By the aid of the adaptive backstepping technique, an adaptive tracking control approach is developed. Under the proposed adaptive tracking controller, the boundedness of all the signals in the closed-loop system is achieved. Moreover, distinct from most of the existing references, the tracking error can be bounded by an explicit function of the design parameters and the saturation input error. Finally, an example is given to show the effectiveness of the proposed method.
General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.
Marmarelis, V Z; Berger, T W
2005-07-01
This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.
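A first-order-only sketch of the Laguerre expansion technique named above, on synthetic point-process data (the paper also treats higher-order kernels and non-Poisson inputs; basis size, decay parameter, and the toy kernel are assumptions). The input is convolved with each discrete-time Laguerre function, and a small least-squares problem recovers the kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
M, J, alpha = 60, 8, 0.8                 # kernel memory, basis size, Laguerre decay

def laguerre_basis(M, J, alpha):
    """Discrete-time Laguerre functions phi_j(n), via their z-domain recursion."""
    phi = np.zeros((J, M))
    phi[0, 0] = np.sqrt(1 - alpha)
    for n in range(1, M):
        phi[0, n] = alpha * phi[0, n - 1]
    for j in range(1, J):
        phi[j, 0] = -alpha * phi[j - 1, 0]
        for n in range(1, M):
            phi[j, n] = (alpha * phi[j, n - 1]
                         + phi[j - 1, n - 1] - alpha * phi[j - 1, n])
    return phi

phi = laguerre_basis(M, J, alpha)
tau = np.arange(M)
k1_true = np.exp(-tau / 10.0) * np.sin(tau / 5.0)      # toy first-order kernel

x = (rng.random(20_000) < 0.05).astype(float)          # Poisson-like spike train
y = np.convolve(x, k1_true)[: x.size] + 0.05 * rng.standard_normal(x.size)

# Convolve the input with each basis function, then solve a small least squares
V = np.column_stack([np.convolve(x, phi[j])[: x.size] for j in range(J)])
coef, *_ = np.linalg.lstsq(V, y, rcond=None)
k1_hat = coef @ phi

nmse = np.sum((k1_hat - k1_true) ** 2) / np.sum(k1_true ** 2)
print(f"kernel estimate NMSE: {nmse:.3f}")
```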
Cost function approach for estimating derived demand for composite wood products
T. C. Marcin
1991-01-01
A cost function approach was examined, using the concept of duality between production and input factor demands. A translog cost function was used to represent residential construction costs and derived conditional factor demand equations. Alternative models were derived from the translog cost function by imposing parameter restrictions.
A data mining framework for time series estimation.
Hu, Xiao; Xu, Peng; Wu, Shaozhi; Asgari, Shadnaz; Bergsneider, Marvin
2010-04-01
Time series estimation techniques are usually employed in biomedical research to derive variables less accessible from a set of related and more accessible variables. These techniques are traditionally built from systems modeling approaches including simulation, blind deconvolution, and state estimation. In this work, we define target time series (TTS) and its related time series (RTS) as the output and input of a time series estimation process, respectively. We then propose a novel data mining framework for time series estimation when TTS and RTS represent different sets of observed variables from the same dynamic system. This is made possible by mining a database of instances of TTS, its simultaneously recorded RTS, and the input/output dynamic models between them. The key mining strategy is to formulate a mapping function for each TTS-RTS pair in the database that translates a feature vector extracted from RTS to the dissimilarity between true TTS and its estimate from the dynamic model associated with the same TTS-RTS pair. At run time, a feature vector is extracted from an inquiry RTS and supplied to the mapping function associated with each TTS-RTS pair to calculate a dissimilarity measure. An optimal TTS-RTS pair is then selected by analyzing these dissimilarity measures. The associated input/output model of the selected TTS-RTS pair is then used to simulate the TTS given the inquiry RTS as an input. An exemplary implementation was built to address a biomedical problem of noninvasive intracranial pressure assessment. The performance of the proposed method was superior to that of a simple training-free approach of finding the optimal TTS-RTS pair by a conventional similarity-based search on RTS features. 2009 Elsevier Inc. All rights reserved.
Economies of Scale and Scope in Australian Higher Education
ERIC Educational Resources Information Center
Worthington, A. C.; Higgs, H.
2011-01-01
This paper estimates economies of scale and scope for 36 Australian universities using a multiple-input, multiple-output cost function over the period 1998-2006. The three inputs included in the analysis are full-time equivalent academic and non-academic staff and physical capital. The five outputs are undergraduate, postgraduate and PhD…
Models for forecasting energy use in the US farm sector
NASA Astrophysics Data System (ADS)
Christensen, L. R.
1981-07-01
Econometric models were developed and estimated for the purpose of forecasting electricity and petroleum demand in US agriculture. A structural approach is pursued which takes account of the fact that the quantity demanded of any one input is a decision made in conjunction with other input decisions. Three different functional forms of varying degrees of complexity are specified for the structural cost function, which describes the cost of production as a function of the level of output and factor prices. Demand for materials (all purchased inputs) is derived from these models. A separate model, which breaks this demand up into demand for the four components of materials, is used to produce forecasts of electricity and petroleum in a stepwise manner.
Comparing fixed and variable-width Gaussian networks.
Kůrková, Věra; Kainen, Paul C
2014-09-01
The role of the width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-function (RBF) networks, in which both widths and centers vary, and Gaussian kernel networks, which have fixed widths but varying centers. The effect of width on functional equivalence, the universal approximation property, and the form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while the sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described, and the growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate the density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
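The core Monte Carlo step, propagating assumed input distributions through the passive sonar equation and a detector curve, can be sketched as follows. The spreading law, noise level, detector curve, and all distribution parameters are placeholders, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def transmission_loss(r):
    """Placeholder for propagation modelling: spherical spreading plus
    a simple absorption term; r is range in metres, result in dB."""
    return 20 * np.log10(r) + 0.03 * r / 1000.0

def p_detect_given_snr(snr_db):
    """Hypothetical detector characterization: logistic curve in SNR."""
    return 1.0 / (1.0 + np.exp(-(snr_db - 10.0) / 2.0))

def mean_p_detect(r, n=10000, noise_db=60.0):
    """Average detection probability at range r over the input distributions."""
    sl = rng.normal(200.0, 5.0, n)        # source level, dB re 1 uPa (toy)
    beam = rng.uniform(-30.0, 0.0, n)     # off-axis beam-pattern loss, dB (toy)
    snr = sl + beam - transmission_loss(r) - noise_db
    return p_detect_given_snr(snr).mean()

ranges = np.linspace(100, 8000, 50)
p_curve = np.array([mean_p_detect(r) for r in ranges])
# Integrating p(r) * 2*pi*r over range would give the effective detection
# area that enters a density estimator together with call rate and the
# false positive rate.
print(p_curve[:5])
```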
NASA Technical Reports Server (NTRS)
Reddy C. J.
1998-01-01
Model Based Parameter Estimation (MBPE) is presented in conjunction with the hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique for fast computation of the input characteristics of cavity-backed aperture antennas over a frequency range. The hybrid FEM/MoM technique is used to form an integro-partial-differential equation to compute the electric field distribution of a cavity-backed aperture antenna. In MBPE, the electric field is expanded in a rational function of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-partial-differential equation formed by the hybrid FEM/MoM technique. Using the rational function approximation, the electric field is obtained over a frequency range. Using the electric field at different frequencies, the input characteristics of the antenna are obtained over a wide frequency range. Numerical results for an open coaxial line, a probe-fed coaxial cavity, and cavity-backed microstrip patch antennas are presented. Good agreement between MBPE and solutions computed at individual frequencies is observed.
Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai
2016-01-01
Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP).
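To make the two learning processes concrete, here is a minimal first-order Takagi-Sugeno sketch: K-means centres define Gaussian memberships, and the consequent parameters are fit by batch weighted least squares, standing in for the weighted recursive estimator used in the paper. The data, number of rules, and membership width are all illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (500, 3))                # volume, occupancy, speed (toy)
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 5.0 + rng.normal(0, 0.05, 500)  # travel time

k, sigma = 4, 0.3                              # number of rules, Gaussian width
centres = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_

def memberships(Xq):
    """Normalized Gaussian firing strengths of the k rules."""
    d2 = ((Xq[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)

W = memberships(X)
Xa = np.hstack([X, np.ones((len(X), 1))])      # affine (first-order) consequents
sw = np.sqrt(W)                                # sqrt weights for weighted LS
theta = [np.linalg.lstsq(Xa * sw[:, [j]], y * sw[:, j], rcond=None)[0]
         for j in range(k)]

def predict(Xq):
    Wq = memberships(Xq)
    Xqa = np.hstack([Xq, np.ones((len(Xq), 1))])
    return sum(Wq[:, j] * (Xqa @ theta[j]) for j in range(k))

print(predict(X[:3]), y[:3])
```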
Using a Polytope to Estimate Efficient Production Functions of Joint Product Processes.
ERIC Educational Resources Information Center
Simpson, William A.
In the last decade, a modeling technique has been developed to handle complex input/output analyses where outputs involve joint products and there are no known mathematical relationships linking the outputs or inputs. The technique uses the geometrical concept of a six-dimensional shape called a polytope to analyze the efficiency of each…
Empirical mode decomposition-based facial pose estimation inside video sequences
NASA Astrophysics Data System (ADS)
Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing
2010-03-01
We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, all the negative effects can be minimized. Extensive experiments were carried out in comparison with existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.
Aeroservoelastic Uncertainty Model Identification from Flight Data
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
2001-01-01
Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. To date, aeroservoelastic data analysis has given insufficient attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge of this problem is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to get input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.
NASA Astrophysics Data System (ADS)
Yang, Duo; Zhang, Xu; Pan, Rui; Wang, Yujie; Chen, Zonghai
2018-04-01
State-of-health (SOH) estimation is a crucial issue for lithium-ion batteries. In order to provide accurate and reliable SOH estimation, a novel Gaussian process regression (GPR) model based on charging curves is proposed in this paper. Unlike other studies, in which SOH is commonly estimated from cycle number, in this work four specific parameters extracted from charging curves are used as inputs of the GPR model instead of cycle numbers. These parameters can reflect the battery aging phenomenon from different angles. The grey relational analysis method is applied to analyze the relational grade between the selected features and SOH. In addition, some adjustments are made in the proposed GPR model: the covariance function design and the similarity measurement of input variables are modified so as to improve SOH estimation accuracy and adapt to the case of multidimensional input. Several aging datasets from the NASA data repository are used for demonstrating the estimation effect of the proposed method. Results show that the proposed method has high SOH estimation accuracy. Besides, a battery with a dynamic discharging profile is used to verify the robustness and reliability of this method.
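A minimal stand-in for this setup, using scikit-learn: four hypothetical charging-curve features map to SOH through a GPR with an anisotropic RBF kernel, which here replaces the paper's modified covariance and similarity measure. Feature and SOH values are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
Xtr = rng.uniform(0, 1, (40, 4))          # four charging-curve features (toy)
soh = 1.0 - 0.3 * Xtr[:, 0] - 0.1 * Xtr[:, 2] + rng.normal(0, 0.01, 40)

# Anisotropic (ARD) RBF: one length scale per input dimension, plus a
# white-noise term; stands in for the paper's modified covariance design.
kernel = RBF(length_scale=np.ones(4)) + WhiteKernel(noise_level=1e-4)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(Xtr, soh)

Xte = rng.uniform(0, 1, (5, 4))
mean, std = gpr.predict(Xte, return_std=True)   # SOH estimate + uncertainty
print(np.round(mean, 3), np.round(std, 3))
```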
Incorporation of MRI-AIF Information For Improved Kinetic Modelling of Dynamic PET Data
NASA Astrophysics Data System (ADS)
Sari, Hasan; Erlandsson, Kjell; Thielemans, Kris; Atkinson, David; Ourselin, Sebastien; Arridge, Simon; Hutton, Brian F.
2015-06-01
In the analysis of dynamic PET data, compartmental kinetic analysis methods require an accurate knowledge of the arterial input function (AIF). Although arterial blood sampling is the gold standard of the methods used to measure the AIF, it is usually not preferred as it is an invasive method. An alternative method is the simultaneous estimation method (SIME), where physiological parameters and the AIF are estimated together, using information from different anatomical regions. Due to the large number of parameters to estimate in its optimisation, SIME is a computationally complex method and may sometimes fail to give accurate estimates. In this work, we try to improve SIME by utilising an input function derived from a simultaneously obtained DSC-MRI scan. With the assumption that the true value of one of the six parameters of the PET-AIF model can be derived from an MRI-AIF, the method is tested using simulated data. The results indicate that SIME can yield more robust results when the MRI information is included, with a significant reduction in the absolute bias of Ki estimates.
Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests
NASA Technical Reports Server (NTRS)
Douglas, Freddie; Bourgeois, Edit Kaminsky
2005-01-01
The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).
Saturated hydraulic conductivity of US soils grouped according to textural class and bulk density
USDA-ARS's Scientific Manuscript database
Importance of the saturated hydraulic conductivity as soil hydraulic property led to the development of multiple pedotransfer functions for estimating it. One approach to estimating Ksat was using textural classes rather than specific textural fraction contents as pedotransfer inputs. The objective...
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
Park, Heesu; Dong, Suh-Yeon; Lee, Miran; Youn, Inchan
2017-07-24
Human-activity recognition (HAR) and energy-expenditure (EE) estimation are major functions in the mobile healthcare system. Both functions have been investigated for a long time; however, several challenges remain unsolved, such as the confusion between activities and the recognition of energy-consuming activities involving little or no movement. To solve these problems, we propose a novel approach using an accelerometer and electrocardiogram (ECG). First, we collected a database of six activities (sitting, standing, walking, ascending, resting and running) of 13 voluntary participants. We compared the HAR performances of three models with respect to the input data type (with none, all, or some of the heart-rate variability (HRV) parameters). The best recognition performance was 96.35%, which was obtained with some selected HRV parameters. EE was also estimated for different choices of the input data type (with or without HRV parameters) and the model type (single and activity-specific). The best estimation performance was found in the case of the activity-specific model with HRV parameters. Our findings indicate that the use of human physiological data, obtained by wearable sensors, has a significant impact on both HAR and EE estimation, which are crucial functions in the mobile healthcare system.
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts, there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall for poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures yielded the most realistic temporal rainfall distributions, and all of these rainfall estimates simulated streamflow better than a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow better than a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as a model data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
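The dimension-reduction step can be sketched with PyWavelets: only the approximation coefficients of a low-order decomposition are retained as unknowns, and the rainfall series is reconstructed from them. The wavelet ('db4'), the level, and the toy series are assumptions, not the study's settings.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
# Toy intermittent rainfall series: mostly zeros with occasional bursts.
rain = np.maximum(rng.normal(0, 1, 512), 0) * rng.binomial(1, 0.1, 512)

level = 4
coeffs = pywt.wavedec(rain, 'db4', level=level)
print(f"{len(rain)} time steps -> {len(coeffs[0])} approximation coefficients")

# In the inversion, the MCMC sampler would perturb coeffs[0] jointly with
# the hydrologic model parameters; detail coefficients are held at zero.
reduced = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
rain_hat = pywt.waverec(reduced, 'db4')[:len(rain)]
```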
Multidimensional density shaping by sigmoids.
Roth, Z; Baram, Y
1996-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
Chinchilla middle ear transmission matrix model and middle-ear flexibility
Ravicz, Michael E.; Rosowski, John J.
2017-01-01
The function of the middle ear (ME) in transforming ME acoustic inputs and outputs (sound pressures and volume velocities) can be described with an acoustic two-port transmission matrix. This description is independent of the load on the ME (cochlea or ear canal) and holds in either direction: forward (from ear canal to cochlea) or reverse (from cochlea to ear canal). A transmission matrix describing ME function in chinchilla, an animal commonly used in auditory research, is presented, computed from measurements of forward ME function: input admittance YTM, ME pressure gain GMEP, ME velocity transfer function HV, and cochlear input admittance YC, in the same set of ears [Ravicz and Rosowski (2012b). J. Acoust. Soc. Am. 132, 2437–2454; (2013a). J. Acoust. Soc. Am. 133, 2208–2223; (2013b). J. Acoust. Soc. Am. 134, 2852–2865]. Unlike previous estimates, these computations require no assumptions about the state of the inner ear, effectiveness of ME manipulations, or measurements of sound transmission in the reverse direction. These element values are generally consistent with physical constraints and the anatomical ME “transformer ratio.” Differences from a previous estimate in chinchilla [Songer and Rosowski (2007). J. Acoust. Soc. Am. 122, 932–942] may be due to a difference in ME flexibility between the two subject groups. PMID:28599566
Contractor, Kaiyumars B; Kenny, Laura M; Coombes, Charles R; Turkheimer, Federico E; Aboagye, Eric O; Rosso, Lula
2012-03-24
Quantification of kinetic parameters of positron emission tomography (PET) imaging agents normally requires collecting arterial blood samples, which is inconvenient for patients and difficult to implement in routine clinical practice. The aim of this study was to investigate whether a population-based input function (POP-IF) reliant on only a few individual discrete samples allows accurate estimates of tumour proliferation using [18F]fluorothymidine (FLT). Thirty-six historical FLT-PET datasets with concurrent arterial sampling were available for this study. A population average of blood data from baseline scans was constructed using leave-one-out cross-validation for each scan and used in conjunction with individual blood samples. Three limited sampling protocols were investigated including, respectively, only seven (POP-IF7), five (POP-IF5) and three (POP-IF3) discrete samples of the historical dataset. Additionally, using the three-point protocol, we derived POP-IF3M, the only input function which was not corrected for the fraction of radiolabelled metabolites present in blood. The kinetic parameter for net FLT retention at steady state, Ki, was derived using the modified Patlak plot and compared with the original full arterial set for validation. Small percentage differences in the area under the curve between all the POP-IFs and the full arterial sampling IF were found over 60 min (4.2%-5.7%), while there were, as expected, larger differences in peak position and peak height. A high correlation between Ki values calculated using the original arterial input function and all the population-derived IFs was observed (R2 = 0.85-0.98). The population-based input showed good intra-subject reproducibility of Ki values (R2 = 0.81-0.94) and good correlation (R2 = 0.60-0.85) with Ki-67. Input functions generated using these simplified protocols over a scan duration of 60 min estimate net FLT retention with reasonable accuracy.
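For reference, the Patlak analysis mentioned above reduces to a linear fit in transformed coordinates: the tissue-to-plasma ratio is regressed against "normalized time" over the late, linear portion of the study. A minimal sketch with synthetic curves (the input function, rate constants, and linearity cutoff are illustrative):

```python
import numpy as np

t = np.linspace(0.5, 60, 30)                         # frame mid-times, minutes
cp = 100 * np.exp(-0.3 * t) + 5 * np.exp(-0.01 * t)  # toy plasma input function
ki_true, v0 = 0.05, 0.4

# Cumulative trapezoidal integral of the input function.
int_cp = np.concatenate([[0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
ct = ki_true * int_cp + v0 * cp                      # tissue curve (irreversible limit)

x, y = int_cp / cp, ct / cp                          # Patlak coordinates
late = t > 20                                        # assumed linear region (t*)
ki_est, intercept = np.polyfit(x[late], y[late], 1)  # slope = net influx Ki
print(f"Ki estimate: {ki_est:.4f} (true {ki_true})")
```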
Analysis of nystagmus response to a pseudorandom velocity input
NASA Technical Reports Server (NTRS)
Lessard, C. S.
1986-01-01
Space motion sickness was not reported during the first Apollo missions; however, from Apollo 8 through the current Shuttle and Skylab missions, approximately 50% of the crewmembers have experienced instances of space motion sickness. Space motion sickness, renamed space adaptation syndrome, occurs primarily during the initial period of a mission until habituation takes place. One of NASA's efforts to resolve the space adaptation syndrome is to model the individual's vestibular response, both for basic knowledge and as a possible predictor of an individual's susceptibility to the disorder. This report describes a method to analyze the vestibular system when subjected to a pseudorandom angular velocity input. A sum-of-sinusoids (pseudorandom) input lends itself to analysis by linear frequency methods. Resultant horizontal ocular movements were digitized, filtered, and transformed into the frequency domain. Programs were developed and evaluated to obtain (1) the auto spectra of the input stimulus and resultant ocular response, (2) the cross spectra, (3) the estimated vestibular-ocular system transfer function gain and phase, and (4) the coherence function between stimulus and response.
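The four analysis products listed above map directly onto standard cross-spectral estimators. A minimal sketch with a simulated sum-of-sinusoids stimulus and a toy first-order response; all system and noise parameters are illustrative.

```python
import numpy as np
from scipy import signal

fs, n = 100.0, 8192
t = np.arange(n) / fs
freqs_in = [0.05, 0.11, 0.23, 0.47, 0.95]            # pseudorandom frequency set
stim = sum(np.sin(2 * np.pi * f * t + i) for i, f in enumerate(freqs_in))
b, a = signal.butter(1, 0.5, fs=fs)                  # toy "vestibulo-ocular" system
resp = signal.lfilter(b, a, stim) + 0.05 * np.random.default_rng(0).normal(size=n)

f, p_ss = signal.welch(stim, fs=fs, nperseg=2048)    # (1) auto spectrum, stimulus
_, p_rr = signal.welch(resp, fs=fs, nperseg=2048)    #     auto spectrum, response
_, p_sr = signal.csd(stim, resp, fs=fs, nperseg=2048)  # (2) cross spectrum
h = p_sr / p_ss                                      # (3) transfer function
gain, phase = np.abs(h), np.angle(h, deg=True)
_, coh = signal.coherence(stim, resp, fs=fs, nperseg=2048)  # (4) coherence
```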
Dynamic modeling and parameter estimation of a radial and loop type distribution system network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jun Qui; Heng Chen; Girgis, A.A.
1993-05-01
This paper presents a new identification approach to three-phase power system modeling and model reduction, treating the power system network as a multi-input, multi-output (MIMO) process. The model estimate can be obtained in discrete-time input-output form, discrete- or continuous-time state-space form, or frequency-domain impedance transfer function matrix form. An algorithm for determining the model structure of this MIMO process is described. The effect of measurement noise on the approach is also discussed. The approach has been applied to a sample system, and simulation results are also presented in this paper.
Schell, Greggory J; Lavieri, Mariel S; Stein, Joshua D; Musch, David C
2013-12-21
Open-angle glaucoma (OAG) is a prevalent, degenerative ocular disease which can lead to blindness without proper clinical management. The tests used to assess disease progression are susceptible to process and measurement noise. The aim of this study was to develop a methodology which accounts for the inherent noise in the data and improves the identification of significant disease progression. Longitudinal observations from the Collaborative Initial Glaucoma Treatment Study (CIGTS) were used to parameterize and validate a Kalman filter model and logistic regression function. The Kalman filter estimates the true value of biomarkers associated with OAG and forecasts future values of these variables. We develop two logistic regression models via generalized estimating equations (GEE) for calculating the probability of experiencing significant OAG progression: one model based on the raw measurements from CIGTS and another model based on the Kalman filter estimates of the CIGTS data. Receiver operating characteristic (ROC) curves and associated area under the ROC curve (AUC) estimates are calculated using cross-validation. The logistic regression model developed using Kalman filter estimates as data input achieves higher sensitivity and specificity than the model developed using raw measurements. The mean AUC for the Kalman filter-based model is 0.961, while the mean AUC for the raw measurements model is 0.889. Hence, using the probability function generated via Kalman filter estimates and GEE for logistic regression, we are able to more accurately classify patients and instances as experiencing significant OAG progression. A Kalman filter approach for estimating the true value of OAG biomarkers resulted in data input which improved the accuracy of a logistic regression classification model compared to a model using raw measurements as input. This methodology accounts for process and measurement noise to enable improved discrimination between progression and nonprogression in chronic diseases.
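The two-stage idea can be sketched compactly: a random-walk Kalman filter smooths each biomarker series, and the filtered values, rather than the raw measurements, feed a logistic classifier. Plain logistic regression stands in here for the GEE fitting used in the study; all variances, trends, and labels are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def kalman_filter(z, q=0.01, r=1.0):
    """Scalar Kalman filter with a random-walk state model; q and r are
    assumed process and measurement variances."""
    x, p, out = z[0], 1.0, []
    for zi in z:
        p += q                       # predict step
        k = p / (p + r)              # Kalman gain
        x += k * (zi - x)            # update with measurement zi
        p *= (1 - k)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
n, t = 200, 12
slopes = rng.normal(-0.1, 0.2, n)                         # per-patient trend
series = slopes[:, None] * np.arange(t) + rng.normal(0, 1.0, (n, t))
progressed = (slopes < -0.2).astype(int)                  # toy progression label

filtered = np.array([kalman_filter(s) for s in series])
feats = filtered[:, [-1]] - filtered[:, [0]]              # filtered total change
clf = LogisticRegression().fit(feats, progressed)
print("train accuracy:", clf.score(feats, progressed))
```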
PACE 2: Pricing and Cost Estimating Handbook
NASA Technical Reports Server (NTRS)
Stewart, R. D.; Shepherd, T.
1977-01-01
An automatic data processing system to be used for the preparation of industrial engineering type manhour and material cost estimates has been established. This computer system has evolved into a highly versatile and highly flexible tool which significantly reduces computation time, eliminates computational errors, and reduces typing and reproduction time for estimators and pricers since all mathematical and clerical functions are automatic once basic inputs are derived.
A new methodology for estimating nuclear casualties as a function of time.
Zirkle, Robert A; Walsh, Terri J; Disraelly, Deena S; Curling, Carl A
2011-09-01
The Human Response Injury Profile (HRIP) nuclear methodology provides an estimate of casualties occurring as a consequence of nuclear attacks against military targets for planning purposes. The approach develops user-defined, time-based casualty and fatality estimates based on progressions of underlying symptoms and their severity changes over time. This paper provides a description of the HRIP nuclear methodology and its development, including inputs, human response and the casualty estimation process.
Brandsch, Rainer
2017-10-01
Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants based on mass-transfer parameters such as diffusion and partition coefficients related to individual materials. In most cases, mass-transfer parameters are not readily available from the literature and for this reason are estimated with a given uncertainty. Historically, uncertainty was accounted for by introducing upper-limit concepts, which turned out to be of limited applicability because they grossly overestimated migration. Probabilistic migration modelling makes it possible to consider uncertainty in the mass-transfer parameters as well as in other model inputs. With respect to a functional barrier, the most important parameters, among others, are the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and is capable of applying Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (i.e., diffusion coefficient and layer thickness), predicts migration results with the related uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented through three case studies: (1) sensitivity analysis, (2) functional barrier efficiency and (3) validation by experimental testing. Based on the migration predicted by probabilistic modelling and related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts is possible, and associated migration risks and potential safety concerns can be identified in the early stage of packaging development. Furthermore, dedicated material selection exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
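The Monte Carlo idea can be illustrated with a deliberately simple proxy: sampling the diffusion coefficient and barrier thickness from assumed distributions and propagating them through the classical diffusion time-lag formula t_lag = L^2/(6D) as a barrier-efficiency indicator. Both the distributions and the proxy formula are illustrative; the cited tool uses full migration models.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed input distributions (illustrative values, not validated data):
d = np.exp(rng.normal(np.log(1e-13), 0.5, n))   # diffusion coeff., cm^2/s, lognormal
thick = rng.normal(20e-4, 2e-4, n)              # layer thickness, cm (20 um +/- 10%)

# Time-lag before significant breakthrough of a migrant through the barrier.
t_lag_days = thick**2 / (6 * d) / 86400.0

lo, med, hi = np.percentile(t_lag_days, [5, 50, 95])
print(f"breakthrough lag time: median {med:.0f} d, 90% interval [{lo:.0f}, {hi:.0f}] d")
```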
An information theory framework for dynamic functional domain connectivity.
Vergara, Victor M; Miller, Robyn; Calhoun, Vince
2017-06-01
Dynamic functional network connectivity (dFNC) analyzes the time evolution of coherent activity in the brain. In this technique, dynamic changes are considered for the whole brain. This paper proposes an information theory framework to measure information flowing among subsets of functional networks called functional domains. Our method aims at estimating the bits of information contained in and shared among domains. The succession of dynamic functional states is estimated at the domain level. Information quantity is based on the probabilities of observing each dynamic state. A mutual information measurement is then obtained from probabilities across domains. Thus, we name this value the cross domain mutual information (CDMI). Strong CDMIs were observed in relation to the subcortical domain. Domains related to sensorial input, motor control and the cerebellum form another CDMI cluster. Information flow among other domains was seldom found. Other methods of dynamic connectivity focus on whole brain dFNC matrices. In the current framework, information theory is applied to states estimated from pairs of multi-network functional domains. In this context, we apply information theory to measure information flow across functional domains. Identified CDMI clusters point to known information pathways in the basal ganglia and also among areas of sensorial input, patterns found in static functional connectivity. In contrast, CDMI across brain areas of higher level cognitive processing follows a different pattern that indicates scarce information sharing. These findings show that employing information theory to formally measure information flow through brain domains reveals additional features of functional connectivity. Copyright © 2017 Elsevier B.V. All rights reserved.
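A minimal sketch of a cross-domain mutual information computation, assuming each domain's dynamics have already been reduced to a sequence of discrete state labels (one per time window); the coupling in the toy data is arbitrary.

```python
import numpy as np

def cross_domain_mi(states_a, states_b):
    """I(X;Y) in bits from the joint histogram of two state-label sequences:
    I = sum_xy p(x,y) log2[ p(x,y) / (p(x) p(y)) ]."""
    a, b = np.asarray(states_a), np.asarray(states_b)
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for i, j in zip(a, b):
        joint[i, j] += 1
    pxy = joint / joint.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0                                   # skip empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
subcortical = rng.integers(0, 4, 1000)             # toy domain state sequence
motor = (subcortical + rng.integers(0, 2, 1000)) % 4  # partially coupled domain
print(f"CDMI = {cross_domain_mi(subcortical, motor):.3f} bits")
```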
Process-based Cost Estimation for Ramjet/Scramjet Engines
NASA Technical Reports Server (NTRS)
Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John
2003-01-01
Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As the product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
Analysis of positron lifetime spectra in polymers
NASA Technical Reports Server (NTRS)
Singh, Jag J.; Mall, Gerald H.; Sprinkle, Danny R.
1988-01-01
A new procedure for analyzing multicomponent positron lifetime spectra in polymers was developed. It requires initial estimates of the lifetimes and the intensities of various components, which are readily obtainable by a standard spectrum stripping process. These initial estimates, after convolution with the timing system resolution function, are then used as the inputs for a nonlinear least squares analysis to compute the estimates that conform to a global error minimization criterion. The convolution integral uses the full experimental resolution function, in contrast to the previous studies where analytical approximations of it were utilized. These concepts were incorporated into a generalized Computer Program for Analyzing Positron Lifetime Spectra (PAPLS) in polymers. Its validity was tested using several artificially generated data sets. These data sets were also analyzed using the widely used POSITRONFIT program. In almost all cases, the PAPLS program gives closer fit to the input values. The new procedure was applied to the analysis of several lifetime spectra measured in metal ion containing Epon-828 samples. The results are described.
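The fitting step, a multi-component decay model convolved with the full measured resolution function and refined from spectrum-stripping starting values by nonlinear least squares, can be sketched as follows; the two-component model, resolution shape, and counts are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 20, 0.05)                        # time axis, ns
res = np.exp(-0.5 * ((t - 1.0) / 0.12) ** 2)      # stand-in resolution function
res /= res.sum()                                  # (the paper uses the measured one)

def model(t, i1, tau1, i2, tau2, bg):
    """Two-component lifetime spectrum convolved with the resolution function."""
    decay = i1 / tau1 * np.exp(-t / tau1) + i2 / tau2 * np.exp(-t / tau2)
    return np.convolve(decay, res, mode='full')[:len(t)] + bg

rng = np.random.default_rng(0)
truth = (0.7, 0.4, 0.3, 2.0, 1e-3)
counts = rng.poisson(1e4 * model(t, *truth)) / 1e4  # noisy synthetic spectrum

p0 = (0.6, 0.3, 0.4, 1.5, 1e-3)                   # "spectrum stripping" estimates
popt, _ = curve_fit(model, t, counts, p0=p0, bounds=(0, np.inf))
print(np.round(popt, 3))
```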
Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.
Herzallah, Randa
2015-03-01
Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper, a generalised probabilistic controller design is presented for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed-loop control system and an ideal joint pdf, emphasising how uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective, all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm, and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.
Rosen, I G; Luczak, Susan E; Weiss, Jordan
2014-03-15
We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
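The episode-identification step can be sketched with the Hodrick-Prescott filter available in statsmodels: smooth the TAC series, then mark contiguous runs where the trend exceeds a small threshold. The smoothing parameter, threshold, and TAC curve are assumptions for illustration.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
t = np.arange(0, 48, 0.1)                              # hours
tac = (np.exp(-0.5 * ((t - 10) / 2) ** 2)              # two toy drinking episodes
       + 0.6 * np.exp(-0.5 * ((t - 30) / 3) ** 2)
       + rng.normal(0, 0.03, t.size))

cycle, trend = hpfilter(tac, lamb=1600)                # smooth trend + residual
above = trend > 0.05                                   # assumed episode threshold
edges = np.flatnonzero(np.diff(above.astype(int)))
# Pair up rise/fall indices; assumes the series starts and ends below threshold.
episodes = edges.reshape(-1, 2)
print([(round(t[s], 1), round(t[e], 1)) for s, e in episodes])
```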
Fuzzy rule based estimation of agricultural diffuse pollution concentration in streams.
Singh, Raj Mohan
2008-04-01
Outflow from agricultural fields carries diffuse pollutants such as nutrients, pesticides and herbicides and transports them into nearby streams. This is a matter of serious concern for water managers and environmental researchers. The application of chemicals to agricultural fields and the transport of these chemicals into streams are uncertain processes, which complicates reliable stream quality prediction. The chemical characteristics of the applied chemical and the percentage of area under chemical application are some of the main inputs that determine the resulting pollutant concentration in streams. Each of these inputs and outputs may contain measurement errors. A fuzzy rule-based model built on fuzzy sets is well suited to addressing uncertainties in the inputs, since overlapping membership functions can be assigned to each input even in situations of limited data availability. In this study, the ability of fuzzy sets to represent uncertainty in the input-output relationship is utilized to estimate the concentration of a herbicide, atrazine, in a stream. Data from the White River basin, a part of the Mississippi river system, are used to develop the fuzzy rule-based models. The performance of the developed methodology is found to be encouraging.
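In this spirit, a zero-order Sugeno-style sketch with two normalized inputs and overlapping triangular sets is given below; the set shapes, rule table, and consequent concentrations are illustrative, not the White River model.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

SETS = [(-0.5, 0.0, 0.5), (0.0, 0.5, 1.0), (0.5, 1.0, 1.5)]  # low/medium/high
CONC = np.array([[0.05, 0.15, 0.30],    # rule consequents (ug/L, toy values);
                 [0.15, 0.40, 0.70],    # rows = application-rate set,
                 [0.30, 0.70, 1.20]])   # cols = treated-area set

def estimate_concentration(rate, area_frac):
    """Weighted-average (Sugeno) inference over the 3x3 rule table."""
    mu_r = [tri(rate, *s) for s in SETS]
    mu_a = [tri(area_frac, *s) for s in SETS]
    w = np.array([[min(r, a) for a in mu_a] for r in mu_r])  # rule firing
    return float((w * CONC).sum() / w.sum())

print(estimate_concentration(0.6, 0.3))
```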
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Geyser, L. C.
1984-01-01
AESOP is a computer program for use in designing feedback controls and state estimators for linear multivariable systems. AESOP is meant to be used in an interactive manner. Each design task that the program performs is assigned a "function" number. The user accesses these functions either (1) by inputting a list of desired function numbers or (2) by inputting a single function number. In the latter case the choice of the function will in general depend on the results obtained by the previously executed function. The most important of the AESOP functions are those that design linear quadratic regulators and Kalman filters. The user interacts with the program when using these design functions by inputting design weighting parameters and by viewing graphic displays of designed system responses. Supporting functions are provided that obtain system transient and frequency responses, transfer functions, and covariance matrices. The program can also compute open-loop system information such as stability (eigenvalues), eigenvectors, controllability, and observability. The program is written in ANSI-66 FORTRAN for use on an IBM 3033 using TSS 370. Descriptions of all subroutines and results of two test cases are included in the appendixes.
Autonomous frequency domain identification: Theory and experiment
NASA Technical Reports Server (NTRS)
Yam, Yeung; Bayard, D. S.; Hadaegh, F. Y.; Mettler, E.; Milman, M. H.; Scheid, R. E.
1989-01-01
The analysis, design, and on-orbit tuning of robust controllers require more information about the plant than simply a nominal estimate of the plant transfer function. Information is also required concerning the uncertainty in the nominal estimate, or more generally, the identification of a model set within which the true plant is known to lie. The identification methodology that was developed and experimentally demonstrated makes use of a simple but useful characterization of the model uncertainty based on the output error. This is a characterization of the additive uncertainty in the plant model, which has found considerable use in many robust control analysis and synthesis techniques. The identification process is initiated by a stochastic input u which is applied to the plant p, giving rise to the output y. The spectral estimate ĥ = Puy/Puu is used as an estimate of p, and the model order is estimated using the product moment matrix (PMM) method. A parametric model p̂ is then determined by curve fitting the spectral estimate to a rational transfer function. The additive uncertainty δm = p - p̂ is then estimated by the cross-spectral estimate δ̂ = Pue/Puu, where e = y - ŷ is the output error and ŷ = p̂u is the computed output of the parametric model subjected to the actual input u. The experimental results demonstrate that the curve fitting algorithm produces the reduced-order plant model which minimizes the additive uncertainty. The nominal transfer function estimate p̂ and the estimate δ̂ of the additive uncertainty δm are subsequently available to be used for optimization of robust controller performance and stability.
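Both estimators in this description translate directly into cross-spectral computations. A numerical sketch follows; the plant, the reduced model order, and the noise level are illustrative.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs, n = 100.0, 16384
u = rng.normal(size=n)                                 # stochastic input u
b_true, a_true = signal.butter(4, 8.0, fs=fs)          # "true" plant p
y = signal.lfilter(b_true, a_true, u) + 0.02 * rng.normal(size=n)

f, p_uu = signal.welch(u, fs=fs, nperseg=1024)
_, p_uy = signal.csd(u, y, fs=fs, nperseg=1024)
h = p_uy / p_uu                                        # spectral estimate of p

b_fit, a_fit = signal.butter(2, 8.0, fs=fs)            # reduced-order model p-hat
y_hat = signal.lfilter(b_fit, a_fit, u)                # computed model output
e = y - y_hat                                          # output error
_, p_ue = signal.csd(u, e, fs=fs, nperseg=1024)
delta = p_ue / p_uu                                    # additive uncertainty estimate
print(np.abs(delta).max())
```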
Membrane voltage changes in passive dendritic trees: a tapering equivalent cylinder model.
Poznański, R R
1988-01-01
An exponentially tapering equivalent cylinder model is employed in order to approximate the loss of the dendritic trunk parameter observed from anatomical data on apical and basilar dendrites of CA1 and CA3 hippocampal pyramidal neurons. This model allows dendritic trees with a relative paucity of branching to be treated. In particular, terminal branches are not required to end at the same electrotonic distance. The Laplace transform method is used to obtain analytic expressions for the Green's function corresponding to an instantaneous pulse of current injected at a single point along a tapering equivalent cylinder with sealed ends. The time course of the voltage in response to an arbitrary input is computed using the Green's function in a convolution integral. Examples of current input considered are (1) an infinitesimally brief (Dirac delta function) pulse and (2) a step pulse. It is demonstrated that inputs located on a tapering equivalent cylinder are more effective at the soma than identically placed inputs on a nontapering equivalent cylinder. Asymptotic solutions are derived to enable the voltage response behaviour over both relatively short and long time periods to be analysed. Semilogarithmic plots of these solutions provide a basis for estimating the membrane time constant tau m from experimental transients. Transient voltage decrement from a clamped soma reveals that tapering tends to reduce the error associated with inadequate voltage clamping of the dendritic membrane. A formula is derived which shows that tapering tends to increase the estimate of the electrotonic length parameter L.
A new polytopic approach for the unknown input functional observer design
NASA Astrophysics Data System (ADS)
Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed
2018-03-01
In this paper, a constructive procedure to design functional unknown input observers for nonlinear continuous-time systems is proposed under the polytopic Takagi-Sugeno framework. An equivalent representation for the nonlinear model is achieved using the sector nonlinearity transformation. Applying Lyapunov theory and L2 attenuation, linear matrix inequality conditions are deduced and solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input for the linear case is used. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables; full state estimation with full- and reduced-order cases) are considered, and it is shown that the proposed conditions correspond to those presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a quadrotor aerial robot landing and a wastewater treatment plant. Both systems are highly nonlinear and represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.
Stochastic Estimation of Cost Frontier: Evidence from Bangladesh
ERIC Educational Resources Information Center
Mamun, Shamsul Arifeen Khan
2012-01-01
In the literature on higher education cost functions, substantial knowledge has been created on economies of scale in the context of developed countries, but knowledge of input demand is lacking. Moreover, empirical knowledge in the context of developing countries is very meagre. The paper fills this knowledge gap, estimating a…
Constraints on the age and evolution of the Galaxy from the white dwarf luminosity function
NASA Technical Reports Server (NTRS)
Wood, M. A.
1992-01-01
The white dwarf disk luminosity function is explored using observational results of Liebert et al. (1988, 1989) as a template for comparison, and the cooling curves of Wood (1990, 1991) as the input basis functions for the integration. The star formation rate over the history of the Galaxy is found to be constant to within an order of magnitude, and the disk age lies in the range 6-13.5 Gyr, where roughly 40 percent of the uncertainty is due to the observational uncertainties. Using the best current estimates as inputs to the integration, the disk ages range from 7.5 to 11 Gyr, i.e., they are substantially younger than most estimates for the halo globular clusters but in reasonable agreement with those for the disk globular clusters and open clusters. The ages of these differing populations, taken together, are consistent with the pressure-supported collapse models of early spiral Galactic evolution.
Optimal nonlinear codes for the perception of natural colours.
von der Twer, T; MacLeod, D I
2001-08-01
We discuss how visual nonlinearity can be optimized for the precise representation of environmental inputs. Such optimization leads to neural signals with a compressively nonlinear input-output function, the gradient of which is matched to the cube root of the probability density function (PDF) of the environmental input values (and not to the PDF directly, as in histogram equalization). Comparisons between theory and psychophysical and electrophysiological data are roughly consistent with the idea that parvocellular (P) cells are optimized for the precise representation of colour: their contrast-response functions span a range appropriately matched to the environmental distribution of natural colours along each dimension of colour space. Thus P cell codes for colour may have been selected to minimize error in the perceptual estimation of stimulus parameters for natural colours. But magnocellular (M) cells have a much stronger than expected saturating nonlinearity; this supports the view that the function of M cells is mainly to detect boundaries rather than to specify contrast or lightness.
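The rule is easy to state numerically: the optimal response function is the normalized cumulative integral of the cube root of the input PDF. A short sketch with a toy Gaussian input distribution, contrasted with histogram equalization:

```python
import numpy as np

x = np.linspace(-3, 3, 1001)
dx = x[1] - x[0]
pdf = np.exp(-0.5 * x**2)
pdf /= pdf.sum() * dx                       # normalize toy input PDF

# Optimal code: slope proportional to p(x)^(1/3), mapped onto [0, 1].
cube_root_code = np.cumsum(np.cbrt(pdf)) * dx
cube_root_code /= cube_root_code[-1]

# Histogram equalization for contrast: slope proportional to p(x) itself.
equalizer = np.cumsum(pdf) * dx

print(cube_root_code[::250], equalizer[::250])
```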
Aerodynamic Parameter Estimation for the X-43A (Hyper-X) from Flight Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Derry, Stephen D.; Smith, Mark S.
2005-01-01
Aerodynamic parameters were estimated based on flight data from the third flight of the X-43A hypersonic research vehicle, also called Hyper-X. Maneuvers were flown using multiple orthogonal phase-optimized sweep inputs applied as simultaneous control surface perturbations at Mach 8, 7, 6, 5, 4, and 3 during the vehicle descent. Aerodynamic parameters, consisting of non-dimensional longitudinal and lateral stability and control derivatives, were estimated from flight data at each Mach number. Multi-step inputs at nearly the same flight conditions were also flown to assess the prediction capability of the identified models. Prediction errors were found to be comparable in magnitude to the modeling errors, which indicates accurate modeling. Aerodynamic parameter estimates were plotted as a function of Mach number, and compared with estimates from the pre-flight aerodynamic database, which was based on wind-tunnel tests and computational fluid dynamics. Agreement between flight estimates and values computed from the aerodynamic database was excellent overall.
SDR Input Power Estimation Algorithms
NASA Technical Reports Server (NTRS)
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
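As a concrete stand-in for the first of the three algorithms, the sketch below fits input power as an affine function of digital AGC reading and temperature over a narrow power range by least squares; the simulated AGC response and all coefficients are synthetic, not the characterized GD SDR response.

```python
import numpy as np

rng = np.random.default_rng(0)
p_in = rng.uniform(-90, -70, 300)                 # input power, dBm (narrow range)
temp = rng.uniform(10, 40, 300)                   # temperature, deg C
agc = 2.0 * p_in + 0.1 * temp + rng.normal(0, 0.5, 300)   # toy AGC response

# Affine model: p_in ~ c0*agc + c1*temp + c2, fit over the calibration set.
A = np.column_stack([agc, temp, np.ones_like(agc)])
coef, *_ = np.linalg.lstsq(A, p_in, rcond=None)

p_est = A @ coef
rmse = np.sqrt(np.mean((p_est - p_in) ** 2))
print(f"RMSE over calibration set: {rmse:.2f} dB")
```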
AU-FREDI - AUTONOMOUS FREQUENCY DOMAIN IDENTIFICATION
NASA Technical Reports Server (NTRS)
Yam, Y.
1994-01-01
The Autonomous Frequency Domain Identification program, AU-FREDI, is a system of methods, algorithms and software that was developed for the identification of structural dynamic parameters and system transfer function characterization for control of large space platforms and flexible spacecraft. It was validated in the CALTECH/Jet Propulsion Laboratory's Large Spacecraft Control Laboratory. Due to the unique characteristics of this laboratory environment, and the environment-specific nature of many of the software's routines, AU-FREDI should be considered to be a collection of routines which can be modified and reassembled to suit system identification and control experiments on large flexible structures. The AU-FREDI software was originally designed to command plant excitation and handle subsequent input/output data transfer, and to conduct system identification based on the I/O data. Key features of the AU-FREDI methodology are as follows: 1. AU-FREDI has on-line digital filter design to support on-orbit optimal input design and data composition. 2. Data composition of experimental data in overlapping frequency bands overcomes finite actuator power constraints. 3. Recursive least squares sine-dwell estimation accurately handles digitized sinusoids and low frequency modes. 4. The system also includes automated estimation of model order using a product moment matrix. 5. A sampled-data transfer function parametrization supports digital control design. 6. Minimum variance estimation is assured via a curve-fitting algorithm with iterative reweighting. 7. Robust root solvers accurately factorize high order polynomials to determine frequency and damping estimates. 8. Output error characterization of model additive uncertainty supports robustness analysis. The research objectives associated with AU-FREDI were particularly useful in focusing the identification methodology for realistic on-orbit testing conditions. Rather than estimating the entire structure, as is typically done in ground structural testing, AU-FREDI identifies only the key transfer function parameters and uncertainty bounds that are necessary for on-line design and tuning of robust controllers. AU-FREDI's system identification algorithms are independent of the JPL-LSCL environment, and can easily be extracted and modified for use with input/output data files. The basic approach of AU-FREDI's system identification algorithms is to non-parametrically identify the sampled data in the frequency domain using either stochastic or sine-dwell input, and then to obtain a parametric model of the transfer function by curve-fitting techniques. A cross-spectral analysis of the output error is used to determine the additive uncertainty in the estimated transfer function. The nominal transfer function estimate and the estimate of the associated additive uncertainty can be used for robust control analysis and design. AU-FREDI's I/O data transfer routines are tailored to the environment of the CALTECH/JPL-LSCL, which included a special operating system to interface with the testbed. Input commands for a particular experiment (wideband, narrowband, or sine-dwell) were computed on-line and then issued to respective actuators by the operating system. The operating system also took measurements through displacement sensors and passed them back to the software for storage and off-line processing.
In order to make use of AU-FREDI's I/O data transfer routines, a user would need to provide an operating system capable of overseeing such functions between the software and the experimental setup at hand. The program documentation contains information designed to support users in either providing such an operating system or modifying the system identification algorithms for use with input/output data files. It provides a history of the theoretical, algorithmic and software development efforts including operating system requirements and listings of some of the various special purpose subroutines which were developed and optimized for Lahey FORTRAN compilers on IBM PC-AT computers before the subroutines were integrated into the system software. Potential purchasers are encouraged to obtain and review the documentation before purchasing the AU-FREDI software. AU-FREDI is distributed in DEC VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard media) or a TK50 tape cartridge. AU-FREDI was developed in 1989 and is a copyrighted work with all copyright vested in NASA.
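AU-FREDI's core identification step, fitting a parametric transfer function to non-parametric frequency-domain data, can be illustrated with a Levy-style linear least-squares curve fit. The sketch below is a minimal single-mode example on assumed synthetic FRF data; AU-FREDI's own fitter adds iterative reweighting to approach minimum-variance estimates, which is omitted here:

```python
import numpy as np

# Frequency response of a 'true' lightly damped mode (illustrative stand-in
# for a non-parametrically identified FRF): H(s) = 4 / (s^2 + 0.4 s + 4).
w = np.linspace(0.1, 5.0, 200)
s = 1j * w
H = 4.0 / (s**2 + 0.4 * s + 4.0)

# Levy-style linear least squares for H(s) = b0 / (s^2 + a1 s + a0):
#   b0 - a0*H - a1*s*H = s^2 * H,  unknowns x = [b0, a0, a1].
M = np.column_stack([np.ones_like(s), -H, -s * H])
rhs = s**2 * H
A = np.vstack([M.real, M.imag])
b = np.concatenate([rhs.real, rhs.imag])
b0, a0, a1 = np.linalg.lstsq(A, b, rcond=None)[0]

wn = np.sqrt(a0)                # natural frequency estimate
zeta = a1 / (2.0 * wn)          # damping ratio estimate
print(f"wn = {wn:.3f} rad/s, zeta = {zeta:.4f}")
```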
NASA Astrophysics Data System (ADS)
Xiong, Guoming; Cumming, Paul; Todica, Andrei; Hacker, Marcus; Bartenstein, Peter; Böning, Guido
2012-12-01
Absolute quantitation of the cerebral metabolic rate for glucose (CMRglc) can be obtained in positron emission tomography (PET) studies when serial measurements of the arterial [18F]-fluoro-deoxyglucose (FDG) input are available. Since this is not always practical in PET studies of rodents, there has been considerable interest in defining an image-derived input function (IDIF) by placing a volume of interest (VOI) within the left ventricle of the heart. However, spill-in arising from trapping of FDG in the myocardium often leads to progressive contamination of the IDIF, which propagates to underestimation of the magnitude of CMRglc. We therefore developed a novel, non-invasive method for correcting the IDIF without scaling to a blood sample. To this end, we first obtained serial arterial samples and dynamic FDG-PET data of the head and heart in a group of eight anaesthetized rats. We fitted a bi-exponential function to the serial measurements of the IDIF, and then used the linear graphical Gjedde-Patlak method to describe the accumulation in myocardium. We next estimated the magnitude of myocardial spill-in reaching the left ventricle VOI by assuming a Gaussian point-spread function, and corrected the measured IDIF for this estimated spill-in. Finally, we calculated parametric maps of CMRglc using the corrected IDIF, and for the sake of comparison, relative to serial blood sampling from the femoral artery. The uncorrected IDIF resulted in 20% underestimation of the magnitude of CMRglc relative to the gold standard arterial input method. However, there was no bias with the corrected IDIF, which was robust to the variable extent of myocardial tracer uptake, such that there was a very high correlation between individual CMRglc measurements using the corrected IDIF with gold-standard arterial input results. Based on simulation, we furthermore find that electrocardiogram (ECG) gating is not necessary for IDIF quantitation using our approach.
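The Gjedde-Patlak step used above reduces to a linear regression once the input function is known. A minimal sketch with synthetic curves (illustrative shapes, not the rat data):

```python
import numpy as np

# Synthetic curves (illustrative): plasma input Cp(t) and an irreversibly
# trapping tissue curve Ct(t) = Ki * integral(Cp) + V0 * Cp.
t = np.linspace(0.1, 60.0, 120)              # minutes
cp = 5.0 * np.exp(-0.3 * t) + 0.5 * np.exp(-0.01 * t)
int_cp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
Ki_true, V0 = 0.02, 0.3
ct = Ki_true * int_cp + V0 * cp

# Patlak plot: y = Ct/Cp versus x = int(Cp)/Cp; the slope over late frames is Ki.
x, y = int_cp / cp, ct / cp
late = t > 20.0
Ki, intercept = np.polyfit(x[late], y[late], 1)
print(f"estimated Ki = {Ki:.4f} /min (true {Ki_true})")
```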
Sprague, Lori A.; Gronberg, Jo Ann M.
2013-01-01
Anthropogenic inputs of nitrogen and phosphorus to each county in the conterminous United States and to the watersheds of 495 surface-water sites studied as part of the U.S. Geological Survey National Water-Quality Assessment Program were quantified for the years 1992, 1997, and 2002. Estimates of inputs of nitrogen and phosphorus from biological fixation by crops (for nitrogen only), human consumption, crop production for human consumption, animal production for human consumption, animal consumption, and crop production for animal consumption for each county are provided in a tabular dataset. These county-level estimates were allocated to the watersheds of the surface-water sites to estimate watershed-level inputs from the same sources; these estimates also are provided in a tabular dataset, together with calculated estimates of net import of food and net import of feed and previously published estimates of inputs from atmospheric deposition, fertilizer, and recoverable manure. The previously published inputs are provided for each watershed so that final estimates of total anthropogenic nutrient inputs could be calculated. Estimates of total anthropogenic inputs are presented together with previously published estimates of riverine loads of total nitrogen and total phosphorus for reference.
Estimation of Enterococci Input from Bathers and Animals on a Recreational Beach Using Camera Images
Wang, John D.; Solo-Gabriele, Helena M.; Abdelzaher, Amir M.; Fleming, Lora E.
2010-01-01
Enterococci are used nationwide as a water quality indicator for marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We estimated their respective source functions by developing a methodology for counting individuals, to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of human and animal visitors. The developed method translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for average days of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent the largest source of enterococci relative to humans and birds. PMID:20381094
Wong, Koon-Pong; Zhang, Xiaoli; Huang, Sung-Cheng
2013-01-01
Purpose Accurate determination of the plasma input function (IF) is essential for absolute quantification of physiological parameters in positron emission tomography (PET). However, it requires an invasive and tedious procedure of arterial blood sampling that is challenging in mice because of the limited blood volume. In this study, a hybrid modeling approach is proposed to estimate the plasma IF of 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) in mice using accumulated radioactivity in urinary bladder together with a single late-time blood sample measurement. Methods Dynamic PET scans were performed on nine isoflurane-anesthetized male C57BL/6 mice after a bolus injection of [18F]FDG at the lateral caudal vein. During a 60- or 90-min scan, serial blood samples were taken from the femoral artery. Image data were reconstructed using filtered backprojection with CT-based attenuation correction. Total accumulated radioactivity in the urinary bladder was fitted to a renal compartmental model with the last blood sample and a 1-exponential function that described the [18F]FDG clearance in blood. Multiple late-time blood sample estimates were calculated by the blood [18F]FDG clearance equation. A sum of 4-exponentials was assumed for the plasma IF that served as a forcing function to all tissues. The estimated plasma IF was obtained by simultaneously fitting the [18F]FDG model to the time-activity curves (TACs) of liver and muscle and the forcing function to early (0–1 min) left-ventricle data (corrected for delay, dispersion, partial-volume effects and erythrocyte uptake) and the late-time blood estimates. Using only the blood sample acquired at the end of the study to estimate the IF and the use of liver TAC as an alternative IF were also investigated. Results The area under the plasma TACs calculated for all studies using the hybrid approach was not significantly different from that using all blood samples. [18F]FDG uptake constants in brain, myocardium, skeletal muscle and liver computed by the Patlak analysis using estimated and measured plasma TACs were in excellent agreement (slope ~ 1; R2 > 0.938). The IF estimated using only the last blood sample acquired at the end of the study and the use of liver TAC as plasma IF provided less reliable results. Conclusions The estimated plasma IFs obtained with the hybrid model agreed well with those derived from arterial blood sampling. Importantly, the proposed method obviates the need for arterial catheterization, making it possible to perform repeated dynamic [18F]FDG PET studies on the same animal. Liver TAC is unsuitable as an input function for absolute quantification of [18F]FDG PET data. PMID:23322346
Revision of the Rawls et al. (1982) pedotransfer functions for their applicability to US croplands
USDA-ARS?s Scientific Manuscript database
Large scale environmental impact studies typically involve the use of simulation models and require a variety of inputs, some of which may need to be estimated in absence of adequate measured data. As an example, soil water retention needs to be estimated for a large number of soils that are to be u...
Optimal allocation of testing resources for statistical simulations
NASA Astrophysics Data System (ADS)
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses a multivariate t-distribution and a Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
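A minimal sketch of the sampling step described above, under simple assumptions (normal data, conventional degrees-of-freedom choices); the distribution parameters and the output y = x1 + x2 are illustrative placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Small initial data set for two correlated input variables (illustrative).
data = rng.multivariate_normal([10.0, 5.0], [[4.0, 1.2], [1.2, 1.0]], size=15)
n, p = data.shape
xbar, S = data.mean(axis=0), np.cov(data, rowvar=False)

# Realizations of the population mean: multivariate t around the sample mean
# with shape S/n and n-1 degrees of freedom (a conventional choice).
mean_draws = stats.multivariate_t(loc=xbar, shape=S / n, df=n - 1).rvs(1000, random_state=rng)

# Realizations of the population covariance: Wishart with scale S/(n-1),
# so draws average to the sample covariance.
cov_draws = stats.wishart(df=n - 1, scale=S / (n - 1)).rvs(1000, random_state=rng)

# Variability of an output moment under input-parameter uncertainty for
# the illustrative output y = x1 + x2.
y_means = mean_draws.sum(axis=1)
y_vars = cov_draws[:, 0, 0] + cov_draws[:, 1, 1] + 2 * cov_draws[:, 0, 1]
print("sd of estimated output mean:", y_means.std())
print("mean of estimated output variance:", y_vars.mean())
```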
Khazaee, Mostafa; Markazi, Amir H D; Omidi, Ehsan
2015-11-01
In this paper, a new Adaptive Fuzzy Predictive Sliding Mode Control (AFP-SMC) is presented for nonlinear systems with uncertain dynamics and unknown input delay. The control unit consists of a fuzzy inference system to approximate the ideal linearization control, together with a switching strategy to compensate for the estimation errors. Also, an adaptive fuzzy predictor is used to estimate the future values of the system states to compensate for the time delay. The adaptation laws are used to tune the controller and predictor parameters, which guarantee the stability based on a Lyapunov-Krasovskii functional. To evaluate the method's effectiveness, simulations and an experiment on an overhead crane system are presented. According to the obtained results, AFP-SMC can effectively control the uncertain nonlinear systems, subject to input delays of known bound. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Synaptic control of the shape of the motoneuron pool input-output function
Heckman, Charles J.
2017-01-01
Although motoneurons have often been considered to be fairly linear transducers of synaptic input, recent evidence suggests that strong persistent inward currents (PICs) in motoneurons allow neuromodulatory and inhibitory synaptic inputs to induce large nonlinearities in the relation between the level of excitatory input and motor output. To try to estimate the possible extent of this nonlinearity, we developed a pool of model motoneurons designed to replicate the characteristics of motoneuron input-output properties measured in medial gastrocnemius motoneurons in the decerebrate cat with voltage-clamp and current-clamp techniques. We drove the model pool with a range of synaptic inputs consisting of various mixtures of excitation, inhibition, and neuromodulation. We then looked at the relation between excitatory drive and total pool output. Our results revealed that the PICs not only enhance gain but also induce a strong nonlinearity in the relation between the average firing rate of the motoneuron pool and the level of excitatory input. The relation between the total simulated force output and input was somewhat more linear because of higher force outputs in later-recruited units. We also found that the nonlinearity can be increased by increasing neuromodulatory input and/or balanced inhibitory input and minimized by a reciprocal, push-pull pattern of inhibition. We consider the possibility that a flexible input-output function may allow motor output to be tuned to match the widely varying demands of the normal motor repertoire. NEW & NOTEWORTHY Motoneuron activity is generally considered to reflect the level of excitatory drive. However, the activation of voltage-dependent intrinsic conductances can distort the relation between excitatory drive and the total output of a pool of motoneurons. Using a pool of realistic motoneuron models, we show that pool output can be a highly nonlinear function of synaptic input but linearity can be achieved through adjusting the time course of excitatory and inhibitory synaptic inputs. PMID:28053245
Influence of speckle image reconstruction on photometric precision for large solar telescopes
NASA Astrophysics Data System (ADS)
Peck, C. L.; Wöger, F.; Marino, J.
2017-11-01
Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
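The convolve-then-deconvolve evaluation loop described above can be sketched compactly in the Fourier domain. Everything below (the image, the transfer-function shapes, the regularization constant) is an illustrative stand-in for the paper's turbulence and AO simulations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-ins: a 'solar' image, an atmospheric transfer function,
# and a slightly mis-estimated model speckle transfer function.
img = rng.random((256, 256))
fy, fx = np.meshgrid(np.fft.fftfreq(256), np.fft.fftfreq(256), indexing="ij")
f2 = fx**2 + fy**2
otf_atmos = np.exp(-f2 / (2 * 0.20**2))      # blurring actually applied
stf_model = np.exp(-f2 / (2 * 0.21**2))      # model used for reconstruction

# Forward blur, then amplitude reconstruction by regularized deconvolution
# with the model transfer function (Wiener-like; eps avoids division blow-up).
blurred = np.fft.ifft2(np.fft.fft2(img) * otf_atmos).real
eps = 1e-3
recon = np.fft.ifft2(np.fft.fft2(blurred) * stf_model.conj()
                     / (np.abs(stf_model)**2 + eps)).real

# Photometric precision: relative intensity error of the reconstruction.
rel_err = np.abs(recon - img).mean() / img.mean()
print(f"mean relative photometric error: {rel_err:.3%}")
```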
An external logic architecture for implementing traffic signal system control strategies.
DOT National Transportation Integrated Search
2011-09-01
The built-in logic functions in traffic controllers have very limited capability to store information, to analyze input data, to estimate performance measures, and to adopt control strategy decisions. These capabilities are imperative to support traf...
Assessment of input-output properties and control of neuroprosthetic hand grasp.
Hines, A E; Owens, N E; Crago, P E
1992-06-01
Three tests have been developed to evaluate rapidly and quantitatively the input-output properties and patient control of neuroprosthetic hand grasp. Each test utilizes a visual pursuit tracking task during which the subject controls the grasp force and grasp opening (position) of the hand. The first test characterizes the static input-output properties of the hand grasp, where the input is a slowly changing patient generated command signal and the outputs are grasp force and grasp opening. Nonlinearities and inappropriate slopes have been documented in these relationships, and in some instances the need for system retuning has been indicated. For each subject larger grasp forces were produced when grasping larger objects, and for some subjects the shapes of the relationships also varied with object size. The second test quantifies the ability of the subject to control the hand grasp outputs while tracking steps and ramps. Neuroprosthesis users had rms errors two to three times larger when tracking steps versus ramps, and had rms errors four to five times larger than normals when tracking ramps. The third test provides an estimate of the frequency response of the hand grasp system dynamics, from input and output data collected during a random tracking task. Transfer functions were estimated by spectral analysis after removal of the static input-output nonlinearities measured in the first test. The dynamics had low-pass filter characteristics with 3 dB cutoff frequencies from 1.0 to 1.4 Hz. The tests developed in this study provide a rapid evaluation of both the system and the user. They provide information to 1) help interpret subject performance of functional tasks, 2) evaluate the efficacy of system features such as closed-loop control, and 3) screen the neuroprosthesis to indicate the need for retuning.
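The third test's spectral estimate of the grasp dynamics corresponds to a standard H1 frequency-response estimator (cross-spectrum over input auto-spectrum). A minimal sketch, assuming a second-order low-pass system as a stand-in for the hand grasp:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
fs = 50.0                                    # Hz, illustrative sample rate
t = np.arange(0, 120, 1 / fs)

# Random command input driving a low-pass 'hand grasp' system (~1 Hz cutoff).
u = rng.standard_normal(t.size)
b, a = signal.butter(2, 1.0, fs=fs)          # stand-in grasp dynamics
y = signal.lfilter(b, a, u) + 0.05 * rng.standard_normal(t.size)

# H1 frequency response estimate: cross-spectrum over input auto-spectrum.
f, Puy = signal.csd(u, y, fs=fs, nperseg=1024)
_, Puu = signal.welch(u, fs=fs, nperseg=1024)
H = Puy / Puu

# 3 dB cutoff: first frequency where |H| falls below the low-frequency gain.
mag = np.abs(H)
f3db = f[np.argmax(mag < mag[1] / np.sqrt(2))]
print(f"estimated 3 dB cutoff: {f3db:.2f} Hz")
```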
A flatness-based control approach to drug infusion for cardiac function regulation
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Zervos, Nikolaos; Melkikh, Alexey
2016-12-01
A new control method based on differential flatness theory is developed in this article, aiming at solving the problem of regulation of haemodynamic parameters. Control of the cardiac output (the volume of blood pumped by the heart per unit of time) and of the arterial blood pressure is achieved through the administered infusion of cardiovascular drugs, such as dopamine and sodium nitroprusside. Time delays between the control inputs and the system's outputs are taken into account. Using the principle of dynamic extension, that is, by considering certain control inputs and their derivatives as additional state variables, a state-space description of the heart's function is obtained. It is proven that the dynamic model of the heart is a differentially flat one. This enables its transformation into a linear canonical and decoupled form, for which the design of a stabilizing feedback controller becomes possible. The proposed feedback controller is of proven stability and assures fast and accurate tracking of the reference setpoints by the outputs of the heart's dynamic model. Moreover, by using a Kalman filter-based disturbance estimator, it becomes possible to estimate in real-time and compensate for the model uncertainty and external perturbation inputs that affect the heart's model.
Adaptive model reduction for continuous systems via recursive rational interpolation
NASA Technical Reports Server (NTRS)
Lilly, John H.
1994-01-01
A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
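The MDFT at the heart of the method updates each monitored frequency bin recursively as samples arrive, at O(1) cost per sample per bin. A minimal sliding-DFT sketch (the function name is illustrative):

```python
import numpy as np

def sliding_dft_bin(x, k, N):
    """Track DFT bin k over a length-N sliding window (moving DFT).

    Recursive update: X_k(n) = exp(j*2*pi*k/N) * (X_k(n-1) + x[n] - x[n-N]),
    O(1) per sample instead of recomputing an N-point DFT.
    """
    w = np.exp(2j * np.pi * k / N)
    X = 0.0 + 0.0j
    out = []
    for n in range(len(x)):
        x_old = x[n - N] if n >= N else 0.0
        X = w * (X + x[n] - x_old)
        out.append(X)
    return np.array(out)

# Check against a direct DFT of the final window.
rng = np.random.default_rng(4)
x = rng.standard_normal(256)
N, k = 64, 5
X_rec = sliding_dft_bin(x, k, N)[-1]
n = np.arange(N)
X_dir = np.sum(x[-N:] * np.exp(-2j * np.pi * k * n / N))
print(np.allclose(X_rec, X_dir))  # True up to round-off
```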
Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.
Farsani, Zahra Amini; Schmid, Volker J
2017-01-01
In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM. Kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data. This method is useful for generating the probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data. The proposed method allows a good fit of the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.
GD SDR Automatic Gain Control Characterization Testing
NASA Technical Reports Server (NTRS)
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) will provide experimenters an opportunity to develop and demonstrate experimental waveforms in space. The GD SDR platform and initial waveform were characterized on the ground before launch and the data will be compared to the data that will be collected during on-orbit operations. A desired function of the SDR is to estimate the received signal to noise ratio (SNR), which would enable experimenters to better determine on-orbit link conditions. The GD SDR does not have an SNR estimator, but it does have an analog and a digital automatic gain control (AGC). The AGCs can be used to estimate the SDR input power, which can be converted into an SNR. Tests were conducted to characterize the AGC response to changes in SDR input power and temperature. The purpose of this paper is to describe the tests that were conducted, discuss the results showing how the AGCs relate to the SDR input power, and provide recommendations for AGC testing and characterization.
A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowder, Stephen Vernon; Moyer, Robert D.
2005-05-01
Proposed supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
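A toy sketch of the two-stage idea under normal-input assumptions; the measurement equation, sample sizes, and the chi-square draw for the variance (a conventional model of finite-sample variance uncertainty) are all illustrative, not the paper's thermistor case study:

```python
import numpy as np

rng = np.random.default_rng(5)

def measurand(x1, x2):
    """Illustrative non-linear measurement equation, y = x1 * exp(x2)."""
    return x1 * np.exp(x2)

# Stage 0: small observed samples of each input (finite sample sizes).
s1 = rng.normal(2.0, 0.2, size=8)
s2 = rng.normal(0.5, 0.1, size=12)

outer, inner, y_means = 1000, 1000, []
for _ in range(outer):
    # Stage 1: draw plausible population parameters for each input, honouring
    # the sampling distributions of the sample variance and mean.
    v1 = (len(s1) - 1) * s1.var(ddof=1) / rng.chisquare(len(s1) - 1)
    m1 = rng.normal(s1.mean(), np.sqrt(v1 / len(s1)))
    v2 = (len(s2) - 1) * s2.var(ddof=1) / rng.chisquare(len(s2) - 1)
    m2 = rng.normal(s2.mean(), np.sqrt(v2 / len(s2)))
    # Stage 2: ordinary propagation-of-distributions with those parameters fixed.
    y = measurand(rng.normal(m1, np.sqrt(v1), inner),
                  rng.normal(m2, np.sqrt(v2), inner))
    y_means.append(y.mean())

# Spread across outer draws = extra uncertainty from the finite input samples.
print("sd of the estimated mean of y:", np.std(y_means))
```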
NASA Astrophysics Data System (ADS)
Tomasi, G.; Kimberley, S.; Rosso, L.; Aboagye, E.; Turkheimer, F.
2012-04-01
In positron emission tomography (PET) studies involving organs different from the brain, ignoring the metabolite contribution to the tissue time-activity curves (TAC), as in the standard single-input (SI) models, may compromise the accuracy of the estimated parameters. We employed here double-input (DI) compartmental modeling (CM), previously used for [11C]thymidine, and a novel DI spectral analysis (SA) approach on the tracers 5-[18F]fluorouracil (5-[18F]FU) and [18F]fluorothymidine ([18F]FLT). CM and SA were performed initially with an SI approach using the parent plasma TAC as an input function. These methods were then employed using a DI approach with the metabolite plasma TAC as an additional input function. Regions of interest (ROIs) corresponding to healthy liver, kidneys and liver metastases for 5-[18F]FU and to tumor, vertebra and liver for [18F]FLT were analyzed. For 5-[18F]FU, the improvement of the fit quality with the DI approaches was remarkable; in CM, the Akaike information criterion (AIC) always selected the DI over the SI model. Volume of distribution estimates obtained with DI CM and DI SA were in excellent agreement, for both parent 5-[18F]FU (R2 = 0.91) and metabolite [18F]FBAL (R2 = 0.99). For [18F]FLT, the DI methods provided notable improvements but less substantial than for 5-[18F]FU due to the lower rate of metabolism of [18F]FLT. On the basis of the AIC values, agreement between [18F]FLT Ki estimated with the SI and DI models was good (R2 = 0.75) for the ROIs where the metabolite contribution was negligible, indicating that the additional input did not bias the parent tracer only-related estimates. When the AIC suggested a substantial contribution of the metabolite [18F]FLT-glucuronide, on the other hand, the change in the parent tracer only-related parameters was significant (R2 = 0.33 for Ki). Our results indicated that improvements of DI over SI approaches can range from moderate to substantial and are more significant for tracers with a high rate of metabolism. Furthermore, they showed that SA is suitable for DI modeling and can be used effectively in the analysis of PET data.
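The AIC comparison used to select between single-input and double-input fits penalizes each model's residual error by its number of free parameters. A minimal sketch for least-squares fits with Gaussian errors (the residuals and parameter counts are placeholders):

```python
import numpy as np

def aic(residuals, n_params):
    """Akaike information criterion for a least-squares fit (Gaussian errors)."""
    n = residuals.size
    return n * np.log(np.sum(residuals**2) / n) + 2 * n_params

# Illustrative: residuals from a single-input and a double-input fit of the
# same tissue TAC; the DI model spends extra parameters on the metabolite input.
rng = np.random.default_rng(6)
res_si = rng.normal(0, 1.0, 40)    # worse fit, fewer parameters
res_di = rng.normal(0, 0.6, 40)    # better fit, more parameters
print("AIC SI:", aic(res_si, 3), " AIC DI:", aic(res_di, 5))
# The model with the lower AIC is selected, trading fit quality off
# against the number of rate constants.
```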
Modal analysis using a Fourier analyzer, curve-fitting, and modal tuning
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Chung, Y. T.
1981-01-01
The proposed modal test program differs from single-input methods in that preliminary data may be acquired using multiple inputs, and modal tuning procedures may be employed to define closely spaced frequency modes more accurately or to make use of frequency response functions (FRFs) which are based on several input locations. In some respects the proposed modal test program resembles earlier sine-sweep and sine-dwell testing in that broadband FRFs are acquired using several input locations, and tuning is employed to refine the modal parameter estimates. The major tasks performed in the proposed modal test program are outlined. Data acquisition and FFT processing, curve fitting, and modal tuning phases are described and examples are given to illustrate and evaluate them.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Justin; Hund, Lauren
2017-02-01
Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
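A sketch of the effective-sample-size idea: scale a Gaussian log-likelihood by ESS/n so that strongly autocorrelated velocity-vs-time residuals do not overstate their information content. The AR(1)-based ESS formula below is one common choice, assumed here rather than taken from the paper:

```python
import numpy as np

def scaled_gaussian_loglik(resid, sigma):
    """Gaussian log-likelihood scaled by an effective sample size.

    For an AR(1)-like residual autocorrelation rho, a common ESS choice is
    n * (1 - rho) / (1 + rho); scaling the log-likelihood by ESS/n
    down-weights highly correlated functional residuals.
    """
    n = resid.size
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    ess = n * (1 - rho) / (1 + rho)
    loglik = -0.5 * np.sum(resid**2) / sigma**2 - n * np.log(sigma)
    return (ess / n) * loglik

rng = np.random.default_rng(7)
smooth = np.cumsum(rng.normal(0, 0.1, 500))        # strongly autocorrelated
print(scaled_gaussian_loglik(smooth - smooth.mean(), 1.0))
```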
Evaluation of Piloted Inputs for Onboard Frequency Response Estimation
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Martos, Borja
2013-01-01
Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear F-16 simulation and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate estimates using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis and recommendations are provided for applying this method with piloted inputs.
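For contrast with piloted inputs, a multisine excitation of the kind the underlying method was designed for can be generated in a few lines. A sketch using Schroeder phases to keep the peak factor low (harmonic choices and amplitude are illustrative):

```python
import numpy as np

def multisine(T, fs, harmonics, amplitude=1.0):
    """Phase-optimized multisine: sum of harmonic sinusoids with Schroeder
    phases, which keep the peak factor low for a given power spectrum."""
    t = np.arange(0, T, 1 / fs)
    K = len(harmonics)
    u = np.zeros_like(t)
    for i, k in enumerate(harmonics):
        phase = -np.pi * i * (i + 1) / K          # Schroeder phase schedule
        u += np.cos(2 * np.pi * k / T * t + phase)
    return t, amplitude * u / np.max(np.abs(u))

# 10 s input exciting harmonics 2..11 of the 0.1 Hz base frequency; disjoint
# harmonic sets on different surfaces give mutually orthogonal perturbations.
t, u = multisine(T=10.0, fs=100.0, harmonics=range(2, 12))
print(u.shape, float(np.max(np.abs(u))))
```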
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
Aerodynamic parameter estimation via Fourier modulating function techniques
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1995-01-01
Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier based modulating functions. Assuming white measurement noise for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear-time-varying differential system models.
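The key property, immunity to unknown initial conditions, comes from integrating by parts against modulating functions that vanish at both ends of the record. A minimal first-order sketch (the model, signals, and cosine-type modulating family are illustrative choices, not the paper's aircraft application):

```python
import numpy as np
from scipy.integrate import trapezoid

# Simulate I/O data for a first-order system x' + a*x = b*u (a=2, b=3),
# with a nonzero initial condition that the method never needs to know.
a_true, b_true, T = 2.0, 3.0, 5.0
t = np.linspace(0, T, 2001)
u = np.sin(1.3 * t) + 0.5 * np.cos(3.1 * t)
x = np.empty_like(t)
x[0] = 1.7                                     # unknown initial condition
dt = t[1] - t[0]
for i in range(1, t.size):                     # simple Euler simulation
    x[i] = x[i - 1] + dt * (-a_true * x[i - 1] + b_true * u[i - 1])

# Modulating functions phi_m vanish at both ends of [0, T]; integration by
# parts moves the derivative off the data:  a*I(phi x) - b*I(phi u) = I(phi' x).
rows, rhs = [], []
for m in range(1, 6):
    phi = 1.0 - np.cos(2 * np.pi * m * t / T)
    dphi = (2 * np.pi * m / T) * np.sin(2 * np.pi * m * t / T)
    rows.append([trapezoid(phi * x, t), -trapezoid(phi * u, t)])
    rhs.append(trapezoid(dphi * x, t))
a_hat, b_hat = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
print(f"a = {a_hat:.3f} (true 2), b = {b_hat:.3f} (true 3)")
```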
Parameter identification for nonlinear aerodynamic systems
NASA Technical Reports Server (NTRS)
Pearson, Allan E.
1990-01-01
Parameter identification for nonlinear aerodynamic systems is examined. It is presumed that the underlying model can be arranged into an input/output (I/O) differential operator equation of a generic form. The estimation algorithm is especially efficient since the equation error can be integrated exactly given any I/O pair to obtain an algebraic function of the parameters. The algorithm for parameter identification was extended to the order determination problem for linear differential systems. The degeneracy in a least squares estimate caused by feedback was addressed. A method of frequency analysis for determining the transfer function G(j omega) from transient I/O data was formulated using complex-valued Fourier based modulating functions in contrast with the trigonometric modulating functions for the parameter estimation problem. A simulation result of applying the algorithm is given under noise-free conditions for a system with a low pass transfer function.
Hybrid Simulation Modeling to Estimate U.S. Energy Elasticities
NASA Astrophysics Data System (ADS)
Baylin-Stern, Adam C.
This paper demonstrates how a U.S. application of CIMS, a technologically explicit and behaviourally realistic energy-economy simulation model which includes macro-economic feedbacks, can be used to derive estimates of elasticity of substitution (ESUB) and autonomous energy efficiency index (AEEI) parameters. The ability of economies to reduce greenhouse gas emissions depends on the potential for households and industry to decrease overall energy usage, and move from higher to lower emissions fuels. Energy economists commonly refer to ESUB estimates to understand the degree of responsiveness of various sectors of an economy, and use estimates to inform computable general equilibrium models used to study climate policies. Using CIMS, I have generated a set of future 'pseudo-data' based on a series of simulations in which I vary energy and capital input prices over a wide range. I then used this data set to estimate the parameters for transcendental logarithmic production functions using regression techniques. From the production function parameter estimates, I calculated an array of elasticity of substitution values between input pairs. Additionally, this paper demonstrates how CIMS can be used to calculate price-independent changes in energy efficiency in the form of the AEEI, by comparing energy consumption between technologically frozen and 'business as usual' simulations. The paper concludes with some ideas for model and methodological improvement, and how these might figure into future work in the estimation of ESUBs from CIMS. Keywords: Elasticity of substitution; hybrid energy-economy model; translog; autonomous energy efficiency index; rebound effect; fuel switching.
Solubility of aerosol trace elements: Sources and deposition fluxes in the Canary Region
NASA Astrophysics Data System (ADS)
López-García, Patricia; Gelado-Caballero, María Dolores; Collado-Sánchez, Cayetano; Hernández-Brito, José Joaquín
2017-01-01
African dust inputs have important effects on the climate and marine biogeochemistry of the subtropical North Atlantic Ocean. The impact of dust inputs on oceanic carbon uptake and climate is dependent on total dust deposition fluxes as well as the bioavailability of nutrients and metals in the dust. In this work, the solubility of trace metals (Fe, Al, Mn, Co and Cu) and ions (Ca, sulphate, nitrate and phosphate) has been estimated from the analysis of a long time series of 109 samples collected over a 3-year period in the Canary Islands. Solubility is primarily a function of aerosol origin, with higher solubility values corresponding to aerosols with more anthropogenic influence. Using the soluble fractions of trace elements measured in this work, atmospheric deposition fluxes of soluble metals and nutrients have been calculated. Inputs of dissolved nutrients (P, N and Fe) have been estimated for the mixed layer. Considering that P is the limiting factor when ratios of these elements are compared with phytoplankton requirements, an annual increase of 0.58 nM of P in the mixed layer (∼150 m depth) can be estimated, which can support an increase of 0.02 μg Chl a L-1 y-1. These atmospheric inputs of trace metals and nutrients appear to be significant relative to the concentrations reported in this region, especially during the summer months when the water column is more stratified and deep-water nutrient inputs are reduced.
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-05-01
A model of a non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ(t) = λ_det(t) + λ_rnd(t), where λ_det(t) is a deterministic function of time and λ_rnd(t) is a random function. The parameters of λ_det(t) and λ_rnd(t) were identified on the basis of statistical information on visitor flows collected from various Russian football stadiums. Statistical modeling of the NQS is carried out and the average statistical dependences are obtained: the length of the queue of requests waiting for service, the average wait time for service, and the number of visitors who have entered the stadium as functions of time. It is shown that these dependencies can be characterized by the following parameters: the number of visitors who have entered by the start of the match; the time required to serve all incoming visitors; the maximum value; and the argument value at which the studied dependence reaches its maximum. The dependences of these parameters on the energy ratio of the deterministic and random components of the input rate are investigated.
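A minimal sketch of simulating such a system: non-homogeneous Poisson arrivals generated by Lewis-Shedler thinning, fed through a single service gate. The rate shape, horizon, and service time are illustrative placeholders, not the stadium parameters (and the random rate component is omitted):

```python
import numpy as np

rng = np.random.default_rng(8)

def lam(t):
    """Deterministic arrival rate (1/min), peaking before kickoff at t = 60
    (an illustrative shape standing in for lambda_det)."""
    return 8.0 * np.exp(-((t - 60.0) ** 2) / (2 * 20.0**2))

# Non-homogeneous Poisson arrivals over 120 min by thinning (Lewis-Shedler).
lam_max, T, arrivals, t = 8.0, 120.0, [], 0.0
while True:
    t += rng.exponential(1.0 / lam_max)
    if t > T:
        break
    if rng.random() < lam(t) / lam_max:
        arrivals.append(t)

# Single service gate with a fixed 0.1 min service time: waits per visitor.
service_time, free_at, waits = 0.1, 0.0, []
for ta in arrivals:
    start = max(ta, free_at)
    waits.append(start - ta)
    free_at = start + service_time

print(f"{len(arrivals)} visitors, mean wait {np.mean(waits):.2f} min, "
      f"max wait {np.max(waits):.2f} min")
```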
NASA Astrophysics Data System (ADS)
Winant, Celeste D.; Aparici, Carina Mari; Zelnik, Yuval R.; Reutter, Bryan W.; Sitek, Arkadiusz; Bacharach, Stephen L.; Gullberg, Grant T.
2012-01-01
Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon emission computed tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The spatiotemporal maximum-likelihood expectation-maximization (4D ML-EM) reconstructions gave more accurate reconstructions than did standard frame-by-frame static 3D ML-EM reconstructions. The SPECT/P results showed that 4D ML-EM reconstruction gave higher and more accurate estimates of K1 than did 3D ML-EM, yielding anywhere from a 44% underestimation to 24% overestimation for the three patients. The SPECT/D results showed that 4D ML-EM reconstruction gave an overestimation of 28% and 3D ML-EM gave an underestimation of 1% for K1. For the patient study the 4D ML-EM reconstruction provided continuous images as a function of time of the concentration in both ventricular cavities and myocardium during the 2 min infusion. It is demonstrated that a 2 min infusion with a two-headed SPECT system rotating 180° every 54 s can produce measurements of blood pool and myocardial TACs, though the SPECT simulation studies showed that one must sample at least every 30 s to capture a 1 min infusion input function.
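The one-compartment wash-in fit at the core of this analysis can be sketched directly: the tissue curve is the blood input convolved with K1*exp(-k2*t), and (K1, k2) are recovered by nonlinear least squares. All curves and rate constants below are synthetic placeholders:

```python
import numpy as np
from scipy.optimize import curve_fit

# Time grid (s) and a synthetic blood input: a 60 s square infusion smoothed
# by circulation (illustrative stand-in for a measured blood-pool TAC).
t = np.arange(0, 600.0, 3.0)
dt = t[1] - t[0]
cb = np.convolve((t < 60).astype(float), np.exp(-t / 40.0), mode="full")[: t.size]
cb /= cb.max()

def one_compartment(t, K1, k2):
    """Tissue TAC: Ct = K1 * (Cb conv exp(-k2 t)), discretized convolution."""
    kernel = np.exp(-k2 * t) * dt
    return K1 * np.convolve(cb, kernel, mode="full")[: t.size]

# Simulate a noisy myocardial curve, then recover the wash-in rate K1.
rng = np.random.default_rng(9)
ct = one_compartment(t, 0.8, 0.01) + rng.normal(0, 0.5, t.size)
(K1, k2), _ = curve_fit(one_compartment, t, ct, p0=[0.5, 0.02])
print(f"K1 = {K1:.3f} (true 0.8), k2 = {k2:.4f} (true 0.01)")
```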
Statistical Accounting for Uncertainty in Modeling Transport in Environmental Systems
Models frequently are used to predict the future extent of ground-water contamination, given estimates of their input parameters and forcing functions. Although models have a well established scientific basis for understanding the interactions between complex phenomena and for g...
Locally Based Kernel PLS Regression De-noising with Application to Event-Related Potentials
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Tino, Peter
2002-01-01
The close relation between signal de-noising and regression problems, i.e. the estimation of functions reflecting the dependency between a set of inputs and dependent outputs corrupted with some level of noise, has been employed in our approach.
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
NASA Technical Reports Server (NTRS)
Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (Principal Investigator)
1978-01-01
The author has identified the following significant results. The probability of correct classification of various populations in data was defined as the primary performance index. The multispectral data, being of a multiclass nature as well, required a Bayes error estimation procedure that was dependent on a set of class statistics alone. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear shift-invariant multiple-port system where the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial (and hence spectral) correlation matrices through the system, was developed.
Uncertainty in modeled upper ocean heat content change
NASA Astrophysics Data System (ADS)
Tokmakian, Robin; Challenor, Peter
2014-02-01
This paper examines the uncertainty in the change in the heat content in the ocean component of a general circulation model. We describe the design and implementation of our statistical methodology. Using an ensemble of model runs and an emulator, we produce an estimate of the full probability distribution function (PDF) for the change in upper ocean heat in an Atmosphere/Ocean General Circulation Model, the Community Climate System Model v. 3, across a multi-dimensional input space. We show how the emulator of the GCM's heat content change and hence, the PDF, can be validated and how implausible outcomes from the emulator can be identified when compared to observational estimates of the metric. In addition, the paper describes how the emulator outcomes and related uncertainty information might inform estimates of the same metric from a multi-model Coupled Model Intercomparison Project phase 3 ensemble. We illustrate how to (1) construct an ensemble based on experiment design methods, (2) construct and evaluate an emulator for a particular metric of a complex model, (3) validate the emulator using observational estimates and explore the input space with respect to implausible outcomes and (4) contribute to the understanding of uncertainties within a multi-model ensemble. Finally, we estimate the most likely value for heat content change and its uncertainty for the model, with respect to both observations and the uncertainty in the value for the input parameters.
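A minimal emulator sketch in the spirit of the approach: fit a Gaussian process to a designed ensemble of model runs, then use its predictive mean and standard deviation to screen the input space for implausible settings against an observational estimate. The design, output function, and observation values are illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(10)

# 'Ensemble': model runs at design points in a 2-D input-parameter space
# (illustrative stand-in for GCM runs; output = heat content change metric).
X = rng.uniform(0.0, 1.0, size=(30, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.02, 30)

# Fit the emulator; the fitted GP interpolates the runs and supplies
# predictive uncertainty everywhere else in the input space.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True).fit(X, y)

# Rule out implausible inputs: keep points whose emulated metric is within
# 3 predictive standard deviations of an 'observed' value.
grid = rng.uniform(0.0, 1.0, size=(5000, 2))
mu, sd = gp.predict(grid, return_std=True)
obs, obs_err = 0.9, 0.05
plausible = np.abs(mu - obs) / np.sqrt(sd**2 + obs_err**2) < 3.0
print(f"{plausible.mean():.1%} of input space not implausible")
```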
Estimation of contrast agent bolus arrival delays for improved reproducibility of liver DCE MRI
NASA Astrophysics Data System (ADS)
Chouhan, Manil D.; Bainbridge, Alan; Atkinson, David; Punwani, Shonit; Mookerjee, Rajeshwar P.; Lythgoe, Mark F.; Taylor, Stuart A.
2016-10-01
Delays between contrast agent (CA) arrival at the site of vascular input function (VIF) sampling and the tissue of interest affect dynamic contrast enhanced (DCE) MRI pharmacokinetic modelling. We investigate effects of altering VIF CA bolus arrival delays on liver DCE MRI perfusion parameters, propose an alternative approach to estimating delays and evaluate reproducibility. Thirteen healthy volunteers (28.7 ± 1.9 years, seven males) underwent liver DCE MRI using dual-input single compartment modelling, with reproducibility (n = 9) measured at 7 days. Effects of VIF CA bolus arrival delays were assessed for arterial and portal venous input functions. Delays were pre-estimated using linear regression, with restricted free modelling around the pre-estimated delay. Perfusion parameters and 7-day reproducibility were compared using this method, freely modelled delays and no delays using one-way ANOVA. Reproducibility was assessed using Bland-Altman analysis of agreement. Maximum percent changes relative to parameters obtained using zero delays were -31% for portal venous (PV) perfusion, +43% for total liver blood flow (TLBF), +3247% for hepatic arterial (HA) fraction, +150% for mean transit time and -10% for distribution volume. Differences were demonstrated between the 3 methods for PV perfusion (p = 0.0085) and HA fraction (p < 0.0001), but not other parameters. Improved mean differences and Bland-Altman 95% Limits-of-Agreement for reproducibility of PV perfusion (9.3 ml/min/100 g, ±506.1 ml/min/100 g) and TLBF (43.8 ml/min/100 g, ±586.7 ml/min/100 g) were demonstrated using pre-estimated delays with constrained free modelling. CA bolus arrival delays cause profound differences in liver DCE MRI quantification. Pre-estimation of delays with constrained free modelling improved 7-day reproducibility of perfusion parameters in volunteers.
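One simple way to pre-estimate a bolus arrival delay is a grid search over candidate lags, keeping the lag at which the shifted VIF best matches the tissue curve; the paper's regression-based pre-estimation differs in detail, so treat this correlation variant as an assumption. Curves, delay, and noise level are synthetic:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic curves at 1 s resolution: a gamma-variate VIF, and a tissue curve
# that here is simply a delayed, scaled copy (delay_true = 8 s) plus noise;
# real curves would also be dispersed by the tissue response.
t = np.arange(0.0, 300.0)
vif = (t / 20.0) ** 2 * np.exp(-t / 20.0)
delay_true = 8
tissue = 0.3 * np.roll(vif, delay_true) + rng.normal(0, 0.01, t.size)

# Pre-estimate the CA bolus arrival delay as the lag maximizing correlation
# between the shifted VIF and the tissue curve; modelling can then free the
# delay only within a restricted window around this value.
lags = np.arange(0, 30)
corrs = [np.corrcoef(np.roll(vif, d), tissue)[0, 1] for d in lags]
d_hat = lags[int(np.argmax(corrs))]
print(f"pre-estimated delay: {d_hat} s (true {delay_true} s)")
```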
Neural networks for tracking of unknown SISO discrete-time nonlinear dynamic systems.
Aftab, Muhammad Saleheen; Shafiq, Muhammad
2015-11-01
This article presents a Lyapunov function based neural network tracking (LNT) strategy for single-input, single-output (SISO) discrete-time nonlinear dynamic systems. The proposed LNT architecture is composed of two feedforward neural networks operating as controller and estimator. A Lyapunov function based back propagation learning algorithm is used for online adjustment of the controller and estimator parameters. The controller and estimator error convergence and closed-loop system stability analysis is performed by Lyapunov stability theory. Moreover, two simulation examples and one real-time experiment are investigated as case studies. The achieved results successfully validate the controller performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Low-complexity piecewise-affine virtual sensors: theory and design
NASA Astrophysics Data System (ADS)
Rubagotti, Matteo; Poggi, Tomaso; Oliveri, Alberto; Pascucci, Carlo Alberto; Bemporad, Alberto; Storace, Marco
2014-03-01
This paper is focused on the theoretical development and the hardware implementation of low-complexity piecewise-affine direct virtual sensors for the estimation of unmeasured variables of interest of nonlinear systems. The direct virtual sensor is designed directly from measured inputs and outputs of the system and does not require a dynamical model. The proposed approach allows one to design estimators which mitigate the effect of the so-called 'curse of dimensionality' of simplicial piecewise-affine functions, and can be therefore applied to relatively high-order systems, enjoying convergence and optimality properties. An automatic toolchain is also presented to generate the VHDL code describing the digital circuit implementing the virtual sensor, starting from the set of measured input and output data. The proposed methodology is applied to generate an FPGA implementation of the virtual sensor for the estimation of vehicle lateral velocity, using a hardware-in-the-loop setting.
Uncertainty in predicting soil hydraulic properties at the hillslope scale with indirect methods
NASA Astrophysics Data System (ADS)
Chirico, G. B.; Medina, H.; Romano, N.
2007-02-01
Several hydrological applications require the characterisation of the soil hydraulic properties at large spatial scales. Pedotransfer functions (PTFs) are being developed as simplified methods to estimate soil hydraulic properties as an alternative to direct measurements, which are unfeasible for most practical circumstances. The objective of this study is to quantify the uncertainty in PTF spatial predictions at the hillslope scale as related to the sampling density, due to: (i) the error in estimated soil physico-chemical properties and (ii) PTF model error. The analysis is carried out on a 2-km-long experimental hillslope in South Italy. The method adopted is based on a stochastic generation of patterns of soil variables using sequential Gaussian simulation, conditioned to the observed sample data. The following PTFs are applied: Vereecken's PTF [Vereecken, H., Diels, J., van Orshoven, J., Feyen, J., Bouma, J., 1992. Functional evaluation of pedotransfer functions for the estimation of soil hydraulic properties. Soil Sci. Soc. Am. J. 56, 1371-1378] and the HYPRES PTF [Wösten, J.H.M., Lilly, A., Nemes, A., Le Bas, C., 1999. Development and use of a database of hydraulic properties of European soils. Geoderma 90, 169-185]. The two PTFs reliably estimate the soil water retention characteristic even for a relatively coarse sampling resolution, with prediction uncertainties comparable to the uncertainties in direct laboratory or field measurements. The uncertainty of soil water retention prediction due to the model error is as much as or more significant than the uncertainty associated with the estimated input, even for a relatively coarse sampling resolution. Prediction uncertainties are much more important when PTFs are applied to estimate the saturated hydraulic conductivity. In this case model error dominates the overall prediction uncertainties, making the effect of the input error negligible.
Estimation of parameters of constant elasticity of substitution production functional model
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi
2017-11-01
Nonlinear model building has become an increasingly important and powerful tool in mathematical economics, and in recent years the popularity of applications of nonlinear models has risen dramatically. Researchers in econometrics are often interested in the inferential aspects of nonlinear regression models [6]. The present study gives a distinct method of estimation for a more complicated and highly nonlinear model, viz. the Constant Elasticity of Substitution (CES) production functional model. Henningsen et al. [5] proposed three solutions in 2012 to avoid serious problems when estimating CES functions: (i) removing discontinuities by using the limits of the CES function and its derivatives; (ii) circumventing large rounding errors by local linear approximations; and (iii) handling ill-behaved objective functions by a multi-dimensional grid search. Joel Chongeh et al. [7] discussed the estimation of the impact of capital and labour inputs on the gross output of agri-food products using the constant elasticity of substitution production function in the Tanzanian context. Pol Antras [8] presented new estimates of the elasticity of substitution between capital and labour using data from the private sector of the U.S. economy for the period 1948-1998.
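As a concrete illustration of the estimation problem, the sketch below fits a two-input CES function by nonlinear least squares; the synthetic data, starting values, and bounds are illustrative assumptions, not values from the study.

```python
# Minimal sketch: nonlinear least-squares estimation of a two-input CES
# production function Q = gamma * (delta*K**(-rho) + (1-delta)*L**(-rho))**(-nu/rho).
import numpy as np
from scipy.optimize import curve_fit

def ces(X, gamma, delta, rho, nu):
    K, L = X
    return gamma * (delta * K**(-rho) + (1.0 - delta) * L**(-rho))**(-nu / rho)

rng = np.random.default_rng(0)
K = rng.uniform(1, 10, 200)                            # capital input
L = rng.uniform(1, 10, 200)                            # labour input
Q = ces((K, L), 2.0, 0.4, 0.6, 1.0) * rng.lognormal(0.0, 0.05, 200)

# Keep delta in (0, 1) and rho away from 0, where the CES form is discontinuous
# (the Cobb-Douglas limit); cf. remedy (i) discussed above.
popt, pcov = curve_fit(ces, (K, L), Q, p0=[1.0, 0.5, 0.5, 1.0],
                       bounds=([0.01, 0.01, 0.05, 0.1], [10, 0.99, 5, 3]))
print("gamma, delta, rho, nu =", popt)
```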
Input Forces Estimation for Nonlinear Systems by Applying a Square-Root Cubature Kalman Filter.
Song, Xuegang; Zhang, Yuexin; Liang, Dakai
2017-10-10
This work presents a novel inverse algorithm to estimate time-varying input forces in nonlinear beam systems. With the system parameters determined, the input forces can be estimated in real time from dynamic responses, which can be used for structural health monitoring. In the input force estimation, the fourth-order Runge-Kutta algorithm was employed to discretize the state equations; a square-root cubature Kalman filter (SRCKF) was employed to suppress white noise; and the residual innovation sequences, a priori state estimate, gain matrix, and innovation covariance generated by the SRCKF were employed to estimate the magnitude and location of the input forces using a nonlinear estimator based on the least squares method. Numerical simulations of a large-deflection beam and an experiment on a linear beam constrained by a nonlinear spring were employed. The results demonstrate the accuracy of the nonlinear algorithm.
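For orientation, a minimal sketch of one predict/update cycle of a cubature Kalman filter is given below, in the plain rather than the numerically safer square-root form used in the paper; the process function f, measurement function h, and all matrices are placeholders.

```python
# Minimal sketch of one cubature Kalman filter cycle (plain, non-square-root form).
import numpy as np

def ckf_step(x, P, z, f, h, Q, R):
    n = len(x)
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])     # 2n cubature points
    # Time update: propagate the cubature points through the dynamics f.
    Xp = np.array([f(x + S @ xi[:, i]) for i in range(2 * n)]).T
    x_pred = Xp.mean(axis=1)
    P_pred = (Xp - x_pred[:, None]) @ (Xp - x_pred[:, None]).T / (2 * n) + Q
    # Measurement update: re-draw points around the predicted state.
    S = np.linalg.cholesky(P_pred)
    Xm = np.array([x_pred + S @ xi[:, i] for i in range(2 * n)]).T
    Zm = np.array([h(Xm[:, i]) for i in range(2 * n)]).T
    z_pred = Zm.mean(axis=1)
    Pzz = (Zm - z_pred[:, None]) @ (Zm - z_pred[:, None]).T / (2 * n) + R
    Pxz = (Xm - x_pred[:, None]) @ (Zm - z_pred[:, None]).T / (2 * n)
    K = Pxz @ np.linalg.inv(Pzz)                              # gain matrix
    innov = z - z_pred                                        # residual innovation
    return x_pred + K @ innov, P_pred - K @ Pzz @ K.T
```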
Wang, Wei; Wen, Changyun; Huang, Jiangshuai; Fan, Huijin
2017-11-01
In this paper, a backstepping-based distributed adaptive control scheme is proposed for multiple uncertain Euler-Lagrange systems under a directed graph condition. The common desired trajectory is allowed to be totally unknown to part of the subsystems, and the linearly parameterized trajectory model assumed in currently available results is no longer needed. To compensate for the effects of unknown trajectory information, a smooth function of consensus errors and certain positive integrable functions are introduced in designing the virtual control inputs. Besides, to overcome the difficulty of completely counteracting the coupling terms of distributed consensus errors and parameter estimation errors in the presence of an asymmetric Laplacian matrix, extra transmission of local parameter estimates is introduced among linked subsystems, and an adaptive gain technique is adopted to generate the distributed torque inputs. It is shown that with the proposed distributed adaptive control scheme, global uniform boundedness of all closed-loop signals and asymptotic output consensus tracking can be achieved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Smith, Andrew; LaVerde, Bruce; Hunt, Ron; Fulcher, Clay; Towner, Robert; McDonald, Emmett
2012-01-01
The design and theoretical basis of a new database tool that quickly generates vibroacoustic response estimates using a library of transfer functions (TFs) is discussed. During the early stages of a launch vehicle development program, these response estimates can be used to provide vibration environment specifications to hardware vendors. The tool accesses TFs from a database, combines the TFs, and multiplies these by input excitations to estimate vibration responses. The database is populated with two sets of uncoupled TFs: the first set representing the vibration response of a bare panel, designated H^s, and the second set representing the response of the free-free component equipment by itself, designated H^c. For a particular configuration undergoing analysis, the appropriate H^s and H^c are selected and coupled to generate an integrated TF, designated H^(s+c). This integrated TF is then used with the appropriate input excitations to estimate vibration responses. This simple yet powerful tool enables a user to estimate vibration responses without directly using finite element models, so long as suitable H^s and H^c sets are defined in the database libraries. The paper discusses the preparation of the database tool and provides the assumptions and methodologies necessary to combine H^s and H^c sets into an integrated H^(s+c). An experimental validation of the approach is also presented.
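The final multiplication step lends itself to a small sketch: given an integrated TF on a frequency grid and an input excitation PSD, the response PSD and overall RMS follow directly. All arrays below are placeholders, and the TF-coupling step itself is not reproduced here.

```python
# Minimal sketch of estimating a vibration response from an integrated TF and
# an input excitation PSD; H stands in for H^(s+c) on a frequency grid.
import numpy as np

freqs = np.linspace(20.0, 2000.0, 500)           # Hz
H = np.ones_like(freqs) * (0.1 + 0.05j)          # integrated TF, illustrative
S_in = 1e-2 * np.ones_like(freqs)                # input pressure PSD, Pa^2/Hz

S_resp = np.abs(H)**2 * S_in                     # response acceleration PSD, g^2/Hz
grms = np.sqrt(np.trapz(S_resp, freqs))          # overall RMS response
print(f"overall response: {grms:.3f} g RMS")
```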
Tao, Mengya; Li, Dingsheng; Song, Runsheng; Suh, Sangwon; Keller, Arturo A
2018-03-01
Chemicals in consumer products have become the focus of recent regulatory developments, including California's Safer Consumer Products Act. However, quantifying the amount of chemicals released during the use and post-use phases of consumer products is challenging, limiting the ability to understand their impacts. Here we present a comprehensive framework, OrganoRelease, for estimating the release of organic chemicals from the use and post-use of consumer products given limited information. First, a novel Chemical Functional Use Classifier estimates functional uses based on chemical structure. Second, the quantity of chemicals entering different product streams is estimated based on market share data for the chemical functional uses. Third, chemical releases are estimated based on either chemical product categories or functional uses by using the Specific Environmental Release Categories and EU Technological Guidance Documents. OrganoRelease connects 19 unique functional uses and 14 product categories across 4 data sources and provides multiple pathways for chemical release estimation. Available user information can be incorporated in the framework at various stages. The Chemical Functional Use Classifier achieved an average accuracy above 84% for nine functional uses, which enables OrganoRelease to provide release estimates for a chemical mostly using only the molecular structure. The results can be used as input for methods estimating environmental fate and exposure. Copyright © 2017 Elsevier Ltd. All rights reserved.
Incorporating spatial context into statistical classification of multidimensional image data
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.
1981-01-01
Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextual classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. The contextual classifier is shown to produce substantial improvements in classification accuracy compared to the accuracy produced by a non-contextual uniform-priors maximum likelihood classifier when these methods of estimating the context function are used. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context function estimation.
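As a rough sketch of the idea under simplifying assumptions (the context function approximated by Laplace-smoothed neighbour class frequencies taken from an initial non-contextual classification), a context-weighted relabeling pass might look like this:

```python
# Minimal sketch: per-pixel class likelihoods reweighted by the estimated
# frequency of each class among 3x3 neighbours, a crude stand-in for the
# paper's context function.
import numpy as np

def contextual_relabel(likelihood, labels):
    """likelihood: (rows, cols, n_classes) class-conditional densities p(x|c);
    labels: initial non-contextual (uniform-priors ML) classification."""
    rows, cols, k = likelihood.shape
    out = labels.copy()
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            neigh = labels[i-1:i+2, j-1:j+2].ravel()
            prior = np.bincount(neigh, minlength=k) + 1.0   # smoothed context prior
            prior /= prior.sum()
            out[i, j] = np.argmax(likelihood[i, j] * prior) # MAP with context prior
    return out
```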
NASA Astrophysics Data System (ADS)
Székely, Balázs; Kania, Adam; Varga, Katalin; Heilmeier, Hermann
2017-04-01
Lacunarity, a measure of the spatial distribution of empty space, is a useful descriptive quantity of forest structure. Its calculation, based on laser-scanned point clouds, results in a four-dimensional data set, and the evaluation of the results needs sophisticated tools and visualization techniques. To simplify the evaluation, it is straightforward to use approximation functions fitted to the results. The lacunarity function L(r), being a measure of scale-independent structural properties, has a power-law character. Previous studies showed that the log(log(L(r))) transformation is suitable for the analysis of spatial patterns. Accordingly, transformed lacunarity functions can be approximated by appropriate functions either in the original or in the transformed domain. As input data we used a number of laser-scanned point clouds of various forests. The lacunarity distribution was calculated along a regular horizontal grid at various (relative) elevations. The lacunarity data cube was then logarithm-transformed, and the resulting values became the input of parameter estimation at each point of interest (POI). In this way, a parameter set suitable for spatial analysis is generated at each POI. The expectation is that the horizontal variation and vertical layering of the vegetation can be characterized by this procedure. The results show that the transformed L(r) functions can typically be approximated by exponentials individually, and the residual values remain low in most cases. However, (1) the residuals may still vary considerably from case to case, and (2) neighbouring POIs often give rather different estimates both in horizontal and in vertical directions, of which the vertical variation seems the more characteristic. In the vertical sense, the distribution of estimates shows abrupt changes at places, presumably related to the vertical structure of the forest. In low-relief areas horizontal similarity is more typical; in higher-relief areas horizontal similarity fades out over short distances. Some of the input data were acquired in the framework of the ChangeHabitats2 project financed by the European Union. BS contributed as an Alexander von Humboldt Research Fellow.
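A minimal sketch of the per-POI fitting step, with placeholder lacunarity data and an assumed exponential-plus-offset model in the transformed domain:

```python
# Minimal sketch: fit an exponential to a log-log-transformed lacunarity curve
# at one point of interest; r and L_r are illustrative, not measured data.
import numpy as np
from scipy.optimize import curve_fit

r = np.linspace(1.0, 30.0, 30)                     # box size
L_r = 1.0 + 4.0 * r**-0.8                          # illustrative lacunarity values

y = np.log(np.log(L_r))                            # transformed domain
model = lambda r, a, b, c: a * np.exp(-b * r) + c
(a, b, c), _ = curve_fit(model, r, y, p0=(1.0, 0.1, -1.0), maxfev=10000)
resid = y - model(r, a, b, c)                      # per-POI residuals for mapping
print(a, b, c, np.abs(resid).max())
```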
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk
2009-01-12
An experiment-based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time-domain input/output data were utilized in a gray-box modeling approach. Prior knowledge of the form of the system transfer function matrix elements is assumed. Continuous-time system transfer function matrix parameters were estimated in real time by the least-squares method. Simulation results of the experimentally determined system transfer function matrix compare very well with the experimental results. For comparison, and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.
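A minimal sketch of the idea, with a first-order discrete-time ARX structure standing in for the assumed-known transfer function form; the plant and noise level are illustrative.

```python
# Minimal sketch: least-squares transfer-function parameter estimation from
# sampled input/output data, using y[k] = a*y[k-1] + b*u[k-1].
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(500)                     # experimental input sequence
y = np.zeros(500)
for k in range(1, 500):                          # "true" plant: a=0.9, b=0.5
    y[k] = 0.9 * y[k-1] + 0.5 * u[k-1] + 0.01 * rng.standard_normal()

Phi = np.column_stack([y[:-1], u[:-1]])          # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated a, b:", theta)                  # close to [0.9, 0.5]
```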
Stefos, Theodore; Burgess, James F; Cohen, Jeffrey P; Lehner, Laura; Moran, Eileen
2012-12-01
We evaluate how changes to mental health workforce levels, composition, and degree of labor substitution, may impact typical practice output. Using a generalized Leontief production function and data from 134 U.S. Department of Veterans Affairs (VA) mental health practices, we estimate the q-complementarity/q-substitutability of mental health workers. We look at the entire spectrum of mental health services rather than just outpatient or physician office services. We also examine more labor types, including residents, than previous studies. The marginal patient care output contribution is estimated for each labor type as well as the degree to which physicians and other mental health workers may be substitutes or complements. Results indicate that numerous channels exist through which input substitution can improve productivity. Seven of eight labor and capital inputs have positive estimated marginal products. Most factor inputs exhibit diminishing marginal productivity. Of 28 unique labor-capital pairs, 17 are q-complements and 11 are q-substitutes. Complementarity among several labor types provides evidence of a team approach to mental health service provision. Our approach may serve to better inform healthcare providers regarding more productive mental health workforce composition both in and outside of VA.
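For reference, the generalized Leontief form underlying such an analysis can be written as below; the q-complement/q-substitute reading of the coefficient signs is the standard one for this functional form, stated here as background rather than as the paper's exact specification.

```latex
% Generalized Leontief production function: output y from inputs x_i with
% symmetric coefficients b_{ij}; inputs i and j are q-complements when
% b_{ij} > 0 and q-substitutes when b_{ij} < 0.
y = \sum_{i=1}^{n}\sum_{j=1}^{n} b_{ij}\,\sqrt{x_i x_j}, \qquad b_{ij}=b_{ji},
\qquad \frac{\partial^2 y}{\partial x_i\,\partial x_j}
      = \frac{b_{ij}}{2\sqrt{x_i x_j}} \quad (i \neq j).
```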
Generalized local emission tomography
Katsevich, Alexander J.
1998-01-01
Emission tomography enables locations and values of internal isotope density distributions to be determined from radiation emitted from the whole object. In the method for locating the values of discontinuities, the intensities of radiation emitted from either the whole object or a region of the object containing the discontinuities are input to a local tomography function f_Λ^(Φ) to define the location S of the isotope density discontinuity. The asymptotic behavior of f_Λ^(Φ) is determined in a neighborhood of S, and the value of the discontinuity is estimated from the asymptotic behavior of f_Λ^(Φ), knowing pointwise values of the attenuation coefficient within the object. In the method for determining the location of the discontinuity, the intensities of radiation emitted from an object are input to a local tomography function f_Λ^(Φ) to define the location S of the density discontinuity and the location Γ of the attenuation coefficient discontinuity. Pointwise values of the attenuation coefficient within the object need not be known in this case.
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. 
The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
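A minimal sketch of the core Fisher-information computation under additive i.i.d. Gaussian measurement noise; `simulate` is a placeholder battery model, and the finite-difference sensitivities stand in for whatever analytic sensitivities a given model admits. Input shaping then amounts to maximizing a scalar measure of this matrix over the input u.

```python
# Minimal sketch: Fisher information of model parameters from output
# sensitivities, for a model y = simulate(theta, u) with Gaussian noise sigma.
import numpy as np

def fisher_information(simulate, theta, u, sigma=1e-3, h=1e-6):
    y0 = simulate(theta, u)
    S = np.empty((len(y0), len(theta)))          # sensitivity matrix dy/dtheta
    for j in range(len(theta)):
        tp = theta.copy(); tp[j] += h
        S[:, j] = (simulate(tp, u) - y0) / h     # forward finite difference
    return S.T @ S / sigma**2                    # FIM for i.i.d. Gaussian noise

# Toy usage with a two-parameter exponential response model; D-optimal input
# shaping would maximize the log-determinant below over u.
sim = lambda th, u: th[0] * (1.0 - np.exp(-th[1] * np.cumsum(u)))
F = fisher_information(sim, np.array([1.0, 0.2]), np.full(100, 0.1))
print("log-det FIM:", np.linalg.slogdet(F)[1])
```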
Matsubara, Keisuke; Ikoma, Yoko; Okada, Maki; Ibaraki, Masanobu; Suhara, Tetsuya; Kinoshita, Toshibumi; Ito, Hiroshi
2014-02-01
The O-methyl metabolite (L-[β-(11)C]OMD) of (11)C-labeled L-3,4-dihydroxyphenylalanine (L-[β-(11)C]DOPA) can penetrate brain tissue through the blood-brain barrier and can complicate the estimation of dopamine synthesis capacity in positron emission tomography (PET) studies with L-[β-(11)C]DOPA. We evaluated the impact of L-[β-(11)C]OMD on the estimation of dopamine synthesis capacity in a human L-[β-(11)C]DOPA PET study. Metabolite correction with mathematical modeling of L-[β-(11)C]OMD kinetics in a reference region without decarboxylation and further metabolism, proposed by a previous [(18)F]FDOPA PET study, was implemented to estimate the radioactivity of tissue L-[β-(11)C]OMD in 10 normal volunteers. The L-[β-(11)C]OMD component of the tissue time-activity curves (TACs) in 10 regions was subtracted using the estimated L-[β-(11)C]OMD radioactivity. To evaluate the influence of omitting blood sampling and metabolite correction, the relative dopamine synthesis rate (kref) was estimated by Gjedde-Patlak analysis with a reference tissue input function, as well as the net dopamine synthesis rate (Ki) by Gjedde-Patlak analysis with the arterial input function and TACs without and with metabolite correction. Overestimation of Ki was observed without metabolite correction. However, kref and Ki with metabolite correction were significantly correlated. These data suggest that the influence of L-[β-(11)C]OMD is minimal for the estimation of kref as dopamine synthesis capacity.
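A minimal sketch of the Gjedde-Patlak slope estimate with an arterial plasma input; the curves, sampling, and equilibration time t* below are placeholders.

```python
# Minimal sketch of Gjedde-Patlak graphical analysis: after an equilibration
# time t*, C_t(t)/C_p(t) plotted against (integral of C_p)/C_p is linear
# with slope Ki, the net uptake (here, synthesis) rate.
import numpy as np
from scipy.integrate import cumulative_trapezoid

def patlak_ki(t, Cp, Ct, t_star=20.0):
    x = cumulative_trapezoid(Cp, t, initial=0.0) / Cp   # "stretched time"
    y = Ct / Cp
    m = t >= t_star                                     # linear tail only
    slope, intercept = np.polyfit(x[m], y[m], 1)
    return slope                                        # Ki (1/min)

t = np.linspace(0.25, 60, 240)                          # minutes
Cp = 100 * np.exp(-t / 8.0) + 5 * np.exp(-t / 80.0)     # toy plasma input
Ct = 0.02 * cumulative_trapezoid(Cp, t, initial=0) + 0.3 * Cp
print("Ki =", patlak_ki(t, Cp, Ct))                     # recovers 0.02
```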
Tomasi, G; Kimberley, S; Rosso, L; Aboagye, E; Turkheimer, F
2012-04-07
In positron emission tomography (PET) studies involving organs other than the brain, ignoring the metabolite contribution to the tissue time-activity curves (TAC), as in the standard single-input (SI) models, may compromise the accuracy of the estimated parameters. We employed here double-input (DI) compartmental modeling (CM), previously used for [¹¹C]thymidine, and a novel DI spectral analysis (SA) approach on the tracers 5-[¹⁸F]fluorouracil (5-[¹⁸F]FU) and [¹⁸F]fluorothymidine ([¹⁸F]FLT). CM and SA were performed initially with a SI approach using the parent plasma TAC as an input function. These methods were then employed using a DI approach with the metabolite plasma TAC as an additional input function. Regions of interest (ROIs) corresponding to healthy liver, kidneys and liver metastases for 5-[¹⁸F]FU and to tumor, vertebra and liver for [¹⁸F]FLT were analyzed. For 5-[¹⁸F]FU, the improvement of the fit quality with the DI approaches was remarkable; in CM, the Akaike information criterion (AIC) always selected the DI over the SI model. Volume of distribution estimates obtained with DI CM and DI SA were in excellent agreement, for both parent 5-[¹⁸F]FU (R² = 0.91) and metabolite [¹⁸F]FBAL (R² = 0.99). For [¹⁸F]FLT, the DI methods provided notable improvements, but less substantial than for 5-[¹⁸F]FU due to the lower rate of metabolism of [¹⁸F]FLT. On the basis of the AIC values, agreement between [¹⁸F]FLT Ki estimated with the SI and DI models was good (R² = 0.75) for the ROIs where the metabolite contribution was negligible, indicating that the additional input did not bias the parent-tracer-only estimates. When the AIC suggested a substantial contribution of the metabolite [¹⁸F]FLT-glucuronide, on the other hand, the change in the parent-tracer-only parameters was significant (R² = 0.33 for Ki). Our results indicated that the improvements of DI over SI approaches can range from moderate to substantial and are more significant for tracers with a high rate of metabolism. Furthermore, they showed that SA is suitable for DI modeling and can be used effectively in the analysis of PET data.
A second-order frequency-aided digital phase-locked loop for Doppler rate tracking
NASA Astrophysics Data System (ADS)
Chie, C. M.
1980-08-01
A second-order digital phase-locked loop (DPLL) has a finite lock range which is a function of the frequency of the incoming signal to be tracked. For this reason, it is not capable of tracking an input with Doppler rate for an indefinite period of time. In this correspondence, an analytical expression for the hold-in time is derived. In addition, an all-digital scheme to alleviate this problem is proposed based on the information obtained from estimating the input signal frequency.
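A minimal sketch of the frequency-aided idea: a second-order (proportional-plus-integral) DPLL whose phase accumulator is advanced by an external open-loop frequency estimate, so the loop only tracks the residual. Gains, signals, and the aiding estimate are illustrative assumptions.

```python
# Minimal sketch of a frequency-aided second-order DPLL tracking a Doppler ramp.
import numpy as np

fs, n = 1e4, 20000
t = np.arange(n) / fs
f_in = 100.0 + 5.0 * t                         # input with Doppler rate (Hz/s)
phi_in = 2 * np.pi * np.cumsum(f_in) / fs      # instantaneous input phase

k1, k2 = 0.1, 0.002                            # proportional / integral gains
f_aid = 100.0 + 5.0 * t                        # external frequency estimate (aid)
phi, integ = 0.0, 0.0
err = np.empty(n)
for k in range(n):
    e = np.sin(phi_in[k] - phi)                # phase detector
    integ += k2 * e                            # integral branch (residual frequency)
    phi += 2 * np.pi * f_aid[k] / fs + k1 * e + integ
    err[k] = e
print("final phase-detector output:", err[-5:])   # near zero once locked
```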
Stability of individual loudness functions obtained by magnitude estimation and production
NASA Technical Reports Server (NTRS)
Hellman, R. P.
1981-01-01
A correlational analysis of individual magnitude estimation and production exponents at the same frequency is performed, as is an analysis of individual exponents produced in different sessions by the same procedure across frequency (250, 1000, and 3000 Hz). Taken as a whole, the results show that individual exponent differences do not decrease by counterbalancing magnitude estimation with magnitude production and that individual exponent differences remain stable over time despite changes in stimulus frequency. Further results show that although individual magnitude estimation and production exponents do not necessarily obey the .6 power law, it is possible to predict the slope of an equal-sensation function averaged for a group of listeners from individual magnitude estimation and production data. On the assumption that individual listeners with sensorineural hearing also produce stable and reliable magnitude functions, it is also shown that the slope of the loudness-recruitment function measured by magnitude estimation and production can be predicted for individuals with bilateral losses of long duration. Results obtained in normal and pathological ears thus suggest that individual listeners can produce loudness judgements that reveal, although indirectly, the input-output characteristic of the auditory system.
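A minimal sketch of how an individual exponent is read off magnitude-estimation data under the power law S = k·P^θ (P the sound pressure); the listener data below are invented for illustration.

```python
# Minimal sketch: individual loudness exponent from magnitude estimates.
import numpy as np

level_db = np.array([40, 50, 60, 70, 80, 90.0])      # tone levels (dB SPL)
estimates = np.array([3, 6, 12, 24, 48, 96.0])       # magnitude estimates

# 20 dB = 1 decade of sound pressure, so the slope of log10(S) against
# level/20 is the power-law exponent for loudness vs. sound pressure.
theta, log_k = np.polyfit(level_db / 20.0, np.log10(estimates), 1)
print(f"individual exponent: {theta:.2f}")           # ~0.6, the classic value
```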
Rath, J J; Veluvolu, K C; Defoort, M
2014-01-01
The estimation of road excitation profile is important for evaluation of vehicle stability and vehicle suspension performance for autonomous vehicle control systems. In this work, the nonlinear dynamics of the active automotive system that is excited by the unknown road excitation profile are considered for modeling. To address the issue of estimation of road profile, we develop an adaptive supertwisting observer for state and unknown road profile estimation. Under Lipschitz conditions for the nonlinear functions, the convergence of the estimation error is proven. Simulation results with Ford Fiesta MK2 demonstrate the effectiveness of the proposed observer for state and unknown input estimation for nonlinear active suspension system.
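A minimal sketch of a super-twisting unknown-input observer on a scalar first-order channel; the plant, gains, and disturbance are illustrative stand-ins for the suspension model, not the paper's design.

```python
# Minimal sketch: super-twisting observer whose integral term reconstructs a
# bounded unknown input (standing in for the road profile).
import numpy as np

dt, n = 1e-3, 20000
x = x_hat = 0.0
nu = 0.0                                   # integral (twisting) state
lam, alpha = 5.0, 50.0                     # super-twisting gains
d_est = np.empty(n)
for k in range(n):
    d = np.sin(2 * np.pi * 0.5 * k * dt)   # unknown input (road profile stand-in)
    x += dt * (-2.0 * x + d)               # "true" first-order plant
    e = x - x_hat
    nu += dt * alpha * np.sign(e)
    x_hat += dt * (-2.0 * x_hat + lam * np.sqrt(abs(e)) * np.sign(e) + nu)
    d_est[k] = nu                          # nu converges to the unknown input
print("disturbance estimate at end:", d_est[-1])
```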
OP-Yield Version 1.00 user's guide
Martin W. Ritchie; Jianwei Zhang
2018-01-01
OP-Yield is a Microsoft Excel™ spreadsheet with 14 specified user inputs to derive custom yield estimates using the original Oliver and Powers (1978) functions as the foundation. It presents yields for ponderosa pine (Pinus ponderosa Lawson & C. Lawson) plantations in northern California. The basic model forms for dominant and...
Uncertainty in flood damage estimates and its potential effect on investment decisions
NASA Astrophysics Data System (ADS)
Wagenaar, D. J.; de Bruijn, K. M.; Bouwer, L. M.; De Moel, H.
2015-01-01
This paper addresses the large differences that are found between damage estimates of different flood damage models. It explains how implicit assumptions in flood damage models can lead to large uncertainties in flood damage estimates. This explanation is used to quantify this uncertainty with a Monte Carlo Analysis. As input the Monte Carlo analysis uses a damage function library with 272 functions from 7 different flood damage models. This results in uncertainties in the order of magnitude of a factor 2 to 5. The resulting uncertainty is typically larger for small water depths and for smaller flood events. The implications of the uncertainty in damage estimates for flood risk management are illustrated by a case study in which the economic optimal investment strategy for a dike segment in the Netherlands is determined. The case study shows that the uncertainty in flood damage estimates can lead to significant over- or under-investments.
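A minimal sketch of this Monte Carlo treatment, with a toy library of stage-damage curves standing in for the 272-function library; depths, damage cap, and curves are illustrative.

```python
# Minimal sketch: propagate damage-model uncertainty by sampling a stage-damage
# curve from a library for each Monte Carlo realization.
import numpy as np

rng = np.random.default_rng(2)
depths = np.array([0.2, 0.5, 1.0, 2.0])            # water depths (m) at 4 objects
max_damage = 250_000.0                             # maximum damage per object (EUR)

# Library of damage functions: each maps depth (m) -> damage fraction in [0, 1].
library = [lambda d, s=s: np.clip(d / s, 0.0, 1.0)
           for s in rng.uniform(1.0, 5.0, 272)]

totals = np.array([max_damage * library[i](depths).sum()
                   for i in rng.integers(0, len(library), 10_000)])
print("5th-95th percentile spread:", np.percentile(totals, [5, 95]))
```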
NASA Astrophysics Data System (ADS)
Dafonte, C.; Fustes, D.; Manteiga, M.; Garabato, D.; Álvarez, M. A.; Ulla, A.; Allende Prieto, C.
2016-10-01
Aims: We present an innovative artificial neural network (ANN) architecture, called Generative ANN (GANN), that computes the forward model; that is, it learns the function that relates the unknown outputs (stellar atmospheric parameters, in this case) to the given inputs (spectra). Such a model can be integrated in a Bayesian framework to estimate the posterior distribution of the outputs. Methods: The architecture of the GANN follows the same scheme as a normal ANN, but with the inputs and outputs inverted. We train the network with the set of atmospheric parameters (Teff, log g, [Fe/H] and [α/ Fe]), obtaining the stellar spectra for such inputs. The residuals between the spectra in the grid and the estimated spectra are minimized using a validation dataset to keep solutions as general as possible. Results: The performance of both conventional ANNs and GANNs to estimate the stellar parameters as a function of the star brightness is presented and compared for different Galactic populations. GANNs provide significantly improved parameterizations for early and intermediate spectral types with rich and intermediate metallicities. The behaviour of both algorithms is very similar for our sample of late-type stars, obtaining residuals in the derivation of [Fe/H] and [α/ Fe] below 0.1 dex for stars with Gaia magnitude Grvs < 12, which accounts for a number on the order of four million stars to be observed by the Radial Velocity Spectrograph of the Gaia satellite. Conclusions: Uncertainty estimation of computed astrophysical parameters is crucial for the validation of the parameterization itself and for the subsequent exploitation by the astronomical community. GANNs produce not only the parameters for a given spectrum, but a goodness-of-fit between the observed spectrum and the predicted one for a given set of parameters. Moreover, they allow us to obtain the full posterior distribution over the astrophysical parameters space once a noise model is assumed. This can be used for novelty detection and quality assessment.
Chan, B
2015-01-01
Background Functional improvements have been seen in stroke patients who have received an increased intensity of physiotherapy. This requires additional costs in the form of increased physiotherapist time. Objectives The objective of this economic analysis is to determine the cost-effectiveness of increasing the intensity of physiotherapy (duration and/or frequency) during inpatient rehabilitation after stroke, from the perspective of the Ontario Ministry of Health and Long-term Care. Data Sources The inputs for our economic evaluation were extracted from articles published in peer-reviewed journals and from reports from government sources or the Canadian Stroke Network. Where published data were not available, we sought expert opinion and used inputs based on the experts' estimates. Review Methods The primary outcome we considered was cost per quality-adjusted life-year (QALY). We also evaluated functional strength training because of its similarities to physiotherapy. We used a 2-state Markov model to evaluate the cost-effectiveness of functional strength training and increased physiotherapy intensity for stroke inpatient rehabilitation. The model had a lifetime timeframe with a 5% annual discount rate. We then used sensitivity analyses to evaluate uncertainty in the model inputs. Results We found that functional strength training and higher-intensity physiotherapy resulted in lower costs and improved outcomes over a lifetime. However, our sensitivity analyses revealed high levels of uncertainty in the model inputs, and therefore in the results. Limitations There is a high level of uncertainty in this analysis due to the uncertainty in model inputs, with some of the major inputs based on expert panel consensus or expert opinion. In addition, the utility outcomes were based on a clinical study conducted in the United Kingdom (i.e., 1 study only, and not in an Ontario or Canadian setting). Conclusions Functional strength training and higher-intensity physiotherapy may result in lower costs and improved health outcomes. However, these results should be interpreted with caution. PMID:26366241
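A minimal sketch of a 2-state (alive/dead) Markov cohort run with annual cycles and 5% discounting; every input value below is an invented placeholder, not a figure from the analysis.

```python
# Minimal sketch: 2-state Markov cohort model accumulating discounted costs
# and QALYs for one strategy over a lifetime horizon.
p_die, disc = 0.06, 0.05            # annual death probability; discount rate
cost_year, utility = 4000.0, 0.64   # annual care cost (CAD) and utility weight
alive, cost, qaly = 1.0, 0.0, 0.0
for year in range(60):              # lifetime horizon
    df = 1.0 / (1.0 + disc)**year
    cost += alive * cost_year * df
    qaly += alive * utility * df
    alive *= 1.0 - p_die            # transition to the absorbing "dead" state
print(f"discounted cost {cost:,.0f}, QALYs {qaly:.2f}")
# An ICER compares two such runs: (cost_B - cost_A) / (qaly_B - qaly_A).
```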
Hormuth, David A; Skinner, Jack T; Does, Mark D; Yankeelov, Thomas E
2014-05-01
Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) can quantitatively and qualitatively assess physiological characteristics of tissue. Quantitative DCE-MRI requires an estimate of the time rate of change of the concentration of the contrast agent in the blood plasma, the vascular input function (VIF). Measuring the VIF in small animals is notoriously difficult as it requires high temporal resolution images, limiting the achievable number of slices, field-of-view, spatial resolution, and signal-to-noise. Alternatively, a population-averaged VIF could be used to mitigate the acquisition demands in studies aimed to investigate, for example, tumor vascular characteristics. Thus, the overall goal of this manuscript is to determine how the kinetic parameters estimated by a population-based VIF differ from those estimated by an individual VIF. Eight rats bearing gliomas were imaged before, during, and after an injection of Gd-DTPA. K^trans, v_e, and v_p were extracted from signal-time curves of tumor tissue using both individual and population-averaged VIFs. Extended model voxel estimates of K^trans and v_e in all animals had concordance correlation coefficients (CCC) ranging from 0.69 to 0.98 and Pearson correlation coefficients (PCC) ranging from 0.70 to 0.99. Additionally, standard model estimates resulted in CCCs ranging from 0.81 to 0.99 and PCCs ranging from 0.98 to 1.00, supporting the use of a population-based VIF if an individual VIF is not available. Copyright © 2014 Elsevier Inc. All rights reserved.
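A minimal sketch of the extended Tofts forward calculation that underlies such fitting; the toy VIF shape and parameter values are illustrative assumptions.

```python
# Minimal sketch of the extended Tofts model:
# C_t(t) = Ktrans * int_0^t Cp(u) exp(-kep (t-u)) du + vp * Cp(t),  kep = Ktrans/ve.
import numpy as np

def extended_tofts(t, Cp, Ktrans, ve, vp):
    kep = Ktrans / ve
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    conv = np.convolve(Cp, kernel)[:len(t)] * dt   # discrete convolution integral
    return Ktrans * conv + vp * Cp

t = np.arange(0, 10, 1 / 60.0)                     # minutes, 1 s sampling
Cp = 5.0 * (t * 60) * np.exp(-t / 0.25)            # toy VIF shape (mM)
Ct = extended_tofts(t, Cp, Ktrans=0.25, ve=0.3, vp=0.05)
print("peak tissue concentration:", Ct.max())
```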
Neuro-genetic non-invasive temperature estimation: intensity and spatial prediction.
Teixeira, César A; Ruano, M Graça; Ruano, António E; Pereira, Wagner C A
2008-06-01
The existence of proper non-invasive temperature estimators is an essential aspect when thermal therapy applications are envisaged. These estimators must be good predictors to enable temperature estimation at different operational situations, providing better control of the therapeutic instrumentation. In this work, radial basis functions artificial neural networks were constructed to access temperature evolution on an ultrasound insonated medium. The employed models were radial basis functions neural networks with external dynamics induced by their inputs. Both the most suited set of model inputs and number of neurons in the network were found using the multi-objective genetic algorithm. The neural models were validated in two situations: the operating ones, as used in the construction of the network; and in 11 unseen situations. The new data addressed two new spatial locations and a new intensity level, assessing the intensity and space prediction capacity of the proposed model. Good performance was obtained during the validation process both in terms of the spatial points considered and whenever the new intensity level was within the range of applied intensities. A maximum absolute error of 0.5 degrees C+/-10% (0.5 degrees C is the gold-standard threshold in hyperthermia/diathermia) was attained with low computationally complex models. The results confirm that the proposed neuro-genetic approach enables foreseeing temperature propagation, in connection to intensity and space parameters, thus enabling the assessment of different operating situations with proper temperature resolution.
Temperature-based estimation of global solar radiation using soft computing methodologies
NASA Astrophysics Data System (ADS)
Mohammadi, Kasra; Shamshirband, Shahaboddin; Danesh, Amir Seyed; Abdullah, Mohd Shahidan; Zamani, Mazdak
2016-07-01
Precise knowledge of solar radiation is essential in many technological and scientific applications of solar energy. Temperature-based estimation of global solar radiation is appealing owing to the broad availability of measured air temperatures. In this study, the potentials of soft computing techniques are evaluated for estimating daily horizontal global solar radiation (DHGSR) from measured maximum, minimum, and average air temperatures (Tmax, Tmin, and Tavg) in an Iranian city. For this purpose, a comparative evaluation of three methodologies is performed: an adaptive neuro-fuzzy inference system (ANFIS), radial basis function support vector regression (SVR-rbf), and polynomial basis function support vector regression (SVR-poly). Five combinations of Tmax, Tmin, and Tavg serve as inputs to develop the ANFIS, SVR-rbf, and SVR-poly models. The attained results show that all ANFIS, SVR-rbf, and SVR-poly models provide favorable accuracy. For all techniques, the higher accuracies are achieved by models (5), which use Tmax − Tmin and Tmax as inputs. According to the statistical results, SVR-rbf outperforms SVR-poly and ANFIS. For SVR-rbf (5), the mean absolute bias error, root mean square error, and correlation coefficient are 1.1931 MJ/m^2, 2.0716 MJ/m^2, and 0.9380, respectively. The survey results confirm that SVR-rbf can be used efficiently to estimate DHGSR from air temperatures.
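A minimal sketch of the best-performing configuration, SVR-rbf with Tmax − Tmin and Tmax as inputs, using scikit-learn on placeholder data (the toy target loosely mimics a Hargreaves-type relation and is not the study's data).

```python
# Minimal sketch: RBF-kernel support vector regression of daily global solar
# radiation from air temperatures.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)
tmax = rng.uniform(15, 40, 365)
tmin = tmax - rng.uniform(5, 15, 365)
dhgsr = 0.16 * np.sqrt(tmax - tmin) * 30.0 + rng.normal(0, 1, 365)  # toy MJ/m^2

X = np.column_stack([tmax - tmin, tmax])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, dhgsr)
print("training R^2:", model.score(X, dhgsr))
```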
Using Geothermal Play Types as an Analogue for Estimating Potential Resource Size
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terry, Rachel; Young, Katherine
Blind geothermal systems are becoming increasingly common as more geothermal fields are developed. Geothermal development is known to carry high risk in the early stages of a project because reservoir characteristics are relatively unknown until wells are drilled. Play types (or occurrence models) categorize potential geothermal fields into groups based on geologic characteristics. To help lower exploration risk, these groups' reservoir characteristics can be used as analogues in new site exploration. The play type schemes used in this paper were the Moeck and Beardsmore play types (Moeck et al. 2014) and the Brophy occurrence models (Brophy et al. 2011). Operating geothermal fields throughout the world were classified by their associated play type, and reservoir characteristics data were catalogued. The distributions of these characteristics were plotted in histograms to develop probability density functions for each individual characteristic. The probability density functions can be used as input analogues in Monte Carlo estimations of resource potential for similar play types in early exploration phases. A spreadsheet model was created to estimate resource potential in undeveloped fields. The user can choose to input their own values for each reservoir characteristic or to use the probability distribution functions provided for the selected play type. This paper also addresses the United States Geological Survey's 1978 and 2008 assessments of geothermal resources by comparing their estimated values to reported values from post-site development. Information from the collected data was used in the comparison for thirty developed sites in the United States. No significant trends or suggestions for methodologies could be made from the comparison.
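A minimal sketch of the Monte Carlo use of such distributions in a volumetric (heat-in-place) estimate; every distribution and conversion factor below is an illustrative placeholder, not a value from the collected data.

```python
# Minimal sketch: draw reservoir characteristics from play-type-analogue
# distributions and compute a volumetric resource estimate.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
area = rng.lognormal(np.log(5.0), 0.5, n)          # km^2
thickness = rng.uniform(0.5, 2.0, n)               # km
T_res = rng.normal(220.0, 25.0, n)                 # reservoir temperature, deg C
rec = rng.triangular(0.05, 0.1, 0.2, n)            # recovery factor

rho_c = 2.7e15                                     # rock heat capacity, J/(km^3 K)
T_rej = 80.0                                       # rejection temperature, deg C
heat = area * thickness * rho_c * (T_res - T_rej)  # heat in place, J
mwe = heat * rec * 0.12 / (30 * 3.156e7) / 1e6     # 12% conversion, 30-yr life, MWe
print("P10/P50/P90 (MWe):", np.percentile(mwe, [10, 50, 90]).round(1))
```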
Menegaldo, Luciano Luporini; de Oliveira, Liliam Fernandes; Minato, Kin K
2014-04-04
This paper describes the "EMG Driven Force Estimator (EMGD-FE)", a Matlab® graphical user interface (GUI) application that estimates skeletal muscle forces from electromyography (EMG) signals. Muscle forces are obtained by numerically integrating a system of ordinary differential equations (ODEs) that simulates Hill-type muscle dynamics and that utilises EMG signals as input. In the current version, the GUI can estimate the forces of lower limb muscles executing isometric contractions. Muscles from other parts of the body can be tested as well, although no default values for model parameters are provided. To achieve accurate evaluations, EMG collection is performed simultaneously with torque measurement from a dynamometer. The computer application guides the user, step-by-step, to pre-process the raw EMG signals, create inputs for the muscle model, numerically integrate the ODEs and analyse the results. An example of the application's functions is presented using the quadriceps femoris muscle. Individual muscle force estimations for the four components as well the knee isometric torque are shown. The proposed GUI can estimate individual muscle forces from EMG signals of skeletal muscles. The estimation accuracy depends on several factors, including signal collection and modelling hypothesis issues.
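A minimal sketch of two core ingredients of such an EMG-driven Hill-type pipeline: first-order activation dynamics driven by a processed EMG envelope, and force-length scaling of the maximum isometric force. All parameters are illustrative, not the GUI's defaults.

```python
# Minimal sketch: activation dynamics ODE plus force-length scaling for an
# isometric contraction.
import numpy as np
from scipy.integrate import solve_ivp

F_max, l_opt = 1500.0, 0.09                   # max isometric force (N), optimal length (m)
tau_act, tau_deact = 0.015, 0.050             # activation/deactivation time constants (s)

def emg_envelope(t):                          # processed EMG excitation u(t) in [0, 1]
    return np.clip(np.sin(np.pi * t / 2.0), 0.0, 1.0)

def act_dyn(t, a):                            # da/dt = (u - a)/tau; tau depends on sign
    u = emg_envelope(t)
    tau = tau_act if u > a[0] else tau_deact
    return [(u - a[0]) / tau]

def force_length(l):                          # Gaussian active force-length curve
    return np.exp(-((l / l_opt - 1.0) / 0.45)**2)

sol = solve_ivp(act_dyn, (0.0, 2.0), [0.0], max_step=1e-3)
force = F_max * sol.y[0] * force_length(0.95 * l_opt)   # fixed isometric fibre length
print("peak estimated force (N):", force.max())
```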
NESTEM-QRAS: A Tool for Estimating Probability of Failure
NASA Technical Reports Server (NTRS)
Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.
2002-01-01
An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability provides a needed input for estimating the success rate of any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and on a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and walks through the stepwise process the interface uses by means of an example.
Quantitative Tomography for Continuous Variable Quantum Systems
NASA Astrophysics Data System (ADS)
Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.
2018-03-01
We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
Humidity compensation of bad-smell sensing system using a detector tube and a built-in camera
NASA Astrophysics Data System (ADS)
Hirano, Hiroyuki; Nakamoto, Takamichi
2011-09-01
We developed a low-cost sensing system, robust against humidity change, for detecting and estimating the concentration of bad smells such as hydrogen sulfide and ammonia. In a previous study, we developed an automated measurement system for a gas detector tube using a built-in camera instead of the conventional manual inspection of the tube. Concentrations detectable by the developed system range from a few tens of ppb to a few tens of ppm. However, we previously found that the estimated concentration depends not only on the actual concentration but also on humidity. Here, we established a method to correct for the influence of humidity by creating a regression function whose inputs are discoloration rate and humidity. We studied two methods (backpropagation and a radial basis function network) for obtaining the regression function and evaluated them. Consequently, the system successfully estimated the concentration at a practical level even when humidity changes.
Neural field theory of perceptual echo and implications for estimating brain connectivity
NASA Astrophysics Data System (ADS)
Robinson, P. A.; Pagès, J. C.; Gabay, N. C.; Babaie, T.; Mukta, K. N.
2018-04-01
Neural field theory is used to predict and analyze the phenomenon of perceptual echo in which random input stimuli at one location are correlated with electroencephalographic responses at other locations. It is shown that this echo correlation (EC) yields an estimate of the transfer function from the stimulated point to other locations. Modal analysis then explains the observed spatiotemporal structure of visually driven EC and the dominance of the alpha frequency; two eigenmodes of similar amplitude dominate the response, leading to temporal beating and a line of low correlation that runs from the crown of the head toward the ears. These effects result from mode splitting and symmetry breaking caused by interhemispheric coupling and cortical folding. It is shown how eigenmodes obtained from functional magnetic resonance imaging experiments can be combined with temporal dynamics from EC or other evoked responses to estimate the spatiotemporal transfer function between any two points and hence their effective connectivity.
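A minimal sketch of the transfer-function estimate implied above, using a toy alpha-resonant filter as the "cortex" and Welch cross-spectra; H(f) = Pxy/Pxx recovers the filter. All signals and parameters are illustrative.

```python
# Minimal sketch: empirical transfer function from a random stimulus x(t) to a
# response y(t) via cross-spectral density, H(f) = Pxy(f) / Pxx(f).
import numpy as np
from scipy import signal

fs, n = 250.0, 60 * 250                        # 250 Hz EEG-like sampling, 60 s
rng = np.random.default_rng(5)
x = rng.standard_normal(n)                     # random input stimulus
# Toy "cortex": damped 10 Hz (alpha) resonance driven by the stimulus.
b, a = signal.iirpeak(10.0, Q=5.0, fs=fs)
y = signal.lfilter(b, a, x) + 0.5 * rng.standard_normal(n)

f, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)
_, Pxx = signal.welch(x, fs=fs, nperseg=1024)
H = Pxy / Pxx                                  # empirical transfer function
print("peak response frequency:", f[np.argmax(np.abs(H))])   # near 10 Hz
```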
AESOP- INTERACTIVE DESIGN OF LINEAR QUADRATIC REGULATORS AND KALMAN FILTERS
NASA Technical Reports Server (NTRS)
Lehtinen, B.
1994-01-01
AESOP was developed to solve a number of problems associated with the design of controls and state estimators for linear time-invariant systems. The systems considered are modeled in state-variable form by a set of linear differential and algebraic equations with constant coefficients. Two key problems solved by AESOP are the linear quadratic regulator (LQR) design problem and the steady-state Kalman filter design problem. AESOP is designed to be used in an interactive manner. The user can solve design problems and analyze the solutions in a single interactive session. Both numerical and graphical information are available to the user during the session. The AESOP program is structured around a list of predefined functions. Each function performs a single computation associated with control, estimation, or system response determination. AESOP contains over sixty functions and permits the easy inclusion of user defined functions. The user accesses these functions either by inputting a list of desired functions in the order they are to be performed, or by specifying a single function to be performed. The latter case is used when the choice of function and function order depends on the results of previous functions. The available AESOP functions are divided into several general areas including: 1) program control, 2) matrix input and revision, 3) matrix formation, 4) open-loop system analysis, 5) frequency response, 6) transient response, 7) transient function zeros, 8) LQR and Kalman filter design, 9) eigenvalues and eigenvectors, 10) covariances, and 11) user-defined functions. The most important functions are those that design linear quadratic regulators and Kalman filters. The user interacts with AESOP when using these functions by inputting design weighting parameters and by viewing displays of designed system response. Support functions obtain system transient and frequency responses, transfer functions, and covariance matrices. AESOP can also provide the user with open-loop system information including stability, controllability, and observability. The AESOP program is written in FORTRAN IV for interactive execution and has been implemented on an IBM 3033 computer using TSS 370. As currently configured, AESOP has a central memory requirement of approximately 2 Megs of 8 bit bytes. Memory requirements can be reduced by redimensioning arrays in the AESOP program. Graphical output requires adaptation of the AESOP plot routines to whatever device is available. The AESOP program was developed in 1984.
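The two central AESOP designs can be reproduced in a few lines with modern Riccati solvers; the double-integrator system and weights below are illustrative, not an AESOP example.

```python
# Minimal sketch: LQR gain and steady-state Kalman gain, both from the
# continuous algebraic Riccati equation (the filter via the dual problem).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # double integrator
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[1.0]])       # LQR state/control weights
W, V = np.eye(2), np.array([[0.1]])       # process/measurement noise covariances

P = solve_continuous_are(A, B, Q, R)
K_lqr = np.linalg.solve(R, B.T @ P)       # u = -K x
S = solve_continuous_are(A.T, C.T, W, V)  # dual Riccati equation
L_kf = S @ C.T @ np.linalg.inv(V)         # steady-state Kalman gain
print("LQR gain:", K_lqr, "\nKalman gain:", L_kf.ravel())
```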
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Tao; Tsui, Benjamin M. W.; Li, Xin
Purpose: The radioligand (11)C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of (11)C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtain the input function, but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace the conventional invasive arterial sampling, and to test the method in pigs with the goal of translation into human studies. Methods: The experimental animals were injected with [(11)C]KR31173 and scanned up to 90 min with dynamic PET. Arterial blood samples were collected for the artery-derived input function (AD-IF) and used as a gold standard for the ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptake on the quantitative accuracy of the ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP based image reconstruction method. However, OS-EM based image reconstruction was found to reduce variance in the subsequent phase of the ID-IF. The combined use of FBP and OS-EM resulted in reduced bias and noise. After performing all the necessary corrections, the areas under the curves (AUCs) of the ID-IF were close to those of the AD-IF (average AUC ratio = 1 ± 0.08) during the early phase. When applied in a two-tissue-compartment kinetic model, the average difference between the model parameters estimated from the ID-IF and the AD-IF was 10%, which was within the error of the estimation method. Conclusions: The bias of radioligand concentration in the aorta from OS-EM image reconstruction is significantly affected by radioligand uptake in the adjacent kidney and cannot be neglected for quantitative evaluation. With careful calibrations and corrections, the ID-IF derived from quantitative dynamic PET images can be used as the input function of the compartmental model to quantify the renal kinetics of (11)C-KR31173 in experimental animals, and the authors intend to evaluate this method in future human studies.
From Spiking Neuron Models to Linear-Nonlinear Models
Ostojic, Srdjan; Brunel, Nicolas
2011-01-01
Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
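A minimal sketch of the LN cascade itself, with an assumed exponential filter and softplus nonlinearity standing in for the analytically derived, parameter-free ones discussed above.

```python
# Minimal sketch: linear-nonlinear cascade producing a firing rate, followed by
# stochastic (Poisson) spike emission at that rate.
import numpy as np

dt, n = 1e-3, 5000
t = np.arange(n) * dt
rng = np.random.default_rng(6)
stim = rng.standard_normal(n)                      # time-varying input current

tau = 0.02
k = np.exp(-t[:200] / tau) * dt / tau              # linear temporal filter (area 1)
drive = np.convolve(stim, k)[:n]                   # L stage: filtered input

rate = 20.0 * np.log1p(np.exp(3.0 * drive))        # N stage: softplus nonlinearity
spikes = rng.poisson(rate * dt)                    # stochastic emission at rate r(t)
print("mean rate (Hz):", spikes.sum() / (n * dt))
```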
NASA Astrophysics Data System (ADS)
Li, Qiangkun; Hu, Yawei; Jia, Qian; Song, Changji
2018-02-01
Estimating the pollutant concentration in agricultural drainage is a key point in quantitative research on agricultural non-point source pollution loads. Guided by uncertainty theory, the combined fertilization and irrigation event is treated as an impulse input to the farmland, and the pollutant concentration in the agricultural drain is regarded as the response process corresponding to that impulse input. The migration and transformation of pollutants in soil are represented by an inverse Gaussian probability density function, and their behavior at different crop growth periods is captured by adjusting the parameters of the inverse Gaussian distribution. On this basis, an estimation model for pollutant concentration in agricultural drainage at the field scale was constructed. Taking the Qing Tong Xia Irrigation District in Ningxia as an example, the concentrations of nitrate nitrogen and total phosphorus in agricultural drainage were simulated with this model. The results show that the simulated values agree well with the measured data, with Nash-Sutcliffe coefficients of 0.972 and 0.964, respectively.
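A minimal sketch of the convolution model described above, assuming hypothetical impulse magnitudes and inverse Gaussian parameters (the calibrated, growth-period-specific values from the study are not reproduced):

```python
import numpy as np

def inverse_gaussian_pdf(t, mu, lam):
    """Inverse Gaussian density used as the soil transfer (impulse
    response) function; mu and lam would be adjusted per growth period."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = np.sqrt(lam / (2 * np.pi * t[pos] ** 3)) * \
        np.exp(-lam * (t[pos] - mu) ** 2 / (2 * mu ** 2 * t[pos]))
    return out

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency for scoring simulated vs. measured series."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical example: two fertilization/irrigation impulses over a season.
days = np.arange(0, 120)
impulse = np.zeros(len(days))
impulse[[5, 60]] = [40.0, 25.0]                      # impulse inputs
irf = inverse_gaussian_pdf(days, mu=20.0, lam=35.0)  # assumed parameters
concentration = np.convolve(impulse, irf)[: len(days)]  # drain response
# Fit quality against measured data would be nash_sutcliffe(measured, concentration).
```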
White dwarf stars and the age of the Galactic disk
NASA Technical Reports Server (NTRS)
Wood, M. A.
1990-01-01
The history of the Galaxy is written in its oldest stars, the white dwarf (WD) stars. Significant limits can be placed on both the Galactic age and star formation history. A wide range of input WD model sequences is used to derive the current limits to the age estimates suggested by fitting to the observed falloff in the WD luminosity function. The results suggest that the star formation rate over the history of the Galaxy has been relatively constant, and that the disk age lies in the range 6-12 billion years, depending upon the assumed structure of WD stars, and in particular on the core composition and surface helium layer mass. Using plausible mixed C/O core input models, the estimates for the disk age range from 8-10.5 Gyr, i.e., substantially younger than most age estimates for the halo globular clusters. After speculating on the significance of the results, expected observational and theoretical refinements which will further enhance the reliability of the method are discussed.
Nitrogen Fuelling of the Pelagic Food Web of the Tropical Atlantic
Brandt, Peter; Dengler, Marcus; Stemmann, Lars; Vandromme, Pieter; Sommer, Ulrich
2015-01-01
We estimated the relative contribution of atmospheric nitrogen (N) input (wet and dry deposition and N fixation) to the epipelagic food web by measuring N isotopes of different functional groups of epipelagic zooplankton along 23°W (17°N-4°S) and 18°N (20-24°W) in the Eastern Tropical Atlantic. Results were related to water column observations of nutrient distribution and vertical diffusive flux as well as colony abundance of Trichodesmium obtained with an Underwater Vision Profiler (UVP5). The thickness and depth of the nitracline and phosphocline proved to be significant predictors of zooplankton stable N isotope values. Atmospheric N input was highest (61% of total N) in the strongly stratified and oligotrophic region between 3 and 7°N, which featured very high depth-integrated Trichodesmium abundance (up to 9.4×10⁴ colonies m⁻²), strong thermohaline stratification and low zooplankton δ15N (~2‰). Relative atmospheric N input was lowest south of the equatorial upwelling between 3 and 5°S (27%). Values in the Guinea Dome region and north of Cape Verde were 45% and 50%, respectively. The microstructure-derived estimate of the vertical diffusive N flux in the equatorial region was about one order of magnitude higher than in any other area (approximately 8 mmol m⁻² d⁻¹). At the same time, this region received considerable atmospheric N input (35% of total). In general, zooplankton δ15N and Trichodesmium abundance were closely correlated, indicating that N fixation is the major source of atmospheric N input. Although Trichodesmium is not the only N-fixing organism, its abundance can be used with high confidence to estimate the relative atmospheric N input in the tropical Atlantic (r² = 0.95). Estimates of absolute N fixation rates are two- to tenfold higher than incubation-derived rates reported for the same regions. Our approach integrates over large spatial and temporal scales and also quantifies fixed N released as dissolved inorganic and organic N. In a global analysis, it may thus help to close the gap in oceanic N budgets. PMID:26098917
Effect of land use change on the carbon cycle in Amazon soils
NASA Technical Reports Server (NTRS)
Trumbore, Susan E.; Davidson, Eric A.
1994-01-01
The overall goal of this study was to provide a quantitative understanding of the cycling of carbon in the soils associated with deep-rooting Amazon forests. In particular, we wished to apply the understanding gained by answering two questions: (1) what changes will accompany the major land use change in this region, the conversion of forest to pasture? and (2) what is the role of carbon stored deeper than one meter in depth in these soils? To construct carbon budgets for pasture and forest soils we combined the following: measurements of carbon stocks in above-ground vegetation, root biomass, detritus, and soil organic matter; rates of carbon inputs to soil and detrital layers using litterfall collection and sequential coring to estimate fine root turnover; C-14 analyses of fractionated SOM and soil CO2 to estimate residence times; C-13 analyses to estimate C inputs to pasture soils from C-4 grasses; soil pCO2, volumetric water content, and radon gradients to estimate CO2 production as a function of soil depth; soil respiration to estimate total C outputs; and a model of soil C dynamics that defines SOM fractions cycling on annual, decadal, and millennial time scales.
STEADY-STATE DESIGN OF VERTICAL WELLS FOR LIQUIDS ADDITION AT BIOREACTOR LANDFILLS
This paper presents design charts that a landfill engineer can use for the design of a vertical well system for liquids addition at bioreactor landfills. The flow rate and lateral and vertical zones of impact of a vertical well were estimated as a function of input variables su...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-16
... organizational structure, whereby multi-hospital systems have integrated their governing body functions to... based on nationally recognized and evidence-based guidelines and recommendations. Verbal Orders: We have..., Outpatient services ($300 million). Our estimates were based on input from stakeholders as well as on our own...
Large signal-to-noise ratio quantification in MLE for ARARMAX models
NASA Astrophysics Data System (ADS)
Zou, Yiqun; Tang, Xiafei
2014-06-01
It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid the potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we introduce the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by the minimisation of an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold, and on a real gas turbine engine system for model identification. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.
Latin hypercube approach to estimate uncertainty in ground water vulnerability
Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.
2007-01-01
A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.
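The LHS propagation step can be sketched as follows. The coefficient estimates, standard errors, and data-error levels below are hypothetical placeholders, not values from the High Plains assessment.

```python
import numpy as np
from scipy.stats import norm, qmc

# Hypothetical logistic regression coefficients (model error) and
# explanatory variable values (data error) for a single grid cell.
beta_mean = np.array([-2.0, 0.8, 1.5])   # intercept, beta1, beta2
beta_se   = np.array([0.3, 0.1, 0.2])    # coefficient standard errors
x_mean    = np.array([0.5, 1.2])         # GIS explanatory variables
x_se      = np.array([0.05, 0.10])       # data uncertainty

sampler = qmc.LatinHypercube(d=5, seed=0)     # 3 coefficients + 2 inputs
u = sampler.random(n=1000)                    # stratified uniforms in [0, 1)
beta = norm.ppf(u[:, :3], loc=beta_mean, scale=beta_se)
x = norm.ppf(u[:, 3:], loc=x_mean, scale=x_se)
X = np.column_stack([np.ones(len(x)), x])     # prepend intercept column

logit = np.sum(beta * X, axis=1)
prob = 1.0 / (1.0 + np.exp(-logit))           # vulnerability probabilities
lo, hi = np.percentile(prob, [2.5, 97.5])     # 95% prediction interval
```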
Shih, Peter; Kaul, Brian C; Jagannathan, Sarangapani; Drallmeier, James A
2009-10-01
A novel reinforcement-learning-based output adaptive neural network (NN) controller, which is also referred to as the adaptive-critic NN controller, is developed to deliver the desired tracking performance for a class of nonlinear discrete-time systems expressed in nonstrict feedback form in the presence of bounded and unknown disturbances. The adaptive-critic NN controller consists of an observer, a critic, and two action NNs. The observer estimates the states and output, and the two action NNs provide virtual and actual control inputs to the nonlinear discrete-time system. The critic approximates a certain strategic utility function, and the action NNs minimize the strategic utility function and control inputs. All NN weights adapt online toward minimization of a performance index, utilizing the gradient-descent-based rule, in contrast with iteration-based adaptive-critic schemes. Lyapunov functions are used to show the stability of the closed-loop tracking error, weights, and observer estimates. Separation and certainty equivalence principles, persistency of excitation condition, and linearity in the unknown parameter assumption are not needed. Experimental results on a spark ignition (SI) engine operating lean at an equivalence ratio of 0.75 show a significant (25%) reduction in cyclic dispersion in heat release with control, while the average fuel input changes by less than 1% compared with the uncontrolled case. Consequently, oxides of nitrogen (NO(x)) drop by 30%, and unburned hydrocarbons drop by 16% with control. Overall, NO(x)'s are reduced by over 80% compared with stoichiometric levels.
Farrance, Ian; Frenkel, Robert
2014-01-01
The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM however does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also provides its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the differential equations associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more ‘constants’, each of which has an empirically derived numerical value. Such empirically derived ‘constants’ must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand. PMID:24659835
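The spreadsheet MCS procedure described in the Farrance and Frenkel abstract translates directly into a few lines of Python. The measurand, its functional relationship, and all uncertainties below are hypothetical.

```python
import numpy as np

# Monte Carlo propagation through a functional relationship: draw
# normally distributed input quantities (uncertainties e.g. from IQC
# data), evaluate the relationship, and read the combined standard
# uncertainty off the output distribution.
rng = np.random.default_rng(42)
n = 100_000

a = rng.normal(5.0, 0.10, n)     # analyte signal, value +/- std uncertainty
b = rng.normal(2.0, 0.05, n)     # blank signal
k = rng.normal(1.8, 0.02, n)     # empirically derived 'constant'

y = k * (a - b)                  # functional relationship y = k(a - b)
print(f"y = {y.mean():.3f}, combined standard uncertainty = {y.std(ddof=1):.3f}")
```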
Local linear regression for function learning: an analysis based on sample discrepancy.
Cervellera, Cristiano; Macciò, Danilo
2014-11-01
Local linear regression models, a kind of nonparametric structure that locally performs a linear estimation of the target function, are analyzed in the context of empirical risk minimization (ERM) for function learning. The analysis is carried out with emphasis on geometric properties of the available data. In particular, the discrepancy of the observation points used both to build the local regression models and to compute the empirical risk is considered. This makes it possible to treat uniformly the case in which the samples come from a random external source and the one in which the input space can be freely explored. Both consistency of the ERM procedure and approximating capabilities of the estimator are analyzed, proving conditions that ensure convergence. Since the theoretical analysis shows that the estimation improves as the discrepancy of the observation points becomes smaller, low-discrepancy sequences, a family of sampling methods commonly employed for efficient numerical integration, are also analyzed. Simulation results involving two different examples of function learning are provided.
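A minimal sketch of a local linear estimator of the kind analyzed above, using Gaussian locality weights; the bandwidth, target function, and sample sizes are illustrative assumptions.

```python
import numpy as np

def local_linear_fit(x_train, y_train, x0, bandwidth):
    """Weighted least squares line fitted around the query point x0;
    the prediction is the fitted line evaluated at x0 (its intercept)."""
    w = np.exp(-0.5 * ((x_train - x0) / bandwidth) ** 2)  # locality weights
    X = np.column_stack([np.ones_like(x_train), x_train - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
    return beta[0]

# Hypothetical target: learn f(x) = sin(2*pi*x) from noisy samples.
rng = np.random.default_rng(1)
x_train = rng.uniform(0, 1, 200)       # samples from a random external source
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 200)
y_hat = [local_linear_fit(x_train, y_train, x0, bandwidth=0.05)
         for x0 in np.linspace(0.1, 0.9, 9)]
```

Replacing the uniform random x_train with a low-discrepancy sequence (e.g., scipy.stats.qmc.Sobol) corresponds to the freely explored input space discussed in the analysis.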
Zanotti-Fregonara, Paolo; Hines, Christina S; Zoghbi, Sami S; Liow, Jeih-San; Zhang, Yi; Pike, Victor W; Drevets, Wayne C; Mallinger, Alan G; Zarate, Carlos A; Fujita, Masahiro; Innis, Robert B
2012-11-15
Quantitative PET studies of neuroreceptor tracers typically require that arterial input function be measured. The aim of this study was to explore the use of a population-based input function (PBIF) and an image-derived input function (IDIF) for [(11)C](R)-rolipram kinetic analysis, with the goal of reducing - and possibly eliminating - the number of arterial blood samples needed to measure parent radioligand concentrations. A PBIF was first generated using [(11)C](R)-rolipram parent time-activity curves from 12 healthy volunteers (Group 1). Both invasive (blood samples) and non-invasive (body weight, body surface area, and lean body mass) scaling methods for PBIF were tested. The scaling method that gave the best estimate of the Logan-V(T) values was then used to determine the test-retest variability of PBIF in Group 1 and then prospectively applied to another population of 25 healthy subjects (Group 2), as well as to a population of 26 patients with major depressive disorder (Group 3). Results were also compared to those obtained with an image-derived input function (IDIF) from the internal carotid artery. In some subjects, we measured arteriovenous differences in [(11)C](R)-rolipram concentration to see whether venous samples could be used instead of arterial samples. Finally, we assessed the ability of IDIF and PBIF to discriminate depressed patients (MDD) and healthy subjects. Arterial blood-scaled PBIF gave better results than any non-invasive scaling technique. Excellent results were obtained when the blood-scaled PBIF was prospectively applied to the subjects in Group 2 (V(T) ratio 1.02±0.05; mean±SD) and Group 3 (V(T) ratio 1.03±0.04). Equally accurate results were obtained for two subpopulations of subjects drawn from Groups 2 and 3 who had very differently shaped (i.e. "flatter" or "steeper") input functions compared to PBIF (V(T) ratio 1.07±0.04 and 0.99±0.04, respectively). Results obtained via PBIF were equivalent to those obtained via IDIF (V(T) ratio 0.99±0.05 and 1.00±0.04 for healthy subjects and MDD patients, respectively). Retest variability of PBIF was equivalent to that obtained with full input function and IDIF (14.5%, 15.2%, and 14.1%, respectively). Due to [(11)C](R)-rolipram arteriovenous differences, venous samples could not be substituted for arterial samples. With both IDIF and PBIF, depressed patients had a 20% reduction in [(11)C](R)-rolipram binding as compared to control (two-way ANOVA: p=0.008 and 0.005, respectively). These results were almost equivalent to those obtained using 23 arterial samples. Although some arterial samples are still necessary, both PBIF and IDIF are accurate and precise alternatives to full arterial input function for [(11)C](R)-rolipram PET studies. Both techniques give accurate results with low variability, even for clinically different groups of subjects and those with very differently shaped input functions. Published by Elsevier Inc.
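Since the study validates its input functions through Logan-V(T), a generic sketch of Logan graphical analysis is given below; the curves and t* are hypothetical, and a PBIF scaled by one late blood sample would simply take the place of cp.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def logan_vt(t, ct, cp, t_star):
    """Logan graphical analysis: after t_star, a plot of
    int_0^t ct / ct(t) against int_0^t cp / ct(t) becomes linear,
    and the slope approximates the total distribution volume VT."""
    int_ct = cumulative_trapezoid(ct, t, initial=0.0)
    int_cp = cumulative_trapezoid(cp, t, initial=0.0)
    m = t >= t_star
    slope, intercept = np.polyfit(int_cp[m] / ct[m], int_ct[m] / ct[m], 1)
    return slope

# Hypothetical tissue curve and plasma input on a 90-minute grid.
t = np.linspace(0.5, 90.0, 180)                         # minutes
cp = 5.0 * np.exp(-0.15 * t) + 0.4 * np.exp(-0.01 * t)  # plasma input
ct = 3.0 * (np.exp(-0.02 * t) - np.exp(-0.25 * t))      # tissue activity
vt = logan_vt(t, ct, cp, t_star=30.0)
```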
Distributed Optimal Consensus Control for Multiagent Systems With Input Delay.
Zhang, Huaipin; Yue, Dong; Zhao, Wei; Hu, Songlin; Dou, Chunxia
2018-06-01
This paper addresses the problem of distributed optimal consensus control for a continuous-time heterogeneous linear multiagent system subject to time-varying input delays. First, by discretization and model transformation, the continuous-time input-delayed system is converted into a discrete-time delay-free system. Two delicate performance index functions are defined for these two systems. It is shown that the performance index functions are equivalent and the optimal consensus control problem of the input-delayed system can be cast into that of the delay-free system. Second, by virtue of the Hamilton-Jacobi-Bellman (HJB) equations, an optimal control policy for each agent is designed based on the delay-free system and a novel value iteration algorithm is proposed to learn the solutions to the HJB equations online. The proposed adaptive dynamic programming algorithm is implemented on the basis of a critic-action neural network (NN) structure. Third, it is proved that local consensus errors of the two systems and weight estimation errors of the critic-action NNs are uniformly ultimately bounded while the approximated control policies converge to their target values. Finally, two simulation examples are presented to illustrate the effectiveness of the developed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granderson, G.D.
The purpose of the dissertation is to examine the impact of rate-of-return regulation on the cost of transporting natural gas in interstate commerce. Of particular interest is the effect of the regulation on the input choice of a firm. Does regulation induce a regulated firm to produce its selected level of output at greater than minimum cost? The theoretical model is based on the work of Rolf Faere and James Logan, who investigate the duality relationship between the cost and production functions of a rate-of-return regulated firm. Faere and Logan derive the cost function for a regulated firm as the minimum cost of producing the firm's selected level of output, subject to the regulatory constraint. The regulated cost function is used to recover the unregulated cost function. A firm's unregulated cost function is the minimum cost of producing its selected level of output. Characteristics of the production technology are obtained from duality between the production and unregulated cost functions. Using data on 20 pipeline companies from 1977 to 1987, the author estimates a random effects model that consists of a regulated cost function and its associated input share equations. The model is estimated as a set of seemingly unrelated regressions. The empirical results are used to test the Faere and Logan theory and the traditional Averch-Johnson hypothesis of overcapitalization. Parameter estimates are used to recover the unregulated cost function and to calculate the amount by which transportation costs are increased by the regulation of the industry. Empirical results show that a firm's transportation cost decreases as the allowed rate of return increases and the regulatory constraint becomes less tight. Elimination of the regulatory constraint would lead to a reduction in costs on average of 5.278%. There is evidence that firms overcapitalize on pipeline capital. There is inconclusive evidence on whether firms overcapitalized on compressor station capital.
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
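The inflation-and-bias effect for the simple linear case can be reproduced with a small simulation: regressing runoff on error-corrupted rainfall attenuates the slope estimate. All values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
rain_true = rng.gamma(2.0, 10.0, n)              # true storm rainfall
runoff = 0.6 * rain_true + rng.normal(0, 2, n)   # true linear relation
rain_meas = rain_true + rng.normal(0, 8, n)      # measurement error in input

slope_true = np.polyfit(rain_true, runoff, 1)[0]
slope_meas = np.polyfit(rain_meas, runoff, 1)[0]
# slope_meas < slope_true: classic attenuation bias from input error,
# which in turn biases expected-runoff estimates for observed inputs.
print(slope_true, slope_meas)
```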
Nguyen, T B; Cron, G O; Perdrizet, K; Bezzina, K; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Sinclair, J; Thornhill, R E; Foottit, C; Zanette, B; Cameron, I G
2015-11-01
Dynamic contrast-enhanced MR imaging parameters can be biased by poor measurement of the vascular input function. We have compared the diagnostic accuracy of dynamic contrast-enhanced MR imaging by using a phase-derived vascular input function and "bookend" T1 measurements with DSC MR imaging for preoperative grading of astrocytomas. This prospective study included 48 patients with a new pathologic diagnosis of an astrocytoma. Preoperative MR imaging was performed at 3T, which included 2 injections of 5-mL gadobutrol for dynamic contrast-enhanced and DSC MR imaging. During dynamic contrast-enhanced MR imaging, both magnitude and phase images were acquired to estimate plasma volume obtained from phase-derived vascular input function (Vp_Φ) and volume transfer constant obtained from phase-derived vascular input function (K(trans)_Φ) as well as plasma volume obtained from magnitude-derived vascular input function (Vp_SI) and volume transfer constant obtained from magnitude-derived vascular input function (K(trans)_SI). From DSC MR imaging, corrected relative CBV was computed. Four ROIs were placed over the solid part of the tumor, and the highest value among the ROIs was recorded. A Mann-Whitney U test was used to test for difference between grades. Diagnostic accuracy was assessed by using receiver operating characteristic analysis. Vp_Φ and K(trans)_Φ values were lower for grade II compared with grade III astrocytomas (P < .05). Vp_SI and K(trans)_SI were not significantly different between grade II and grade III astrocytomas (P = .08-.15). Relative CBV and dynamic contrast-enhanced MR imaging parameters except for K(trans)_SI were lower for grade III compared with grade IV (P ≤ .05). In differentiating low- and high-grade astrocytomas, we found no statistically significant difference in diagnostic accuracy between relative CBV and dynamic contrast-enhanced MR imaging parameters. In the preoperative grading of astrocytomas, the diagnostic accuracy of dynamic contrast-enhanced MR imaging parameters is similar to that of relative CBV. © 2015 by American Journal of Neuroradiology.
Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1993-01-01
The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single-pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming, with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple input design capability, with optional inclusion of a constraint that only one control moves at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open loop model parameters from closed loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications of the new technique which demonstrate the quality and expanded capabilities of the input designs produced by the new technique. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Zhangshuan; Terry, Neil C.; Hubbard, Susan S.
2013-02-22
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSIM) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. The memory function and pilot point design takes advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
Bell, L C; Does, M D; Stokes, A M; Baxter, L C; Schmainda, K M; Dueck, A C; Quarles, C C
2017-09-01
The optimal TE must be calculated to minimize the variance in CBV measurements made with DSC MR imaging. Simulations can be used to determine the influence of the TE on CBV, but they may not adequately recapitulate the in vivo heterogeneity of precontrast T2*, contrast agent kinetics, and the biophysical basis of contrast agent-induced T2* changes. The purpose of this study was to combine quantitative multiecho DSC MRI T2* time curves with error analysis in order to compute the optimal TE for a traditional single-echo acquisition. Eleven subjects with high-grade gliomas were scanned at 3T with a dual-echo DSC MR imaging sequence to quantify contrast agent-induced T2* changes in this retrospective study. Optimized TEs were calculated with propagation of error analysis for high-grade glial tumors, normal-appearing white matter, and arterial input function estimation. The optimal TE is a weighted average of the T2* values that occur as a contrast agent bolus transverses a voxel. The mean optimal TEs were 30.0 ± 7.4 ms for high-grade glial tumors, 36.3 ± 4.6 ms for normal-appearing white matter, and 11.8 ± 1.4 ms for arterial input function estimation (repeated-measures ANOVA, P < .001). Greater heterogeneity was observed in the optimal TE values for high-grade gliomas, and mean values of all 3 ROIs were statistically significant. The optimal TE for the arterial input function estimation is much shorter; this finding implies that quantitative DSC MR imaging acquisitions would benefit from multiecho acquisitions. In the case of a single-echo acquisition, the optimal TE prescribed should be 30-35 ms (without a preload) and 20-30 ms (with a standard full-dose preload). © 2017 by American Journal of Neuroradiology.
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the physics of the model and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and off-shore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than using the prior information for the input data: the variation of the uncertain parameters will be decreased and the probability of the observed data will improve as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
Wang, Guobao; Corwin, Michael T; Olson, Kristin A; Badawi, Ramsey D; Sarkar, Souvik
2018-05-30
The hallmark of nonalcoholic steatohepatitis is hepatocellular inflammation and injury in the setting of hepatic steatosis. Recent work has indicated that dynamic 18F-FDG PET with kinetic modeling has the potential to assess hepatic inflammation noninvasively, whereas static FDG-PET has not shown such promise. Because the liver has dual blood supplies, kinetic modeling of dynamic liver PET data is challenging in human studies. The objective of this study is to evaluate and identify a dual-input kinetic modeling approach for dynamic FDG-PET of human liver inflammation. Fourteen human patients with nonalcoholic fatty liver disease were included in the study. Each patient underwent a one-hour dynamic FDG-PET/CT scan and had a liver biopsy within six weeks. Three models were tested for kinetic analysis: the traditional two-tissue compartmental model with an image-derived single-blood input function (SBIF), a model with population-based dual-blood input function (DBIF), and a modified model with optimization-derived DBIF through a joint estimation framework. The three models were compared using the Akaike information criterion (AIC), F test and histopathologic inflammation reference. The results showed that the optimization-derived DBIF model improved the fitting of liver time activity curves and achieved lower AIC values and higher F values than the SBIF and population-based DBIF models in all patients. The optimization-derived model significantly increased FDG K1 estimates by 101% and 27% as compared with traditional SBIF and population-based DBIF. K1 by the optimization-derived model was significantly associated with histopathologic grades of liver inflammation while the other two models did not provide statistical significance. In conclusion, modeling of DBIF is critical for kinetic analysis of dynamic liver FDG-PET data in human studies. The optimization-derived DBIF model is more appropriate than SBIF and population-based DBIF for dynamic FDG-PET of liver inflammation. © 2018 Institute of Physics and Engineering in Medicine.
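As a generic illustration (not the study's optimization-derived DBIF parameterization), a dual-blood input can be sketched as a flow-weighted sum of an arterial curve and a dispersed portal-vein component; the arterial fraction fa, dispersion rate beta, and the arterial curve itself are hypothetical.

```python
import numpy as np

def dual_blood_input(t, ca, fa=0.25, beta=0.02):
    """Generic dual-blood input sketch: the portal-vein component is
    modeled as the arterial curve dispersed by a normalized exponential
    kernel; the liver input is a flow-weighted sum of both components."""
    dt = t[1] - t[0]
    kernel = beta * np.exp(-beta * t)             # unit-area dispersion kernel
    cp = np.convolve(ca, kernel)[: len(t)] * dt   # portal-vein component
    return fa * ca + (1.0 - fa) * cp

# Hypothetical arterial curve on a 1-second grid over 60 minutes.
t = np.arange(0.0, 3600.0, 1.0)
ca = 50.0 * (t / 30.0) * np.exp(-t / 30.0) + 2.0 * np.exp(-t / 2000.0)
c_in = dual_blood_input(t, ca, fa=0.25, beta=0.02)
```

In a joint estimation framework of the kind described above, fa and beta would be fitted together with the kinetic parameters rather than fixed.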
Enhancing PTFs with remotely sensed data for multi-scale soil water retention estimation
NASA Astrophysics Data System (ADS)
Jana, Raghavendra B.; Mohanty, Binayak P.
2011-03-01
Use of remotely sensed data products in the earth science and water resources fields is growing due to increasingly easy availability of the data. Traditionally, pedotransfer functions (PTFs) employed for soil hydraulic parameter estimation from other easily available data have used basic soil texture and structure information as inputs. Inclusion of surrogate/supplementary data such as topography and vegetation information has shown some improvement in the PTF's ability to estimate more accurate soil hydraulic parameters. Artificial neural networks (ANNs) are a popular tool for PTF development, and are usually applied across matching spatial scales of inputs and outputs. However, different hydrologic, hydro-climatic, and contaminant transport models require input data at different scales, all of which may not be easily available from existing databases. In such a scenario, it becomes necessary to scale the soil hydraulic parameter values estimated by PTFs to suit the model requirements. Also, uncertainties in the predictions need to be quantified to enable users to gauge the suitability of a particular dataset in their applications. Bayesian Neural Networks (BNNs) inherently provide uncertainty estimates for their outputs due to their utilization of Markov Chain Monte Carlo (MCMC) techniques. In this paper, we present a PTF methodology to estimate soil water retention characteristics built on a Bayesian framework for training of neural networks and utilizing several in situ and remotely sensed datasets jointly. The BNN is also applied across spatial scales to provide fine scale outputs when trained with coarse scale data. Our training data inputs include ground/remotely sensed soil texture, bulk density, elevation, and Leaf Area Index (LAI) at 1 km resolutions, while similar properties measured at a point scale are used as fine scale inputs. The methodology was tested at two different hydro-climatic regions. We also tested the effect of varying the support scale of the training data for the BNNs by sequentially aggregating finer resolution training data to coarser resolutions, and the applicability of the technique to upscaling problems. The BNN outputs are corrected for bias using a non-linear CDF-matching technique. Final results show good promise of the suitability of this Bayesian Neural Network approach for soil hydraulic parameter estimation across spatial scales using ground-, air-, or space-based remotely sensed geophysical parameters. Inclusion of remotely sensed data such as elevation and LAI in addition to in situ soil physical properties improved the estimation capabilities of the BNN-based PTF in certain conditions.
Satoshi Hirabayashi; Chuck Kroll; David Nowak
2011-01-01
The Urban Forest Effects-Deposition model (UFORE-D) was developed with a component-based modeling approach. Functions of the model were separated into components that are responsible for user interface, data input/output, and core model functions. Taking advantage of the component-based approach, three UFORE-D applications were developed: a base application to estimate...
NASA Astrophysics Data System (ADS)
Baker, A. R.; Lesworth, T.; Adams, C.; Jickells, T. D.; Ganzeveld, L.
2010-09-01
Atmospheric nitrogen inputs to the ocean are estimated to have increased by up to a factor of three as a result of increased anthropogenic emissions over the last 150 years, with further increases expected in the short- to mid-term at least. Such estimates are largely based on emissions and atmospheric transport modeling, because, apart from a few island sites, there is very little observational data available for atmospheric nitrogen concentrations over the remote ocean. Here we use samples of rainwater and aerosol we obtained during 12 long-transect cruises across the Atlantic Ocean between 50°N and 50°S as the basis for a climatological estimate of nitrogen inputs to the basin. The climatology is for the 5 years 2001-2005, during which almost all of the cruises took place, and includes dry and wet deposition of nitrate and ammonium explicitly, together with a more uncertain estimate of soluble organic nitrogen deposition. Our results indicate that nitrogen inputs into the region were ~850-1420 Gmol (12-20 Tg) N yr-1, with ~78-85% of this in the form of wet deposition. Inputs were greater in the Northern Hemisphere and in wet regions, and wet regions had a greater proportion of input via wet deposition. The largest uncertainty in our estimate of dry inputs is associated with variability in deposition velocities, while the largest uncertainty in our wet nitrogen input estimate is due to the limited amount and uneven geographic distribution of observational data. We also estimate a lower limit of dry deposition of phosphate to be ~0.19 Gmol P yr-1, using data from the same cruises. We compare our results to several recent estimates of N and P deposition to the Atlantic and discuss the likely sources of uncertainty, such as the potential seasonal bias introduced by our sampling, on our climatology.
NASA Astrophysics Data System (ADS)
Erazo, Kalil; Nagarajaiah, Satish
2017-06-01
In this paper an offline approach for output-only Bayesian identification of stochastic nonlinear systems is presented. The approach is based on a re-parameterization of the joint posterior distribution of the parameters that define a postulated state-space stochastic model class. In the re-parameterization the state predictive distribution is included, marginalized, and estimated recursively in a state estimation step using an unscented Kalman filter, bypassing state augmentation as required by existing online methods. In applications expectations of functions of the parameters are of interest, which requires the evaluation of potentially high-dimensional integrals; Markov chain Monte Carlo is adopted to sample the posterior distribution and estimate the expectations. The proposed approach is suitable for nonlinear systems subjected to non-stationary inputs whose realization is unknown, and that are modeled as stochastic processes. Numerical verification and experimental validation examples illustrate the effectiveness and advantages of the approach, including: (i) an increased numerical stability with respect to augmented-state unscented Kalman filtering, avoiding divergence of the estimates when the forcing input is unmeasured; (ii) the ability to handle arbitrary prior and posterior distributions. The experimental validation of the approach is conducted using data from a large-scale structure tested on a shake table. It is shown that the approach is robust to inherent modeling errors in the description of the system and forcing input, providing accurate prediction of the dynamic response when the excitation history is unknown.
Vriens, Dennis; de Geus-Oei, Lioe-Fee; Oyen, Wim J G; Visser, Eric P
2009-12-01
For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible in practice; IDIFs are biased because of partial-volume effects and cannot be used when no large arterial blood pool is in the field of view. We propose a mathematic function, consisting of an initial linear rising activity concentration followed by a triexponential decay, to describe the APTAC. This function was fitted to 80 oncologic patients and verified for 40 different oncologic patients by area-under-the-curve (AUC) comparison, Patlak glucose metabolic rate (MR(glc)) estimation, and therapy response monitoring (Delta MR(glc)). The proposed function was compared with the gold standard (serial arterial sampling) and the IDIF. To determine the free parameters of the function, plasma time-activity curves based on arterial samples in 80 patients were fitted after normalization for administered activity (AA) and initial distribution volume (iDV) of (18)F-FDG. The medians of these free parameters were used for the model. In 40 other patients (20 baseline and 20 follow-up dynamic (18)F-FDG PET scans), this model was validated. The population-based curve, individually calibrated by AA and iDV (APTAC(AA/iDV)), by 1 late arterial sample (APTAC(1 sample)), and by the individual IDIF (APTAC(IDIF)), was compared with the gold standard of serial arterial sampling (APTAC(sampled)) using the AUC. Additionally, these 3 methods of APTAC determination were evaluated with Patlak MR(glc) estimation and with Delta MR(glc) for therapy effects using serial sampling as the gold standard. Excellent individual fits to the function were derived with significantly different decay constants (P < 0.001). Correlations between AUC from APTAC(AA/iDV), APTAC(1 sample), and APTAC(IDIF) with the gold standard (APTAC(sampled)) were 0.880, 0.994, and 0.856, respectively. For MR(glc), these correlations were 0.963, 0.994, and 0.966, respectively. In response monitoring, these correlations were 0.947, 0.982, and 0.949, respectively. Additional scaling by 1 late arterial sample showed a significant improvement (P < 0.001). The fitted input function calibrated for AA and iDV performed similarly to IDIF. Performance improved significantly using 1 late arterial sample. The proposed model can be used when an IDIF is not available or when serial arterial sampling is not feasible.
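The proposed APTAC shape, a linear rise followed by a triexponential decay, can be sketched as follows. The peak time, amplitudes, and decay constants below are hypothetical stand-ins, not the reported population medians; in the described method, the curve would additionally be calibrated by administered activity and initial distribution volume, or by one late arterial sample.

```python
import numpy as np

def aptac(t, t_peak, peak, a, lam):
    """Piecewise arterial plasma curve: linear rise from 0 to `peak`
    at t_peak, then a triexponential decay sum_i a_i*exp(-lam_i*(t-t_peak)),
    rescaled so the curve is continuous at the peak."""
    t = np.asarray(t, dtype=float)
    a, lam = np.asarray(a, float), np.asarray(lam, float)
    rise = peak * np.clip(t / t_peak, 0.0, 1.0)
    decay = (a[:, None] * np.exp(-lam[:, None] * np.clip(t - t_peak, 0.0, None))).sum(0)
    decay *= peak / a.sum()   # continuity at t = t_peak
    return np.where(t < t_peak, rise, decay)

# Hypothetical parameters (minutes and relative amplitudes).
t = np.linspace(0.0, 60.0, 601)
curve = aptac(t, t_peak=0.6, peak=30.0, a=[0.6, 0.3, 0.1],
              lam=[4.0, 0.5, 0.01])
```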
Estimation of the interference coupling into cables within electrically large multiroom structures
NASA Astrophysics Data System (ADS)
Keghie, J.; Kanyou Nana, R.; Schetelig, B.; Potthast, S.; Dickmann, S.
2010-10-01
Communication cables are used to transfer data between components of a system. As a part of the EMC analysis of complex systems, it is necessary to determine which level of interference can be expected at the input of connected devices due to the coupling into the irradiated cable. For electrically large systems consisting of several rooms with cables connecting components located in different rooms, an estimation of the coupled disturbances inside cables using commercial field computation software is often not feasible without several restrictions. In many cases, this is related to the non-availability of the computing memory and processing power needed for the computation. In this paper, we show that, starting from a topological analysis of the entire system, weak coupling paths within the system can be identified. By neglecting these coupling paths and using the transmission line approach, the original system is simplified so that a simpler estimation is possible. Using the example of a system which is composed of two rooms, multiple apertures, and a network cable located in both chambers, it is shown that an estimation of the coupled disturbances due to external electromagnetic sources is feasible with this approach. Starting from an incident electromagnetic field, we determine transfer functions describing the coupling means (apertures, cables). Using these transfer functions and the knowledge of the weak coupling paths above, a decision is taken regarding which paths can be neglected during the estimation. The estimation of the coupling into the cable is then made while taking only paths with strong coupling into account. The remaining part of the wiring harness in areas with weak coupling is represented by its input impedance. A comparison with the original network shows good agreement.
Experiment design for pilot identification in compensatory tracking tasks
NASA Technical Reports Server (NTRS)
Wells, W. R.
1976-01-01
A design criterion for input functions in laboratory tracking tasks resulting in efficient parameter estimation is formulated. The criterion is that the statistical correlations between pairs of parameters be reduced in order to minimize the problem of nonuniqueness in the extraction process. The effectiveness of the method is demonstrated for a lower order dynamic system.
Development of a distributed air pollutant dry deposition modeling framework
Satoshi Hirabayashi; Charles N. Kroll; David J. Nowak
2012-01-01
A distributed air pollutant dry deposition modeling system was developed with a geographic information system (GIS) to enhance the functionality of i-Tree Eco (i-Tree, 2011). With the developed system, temperature, leaf area index (LAI) and air pollutant concentration in a spatially distributed form can be estimated, and based on these and other input variables, dry...
Sbarciog, M; Moreno, J A; Vande Wouwer, A
2014-01-01
This paper presents the estimation of the unknown states and inputs of an anaerobic digestion system characterized by a two-step reaction model. The estimation is based on the measurement of the two substrate concentrations and of the outflow rate of biogas and relies on the use of an observer, consisting of three parts. The first is a generalized super-twisting observer, which estimates a linear combination of the two input concentrations. The second is an asymptotic observer, which provides one of the two biomass concentrations, whereas the third is a super-twisting observer for one of the input concentrations and the second biomass concentration.
NASA Technical Reports Server (NTRS)
Helder, Dennis; Choi, Taeyoung; Rangaswamy, Manjunath
2005-01-01
The spatial characteristics of an imaging system cannot be expressed by a single number or simple statement. However, the Modulation Transfer Function (MTF) is one approach to measure the spatial quality of an imaging system. Basically, MTF is the normalized spatial frequency response of an imaging system. The frequency response of the system can be evaluated by applying an impulse input. The resulting impulse response is termed the Point Spread Function (PSF). This function is a measure of the amount of blurring present in the imaging system and is itself a useful measure of spatial quality. An underlying assumption is that the imaging system is linear and shift-independent. The Fourier transform of the PSF is called the Optical Transfer Function (OTF), and the normalized magnitude of the OTF is the MTF. In addition to using an impulse input, a knife-edge input technique has also been used in this project. The sharp edge exercises an imaging system at all spatial frequencies. The profile of an edge response from an imaging system is called an Edge Spread Function (ESF). Differentiation of the ESF results in a one-dimensional version of the Point Spread Function (PSF). Finally, the MTF can be calculated through use of the Fourier transform of the PSF as stated previously. Every image includes noise to some degree, which makes MTF or PSF estimation more difficult. To avoid the noise effects, many MTF estimation approaches use smooth numerical models. Historically, Gaussian models and Fermi functions were applied to reduce the random noise in the output profiles. The pulse-input method was used to measure the MTF of the Landsat Thematic Mapper (TM) using 8th-order even functions over the San Mateo Bridge in San Francisco, California. Because the bridge width was smaller than the 30-meter ground sample distance (GSD) of the TM, the Nyquist frequency was located before the first zero-crossing point of the sinc function from the Fourier transformation of the bridge pulse. To avoid the zero-crossing points in the frequency domain from a pulse, the pulse width should be less than the width of two pixels (or 2 GSD's), but the short extent of the pulse results in a poor signal-to-noise ratio. Similarly, for a high-resolution satellite imaging system such as Quickbird, the input pulse width was critical because of the zero-crossing points and noise present in the background area. It is important, therefore, that the width of the input pulse be appropriately sized. Finally, the MTF was calculated by taking the ratio between the Fourier transform of the output and the Fourier transform of the input. Regardless of whether the edge, pulse, or impulse target method is used, the orientation of the targets is critical in order to obtain uniformly spaced sub-pixel data points. When the orientation is incorrect, sample data points tend to be located in clusters that result in poor reconstruction of the edge or pulse profiles. Thus, a compromise orientation must be selected so that all spectral bands can be accommodated. This report continues by outlining the objectives in Section 2, procedures followed in Section 3, descriptions of the field campaigns in Section 4, results in Section 5, and a brief summary in Section 6.
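A noiseless sketch of the knife-edge processing chain (ESF to LSF to OTF to MTF) is given below; real targets additionally require the sub-pixel sampling and orientation handling discussed above, and the Gaussian-blurred edge here is an assumption used only to exercise the code.

```python
import numpy as np
from scipy.special import erf

def mtf_from_edge(esf, dx=1.0):
    """Knife-edge method: differentiate the edge spread function to get
    the line spread function, Fourier transform it, and normalize the
    magnitude at zero frequency to obtain the MTF."""
    lsf = np.gradient(esf, dx)                 # ESF -> LSF
    lsf = lsf * np.hanning(len(lsf))           # taper to suppress noise
    otf = np.fft.rfft(lsf)                     # LSF -> OTF
    mtf = np.abs(otf) / np.abs(otf[0])         # normalized magnitude
    freqs = np.fft.rfftfreq(len(lsf), d=dx)    # cycles per unit distance
    return freqs, mtf

# Hypothetical edge: a step blurred by a unit-sigma Gaussian PSF.
x = np.linspace(-8.0, 8.0, 257)
esf = 0.5 * (1.0 + erf(x / np.sqrt(2.0)))
freqs, mtf = mtf_from_edge(esf, dx=x[1] - x[0])
```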
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
NASA Astrophysics Data System (ADS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-06-01
Mathematical models provide a mathematical description of neuron activity, which can be used to better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulation, the estimated input parameters differ markedly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
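To make the first step concrete, the sketch below fits the Gamma spiking characteristics (shape and scale) to inter-spike intervals; the paper estimates them with a state-space method, so this moment-based fit is only a simplified stand-in, and the spike data are simulated.

import numpy as np

rng = np.random.default_rng(0)
shape_true, scale_true = 2.5, 0.02                 # illustrative ground truth (s)
isi = rng.gamma(shape_true, scale_true, size=500)  # simulated inter-spike intervals

# Method-of-moments estimates of the two Gamma spiking characteristics.
m, v = isi.mean(), isi.var()
shape_hat = m**2 / v     # shape parameter
scale_hat = v / m        # scale parameter
print(shape_hat, scale_hat)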
Forecasting overhaul or replacement intervals based on estimated system failure intensity
NASA Astrophysics Data System (ADS)
Gannon, James M.
1994-12-01
System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and a Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLEs) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLEs.
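A compact sketch of this approach under the common power-law form of the Weibull intensity, u(t) = λβt^(β-1); the failure times are invented for illustration, and the MLEs shown are the standard time-truncated ones.

import math

T = 5.0                                  # total observation period (years)
times = [0.8, 1.9, 2.4, 3.1, 3.3, 4.6]   # illustrative failure times in (0, T]
n = len(times)

# Maximum likelihood estimates for the ROCOF u(t) = lam * beta * t**(beta - 1).
beta = n / sum(math.log(T / t) for t in times)
lam = n / T**beta

# Expected failures in a future usage interval [a, b]: the integral of the
# ROCOF, lam * (b**beta - a**beta), to which a cost of failure can be attached.
a, b = 5.0, 6.0
print(beta, lam, lam * (b**beta - a**beta))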
Spatially resolved assessment of hepatic function using 99mTc-IDA SPECT
Wang, Hesheng; Cao, Yue
2013-01-01
Purpose: 99mTc-iminodiacetic acid (IDA) hepatobiliary imaging is usually quantified for hepatic function over the entire liver or regions of interest (ROIs) in the liver. The authors presented a method to estimate the hepatic extraction fraction (HEF) voxel-by-voxel from single-photon emission computed tomography (SPECT)/CT with a 99mTc-labeled IDA agent, mebrofenin, and evaluated the spatially resolved HEF measurements against an independent physiological measurement. Methods: Fourteen patients with intrahepatic cancers were treated with radiation therapy (RT) and imaged by 99mTc-mebrofenin SPECT before and 1 month after RT. The dynamic SPECT volumes had a resolution of 3.9 × 3.9 × 2.5 mm³. Throughout the whole liver, comprising approximately 50 000 voxels, voxelwise HEF estimates were computed and compared using either an arterial input function (AIF) from the heart or a vascular input function (VIF) from the spleen. The correlation between the mean of the HEFs over the nontumor liver tissue and the overall liver function measured by indocyanine green clearance half-time (T1/2) was assessed. Variation of the voxelwise estimation was evaluated in ROIs drawn in relatively homogeneous regions of the livers. The authors also examined effects of the time-range parameter on the voxelwise HEF quantification. Results: The mean of the HEFs over the liver estimated using the AIF correlated significantly with the physiological measurement T1/2 (r = 0.52, p = 0.0004), and the correlation was greatly improved by using the VIF (r = 0.79, p < 0.0001). The time range chosen for the retention phase did not lead to a significant difference in the means of the HEFs in the ROIs. Using the VIF and a retention-phase time range of 7–30 min, the relative variation of the voxelwise HEF in the ROIs was 10% ± 6% of the respective mean HEF. Conclusions: The voxelwise HEF derived from 99mTc-IDA SPECT by deconvolution analysis is feasible for assessing the spatial distribution of hepatic function in the liver. PMID:24007177
A soft-computing methodology for noninvasive time-spatial temperature estimation.
Teixeira, César A; Ruano, Maria Graça; Ruano, António E; Pereira, Wagner C A
2008-02-01
The safe and effective application of thermal therapies is restricted by the lack of reliable noninvasive temperature estimators. In this paper, the temporal echo-shifts of backscattered ultrasound signals, collected from a gel-based phantom, were tracked and, together with past temperature values, used as input information for radial basis function neural networks. The phantom was heated using a piston-like therapeutic ultrasound transducer. The neural models were trained to estimate the temperature at different intensities and at points arranged across the therapeutic transducer's radial line (60 mm apart from the transducer face). Model inputs, as well as the number of neurons, were selected using a multiobjective genetic algorithm (MOGA). The best attained models present, on average, a maximum absolute error of less than 0.5 degrees C, which is regarded as the borderline between a reliable and an unreliable estimator in hyperthermia/diathermia. In order to test the spatial generalization capacity, the best models were tested using spatial points not yet assessed, and some of them presented a maximum absolute error inferior to 0.5 degrees C, being "elected" as the best models. It should also be stressed that these best models have low implementation complexity, as desired for real-time applications.
Statistics, Uncertainty, and Transmitted Variation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, Joanne Roth
2014-11-05
The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.
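A minimal sketch of the idea: to first order (the delta method), the variance transmitted through a smooth response function f is f'(μ)²·Var(X); the quadratic response and the numbers are assumptions for illustration, checked against Monte Carlo.

import numpy as np

def response(x):                 # illustrative response function
    return 3.0 * x**2 + 2.0 * x

mu, sigma = 1.5, 0.2             # mean and standard deviation of the input

# First-order transmitted variance: Var[f(X)] ~ f'(mu)^2 * Var(X).
h = 1e-6
slope = (response(mu + h) - response(mu - h)) / (2 * h)  # numerical f'(mu)
var_transmitted = slope**2 * sigma**2

# Monte Carlo check of the approximation.
x = np.random.default_rng(1).normal(mu, sigma, 100_000)
print(var_transmitted, response(x).var())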
Measurement of regional cerebral blood flow with copper-62-PTSM and a three-compartment model.
Okazawa, H; Yonekura, Y; Fujibayashi, Y; Mukai, T; Nishizawa, S; Magata, Y; Ishizu, K; Tamaki, N; Konishi, J
1996-07-01
We quantitatively evaluated 62Cu-labeled pyruvaldehyde bis(N4-methylthiosemicarbazone) copper II (62Cu-PTSM) as a brain perfusion tracer for positron emission tomography (PET). For quantitative measurement, the octanol extraction method is needed to correct for arterial radioactivity in estimating the lipophilic input function, but the procedure is not practical for clinical studies. To measure regional cerebral blood flow (rCBF) by 62Cu-PTSM with simple arterial blood sampling, a standard curve of the octanol extraction ratio and a three-compartment model were applied. We performed both 15O-labeled water PET and 62Cu-PTSM PET with dynamic data acquisition and arterial sampling in six subjects. Data obtained in 10 subjects studied previously were used for the standard octanol extraction curve. Arterial activity was measured and corrected to obtain the true input function using the standard curve. Graphical analysis (Gjedde-Patlak plot), with the data for each subject fitted by a straight regression line, suggested that 62Cu-PTSM can be analyzed by the three-compartment model with negligible K4. Using this model, K1-K3 were estimated from curve fitting of the cerebral time-activity curve and the corrected input function. The fractional uptake of 62Cu-PTSM was corrected to rCBF with the individual extraction at steady state calculated from K1-K3. The influx rates (Ki) obtained from the three-compartment model and graphical analyses were compared for validation of the model. A comparison of rCBF values obtained from the 62Cu-PTSM and 15O-water studies demonstrated excellent correlation. The results suggest the potential feasibility of quantitation of cerebral perfusion with 62Cu-PTSM accompanied by dynamic PET and simple arterial sampling.
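The Gjedde-Patlak graphical analysis mentioned above reduces to a straight-line fit whose slope is the influx rate Ki; a minimal sketch with invented tissue and input curves (a real analysis would use the octanol-corrected input function):

import numpy as np

t = np.linspace(0.1, 30.0, 60)                 # minutes
cp = 5.0 * np.exp(-0.3 * t) + 0.5              # illustrative input function
Ki_true, V0 = 0.04, 0.3
int_cp = np.cumsum(cp) * (t[1] - t[0])         # running integral of the input
ct = Ki_true * int_cp + V0 * cp                # irreversible-uptake tissue curve

# Patlak transformation: ct/cp vs (integral of cp)/cp is linear, slope = Ki.
Ki_hat, intercept = np.polyfit(int_cp / cp, ct / cp, 1)
print(Ki_hat)                                  # recovers ~0.04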
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
NASA Technical Reports Server (NTRS)
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
Kawada, Toru; Zheng, Can; Yanagiya, Yusuke; Uemura, Kazunori; Miyamoto, Tadayoshi; Inagaki, Masashi; Shishido, Toshiaki; Sugimachi, Masaru; Sunagawa, Kenji
2002-03-01
A transfer function from baroreceptor pressure input to sympathetic nerve activity (SNA) shows derivative characteristics in the frequency range below 0.8 Hz in rabbits. These derivative characteristics contribute to a quick and stable arterial pressure (AP) regulation. However, if the derivative characteristics hold up to heart rate frequency, the pulsatile pressure input will yield a markedly augmented SNA signal. Such a signal would saturate the baroreflex signal transduction, thereby disabling the baroreflex regulation of AP. We hypothesized that the transfer gain at heart rate frequency would be much smaller than that predicted from extrapolating the derivative characteristics. In anesthetized rabbits (n = 6), we estimated the neural arc transfer function in the frequency range up to 10 Hz. The transfer gain was lost at a rate of -20 dB/decade when the input frequency exceeded 0.8 Hz. A numerical simulation indicated that the high-cut characteristics above 0.8 Hz were effective to attenuate the pulsatile signal and preserve the open-loop gain when the baroreflex dynamic range was finite.
Hayashi, Ryusuke; Watanabe, Osamu; Yokoyama, Hiroki; Nishida, Shin'ya
2017-06-01
Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses, are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods. GLM, however, relies on iterative optimization algorithms, and its computational cost becomes very high when estimating the nonlinear parameters of a large-scale system using large volumes of data. In this paper, we introduce a new analytical method for identifying a nonlinear system without relying on iterative calculations and yet also not requiring any specific stimulus distribution. We demonstrate the results of numerical simulations, showing that our noniterative method is as accurate as GLM in estimating nonlinear parameters in many cases and outperforms conventional, spike-triggered data analyses. As an example of the application of our method to actual psychophysical data, we investigated how different spatiotemporal frequency channels interact in assessments of motion direction. The nonlinear interaction estimated by our method was consistent with findings from previous vision studies and supports the validity of our method for nonlinear system identification.
NASA Astrophysics Data System (ADS)
Riabkov, Dmitri
Compartment modeling of dynamic medical image data implies that the concentration of the tracer over time in a particular region of the organ of interest is well-modeled as a convolution of the tissue response with the tracer concentration in the blood stream. The tissue response is different for different tissues while the blood input is assumed to be the same for different tissues. The kinetic parameters characterizing the tissue responses can be estimated by blind identification methods. These algorithms use the simultaneous measurements of concentration in separate regions of the organ; if the regions have different responses, the measurement of the blood input function may not be required. In this work it is shown that the blind identification problem has a unique solution for two-compartment model tissue response. For two-compartment model tissue responses in dynamic cardiac MRI imaging conditions with gadolinium-DTPA contrast agent, three blind identification algorithms are analyzed here to assess their utility: Eigenvector-based Algorithm for Multichannel Blind Deconvolution (EVAM), Cross Relations (CR), and Iterative Quadratic Maximum Likelihood (IQML). Comparisons of accuracy with conventional (not blind) identification techniques where the blood input is known are made as well. The statistical accuracies of estimation for the three methods are evaluated and compared for multiple parameter sets. The results show that the IQML method gives more accurate estimates than the other two blind identification methods. A proof is presented here that three-compartment model blind identification is not unique in the case of only two regions. It is shown that it is likely unique for the case of more than two regions, but this has not been proved analytically. For the three-compartment model the tissue responses in dynamic FDG PET imaging conditions are analyzed with the blind identification algorithms EVAM and Separable variables Least Squares (SLS). A method of identification that assumes that FDG blood input in the brain can be modeled as a function of time and several parameters (IFM) is analyzed also. Nonuniform sampling SLS (NSLS) is developed due to the rapid change of the FDG concentration in the blood during the early postinjection stage. Comparisons of accuracy of EVAM, SLS, NSLS and IFM identification techniques are made.
Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B
2018-06-01
To propose a simple method to correct the vascular input function (VIF) for inflow effects, and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate the improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve the accuracy of VIF estimation and pharmacokinetic fitting. In the animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived from uncorrected VIFs showed no significant changes. The proposed correction method improves the accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising for improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Brvar, Nina; Mateović-Rojnik, Tatjana; Grabnar, Iztok
2014-10-01
This study aimed to develop a population pharmacokinetic model for tramadol that combines different input rates with disposition characteristics. Data used for the analysis were pooled from two phase I bioavailability studies with immediate (IR) and prolonged release (PR) formulations in healthy volunteers. Tramadol plasma concentration-time data were described by an inverse Gaussian function to model the complete input process linked to a two-compartment disposition model with first-order elimination. Although polymorphic CYP2D6 appears to be a major enzyme involved in the metabolism of tramadol, application of a mixture model to test the assumption of two and three subpopulations did not reveal any improvement of the model. The final model estimated parameters with reasonable precision and was able to estimate the interindividual variability of all parameters except for the relative bioavailability of PR vs. IR formulation. Validity of the model was further tested using the nonparametric bootstrap approach. Finally, the model was applied to assess absorption kinetics of tramadol and predict steady-state pharmacokinetics following administration of both types of formulations. For both formulations, the final model yielded a stable estimate of the absorption time profiles. Steady-state simulation supports switching of patients from IR to PR formulation. Copyright © 2014 Elsevier B.V. All rights reserved.
Development of advanced techniques for rotorcraft state estimation and parameter identification
NASA Technical Reports Server (NTRS)
Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.
1980-01-01
An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm, which estimates states and sensor errors from error-corrupted data; gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters, together with estimates of the variance of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed, with examples applied to both flight and simulated data.
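Element (1) is, at its core, a Kalman filter pass over error-corrupted data; a minimal scalar sketch (random-walk state observed in noise) is given below, with all noise levels illustrative rather than taken from the rotorcraft application.

import numpy as np

rng = np.random.default_rng(2)
q, r = 0.01, 0.25                        # process and measurement noise variances
x_true = np.cumsum(rng.normal(0.0, q**0.5, 200))   # random-walk state
z = x_true + rng.normal(0.0, r**0.5, 200)          # error-corrupted measurements

x_hat, p = 0.0, 1.0                      # state estimate and its variance
for zk in z:
    p = p + q                            # predict (random-walk dynamics)
    k = p / (p + r)                      # Kalman gain
    x_hat = x_hat + k * (zk - x_hat)     # update with the measurement residual
    p = (1.0 - k) * p
print(x_hat, x_true[-1])                 # final estimate vs. truth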
NASA Astrophysics Data System (ADS)
Nelson, D. J.
2007-09-01
In the basic correlation process, a sequence of time-lag-indexed correlation coefficients is computed as the inner or dot product of segments of two signals. The time lag(s) for which the magnitude of the correlation coefficient sequence is maximized is the estimated relative time delay of the two signals. For discrete sampled signals, the delay estimated in this manner is quantized with the same relative accuracy as the clock used in sampling the signals. In addition, the correlation coefficients are real if the input signals are real. Many methods have been proposed, with some success, to estimate signal delay to greater accuracy than the sample interval of the digitizer clock. These methods include interpolation of the correlation coefficients, estimation of the signal delay from the group delay function, and beam-forming techniques such as the MUSIC algorithm. For spectral estimation, techniques based on phase differentiation have been popular, but these techniques have apparently not been applied to the correlation problem. We propose a phase-based delay estimation method (PBDEM) based on the phase of the correlation function that provides a significant improvement in the accuracy of time delay estimation. In this process, the standard correlation function is first calculated. A time-lag error function is then calculated from the correlation phase and is used to interpolate the correlation function. The signal delay is shown to be accurately estimated as the zero crossing of the correlation phase near the index of the peak correlation magnitude. This process is nearly as fast as the conventional correlation function on which it is based. For real-valued signals, a simple modification is provided, which results in the same correlation accuracy as is obtained for complex-valued signals.
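The sketch below conveys the general idea of refining a correlation delay estimate with phase information, here via a weighted fit of the cross-spectrum phase slope; it is a conceptual stand-in under that assumption, not the exact PBDEM algorithm.

import numpy as np

rng = np.random.default_rng(3)
n, true_delay = 1024, 3.37                       # delay in fractional samples
s = rng.normal(size=n)
freqs = np.fft.rfftfreq(n)
S = np.fft.rfft(s)
y = np.fft.irfft(S * np.exp(-2j * np.pi * freqs * true_delay), n)  # delayed copy

# Cross-spectrum phase grows linearly with frequency for a pure delay.
cross = S * np.conj(np.fft.rfft(y))              # phase = 2*pi*f*delay
phase = np.unwrap(np.angle(cross))[:-1]          # drop the Nyquist bin
f, w = freqs[:-1], np.abs(cross)[:-1]            # magnitude-weighted fit
slope = np.sum(w * f * phase) / np.sum(w * f**2)
print(slope / (2 * np.pi))                       # ~3.37, sub-sample accuracy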
Determination of the Time-Space Magnetic Correlation Functions in the Solar Wind
NASA Astrophysics Data System (ADS)
Weygand, J. M.; Matthaeus, W. H.; Kivelson, M.; Dasso, S.
2013-12-01
Magnetic field data from many different intervals and 7 different solar wind spacecraft are employed to estimate the scale-dependent time decorrelation function in the interplanetary magnetic field in both the slow and fast solar wind. This estimation requires correlations varying with both space and time lags. The two-point correlation function with no time lag is determined by correlating time series data from multiple spacecraft separated in space and, for complete coverage of length scales, relies on many intervals with different spacecraft spatial separations. In addition, we employ single-spacecraft time-lagged correlations and two-spacecraft time-lagged correlations to access different spatial and temporal correlation data. Combining these data sets gives estimates of the scale-dependent time decorrelation function, which in principle tells us how rapidly time decorrelation occurs at a given wavelength. For static fields the scale-dependent time decorrelation function is trivially unity, but in turbulence the nonlinear cascade process induces time decorrelation at a given length scale that occurs more rapidly with decreasing scale. The scale-dependent time decorrelation function is valuable input to theories as well as various applications such as scattering, transport, and the study of predictability. It is also a fundamental element of formal turbulence theory. Our results are an extension of the Eulerian correlation functions estimated in Matthaeus et al. [2010] and Weygand et al. [2012; 2013].
The optimal input optical pulse shape for the self-phase modulation based chirp generator
NASA Astrophysics Data System (ADS)
Zachinyaev, Yuriy; Rumyantsev, Konstantin
2018-04-01
This work aims to obtain the optimal shape of the input optical pulse for the proper functioning of a self-phase-modulation-based chirp generator, allowing high values of chirp frequency deviation to be achieved. During the research, the structure of the device, which is based on the self-phase modulation effect, was analyzed. The influence of the input optical pulse shape of the transmitting optical module on the chirp frequency deviation was studied. The relationship between the frequency deviation of the generated chirp and the frequency linearity was also estimated for three options for implementation of the pulse shape. The results of this research are related to the development of the theory of radio processors based on fiber-optic structures and can be used in radar, secure communications, geolocation, and tomography.
iGeoT v1.0: Automatic Parameter Estimation for Multicomponent Geothermometry, User's Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spycher, Nicolas; Finsterle, Stefan
GeoT implements the multicomponent geothermometry method developed by Reed and Spycher [1984] into a stand-alone computer program to ease the application of this method and to improve the prediction of geothermal reservoir temperatures using full and integrated chemical analyses of geothermal fluids. Reservoir temperatures are estimated from statistical analyses of mineral saturation indices computed as a function of temperature. The reconstruction of the deep geothermal fluid compositions and the geothermometry computations are all implemented in the same computer program, allowing unknown or poorly constrained input parameters to be estimated by numerical optimization. This integrated geothermometry approach presents advantages over classical geothermometers for fluids that have not fully equilibrated with reservoir minerals and/or that have been subject to processes such as dilution and gas loss. This manual contains installation instructions for iGeoT, and briefly describes the input formats needed to run iGeoT in Automatic or Expert Mode. An example is also provided to demonstrate the use of iGeoT.
Inferring Nonlinear Neuronal Computation Based on Physiologically Plausible Inputs
McFarland, James M.; Cui, Yuwei; Butts, Daniel A.
2013-01-01
The computation represented by a sensory neuron's response to stimuli is constructed from an array of physiological processes both belonging to that neuron and inherited from its inputs. Although many of these physiological processes are known to be nonlinear, linear approximations are commonly used to describe the stimulus selectivity of sensory neurons (i.e., linear receptive fields). Here we present an approach for modeling sensory processing, termed the Nonlinear Input Model (NIM), which is based on the hypothesis that the dominant nonlinearities imposed by physiological mechanisms arise from rectification of a neuron's inputs. Incorporating such ‘upstream nonlinearities’ within the standard linear-nonlinear (LN) cascade modeling structure implicitly allows for the identification of multiple stimulus features driving a neuron's response, which become directly interpretable as either excitatory or inhibitory. Because its form is analogous to an integrate-and-fire neuron receiving excitatory and inhibitory inputs, model fitting can be guided by prior knowledge about the inputs to a given neuron, and elements of the resulting model can often result in specific physiological predictions. Furthermore, by providing an explicit probabilistic model with a relatively simple nonlinear structure, its parameters can be efficiently optimized and appropriately regularized. Parameter estimation is robust and efficient even with large numbers of model components and in the context of high-dimensional stimuli with complex statistical structure (e.g. natural stimuli). We describe detailed methods for estimating the model parameters, and illustrate the advantages of the NIM using a range of example sensory neurons in the visual and auditory systems. We thus present a modeling framework that can capture a broad range of nonlinear response functions while providing physiologically interpretable descriptions of neural computation. PMID:23874185
State-space model with deep learning for functional dynamics estimation in resting-state fMRI.
Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang
2016-04-01
Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, which is one of the emerging issues along with the network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred from observations statistically. By building a generative model with an HMM, we estimate the likelihood of the input features of rs-fMRI as belonging to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. In order to validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared with state-of-the-art methods in the literature. We also analyzed the functional networks learned by DAE, estimated the functional connectivities by decoding hidden states in HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach. Copyright © 2016 Elsevier Inc. All rights reserved.
Tradeoffs in the Use of Value-Added Estimates of Teacher Effectiveness by School Districts
ERIC Educational Resources Information Center
Baxter, Andrew David
2011-01-01
A new capacity to track the inputs and outcomes of individual students' education production function has spurred a growing number of school districts to attempt to measure the productivity of their teachers in terms of student outcomes. The use of these value-added measures of teacher effectiveness is at the center of current education reform.…
ERIC Educational Resources Information Center
Wu, Mei-Jiun
2013-01-01
Background/Context: Recently emerged with the implementation of the California's Public Schools Accountability Act of 1999 and the NCLB Act of 2001 is an increase in the number of education production function studies estimating the relationship between educational inputs and APIs. While the majority of past research on California school…
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.; Fowler, T. R.
1977-01-01
Methods for the reduction of remotely sensed data and its application in hydrologic land use assessment, surface water inventory, and soil property studies are presented. LANDSAT data is used to provide quantitative parameters and coefficients to construct watershed transfer functions for a hydrologic planning model aimed at estimating peak outflow from rainfall inputs.
Enhancement of regional wet deposition estimates based on modeled precipitation inputs
James A. Lynch; Jeffery W. Grimm; Edward S. Corbett
1996-01-01
Application of a variety of two-dimensional interpolation algorithms to precipitation chemistry data gathered at scattered monitoring sites, for the purpose of estimating precipitation-borne ionic inputs for specific points or regions, has failed to produce accurate estimates. The accuracy of these estimates is particularly poor in areas of high topographic relief....
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.
1985-01-01
The Improved Price Estimation Guidelines (IPEG4) program provides a comparatively simple, yet relatively accurate, estimate of the price of a manufactured product. IPEG4 processes user-supplied input data to determine an estimate of the price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on an industry-wide or process-wide basis.
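In spirit, this kind of estimate reduces annualized cost inputs to a price per unit of production; the cost categories below follow the abstract, but the figures and the simple summation are illustrative assumptions, not IPEG4's actual coefficients.

# Illustrative annualized cost inputs (dollars per year unless noted).
equipment_cost    = 120_000.0   # amortized equipment cost
space_cost        = 15_000.0    # cost of required floor space
labor_cost        = 200_000.0   # direct labor
materials_cost    = 80_000.0    # materials and supplies
utility_cost      = 25_000.0    # utility expenses
production_volume = 50_000      # units produced per year

total_annual_cost = (equipment_cost + space_cost + labor_cost
                     + materials_cost + utility_cost)
print(f"estimated price: ${total_annual_cost / production_volume:.2f}/unit")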
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duesbery, M.S.
1993-11-30
This program aims at improving current methods of lifetime assessment by building in the characteristics of the micro-mechanisms known to be responsible for damage and failure. The broad approach entails the integration and, where necessary, augmentation of the micro-scale research results currently available in the literature into a macro-scale model with predictive capability. In more detail, the program will develop a set of hierarchically structured models at different length scales, from atomic to macroscopic, at each level taking as parametric input the results of the model at the next smaller scale. In this way the known microscopic properties can be transported by systematic procedures to the unknown macro-scale region. It may not be possible to eliminate empiricism completely, because some of the quantities involved cannot yet be estimated to the required degree of precision. In this case the aim will be at least to eliminate functional empiricism. Restriction of empiricism to the choice of parameters to be input to known functional forms permits some confidence in extrapolation procedures and has the advantage that the models can readily be updated as better estimates of the parameters become available.
Orientation tuning of binocular summation: a comparison of colour to achromatic contrast
Gheiratmand, Mina; Cherniawsky, Avital S.; Mullen, Kathy T.
2016-01-01
A key function of the primary visual cortex is to combine the input from the two eyes into a unified binocular percept. At low, near threshold, contrasts a process of summation occurs if the visual inputs from the two eyes are similar. Here we measure the orientation tuning of binocular summation for chromatic and equivalent achromatic contrast. We derive estimates of orientation tuning by measuring binocular summation as a function of the orientation difference between two sinusoidal gratings presented dichoptically to different eyes. We then use a model to estimate the orientation bandwidth of the neural detectors underlying the binocular combination. We find that orientation bandwidths are similar for chromatic and achromatic stimuli at both low (0.375 c/deg) and mid (1.5 c/deg) spatial frequencies, with an overall average of 29 ± 3 degs (HWHH, s.e.m). This effect occurs despite the overall greater binocular summation found for the low spatial frequency chromatic stimuli. These results suggest that similar, oriented processes underlie both chromatic and achromatic binocular contrast combination. The non-oriented detection process found in colour vision at low spatial frequencies under monocular viewing is not evident at the binocular combination stage. PMID:27168119
Analysis of fMRI data using noise-diffusion network models: a new covariance-coding perspective.
Gilson, Matthieu
2018-04-01
Since the middle of the 1990s, studies of resting-state fMRI/BOLD data have explored the correlation patterns of activity across the whole brain, which is referred to as functional connectivity (FC). Among the many methods that have been developed to interpret FC, a recently proposed model-based approach describes the propagation of fluctuating BOLD activity within the recurrently connected brain network by inferring the effective connectivity (EC). In this model, EC quantifies the strengths of directional interactions between brain regions, viewed from the proxy of BOLD activity. In addition, the tuning procedure for the model provides estimates for the local variability (input variances) to explain how the observed FC is generated. Generalizing, the network dynamics can be studied in the context of an input-output mapping (determined by EC) for the second-order statistics of fluctuating nodal activities. The present paper focuses on the following detection paradigm: observing output covariances, how discriminative is the (estimated) network model with respect to various input covariance patterns? An application with the model fitted to experimental fMRI data (movie viewing versus resting state) illustrates that changes in local variability and changes in brain coordination go hand in hand.
Aitkenhead, Matt J; Black, Helaina I J
2018-02-01
Using the International Centre for Research in Agroforestry-International Soil Reference and Information Centre (ICRAF-ISRIC) global soil spectroscopy database, models were developed to estimate a number of soil variables using different input data types. These input types included: (1) site data only; (2) visible-near-infrared (Vis-NIR) diffuse reflectance spectroscopy only; (3) combined site and Vis-NIR data; (4) red-green-blue (RGB) color data only; and (5) combined site and RGB color data. The models produced variable estimation accuracy, with RGB only being generally worst and spectroscopy plus site being best. However, we showed that for certain variables, estimation accuracy levels achieved with the "site plus RGB input data" were sufficiently good to provide useful estimates (r² > 0.7). These included major elements (Ca, Si, Al, Fe), organic carbon, and cation exchange capacity. Estimates for bulk density, the carbon-to-nitrogen ratio (C/N), and P were moderately good, but K was not well estimated using this model type. For the "spectra plus site" model, many more variables were well estimated, including many that are important indicators for agricultural productivity and soil health. Sum of cations, electrical conductivity, Si, Ca, and Al oxides, and the C/N ratio were estimated using this approach with r² values > 0.9. This work provides a mechanism for identifying the cost-effectiveness of using different model input data, with associated costs, for estimating soil variables to required levels of accuracy.
Estimating wheat and maize daily evapotranspiration using artificial neural network
NASA Astrophysics Data System (ADS)
Abrishami, Nazanin; Sepaskhah, Ali Reza; Shahrokhnia, Mohammad Hossein
2018-02-01
In this research, an artificial neural network (ANN) is used for estimating wheat and maize daily standard evapotranspiration. Ten ANN models with different structures were designed for each crop. Daily climatic data [maximum temperature (Tmax), minimum temperature (Tmin), average temperature (Tave), maximum relative humidity (RHmax), minimum relative humidity (RHmin), average relative humidity (RHave), wind speed (U2), sunshine hours (n), net radiation (Rn)], leaf area index (LAI), and plant height (h) were used as inputs. For five of the ten structures, the evapotranspiration (ETc) values calculated by the equation ETc = ET0 × Kc (ET0 from the Penman-Monteith equation and Kc from FAO-56; ANNC) were used as outputs, and for the other five structures, the ETc values measured by a weighing lysimeter (ANNM) were used as outputs. In all structures, a feed-forward multiple-layer network with one or two hidden layers, a sigmoid transfer function, and the BR or LM training algorithm was used. The preferred network was selected based on various statistical criteria. The results showed the suitable capability and acceptable accuracy of the ANNs, particularly those having two hidden layers, in estimating daily evapotranspiration. The best model for estimating maize daily evapotranspiration is «M»ANN1C (8-4-2-1), with Tmax, Tmin, RHmax, RHmin, U2, n, LAI, and h as input data and the LM training rule; its statistical parameters (NRMSE, d, and R²) are 0.178, 0.980, and 0.982, respectively. The best model for estimating wheat daily evapotranspiration is «W»ANN5C (5-2-3-1), with Tmax, Tmin, Rn, LAI, and h as input data and the LM training rule; its statistical parameters (NRMSE, d, and R²) are 0.108, 0.987, and 0.981, respectively. In addition, if the calculated ETc is used as the output of the network for both wheat and maize, more accurate estimation is obtained. Therefore, ANN is a suitable method for estimating the evapotranspiration of wheat and maize.
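A sketch of the kind of network described (feed-forward, two hidden layers, sigmoid transfer). scikit-learn's MLPRegressor is used here as an accessible stand-in; note it trains with lbfgs/adam rather than the LM or BR rules in the paper, and the data rows are random placeholders, not lysimeter measurements.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
# Placeholder rows of the eight maize-model inputs [Tmax, Tmin, RHmax, RHmin, U2, n, LAI, h].
X = rng.uniform(size=(300, 8))
y = X @ rng.uniform(size=8) + 0.1 * rng.normal(size=300)   # stand-in ETc target

# Feed-forward net with two hidden layers and sigmoid transfer functions,
# mirroring the 8-4-2-1 structure reported for the maize model.
model = MLPRegressor(hidden_layer_sizes=(4, 2), activation="logistic",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X[:240], y[:240])
print(model.score(X[240:], y[240:]))                        # R^2 on held-out days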
Distributed estimation for adaptive sensor selection in wireless sensor networks
NASA Astrophysics Data System (ADS)
Mahmoud, Magdi S.; Hassan Hamid, Matasm M.
2014-05-01
Wireless sensor networks (WSNs) are usually deployed for monitoring systems with the distributed detection and estimation of sensors. Sensor selection in WSNs is considered for target tracking. A distributed estimation scenario is considered based on the extended information filter. A cost function using the geometrical dilution of precision measure is derived for active sensor selection. A consensus-based estimation method is proposed in this paper for heterogeneous WSNs with two types of sensors. The convergence properties of the proposed estimators are analyzed under time-varying inputs. Accordingly, a new adaptive sensor selection (ASS) algorithm is presented in which the number of active sensors is adaptively determined based on the absolute local innovations vector. Simulation results show that the tracking accuracy of the ASS is comparable to that of the other algorithms.
Variable input observer for state estimation of high-rate dynamics
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Cao, Liang; Laflamme, Simon; Dodson, Jacob
2017-04-01
High-rate systems operating in the 10 μs to 10 ms timescale are likely to experience damaging effects due to rapid environmental changes (e.g., turbulence, ballistic impact). Some of these systems could benefit from real-time state estimation to enable their full potential. Examples of such systems include blast mitigation strategies, automotive airbag technologies, and hypersonic vehicles. Particular challenges in high-rate state estimation include: 1) complex time-varying nonlinearities of the system (e.g., noise, uncertainty, and disturbance); 2) rapid environmental changes; and 3) the requirement of a high convergence rate. Here, we propose using a Variable Input Observer (VIO) concept to vary the input space as the event unfolds. When systems experience high-rate dynamics, rapid changes in the system occur. To investigate the VIO's potential, a VIO-based neuro-observer is constructed and studied using experimental data collected from a laboratory impact test. Results demonstrate that the input space is unique to different impact conditions, and that adjusting the input space throughout the dynamic event produces better estimations than using a traditional fixed input space strategy.
On the distinguishability of HRF models in fMRI.
Rosa, Paulo N; Figueiredo, Patricia; Silvestre, Carlos J
2015-01-01
Modeling the Hemodynamic Response Function (HRF) is a critical step in fMRI studies of brain activity, and it is often desirable to estimate HRF parameters with physiological interpretability. A biophysically informed model of the HRF can be described by a non-linear time-invariant dynamic system. However, the identification of this dynamic system may leave much uncertainty on the exact values of the parameters. Moreover, the high noise levels in the data may hinder the model estimation task. In this context, the estimation of the HRF may be seen as a problem of model falsification or invalidation, where we are interested in distinguishing among a set of eligible models of dynamic systems. Here, we propose a systematic tool to determine the distinguishability among a set of physiologically plausible HRF models. The concept of absolutely input-distinguishable systems is introduced and applied to a biophysically informed HRF model, by exploiting the structure of the underlying non-linear dynamic system. A strategy to model uncertainty in the input time-delay and magnitude is developed and its impact on the distinguishability of two physiologically plausible HRF models is assessed, in terms of the maximum noise amplitude above which it is not possible to guarantee the falsification of one model in relation to another. Finally, a methodology is proposed for the choice of the input sequence, or experimental paradigm, that maximizes the distinguishability of the HRF models under investigation. The proposed approach may be used to evaluate the performance of HRF model estimation techniques from fMRI data.
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-08-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision-making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps, i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano. This challenging issue is generally tackled using probabilistic methods that calculate a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source Geographic Information System Quantum GIS, that is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the user to select an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input datasets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is shown here through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
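The core PDF computation in such tools is a two-dimensional Gaussian kernel density estimate over past vent locations; a self-contained sketch is below (not QVAST's actual code), with coordinates and bandwidth invented for illustration.

import numpy as np

vents = np.array([[2.0, 3.0], [2.5, 2.6], [4.0, 1.0]])   # past vent locations (km)
h = 0.8                                                   # kernel bandwidth (km)

# Evaluate the Gaussian-kernel PDF on a regular grid covering the field.
gx, gy = np.meshgrid(np.linspace(0, 6, 121), np.linspace(0, 6, 121))
pdf = np.zeros_like(gx)
for vx, vy in vents:
    pdf += np.exp(-((gx - vx)**2 + (gy - vy)**2) / (2 * h**2))
pdf /= 2 * np.pi * h**2 * len(vents)                      # normalize the mixture

print(pdf.sum() * (6 / 120)**2)                           # ~1: integrates to unity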
Weiss, Michael
2017-06-01
Appropriate model selection is important in fitting oral concentration-time data due to the complex character of the absorption process. When IV reference data are available, the problem is the selection of an empirical input function (absorption model). In the present examples, a weighted sum of inverse Gaussian density functions (IGs) was found most useful. It is shown that alternative models (gamma and Weibull densities) are only valid if the input function is log-concave. Furthermore, it is demonstrated for the first time that the sum-of-IGs model can also be applied to fit oral data directly (without IV data). In the present examples, a weighted sum of two or three IGs was sufficient. From the parameters of this function, the model-independent measures AUC and mean residence time can be calculated. It turned out that a good fit of the data in the terminal phase is essential to avoid biased parameter estimates. The time course of the fractional elimination rate and the concept of log-concavity have proved to be useful tools in model selection.
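A minimal sketch of the input-function model discussed: a weighted sum of inverse Gaussian densities, here in the mean-time/relative-dispersion parameterization common in pharmacokinetics; the weights and parameters are chosen arbitrarily for illustration.

import numpy as np

def ig_density(t, mt, cv2):
    # Inverse Gaussian density with mean input time mt and relative dispersion cv2.
    return np.sqrt(mt / (2 * np.pi * cv2 * t**3)) * \
           np.exp(-(t - mt)**2 / (2 * cv2 * mt * t))

t = np.linspace(0.01, 24.0, 2000)                # hours
# Weighted sum of two IGs describing a fast and a slow input phase.
f_in = 0.7 * ig_density(t, 1.0, 0.4) + 0.3 * ig_density(t, 6.0, 0.8)
print(np.sum(f_in) * (t[1] - t[0]))              # ~1: a proper input density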
INFERRING THE ECCENTRICITY DISTRIBUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogg, David W.; Bovy, Jo; Myers, Adam D., E-mail: david.hogg@nyu.ed
2010-12-20
Standard maximum-likelihood estimators for binary-star and exoplanet eccentricities are biased high, in the sense that the estimated eccentricity tends to be larger than the true eccentricity. As with most non-trivial observables, a simple histogram of estimated eccentricities is not a good estimate of the true eccentricity distribution. Here, we develop and test a hierarchical probabilistic method for performing the relevant meta-analysis, that is, inferring the true eccentricity distribution, taking as input the likelihood functions for the individual star eccentricities, or samplings of the posterior probability distributions for the eccentricities (under a given, uninformative prior). The method is a simple implementation of a hierarchical Bayesian model; it can also be seen as a kind of heteroscedastic deconvolution. It can be applied to any quantity measured with finite precision (other orbital parameters, or indeed any astronomical measurements of any kind, including magnitudes, distances, or photometric redshifts) so long as the measurements have been communicated as a likelihood function or a posterior sampling.
Linear and nonlinear ARMA model parameter estimation using an artificial neural network
NASA Technical Reports Server (NTRS)
Chon, K. H.; Cohen, R. J.
1997-01-01
This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
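A small sketch of the linear end of this comparison: recovering the AR parameters of a simulated system by least squares, the baseline against which a polynomial-activation network estimate would be compared; the coefficients are illustrative.

import numpy as np

rng = np.random.default_rng(5)
a1, a2 = 0.6, -0.2                       # true AR(2) coefficients
y = np.zeros(2000)
e = rng.normal(0.0, 0.1, 2000)
for k in range(2, 2000):                 # simulate the AR(2) system
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + e[k]

# Least-squares fit: regress y[k] on its own past values.
Phi = np.column_stack([y[1:-1], y[:-2]])
theta = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]
print(theta)                             # ~[0.6, -0.2]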
Chen, Chang Hao; McCullagh, Elizabeth A.; Pun, Sio Hang; Mak, Peng Un; Vai, Mang I; Mak, Pui In; Klug, Achim; Lei, Tim C.
2017-01-01
The ability to record and to control action potential firing in neuronal circuits of the brain is critical to understand how the brain functions on the cellular and network levels. Recent development of optogenetic proteins allows direct stimulation or inhibition of action potential firing of neurons upon optical illumination. In this paper, we combined a low-noise and high input impedance (or low input capacitance) neural recording amplifier, and a high current laser/LED driver in a monolithic integrated circuit (IC) for simultaneous neural recording and optogenetic neural control. The low input capacitance of the amplifier (9.7 pF) was achieved through adding a dedicated unity gain input stage optimized for high impedance metal electrodes. The input referred noise of the amplifier was measured to be 4.57 µVrms, which is lower than the estimated thermal noise of the metal electrode. Thus, action potentials originating from a single neuron can be recorded with a signal-to-noise ratio of ~6.6. The LED/laser current driver delivers a maximum current of 330 mA to generate adequate light for optogenetic control. We experimentally tested the functionality of the IC with an anesthetized Mongolian gerbil and recorded auditory stimulated action potentials from the inferior colliculus. Furthermore, we showed that spontaneous firing of 5th (trigeminal) nerve fibers was inhibited using the optogenetic protein Halorhodopsin. A noise model was also derived including the equivalent electronic components of the metal electrode and the high current driver to guide the design. PMID:28221990
DOE Office of Scientific and Technical Information (OSTI.GOV)
The purpose of the computer program is to generate system matrices that model the data acquisition process in dynamic single photon emission computed tomography (SPECT). The application is the reconstruction of dynamic data from projection measurements that provide the time evolution of activity uptake and washout in an organ of interest. The measurement of the time activity in the blood and organ tissue provides time-activity curves (TACs) that are used to estimate kinetic parameters. The program provides a correct model of the in vivo spatial and temporal distribution of radioactivity in organs. The model accounts for the attenuation of the internally emitting radioactivity, it accounts for the varying point response of the collimators, and it correctly models the time variation of the activity in the organs. One important application where the software is being used is measuring the arterial input function (AIF) in a dynamic SPECT study where the data are acquired from a slow camera rotation. Measurement of the AIF is essential to deriving quantitative estimates of regional myocardial blood flow using kinetic models. A study was performed to evaluate whether a slowly rotating SPECT system could provide accurate AIFs for myocardial perfusion imaging (MPI). Methods: Dynamic cardiac SPECT was first performed in human subjects at rest using a Phillips Precedence SPECT/CT scanner. Dynamic measurements of Tc-99m-tetrofosmin in the myocardium were obtained using an infusion time of 2 minutes. Blood input, myocardium tissue, and liver TACs were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. Results: The spatiotemporal 4D ML-EM reconstructions gave more accurate reconstructions than did standard frame-by-frame 3D ML-EM reconstructions. From additional computer simulations and phantom studies, it was determined that a 1 minute infusion with a SPECT system rotation speed providing 180 degrees of projection data every 54 s can produce measurements of blood pool and myocardial TACs. This has important application in the calculation of coronary flow reserve using rest/stress dynamic cardiac SPECT. The system matrices are used in maximum likelihood and maximum a posteriori formulations in estimation theory, where through iterative algorithms (conjugate gradient, expectation maximization, or maximum a posteriori probability algorithms) the solution is determined that maximizes a likelihood or a posteriori probability function.
NASA Astrophysics Data System (ADS)
Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Chen, Ying
2016-05-01
Health state estimation of inaccessible components in complex systems necessitates effective state estimation techniques using the observable variables of the system. The task becomes much more complicated when the system is nonlinear/non-Gaussian and receives stochastic input. In this work, a novel sequential state estimation framework is developed based on a particle filtering (PF) scheme for state estimation of a general class of nonlinear dynamical systems with stochastic input. Performance of the developed framework is first validated by simulation on a Bivariate Non-stationary Growth Model (BNGM) as a benchmark. In the next step, three years of operating data from an industrial gas turbine engine (GTE) are utilized to verify the effectiveness of the developed framework. A comprehensive thermodynamic model of the GTE is therefore developed to formulate the relation between the observable parameters and the dominant degradation symptoms of the turbine, namely loss of isentropic efficiency and increase of mass flow. The results confirm the effectiveness of the developed framework for simultaneous estimation of multiple degradation symptoms in complex systems with noisy measured inputs.
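A generic bootstrap particle filter along these lines, a simplified stand-in for the authors' framework; the model functions and noise levels below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(y, u, f, h, q, r, n_particles=1000):
    """Bootstrap particle filter for x_k = f(x_{k-1}, u_k) + w, y_k = h(x_k) + v.
    u is a noisy/stochastic input sequence; q, r are process/measurement std devs."""
    x = rng.normal(0.0, 1.0, n_particles)                  # initial particle cloud
    estimates = []
    for k in range(len(y)):
        x = f(x, u[k]) + rng.normal(0.0, q, n_particles)   # propagate with stochastic input
        w = np.exp(-0.5 * ((y[k] - h(x)) / r) ** 2)        # Gaussian likelihood weights
        w /= w.sum()
        estimates.append(np.sum(w * x))                    # weighted state estimate
        x = x[rng.choice(n_particles, n_particles, p=w)]   # multinomial resampling
    return np.array(estimates)

# benchmark-style nonlinear model (illustrative, not the exact BNGM of the paper)
f = lambda x, u: 0.5 * x + 25 * x / (1 + x**2) + u
h = lambda x: x**2 / 20

n = 100
u = 8 * np.cos(1.2 * np.arange(n)) + rng.normal(0, 0.5, n)  # noisy measured input
x_true = np.zeros(n); y = np.zeros(n)
for k in range(1, n):
    x_true[k] = f(np.array([x_true[k-1]]), u[k])[0] + rng.normal(0, np.sqrt(10))
    y[k] = h(x_true[k]) + rng.normal(0, 1.0)
x_hat = particle_filter(y, u, f, h, q=np.sqrt(10), r=1.0)
```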
NASA Astrophysics Data System (ADS)
Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.
2018-02-01
Random set theory is a general framework that encompasses uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures, or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. To accelerate the calculation, a well-known and efficient probability-based reliability method, subset simulation, is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, with disjoint failure domains and nonlinear limit state functions. The proposed methodology drastically reduces the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2014-05-01
Atmospheric dispersion models are used in response to accidental releases with two purposes: minimising the population exposure during the accident, and complementing field measurements for the assessment of short- and long-term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimates of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which derives IRSN's operational long-distance atmospheric dispersion model ldX. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: high-dimensional inputs; correlated inputs or inputs with complex structures; high-dimensional output; and a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet, a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most outputs and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could be used later to calibrate the input variables' probability distributions. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. An indicator based on emission peak time matching was elaborated to complement classical statistical scores, which were dominated by deposit dose rates and almost insensitive to lower-atmosphere dose rates. The substantial sensitivity of these performance indicators is auspicious for future calibration attempts and indicates that the simple perturbations used here may be sufficient to represent an essential part of the overall uncertainty.
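The Morris screening method can be sketched compactly; the toy model below is illustrative, not the Polyphemus/Polair3D configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def morris_screening(model, n_inputs, n_traj=20, delta=0.1):
    """One-at-a-time Morris elementary effects.
    Returns mu* (mean |EE|, overall influence) and sigma (std of EE,
    nonlinearity/interactions) per input, inputs assumed scaled to [0, 1]."""
    effects = [[] for _ in range(n_inputs)]
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, n_inputs)   # random base point
        y0 = model(x)
        for i in rng.permutation(n_inputs):       # perturb inputs in random order
            x_new = x.copy()
            x_new[i] += delta
            y1 = model(x_new)
            effects[i].append((y1 - y0) / delta)  # elementary effect of input i
            x, y0 = x_new, y1
    ee = [np.array(e) for e in effects]
    mu_star = np.array([np.abs(e).mean() for e in ee])
    sigma = np.array([e.std() for e in ee])
    return mu_star, sigma

# toy model: input 2 is nearly non-influential and could be discarded
model = lambda x: x[0] ** 2 + 5 * x[1] + 0.01 * x[2]
print(morris_screening(model, 3))
```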
Estimation of effective connectivity using multi-layer perceptron artificial neural network.
Talebi, Nasibeh; Nasrabadi, Ali Motie; Mohammad-Rezazadeh, Iman
2018-02-01
Studies on interactions between brain regions estimate effective connectivity, usually based on causality inferences made on the basis of temporal precedence. In this study, the causal relationship is modeled by a multi-layer perceptron feed-forward artificial neural network, because of the ANN's ability to generate appropriate input-output mapping and to learn from training examples without needing detailed knowledge of the underlying system. At any time instant, the past samples of the data are placed at the network input, and the subsequent values are predicted at its output. To estimate the strength of interactions, a "causality coefficient" measure is defined based on the network structure, the connecting weights, and the parameters of the hidden-layer activation function. Simulation analysis demonstrates that the method, called CREANN (Causal Relationship Estimation by Artificial Neural Network), can estimate time-invariant and time-varying effective connectivity in terms of MVAR coefficients. The method shows robustness with respect to the noise level of the data. Furthermore, the estimates are not significantly influenced by the model order (considered time lag) or by different initial conditions (initial random weights and parameters of the network). CREANN is also applied to EEG data collected during a memory recognition task. The results indicate that it can reveal changes in the information flow between brain regions involved in the episodic memory retrieval process. These convincing results emphasize that CREANN can be used as an appropriate method to estimate causal relationships among brain signals.
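A simplified Granger-style variant of this idea, comparing MLP prediction errors with and without the putative source's past rather than computing the paper's weight-based causality coefficient, can be sketched as:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged(data, p):
    """Stack p past samples of each signal as features; target is the next sample."""
    X = np.hstack([data[i:len(data) - p + i] for i in range(p)])
    return X, data[p:]

rng = np.random.default_rng(2)
n, p = 2000, 3
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):                      # y is causally driven by past x
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

Xf, _ = lagged(np.column_stack([x, y]), p)     # full model: past of x and y
_, yt = lagged(y[:, None], p)
full = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(Xf, yt.ravel())
Xr, _ = lagged(y[:, None], p)                  # restricted model: past of y only
restr = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(Xr, yt.ravel())

err_full = np.mean((full.predict(Xf) - yt.ravel()) ** 2)
err_restr = np.mean((restr.predict(Xr) - yt.ravel()) ** 2)
print("Granger-style x->y strength:", np.log(err_restr / err_full))  # > 0 implies influence
```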
Systems and methods for reconfiguring input devices
NASA Technical Reports Server (NTRS)
Lancaster, Jeff (Inventor); De Mers, Robert E. (Inventor)
2012-01-01
A system includes an input device having first and second input members configured to be activated by a user. The input device is configured to generate activation signals associated with activation of the first and second input members, and each of the first and second input members is associated with an input function. A processor is coupled to the input device and configured to receive the activation signals. A memory is coupled to the processor and includes a reconfiguration module configured to store the input functions assigned to the first and second input members and, upon execution by the processor, to reconfigure the input functions assigned to the input members when the first input member is inoperable.
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
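A toy version of the multiplier idea, with a plain random-walk Metropolis sampler standing in for DREAM and a linear response standing in for MODFLOW; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# toy forward model standing in for MODFLOW: heads respond linearly to
# recharge and pumping, each scaled by an uncertain multiplier
def forward(m_recharge, m_pumping):
    return 10.0 * m_recharge - 4.0 * m_pumping

obs = forward(1.2, 0.8) + rng.normal(0, 0.2, 50)   # synthetic head observations
sigma = 0.2

def log_post(theta):
    """Gaussian likelihood with a positivity constraint on the multipliers."""
    if np.any(theta <= 0):
        return -np.inf
    resid = obs - forward(*theta)
    return -0.5 * np.sum((resid / sigma) ** 2)

theta = np.array([1.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(20000):                             # random-walk Metropolis
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[5000:])                     # discard burn-in
print("posterior means of multipliers:", chain.mean(axis=0))
```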
Feasibility study of TSPO quantification with [18F]FEPPA using population-based input function.
Mabrouk, Rostom; Strafella, Antonio P; Knezevic, Dunja; Ghadery, Christine; Mizrahi, Romina; Gharehgazlou, Avideh; Koshimori, Yuko; Houle, Sylvain; Rusjan, Pablo
2017-01-01
The input function (IF) is a core element in the quantification of translocator protein 18 kDa with positron emission tomography (PET), as no suitable reference region with negligible binding has been identified. Arterial blood sampling is therefore needed to create the IF (ASIF). In the present manuscript we study individualization of a population-based input function (PBIF) with a single manual arterial sample to estimate total distribution volume (VT) for [18F]FEPPA and to replicate previously published clinical studies in which the ASIF was used. The data of three previous [18F]FEPPA studies (39 healthy controls (HC), 16 patients with Parkinson's disease (PD), and 18 with Alzheimer's disease (AD)) were reanalyzed with the new approach. The PBIF was used with Logan graphical analysis (GA), neglecting the vascular contribution, to estimate VT. The time of linearization of the GA was determined with the maximum-error criterion. The optimal calibration of the PBIF was determined based on the area under the curve (AUC) of the IF and the agreement range of VT between methods. The shape of the IF between groups was studied while taking into account genotyping of the polymorphism (rs6971). The PBIF scaled with a single value of activity due to unmetabolized radioligand in arterial plasma, calculated as the average of samples taken at 60 and 90 min post-injection, yielded a good interval of agreement between methods and optimized the AUC of the IF. In HC, gray matter VTs estimated by PBIF correlated highly with those obtained using the standard method (r2 = 0.82, p = 0.0001). Bland-Altman plots revealed that PBIF slightly underestimates (~1 mL/cm3) VT calculated by ASIF (including a vascular contribution). It was verified that the AUCs of the ASIF were independent of genotype and disease (HC, PD, and AD). Previous clinical results were replicated using PBIF but with lower statistical power. A single arterial blood sample taken 75 minutes post-injection contains enough information to individualize the IF in the groups of subjects studied; however, the higher variability produced requires an increase in sample size to reach the same effect size.
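The Logan graphical analysis used here has a compact form; a sketch on synthetic one-tissue-compartment data with illustrative kinetic constants:

```python
import numpy as np

def logan_vt(t, ct, cp, t_star):
    """Logan graphical analysis: for t > t*, the plot of
    int_0^t C_T dtau / C_T(t) versus int_0^t C_p dtau / C_T(t)
    becomes linear with slope VT (total distribution volume)."""
    int_ct = np.array([np.trapz(ct[:i + 1], t[:i + 1]) for i in range(len(t))])
    int_cp = np.array([np.trapz(cp[:i + 1], t[:i + 1]) for i in range(len(t))])
    mask = t >= t_star
    xx = int_cp[mask] / ct[mask]
    yy = int_ct[mask] / ct[mask]
    slope, _ = np.polyfit(xx, yy, 1)
    return slope

# synthetic one-tissue data: dC_T/dt = K1*Cp - k2*C_T, so true VT = K1/k2
t = np.linspace(0.0, 90.0, 400)              # minutes
cp = t * np.exp(-t / 4.0)                    # illustrative plasma input function
K1, k2 = 0.1, 0.05
ct = np.zeros_like(t)
for i in range(1, len(t)):                   # simple Euler integration
    dt = t[i] - t[i - 1]
    ct[i] = ct[i - 1] + dt * (K1 * cp[i - 1] - k2 * ct[i - 1])
print("Logan VT:", logan_vt(t, ct, cp, t_star=30.0), "true VT:", K1 / k2)
```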
Frequency domain surface EMG sensor fusion for estimating finger forces.
Potluri, Chandrasekhar; Kumar, Parmod; Anugolu, Madhavi; Urfer, Alex; Chiu, Steve; Naidu, D; Schoen, Marco P
2010-01-01
Extracting or estimating skeletal hand/finger forces using surface electromyographic (sEMG) signals poses many challenges due to cross-talk, noise, and temporally and spatially modulated signal characteristics. Conventional sEMG measurements are based on single-sensor data. In this paper, array sensors are used along with a proposed sensor fusion scheme that results in a simple Multi-Input-Single-Output (MISO) transfer function. Experimental data are used along with system identification to find this MISO system. A Genetic Algorithm (GA) approach is employed to optimize the characteristics of the MISO system. The proposed fusion-based approach is tested experimentally and indicates improvement in finger/hand force estimation.
Shi, Wuxi; Luo, Rui; Li, Baoquan
2017-01-01
In this study, an adaptive fuzzy prescribed performance control approach is developed for a class of uncertain multi-input and multi-output (MIMO) nonlinear systems with unknown control direction and unknown dead-zone inputs. The properties of symmetric matrices are exploited to design the adaptive fuzzy prescribed performance controller, and a Nussbaum-type function is incorporated in the controller to estimate the unknown control direction. This method has two prominent advantages: it does not require a priori knowledge of the control direction, and only three parameters need to be updated on-line for this MIMO system. It is proved that all the signals in the resulting closed-loop system are bounded and that the tracking errors converge to a small residual set with the prescribed performance bounds. The effectiveness of the proposed approach is validated by simulation results.
Chandrasekaran, Srinivas Niranj; Das, Jhuma; Dokholyan, Nikolay V.; Carter, Charles W.
2016-01-01
PATH rapidly computes a path and a transition state between crystal structures by minimizing the Onsager-Machlup action. It requires input parameters whose range of values can generate different transition-state structures that cannot be uniquely compared with those generated by other methods. We outline modifications to estimate these input parameters to circumvent these difficulties and validate the PATH transition states by showing consistency between transition-states derived by different algorithms for unrelated protein systems. Although functional protein conformational change trajectories are to a degree stochastic, they nonetheless pass through a well-defined transition state whose detailed structural properties can rapidly be identified using PATH. PMID:26958584
NASA Astrophysics Data System (ADS)
Itahashi, S.; Yumimoto, K.; Uno, I.; Kim, S.
2012-12-01
Air quality studies based on chemical transport models have provided many important results promoting our knowledge of air pollution phenomena; however, discrepancies between modeling results and observation data remain an important issue to overcome. One concerning issue is the over-prediction of summertime tropospheric ozone in remote areas of Japan. This problem has been pointed out in model comparison studies at both the regional scale (e.g., MICS-Asia) and the global scale (e.g., TF-HTAP). Several reasons can be listed: (i) the modeled reproducibility of the penetration of clean oceanic air masses, (ii) correct estimation of the anthropogenic NOx/VOC emissions over East Asia, and (iii) the chemical reaction scheme used in the model simulation. In this study, we attempt an inverse estimation of some important chemical reactions based on a combined system of DDM (decoupled direct method) sensitivity analysis and a modeled Green's function approach. The DDM is an efficient and accurate way of performing sensitivity analysis with respect to model inputs; it calculates sensitivity coefficients representing the responsiveness of atmospheric chemical concentrations to perturbations in a model input or parameter. The inverse solutions with the Green's functions are given by a linear least-squares method but are still robust against nonlinearities. To construct the response matrix (i.e., the Green's functions), we can directly use the results of the DDM sensitivity analysis. The chemical reaction constants that have relatively large uncertainties are determined with constraints from observed ozone concentration data over remote areas of Japan. Our inverse estimation demonstrated an underestimation of the reaction constant producing HNO3 (NO2 + OH + M → HNO3 + M) in the SAPRC99 chemical scheme; the inversion indicated a +29.0% increment to this reaction, in good agreement with the CB4 and CB5 schemes and with the SAPRC07 estimate. For the NO2 photolysis rate, a 49.4% reduction was found. This result indicates that the effect of heavy aerosol loading on photolysis rates must be incorporated in numerical studies.
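The linear least-squares Green's-function inversion named above can be sketched in a few lines; the response matrix and residuals below are purely illustrative, not the study's DDM output:

```python
import numpy as np

# Response matrix G (the "Green's functions"): sensitivity of ozone at m receptor
# sites to fractional perturbations of n reaction constants, taken directly from
# DDM sensitivity runs in the real system (values below are illustrative only).
G = np.array([[12.0, -8.0],
              [ 9.0, -6.5],
              [15.0, -7.2]])           # m = 3 receptors, n = 2 reactions
d = np.array([3.4, 2.3, 4.6])          # observed-minus-modeled ozone at receptors

# linear least-squares inverse estimate of the fractional scaling adjustments
scaling, *_ = np.linalg.lstsq(G, d, rcond=None)
print("fractional adjustments to reaction constants:", scaling)
# a value of +0.29 would correspond to a +29.0% increment of that reaction constant
```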
Faes, L; Porta, A; Cucino, R; Cerutti, S; Antolini, R; Nollo, G
2004-06-01
Although the concept of a transfer function is intrinsically related to an input-output relationship, the traditional and widely used estimation method merges both feedback and feedforward interactions between the two analyzed signals. This limitation may endanger the reliability of transfer function analysis in biological systems characterized by closed-loop interactions. In this study, a method for estimating the transfer function between closed-loop interacting signals was proposed and validated in the field of cardiovascular and cardiorespiratory variability. The two analyzed signals x and y were described by a bivariate autoregressive model, and the causal transfer function from x to y was estimated after imposing causality by setting to zero the model coefficients representing the reverse effects from y to x. The method was tested in simulations reproducing linear open- and closed-loop interactions, showing a better adherence of the causal transfer function to the theoretical curves with respect to the traditional approach in the presence of non-negligible reverse effects. It was then applied in ten healthy young subjects to characterize the transfer functions from respiration to heart period (RR interval) and to systolic arterial pressure (SAP), and from SAP to RR interval. In the first two cases, the causal and non-causal transfer function estimates were comparable, indicating that respiration, acting as an exogenous signal, sets an open-loop relationship upon SAP and RR interval. On the contrary, causal and traditional transfer functions from SAP to RR were significantly different, suggesting the presence of a considerable influence in the opposite causal direction. Thus, the proposed causal approach seems appropriate for the estimation of parameters, like the gain and the phase lag from SAP to RR interval, which have large clinical and physiological relevance.
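A sketch of the causal estimator: fitting the x-to-y branch by least squares (equivalent to zeroing the reverse y-to-x coefficients of the bivariate AR model) and evaluating the resulting transfer function; the simulated system below is illustrative:

```python
import numpy as np

def causal_tf(x, y, p=8, nfft=256, fs=1.0):
    """Fit y_t = sum_k a_k y_{t-k} + sum_k b_k x_{t-k} + e by least squares,
    then evaluate the causal transfer function H(f) = B(f) / (1 - A(f))."""
    n = len(y)
    rows = [np.r_[y[t - p:t][::-1], x[t - p:t][::-1]] for t in range(p, n)]
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[p:], rcond=None)
    a, b = theta[:p], theta[p:]
    f = np.arange(nfft // 2) * fs / nfft
    z = np.exp(-2j * np.pi * f / fs)
    A = sum(a[k] * z ** (k + 1) for k in range(p))
    B = sum(b[k] * z ** (k + 1) for k in range(p))
    return f, B / (1 - A)

# example: x drives y through a known filter plus noise (no reverse effect)
rng = np.random.default_rng(4)
x = rng.normal(size=5000)
y = np.zeros(5000)
for t in range(2, 5000):
    y[t] = 0.5 * y[t - 1] + 0.9 * x[t - 1] - 0.4 * x[t - 2] + 0.2 * rng.normal()
f, H = causal_tf(x, y)
print("causal gain at low frequency:", abs(H[1]))
```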
Estimation of Electrically-Evoked Knee Torque from Mechanomyography Using Support Vector Regression.
Ibitoye, Morufu Olusola; Hamzaid, Nur Azah; Abdul Wahab, Ahmad Khairi; Hasnan, Nazirah; Olatunji, Sunday Olusanya; Davis, Glen M
2016-07-19
The difficulty of real-time muscle force or joint torque estimation during neuromuscular electrical stimulation (NMES) in physical therapy and exercise science has motivated recent research interest in torque estimation from other muscle characteristics. This study investigated the accuracy of a computational intelligence technique for estimating NMES-evoked knee extension torque based on the mechanomyographic signals (MMG) of contracting muscles recorded from eight healthy males. The knee torque was modelled via Support Vector Regression (SVR), chosen for its good generalization ability in related fields. Inputs to the proposed model were MMG amplitude characteristics, the level of electrical stimulation or contraction intensity, and knee angle. A Gaussian kernel function and its optimal parameters were identified by the best performance measure and applied as the SVR kernel to build an effective knee torque estimation model. To train and test the model, the data were partitioned into training (70%) and testing (30%) subsets. The SVR estimation accuracy, based on the coefficient of determination (R²) between the actual and the estimated torque values, was up to 94% and 89% for the training and testing cases, with root mean square errors (RMSE) of 9.48 and 12.95, respectively. The knee torque estimates obtained using SVR modelling agreed well with the experimental data from an isokinetic dynamometer. These findings support the realization of a closed-loop NMES system for functional tasks using MMG as the feedback signal source and an SVR algorithm for joint torque estimation.
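A minimal sketch of RBF-kernel SVR for this kind of torque estimation, on synthetic stand-in features; the C and gamma values are placeholders for the tuned optimal parameters:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(5)

# synthetic stand-in features: [MMG amplitude, stimulation intensity, knee angle]
X = rng.uniform([0, 10, 0], [1, 100, 90], size=(200, 3))
torque = 30 * X[:, 0] + 0.2 * X[:, 1] * np.sin(np.radians(X[:, 2])) \
         + rng.normal(0, 2, 200)                  # illustrative torque relation

X_tr, X_te, y_tr, y_te = train_test_split(X, torque, test_size=0.3, random_state=0)

# RBF ("Gaussian") kernel SVR; C and gamma would normally be tuned
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, gamma=0.1))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)))
```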
Probabilistic estimation of residential air exchange rates for ...
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory infiltration model, utilizing housing characteristics and meteorological data with adjustment for window-opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX), inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AERs, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window-opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions, reflecting within- and between-city differences and helping reduce error in estimates of air pollutant exposure.
Reexamination of optimal quantum state estimation of pure states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashi, A.; Hashimoto, T.; Horibe, M.
2005-09-15
A direct derivation is given for the optimal mean fidelity of quantum state estimation of a d-dimensional unknown pure state with its N copies given as input, which was first obtained by Hayashi in terms of an infinite set of covariant positive operator valued measures (POVMs) and by Bruss and Macchiavello establishing a connection to optimal quantum cloning. An explicit condition on POVM measurement operators for optimal estimators is obtained, by which we construct optimal estimators with finite POVMs using exact quadratures on a hypersphere. These finite optimal estimators are not generally universal, where universality means the fidelity is independent of input states. However, any optimal estimator with finite POVM for M(>N) copies is universal if it is used for N copies as input.
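For reference, the optimal mean fidelity referred to above has the well-known closed form reported in this literature:

```latex
\[
  \bar{F}_{\mathrm{opt}} \;=\; \frac{N+1}{N+d},
\]
```

where N is the number of input copies and d the Hilbert-space dimension; for a single qubit copy (N = 1, d = 2) this gives the familiar 2/3, and the fidelity approaches 1 as N grows.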
Real Time Calibration Method for Signal Conditioning Amplifiers
NASA Technical Reports Server (NTRS)
Medelius, Pedro J. (Inventor); Mata, Carlos T. (Inventor); Eckhoff, Anthony (Inventor); Perotti, Jose (Inventor); Lucena, Angel (Inventor)
2004-01-01
A signal conditioning amplifier receives an input signal from an input such as a transducer. The signal is amplified and processed through an analog to digital converter and sent to a processor. The processor estimates the input signal provided by the transducer to the amplifier via a multiplexer. The estimated input signal is provided as a calibration voltage to the amplifier immediately following the receipt of the amplified input signal. The calibration voltage is amplified by the amplifier and provided to the processor as an amplified calibration voltage. The amplified calibration voltage is compared to the amplified input signal, and if a significant error exists, the gain and/or offset of the amplifier may be adjusted as necessary.
Overview and Assessment of Antarctic Ice-Sheet Mass Balance Estimates: 1992-2009
NASA Technical Reports Server (NTRS)
Zwally, H. Jay; Giovinetto, Mario B.
2011-01-01
Mass balance estimates for the Antarctic Ice Sheet (AIS) in the 2007 report by the Intergovernmental Panel on Climate Change and in more recent reports lie between approximately +50 and -250 Gt/year for 1992 to 2009. The 300 Gt/year range is approximately 15% of the annual mass input and 0.8 mm/year Sea Level Equivalent (SLE). Two estimates from radar altimeter measurements of elevation change by European Remote-sensing Satellites (ERS) (+28 and -31 Gt/year) lie in the upper part, whereas estimates from the Input-minus-Output Method (IOM) and the Gravity Recovery and Climate Experiment (GRACE) lie in the lower part (-40 to -246 Gt/year). We compare the various estimates, discuss the methodology used, and critically assess the results. We also modify the IOM estimate using (1) an alternate extrapolation to estimate the discharge from the non-observed 15% of the periphery, and (2) substitution of input from a field data compilation for input from an atmospheric model in 6% of the area. The modified IOM estimate reduces the loss from 136 Gt/year to 13 Gt/year. Two ERS-based estimates, the modified IOM, and a GRACE-based estimate for observations within 1992-2005 lie in a narrowed range of +27 to -40 Gt/year, which is about 3% of the annual mass input and only 0.2 mm/year SLE. Our preferred estimate for 1992-2001 is -47 Gt/year for West Antarctica, +16 Gt/year for East Antarctica, and -31 Gt/year overall (+0.1 mm/year SLE), not including part of the Antarctic Peninsula (1.07% of the AIS area). Although recent reports of large and increasing rates of mass loss with time from GRACE-based studies cite agreement with IOM results, our evaluation does not support that conclusion.
Parameter Estimation for Thurstone Choice Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vojnovic, Milan; Yun, Seyoung
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from popular sport competitions and online labor platforms.
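For the Bradley-Terry special case mentioned above, the maximum likelihood estimator is a short optimization; the comparisons below are toy data:

```python
import numpy as np
from scipy.optimize import minimize

# pair comparisons (i, j) meaning "item i was chosen over item j"
comparisons = [(0, 1), (0, 2), (1, 2), (0, 1), (2, 1), (0, 2)]
n_items = 3

def neg_log_lik(theta):
    """Bradley-Terry: P(i beats j) = exp(theta_i) / (exp(theta_i) + exp(theta_j)),
    so -log P(i beats j) = log(1 + exp(theta_j - theta_i))."""
    return sum(np.log1p(np.exp(theta[j] - theta[i])) for i, j in comparisons)

res = minimize(neg_log_lik, np.zeros(n_items))
theta = res.x - res.x.mean()        # strengths are identifiable only up to a constant
print("estimated strengths:", theta)
```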
Economic cost of initial attack and large-fire suppression
Armando González-Cabán
1983-01-01
A procedure has been developed for estimating the economic cost of initial attack and large-fire suppression. The procedure uses a per-unit approach to estimate total attack and suppression costs on an input-by-input basis. Fire management inputs (FMIs) are the production units used. All direct and indirect costs are charged to the FMIs. With the unit approach, all...
Cortical connective field estimates from resting state fMRI activity.
Gravel, Nicolás; Harvey, Ben; Nordhjem, Barbara; Haak, Koen V; Dumoulin, Serge O; Renken, Remco; Curčić-Blake, Branislava; Cornelissen, Frans W
2014-01-01
One way to study connectivity in visual cortical areas is by examining spontaneous neural activity. In the absence of visual input, such activity remains shaped by the underlying neural architecture and, presumably, may still reflect visuotopic organization. Here, we applied population connective field (CF) modeling to estimate the spatial profile of functional connectivity in the early visual cortex during resting-state functional magnetic resonance imaging (RS-fMRI). This model-based analysis estimates the spatial integration between blood-oxygen level dependent (BOLD) signals in distinct cortical visual field maps using fMRI. Just as population receptive field (pRF) mapping predicts the collective neural activity in a voxel as a function of response selectivity to stimulus position in visual space, CF modeling predicts the activity of voxels in one visual area as a function of the aggregate activity in voxels in another visual area. In combination with pRF mapping, CF locations on the cortical surface can be interpreted in visual space, thus enabling reconstruction of visuotopic maps from resting-state data. We demonstrate that V1 ➤ V2 and V1 ➤ V3 CF maps estimated from resting-state fMRI data show visuotopic organization. Therefore, we conclude that, despite some variability in CF estimates between RS scans, neural properties such as CF maps and CF size can be derived from resting-state data.
Sun, Ying; Gu, Lianhong; Dickinson, Robert E; Pallardy, Stephen G; Baker, John; Cao, Yonghui; DaMatta, Fábio Murilo; Dong, Xuejun; Ellsworth, David; Van Goethem, Davina; Jensen, Anna M; Law, Beverly E; Loos, Rodolfo; Martins, Samuel C Vitor; Norby, Richard J; Warren, Jeffrey; Weston, David; Winter, Klaus
2014-04-01
Worldwide measurements of nearly 130 C3 species covering all major plant functional types are analysed in conjunction with model simulations to determine the effects of mesophyll conductance (g(m)) on photosynthetic parameters and their relationships estimated from A/Ci curves. We find that an assumption of infinite g(m) results in up to 75% underestimation of the maximum carboxylation rate V(cmax), 60% of the maximum electron transport rate J(max), and 40% of the triose phosphate utilization rate T(u). V(cmax) is most sensitive, J(max) is less sensitive, and T(u) has the least sensitivity to the variation of g(m). Because of this asymmetrical effect of g(m), the ratios of J(max) to V(cmax), T(u) to V(cmax) and T(u) to J(max) are all overestimated. An infinite g(m) assumption also limits the freedom of variation of estimated parameters and artificially constrains parameter relationships to stronger shapes. These findings suggest the importance of quantifying g(m) for understanding in situ photosynthetic machinery functioning. We show that a nonzero resistance to CO2 movement in chloroplasts has small effects on estimated parameters. A non-linear function with g(m) as input is developed to convert the parameters estimated under an assumption of infinite g(m) to proper values. This function will facilitate g(m) representation in global carbon cycle models.
Model-free quantification of dynamic PET data using nonparametric deconvolution
Zanderigo, Francesca; Parsey, Ramin V; Todd Ogden, R
2015-01-01
Dynamic positron emission tomography (PET) data are usually quantified using compartment models (CMs) or derived graphical approaches. Often, however, CMs either do not properly describe the tracer kinetics, or are not identifiable, leading to nonphysiologic estimates of the tracer binding. The PET data are modeled as the convolution of the metabolite-corrected input function and the tracer impulse response function (IRF) in the tissue. Using nonparametric deconvolution methods, it is possible to obtain model-free estimates of the IRF, from which functionals related to tracer volume of distribution and binding may be computed, but this approach has rarely been applied in PET. Here, we apply nonparametric deconvolution using singular value decomposition to simulated and test–retest clinical PET data with four reversible tracers well characterized by CMs ([11C]CUMI-101, [11C]DASB, [11C]PE2I, and [11C]WAY-100635), and systematically compare reproducibility, reliability, and identifiability of various IRF-derived functionals with that of traditional CMs outcomes. Results show that nonparametric deconvolution, completely free of any model assumptions, allows for estimates of tracer volume of distribution and binding that are very close to the estimates obtained with CMs and, in some cases, show better test–retest performance than CMs outcomes. PMID:25873427
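A sketch of truncated-SVD deconvolution of the convolution model stated above, on synthetic one-tissue data; the truncation threshold is an illustrative regularization choice:

```python
import numpy as np

def svd_deconvolve(cp, ct, dt, rel_threshold=0.1):
    """Nonparametric deconvolution of C_T = (C_p * IRF) dt via truncated SVD.
    Small singular values are discarded to regularize the ill-posed inversion."""
    n = len(cp)
    A = np.zeros((n, n))               # lower-triangular convolution matrix of C_p
    for i in range(n):
        A[i, :i + 1] = cp[i::-1]
    A *= dt
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_threshold * s[0], 1.0 / s, 0.0)   # truncation
    return Vt.T @ (s_inv * (U.T @ ct)) # estimated IRF samples

# synthetic one-tissue example: IRF(t) = K1 * exp(-k2 t), so VT = K1/k2
dt, n = 0.5, 200
t = np.arange(n) * dt
cp = t * np.exp(-t / 4.0)              # illustrative metabolite-corrected input
irf_true = 0.1 * np.exp(-0.05 * t)
ct = np.convolve(cp, irf_true)[:n] * dt
irf_est = svd_deconvolve(cp, ct, dt)
# a VT-like functional: the integral of the estimated IRF
print("VT estimate:", np.trapz(irf_est, dx=dt), "vs true", 0.1 / 0.05)
```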
NASA Technical Reports Server (NTRS)
Anderson, Leif F.; Harrington, Sean P.; Omeke, Ojei, II; Schwaab, Douglas G.
2009-01-01
This is a case study on revised estimates of induced failure for International Space Station (ISS) on-orbit replacement units (ORUs). We devise a heuristic to leverage operational experience data by aggregating ORU, associated function (vehicle sub-system), and vehicle 'effective' k-factors using actual failure experience. With this input, we determine a significant failure threshold and minimize the difference between the actual and predicted failure rates. We conclude with a discussion of both qualitative and quantitative improvements to the heuristic methods and potential benefits to ISS supportability engineering analysis.
Flood Nowcasting With Linear Catchment Models, Radar and Kalman Filters
NASA Astrophysics Data System (ADS)
Pegram, Geoff; Sinclair, Scott
A pilot study using real-time rainfall data as input to a parsimonious linear distributed flood forecasting model is presented. The aim of the study is to deliver an operational system capable of producing flood forecasts, in real time, for the Mgeni and Mlazi catchments near the city of Durban in South Africa. The forecasts can be made at time steps of the order of a fraction of the catchment response time. To this end, the model is formulated in finite difference form in an equation similar to an Auto Regressive Moving Average (ARMA) model; it is this formulation which provides the required computational efficiency. The ARMA equation is a discretely coincident form of the state-space equations that govern the response of an arrangement of linear reservoirs. This results in a functional relationship between the reservoir response constants and the ARMA coefficients, which guarantees stationarity of the ARMA model. Input to the model is a combined "best estimate" spatial rainfall field, derived from a combination of weather radar and satellite rainfield estimates with point rainfall given by a network of telemetering raingauges. Several strategies are employed to overcome the uncertainties associated with forecasting. Principal among these are the use of optimal (double Kalman) filtering techniques to update the model states and parameters in response to current streamflow observations, and the application of short-term forecasting techniques to provide future estimates of the rainfield as input to the model.
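A sketch of the discretely coincident state-space/ARMA idea for two linear reservoirs in cascade, with assumed response constants; scipy's zero-order-hold discretisation stands in for the paper's derivation:

```python
import numpy as np
from scipy.signal import cont2discrete

# two linear reservoirs in cascade: dS1/dt = u - k1*S1, dS2/dt = k1*S1 - k2*S2,
# streamflow q = k2*S2; discretising gives a stationary ARMA-type recursion
k1, k2, dt = 0.3, 0.1, 1.0                    # response constants, time step (h)
A = np.array([[-k1, 0.0], [k1, -k2]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, k2]])
D = np.array([[0.0]])
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt)

# AR coefficients come from the characteristic polynomial of Ad; stability
# (hence ARMA stationarity) is guaranteed by the positive reservoir constants
print("AR polynomial coefficients:", np.poly(np.linalg.eigvals(Ad)))

# simulate the catchment response to a short rainfall burst
u = np.zeros(50); u[:5] = 2.0                 # 5-hour rain burst (mm/h)
x = np.zeros((2, 1)); q = []
for uk in u:
    q.append(float(Cd @ x))
    x = Ad @ x + Bd * uk
```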
Space transfer vehicle concepts and requirements study. Volume 3, book 1: Program cost estimates
NASA Technical Reports Server (NTRS)
Peffley, Al F.
1991-01-01
The Space Transfer Vehicle (STV) Concepts and Requirements Study cost estimate and program planning analysis is presented. The cost estimating technique used to support STV system, subsystem, and component cost analysis is a mixture of parametric cost estimating and selective cost-analogy approaches. The parametric cost analysis is aimed at developing cost-effective aerobrake, crew module, tank module, and lander designs using parametric cost-estimate data. This is accomplished by using cost as a design parameter in an iterative process with conceptual design input information. The parametric estimating approach segregates costs by major program life-cycle phase (development, production, integration, and launch support). These phases are further broken out into major hardware subsystems, software functions, and tasks according to the STV preliminary program work breakdown structure (WBS). The WBS is defined to a low enough level of detail by the study team to highlight STV system cost drivers. This level of cost visibility provided the basis for cost sensitivity analysis against various design approaches aimed at achieving a cost-effective design. The cost approach, methodology, and rationale are described. A chronological record of the interim review material relating to cost analysis is included, along with a brief summary of the study contract tasks accomplished during that period of review and the key conclusions or observations identified that relate to STV program cost estimates. The STV life-cycle costs are estimated with the proprietary parametric cost model (PCM), with inputs organized by a project WBS. Preliminary life-cycle schedules are also included.
Gronberg, JoAnn M.; Arnold, Terri L.
2017-03-24
County-level estimates of nitrogen and phosphorus inputs from animal manure for the conterminous United States were calculated from animal population inventories in the 2007 and 2012 Census of Agriculture, using previously published methods. These estimates of non-point nitrogen and phosphorus inputs from animal manure were compiled in support of the U.S. Geological Survey's National Water-Quality Assessment Project of the National Water Quality Program and are needed to support national-scale investigations of stream and groundwater quality. The estimates published in this report are comparable with older estimates, allowing changes in nitrogen and phosphorus inputs from manure to be tracked over time.
Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun
2014-01-01
Quantitative analysis of the flue gas of a natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input based on a radial basis function neural network (RBFNN) is used for component prediction of flue gas. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden-layer nodes of the RBFNN are the extension term of the original independent input matrix. Then, partial least squares regression is performed on the extended input matrix and the output matrix to establish the component prediction model of the flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used to evaluate the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of prediction of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared to those of PLS. Hence, the proposed method has higher predictive capability and better robustness.
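A sketch of the extended-input construction: RBF hidden-node outputs appended to the original inputs before an ordinary PLS regression; the data, center, and width choices are illustrative:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)

def rbf_features(X, centers, sigma=1.0):
    """Outputs of RBF hidden nodes, appended to X as the 'extended input'."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# synthetic nonlinear data (a stand-in for the NIR flue-gas spectra)
X = rng.uniform(-1, 1, (300, 5))
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=300)

centers = X[rng.choice(len(X), 10, replace=False)]   # hidden-node centers
X_ext = np.hstack([X, rbf_features(X, centers)])     # original inputs + RBF outputs

pls = PLSRegression(n_components=5).fit(X_ext, y)
pred = pls.predict(X_ext).ravel()
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```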
Input-output model for MACCS nuclear accident impacts estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N
Since the original economic model for MACCS was developed, better-quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of input-output based economic impact estimation. This paper presents an updated MACCS model, based on input-output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct input-output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
The uncertainty of nitrous oxide emissions from grazed grasslands: A New Zealand case study
NASA Astrophysics Data System (ADS)
Kelliher, Francis M.; Henderson, Harold V.; Cox, Neil R.
2017-01-01
Agricultural soils emit nitrous oxide (N2O), a greenhouse gas and the primary source of nitrogen oxides which deplete stratospheric ozone. Agriculture has been estimated to be the largest anthropogenic N2O source. In New Zealand (NZ), pastoral agriculture uses half the land area. To estimate the annual N2O emissions from NZ's agricultural soils, the nitrogen (N) inputs have been determined and multiplied by an emission factor (EF), the mass fraction of N inputs emitted as N2O-N. To estimate the associated uncertainty, we developed an analytical method. For comparison, another estimate was determined by Monte Carlo numerical simulation. For both methods, expert judgement was used to estimate the N input uncertainty. The EF uncertainty was estimated by meta-analysis of the results from 185 NZ field trials. For the analytical method, assuming a normal distribution and independence of the terms used to calculate the emissions (correlation = 0), the estimated 95% confidence limit was ±57%. When there was a normal distribution and an estimated correlation of 0.4 between N input and EF, the latter inferred from experimental data involving six NZ soils, the analytical method estimated a 95% confidence limit of ±61%. The EF data from 185 NZ field trials had a logarithmic normal distribution. For the Monte Carlo method, assuming a logarithmic normal distribution for EF, a normal distribution for the other terms and independence of all terms, the estimated 95% confidence limits were -32% and +88%, or ±60% on average. With the same distribution assumptions and a correlation of 0.4 between N input and EF, the Monte Carlo method's estimated 95% confidence limits were -34% and +94%, or ±64% on average. For the analytical and Monte Carlo methods, EF uncertainty accounted for 95% and 83% of the emissions uncertainty when the correlation between N input and EF was 0 and 0.4, respectively. As the first uncertainty analysis of an agricultural soils N2O emissions inventory using "country-specific" field trials to estimate EF uncertainty, this can be a potentially informative case study for the international scientific community.
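A minimal Monte Carlo propagation in the spirit of the approach described; the N-input and EF distributions below are purely illustrative, whereas the real inventory aggregates many sources and a meta-analysed EF:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# illustrative values only: normally distributed N input and lognormal EF,
# reflecting the distribution shapes reported in the study
N_input = rng.normal(1000.0, 100.0, n)          # Gg N/yr: mean 1000, sd 100
EF = rng.lognormal(np.log(0.01), 0.45, n)       # lognormal EF, median 1%

emissions = N_input * EF * 44.0 / 28.0          # convert N2O-N mass to N2O mass
lo, med, hi = np.percentile(emissions, [2.5, 50, 97.5])
# lognormal EF produces asymmetric confidence limits about the median
print(f"95% confidence limits: {100*(lo/med-1):+.0f}% / {100*(hi/med-1):+.0f}%")
```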
Adaptive torque estimation of robot joint with harmonic drive transmission
NASA Astrophysics Data System (ADS)
Shi, Zhiguo; Li, Yuankai; Liu, Guangjun
2017-11-01
Robot joint torque estimation using input and output position measurements is a promising technique, but the result may be affected by the load variation of the joint. In this paper, a torque estimation method with adaptive robustness and optimality adjustment according to load variation is proposed for robot joint with harmonic drive transmission. Based on a harmonic drive model and a redundant adaptive robust Kalman filter (RARKF), the proposed approach can adapt torque estimation filtering optimality and robustness to the load variation by self-tuning the filtering gain and self-switching the filtering mode between optimal and robust. The redundant factor of RARKF is designed as a function of the motor current for tolerating the modeling error and load-dependent filtering mode switching. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor and two representative filtering methods. The results have demonstrated the effectiveness of the proposed torque estimation technique.
Dynamic causal modelling of brain-behaviour relationships.
Rigoux, L; Daunizeau, J
2015-08-15
In this work, we expose a mathematical treatment of brain-behaviour relationships, which we coin behavioural Dynamic Causal Modelling or bDCM. This approach aims at decomposing the brain's transformation of stimuli into behavioural outcomes, in terms of the relative contribution of brain regions and their connections. In brief, bDCM places the brain at the interplay between stimulus and behaviour: behavioural outcomes arise from coordinated activity in (hidden) neural networks, whose dynamics are driven by experimental inputs. Estimating neural parameters that control network connectivity and plasticity effectively performs a neurobiologically-constrained approximation to the brain's input-outcome transform. In other words, neuroimaging data essentially serves to enforce the realism of bDCM's decomposition of input-output relationships. In addition, post-hoc artificial lesions analyses allow us to predict induced behavioural deficits and quantify the importance of network features for funnelling input-output relationships. This is important, because this enables one to bridge the gap with neuropsychological studies of brain-damaged patients. We demonstrate the face validity of the approach using Monte-Carlo simulations, and its predictive validity using empirical fMRI/behavioural data from an inhibitory control task. Lastly, we discuss promising applications of this work, including the assessment of functional degeneracy (in the healthy brain) and the prediction of functional recovery after lesions (in neurological patients).
Lee, Myunghun
2005-10-01
Given restrictions on sulfur dioxide emissions, a feasible long-run response could involve either an investment in improving boiler fuel-efficiency or a shift to a production process that is effective in removing sulfur dioxide. To allow for the possibility of substitution between sulfur and productive capital, we measure the shadow price of sulfur dioxide as the opportunity cost of lowering sulfur emissions in terms of forgone capital. The input distance function is estimated with data from 51 coal-fired US power units operating between 1977 and 1986. The indirect Morishima elasticities of substitution indicate that the substitutability of capital for sulfur is relatively high. The overall weighted average estimate of the shadow price of sulfur is -0.076 dollars per pound in constant 1976 dollars.
Statistical plant set estimation using Schroeder-phased multisinusoidal input design
NASA Technical Reports Server (NTRS)
Bayard, D. S.
1992-01-01
A frequency domain method is developed for plant set estimation. The estimation of a plant 'set' rather than a point estimate is required to support many methods of modern robust control design. The approach here is based on using a Schroeder-phased multisinusoid input design which has the special property of placing input energy only at the discrete frequency points used in the computation. A detailed analysis of the statistical properties of the frequency domain estimator is given, leading to exact expressions for the probability distribution of the estimation error, and many important properties. It is shown that, for any nominal parametric plant estimate, one can use these results to construct an overbound on the additive uncertainty to any prescribed statistical confidence. The 'soft' bound thus obtained can be used to replace 'hard' bounds presently used in many robust control analysis and synthesis methods.
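A sketch of a Schroeder-phased multisine; the phase formula phi_k = -pi k(k-1)/K is the commonly used flat-amplitude form and is assumed here:

```python
import numpy as np

def schroeder_multisine(n_samples, harmonics, fs=1.0):
    """Multisine with Schroeder phases phi_k = -pi*k*(k-1)/K (common form for
    equal amplitudes), which keeps the crest factor low while placing energy
    only at the selected discrete frequency points."""
    K = len(harmonics)
    t = np.arange(n_samples) / fs
    u = np.zeros(n_samples)
    for idx, k in enumerate(harmonics, start=1):
        phase = -np.pi * idx * (idx - 1) / K
        u += np.cos(2 * np.pi * k * fs / n_samples * t + phase)
    return u / np.max(np.abs(u))      # normalise amplitude

# energy only at DFT bins 1..32: exactly the frequency points used in estimation
u = schroeder_multisine(1024, harmonics=range(1, 33))
spectrum = np.abs(np.fft.rfft(u))
print("dominant bins:", np.sort(np.argsort(spectrum)[-5:]))
```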
Katsevich, Alexander J.; Ramm, Alexander G.
1996-01-01
Local tomography is enhanced to determine the location and value of a discontinuity between a first internal density of an object and a second density of a region within the object. A beam of radiation is directed in a predetermined pattern through the region of the object containing the discontinuity. Relative attenuation data of the beam is determined within the predetermined pattern having a first data component that includes attenuation data through the region. In a first method for evaluating the value of the discontinuity, the relative attenuation data is input to a local tomography function f_Λ to define the location S of the density discontinuity. The asymptotic behavior of f_Λ is determined in a neighborhood of S, and the value of the discontinuity is estimated from the asymptotic behavior of f_Λ. In a second method for evaluating the value of the discontinuity, a gradient value of a mollified local tomography function ∇f_Λε(x_ij) is determined along the discontinuity, and the value of the jump of the density across the discontinuity curve (or surface) S is estimated from the gradient values.
Dynamic physiological modeling for functional diffuse optical tomography
Diamond, Solomon Gilbert; Huppert, Theodore J.; Kolehmainen, Ville; Franceschini, Maria Angela; Kaipio, Jari P.; Arridge, Simon R.; Boas, David A.
2009-01-01
Diffuse optical tomography (DOT) is a noninvasive imaging technology that is sensitive to local concentration changes in oxy- and deoxyhemoglobin. When applied to functional neuroimaging, DOT measures hemodynamics in the scalp and brain that reflect competing metabolic demands and cardiovascular dynamics. The diffuse nature of near-infrared photon migration in tissue and the multitude of physiological systems that affect hemodynamics motivate the use of anatomical and physiological models to improve estimates of the functional hemodynamic response. In this paper, we present a linear state-space model for DOT analysis that models the physiological fluctuations present in the data with either static or dynamic estimation. We demonstrate the approach by using auxiliary measurements of blood pressure variability and heart rate variability as inputs to model the background physiology in DOT data. We evaluate the improvements accorded by modeling this physiology on ten human subjects with simulated functional hemodynamic responses added to the baseline physiology. Adding physiological modeling with a static estimator significantly improved estimates of the simulated functional response, and further significant improvements were achieved with a dynamic Kalman filter estimator (paired t tests, n = 10, P < 0.05). These results suggest that physiological modeling can improve DOT analysis. The further improvement with the Kalman filter encourages continued research into dynamic linear modeling of the physiology present in DOT. Cardiovascular dynamics also affect the blood-oxygen-dependent (BOLD) signal in functional magnetic resonance imaging (fMRI). This state-space approach to DOT analysis could be extended to BOLD fMRI analysis, multimodal studies and real-time analysis. PMID:16242967
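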
A selective-update affine projection algorithm with selective input vectors
NASA Astrophysics Data System (ADS)
Kong, NamWoong; Shin, JaeWook; Park, PooGyeon
2011-10-01
This paper proposes an affine projection algorithm (APA) with selective input vectors, based on the concept of selective update, in order to reduce estimation errors and computations. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking with the mean square error (MSE) whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter using the state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
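The core affine projection update with a selected block of K input vectors can be sketched as follows; the selection and state-decision logic of the paper is omitted:

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-4):
    """One affine projection update with the K most recent input vectors.
    X: (K, L) matrix of selected input vectors, d: (K,) desired samples."""
    e = d - X @ w                                   # a-priori errors
    G = X @ X.T + delta * np.eye(X.shape[0])        # regularised Gram matrix
    return w + mu * X.T @ np.linalg.solve(G, e), e

# identify an unknown FIR system driven by a colored input signal
rng = np.random.default_rng(8)
L, K, n = 16, 4, 5000
h = rng.normal(size=L)                              # unknown system
x = np.convolve(rng.normal(size=n), [1, 0.9, 0.5])[:n]   # colored input
w = np.zeros(L)
for t in range(L + K, n):
    X = np.array([x[t - k - L + 1:t - k + 1][::-1] for k in range(K)])
    d = X @ h
    w, e = apa_update(w, X, d)
print("coefficient error:", np.linalg.norm(w - h))
```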
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20-40% or even more. Therefore it is necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
NASA Astrophysics Data System (ADS)
Cescon, Marzia; Johansson, Rolf; Renard, Eric; Maran, Alberto
2014-07-01
One of the main limiting factors in improving glucose control for type 1 diabetes mellitus (T1DM) subjects is the lack of a precise description of meal and insulin intake effects on blood glucose. Knowing the magnitude and duration of such effects would be useful not only for patients and physicians, but also for the development of a controller targeting glycaemia regulation. Therefore, in this paper we focus on estimating low-complexity yet physiologically sound and individualised multi-input single-output (MISO) models of the glucose metabolism in T1DM able to reflect the basic dynamical features of the glucose-insulin metabolic system in response to a meal intake or an insulin injection. The models are continuous-time second-order transfer functions relating the amount of carbohydrate of a meal and the insulin units of the accordingly administered dose (inputs) to plasma glucose evolution (output) and consist of few parameters clinically relevant to be estimated. The estimation strategy is continuous-time data-driven system identification and exploits a database in which meals and insulin boluses are separated in time, allowing the unique identification of the model parameters.
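A sketch of how such low-complexity MISO dynamics can be encoded and simulated; the transfer-function structure follows the description above, but all gains and time constants are arbitrary illustrative values, not identified from patient data:

```python
import numpy as np
from scipy.signal import TransferFunction, lsim

# second-order transfer functions from carbohydrate intake and insulin dose to
# plasma glucose deviation: K / ((T1 s + 1)(T2 s + 1)); values are illustrative
Kg, T1, T2 = 2.0, 30.0, 60.0        # meal branch (time constants in minutes)
Ki, T3, T4 = -30.0, 40.0, 90.0      # insulin branch (negative gain lowers glucose)
G_meal = TransferFunction([Kg], [T1 * T2, T1 + T2, 1.0])
G_ins = TransferFunction([Ki], [T3 * T4, T3 + T4, 1.0])

t = np.arange(0.0, 480.0, 1.0)                    # 8 h on a 1-min grid
meal = np.zeros_like(t); meal[60:75] = 60 / 15    # 60 g CHO eaten over 15 min
bolus = np.zeros_like(t); bolus[60:65] = 6 / 5    # 6 U injected over 5 min

_, g_meal, _ = lsim(G_meal, meal, t)              # meal contribution
_, g_ins, _ = lsim(G_ins, bolus, t)               # insulin contribution
glucose_dev = g_meal + g_ins                      # MISO output: deviation from basal
print("peak deviation:", glucose_dev.max(), "nadir:", glucose_dev.min())
```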
NASA Astrophysics Data System (ADS)
Cara, Javier
2016-05-01
Modal parameters comprise natural frequencies, damping ratios, modal vectors and modal masses. In a theoretic framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state space model when input and output data are available (like in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model: the equations needed to compute natural frequencies, damping ratios and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has not generated much interest in technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
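The frequency/damping part of that computation is compact; a sketch using the eigenvalues of a discrete-time state matrix (the modal masses additionally involve the input and output matrices, which is the part derived in the paper):

```python
import numpy as np
from scipy.linalg import expm

def modal_parameters(A, dt):
    """Natural frequencies, damping ratios and (complex) mode shapes from the
    discrete-time state matrix A of an estimated state-space model."""
    lam, phi = np.linalg.eig(A)
    mu = np.log(lam) / dt              # continuous-time eigenvalues
    wn = np.abs(mu)                    # natural frequencies (rad/s)
    zeta = -np.real(mu) / wn           # damping ratios
    return wn / (2 * np.pi), zeta, phi # Hz, dimensionless, state-space mode shapes

# check on a single mass-spring-damper mode: fn = 2 Hz, zeta = 0.05
fn, z = 2.0, 0.05
wn = 2 * np.pi * fn
Ac = np.array([[0.0, 1.0], [-wn**2, -2 * z * wn]])
A = expm(Ac * 0.01)                    # exact discretisation, dt = 0.01 s
print(modal_parameters(A, 0.01)[:2])   # ~ (array([2., 2.]), array([0.05, 0.05]))
```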
Electric energy costs and firm productivity in the countries of the Pacific Alliance
NASA Astrophysics Data System (ADS)
Camacho, Anamaria
This paper explores the relation between energy as an input of production and firm-level productivity for Chile, Colombia, Mexico and Peru, all country members of the Pacific Alliance economic bloc. The empirical literature has explored the impact of infrastructure on productivity; however, there is limited analysis of the impact of particular infrastructure variables, such as energy, on productivity at the firm level in Latin America. Therefore, this study conducts a quantitative assessment of the responsiveness of productivity to energy cost and quality for Chile, Colombia, Mexico and Peru. For this, the empirical strategy is to estimate a Cobb-Douglas production function using the World Bank's Enterprise Survey to obtain comparable measures of output and inputs of production. This approach provides estimates of input factor elasticities for all of the factors of production including energy. The results indicate that electric energy costs explain cross-country differences in firm-level productivity. For the particular case of Colombia, the country exhibits the lowest capital and labor productivity of the PA, and firm output is highly responsive to changes in energy use. As a result, the evidence suggests that policies reducing electric energy costs are an efficient alternative to increase firm performance, particularly in the case of Colombia.
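A hedged sketch of the estimation step: a Cobb-Douglas production function is linear in logs, so factor elasticities can be recovered by ordinary least squares. The firm data below are synthetic placeholders, not Enterprise Survey values.

import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic capital K, labor L, energy E with "true" elasticities:
K, L, E = rng.lognormal(size=(3, n))
Y = 1.5 * K**0.3 * L**0.5 * E**0.15 * rng.lognormal(sigma=0.1, size=n)

# ln Y = b0 + bK ln K + bL ln L + bE ln E + u  ->  OLS
X = np.column_stack([np.ones(n), np.log(K), np.log(L), np.log(E)])
beta, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
print(dict(zip(["const", "K", "L", "E"], np.round(beta, 3))))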
Estimating Fast Neural Input Using Anatomical and Functional Connectivity
Eriksson, David
2016-01-01
In the last 20 years there has been an increased interest in estimating signals that are sent between neurons and brain areas. During this time many new methods have appeared for measuring those signals. Here we review a wide range of methods for which connected neurons can be identified anatomically, by tracing axons that run between the cells, or functionally, by detecting whether the activity of two neurons is correlated with a short lag. The signals that are sent between the neurons are represented by the activity in the neurons that are connected to the target population or by the activity at the corresponding synapses. The different methods not only differ in the accuracy of the signal measurement but they also differ in the type of signal being measured. For example, unselective recording of all neurons in the source population encompasses more indirect pathways to the target population than if one selectively records from the neurons that project to the target population. In fact, this degree of selectivity is similar to that of optogenetic perturbations; one can perturb selectively or unselectively. Thus, it becomes possible to match a given signal measurement method with a signal perturbation method, which allows for exact input control of any neuronal population. PMID:28066189
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-11-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps (i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano). This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source geographic information system Quantum GIS, which is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the selection of an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input data sets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
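As a sketch of the kernel-density step described above (scipy's default bandwidth rule stands in for QVAST's bandwidth-selection methods, and the vent coordinates are invented):

import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical past vent coordinates (km):
rng = np.random.default_rng(2)
vents = rng.normal(loc=[10.0, 5.0], scale=[3.0, 2.0], size=(40, 2))

kde = gaussian_kde(vents.T)            # Gaussian-kernel PDF estimate

# Evaluate the spatial PDF on a grid and normalize to a per-cell
# probability of future vent opening:
gx, gy = np.meshgrid(np.linspace(0, 20, 100), np.linspace(0, 10, 50))
pdf = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
prob = pdf / pdf.sum()
print(prob.max())

When several input data sets are available, the total susceptibility map can then be formed as a weighted sum of such per-data-set PDFs, as the abstract describes.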
Dynamic modal estimation using instrumental variables
NASA Technical Reports Server (NTRS)
Salzwedel, H.
1980-01-01
A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.
INDES User's guide multistep input design with nonlinear rotorcraft modeling
NASA Technical Reports Server (NTRS)
1979-01-01
The INDES computer program, a multistep input design program used as part of a data processing technique for rotorcraft systems identification, is described. Flight test inputs based on INDES improve the accuracy of parameter estimates. The input design algorithm, program input, and program output are presented.
Modeling of an intelligent pressure sensor using functional link artificial neural networks.
Patra, J C; van den Bos, A
2000-01-01
A capacitor pressure sensor (CPS) is modeled for accurate readout of applied pressure using a novel artificial neural network (ANN). The proposed functional link ANN (FLANN) is a computationally efficient nonlinear network and is capable of complex nonlinear mapping between its input and output pattern space. The nonlinearity is introduced into the FLANN by passing the input pattern through a functional expansion unit. Three different polynomial expansions, namely Chebyshev, Legendre, and power series, have been employed in the FLANN. The FLANN offers a computational advantage over a multilayer perceptron (MLP) for similar performance in modeling of the CPS. The prime aim of the present paper is to develop an intelligent model of the CPS involving less computational complexity, so that its implementation can be economical and robust. It is shown that, over a wide temperature variation ranging from -50 to 150 degrees C, the maximum error of estimation of pressure remains within +/- 3%. With the help of computer simulation, the performance of the three types of FLANN models has been compared to that of an MLP based model.
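A minimal sketch of the functional-expansion idea: a Chebyshev expansion unit feeding a single linear combiner trained with on-line LMS. The toy sensor curve, expansion order, and step size are invented for illustration.

import numpy as np

def cheby_expand(x, order=4):
    """Functional expansion unit: Chebyshev polynomials T_0..T_order of
    a scalar input scaled to [-1, 1]."""
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T, axis=-1)

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 2000)                                  # normalized input
y = 0.8 * x + 0.3 * x**3 + 0.01 * rng.standard_normal(2000)   # toy sensor curve

Phi = cheby_expand(x)              # expanded input patterns
w = np.zeros(Phi.shape[1])         # single linear combiner
mu = 0.05                          # LMS step size
for phi, target in zip(Phi, y):    # on-line LMS training
    w += mu * (target - phi @ w) * phi

print(round(np.abs(y - Phi @ w).max(), 3))   # worst-case model error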
Learning User Preferences for Sets of Objects
NASA Technical Reports Server (NTRS)
desJardins, Marie; Eaton, Eric; Wagstaff, Kiri L.
2006-01-01
Most work on preference learning has focused on pairwise preferences or rankings over individual items. In this paper, we present a method for learning preferences over sets of items. Our learning method takes as input a collection of positive examples--that is, one or more sets that have been identified by a user as desirable. Kernel density estimation is used to estimate the value function for individual items, and the desired set diversity is estimated from the average set diversity observed in the collection. Since this is a new learning problem, we introduce a new evaluation methodology and evaluate the learning method on two data collections: synthetic blocks-world data and a new real-world music data collection that we have gathered.
Remote sensing of agricultural crops and soils
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator)
1983-01-01
Research in the correlative and noncorrelative approaches to image registration and the spectral estimation of corn canopy phytomass and water content is reported. Scene radiation research results discussed include: corn and soybean LANDSAT MSS classification performance as a function of scene characteristics; estimating crop development stages from MSS data; the interception of photosynthetically active radiation in corn and soybean canopies; costs of measuring leaf area index of corn; LANDSAT spectral inputs to crop models, including the use of the greenness index to assess crop stress and the evaluation of MSS data for estimating corn and soybean development stages; field research experiment design, data acquisition, and preprocessing; and Sun-view angle studies of corn and soybean canopies in support of vegetation canopy reflection modeling.
Foreground effect on the J-factor estimation of classical dwarf spheroidal galaxies
NASA Astrophysics Data System (ADS)
Ichikawa, Koji; Ishigaki, Miho N.; Matsumoto, Shigeki; Ibe, Masahiro; Sugai, Hajime; Hayashi, Kohei; Horigome, Shun-ichi
2017-07-01
The gamma-ray observation of the dwarf spheroidal galaxies (dSphs) is a promising approach to search for the dark matter annihilation (or decay) signal. The dSphs are nearby satellite galaxies with clean environments and dense dark matter halos, so they give stringent constraints on O(1) TeV dark matter. However, recent studies have revealed that current estimates of the astrophysical factors relevant for dark matter searches are not conservative, as various non-negligible systematic uncertainties are not taken into account. Among them, the effect of foreground stars on the astrophysical factors has received little attention, although it becomes more important for deeper and wider stellar surveys in the future. In this article, we assess the effects of the foreground contamination by generating mock samples of stars and using a model of future spectrographs. We investigate various data cuts to optimize the quality of the data and find that cuts on the velocity and surface gravity can efficiently eliminate the contamination. We also propose a new likelihood function that includes the foreground distribution function. We apply this likelihood function to fits of three types of mock data (Ursa Minor, Draco with a large dark matter halo, and Draco with a small halo) and three observation cases. The likelihood successfully reproduces the input J-factor value, while the fit that ignores the foreground distribution deviates from the input value by a factor of 3.
DC servomechanism parameter identification: a Closed Loop Input Error approach.
Garrido, Ruben; Miranda, Roger
2012-01-01
This paper presents a Closed Loop Input Error (CLIE) approach for on-line parametric estimation of a continuous-time model of a DC servomechanism functioning in closed loop. A standard Proportional Derivative (PD) position controller stabilizes the loop without requiring knowledge of the servomechanism parameters. The analysis of the identification algorithm takes into account the control law employed for closing the loop. The model contains four parameters that depend on the servo inertia, viscous and Coulomb friction, as well as on a constant disturbance. Lyapunov stability theory permits assessing boundedness of the signals associated with the identification algorithm. Experiments on a laboratory prototype allow evaluating the performance of the approach. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Natsume, Takahiro; Ishida, Masaki; Kitagawa, Kakuya; Nagata, Motonori; Sakuma, Hajime; Ichihara, Takashi
2015-11-01
The purpose of this study was to develop a method to determine time discrepancies between input and myocardial time-signal intensity (TSI) curves for accurate estimation of myocardial perfusion with first-pass contrast-enhanced MRI. Estimation of myocardial perfusion with contrast-enhanced MRI using kinetic models requires faithful recording of contrast content in the blood and myocardium. Typically, the arterial input function (AIF) is obtained by setting a region of interest in the left ventricular cavity. However, there is a small delay between the AIF and the myocardial curves, and such time discrepancies can lead to errors in flow estimation using Patlak plot analysis. In this study, the time discrepancies between the arterial TSI curve and the myocardial tissue TSI curve were estimated based on the compartment model. In the early phase after the arrival of the contrast agent in the myocardium, the relationship between rate constant K1 and the concentrations of Gd-DTPA contrast agent in the myocardium and arterial blood (LV blood) can be described by the equation K1={dCmyo(tpeak)/dt}/Ca(tpeak), where Cmyo(t) and Ca(t) are the relative concentrations of Gd-DTPA contrast agent in the myocardium and in the LV blood, respectively, and tpeak is the time corresponding to the peak of Ca(t). In the ideal case, the time corresponding to the maximum upslope of Cmyo(t), tmax, is equal to tpeak. In practice, however, there is a small difference in the arrival times of the contrast agent into the LV and into the myocardium. This difference was estimated to correspond to the difference between tpeak and tmax. The magnitudes of such time discrepancies and the effectiveness of the correction for these time discrepancies were measured in 18 subjects who underwent myocardial perfusion MRI under rest and stress conditions. The effects of the time discrepancies could be corrected effectively in the myocardial perfusion estimates. Copyright © 2015 Elsevier Inc. All rights reserved.
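A minimal sketch of the timing step described above, assuming toy gamma-variate curves with a built-in delay (illustrative only): t_peak is taken from the arterial curve and t_max from the maximum upslope of the myocardial curve, and their difference is the discrepancy to correct.

import numpy as np

def time_discrepancy(t, ca, cmyo):
    """Difference between the time of maximum upslope of the myocardial
    curve (t_max) and the AIF peak time (t_peak)."""
    t_peak = t[np.argmax(ca)]
    t_max = t[np.argmax(np.gradient(cmyo, t))]
    return t_max - t_peak

# Toy curves with a known 2 s arrival-time difference:
t = np.linspace(0, 60, 601)                          # seconds
gamma = lambda t0: np.clip(t - t0, 0, None)**3 * np.exp(-(t - t0) / 4.0)
ca = gamma(5.0)                                      # arterial (LV) curve
cmyo = 0.1 * np.cumsum(gamma(7.0)) * (t[1] - t[0])   # tissue accumulates

print(time_discrepancy(t, ca, cmyo))                 # ≈ 2.0 s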
Software for Quantifying and Simulating Microsatellite Genotyping Error
Johnson, Paul C.D.; Haydon, Daniel T.
2007-01-01
Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126
Comparison of Dynamic Contrast Enhanced MRI and Quantitative SPECT in a Rat Glioma Model
Skinner, Jack T.; Yankeelov, Thomas E.; Peterson, Todd E.; Does, Mark D.
2012-01-01
Pharmacokinetic modeling of dynamic contrast enhanced (DCE)-MRI data provides measures of the extracellular volume fraction (ve) and the volume transfer constant (Ktrans) in a given tissue. These parameter estimates may be biased, however, by confounding issues such as contrast agent and tissue water dynamics, or assumptions of vascularization and perfusion made by the commonly used model. In contrast to MRI, radiotracer imaging with SPECT is insensitive to water dynamics. A quantitative dual-isotope SPECT technique was developed to obtain an estimate of ve in a rat glioma model for comparison to the corresponding estimates obtained using DCE-MRI with a vascular input function (VIF) and reference region model (RR). Both DCE-MRI methods produced consistently larger estimates of ve in comparison to the SPECT estimates, and several experimental sources were postulated to contribute to these differences. PMID:22991315
Dual ant colony operational modal analysis parameter estimation method
NASA Astrophysics Data System (ADS)
Sitarz, Piotr; Powałka, Bartosz
2018-01-01
Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain, others in the frequency domain; the former use correlation functions, the latter spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.
Feeney, Daniel F; Mani, Diba; Enoka, Roger M
2018-06-07
We investigated the associations between grooved pegboard times, force steadiness (coefficient of variation for force), and variability in an estimate of the common synaptic input to motor neurons innervating the wrist extensor muscles during steady contractions performed by young and older adults. The discharge times of motor units were derived from recordings obtained with high-density surface electrodes while participants performed steady isometric contractions at 10% and 20% of maximal voluntary contraction (MVC) force. The steady contractions were performed with a pinch grip and wrist extension, both independently (single action) and concurrently (double action). The variance in common synaptic input to motor neurons was estimated with a state-space model of the latent common input dynamics. There was a statistically significant association between the coefficient of variation for force during the steady contractions and the estimated variance in common synaptic input in young (r^2 = 0.31) and older (r^2 = 0.39) adults, but not between either the mean or the coefficient of variation for interspike interval of single motor units with the coefficient of variation for force. Moreover, the estimated variance in common synaptic input during the double-action task with the wrist extensors at the 20% target was significantly associated with grooved pegboard time (r^2 = 0.47) for older adults, but not young adults. These findings indicate that longer pegboard times of older adults were associated with worse force steadiness and greater fluctuations in the estimated common synaptic input to motor neurons during steady contractions. This article is protected by copyright. All rights reserved.
NASA Astrophysics Data System (ADS)
Molina, S.; Lang, D. H.; Lindholm, C. D.
2010-03-01
The era of earthquake risk and loss estimation basically began with the seminal paper on hazard by Allin Cornell in 1968. Following the 1971 San Fernando earthquake, the first studies placed strong emphasis on the prediction of human losses (number of casualties and injured used to estimate the needs in terms of health care and shelters in the immediate aftermath of a strong event). In contrast to these early risk modeling efforts, later studies have focused on the disruption of the serviceability of roads, telecommunications and other important lifeline systems. In the 1990s, the National Institute of Building Sciences (NIBS) developed a tool (HAZUS®99) for the Federal Emergency Management Agency (FEMA), where the goal was to incorporate the best quantitative methodology in earthquake loss estimates. Herein, the current version of the open-source risk and loss estimation software SELENA v4.1 is presented. While using the spectral displacement-based approach (capacity spectrum method), this fully self-contained tool analytically computes the degree of damage on specific building typologies as well as the associated economic losses and number of casualties. The earthquake ground shaking estimates for SELENA v4.1 can be calculated or provided in three different ways: deterministic, probabilistic or based on near-real-time data. The main distinguishing feature of SELENA compared to other risk estimation software tools is that it is implemented in a 'logic tree' computation scheme which accounts for uncertainties of any input (e.g., scenario earthquake parameters, ground-motion prediction equations, soil models) or inventory data (e.g., building typology, capacity curves and fragility functions). Each input used in the analysis is assigned a decimal weighting factor defining the weight of the respective branch of the logic tree. The weighting of the input parameters accounts for the epistemic and aleatoric uncertainties that will always follow the necessary parameterization of the different types of input data. Like previous SELENA versions, SELENA v4.1 is coded in MATLAB, which allows for easy dissemination among the scientific-technical community. Furthermore, any user has access to the source code in order to adapt, improve or refine the tool according to his or her particular needs. The handling of SELENA's current version and the provision of input data is customized for an academic environment, but the tool can then support decision-makers of local, state and regional governmental agencies in estimating possible losses from future earthquakes.
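A hedged sketch of the 'logic tree' bookkeeping, independent of SELENA's actual MATLAB implementation: each branch carries the product of its decimal weights, and branch results are combined as a weighted mean. The branch labels, weights, and loss values below are invented placeholders.

import numpy as np

# Two uncertain inputs, each with weighted alternatives:
gmpe_w = {"GMPE_A": 0.6, "GMPE_B": 0.4}
soil_w = {"soil_1": 0.5, "soil_2": 0.5}
loss = {  # estimated loss per branch (arbitrary units)
    ("GMPE_A", "soil_1"): 120.0, ("GMPE_A", "soil_2"): 150.0,
    ("GMPE_B", "soil_1"): 180.0, ("GMPE_B", "soil_2"): 210.0,
}

branches = [(gw * sw, loss[(g, s)])
            for g, gw in gmpe_w.items()
            for s, sw in soil_w.items()]
weights, values = map(np.array, zip(*branches))
assert np.isclose(weights.sum(), 1.0)   # branch weights sum to one

print(np.sum(weights * values))         # weighted mean loss over the tree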
NASA Technical Reports Server (NTRS)
Delp, P.; Crossman, E. R. F. W.; Szostak, H.
1972-01-01
The automobile-driver describing function for lateral position control was estimated for three subjects from frequency response analysis of straight road test results. The measurement procedure employed an instrumented full size sedan with known steering response characteristics, and equipped with a lateral lane position measuring device based on video detection of white stripe lane markings. Forcing functions were inserted through a servo driven double steering wheel coupling the driver to the steering system proper. Random appearing, Gaussian, and transient time functions were used. The quasi-linear models fitted to the random appearing input frequency response characterized the driver as compensating for lateral position error in a proportional, derivative, and integral manner. Similar parameters were fitted to the Gabor transformed frequency response of the driver to transient functions. A fourth term corresponding to response to lateral acceleration was determined by matching the time response histories of the model to the experimental results. The time histories show evidence of pulse-like nonlinear behavior during extended response to step transients which appear as high frequency remnant power.
Constant Switching Frequency DTC for Matrix Converter Fed Speed Sensorless Induction Motor Drive
NASA Astrophysics Data System (ADS)
Mir, Tabish Nazir; Singh, Bhim; Bhat, Abdul Hamid
2018-05-01
The paper presents a constant switching frequency scheme for speed sensorless Direct Torque Control (DTC) of a matrix converter fed induction motor drive. The use of a matrix converter facilitates improved power quality on the input as well as the motor side, along with input power factor control, besides eliminating the need for heavy passive elements. Moreover, DTC through Space Vector Modulation helps achieve fast control over the torque and flux of the motor, with the added benefit of constant switching frequency. A constant switching frequency aids in maintaining the desired power quality of the AC mains current even at low motor speeds, and simplifies the input filter design of the matrix converter, as compared to conventional hysteresis-based DTC. Further, the stator voltage is estimated from the sensed input voltage, and the stator (and rotor) fluxes are subsequently estimated. For speed sensorless operation, a Model Reference Adaptive System is used, which emulates the speed-dependent rotor flux equations of the induction motor. The error between the conventionally estimated rotor flux (reference model) and the rotor flux estimated through the adaptive observer is processed through a PI controller to generate the rotor speed estimate.
Identification of differences in health impact modelling of salt reduction
Geleijnse, Johanna M.; van Raaij, Joop M. A.; Cappuccio, Francesco P.; Cobiac, Linda C.; Scarborough, Peter; Nusselder, Wilma J.; Jaccard, Abbygail; Boshuizen, Hendriek C.
2017-01-01
We examined whether specific input data and assumptions explain outcome differences in otherwise comparable health impact assessment models. Seven population health models estimating the impact of salt reduction on morbidity and mortality in western populations were compared on four sets of key features, their underlying assumptions and input data. Next, assumptions and input data were varied one by one in a default approach (the DYNAMO-HIA model) to examine how it influences the estimated health impact. Major differences in outcome were related to the size and shape of the dose-response relation between salt and blood pressure and blood pressure and disease. Modifying the effect sizes in the salt to health association resulted in the largest change in health impact estimates (33% lower), whereas other changes had less influence. Differences in health impact assessment model structure and input data may affect the health impact estimate. Therefore, clearly defined assumptions and transparent reporting for different models is crucial. However, the estimated impact of salt reduction was substantial in all of the models used, emphasizing the need for public health actions. PMID:29182636
GET electronics samples data analysis
NASA Astrophysics Data System (ADS)
Giovinazzo, J.; Goigoux, T.; Anvar, S.; Baron, P.; Blank, B.; Delagnes, E.; Grinyer, G. F.; Pancin, J.; Pedroza, J. L.; Pibernat, J.; Pollacco, E.; Rebii, A.; Roger, T.; Sizun, P.
2016-12-01
The General Electronics for TPCs (GET) has been developed to equip a generation of time projection chamber detectors for nuclear physics, and may also be used for a wider range of detector types. The goal of this paper is to propose first analysis procedures to be applied on raw data samples from the GET system, in order to correct for systematic effects observed on test measurements. We also present a method to estimate the response function of the GET system channels. The response function is required in analysis where the input signal needs to be reconstructed, in terms of time distribution, from the registered output samples.
A Method for Evaluating Tuning Functions of Single Neurons based on Mutual Information Maximization
NASA Astrophysics Data System (ADS)
Brostek, Lukas; Eggert, Thomas; Ono, Seiji; Mustari, Michael J.; Büttner, Ulrich; Glasauer, Stefan
2011-03-01
We introduce a novel approach for the evaluation of neuronal tuning functions, which can be expressed by the conditional probability of observing a spike given any combination of independent variables. This probability can be estimated from experimentally available data. By maximizing the mutual information between the probability distribution of the spike occurrence and that of the variables, the dependence of the spike on the input variables is maximized as well. We used this method to analyze the dependence of neuronal activity in cortical area MSTd on signals related to eye movement and retinal image movement.
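A minimal plug-in version of the quantity involved, assuming a toy sigmoidal tuning curve (the variable, the binning, and the tuning shape are invented for illustration): the mutual information between a binary spike indicator and a binned input variable.

import numpy as np

def mutual_information(spike, x, bins=10):
    """Plug-in estimate of MI (bits) between a binary spike train and a
    binned continuous variable."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    xb = np.digitize(x, edges)                 # bin labels 0..bins-1
    joint = np.zeros((2, bins))
    for s, b in zip(spike, xb):
        joint[int(s), b] += 1
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)
    px = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (ps @ px)[nz]))

rng = np.random.default_rng(4)
v = rng.normal(0, 10, 5000)                    # e.g., eye velocity (deg/s)
p = 1 / (1 + np.exp(-(v - 5) / 3))             # sigmoidal tuning
spikes = rng.random(5000) < p
print(round(mutual_information(spikes, v), 3))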
NASA Technical Reports Server (NTRS)
Tomaine, R. L.
1976-01-01
Flight test data from a large 'crane' type helicopter were collected and processed for the purpose of identifying vehicle rigid body stability and control derivatives. The process consisted of using digital and Kalman filtering techniques for state estimation and Extended Kalman filtering for parameter identification, utilizing a least squares algorithm for initial derivative and variance estimates. Data were processed for indicated airspeeds from 0 m/sec to 152 m/sec. Pulse, doublet and step control inputs were investigated. Digital filter frequency did not have a major effect on the identification process, while the initial derivative estimates and the estimated variances had an appreciable effect on many derivative estimates. The major derivatives identified agreed fairly well with analytical predictions and engineering experience. Doublet control inputs provided better results than pulse or step inputs.
NASA Astrophysics Data System (ADS)
Casasent, David P.; Shenoy, Rajesh
1997-10-01
Classification and pose estimation of distorted input objects are considered. The feature space trajectory representation of distorted views of an object is used with a new eigenfeature space. For a distorted input object, the closest trajectory denotes the class of the input and the closest line segment on it denotes its pose. If an input point is too far from a trajectory, it is rejected as clutter. New methods are presented for selecting Fukunaga-Koontz discriminant vectors and the number of dominant eigenvectors per class, and for determining training and test set compatibility.
Estimating Basic Preliminary Design Performances of Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Luz, Paul L.; Alexander, Reginald
2004-01-01
Aerodynamics and Performance Estimation Toolset is a collection of four software programs for rapidly estimating the preliminary design performance of aerospace vehicles by performing simplified calculations based on ballistic trajectories, the ideal rocket equation, and supersonic wedges in a standard atmosphere. The program consists of a set of Microsoft Excel worksheet subprograms. The input and output data are presented in a user-friendly format, and calculations are performed rapidly enough that the user can iterate among different trajectories and/or shapes to perform "what-if" studies. Estimates that can be computed by these programs include: 1. Ballistic trajectories as a function of departure angles, initial velocities, initial positions, and target altitudes, assuming point masses and no atmosphere. The program plots the trajectory in two dimensions and outputs the position, pitch, and velocity along the trajectory. 2. The "Rocket Equation" program calculates and plots the trade space for a vehicle's propellant mass fraction over a range of specific impulse and mission velocity values. 3. "Standard Atmosphere" will estimate the temperature, speed of sound, pressure, and air density as functions of altitude in a standard atmosphere. 4. "Supersonic Wedges" will calculate the free-stream, normal-shock, oblique-shock, and isentropic flow properties for a wedge-shaped body flying supersonically through a standard atmosphere. It will also calculate the maximum angle for which a shock remains attached, and the minimum Mach number for which a shock becomes attached, all as functions of the wedge angle, altitude, and Mach number.
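For instance, the "Rocket Equation" calculation reduces to a single expression: from delta_v = g0 * Isp * ln(m0/mf), the propellant mass fraction is 1 - exp(-delta_v / (g0 * Isp)). A tiny sketch of one point in that trade space:

import math

def propellant_mass_fraction(delta_v, isp, g0=9.80665):
    """Ideal rocket equation: delta_v = g0 * Isp * ln(m0 / mf), so the
    propellant mass fraction is 1 - mf / m0."""
    return 1.0 - math.exp(-delta_v / (g0 * isp))

# Example: 9.3 km/s mission velocity at Isp = 350 s
print(round(propellant_mass_fraction(9300.0, 350.0), 3))   # ≈ 0.934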
Trogdon, Justin G.; Subramanian, Sujha; Crouse, Wesley
2018-01-01
This study investigates the existence of economies of scale in the provision of breast and cervical cancer screening and diagnostic services by state National Breast and Cervical Cancer Early Detection Program (NBCCEDP) grantees. A translog cost function is estimated as a system with input factor share equations. The estimated cost function is then used to determine output levels for which average costs are decreasing (i.e., economies of scale exist). Data were collected from all state NBCCEDP programs and the District of Columbia for program years 2006–2007, 2008–2009 and 2009–2010 (N = 147). Costs included all programmatic and in-kind contributions from federal and non-federal sources, allocated to breast and cervical cancer screening activities. Output was measured by women served, women screened and cancers detected, separately by breast and cervical services for each measure. Inputs included labor, rent and utilities, clinical services, and quasi-fixed factors (e.g., percent of women eligible for screening by the NBCCEDP). 144 out of 147 program-years demonstrated significant economies of scale for women served and women screened; 136 out of 145 program-years displayed significant economies of scale for cancers detected. The cost data were self-reported by the NBCCEDP State programs. Quasi-fixed inputs were allowed to affect costs but not economies of scale or the share equations. The main analysis accounted for clustering of observations within State programs, but it did not make full use of the panel data. The average cost of providing breast and cervical cancer screening services decreases as the number of women screened and served increases. PMID:24326873
A Numerical Estimate of the Impact of Saharan Dust on the Mediterranean Trophic Web
NASA Astrophysics Data System (ADS)
Crise, A.; Crispi, G.
A first estimate of the importance of Saharan dust as an input of macronutrients to the phytoplankton standing crop concentration and primary production at basin scale is presented here, using a three-dimensional numerical model of the Mediterranean Sea. The numerical scheme adopted is a 1/4-degree resolution, 31-level MOM-based eco-hydrodynamical model with climatological ('perpetual year') forcings, coupled on-line with a structure including multi-nutrient, size-fractionated phytoplankton functional groups, herbivores and a parametrized recycling detritus submodel, so as to (explicitly or implicitly) include the major energy pathways of the upper-layer Mediterranean ecosystem. This model takes into account as potential limiting factors, among others, nitrogen (in its oxidized and reduced forms) and phosphorus. A gridded data set of (wet and dry) dust deposition over the Mediterranean, derived from the SKIRON operational model, is used to identify statistically the areas and the duration/intensity of the events. Starting from this averaging process, experiments are carried out to study dust-induced episodes of release of bioavailable phosphorus, which is supposed to be the limiting factor in the oligotrophic surface waters of the Mediterranean Sea. The metrics for evaluating the impact of deposition are phytoplankton standing crop, primary and export production, and switching in food web functioning. These global parameters, even if they cannot exhaust the wealth of information provided by the model, can help discriminate the sensitivity of the food web to the nutrient pulses induced by the deposition. First results of a scenario analysis of typical atmospheric input events provide evidence of the response of the upper-layer ecosystem and assess the sensitivity of the model predictions to the integrated intensity of external input.
GLACiAR: GaLAxy survey Completeness AlgoRithm
NASA Astrophysics Data System (ADS)
Carrasco, Daniela; Trenti, Michele; Mutch, Simon; Oesch, Pascal
2018-05-01
GLACiAR (GaLAxy survey Completeness AlgoRithm) estimates the completeness and selection functions in galaxy surveys. Tailored for multiband imaging surveys aimed at searching for high-redshift galaxies through the Lyman Break technique, the code can nevertheless be applied broadly. GLACiAR generates artificial galaxies that follow Sérsic profiles with different indexes and with customizable size, redshift and spectral energy distribution properties, adds them to input images, and measures the recovery rate.
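A minimal sketch of the artificial-galaxy step: rendering a Sérsic surface-brightness profile with the common approximation b_n ≈ 2n - 1/3 (adequate for n ≳ 0.5). The stamp size and parameters are arbitrary; GLACiAR's own generator additionally handles bands, noise, and injection into real images.

import numpy as np

def sersic(r, n, r_e, I_e):
    """Sérsic profile I(r) with b_n approximated as 2n - 1/3."""
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e)**(1.0 / n) - 1.0))

# Render a small artificial-galaxy stamp (arbitrary units):
y, x = np.mgrid[-32:32, -32:32]
r = np.hypot(x, y)
stamp = sersic(r, n=1.5, r_e=6.0, I_e=100.0)
print(stamp.shape, round(stamp.max(), 1))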
Quantifying similarity in reliability surfaces using the probability of agreement
Stevens, Nathaniel T.; Anderson-Cook, Christine Michaela
2017-03-30
When separate populations exhibit similar reliability as a function of multiple explanatory variables, combining them into a single population is tempting. This can simplify future predictions and reduce uncertainty associated with estimation. However, combining these populations may introduce bias if the underlying relationships are in fact different. The probability of agreement formally and intuitively quantifies the similarity of estimated reliability surfaces across a two-factor input space. An example from the reliability literature demonstrates the utility of the approach when deciding whether to combine two populations or to keep them as distinct. As a result, new graphical summaries provide strategies for visualizing the results.
Joint Smoothed l₀-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar.
Liu, Jing; Zhou, Weidong; Juwono, Filbert H
2017-05-08
Direction-of-arrival (DOA) estimation is usually confronted with a multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed l0-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To eliminate the white or colored Gaussian noises, the new method first obtains a low-complexity high-order cumulants based data matrix. Then, the proposed algorithm designs a joint smoothed function tailored for the MMV case, based on which a joint smoothed l0-norm sparse representation framework is constructed. Finally, for the MMV-based joint smoothed function, the corresponding gradient-based sparse signal reconstruction is designed, thus the DOA estimation can be achieved. The proposed method is a fast sparse representation algorithm, which can solve the MMV problem and perform well for both white and colored Gaussian noises. The proposed joint algorithm is about two orders of magnitude faster than the l1-norm minimization based methods, such as l1-SVD (singular value decomposition), RV (real-valued) l1-SVD and RV l1-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.
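For orientation, here is the basic single-measurement-vector smoothed l0-norm (SL0) iteration on which such methods build: the l0 norm is approximated by Gaussian functions of decreasing width sigma, with a gradient step followed by projection back onto the measurement constraint. This is not the paper's joint MMV variant or its cumulant preprocessing, and all sizes are illustrative.

import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decay=0.7, mu=2.0, inner=3):
    """Basic smoothed l0-norm sparse recovery for a single vector y."""
    A_pinv = A.T @ np.linalg.inv(A @ A.T)
    s = A_pinv @ y                          # minimum-l2 starting point
    sigma = 2.0 * np.abs(s).max()
    while sigma > sigma_min:
        for _ in range(inner):
            delta = s * np.exp(-s**2 / (2 * sigma**2))
            s = s - mu * delta              # step on the smoothed measure
            s = s - A_pinv @ (A @ s - y)    # project onto {s : A s = y}
        sigma *= sigma_decay
    return s

rng = np.random.default_rng(5)
A = rng.standard_normal((20, 60))
s_true = np.zeros(60)
s_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
print(np.round(sl0(A, A @ s_true)[[3, 17, 42]], 2))   # ≈ [1.0, -2.0, 0.5]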
Measuring the economic effects of Japan's Mikawa Port: Pre- and-post disaster assessments
NASA Astrophysics Data System (ADS)
Shibusawa, Hiroyuki; Miyata, Yuzuru
2017-10-01
This study examines the economic effects of Japan's Mikawa Port on Aichi Prefecture before and after a natural disaster interrupts its operations for one year. Using a regional input-output model, backward and forward linkage impacts are calculated along the waterfront where the auto industry is concentrated. In addition, economic damage from natural disasters is estimated. We assess the economic implications for the hinterland of Mikawa Port. Density functions of the backward and forward linkage impacts are derived. A production stoppage along the waterfront of Mikawa Port generates large indirect negative effects on the regional economy. The results show that the density functions of the total impacts are decreasing functions of distance, but that several sectors are characterized by non-decreasing functions.
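A hedged sketch of the input-output mechanics behind such linkage calculations: backward and forward linkages read off the Leontief inverse of a technical-coefficient matrix. The three-sector numbers are invented, not the Mikawa Port table, and forward linkages are taken as simple row sums here.

import numpy as np

# Toy technical coefficients: a_ij = input from sector i per unit
# output of sector j.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.25],
              [0.05, 0.10, 0.10]])

L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I - A)^-1

backward = L.sum(axis=0)           # column sums: backward linkages
forward  = L.sum(axis=1)           # row sums: forward linkages

# Output needed to meet a final-demand shock of 100 in sector 0:
x = L @ np.array([100.0, 0.0, 0.0])
print(np.round(backward, 3), np.round(forward, 3), np.round(x, 1))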
Event-by-Event Continuous Respiratory Motion Correction for Dynamic PET Imaging.
Yu, Yunhan; Chan, Chung; Ma, Tianyu; Liu, Yaqiang; Gallezot, Jean-Dominique; Naganawa, Mika; Kelada, Olivia J; Germino, Mary; Sinusas, Albert J; Carson, Richard E; Liu, Chi
2016-07-01
Existing respiratory motion-correction methods are applied only to static PET imaging. We have previously developed an event-by-event respiratory motion-correction method with correlations between internal organ motion and external respiratory signals (INTEX). This method is uniquely appropriate for dynamic imaging because it corrects motion for each time point. In this study, we applied INTEX to human dynamic PET studies with various tracers and investigated the impact on kinetic parameter estimation. The use of 3 tracers was investigated in a study of 12 human subjects: a myocardial perfusion tracer, (82)Rb (n = 7); a pancreatic β-cell tracer, (18)F-FP(+)DTBZ (n = 4); and a tumor hypoxia tracer, (18)F-fluoromisonidazole ((18)F-FMISO) (n = 1). Both rest and stress studies were performed for (82)Rb. The Anzai belt system was used to record respiratory motion. Three-dimensional internal organ motion in high temporal resolution was calculated by INTEX to guide event-by-event respiratory motion correction of target organs in each dynamic frame. Time-activity curves of regions of interest drawn based on end-expiration PET images were obtained. For (82)Rb studies, K1 was obtained with a 1-tissue model using a left-ventricle input function. Rest-stress myocardial blood flow (MBF) and coronary flow reserve (CFR) were determined. For (18)F-FP(+)DTBZ studies, the total volume of distribution was estimated with arterial input functions using the multilinear analysis 1 method. For the (18)F-FMISO study, the net uptake rate Ki was obtained with a 2-tissue irreversible model using a left-ventricle input function. All parameters were compared with the values derived without motion correction. With INTEX, K1 and MBF increased by 10% ± 12% and 15% ± 19%, respectively, for (82)Rb stress studies. CFR increased by 19% ± 21%. For studies with motion amplitudes greater than 8 mm (n = 3), K1, MBF, and CFR increased by 20% ± 12%, 30% ± 20%, and 34% ± 23%, respectively. For (82)Rb rest studies, INTEX had minimal effect on parameter estimation. The total volume of distribution of (18)F-FP(+)DTBZ and Ki of (18)F-FMISO increased by 17% ± 6% and 20%, respectively. Respiratory motion can have a substantial impact on dynamic PET in the thorax and abdomen. The INTEX method using continuous external motion data substantially changed parameters in kinetic modeling. More accurate estimation is expected with INTEX. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Estimating the average length of hospitalization due to pneumonia: a fuzzy approach.
Nascimento, L F C; Rizol, P M S R; Peneluppi, A P
2014-08-29
Exposure to air pollutants is associated with hospitalizations due to pneumonia in children. We hypothesized the length of hospitalization due to pneumonia may be dependent on air pollutant concentrations. Therefore, we built a computational model using fuzzy logic tools to predict the mean time of hospitalization due to pneumonia in children living in São José dos Campos, SP, Brazil. The model was built with four inputs related to pollutant concentrations and effective temperature, and the output was related to the mean length of hospitalization. Each input had two membership functions and the output had four membership functions, generating 16 rules. The model was validated against real data, and a receiver operating characteristic (ROC) curve was constructed to evaluate model performance. The values predicted by the model were significantly correlated with real data. Sulfur dioxide and particulate matter significantly predicted the mean length of hospitalization in lags 0, 1, and 2. This model can contribute to the care provided to children with pneumonia.
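A minimal sketch of the fuzzy rule machinery, reduced to a single input with two membership functions and weighted-average defuzzification; the breakpoints and consequents are invented, not the paper's calibrated four-input, 16-rule system.

import numpy as np

def ramp_up(x, a, b):
    """Shoulder membership function rising from 0 at a to 1 at b."""
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def mean_stay_days(conc):
    """Two membership functions on one input ('pollutant concentration',
    invented breakpoints) and two rules with crisp consequents."""
    high = ramp_up(conc, 0.0, 50.0)
    low = 1.0 - high
    w = np.array([low, high])    # rule firing strengths
    z = np.array([3.0, 7.0])     # consequents: mean stay in days
    return float(w @ z / w.sum())

print(mean_stay_days(10.0), mean_stay_days(45.0))   # 3.8, 6.6 days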
Schuwirth, Nele; Reichert, Peter
2013-02-01
For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes growth, death, and respiration of different taxa dependent on various environmental influence factors to estimate survival or extinction. Parameter and input uncertainties are propagated to the model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and resulting model adaptations will further increase the predictive accuracy of the model.
NASA Astrophysics Data System (ADS)
Kamynin, V. L.; Bukharova, T. I.
2017-01-01
We prove estimates of stability with respect to perturbations of input data for the solutions of inverse problems for degenerate parabolic equations with unbounded coefficients. An important feature of these estimates is that the constants are written out explicitly in terms of the input data of the problem.
Comparison of estimates of snow input to a small alpine watershed
R. A. Sommerfeld; R. C. Musselman; G. L. Wooldridge
1990-01-01
We have used five methods to estimate the snow water equivalent input to the Glacier Lakes Ecosystem Experiments Site (GLEES) in south-central Wyoming during the winter 1987-1988 and to obtain an estimate of the errors. The methods are: (1) the Martinec and Rango degree-day method; (2) Wooldridge et al. method of determining the average yearly snowfall from tree...
Chen, Chang Hao; McCullagh, Elizabeth A; Pun, Sio Hang; Mak, Peng Un; Vai, Mang I; Mak, Pui In; Klug, Achim; Lei, Tim C
2017-03-01
The ability to record and to control action potential firing in neuronal circuits is critical to understand how the brain functions. The objective of this study is to develop a monolithic integrated circuit (IC) to record action potentials and simultaneously control action potential firing using optogenetics. A low-noise and high input impedance (or low input capacitance) neural recording amplifier is combined with a high current laser/light-emitting diode (LED) driver in a single IC. The low input capacitance of the amplifier (9.7 pF) was achieved by adding a dedicated unity gain stage optimized for high impedance metal electrodes. The input referred noise of the amplifier is [Formula: see text], which is lower than the estimated thermal noise of the metal electrode. Thus, the action potentials originating from a single neuron can be recorded with a signal-to-noise ratio of at least 6.6. The LED/laser current driver delivers a maximum current of 330 mA, which is adequate for optogenetic control. The functionality of the IC was tested with an anesthetized Mongolian gerbil and auditory stimulated action potentials were recorded from the inferior colliculus. Spontaneous firings of fifth (trigeminal) nerve fibers were also inhibited using the optogenetic protein Halorhodopsin. Moreover, a noise model of the system was derived to guide the design. A single IC to measure and control action potentials using optogenetic proteins is realized so that more complicated behavioral neuroscience research and the translational neural disorder treatments become possible in the future.
van Mierlo, Pieter; Lie, Octavian; Staljanssens, Willeke; Coito, Ana; Vulliémoz, Serge
2018-04-26
We investigated the influence of processing steps in the estimation of multivariate directed functional connectivity during seizures recorded with intracranial EEG (iEEG) on seizure-onset zone (SOZ) localization. We studied the effect of (i) the number of nodes, (ii) time-series normalization, (iii) the choice of multivariate time-varying connectivity measure: Adaptive Directed Transfer Function (ADTF) or Adaptive Partial Directed Coherence (APDC) and (iv) graph theory measure: outdegree or shortest path length. First, simulations were performed to quantify the influence of the various processing steps on the accuracy to localize the SOZ. Afterwards, the SOZ was estimated from a 113-electrodes iEEG seizure recording and compared with the resection that rendered the patient seizure-free. The simulations revealed that ADTF is preferred over APDC to localize the SOZ from ictal iEEG recordings. Normalizing the time series before analysis resulted in an increase of 25-35% of correctly localized SOZ, while adding more nodes to the connectivity analysis led to a moderate decrease of 10%, when comparing 128 with 32 input nodes. The real-seizure connectivity estimates localized the SOZ inside the resection area using the ADTF coupled to outdegree or shortest path length. Our study showed that normalizing the time-series is an important pre-processing step, while adding nodes to the analysis did only marginally affect the SOZ localization. The study shows that directed multivariate Granger-based connectivity analysis is feasible with many input nodes (> 100) and that normalization of the time-series before connectivity analysis is preferred.
NASA Technical Reports Server (NTRS)
Kurmanaliyev, T. I.; Breslavets, A. V.
1974-01-01
The difficulties in obtaining exact calculation data for the labor input and estimated cost are noted. A method is proposed for calculating the labor cost of design work using provisional normative indexes for individual types of operations. Values of certain coefficients recommended for use in practical calculations of the labor input for the development of new scientific equipment for space research are presented.
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1988-01-01
This thesis reviews the technique established to clear channels in the power spectral estimate by applying linear combinations of well-known window functions to the autocorrelation function. Windowing the autocorrelation function is needed because the true autocorrelation is not generally available when computing the power spectral estimate; when applied, the windows reduce the effect that truncating the data (and possibly the autocorrelation) has on the power spectral estimate. Previous work showed that a single channel could be cleared, allowing the detection of a small peak in the presence of a large peak in the power spectral estimate. The utility of the method depends on its robustness across different input situations. We extend the analysis in this paper to include clearing up to three channels. We examine the relative positions of the spikes to each other and also the effect of taking different percentages of the autocorrelation lags in the power spectral estimate. This method could have application wherever the power spectrum is used. An example is beamforming for source location, where a small target can be located next to a large target. Other possibilities extend into seismic data processing. As the method becomes more automated, other applications may present themselves.
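A minimal sketch of the underlying windowed-autocorrelation (Blackman-Tukey style) estimate, using a plain Hamming lag window rather than the linear window combinations discussed; the test signal and the fraction of lags retained are illustrative.

import numpy as np

def windowed_acf_psd(x, max_lag):
    """PSD from the windowed sample autocorrelation: estimate r[k] for
    |k| <= max_lag, apply a lag window, then Fourier transform."""
    x = x - x.mean()
    n = len(x)
    full = np.correlate(x, x, mode="full") / n     # biased ACF estimate
    r = full[n - 1 - max_lag : n + max_lag]        # lags -max_lag..max_lag
    w = np.hamming(2 * max_lag + 1)                # lag window
    psd = np.abs(np.fft.rfft(r * w))
    return np.fft.rfftfreq(len(r)), psd

# A large and a small peak close in frequency, plus noise:
rng = np.random.default_rng(6)
t = np.arange(4096)
x = (np.sin(0.2 * np.pi * t) + 0.2 * np.sin(0.3 * np.pi * t)
     + 0.1 * rng.standard_normal(t.size))
f, p = windowed_acf_psd(x, max_lag=256)            # ~6% of available lags

peaks = (p[1:-1] > p[:-2]) & (p[1:-1] >= p[2:]) & (p[1:-1] > 0.01 * p.max())
print(np.round(f[1:-1][peaks], 3))                 # ≈ [0.1, 0.15]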
Identifiability Results for Several Classes of Linear Compartment Models.
Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa
2015-08-01
Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.
NASA Astrophysics Data System (ADS)
Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian
2017-04-01
This paper proposes the combination of two model-free controller tuning techniques, namely linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning, referred to as a new mixed VRFT-Q learning approach. VRFT is first used to find a stabilising feedback controller using input-output experimental data from the process in a model reference tracking setting. Reinforcement Q-learning is next applied in the same setting using input-state experimental data collected under perturbed VRFT to ensure good exploration. The Q-learning controller learned with a batch fitted Q iteration algorithm uses two neural networks, one for the Q-function estimator and one for the controller, respectively. The VRFT-Q learning approach is validated on position control of a two-degrees-of-motion, open-loop stable, multi-input multi-output (MIMO) aerodynamic system (AS). Extensive simulations for the two independent control channels of the MIMO AS show that the Q-learning controllers clearly improve performance over the VRFT controllers.
The Impact of Structural Heterogeneity on Excitation-Inhibition Balance in Cortical Networks.
Landau, Itamar D; Egger, Robert; Dercksen, Vincent J; Oberlaender, Marcel; Sompolinsky, Haim
2016-12-07
Models of cortical dynamics often assume a homogeneous connectivity structure. However, we show that heterogeneous input connectivity can prevent the dynamic balance between excitation and inhibition, a hallmark of cortical dynamics, and yield unrealistically sparse and temporally regular firing. Anatomically based estimates of the connectivity of layer 4 (L4) rat barrel cortex and numerical simulations of this circuit indicate that the local network possesses substantial heterogeneity in input connectivity, sufficient to disrupt excitation-inhibition balance. We show that homeostatic plasticity in inhibitory synapses can align the functional connectivity to compensate for structural heterogeneity. Alternatively, spike-frequency adaptation can give rise to a novel state in which local firing rates adjust dynamically so that adaptation currents and synaptic inputs are balanced. This theory is supported by simulations of L4 barrel cortex during spontaneous and stimulus-evoked conditions. Our study shows how synaptic and cellular mechanisms yield fluctuation-driven dynamics despite structural heterogeneity in cortical circuits. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Simplifying BRDF input data for optical signature modeling
NASA Astrophysics Data System (ADS)
Hallberg, Tomas; Pohl, Anna; Fagerström, Jan
2017-05-01
Scene simulations of optical signature properties using signature codes normally require various parameterized measurement data of surfaces and coatings as input in order to achieve realistic scene object features. Some of the most important parameters are used in the model of the Bidirectional Reflectance Distribution Function (BRDF) and are normally determined by surface reflectance and scattering measurements. Reflectance measurements of the spectral Directional Hemispherical Reflectance (DHR) at various incident angles can be performed in most spectroscopy labs, while measuring the BRDF is more complicated and may not be possible at all in many optical labs. We present a method for deriving the necessary BRDF data for modeling software directly from DHR measurements, using the Sandford-Robertson BRDF model. The accuracy of the method is tested by modeling a test surface and comparing results obtained with estimated versus measured BRDF data as model input. These results show that the method incurs no significant loss in modeling accuracy.
Semework, Mulugeta; DiStasio, Marcello
2014-01-01
Recording the activity of large populations of neurons requires new methods to analyze and use the large volumes of time series data thus created. Fast and clear methods for finding functional connectivity are an important step toward the goal of understanding neural processing. This problem presents itself readily in somatosensory neuroprosthesis (SSNP) research, which uses microstimulation (MiSt) to activate neural tissue to mimic natural stimuli, and has the capacity to potentiate, depotentiate, or even destroy functional connections. As the aim of SSNP engineering is artificially creating neural responses that resemble those observed during natural inputs, a central goal is describing the influence of MiSt on activity structure among groups of neurons, and how this structure may be altered to affect perception or behavior. In this paper, we demonstrate the concept of Granger causality, combined with maximum likelihood methods, applied to neural signals recorded before, during, and after natural and electrical stimulation. We show how these analyses can be used to evaluate the changing interactions in the thalamocortical somatosensory system in response to repeated perturbation. Using LFPs recorded from the ventral posterolateral thalamus (VPL) and somatosensory cortex (S1) in anesthetized rats, we estimated pair-wise functional interactions between functional microdomains. The preliminary results demonstrate input-dependent modulations in the direction and strength of information flow during and after application of MiSt. Cortico-cortical interactions during cortical MiSt and baseline conditions showed the largest causal influence differences, while there was no statistically significant difference between pre- and post-stimulation baseline causal activities. These functional connectivity changes agree with physiologically accepted communication patterns through the network, and their particular parameters have implications for both rehabilitation and brain-machine interface SSNP applications. PMID:25249973
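As a schematic of the pair-wise testing step (not the authors' maximum-likelihood pipeline, and with synthetic signals standing in for LFPs), a Granger-causality check can be run directly with statsmodels:

```python
# Toy directional-influence test: y is driven by lagged x, so x should
# Granger-cause y but not vice versa.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 2] + 0.1 * rng.standard_normal()

# Convention: tests whether the 2nd column Granger-causes the 1st.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=4)
# res[lag][0]["ssr_ftest"] gives (F, p, df_denom, df_num) at each lag.
```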
Tsai, Jason S-H; Hsu, Wen-Teng; Lin, Long-Guei; Guo, Shu-Mei; Tann, Joseph W
2014-01-01
A modified nonlinear autoregressive moving average with exogenous inputs (NARMAX) model-based state-space self-tuner with fault tolerance is proposed in this paper for an unknown nonlinear stochastic hybrid system with a direct transmission matrix from input to output. The off-line observer/Kalman filter identification method provides a good initial guess of the modified NARMAX model, reducing the on-line system identification time. Then, based on the modified NARMAX system identification, a corresponding adaptive digital control scheme is presented for the unknown continuous-time nonlinear system with an input-output direct transmission term, measurement and system noises, and inaccessible system states. In addition, an effective state-space self-tuner with a fault-tolerance scheme is presented for the unknown multivariable stochastic system. A quantitative criterion based on the innovation process error estimated by the Kalman filter is suggested, and a weighting-matrix resetting technique, which adjusts and resets the covariance matrices of the parameter estimates obtained by the Kalman filter, is utilized to achieve parameter estimation for faulty-system recovery. Consequently, the proposed method can effectively cope with partially abrupt and/or gradual system faults and input failures via fault detection. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Pujol, Laure; Johnson, Nicholas Brian; Magras, Catherine; Albert, Isabelle; Membré, Jeanne-Marie
2015-10-15
In a previous study, a quantitative microbial exposure assessment (QMEA) model applied to an aseptic-UHT food process was developed [Pujol, L., Albert, I., Magras, C., Johnson, N. B., Membré, J. M. Probabilistic exposure assessment model to estimate aseptic UHT product failure rate. 2015. International Journal of Food Microbiology. 192, 124-141]. It quantified the Sterility Failure Rate (SFR) associated with Bacillus cereus and Geobacillus stearothermophilus per process module (nine modules in total, from raw material reception to end-product storage). Previously, the probabilistic model inputs were set by experts (using knowledge and in-house data), but only the variability dimension was taken into account. The model was then improved using expert elicitation in two ways. First, the model was refined by adding the uncertainty dimension to the probabilistic inputs, enabling a second-order Monte Carlo analysis. The following eight inputs, and their impact on SFR, are presented in detail in this study: the D-value of each bacterium of interest (B. cereus and G. stearothermophilus) in the inactivation model for the UHT treatment step (two inputs); the log (decimal) reduction number in the inactivation model for the packaging sterilization step, for each bacterium and each part of the packaging (product container and sealing component) (four inputs); and the bacterial spore air load of the aseptic tank and filler cabinet rooms (two inputs). Second, the model was extended by leveraging expert knowledge: the proportion of bacteria in the product that settle on the surfaces of pipes (between the UHT treatment and the aseptic tank, and between the aseptic tank and the filler cabinet), potentially leading to biofilm formation for each bacterium, was better characterized. It was modeled as a function of the hygienic design level of the aseptic-UHT line; the experts provided the model structure and most of the model parameter values. The mean SFR was estimated at 10×10⁻⁸ (95% Confidence Interval = [0×10⁻⁸; 350×10⁻⁸]) and 570×10⁻⁸ (95% CI = [380×10⁻⁸; 820×10⁻⁸]) for B. cereus and G. stearothermophilus, respectively. These estimates are more informative (since a confidence interval is provided) than those given by the variability-only model (15×10⁻⁸ and 580×10⁻⁸ for B. cereus and G. stearothermophilus, respectively). The updated model outputs were also compared with those obtained when inputs were described by a generic distribution, without case-study-specific information. Results showed that using a generic distribution can lead to unrealistic estimates (e.g., 3,181,000 product units contaminated by G. stearothermophilus among 10⁸ product units produced) and emphasized the added value of eliciting information from experts in the relevant specialist field. Copyright © 2015 Elsevier B.V. All rights reserved.
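The separation of variability from uncertainty in such a second-order Monte Carlo analysis is easy to miss in prose; the toy sketch below (all distributions and thresholds are invented placeholders, not the elicited values) shows the two nested sampling loops that produce a confidence interval around the failure-rate estimate.

```python
# Outer loop: *uncertainty* about an input (e.g., the mean log D-value
# elicited from experts). Inner loop: unit-to-unit *variability*.
import numpy as np

rng = np.random.default_rng(1)
n_outer, n_inner = 200, 100_000

rates = []
for _ in range(n_outer):
    mean_logD = rng.normal(0.6, 0.1)               # epistemic uncertainty
    logD = rng.normal(mean_logD, 0.15, n_inner)    # biological variability
    rates.append(np.mean(logD > 1.0))              # hypothetical failure rule

lo, med, hi = np.percentile(rates, [2.5, 50, 97.5])
print(f"median rate {med:.2e}, 95% interval [{lo:.2e}, {hi:.2e}]")
```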
Surround-Masking Affects Visual Estimation Ability
Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.
2017-01-01
Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contribute toward number acuity. The visual estimation judgments of typically developing adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround-masking results may help explain the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845
Ravicz, Michael E; Rosowski, John J
2013-10-01
The transfer function H(V) between stapes velocity V(S) and sound pressure near the tympanic membrane P(TM) is a descriptor of sound transmission through the middle ear (ME). The ME power transmission efficiency (MEE), the ratio of sound power entering the cochlea to power entering the middle ear, was computed from H(V) measured in seven chinchilla ears and previously reported measurements of ME input admittance Y(TM) and ME pressure gain G(MEP) [Ravicz and Rosowski, J. Acoust. Soc. Am. 132, 2437-2454 (2012); J. Acoust. Soc. Am. 133, 2208-2223 (2013)] in the same ears. The ME was open, and a pressure sensor was inserted into the cochlear vestibule for most measurements. The cochlear input admittance Y(C) computed from H(V) and G(MEP) is controlled by a combination of mass and resistance and is consistent with a minimum-phase system up to 27 kHz. The real part Re{Y(C)}, which relates cochlear sound power to inner-ear sound pressure, decreased gradually with frequency up to 25 kHz and more rapidly above that. MEE was about 0.5 between 0.1 and 8 kHz, higher than previous estimates in this species, and decreased sharply at higher frequencies.
Elliott, Jonathan T.; Samkoe, Kimberley S.; Davis, Scott C.; Gunn, Jason R.; Paulsen, Keith D.; Roberts, David W.; Pogue, Brian W.
2017-01-01
Receptor concentration imaging (RCI) with targeted-untargeted optical dye pairs has enabled in vivo immunohistochemistry analysis in preclinical subcutaneous tumors. Successful application of RCI to fluorescence guided resection (FGR), so that quantitative molecular imaging of tumor-specific receptors could be performed in situ, would have a high impact. However, assumptions of pharmacokinetics, permeability and retention, as well as the lack of a suitable reference region limit the potential for RCI in human neurosurgery. In this study, an arterial input graphic analysis (AIGA) method is presented which is enabled by independent component analysis (ICA). The percent difference in arterial concentration between the image-derived arterial input function (AIFICA) and that obtained by an invasive method (ICACAR) was 2.0 ± 2.7% during the first hour of circulation of a targeted-untargeted dye pair in mice. Estimates of distribution volume and receptor concentration in tumor bearing mice (n = 5) recovered using the AIGA technique did not differ significantly from values obtained using invasive AIF measurements (p=0.12). The AIGA method, enabled by the subject-specific AIFICA, was also applied in a rat orthotopic model of U-251 glioblastoma to obtain the first reported receptor concentration and distribution volume maps during open craniotomy. PMID:26349671
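A bare-bones illustration of the ICA step that the AIGA method relies on is given below: voxel time courses are unmixed so that one recovered source resembles the early-peaking arterial curve. The curve shapes, mixing model, and peak-time selection rule are all synthetic stand-ins, not the paper's implementation.

```python
# Unmix voxel time-activity curves with FastICA and pick the
# earliest-peaking component as the image-derived AIF candidate.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.linspace(0.0, 60.0, 120)                  # minutes
aif = t * np.exp(-t / 5.0)                       # arterial-like source
tissue = np.convolve(aif, np.exp(-t / 20.0))[: t.size] * (t[1] - t[0])

M = rng.uniform(0.2, 1.0, (500, 2))              # random voxel mixtures
X = M @ np.vstack([aif, tissue])
X += 0.01 * rng.standard_normal(X.shape)

S = FastICA(n_components=2, random_state=0).fit_transform(X.T).T
aif_est = S[np.argmin([np.argmax(np.abs(s)) for s in S])]
```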
Computing Functions by Approximating the Input
ERIC Educational Resources Information Center
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.
Rodrigo, Marianito R
2016-01-01
The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods that are used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure requires knowledge of neither the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use. © 2015 American Academy of Forensic Sciences.
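A hedged sketch of this fitting strategy appears below. The double-exponential parameterization is a generic Marshall-Hoare-style form, the ambient temperature is assumed known, and all numbers are synthetic; the point is only to show temperature readings going in and a time-of-death estimate coming out.

```python
# Fit a double-exponential cooling curve to timed temperature readings
# with nonlinear least squares, then read off the time of death.
import numpy as np
from scipy.optimize import least_squares

Ta = 20.0                                    # ambient temperature (assumed known)

def model(params, t):
    T0, k1, k2, tod = params                 # tod < 0: death before 1st reading
    w = k2 / (k2 - k1)                       # requires k2 > k1
    dt = t - tod
    return Ta + (T0 - Ta) * (w * np.exp(-k1 * dt) + (1 - w) * np.exp(-k2 * dt))

t_obs = np.arange(0.0, 3.0, 0.25)            # readings every 15 min (hours)
true = (37.2, 0.35, 1.4, -4.0)
T_obs = model(true, t_obs) + np.random.default_rng(3).normal(0, 0.05, t_obs.size)

fit = least_squares(lambda p: model(p, t_obs) - T_obs, x0=(37.0, 0.3, 1.0, -2.0))
print(f"death approx. {-fit.x[3]:.2f} h before the first reading")
```

Note that the temperature at death is estimated along with the two rates, consistent with the claim above that neither it nor the body mass needs to be known in advance.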
NASA Astrophysics Data System (ADS)
Kaida, Yukiko; Murakami, Toshiyuki
A wheelchair is an important mobility aid for people with disabilities. Power-assist motion in an electric wheelchair expands the operator's field of activities. This paper describes force-sensorless detection of human input torque. A reaction torque estimation observer first calculates the total disturbance torque; the human input torque is then extracted from the estimated disturbance. In power-assist motion, the assist torque is synthesized as the product of the assist gain and the average of the right and left input torques. Finally, the proposed method is verified through experiments on power-assist motion.
The determinants of dentists' productivity and the measurement of output.
Gutacker, Nils; Harris, Anthony; Brennan, David; Hollingsworth, Bruce
2015-01-01
Improving the productivity of the healthcare system, for example by taking advantage of scale economies or encouraging substitution of expensive specialist personnel with less expensive workers, is often seen as an attractive way to meet increasing demand within a constrained budget. Using data on 558 dentists participating in the Longitudinal Study of Dentists' Practice Activity (LSDPA) survey between 1993 and 2003 linked to patient data and average fee schedules, we estimate production functions for private dental services in Australia to quantify the contribution of different capital and labour inputs and identify economies of scale in the production of dental care. Given the challenges in measuring output in the healthcare setting, we discuss three different output measures (raw activity, time-, and price-weighted activity) and test the sensitivity of results to the choice of measure. Our results suggest that expansion of the scale of dental services is unlikely to be constrained by decreasing returns to scale. We note that conclusions about the contribution of individual input factors and the estimated returns to scale are sensitive to the choice of output measure employed. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Beijen, Michiel A.; Voorhoeve, Robbert; Heertjes, Marcel F.; Oomen, Tom
2018-07-01
Vibration isolation is essential for industrial high-precision systems to suppress external disturbances. The aim of this paper is to develop a general identification approach to estimate the frequency response function (FRF) of the transmissibility matrix, which is a key performance indicator for vibration isolation systems. The major challenge lies in obtaining a good signal-to-noise ratio in view of a large system weight. A non-parametric system identification method is proposed that combines floor and shaker excitations. Furthermore, a method is presented to analyze the input power spectrum of the floor excitations, both in terms of magnitude and direction. In turn, the input design of the shaker excitation signals is investigated to obtain sufficient excitation power in all directions with minimum experiment cost. The proposed methods are shown to provide an accurate FRF of the transmissibility matrix in three relevant directions on an industrial active vibration isolation system over a large frequency range. This demonstrates that, despite their heavy weight, industrial vibration isolation systems can be accurately identified using this approach.
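For readers unfamiliar with non-parametric FRF estimation, the core computation is a ratio of Welch-averaged cross- and auto-spectra. The sketch below applies the standard H1 estimator to a synthetic single-degree-of-freedom response; it is a generic illustration, not the paper's combined floor-and-shaker scheme.

```python
# H1 FRF estimate H(f) = S_uy / S_uu from broadband excitation data.
import numpy as np
from scipy.signal import csd, welch, lti

fs = 1024.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(4)
u = rng.standard_normal(t.size)                  # broadband excitation

wn, zeta = 2 * np.pi * 50, 0.02                  # lightly damped mode at 50 Hz
sys = lti([1.0], [1.0, 2 * zeta * wn, wn**2])
_, y, _ = sys.output(u, t)
y = y + 1e-6 * rng.standard_normal(t.size)       # sensor noise

f, Suu = welch(u, fs=fs, nperseg=4096)
_, Suy = csd(u, y, fs=fs, nperseg=4096)
H1 = Suy / Suu                                   # complex FRF estimate
```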
NASA Astrophysics Data System (ADS)
Constantine, P. G.; Emory, M.; Larsson, J.; Iaccarino, G.
2015-12-01
We present a computational analysis of the reactive flow in a hypersonic scramjet engine with focus on effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace identifies one-dimensional structure in the map from simulation inputs to quantity of interest that allows us to reparameterize the operating conditions; instead of seven physical parameters, we can use a single derived active variable. This dimension reduction enables otherwise infeasible uncertainty quantification, considering the simulation cost of roughly 9500 CPU-hours per run. For two values of the fuel injection rate, we use a total of 68 simulations to (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) estimate upper and lower bounds on the quantity of interest, (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest, and (iv) estimate a cumulative distribution function for the quantity of interest.
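The active-subspace construction itself is compact enough to show in full: average the outer products of sampled gradients, eigendecompose, and look for a spectral gap. The quantity of interest below is a toy with built-in one-dimensional structure, standing in for the scramjet simulations.

```python
# Active subspace of a toy quantity of interest f(x) = sin(w . x).
import numpy as np

rng = np.random.default_rng(5)
w = np.array([1.0, 0.8, -0.5, 0.3, 0.1, 0.05, 0.02])   # 7 inputs, as above

def grad_f(x):
    return np.cos(w @ x) * w

X = rng.uniform(-1, 1, (68, 7))              # 68 samples, matching the study
G = np.array([grad_f(x) for x in X])
C = G.T @ G / len(X)                         # estimate of E[grad f grad f^T]

vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1], vecs[:, ::-1]
print(vals / vals[0])                        # large gap after 1st eigenvalue
active_var = X @ vecs[:, 0]                  # the single derived variable
```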
Ator, Scott; Garcia, Ana Maria.
2016-01-01
Understanding spatial variability in contaminant fate and transport is critical to efficient regional water-quality restoration. An approach to capitalize on previously calibrated spatially referenced regression (SPARROW) models to improve the understanding of contaminant fate and transport was developed and applied to the case of nitrogen in the 166,000 km2 Chesapeake Bay watershed. A continuous function of four hydrogeologic, soil, and other landscape properties significant (α = 0.10) to nitrogen transport from uplands to streams was evaluated and compared among each of the more than 80,000 individual catchments (mean area, 2.1 km2) in the watershed. Budgets (including inputs, losses or net change in storage in uplands and stream corridors, and delivery to tidal waters) were also estimated for nitrogen applied to these catchments from selected upland sources. Most (81%) of such inputs are removed, retained, or otherwise processed in uplands rather than transported to surface waters. Combining SPARROW results with previous budget estimates suggests 55% of this processing is attributable to denitrification, 23% to crop or timber harvest, and 6% to volatilization. Remaining upland inputs represent a net annual increase in landscape storage in soils or biomass exceeding 10 kg per hectare in some areas. Such insights are important for planning watershed restoration and for improving future watershed models.
Optimized System Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Longman, Richard W.
1999-01-01
In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables, and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman Identification) algorithm result. Not only has OKID proved very effective in practice, it minimizes an output error of an observer, which has the property that, as the data set gets large, it converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations. Examples show that the methods developed here eliminate the bias, often observed with other system identification methods, of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.
NASA Technical Reports Server (NTRS)
Napolitano, Marcello R.
1996-01-01
This progress report presents the results of an investigation focused on parameter identification for the NASA F/A-18 HARV. This aircraft was used in the high alpha research program at the NASA Dryden Flight Research Center. In this study the longitudinal and lateral-directional stability derivatives are estimated from flight data using the Maximum Likelihood method coupled with a Newton-Raphson minimization technique. The objective is to estimate an aerodynamic model describing the aircraft dynamics over a range of angle of attack from 5 deg to 60 deg. The mathematical model is built using the traditional static and dynamic derivative buildup. Flight data used in this analysis were from a variety of maneuvers. The longitudinal maneuvers included large amplitude multiple doublets, optimal inputs, frequency sweeps, and pilot pitch stick inputs. The lateral-directional maneuvers consisted of large amplitude multiple doublets, optimal inputs and pilot stick and rudder inputs. The parameter estimation code pEst, developed at NASA Dryden, was used in this investigation. Results of the estimation process from alpha = 5 deg to alpha = 60 deg are presented and discussed.
Sensitivity analysis of the near-road dispersion model RLINE - An evaluation at Detroit, Michigan
NASA Astrophysics Data System (ADS)
Milando, Chad W.; Batterman, Stuart A.
2018-05-01
The development of accurate and appropriate exposure metrics for health effect studies of traffic-related air pollutants (TRAPs) remains challenging and important given that traffic has become the dominant urban exposure source and that exposure estimates can affect estimates of associated health risk. Exposure estimates obtained using dispersion models can overcome many of the limitations of monitoring data, and such estimates have been used in several recent health studies. This study examines the sensitivity of exposure estimates produced by dispersion models to meteorological, emission and traffic allocation inputs, focusing on applications to health studies examining near-road exposures to TRAP. Daily average concentrations of CO and NOx predicted using the Research Line source model (RLINE) and a spatially and temporally resolved mobile source emissions inventory are compared to ambient measurements at near-road monitoring sites in Detroit, MI, and are used to assess the potential for exposure measurement error in cohort and population-based studies. Sensitivity of exposure estimates is assessed by comparing nominal and alternative model inputs using statistical performance evaluation metrics and three sets of receptors. The analysis shows considerable sensitivity to meteorological inputs; generally the best performance was obtained using data specific to each monitoring site. An updated emission factor database provided some improvement, particularly at near-road sites, while the use of site-specific diurnal traffic allocations did not improve performance compared to simpler default profiles. Overall, this study highlights the need for appropriate inputs, especially meteorological inputs, to dispersion models aimed at estimating near-road concentrations of TRAPs. It also highlights the potential for systematic biases that might affect analyses that use concentration predictions as exposure measures in health studies.
Ning, Jia; Sun, Yongliang; Xie, Sheng; Zhang, Bida; Huang, Feng; Koken, Peter; Smink, Jouke; Yuan, Chun; Chen, Huijun
2018-05-01
To propose a simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) for liver dynamic contrast-enhanced MRI. The proposed SAHA simultaneously acquires high temporal-resolution 2D images for vascular input function extraction using Cartesian sampling, and 3D large-coverage, high spatial-resolution liver dynamic contrast-enhanced images using a golden-angle stack-of-stars acquisition, in an interleaved manner. Simulations were conducted to investigate the accuracy of SAHA in pharmacokinetic analysis. A healthy volunteer and three patients with cirrhosis or hepatocellular carcinoma (HCC) were included in the study to investigate the feasibility of SAHA in vivo. Simulation studies showed that SAHA provides results closer to the true values and lower root mean square error of the estimated pharmacokinetic parameters in all tested scenarios. The in vivo scans provided fair image quality for both the 2D images used for the arterial and portal venous input functions and the 3D whole-liver images. The in vivo fitting results showed that the perfusion parameters of healthy liver were significantly different from those of cirrhotic liver and HCC. The proposed SAHA provides improved accuracy in pharmacokinetic modeling and is feasible in human liver dynamic contrast-enhanced MRI, suggesting that SAHA is a potential tool for liver dynamic contrast-enhanced MRI. Magn Reson Med 79:2629-2641, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Monte Carlo Solution to Find Input Parameters in Systems Design Problems
NASA Astrophysics Data System (ADS)
Arsham, Hossein
2013-06-01
Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two different problems, what-if and goal-seeking problems, are explained and defined in an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single run simulation is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
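As a flavor of the validation step mentioned last, the snippet below estimates the MTTF of a small reliability subsystem by simulation and checks it against the closed-form value. The two-component parallel configuration and failure rates are assumptions of this sketch, not the paper's example.

```python
# Monte Carlo MTTF of two exponential components in parallel.
import numpy as np

rng = np.random.default_rng(6)
lam1, lam2, n = 0.5, 0.2, 200_000            # failure rates [1/h], sample size

t1 = rng.exponential(1 / lam1, n)
t2 = rng.exponential(1 / lam2, n)
mttf_sim = np.maximum(t1, t2).mean()         # parallel: fails when both fail

mttf_exact = 1 / lam1 + 1 / lam2 - 1 / (lam1 + lam2)
print(mttf_sim, mttf_exact)                  # should agree closely
```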
Performance limitations of a white light extrinsic Fabry-Perot interferometric displacement sensor
NASA Astrophysics Data System (ADS)
Moro, Erik A.; Todd, Michael D.; Puckett, Anthony D.
2012-06-01
Non-contacting interferometric fiber optic sensors offer a minimally invasive, high-accuracy means of measuring a structure's kinematic response to loading. The performance of interferometric sensors is often dictated by the technique employed for demodulating the kinematic measurand of interest from phase in the observed optical signal. In this paper a white-light extrinsic Fabry-Perot interferometer is implemented, offering robust displacement sensing performance. Displacement data are extracted from an estimate of the power spectral density, calculated from the interferometer's received optical power measured as a function of optical transmission frequency, and the sensor's performance is dictated by the details surrounding the implementation of this power spectral density estimation. One advantage of this particular type of interferometric sensor is that many of its control parameters (e.g., frequency range, frequency sampling density, sampling rate, etc.) may be chosen so that the sensor satisfies application-specific performance needs in metrics such as bandwidth, axial displacement range, displacement resolution, and accuracy. A suite of user-controlled input values is investigated for estimating the spectrum of power versus wavelength data, and the relationships between performance metrics and input parameters are described in an effort to characterize the sensor's operational performance limitations. This work has been approved by Los Alamos National Laboratory for unlimited public release (LA-UR 12-01512).
Approximate Bayesian evaluations of measurement uncertainty
NASA Astrophysics Data System (ADS)
Possolo, Antonio; Bodnar, Olha
2018-04-01
The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
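The GUM formulas being approximated are themselves one line of algebra: combine input uncertainties through the sensitivity coefficients of the measurement function. A numerical rendition on an invented example (resistance from measured voltage and current) is sketched below; the Bayesian refinement described in the abstract layers prior information about the measurand on top of this.

```python
# GUM law of propagation: u(y)^2 = sum_i (df/dx_i)^2 u(x_i)^2.
import numpy as np

def f(v, i):
    return v / i                             # measurand: R = V / I

x = np.array([5.02, 0.103])                  # estimates of V [V] and I [A]
u = np.array([0.01, 0.0005])                 # standard uncertainties

c = np.empty(2)                              # sensitivity coefficients
for k in range(2):
    d = np.zeros(2)
    d[k] = 1e-6 * max(abs(x[k]), 1.0)
    c[k] = (f(*(x + d)) - f(*(x - d))) / (2 * d[k])

y, u_y = f(*x), np.sqrt(np.sum((c * u) ** 2))
print(f"R = {y:.3f} ohm, u(R) = {u_y:.3f} ohm")
```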
Local classifier weighting by quadratic programming.
Cevikalp, Hakan; Polikar, Robi
2008-10-01
It has been widely accepted that the classification accuracy can be improved by combining outputs of multiple classifiers. However, how to combine multiple classifiers with various (potentially conflicting) decisions is still an open problem. A rich collection of classifier combination procedures, many of which are heuristic in nature, have been developed for this goal. In this brief, we describe a dynamic approach to combine classifiers that have expertise in different regions of the input space. To this end, we use local classifier accuracy estimates to weight classifier outputs. Specifically, we estimate local recognition accuracies of classifiers near a query sample by utilizing its nearest neighbors, and then use these estimates to find the best weights of classifiers to label the query. The problem is formulated as a convex quadratic optimization problem, which returns optimal nonnegative classifier weights with respect to the chosen objective function, and the weights ensure that locally most accurate classifiers are weighted more heavily for labeling the query sample. Experimental results on several data sets indicate that the proposed weighting scheme outperforms other popular classifier combination schemes, particularly on problems with complex decision boundaries. Hence, the results indicate that local classification-accuracy-based combination techniques are well suited for decision making when the classifiers are trained by focusing on different regions of the input space.
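A simplified rendering of the idea follows: estimate each classifier's accuracy on the validation samples nearest the query, then turn those accuracies into nonnegative weights. Where the paper solves a convex quadratic program for the weights, this sketch just normalizes the local accuracies, which conveys the mechanism but not the optimization.

```python
# Local-accuracy-based weighting of an ensemble near a query point.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_weights(classifiers, X_val, y_val, query, k=15):
    nn = NearestNeighbors(n_neighbors=k).fit(X_val)
    idx = nn.kneighbors(query.reshape(1, -1), return_distance=False)[0]
    acc = np.array([(clf.predict(X_val[idx]) == y_val[idx]).mean()
                    for clf in classifiers])   # local accuracy per classifier
    if acc.sum() == 0:
        return np.full(len(classifiers), 1 / len(classifiers))
    return acc / acc.sum()                     # nonnegative, sums to one

# The ensemble label is then a weighted vote over the classifier outputs.
```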
Bacciu, Davide; Starita, Antonina
2008-11-01
Determining a compact neural coding for a set of input stimuli is an issue that encompasses several biological memory mechanisms as well as various artificial neural network models. In particular, establishing the optimal network structure is still an open problem when dealing with unsupervised learning models. In this paper, we introduce a novel learning algorithm, named competitive repetition-suppression (CoRe) learning, inspired by a cortical memory mechanism called repetition suppression (RS). We show how such a mechanism is used, at various levels of the cerebral cortex, to generate compact neural representations of the visual stimuli. From the general CoRe learning model, we derive a clustering algorithm, named CoRe clustering, that can automatically estimate the unknown cluster number from the data without using a priori information concerning the input distribution. We illustrate how CoRe clustering, besides its biological plausibility, possesses strong theoretical properties in terms of robustness to noise and outliers, and we provide an error function describing CoRe learning dynamics. Such a description is used to analyze CoRe's relationships with state-of-the-art clustering models and to highlight CoRe's similarity to rival penalized competitive learning (RPCL), showing how CoRe extends such a model by strengthening the rival penalization estimation by means of loss functions from robust statistics.
Assessment of Antarctic Ice-Sheet Mass Balance Estimates: 1992 - 2009
NASA Technical Reports Server (NTRS)
Zwally, H. Jay; Giovinetto, Mario B.
2011-01-01
Published mass balance estimates for the Antarctic Ice Sheet (AIS) lie between approximately +50 and -250 Gt/year for 1992 to 2009, a range equivalent to 15% of the annual mass input and 0.8 mm/year Sea Level Equivalent (SLE). Two estimates from radar-altimeter measurements of elevation change by European Remote-sensing Satellites (ERS) (+28 and -31 Gt/year) lie in the upper part, whereas estimates from the Input-minus-Output Method (IOM) and the Gravity Recovery and Climate Experiment (GRACE) lie in the lower part (-40 to -246 Gt/year). We compare the various estimates, discuss the methodology used, and critically assess the results. Although recent reports of large and accelerating rates of mass loss from GRACE-based studies cite agreement with IOM results, our evaluation does not support that conclusion. We find that the extrapolation used in the published IOM estimates, for the 15% of the periphery where discharge velocities are not observed, gives twice the rate of discharge per unit of associated ice-sheet area as the faster-moving 85%. Our calculations show that the published extrapolation overestimates the ice discharge by 282 Gt/year compared to our assumption that the slower-moving areas have 70% as much discharge per area as the faster-moving parts. Also, published data on the time series of discharge velocities and accumulation/precipitation do not support mass output increases or input decreases with time, respectively. Our modified IOM estimate, using the 70% discharge assumption and substituting input from a field-data compilation for input from an atmospheric model over 6% of the area, gives a loss of only 13 Gt/year (versus 136 Gt/year) for the period around 2000. Two ERS-based estimates, our modified IOM, and a GRACE-based estimate for observations within 1992 to 2005 lie in a narrowed range of +27 to -40 Gt/year, which is about 3% of the annual mass input and only 0.2 mm/year SLE. Our preferred estimate for 1992-2001 is -47 Gt/year for West Antarctica, +16 Gt/year for East Antarctica, and -31 Gt/year overall (+0.1 mm/year SLE), not including part of the Antarctic Peninsula (1.07% of the AIS area).
NASA Technical Reports Server (NTRS)
Pierson, W. J., Jr.
1984-01-01
Backscatter measurements at upwind and crosswind are simulated for five incidence angles by means of the SASS-1 model function. The effects of communication noise and attitude errors are simulated by Monte Carlo methods, and the winds are recovered by both the Sum of Squares (SOS) algorithm and a Maximum Likelihood Estimator (MLE). The SOS algorithm is shown to fail for light enough winds at all incidence angles, and to fail to show areas of calm, because backscatter estimates that were negative or that produced incorrect values of K_p greater than one were discarded. The MLE performs well for all input backscatter estimates and returns calm when both are negative. The use of the SOS algorithm is shown to have introduced errors in the SASS-1 model function that, in part, cancel out the errors that result from using it, but that also cause disagreement with other data sources, such as the AAFE circle flight data, at light winds. Implications for future scatterometer systems are given.
Transforming the Way We Teach Function Transformations
ERIC Educational Resources Information Center
Faulkenberry, Eileen Durand; Faulkenberry, Thomas J.
2010-01-01
In this article, the authors discuss "function," a well-defined rule that relates inputs to outputs. They have found that by using the input-output definition of "function," they can examine transformations of functions simply by looking at changes to input or output and the respective changes to the graph. Applying transformations to the input…
Non-intrusive parameter identification procedure user's guide
NASA Technical Reports Server (NTRS)
Hanson, G. D.; Jewell, W. F.
1983-01-01
Written in standard FORTRAN, NAS is capable of identifying linear as well as nonlinear relations between input and output parameters; the only restriction is that the input/output relation be linear with respect to the unknown coefficients of the estimation equations. The output of the identification algorithm can be specified in either the time domain (i.e., the estimation equation coefficients) or the frequency domain (i.e., a frequency response of the estimation equation). The frame length ("window") over which the identification takes place can be any portion of the input time history, allowing the identification procedure to be started and stopped within a time history. There is also an option for a sliding window, which gives a moving average over the time history. The NAS software can also identify several assumed solutions simultaneously for the same or different input data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprung, J.L.; Jow, H-N; Rollstin, J.A.
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observation points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). To improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step process, precipitation occurrence is first generated via a logistic regression model, and the amount of precipitation is then estimated separately for wet days. This process reproduces precipitation occurrence, amounts, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations showed noticeable differences among the input alternatives generated by three different interpolation schemes: differences in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, the LWP input showed the least streamflow error in the Alapaha basin, and the CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error than the directly interpolated inputs. © 2011 Springer-Verlag.
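The two-step scheme is easy to express with off-the-shelf regression tools. In the sketch below (synthetic predictors standing in for nearby-gauge data), a logistic model decides wet versus dry, and a separate regression fitted on wet days only supplies the amount.

```python
# Two-step daily precipitation estimation: occurrence, then amount.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(7)
X = rng.standard_normal((3000, 3))               # predictor fields
wet = (X @ [1.2, 0.5, -0.3] + rng.logistic(size=3000)) > 0
amount = np.where(wet, np.exp(0.8 * X[:, 0] + 0.3 * rng.standard_normal(3000)), 0.0)

occ = LogisticRegression().fit(X, wet)                        # step 1: wet/dry
amt = LinearRegression().fit(X[wet], np.log1p(amount[wet]))   # step 2: wet days

def estimate(x_new):
    if not occ.predict(x_new.reshape(1, -1))[0]:
        return 0.0
    return float(np.expm1(amt.predict(x_new.reshape(1, -1))[0]))
```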
An Empirical Method for Determining the Lunar Gravity Field. Ph.D. Thesis - George Washington Univ.
NASA Technical Reports Server (NTRS)
Ferrari, A. J.
1971-01-01
A method has been devised to determine the spherical harmonic coefficients of the lunar gravity field. This method consists of a two-step data reduction and estimation process. In the first step, a weighted least-squares empirical orbit determination scheme is applied to Doppler tracking data from lunar orbits to estimate long-period Kepler elements and rates. Each of the Kepler elements is represented by an independent function of time. The long-period perturbing effects of the earth, sun, and solar radiation are explicitly modeled in this scheme. Kepler element variations estimated by this empirical processor are ascribed to the non-central lunar gravitation features. Doppler data are reduced in this manner for as many orbits as are available. In the second step, the Kepler element rates are used as input to a second least-squares processor that estimates lunar gravity coefficients using the long-period Lagrange perturbation equations.
Opitz v Boberfeld, W; Theobald, P C; Laser, H
2003-06-01
For estimating the energy concentration or digestibility of herb-dominated forage and of plant samples from winter pastures, reliable estimates can be expected only from in vitro methods that use rumen fluid as the inoculum (gas production techniques). To verify this expectation, an in vitro method with rumen fluid as inoculum, as well as chemical and enzymatic methods, were applied using existing estimation functions. Effects of fungal infections and of secondary compounds in the herbs are discussed as possible reasons for the observed divergence among methods. At the present state of knowledge, estimating the energy concentration in vitro by gas tests is adequate as far as fattening stock such as suckler cows and beef cattle are concerned, though perhaps not for forage evaluation for dairy cows.
NASA Astrophysics Data System (ADS)
Moaveni, Bijan; Khosravi Roqaye Abad, Mahdi; Nasiri, Sayyad
2015-10-01
In this paper, the vehicle longitudinal velocity during braking is estimated by measuring the wheel speeds. A new algorithm based on the unknown-input Kalman filter is developed to estimate the vehicle longitudinal velocity with minimum mean square error and without using the value of the braking torque in the estimation procedure. The stability and convergence of the filter are analysed and proved. The effectiveness of the method is shown by a real experiment in which the estimation result is compared with the actual longitudinal velocity computed from a three-axis accelerometer output.
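A stripped-down stand-in for this idea is sketched below: the unknown braking effect is absorbed into an extra random-walk deceleration state with large process noise, so the filter needs no torque input, and wheel-speed measurements drive the update. This generic formulation ignores wheel slip and is not the authors' exact filter.

```python
# Kalman filter with the unknown input modeled as a random-walk state.
import numpy as np

dt, r = 0.01, 0.30                           # time step [s], wheel radius [m]
F = np.array([[1.0, -dt], [0.0, 1.0]])       # state: [velocity, deceleration]
H = np.array([[1.0 / r, 0.0]])               # measurement: wheel speed [rad/s]
Q = np.diag([1e-6, 1e-2])                    # large noise absorbs unknown torque
R = np.array([[0.05]])

x, P = np.array([20.0, 0.0]), np.eye(2)      # initial guess: 20 m/s, no decel

def kf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q            # predict without any torque model
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)                  # wheel-speed innovation
    return x, (np.eye(2) - K @ H) @ P

# Per sample: x, P = kf_step(x, P, np.array([measured_wheel_speed]))
```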
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
2017-10-31
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here, a novel approach is to model input power noise with time-correlated stochastic fluctuations and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
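The key modification described above, replacing a constant mechanical power with a time-correlated fluctuation inside the forecast, can be sketched on toy scalar dynamics. Everything below (swing-type model, AR(1) noise standing in for the correlated process, perturbed-observation update) is an illustrative assumption, not the paper's implementation.

```python
# EnKF forecast with AR(1) (time-correlated) input-power noise, plus a
# joint state-parameter update from a PMU-like rotor-speed measurement.
import numpy as np

rng = np.random.default_rng(8)
N, dt = 100, 0.02
tau, sigma = 2.0, 0.05                       # correlation length, noise scale
a = np.exp(-dt / tau)                        # AR(1) coefficient

omega = rng.normal(0.0, 0.1, N)              # ensemble of rotor speeds
pm = rng.normal(1.0, 0.05, N)                # ensemble of mechanical powers

def forecast(omega, pm):
    # Time-correlated fluctuation of input power around a nominal 1.0 pu.
    pm = 1.0 + a * (pm - 1.0) + sigma * np.sqrt(1 - a**2) * rng.standard_normal(N)
    omega = omega + dt * (pm - 1.0 - 0.5 * omega)    # toy swing dynamics
    return omega, pm

def update(omega, pm, z, r=0.02):
    X = np.vstack([omega, pm])               # joint state-parameter ensemble
    A_ = X - X.mean(axis=1, keepdims=True)
    Hx = omega                               # observed quantity: rotor speed
    C_xz = A_ @ (Hx - Hx.mean())[:, None] / (N - 1)
    K = C_xz / (Hx.var(ddof=1) + r)
    X = X + K @ (z + np.sqrt(r) * rng.standard_normal(N) - Hx)[None, :]
    return X[0], X[1]
```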
A Neural-Dynamic Architecture for Concurrent Estimation of Object Pose and Identity
Lomp, Oliver; Faubel, Christian; Schöner, Gregor
2017-01-01
Handling objects or interacting with a human user about objects on a shared tabletop requires that objects be identified after learning from a small number of views and that object pose be estimated. We present a neurally inspired architecture that learns object instances by storing features extracted from a single view of each object. Input features are color and edge histograms from a localized area that is updated during processing. The system finds the best-matching view for the object in a novel input image while concurrently estimating the object’s pose, aligning the learned view with current input. The system is based on neural dynamics, computationally operating in real time, and can handle dynamic scenes directly off live video input. In a scenario with 30 everyday objects, the system achieves recognition rates of 87.2% from a single training view for each object, while also estimating pose quite precisely. We further demonstrate that the system can track moving objects, and that it can segment the visual array, selecting and recognizing one object while suppressing input from another known object in the immediate vicinity. Evaluation on the COIL-100 dataset, in which objects are depicted from different viewing angles, revealed recognition rates of 91.1% on the first 30 objects, each learned from four training views. PMID:28503145
Robust estimation of adaptive tensors of curvature by tensor voting.
Tong, Wai-Shun; Tang, Chi-Keung
2005-03-01
Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, which also rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm is validated by detailed quantitative experiments, performing better in a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.
Rudin, M; Beckmann, N; Sauter, A
1997-01-01
Determination of tissue perfusion rates by MRI bolus tracking methods relies on the central volume principle which states that tissue blood flow is given by the tissue blood volume divided by the mean tracer transit time (MTT). Accurate determination of the MTT requires knowledge of the arterial input function which in MRI experiments is usually not known, especially when using small animals. The problem of unknown arterial input can be circumvented in animal experiments by directly injecting the contrast agent into a feeding artery of the tissue of interest. In the present article the passage of magnetite nanoparticles through the rat cerebral cortex is analyzed after injection into the internal carotid artery. The results are discussed in the framework of linear system theory using a one-compartment model for brain tissue and by using the well characterized gamma-variate function to describe the tissue concentration profile of the contrast agent. The results obtained from the intra-arterial tracer administration experiments are then compared with the commonly used intra-venous injection of the contrast agent in order to estimate the contribution of the peripheral circulation to the MTT values in the latter case. The experiments were analyzed using a two-compartment model and the gamma-variate function. As an application perfusion rates in normal and ischemic cerebral cortex of hypertensive rats were estimated in a model of focal cerebral ischemia. The results indicate that peripheral circulation has a significant influence on the MTT values and thus on the perfusion rates, which cannot be neglected.
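The gamma-variate step and the central volume principle referenced above combine into a short computation. The sketch below fits a synthetic tissue curve and treats its normalized first moment as the MTT, a textbook simplification; a full analysis would deconvolve the (here intra-arterially controlled) input function.

```python
# Fit a gamma-variate bolus curve, then flow = volume / MTT.
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    dt = np.clip(t - t0, 0.0, None)
    return A * dt**alpha * np.exp(-dt / beta)

t = np.linspace(0, 30, 300)                      # seconds
c = gamma_variate(t, 1.0, 2.0, 2.5, 1.8)
c = c + np.random.default_rng(9).normal(0, 0.02, t.size)

p, _ = curve_fit(gamma_variate, t, c, p0=(1, 1, 2, 2),
                 bounds=([0, 0, 0.1, 0.1], [10, 10, 10, 10]))
fit, step = gamma_variate(t, *p), t[1] - t[0]

mtt = (t * fit).sum() / fit.sum() - p[1]         # first moment past arrival
volume = fit.sum() * step                        # proportional to blood volume
flow = volume / mtt                              # central volume principle
```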
Star Classification for the Kepler Input Catalog: From Images to Stellar Parameters
NASA Astrophysics Data System (ADS)
Brown, T. M.; Everett, M.; Latham, D. W.; Monet, D. G.
2005-12-01
The Stellar Classification Project is a ground-based effort to screen stars within the Kepler field of view, to allow removal of stars with large radii (and small potential transit signals) from the target list. Important components of this process are: (1) An automated photometry pipeline estimates observed magnitudes both for target stars and for stars in several calibration fields. (2) Data from calibration fields yield extinction-corrected AB magnitudes (with g, r, i, z magnitudes transformed to the SDSS system). We merge these with 2MASS J, H, K magnitudes. (3) The Basel grid of stellar atmosphere models yields synthetic colors, which are transformed to our photometric system by calibration against observations of stars in M67. (4) We combine the r magnitude and stellar galactic latitude with a simple model of interstellar extinction to derive a relation connecting {Teff, luminosity} to distance and reddening. For models satisfying this relation, we compute a chi-squared statistic describing the match between each model and the observed colors. (5) We create a merit function based on the chi-squared statistic, and on a Bayesian prior probability distribution which gives probability as a function of Teff, luminosity, log(Z), and height above the galactic plane. The stellar parameters ascribed to a star are those of the model that maximizes this merit function. (6) Parameter estimates are merged with positional and other information from extant catalogs to yield the Kepler Input Catalog, from which targets will be chosen. Testing and validation of this procedure are underway, with encouraging initial results.
Kennedy, T.A.; Ralston, B.E.
2012-01-01
Dams and associated river regulation have led to the expansion of riparian vegetation, especially nonnative species, along downstream ecosystems. Nonnative saltcedar is one of the dominant riparian plants along virtually every major river system in the arid western United States, but allochthonous inputs have never been quantified along a segment of a large river that is dominated by saltcedar. We developed a novel method for estimating direct allochthonous inputs along the 387-km-long reach of the Colorado River downstream of Glen Canyon Dam that utilized a GIS vegetation map developed from aerial photographs, empirical and literature-derived litter production data for the dominant vegetation types, and virtual shorelines of annual peak discharge (566 m³ s⁻¹ stage elevation). Using this method, we estimate that direct allochthonous inputs from riparian vegetation for the entire reach studied total 186 metric tons year⁻¹, which represents mean inputs of 470 g AFDM m⁻¹ year⁻¹ of shoreline or 5.17 g AFDM m⁻² year⁻¹ of river surface. These values are comparable to allochthonous inputs for other large rivers and systems that also have sparse riparian vegetation. Nonnative saltcedar represents a significant component of annual allochthonous inputs (36% of total direct inputs) in the Colorado River. We also estimated direct allochthonous inputs for 46.8 km of the Colorado River prior to closure of Glen Canyon Dam using a vegetation map that was developed from historical photographs. Regulation has led to significant increases in riparian vegetation (270-319% increase in cover, depending on stage elevation), but annual allochthonous inputs appear unaffected by regulation because of the lower flood peaks on the post-dam river. Published in 2010 by John Wiley & Sons, Ltd.
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher-level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining its current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces, allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs to controlling a single set point, with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
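The core update can be sketched compactly: each agent steps down the cost gradient after projecting out the component that would change the output, and the settled optima are splined into an output-to-input inverse function. The quadratic cost and linear output map below are illustrative stand-ins, not the paper's test problems.

```python
# A minimal sketch: lower the cost while holding the output fixed, then
# interpolate the settled optima into an inverse function.
import numpy as np
from scipy.interpolate import CubicSpline

cost = lambda u: (u[0] - 1) ** 2 + 2 * (u[1] + 0.5) ** 2
cost_grad = lambda u: np.array([2 * (u[0] - 1), 4 * (u[1] + 0.5)])
out_grad = np.array([1.0, 1.0])            # gradient of output y = u1 + u2

def settle(u, steps=500, lr=0.01):
    """Minimize cost while keeping the output (locally) constant."""
    n = out_grad / np.linalg.norm(out_grad)
    for _ in range(steps):
        g = cost_grad(u)
        u = u - lr * (g - np.dot(g, n) * n)  # remove output-changing component
    return u

# Settle agents at several output levels, then spline (output -> input).
ys = np.linspace(-1.0, 2.0, 7)
us = np.array([settle(np.array([y / 2, y / 2])) for y in ys])
inverse = [CubicSpline(ys, us[:, i]) for i in range(2)]
print("optimal input for desired output 0.8:", [float(s(0.8)) for s in inverse])
```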
Maxine: A spreadsheet for estimating dose from chronic atmospheric radioactive releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannik, Tim; Bell, Evaleigh; Dixon, Kenneth
MAXINE is an EXCEL© spreadsheet used to estimate dose to individuals from routine and accidental atmospheric releases of radioactive materials. MAXINE does not contain an atmospheric dispersion model; rather, doses are estimated using air and ground concentrations as input. Minimal input is required to run the program, and site-specific parameters are used when possible. A complete code description, model verification, and user's manual are included.
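A toy sketch of the kind of concentration-to-dose bookkeeping such a spreadsheet performs is shown below; the pathway dose coefficients and exposure parameters are invented for illustration and are not MAXINE's site-specific values.

```python
# Hypothetical concentration-to-dose arithmetic; all coefficients invented.
air_conc = 2.0e-6        # Bq/m^3, annual-average air concentration
ground_conc = 5.0e-3     # Bq/m^2, ground deposition
breathing_rate = 8400.0  # m^3/yr
dcf_inhalation = 1.1e-8  # Sv/Bq, hypothetical inhalation dose coefficient
dcf_ground = 4.0e-10     # (Sv/yr) per (Bq/m^2), hypothetical ground-shine factor
occupancy = 0.8          # fraction of year exposed

dose = (air_conc * breathing_rate * dcf_inhalation
        + ground_conc * dcf_ground) * occupancy
print(f"annual dose: {dose:.3e} Sv")
```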
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.
2017-09-01
A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach jointly estimates the unknown time-invariant parameters of a nonlinear FE model of the structure and the unknown time histories of the input excitations, using spatially sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and input excitations through a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges, is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data from a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and from a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm in jointly estimating the unknown FE model parameters and input excitations.
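The deterministic-sampling idea can be illustrated in isolation: sigma points of an augmented state (physical states stacked with unknown parameters and inputs) are pushed through a nonlinear model, and the predicted mean and covariance are recovered without computing any response sensitivities. The one-step model below is an illustrative stand-in, not a nonlinear FE model.

```python
# A self-contained sketch of the unscented transform on an augmented state.
import numpy as np

def sigma_points(mean, cov, kappa):
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)  # columns scale the spread
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def unscented_transform(f, mean, cov):
    pts, w = sigma_points(mean, cov, kappa=1.0)
    fx = np.array([f(p) for p in pts])         # propagate each sigma point
    m = w @ fx
    P = (w[:, None] * (fx - m)).T @ (fx - m)   # weighted sample covariance
    return m, P

# Augmented state: [displacement, stiffness parameter, input force].
f = lambda x: np.array([x[0] + 0.01 * (x[2] - x[1] * x[0]), x[1], x[2]])
m, P = unscented_transform(f, np.array([0.0, 5.0, 1.0]), np.eye(3) * 0.1)
print("predicted mean:", m)
```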
The Pilot Training Study: A Cost-Estimating Model for Undergraduate Pilot Training.
ERIC Educational Resources Information Center
Allison, S. L.
A means for estimating the resource requirements and attendant costs of any configuration of the undergraduate pilot training system (UPT) is described by inputs that are supplied by the user of the model. The inputs consist of data such as UPT graduate requirements, course syllabus requirements, instructor-student ratios, administrative and…
Watershed nitrogen and phosphorus balance: The upper Potomac River basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaworski, N.A.; Groffman, P.M.; Keller, A.A.
1992-01-01
Nitrogen and phosphorus mass balances were estimated for the portion of the Potomac River basin watershed located above Washington, D.C. The total nitrogen (N) balance included seven input source terms, six sinks, and one 'change-in-storage' term, but was simplified to five input terms and three output terms. The phosphorus (P) balance had four input and three output terms. The estimated balances are based on watershed data from seven information sources. Major sources of nitrogen are animal waste and atmospheric deposition. The major sources of phosphorus are animal waste and fertilizer. The major sink for nitrogen is combined denitrification, volatilization, and change-in-storage. The major sink for phosphorus is change-in-storage. River exports of N and P were 17% and 8%, respectively, of the total N and P inputs. Over 60% of the N and P were volatilized or stored. The major input and output terms in the budget are estimated from direct measurements, but the change-in-storage term is calculated by difference. The factors regulating retention and storage processes are discussed and research needs are identified.
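The bookkeeping is simple enough to sketch: the combined storage-and-loss term falls out by difference, and river export is expressed as a fraction of total inputs. The magnitudes below are illustrative, not the Potomac values.

```python
# A toy mass-balance sketch; all magnitudes are invented (kt N / yr).
inputs = {"animal_waste": 40.0, "atmospheric": 30.0, "fertilizer": 20.0,
          "point_sources": 7.0, "N_fixation": 3.0}
outputs = {"river_export": 17.0, "crop_harvest": 20.0}   # measured sinks

total_in = sum(inputs.values())
storage_and_losses = total_in - sum(outputs.values())    # by difference
print(f"river export = {outputs['river_export'] / total_in:.0%} of inputs")
print(f"denitrification/volatilization/storage = {storage_and_losses:.0f} kt/yr")
```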
VALDRIFT 1.0: A valley atmospheric dispersion model with deposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allwine, K.J.; Bian, X.; Whiteman, C.D.
1995-05-01
VALDRIFT version 1.0 is an atmospheric transport and diffusion model for use in well-defined mountain valleys. It is designed to determine the extent of drift from aerial pesticide spraying activities, but can also be applied to estimate the transport and diffusion of various air pollutants in valleys. The model is phenomenological -- that is, the dominant meteorological processes governing the behavior of the valley atmosphere are formulated explicitly in the model, albeit in a highly parameterized fashion. The key meteorological processes treated are: (1) nonsteady and nonhomogeneous along-valley winds and turbulent diffusivities, (2) convective boundary layer growth, (3) inversion descent, (4) nocturnal temperature inversion breakup, and (5) subsidence. The model is applicable under relatively cloud-free, undisturbed synoptic conditions and is configured to operate through one diurnal cycle for a single valley. The inputs required are the valley topographical characteristics, pesticide release rate as a function of time and space, along-valley wind speed as a function of time and space, temperature inversion characteristics at sunrise, and sensible heat flux as a function of time following sunrise. Default values are provided for certain inputs in the absence of detailed observations. The outputs are three-dimensional air concentration and ground-level deposition fields as a function of time.
Rio, Daniel E.; Rawlings, Robert R.; Woltz, Lawrence A.; Gilman, Jodi; Hommer, Daniel W.
2013-01-01
A linear time-invariant model based on statistical time series analysis in the Fourier domain for single subjects is further developed and applied to functional MRI (fMRI) blood-oxygen level-dependent (BOLD) multivariate data. This methodology was originally developed to analyze multiple stimulus input evoked response BOLD data. However, to analyze clinical data generated using a repeated measures experimental design, the model has been extended to handle multivariate time series data and demonstrated on control and alcoholic subjects taken from data previously analyzed in the temporal domain. Analysis of BOLD data is typically carried out in the time domain where the data has a high temporal correlation. These analyses generally employ parametric models of the hemodynamic response function (HRF) where prewhitening of the data is attempted using autoregressive (AR) models for the noise. However, this data can be analyzed in the Fourier domain. Here, assumptions made on the noise structure are less restrictive, and hypothesis tests can be constructed based on voxel-specific nonparametric estimates of the hemodynamic transfer function (HRF in the Fourier domain). This is especially important for experimental designs involving multiple states (either stimulus or drug induced) that may alter the form of the response function. PMID:23840281
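The frequency-domain estimator can be sketched with standard spectral tools: the nonparametric transfer function is the ratio of the input-output cross-spectrum to the input auto-spectrum. The block-design stimulus and toy response below are synthetic, and scipy's Welch-type estimators stand in for the paper's statistical machinery.

```python
# A minimal sketch of nonparametric transfer function estimation,
# H(f) = Sxy(f) / Sxx(f), on synthetic BOLD-like data.
import numpy as np
from scipy import signal

fs = 0.5                                    # sampling rate (Hz), e.g. TR = 2 s
t = np.arange(0, 600, 1 / fs)
stim = (np.sin(2 * np.pi * 0.02 * t) > 0).astype(float)  # block design
hrf = signal.windows.hann(10)                             # toy response shape
bold = np.convolve(stim, hrf)[:len(t)] + 0.1 * np.random.randn(len(t))

f, Sxy = signal.csd(stim, bold, fs=fs, nperseg=256)       # cross-spectrum
_, Sxx = signal.welch(stim, fs=fs, nperseg=256)           # auto-spectrum
H = Sxy / Sxx                                             # transfer function
print("gain at the lowest frequencies:", np.abs(H[:3]))
```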
Estimated Probability of a Cervical Spine Injury During an ISS Mission
NASA Technical Reports Server (NTRS)
Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G.
2013-01-01
Introduction: The Integrated Medical Model (IMM) utilizes historical data, cohort data, and external simulations as input factors to provide estimates of crew health, resource utilization and mission outcomes. The Cervical Spine Injury Module (CSIM) is an external simulation designed to provide the IMM with parameter estimates for 1) a probability distribution function (PDF) of the incidence rate, 2) the mean incidence rate, and 3) the standard deviation associated with the mean resulting from injury/trauma of the neck. Methods: An injury mechanism based on an idealized low-velocity blunt impact to the superior posterior thorax of an ISS crewmember was used as the simulated mission environment. As a result of this impact, the cervical spine is inertially loaded from the mass of the head, producing an extension-flexion motion deforming the soft tissues of the neck. A multibody biomechanical model was developed to estimate the kinematic and dynamic response of the head-neck system from a prescribed acceleration profile. Logistic regression was performed on a dataset containing AIS1 soft tissue neck injuries from rear-end automobile collisions with published Neck Injury Criterion values, producing an injury transfer function (ITF). An injury event scenario (IES) was constructed in which crew 1, moving through a primary or standard translation path while transferring large-volume equipment, impacts stationary crew 2. The incidence rate for this IES was estimated from in-flight data and used to calculate the probability of occurrence. The uncertainty in the model input factors was estimated from representative datasets and expressed in terms of probability distributions. A Monte Carlo method utilizing simple random sampling was employed to propagate both aleatory and epistemic uncertain factors. Scatterplots and partial correlation coefficients (PCC) were generated to determine input factor sensitivity. CSIM was developed in the SimMechanics/Simulink environment with a Monte Carlo wrapper (MATLAB) used to integrate the components of the module. Results: The probability of generating an AIS1 soft tissue neck injury from the extension/flexion motion induced by a low-velocity blunt impact to the superior posterior thorax was fitted with a lognormal PDF with mean 0.26409, standard deviation 0.11353, standard error of mean 0.00114, and 95% confidence interval [0.26186, 0.26631]. Combining the probability of an AIS1 injury with the probability of IES occurrence was fitted with a Johnson SI PDF with mean 0.02772, standard deviation 0.02012, standard error of mean 0.00020, and 95% confidence interval [0.02733, 0.02812]. The input factor sensitivity analysis, in descending order, was: IES incidence rate, ITF regression coefficient 1, impactor initial velocity, ITF regression coefficient 2, with all others (equipment mass, crew 1 body mass, crew 2 body mass) insignificant. Verification and Validation (V&V): The IMM V&V process, based upon NASA STD 7009, was implemented and included an assessment of the data sets used to build CSIM. The documentation maintained includes source code comments and a technical report. The software code and documentation are under Subversion configuration management. Kinematic validation was performed by comparing the biomechanical model output to established corridors.
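The probabilistic core, uncertain factors drawn by simple random sampling and pushed through a logistic injury transfer function, can be sketched in a few lines; the coefficients and distributions below are invented placeholders, not CSIM's.

```python
# A compact Monte Carlo sketch of the injury-probability propagation.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta0 = rng.normal(-4.0, 0.3, n)   # ITF regression coefficient 1 (epistemic)
beta1 = rng.normal(2.5, 0.2, n)    # ITF regression coefficient 2
nic = rng.lognormal(mean=0.0, sigma=0.25, size=n)  # neck injury criterion value

p_injury = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * nic)))  # logistic ITF
incidence = rng.beta(2, 50, n)     # probability the event scenario occurs
p_total = p_injury * incidence

print(f"mean P(AIS1 injury | impact): {p_injury.mean():.4f}")
print(f"mean P(injury and event):     {p_total.mean():.5f}")
```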
Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.
2000-01-01
This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. 
The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
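The damped (modified) Gauss-Newton step for a weighted least-squares objective can be sketched generically; J plays the role of the observation sensitivity matrix and W the weight matrix. The two-parameter exponential forward model is illustrative only, not a groundwater model.

```python
# A bare-bones modified Gauss-Newton iteration for weighted least squares.
import numpy as np

def gauss_newton(f, jac, y, w, p, mu=1e-3, iters=20):
    W = np.diag(w)
    for _ in range(iters):
        r = y - f(p)                            # residuals
        J = jac(p)                              # sensitivity matrix
        A = J.T @ W @ J + mu * np.eye(len(p))   # damped normal equations
        p = p + np.linalg.solve(A, J.T @ W @ r)
    return p

x = np.linspace(0.0, 1.0, 12)
f = lambda p: p[0] * np.exp(-p[1] * x)          # stand-in forward model
jac = lambda p: np.column_stack([np.exp(-p[1] * x),
                                 -p[0] * x * np.exp(-p[1] * x)])
y = f([2.0, 3.0]) + 0.01 * np.random.randn(len(x))
p_hat = gauss_newton(f, jac, y, np.ones(len(x)), np.array([1.0, 1.0]))
print("estimated parameters:", p_hat)
```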
Multiscale Bayesian neural networks for soil water content estimation
NASA Astrophysics Data System (ADS)
Jana, Raghavendra B.; Mohanty, Binayak P.; Springer, Everett P.
2008-08-01
Artificial neural networks (ANN) have been used for some time now to estimate soil hydraulic parameters from other available or more easily measurable soil properties. However, most such uses of ANNs as pedotransfer functions (PTFs) have been at matching spatial scales (1:1) of inputs and outputs. This approach assumes that the outputs are only required at the same scale as the input data. Unfortunately, this is rarely true. Different hydrologic, hydroclimatic, and contaminant transport models require soil hydraulic parameter data at different spatial scales, depending upon their grid sizes. While conventional (deterministic) ANNs have traditionally been used in these studies, the use of Bayesian training of ANNs is a more recent development. In this paper, we develop a Bayesian framework to derive the soil water retention function, including its uncertainty, at the point or local scale using PTFs trained with coarser-scale Soil Survey Geographic (SSURGO)-based soil data. The approach includes an ANN trained with Bayesian techniques as a PTF tool, with training and validation data collected across spatial extents (scales) in two different regions in the United States. The two study areas include the Las Cruces Trench site in the Rio Grande basin of New Mexico, and the Southern Great Plains 1997 (SGP97) hydrology experimental region in Oklahoma. Each region-specific Bayesian ANN is trained using soil texture and bulk density data from the SSURGO database (scale 1:24,000), and predictions of the soil water contents at different pressure heads are made with point-scale (1:1) data inputs. The resulting outputs are corrected for bias using both linear and nonlinear correction techniques. The results show good agreement between the soil water content values measured at the point scale and those predicted by the Bayesian ANN-based PTFs for both study sites. Overall, Bayesian ANNs coupled with nonlinear bias correction are found to be very suitable tools for deriving soil hydraulic parameters at the local/fine scale from soil physical properties at coarser scales and across different spatial extents. This approach could potentially be used for soil hydraulic property estimation and downscaling.
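The bias-correction step admits a minimal sketch: predictions from the coarse-trained estimator are regressed against point-scale measurements, and the fitted map is applied to new predictions. The sketch below uses a linear correction on synthetic numbers; the paper's nonlinear variant would swap in a different fitted map.

```python
# A small linear bias-correction sketch on synthetic water-content data.
import numpy as np

measured = np.array([0.12, 0.18, 0.22, 0.27, 0.31, 0.36])
predicted = 0.8 * measured + 0.05 + 0.01 * np.random.randn(6)  # biased PTF output

a, b = np.polyfit(predicted, measured, 1)   # fit the correction map
corrected = a * predicted + b
rmse = lambda u, v: np.sqrt(np.mean((u - v) ** 2))
print(f"RMSE before: {rmse(predicted, measured):.3f}, "
      f"after: {rmse(corrected, measured):.3f}")
```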
NASA Technical Reports Server (NTRS)
Barrett, C. E.; Presler, A. F.
1976-01-01
A FORTRAN computer program (COREST) was developed to analyze the high-temperature paralinear oxidation behavior of metals. It is based on a mass-balance approach and uses typical gravimetric input data. COREST was applied to predominantly Cr2O3-forming alloys tested isothermally for long times. These alloys behaved paralinearly above 1100 C as a result of simultaneous scale formation and scale vaporization. Output includes the pertinent formation and vaporization constants and kinetic values of interest. COREST also estimates specific sample weight and specific scale weight as a function of time. Most importantly, from a corrosion standpoint, it estimates specific metal loss.
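Paralinear kinetics, the behavior COREST analyzes, can be sketched directly: the specific scale weight Ws grows parabolically while vaporizing linearly, dWs/dt = kp/(2 Ws) - kv, approaching the limiting value kp/(2 kv). The constants below are illustrative, not COREST's fitted outputs.

```python
# A small sketch of paralinear oxidation kinetics; constants are invented.
import numpy as np
from scipy.integrate import solve_ivp

kp = 0.04   # parabolic scale-growth constant (mg^2 cm^-4 h^-1), hypothetical
kv = 0.01   # linear vaporization constant (mg cm^-2 h^-1), hypothetical

def dWs_dt(t, Ws):
    return kp / (2.0 * Ws) - kv

sol = solve_ivp(dWs_dt, [0.0, 500.0], [0.1], dense_output=True)
t = np.linspace(0.0, 500.0, 6)
print("scale weight vs time:", np.round(sol.sol(t)[0], 3))
print("limiting scale weight kp/(2*kv):", kp / (2 * kv))  # steady state
```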
Software reliability perspectives
NASA Technical Reports Server (NTRS)
Wilson, Larry; Shen, Wenhui
1987-01-01
Software which is used in life-critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering, nor fault-tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes that part of their failure can be attributed to the random nature of the debugging data given to these models as input, and it poses the problem of correcting this defect as an area of future research.
Sliding mode control-based linear functional observers for discrete-time stochastic systems
NASA Astrophysics Data System (ADS)
Singh, Satnesh; Janardhanan, Sivaramakrishnan
2017-11-01
Sliding mode control (SMC) is one of the most popular techniques for stabilising linear discrete-time stochastic systems. However, application of SMC becomes difficult when the system states are not available for feedback. This paper presents a new approach to designing a SMC-based functional observer for discrete-time stochastic systems. The functional observer is based on the Kronecker product approach. Existence conditions and a stability analysis of the proposed observer are given. The control input is estimated by a novel linear functional observer. This approach leads to a non-switching type of control, thereby eliminating the fundamental cause of chatter. Furthermore, the functional observer is designed in such a way that the effect of process and measurement noise is minimised. A simulation example is given to illustrate and validate the proposed design method.
NASA Astrophysics Data System (ADS)
Dumedah, Gift; Walker, Jeffrey P.
2017-03-01
The sources of uncertainty in land surface models are numerous and varied, from inaccuracies in forcing data to uncertainties in model structure and parameterizations. The majority of these uncertainties are strongly tied to the overall makeup of the model, but the input forcing data set is independent, with its accuracy usually defined by the monitoring or observation system. The impact of input forcing data on model estimation accuracy is widely acknowledged to be significant, yet its quantification, and the level of uncertainty that is acceptable in the context of a land surface model to obtain a competitive estimation, remain largely unknown. A better understanding is needed of how models respond to input forcing data and of what changes in these forcing variables can be accommodated without deteriorating the model's optimal estimation. This study therefore determines the level of forcing data uncertainty that is acceptable in the Joint UK Land Environment Simulator (JULES) to competitively estimate soil moisture in the Yanco area in southeastern Australia. The study employs hydro-genomic mapping to examine the temporal evolution of model decision variables from an archive of values obtained from soil moisture data assimilation. The data assimilation (DA) was undertaken using the advanced Evolutionary Data Assimilation. Our findings show that the input forcing data have a significant impact on model output: 35% in root mean square error (RMSE) for soil moisture at 5 cm depth and 15% in RMSE at 15 cm depth. This quantification is crucial to illustrate the significance of input forcing data spread. The acceptable uncertainty determined based on the dominant pathway has been validated and shown to be reliable for all forcing variables, so as to provide optimal soil moisture. These findings are crucial for DA in order to account for uncertainties that are meaningful from the model standpoint. Moreover, our results point to a proper treatment of input forcing data in land surface and hydrological model estimation in general.
NASA Astrophysics Data System (ADS)
Keller, J. Y.; Chabir, K.; Sauter, D.
2016-03-01
State estimation of stochastic discrete-time linear systems subject to unknown inputs or constant biases has been widely studied but no work has been dedicated to the case where a disturbance switches between unknown input and constant bias. We show that such disturbance can affect a networked control system subject to deception attacks and data losses on the control signals transmitted by the controller to the plant. This paper proposes to estimate the switching disturbance from an augmented state version of the intermittent unknown input Kalman filter recently developed by the authors. Sufficient stochastic stability conditions are established when the arrival binary sequence of data losses follows a Bernoulli random process.
NASA Astrophysics Data System (ADS)
Koeppe, Robert Allen
Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations were compared to those predicted from the expired air and venous blood samples. The glucose analog (18)F-3-deoxy-3-fluoro-D-glucose (3-FDG) was used for quantitating the membrane transport rate of glucose. The measured data indicated that the phosphorylation rate of 3-FDG was low enough to allow accurate estimation of the transport rate using a two compartment model.
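A minimal version of the two-compartment estimation problem: tissue activity is the arterial input convolved with an exponential impulse response, Ct = K1·exp(-k2·t) ∗ Ca, and the rate constants are recovered per pixel by least squares. The input curve and constants below are synthetic stand-ins.

```python
# A two-compartment tracer-kinetics sketch on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

dt = 2.0                            # frame duration (s), illustrative
t = np.arange(0.0, 300.0, dt)
Ca = t * np.exp(-t / 30.0)          # synthetic arterial input function

def model(t, K1, k2):
    irf = K1 * np.exp(-k2 * t)      # impulse response function
    return np.convolve(Ca, irf)[:len(t)] * dt

Ct = model(t, 0.5, 0.05) + 0.05 * np.random.randn(len(t))
(K1, k2), _ = curve_fit(model, t, Ct, p0=[0.3, 0.1])
print(f"estimated K1 = {K1:.3f}, k2 = {k2:.3f}")
```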
The series-elastic shock absorber: tendons attenuate muscle power during eccentric actions.
Roberts, Thomas J; Azizi, Emanuel
2010-08-01
Elastic tendons can act as muscle power amplifiers or energy-conserving springs during locomotion. We used an in situ muscle-tendon preparation to examine the mechanical function of tendons during lengthening contractions, when muscles absorb energy. Force, length, and power were measured in the lateral gastrocnemius muscle of wild turkeys. Sonomicrometry was used to measure muscle fascicle length independently from muscle-tendon unit (MTU) length, as measured by a muscle lever system (servomotor). A series of ramp stretches of varying velocities was applied to the MTU in fully activated muscles. Fascicle length changes were decoupled from length changes imposed on the MTU by the servomotor. Under most conditions, muscle fascicles shortened on average, while the MTU lengthened. Energy input to the MTU during the fastest lengthenings was -54.4 J/kg, while estimated work input to the muscle fascicles during this period was only -11.24 J/kg. This discrepancy indicates that energy was first absorbed by elastic elements, then released to do work on muscle fascicles after the lengthening phase of the contraction. The temporary storage of energy by elastic elements also resulted in a significant attenuation of power input to the muscle fascicles. At the fastest lengthening rates, peak instantaneous power input to the MTU reached -2,143.9 W/kg, while peak power input to the fascicles was only -557.6 W/kg. These results demonstrate that tendons may act as mechanical buffers by limiting peak muscle forces, lengthening rates, and power inputs during energy-absorbing contractions.
Atmospheric Nitrogen Inputs to the Ocean and their Impact
NASA Astrophysics Data System (ADS)
Jickells, Tim D.
2016-04-01
Atmospheric Nitrogen Inputs to the Ocean and their Impact T Jickells (1), K. Altieri (2), D. Capone (3), E. Buitenhuis (1), R. Duce (4), F. Dentener (5), K. Fennel (6), J. Galloway (7), M. Kanakidou (8), J. LaRoche (9), K. Lee (10), P. Liss (1), J. Middleburg (11), K. Moore (12), S. Nickovic (13), G. Okin (14), A. Oschlies (15), J. Prospero (16), M. Sarin (17), S. Seitzinger (18), J. Sharples (19), P. Suntharalingam (1), M. Uematsu (20), L. Zamora (21) Atmospheric nitrogen inputs to the ocean have been identified as an important source of nitrogen to the oceans which has increased greatly as a result of human activity. The significance of atmospheric inputs for ocean biogeochemistry was evaluated in a seminal paper by Duce et al., 2008 (Science 320, 893-7). In this presentation we will update the Duce et al. 2008 study, estimating the impact of atmospheric deposition on the oceans. We will summarise the latest model estimates of total atmospheric nitrogen deposition to the ocean, their chemical form (nitrate, ammonium and organic nitrogen) and spatial distribution from the TM4 model. The model estimates are somewhat smaller than the Duce et al. estimate, but with similar spatial distributions. We will compare these flux estimates with a new estimate of the impact of fluvial nitrogen inputs on the open ocean (Sharples submitted), which estimates some transfer of fluvial nitrogen to the open ocean, particularly at low latitudes, compared to the complete trapping of fluvial inputs on the continental shelf assumed by Duce et al. We will then estimate the impact of atmospheric deposition on ocean primary productivity and N2O emissions from the oceans using the PlankTOM10 model. The impacts of atmospheric deposition on ocean productivity that we estimate here are smaller than those predicted by Duce et al., consistent with the smaller atmospheric deposition estimates. However, the atmospheric input is still larger than the estimated fluvial inputs to the open ocean, even with the increased transport across shelf to the open ocean from low latitude fluvial systems identified. 1. School of Environmental Science University of East Anglia UK 2. Energy Research Centre University of Cape Town SA 3. Department of Biological Sciences University of S California USA 4. Departments of Oceanography and Atmospheric Sciences Texas A&M University USA 5. JRC Ispra Italy 6. Department of Oceanography Dalhousie University Canada 7. Department of Environmental Sciences U. Virginia USA 8. Department of Chemistry, University of Crete, Greece 9. Department of Biology Dalhousie University, Canada 10. School of Environmental Science and Engineering Pohang University S Korea. 11. Faculty of Geosciences University of Utrecht Netherlands 12. Department of Earth System Science University of California at Irvine USA 13. WMO Geneva 14. Department of Geography University of California USA 15. GEOMAR Kiel Germany 16. Department of Atmospheric Sciences, University of Miami, USA 17. Geosciences Division at Physical Research Laboratory, Ahmedabad, India 18. Department of Environmental Studies, University of Victoria, Canada 19. School of Environmental Sciences, U Liverpool UK 20. Center for International Collaboration, Atmosphere and Ocean Research Institute, The University of Tokyo Japan 21. Oak Ridge Associated Universities USA
An enhanced multi-channel bacterial foraging optimization algorithm for MIMO communication system
NASA Astrophysics Data System (ADS)
Palanimuthu, Senthilkumar Jayalakshmi; Muthial, Chandrasekaran
2017-04-01
Channel estimation and optimisation are the main challenging tasks in Multi Input Multi Output (MIMO) wireless communication systems. In this work, a Multi-Channel Bacterial Foraging Optimization Algorithm approach is proposed for the selection of antennas in a transmission area. The main advantage of this method is that it effectively reduces the loss of bandwidth during data transmission. Channel estimation and optimisation are considered together to improve the transmission speed and reduce unused bandwidth. Initially, the message is given to the input of the communication system. Symbol mapping is then performed to convert the message into signals, which are encoded using a space-time encoding technique. The single signal is divided into multiple signals that are given to the input of the space-time precoder, and multiplexing is applied for transmission channel estimation. In this paper, the Rayleigh channel, a Gaussian-distribution-type channel, is selected based on the bandwidth range. Demultiplexing, the reverse function of multiplexing, is then applied to the received signal, splitting the combined signal arriving from the medium into the original information signals. Furthermore, the long-term evolution technique is used to schedule time to channels during transmission, and a hidden Markov model is employed to predict the channel state information. Finally, the signals are decoded and the reconstructed signal is obtained after the scheduling process. The experimental results evaluate the performance of the proposed MIMO communication system in terms of bit error rate, mean squared error, average throughput, outage capacity and signal to interference noise ratio.
Dalsgaard, Lise; Astrup, Rasmus; Antón-Fernández, Clara; Borgen, Signe Kynding; Breidenbach, Johannes; Lange, Holger; Lehtonen, Aleksi; Liski, Jari
2016-01-01
Boreal forests contain 30% of the global forest carbon with the majority residing in soils. While challenging to quantify, soil carbon changes comprise a significant, and potentially increasing, part of the terrestrial carbon cycle. Thus, their estimation is important when designing forest-based climate change mitigation strategies and soil carbon change estimates are required for the reporting of greenhouse gas emissions. Organic matter decomposition varies with climate in complex nonlinear ways, rendering data aggregation nontrivial. Here, we explored the effects of temporal and spatial aggregation of climatic and litter input data on regional estimates of soil organic carbon stocks and changes for upland forests. We used the soil carbon and decomposition model Yasso07 with input from the Norwegian National Forest Inventory (11275 plots, 1960-2012). Estimates were produced at three spatial and three temporal scales. Results showed that a national level average soil carbon stock estimate varied by 10% depending on the applied spatial and temporal scale of aggregation. Higher stocks were found when applying plot-level input compared to country-level input and when long-term climate was used as compared to annual or 5-year mean values. A national level estimate for soil carbon change was similar across spatial scales, but was considerably (60-70%) lower when applying annual or 5-year mean climate compared to long-term mean climate reflecting the recent climatic changes in Norway. This was particularly evident for the forest-dominated districts in the southeastern and central parts of Norway and in the far north. We concluded that the sensitivity of model estimates to spatial aggregation will depend on the region of interest. Further, that using long-term climate averages during periods with strong climatic trends results in large differences in soil carbon estimates. The largest differences in this study were observed in central and northern regions with strongly increasing temperatures.
Frequency domain system identification methods - Matrix fraction description approach
NASA Technical Reports Server (NTRS)
Horta, Luca G.; Juang, Jer-Nan
1993-01-01
This paper presents the use of matrix fraction descriptions for least-squares curve fitting of the frequency spectra to compute two matrix polynomials. The matrix polynomials are an intermediate step to obtain a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified using an estimated transfer function; second, the matrix polynomials are identified directly from the cross/auto spectra of the input and output signals. A set of Markov parameters is computed from the polynomials, and subsequently realization theory is used to recover a minimum-order state space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspects of the algorithm.
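For the scalar case, the curve-fitting step is linear in the polynomial coefficients once the denominator is made monic: D(s)H(s) = N(s) rearranges into a real-valued least-squares problem over the measured FRF samples. The second-order test system below is illustrative.

```python
# A scalar sketch of least-squares transfer function curve fitting.
import numpy as np

w = np.linspace(0.1, 10.0, 200)
s = 1j * w
H = 1.0 / (s**2 + 0.4 * s + 4.0)    # "measured" FRF of a test system

nb, na = 1, 2                       # numerator / denominator orders
# Unknowns: [b0, ..., b_nb, a0, ..., a_{na-1}] with monic leading a_na = 1,
# from  N(s) - H(s) * (a0 + a1 s + ...) = H(s) * s^na.
A = np.hstack([np.vander(s, nb + 1, increasing=True),
               -H[:, None] * np.vander(s, na, increasing=True)])
rhs = H * s**na
Ar = np.vstack([A.real, A.imag])    # stack real/imag for a real LS solve
rr = np.concatenate([rhs.real, rhs.imag])
coef, *_ = np.linalg.lstsq(Ar, rr, rcond=None)
print("numerator coeffs:", coef[:nb + 1])
print("denominator coeffs (low->high, monic):", coef[nb + 1:])
```

On this test system the solve recovers b0 = 1, b1 = 0, a0 = 4, a1 = 0.4, matching the true denominator s² + 0.4s + 4.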
Multi-element stochastic spectral projection for high quantile estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, Jordan, E-mail: jordan.ko@mac.com; Garnier, Josselin
2013-06-15
We investigate quantile estimation by multi-element generalized Polynomial Chaos (gPC) metamodel where the exact numerical model is approximated by complementary metamodels in overlapping domains that mimic the model's exact response. The gPC metamodel is constructed by the non-intrusive stochastic spectral projection approach, and function evaluation on the gPC metamodel can be considered as essentially free. Thus, a large number of Monte Carlo samples from the metamodel can be used to estimate the α-quantile, for moderate values of α. As the gPC metamodel is an expansion about the means of the inputs, its accuracy may worsen away from these mean values, where the extreme events may occur. By increasing the approximation accuracy of the metamodel we may eventually improve the accuracy of quantile estimation, but this is very expensive. A multi-element approach is therefore proposed by combining a global metamodel in the standard normal space with supplementary local metamodels constructed in bounded domains about the design points corresponding to the extreme events. To improve the accuracy and to minimize the sampling cost, sparse-tensor and anisotropic-tensor quadratures are tested in addition to the full-tensor Gauss quadrature in the construction of local metamodels; different bounds of the gPC expansion are also examined. The global and local metamodels are combined in the multi-element gPC (MEgPC) approach, and it is shown that MEgPC can be more accurate than Monte Carlo or importance sampling methods for high quantile estimations for input dimensions roughly below N=8, a limit that is very much case- and α-dependent.
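A one-dimensional toy of the global-metamodel stage: the response is projected onto probabilists' Hermite polynomials by Gauss-Hermite quadrature, after which quantiles come from essentially free Monte Carlo sampling of the surrogate. The target function and expansion order are illustrative.

```python
# A 1D polynomial-chaos surrogate for quantile estimation (toy example).
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

model = lambda x: np.exp(0.5 * x) + 0.1 * x**2   # expensive-model stand-in

# Projection coefficients c_k = E[model(X) He_k(X)] / k!  for X ~ N(0,1).
nodes, weights = hermegauss(20)
weights = weights / np.sqrt(2 * np.pi)           # normalize to the N(0,1) measure
order = 8
c = []
for k in range(order + 1):
    ek = np.zeros(k + 1); ek[k] = 1.0            # coefficient vector for He_k
    c.append(np.sum(weights * model(nodes) * hermeval(nodes, ek))
             / math.factorial(k))

# "Free" evaluations on the metamodel for the high quantile.
xs = np.random.default_rng(1).standard_normal(1_000_000)
surrogate = hermeval(xs, np.array(c))
print("0.999-quantile (surrogate):", np.quantile(surrogate, 0.999))
print("0.999-quantile (direct MC):", np.quantile(model(xs), 0.999))
```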
An investigation of soil-structure interaction effects observed at the MIT Green Building
Taciroglu, Ertugrul; Çelebi, Mehmet; Ghahari, S. Farid; Abazarsa, Fariba
2016-01-01
The soil-foundation impedance function of the MIT Green Building is identified from its response signals recorded during an earthquake. Estimation of foundation impedance functions from seismic response signals is a challenging task, because: (1) the foundation input motions (FIMs) are not directly measurable, (2) the as-built properties of the super-structure are only approximately known, and (3) the soil-foundation impedance functions are inherently frequency-dependent. In the present study, the aforementioned difficulties are circumvented by using, in succession, a blind modal identification (BMID) method, a simplified Timoshenko beam model (TBM), and a parametric updating of transfer functions (TFs). First, the flexible-base modal properties of the building are identified from response signals using the BMID method. Then, a flexible-base TBM is updated using the identified modal data. Finally, the frequency-dependent soil-foundation impedance function is estimated by minimizing the discrepancy between TFs (of pairs of instrumented floors) that are obtained (1) experimentally from earthquake data and (2) analytically from the updated TBM. Using the fully identified flexible-base TBM, the FIMs as well as building responses at locations without instruments can be predicted, as demonstrated in the present study.
Tensor voting for image correction by global and local intensity alignment.
Jia, Jiaya; Tang, Chi-Keung
2005-01-01
This paper presents a voting method to perform image correction by global and local intensity alignment. The key to our modeless approach is the estimation of global and local replacement functions by reducing the complex estimation problem to the robust 2D tensor voting in the corresponding voting spaces. No complicated model for replacement function (curve) is assumed. Subject to the monotonic constraint only, we vote for an optimal replacement function by propagating the curve smoothness constraint using a dense tensor field. Our method effectively infers missing curve segments and rejects image outliers. Applications using our tensor voting approach are proposed and described. The first application consists of image mosaicking of static scenes, where the voted replacement functions are used in our iterative registration algorithm for computing the best warping matrix. In the presence of occlusion, our replacement function can be employed to construct a visually acceptable mosaic by detecting occlusion which has large and piecewise constant color. Furthermore, by the simultaneous consideration of color matches and spatial constraints in the voting space, we perform image intensity compensation and high contrast image correction using our voting framework, when only two defective input images are given.
Characterization of the LANDSAT sensors' spatial responses
NASA Technical Reports Server (NTRS)
Markham, B. L.
1984-01-01
The characteristics of the thematic mapper (TM) and multispectral scanner (MSS) sensors on LANDSATs 4 and 5 affecting their spatial responses are described, and functions defining the response of the system to an arbitrary input spatial pattern are derived, i.e., transfer functions (TF) and line spread functions (LSF). These design LSF's and TF's were modified based on prelaunch component and system measurements to provide improved estimates. Prelaunch estimates of LSF/TF's are compared to in-orbit estimates. For the MSS instruments, only limited prelaunch scan-direction square-wave response (SWR) data were available. Design estimates were modified by convolving in Gaussian blur until the derived LSF/TF's produced SWR's comparable to the measurements. The two MSS instruments were comparable at their temperatures of best focus; separate calculations were performed for bands 1 and 3, band 2, and band 4. The pre-sample nadir effective instantaneous fields of view (EIFOV's) based on the 0.5 modulation transfer function (MTF) criterion vary from 70 to 75 meters in the track direction and 79 to 82 meters in the scan direction. For the TM instruments, more extensive prelaunch measurements were available. Bands 1 to 4, 5 and 7, and 6 were handled separately, as were the two instruments. Derived MTF's indicate nadir pre-sample EIFOV's of 32 to 33 meters track (bands 1 to 5, 7), 36 meters scan (bands 1 to 5, 7), 124.5 meters track (band 6), and 141 meters scan (band 6) for both TM's.
Input design for identification of aircraft stability and control derivatives
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Hall, W. E., Jr.
1975-01-01
An approach for designing inputs to identify stability and control derivatives from flight test data is presented. This approach is based on finding inputs which provide the maximum possible accuracy of derivative estimates. Two techniques of input specification are implemented for this objective: a time domain technique and a frequency domain technique. The time domain technique gives the control input time history and can be used for any allowable duration of test maneuver, including those where data lengths can only be of short duration. The frequency domain technique specifies the input frequency spectrum, and is best applied for tests where extended data lengths, much longer than the time constants of the modes of interest, are possible. These techniques are used to design inputs to identify parameters in longitudinal and lateral linear models of conventional aircraft. The constraints of aircraft response limits, such as on structural loads, are realized indirectly through a total energy constraint on the input. Tests with simulated data and theoretical predictions show that the new approaches give input signals which can provide more accurate parameter estimates than can conventional inputs of the same total energy. Results obtained indicate that the approach has been brought to the point where it should be used on flight tests for further evaluation.
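The accuracy criterion can be illustrated on a first-order model: the Fisher information about a parameter is the summed squared sensitivity of the response, so equal-energy candidate inputs can be ranked by their Cramer-Rao variance bounds. The system and signals below are invented and far simpler than aircraft dynamics.

```python
# A toy input-design comparison via Fisher information / Cramer-Rao bounds.
import numpy as np

def fisher_info_a(u, a=0.9, b=1.0, sigma=0.1):
    """Information about `a` in x[k+1] = a*x[k] + b*u[k], y[k] = x[k] + noise."""
    x, s, J = 0.0, 0.0, 0.0       # state, sensitivity dx/da, information
    for uk in u:
        J += (s / sigma) ** 2     # accumulate squared output sensitivity
        s = a * s + x             # sensitivity propagation
        x = a * x + b * uk
    return J

N = 50
step = np.ones(N)
doublet = np.concatenate([np.ones(N // 2), -np.ones(N // 2)])
for name, u in [("step", step), ("doublet", doublet)]:
    u = u / np.linalg.norm(u)     # impose equal input energy
    print(f"{name}: CRLB var(a) >= {1.0 / fisher_info_a(u):.4f}")
```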
NASA Technical Reports Server (NTRS)
Hughes, D. L.; Ray, R. J.; Walton, J. T.
1985-01-01
The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
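The influence-coefficient idea reduces to a finite-difference exercise: perturb each input by 1.0 percent, recompute net thrust, and record the percent change. The thrust relation below is a made-up stand-in for the engine calculation, not the F404 model.

```python
# A tiny influence-coefficient sketch with a hypothetical thrust relation.
def net_thrust(p):
    # invented dependence on total pressure, total temperature, fuel flow
    return 100.0 * p["pt"] ** 0.8 * p["wf"] ** 0.3 / p["tt"] ** 0.5

base = {"pt": 2.0, "tt": 1.2, "wf": 1.0}
f0 = net_thrust(base)
for name in base:
    pert = dict(base, **{name: base[name] * 1.01})   # +1% perturbation
    dF = 100.0 * (net_thrust(pert) - f0) / f0
    print(f"+1% in {name}: {dF:+.2f}% change in net thrust")
```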
Kratochvíla, Jiří; Jiřík, Radovan; Bartoš, Michal; Standara, Michal; Starčuk, Zenon; Taxt, Torfinn
2016-03-01
One of the main challenges in quantitative dynamic contrast-enhanced (DCE) MRI is estimation of the arterial input function (AIF). Usually, the signal from a single artery (ignoring contrast dispersion, partial volume effects and flow artifacts) or a population average of such signals (also ignoring variability between patients) is used. Multi-channel blind deconvolution is an alternative approach avoiding most of these problems. The AIF is estimated directly from the measured tracer concentration curves in several tissues. This contribution extends the published methods of multi-channel blind deconvolution by applying a more realistic model of the impulse residue function, the distributed capillary adiabatic tissue homogeneity model (DCATH). In addition, an alternative AIF model is used and several AIF-scaling methods are tested. The proposed method is evaluated on synthetic data with respect to the number of tissue regions and to the signal-to-noise ratio. Evaluation on clinical data (renal cell carcinoma patients before and after the beginning of the treatment) gave consistent results. An initial evaluation on clinical data indicates more reliable and less noise sensitive perfusion parameter estimates. Blind multi-channel deconvolution using the DCATH model might be a method of choice for AIF estimation in a clinical setup. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Li, Yinlin; Kundu, Bijoy K.
2018-03-01
The three-compartment model with spillover (SP) and partial volume (PV) corrections has been widely used for noninvasive kinetic parameter studies of dynamic 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography images of small animal hearts in vivo. However, the approach still suffers from estimation uncertainty or slow convergence caused by the commonly used optimization algorithms. The aim of this study was to develop an improved optimization algorithm with better estimation performance. Femoral artery blood samples, image-derived input functions from heart ventricles and myocardial time-activity curves (TACs) were derived from data on 16 C57BL/6 mice obtained from the UCLA Mouse Quantitation Program. Parametric equations of the average myocardium and the blood pool TACs with SP and PV corrections in a three-compartment tracer kinetic model were formulated. A hybrid method integrating artificial immune-system and interior-reflective Newton methods was developed to solve the equations. Two penalty functions and one late time-point tail vein blood sample were used to constrain the objective function. The estimation accuracy of the method was validated by comparing results with experimental values using the errors in the areas under curves (AUCs) of the model-corrected input function (MCIF) and the 18F-FDG influx constant Ki. Moreover, the elapsed time was used to measure the convergence speed. The overall AUC error of MCIF for the 16 mice averaged -1.4 ± 8.2%, with correlation coefficients of 0.9706. Similar results can be seen in the overall Ki error percentage, which was 0.4 ± 5.8% with a correlation coefficient of 0.9912. The t-test P value for both showed no significant difference. The mean and standard deviation of the MCIF AUC and Ki percentage errors have lower values compared to the previously published methods. The computation time of the hybrid method is also several times lower than using just a stochastic algorithm. The proposed method significantly improved the model estimation performance in terms of the accuracy of the MCIF and Ki, as well as the convergence speed.
F-111C Flight Data Reduction and Analysis Procedures
1990-12-01
[Only fragments of this report's abstract survived extraction: a table of input/output signal flags (BPHI, BTHE, BPSI, BH, LVEL, LBET, LALP, AX, AV, and others), a reference to Appendix G, "A priori Data from Six Degree of Freedom Flight Dynamic Model," and mention of a maximum-likelihood Gauss-Newton algorithm fitting the mathematical model response to measured aircraft response.]
Efficiency and Productivity Analysis of Multidivisional Firms
NASA Astrophysics Data System (ADS)
Gong, Binlei
Multidivisional firms are those that have footprints in multiple segments and hence use multiple technologies to convert inputs to outputs, which makes it difficult to estimate the resource allocations, aggregated production functions, and technical efficiencies of such companies. This dissertation aims to explore and reveal this unobserved information through several parametric and semiparametric stochastic frontier analyses and other structural models. In the empirical study, the dissertation analyzes productivity and efficiency for firms in the global oilfield market.
Automated Quantitative Nuclear Cardiology Methods
Motwani, Manish; Berman, Daniel S.; Germano, Guido; Slomka, Piotr J.
2016-01-01
Quantitative analysis of SPECT and PET has become a major part of nuclear cardiology practice. Current software tools can automatically segment the left ventricle, quantify function, establish myocardial perfusion maps and estimate global and local measures of stress/rest perfusion – all with minimal user input. State-of-the-art automated techniques have been shown to offer high diagnostic accuracy for detecting coronary artery disease, as well as predict prognostic outcomes. This chapter briefly reviews these techniques, highlights several challenges and discusses the latest developments. PMID:26590779
Hendrickson, Phillip J; Yu, Gene J; Song, Dong; Berger, Theodore W
2016-01-01
This paper describes a million-plus granule cell compartmental model of the rat hippocampal dentate gyrus, including excitatory, perforant path input from the entorhinal cortex, and feedforward and feedback inhibitory input from dentate interneurons. The model includes experimentally determined morphological and biophysical properties of granule cells, together with glutamatergic AMPA-like EPSP and GABAergic GABAA-like IPSP synaptic excitatory and inhibitory inputs, respectively. Each granule cell was composed of approximately 200 compartments having passive and active conductances distributed throughout the somatic and dendritic regions. Modeling excitatory input from the entorhinal cortex was guided by axonal transport studies documenting the topographical organization of projections from subregions of the medial and lateral entorhinal cortex, plus other important details of the distribution of glutamatergic inputs to the dentate gyrus. Information contained within previously published maps of this major hippocampal afferent were systematically converted to scales that allowed the topographical distribution and relative synaptic densities of perforant path inputs to be quantitatively estimated for inclusion in the current model. Results showed that when medial and lateral entorhinal cortical neurons maintained Poisson random firing, dentate granule cells expressed, throughout the million-cell network, a robust nonrandom pattern of spiking best described as a spatiotemporal "clustering." To identify the network property or properties responsible for generating such firing "clusters," we progressively eliminated from the model key mechanisms, such as feedforward and feedback inhibition, intrinsic membrane properties underlying rhythmic burst firing, and/or topographical organization of entorhinal afferents. Findings conclusively identified topographical organization of inputs as the key element responsible for generating a spatiotemporal distribution of clustered firing. These results uncover a functional organization of perforant path afferents to the dentate gyrus not previously recognized: topography-dependent clusters of granule cell activity as "functional units" or "channels" that organize the processing of entorhinal signals. This modeling study also reveals for the first time how a global signal processing feature of a neural network can evolve from one of its underlying structural characteristics.
Edison, Paul; Brooks, David J; Turkheimer, Federico E; Archer, Hilary A; Hinz, Rainer
2009-11-01
Pittsburgh compound B or [11C]PIB is an amyloid imaging agent which shows a clear differentiation between subjects with Alzheimer's disease (AD) and controls. However, the observed signal difference in other forms of dementia such as dementia with Lewy bodies (DLB) is smaller, and mildly cognitively impaired (MCI) subjects and some healthy elderly controls may show intermediate levels of [11C]PIB binding. The cerebellum, a commonly used reference region for non-specific tracer uptake in [11C]PIB studies in AD, may not be valid in prion disorders or monogenic forms of AD. The aims of this work were to: (1) compare methods for generating parametric maps of [11C]PIB retention in tissue using a plasma input function with respect to their ability to discriminate between AD subjects and controls, and (2) estimate the test-retest reproducibility in AD subjects. Twelve AD subjects (5 of whom underwent a repeat scan within 6 weeks) and 10 control subjects had 90-minute [11C]PIB dynamic PET scans, and arterial plasma input functions were measured. Parametric maps were generated with graphical analysis of reversible binding (Logan plot), irreversible binding (Patlak plot), and spectral analysis. Between-group differentiation was calculated using Student's t-test and comparisons between different methods were made using p values. Reproducibility was assessed by intraclass correlation coefficients (ICC). We found that the 75 min value of the impulse response function showed the best group differentiation and had a higher ICC than volume of distribution maps generated from Logan and spectral analysis. Patlak analysis of [11C]PIB binding was the least reproducible.
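As a concrete reference for the graphical analysis of reversible binding mentioned above, here is a minimal numpy sketch of the Logan plot, which estimates the total volume of distribution V_T as the late-time slope; the time grid, tissue curve ct, plasma input cp, and the linearity start time t_star are assumed inputs.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def logan_vt(t, ct, cp, t_star):
    """Logan plot: y = int(ct)/ct vs x = int(cp)/ct; late-time slope = V_T.
    t: frame mid-times; ct: tissue TAC; cp: metabolite-corrected plasma TAC."""
    int_ct = cumulative_trapezoid(ct, t, initial=0.0)
    int_cp = cumulative_trapezoid(cp, t, initial=0.0)
    late = t >= t_star
    slope, intercept = np.polyfit(int_cp[late] / ct[late],
                                  int_ct[late] / ct[late], 1)
    return slope  # V_T (intercept is the Logan intercept b)
```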
40 CFR 98.335 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... missing data. For the carbon input procedure in § 98.333(b), a complete record of all measured parameters... average carbon contents of inputs according to the procedures in § 98.335(b) if data are missing. (b) For...
40 CFR 98.335 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... missing data. For the carbon input procedure in § 98.333(b), a complete record of all measured parameters... average carbon contents of inputs according to the procedures in § 98.335(b) if data are missing. (b) For...
40 CFR 98.335 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... missing data. For the carbon input procedure in § 98.333(b), a complete record of all measured parameters... average carbon contents of inputs according to the procedures in § 98.335(b) if data are missing. (b) For...
40 CFR 98.335 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... missing data. For the carbon input procedure in § 98.333(b), a complete record of all measured parameters... average carbon contents of inputs according to the procedures in § 98.335(b) if data are missing. (b) For...
40 CFR 98.335 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... missing data. For the carbon input procedure in § 98.333(b), a complete record of all measured parameters... average carbon contents of inputs according to the procedures in § 98.335(b) if data are missing. (b) For...
Cullings, H M; Grant, E J; Egbert, S D; Watanabe, T; Oda, T; Nakamura, F; Yamashita, T; Fuchi, H; Funamoto, S; Marumo, K; Sakata, R; Kodama, Y; Ozasa, K; Kodama, K
2017-01-01
Individual dose estimates calculated by Dosimetry System 2002 (DS02) for the Life Span Study (LSS) of atomic bomb survivors are based on input data that specify location and shielding at the time of the bombing (ATB). A multi-year effort to improve information on survivors' locations ATB has recently been completed, along with comprehensive improvements in their terrain shielding input data and several improvements to computational algorithms used in combination with DS02 at RERF. Improvements began with a thorough review and prioritization of original questionnaire data on location and shielding that were taken from survivors or their proxies in the period 1949-1963. Related source documents varied in level of detail, from relatively simple lists to carefully-constructed technical drawings of structural and other shielding and surrounding neighborhoods. Systematic errors were reduced in this work by restoring the original precision of map coordinates that had been truncated due to limitations in early data processing equipment and by correcting distortions in the old (WWII-era) maps originally used to specify survivors' positions, among other improvements. Distortion errors were corrected by aligning the old maps and neighborhood drawings to orthophotographic mosaics of the cities that were newly constructed from pre-bombing aerial photographs. Random errors that were reduced included simple transcription errors and mistakes in identifying survivors' locations on the old maps. Terrain shielding input data that had been originally estimated for limited groups of survivors using older methods and data sources were completely re-estimated for all survivors using new digital terrain elevation data. Improvements to algorithms included a fix to an error in the DS02 code for coupling house and terrain shielding, a correction for elevation at the survivor's location in calculating angles to the horizon used for terrain shielding input, an improved method for truncating high dose estimates to 4 Gy to reduce the effect of dose error, and improved methods for calculating averaged shielding transmission factors that are used to calculate doses for survivors without detailed shielding input data. Input data changes are summarized and described here in some detail, along with the resulting changes in dose estimates and a simple description of changes in risk estimates for solid cancer mortality. This and future RERF publications will refer to the new dose estimates described herein as "DS02R1 doses."
Harbaugh, Arien W.
2011-01-01
The MFI2005 data-input (entry) program was developed for use with the U.S. Geological Survey modular three-dimensional finite-difference groundwater model, MODFLOW-2005. MFI2005 runs on personal computers and is designed to be easy to use; data are entered interactively through a series of display screens. MFI2005 supports parameter estimation with the UCODE_2005 program. Data for MODPATH, a particle-tracking program for use with MODFLOW-2005, also can be entered using MFI2005. MFI2005 can be used in conjunction with other data-input programs so that the different parts of a model dataset can be entered by using the most suitable program.
Toward an inventory of nitrogen input to the United States
Accurate accounting of nitrogen inputs is increasingly necessary for policy decisions related to aquatic nutrient pollution. Here we synthesize available data to provide the first integrated estimates of the amount and uncertainty of nitrogen inputs to the United States. Abou...
Nagaraja, Tavarekere N.; Karki, Kishor; Ewing, James R.; Divine, George W.; Fenstermacher, Joseph D.; Patlak, Clifford S.; Knight, Robert A.
2009-01-01
The hypothesis that the arterial input function (AIF) of gadolinium-diethylenetriaminepentaacetic acid (Gd-DTPA) injected by intravenous (iv) bolus and measured by the change in the T1-relaxation rate (ΔR1; R1=1/T1) of superior sagittal sinus blood (AIF-I) approximates the AIF of 14C-labeled Gd-DTPA measured in arterial blood (AIF*) was tested in a rat stroke model (n=13). Contrary to the hypothesis, the initial part of the ΔR1-time curve was underestimated, and the area under the normalized curve for AIF-I was about 15% lower than that for AIF*, the reference AIF. Hypothetical AIFs for Gd-DTPA (AIF-II) were derived from the AIF* values and averaged to obtain AIF-III. Influx rate constants (Ki) and proton distribution volumes at zero time (Vp+Vo) were estimated with Patlak plots of AIF-I, -II and -III and tissue ΔR1 data. For the regions of interest, the Ki values estimated with AIF-I were slightly but not significantly higher than those obtained with AIF-II and AIF-III. In contrast, Vp+Vo was significantly higher when calculated with AIF-I. Similar estimates of Ki and Vp+Vo were obtained with AIF-II and AIF-III. In summary, AIF-I underestimated the reference AIF (AIF*); this shortcoming had little effect on the Ki calculated by Patlak plot but produced a significant overestimation of Vp+Vo. PMID:20512853
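The Patlak estimates of Ki and Vp+Vo above come from a linear fit of ct/cp against the normalized plasma integral; a minimal sketch, assuming decay-corrected curves on a common time grid:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def patlak(t, ct, cp, t_star):
    """Patlak plot for irreversible uptake: after t_star, ct(t)/cp(t) is
    linear in int_0^t cp / cp(t); slope = Ki, intercept = V0 (+ Vp)."""
    int_cp = cumulative_trapezoid(cp, t, initial=0.0)
    late = t >= t_star
    ki, v0 = np.polyfit(int_cp[late] / cp[late], ct[late] / cp[late], 1)
    return ki, v0
```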
NASA Astrophysics Data System (ADS)
Tang, W.; Qin, J.; Yang, K.; Liu, S.; Lu, N.; Niu, X.
2015-12-01
Cloud parameters (cloud mask, effective particle radius and liquid/ice water path) are important inputs in determining surface solar radiation (SSR). These parameters can be derived from MODIS with high accuracy, but their temporal resolution is too low to obtain high temporal resolution SSR retrievals. In order to obtain hourly cloud parameters, an Artificial Neural Network (ANN) is applied in this study to directly construct a functional relationship between MODIS cloud products and Multi-functional Transport Satellite (MTSAT) geostationary satellite signals. Meanwhile, an efficient parameterization model for SSR retrieval is introduced; when driven with MODIS atmospheric and land products, its root mean square error (RMSE) is about 100 W m-2 for 44 Baseline Surface Radiation Network (BSRN) stations. Once the estimated cloud parameters and other information (such as aerosol, precipitable water, ozone and so on) are input to the model, we can derive SSR at high spatio-temporal resolution. The retrieved SSR is first evaluated against hourly radiation data at three experimental stations in the Haihe River Basin of China. The mean bias error (MBE) and RMSE in hourly SSR estimates are 12.0 W m-2 (or 3.5%) and 98.5 W m-2 (or 28.9%), respectively. The retrieved SSR is also evaluated against daily radiation data at 90 China Meteorological Administration (CMA) stations. The MBE is 9.8 W m-2 (5.4%); the RMSEs in daily and monthly-mean SSR estimates are 34.2 W m-2 (19.1%) and 22.1 W m-2 (12.3%), respectively. The accuracy is comparable to, or even higher than, that of two other radiation products (GLASS and ISCCP-FD), while the present method is more computationally efficient and can produce hourly SSR data at a spatial resolution of 5 km.
NASA Technical Reports Server (NTRS)
Kihm, Frederic; Rizzi, Stephen A.; Ferguson, Neil S.; Halfpenny, Andrew
2013-01-01
High cycle fatigue of metals typically occurs through long term exposure to time varying loads which, although modest in amplitude, give rise to microscopic cracks that can ultimately propagate to failure. The fatigue life of a component is primarily dependent on the stress amplitude response at critical failure locations. For most vibration tests, it is common to assume a Gaussian distribution of both the input acceleration and stress response. In real life, however, it is common to experience non-Gaussian acceleration input, and this can cause the response to be non-Gaussian. Examples of non-Gaussian loads include road irregularities such as potholes in the automotive world or turbulent boundary layer pressure fluctuations for the aerospace sector or more generally wind, wave or high amplitude acoustic loads. The paper first reviews some of the methods used to generate non-Gaussian excitation signals with a given power spectral density and kurtosis. The kurtosis of the response is examined once the signal is passed through a linear time invariant system. Finally an algorithm is presented that determines the output kurtosis based upon the input kurtosis, the input power spectral density and the frequency response function of the system. The algorithm is validated using numerical simulations. Direct applications of these results include improved fatigue life estimations and a method to accelerate shaker tests by generating high kurtosis, non-Gaussian drive signals.
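A numerical illustration of the input-to-output kurtosis relationship described above, using the classical closed form for an i.i.d. non-Gaussian input through an FIR filter (the paper's algorithm also handles colored inputs via the PSD; the filter here is a hypothetical exponential decay):

```python
import numpy as np
from scipy.signal import lfilter
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
x = rng.laplace(size=500_000)            # i.i.d. input, excess kurtosis ~3
h = np.exp(-0.05 * np.arange(200))       # hypothetical FIR impulse response
y = lfilter(h, [1.0], x)

# For i.i.d. input, excess kurtosis out = excess in * sum(h^4)/sum(h^2)^2,
# so a long filter "Gaussianizes" the response.
g_pred = kurtosis(x) * np.sum(h**4) / np.sum(h**2) ** 2
print(g_pred, kurtosis(y[len(h):]))      # prediction vs simulation
```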
Artificial neural network model for ozone concentration estimation and Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Gao, Meng; Yin, Liting; Ning, Jicai
2018-07-01
Air pollution in the urban atmosphere directly affects public health; therefore, it is essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict the ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predictive capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday or regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predictive capability of the parsimonious ANN model was acceptable. Its predictive capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
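A minimal sketch of such a parsimonious ANN with a Monte Carlo uncertainty loop, assuming synthetic data and seven unnamed input columns (the study's actual variables and network settings are not reproduced):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical design matrix: 6 meteorological variables + day-category code.
# X: (n_days, 7), y: daily ozone. Names and values are placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 7))
y = 40 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=5, size=500)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                   random_state=0))
model.fit(X, y)

# Monte Carlo uncertainty: perturb inputs within an assumed measurement error.
x0 = X[:1]
draws = [model.predict(x0 + rng.normal(scale=0.1, size=x0.shape))[0]
         for _ in range(1000)]
print(np.mean(draws), np.percentile(draws, [2.5, 97.5]))
```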
Estimating outcomes in newborn infants using fuzzy logic
Chaves, Luciano Eustáquio; Nascimento, Luiz Fernando C.
2014-01-01
OBJECTIVE: To build a linguistic model using the properties of fuzzy logic to estimate the risk of death of neonates admitted to a Neonatal Intensive Care Unit. METHODS: Computational model using fuzzy logic. The input variables of the model were birth weight, gestational age, 5th-minute Apgar score and inspired fraction of oxygen in newborn infants admitted to a Neonatal Intensive Care Unit of Taubaté, Southeast Brazil. The output variable was the risk of death, estimated as a percentage. Three membership functions related to birth weight, gestational age and 5th-minute Apgar score were built, as well as two functions related to the inspired fraction of oxygen; the risk presented five membership functions. The model was developed using Mamdani inference by means of Matlab® software. The model values were compared with those provided by experts and model performance was estimated by a ROC curve. RESULTS: 100 newborns were included, eight of whom died. The model estimated an average possibility of death of 49.7±29.3%, and the possibility of hospital discharge was 24±17.5%. These values differ when compared by Student's t-test (p<0.001). The correlation test revealed r=0.80 and the performance of the model was 81.9%. CONCLUSIONS: This predictive, non-invasive and low-cost model showed good accuracy and can be applied in neonatal care, given its ease of use. PMID:25119746
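A bare-bones Mamdani inference step in the same spirit (triangular memberships, min inference, max aggregation, centroid defuzzification); all breakpoints, rules, and input values below are illustrative, not the paper's:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

weight, apgar = 1200.0, 4.0                    # grams, 5th-minute score
mu_low_weight = tri(weight, 0, 500, 1500)      # "low birth weight"
mu_low_apgar = tri(apgar, 0, 2, 6)             # "low Apgar"

risk = np.linspace(0, 100, 501)                # risk-of-death universe, %
high_risk = tri(risk, 50, 100, 150)
low_risk = tri(risk, -40, 0, 50)

rule1 = min(mu_low_weight, mu_low_apgar)       # IF low weight AND low Apgar
rule2 = 1.0 - mu_low_weight                    # IF weight is not low
agg = np.maximum(np.minimum(rule1, high_risk), np.minimum(rule2, low_risk))
centroid = (agg * risk).sum() / agg.sum()      # defuzzified risk estimate
print(f"estimated risk of death: {centroid:.1f}%")
```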
Galka, Andreas; Siniatchkin, Michael; Stephani, Ulrich; Groening, Kristina; Wolff, Stephan; Bosch-Bayard, Jorge; Ozaki, Tohru
2010-12-01
The analysis of time series obtained by functional magnetic resonance imaging (fMRI) may be approached by fitting predictive parametric models, such as nearest-neighbor autoregressive models with exogenous input (NNARX). As part of the modeling procedure, it is possible to apply instantaneous linear transformations to the data. Spatial smoothing, a common preprocessing step, may be interpreted as such a transformation. The autoregressive parameters may be constrained such that they provide a response behavior that corresponds to the canonical haemodynamic response function (HRF). We present an algorithm for estimating the parameters of the linear transformations and of the HRF within a rigorous maximum-likelihood framework. Using this approach, an optimal amount of both the spatial smoothing and the HRF can be estimated simultaneously for a given fMRI data set. An example from a motor-task experiment is discussed. It is found that, for this data set, weak, but non-zero, spatial smoothing is optimal. Furthermore, it is demonstrated that activated regions can be estimated within the maximum-likelihood framework.
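For reference, the canonical HRF constraint mentioned above is usually implemented as a double-gamma function; a sketch with SPM-style default shape parameters (an assumption here, not taken from the paper):

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(t, peak=6.0, undershoot=16.0, ratio=1.0 / 6.0):
    """Double-gamma canonical HRF: response gamma minus scaled undershoot."""
    h = gamma.pdf(t, peak) - ratio * gamma.pdf(t, undershoot)
    return h / np.abs(h).max()

t = np.arange(0.0, 32.0, 0.1)      # seconds
hrf = canonical_hrf(t)             # convolve with a stimulus train to model BOLD
```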
African crop yield reductions due to increasingly unbalanced Nitrogen and Phosphorus consumption
NASA Astrophysics Data System (ADS)
van der Velde, Marijn; Folberth, Christian; Balkovič, Juraj; Ciais, Philippe; Fritz, Steffen; Janssens, Ivan A.; Obersteiner, Michael; See, Linda; Skalský, Rastislav; Xiong, Wei; Peñuelas, Josep
2014-05-01
The impact of soil nutrient depletion on crop production has been known for decades, but robust assessments of the impact of increasingly unbalanced nitrogen (N) and phosphorus (P) application rates on crop production are lacking. Here, we use crop response functions based on 741 FAO maize crop trials and EPIC crop modeling across Africa to examine maize yield deficits resulting from unbalanced N:P applications under low, medium, and high input scenarios, for past (1975), current, and future N:P mass ratios of, respectively, 1:0.29, 1:0.15, and 1:0.05. At low N inputs (10 kg/ha), current yield deficits amount to 10% but will increase up to 27% under the assumed future N:P ratio, while at medium N inputs (50 kg N/ha), future yield losses could amount to over 40%. The EPIC crop model was then used to simulate maize yields across Africa. The model results showed relative median future yield reductions of 40% at low N inputs, and 50% at medium and high inputs, albeit with large spatial variability. Dominant low-quality soils such as Ferralsols, which strongly adsorb P, and Arenosols, with a low nutrient retention capacity, are associated with a strong yield decline, although Arenosols show very variable crop yield losses at low inputs. Optimal N:P ratios, i.e. those where the lowest amount of applied P produces the highest yield (given N input), were calculated with EPIC to be as low as 1:0.5. Finally, we estimated the additional P required given current N inputs, and given N inputs that would allow Africa to close yield gaps (ca. 70%). At current N inputs, P consumption would have to increase 2.3-fold to be optimal, and 11.7-fold to close yield gaps. The P demand to overcome these yield deficits would put significant additional pressure on current global extraction of P resources.
NASA Astrophysics Data System (ADS)
Fattoruso, Grazia; Longobardi, Antonia; Pizzuti, Alfredo; Molinara, Mario; Marocco, Claudio; De Vito, Saverio; Tortorella, Francesco; Di Francia, Girolamo
2017-06-01
Rainfall data collected continuously by a distributed rain gauge network are instrumental to more effective hydro-geological risk forecasting and management services, although the estimated rainfall fields used as input suffer from prediction uncertainty. Optimal rain gauge networks can generate accurate estimated rainfall fields. In this research work, a methodology has been investigated for evaluating an optimal rain gauge network aimed at robust hydrogeological hazard investigations. The rain gauge network of the Sarno River basin (Southern Italy) has been evaluated by optimizing a two-objective function that maximizes the estimation accuracy and minimizes the total metering cost, using the variance reduction algorithm along with a (time-invariant) climatological variogram. This problem has been solved by using an enumerative search algorithm, which evaluates the exact Pareto front in an efficient computational time.
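A schematic version of the two-objective evaluation (total cost versus estimation accuracy): enumerate gauge subsets, score each, and keep the Pareto front. The variance proxy below stands in for the variance-reduction/variogram computation, and all coordinates and costs are synthetic:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
xy = rng.uniform(0, 50, size=(10, 2))        # candidate gauge coordinates, km
cost = rng.uniform(1, 3, size=10)            # per-gauge metering cost

def variance_proxy(idx, grid):
    # Mean squared distance from grid cells to the nearest kept gauge,
    # a crude stand-in for kriging estimation variance.
    d = np.linalg.norm(grid[:, None] - xy[list(idx)][None], axis=2)
    return np.mean(d.min(axis=1) ** 2)

grid = np.stack(np.meshgrid(np.linspace(0, 50, 20),
                            np.linspace(0, 50, 20)), -1).reshape(-1, 2)
scores = [(cost[list(s)].sum(), variance_proxy(s, grid), s)
          for k in range(1, 11) for s in combinations(range(10), k)]
pareto = [p for p in scores
          if not any(q[0] <= p[0] and q[1] <= p[1] and q[:2] != p[:2]
                     for q in scores)]
print(len(pareto), "Pareto-optimal networks")
```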
Illumination estimation via thin-plate spline interpolation.
Shi, Lilong; Xiong, Weihua; Funt, Brian
2011-05-01
Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k-medians clustering is applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training-set pruning significantly decreases the computation.
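A minimal sketch of the interpolation step using SciPy's thin-plate RBF, with a 2-D stand-in for the thumbnail features and synthetic chromaticities:

```python
import numpy as np
from scipy.interpolate import Rbf

# Each training image is reduced to a 2-D feature (a placeholder for the
# thumbnail features) and mapped to illumination chromaticity (r, g).
rng = np.random.default_rng(0)
feat = rng.uniform(size=(40, 2))
chrom_r = 0.3 + 0.2 * feat[:, 0] + 0.05 * rng.normal(size=40)
chrom_g = 0.3 + 0.1 * feat[:, 1] + 0.05 * rng.normal(size=40)

tps_r = Rbf(feat[:, 0], feat[:, 1], chrom_r, function='thin_plate')
tps_g = Rbf(feat[:, 0], feat[:, 1], chrom_g, function='thin_plate')
print(tps_r(0.5, 0.5), tps_g(0.5, 0.5))   # estimate for a new image
```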
Saxena, Anupam; Lipson, Hod; Valero-Cuevas, Francisco J.
2012-01-01
In systems and computational biology, much effort is devoted to functional identification of systems and networks at the molecular or cellular scale. However, similarly important networks exist at anatomical scales such as the tendon network of human fingers: the complex array of collagen fibers that transmits and distributes muscle forces to finger joints. This network is critical to the versatility of the human hand, and its function has been debated since at least the 16th century. Here, we experimentally infer the structure (both topology and parameter values) of this network through sparse interrogation with force inputs. A population of models representing this structure co-evolves in simulation with a population of informative future force inputs via the predator-prey estimation-exploration algorithm. Model fitness depends on their ability to explain experimental data, while the fitness of future force inputs depends on causing maximal functional discrepancy among current models. We validate our approach by inferring two known synthetic Latex networks, and one anatomical tendon network harvested from a cadaver's middle finger. We find that functionally similar but structurally diverse models can exist within a narrow range of the training set and cross-validation errors. For the Latex networks, models with low training set error [<4%] and resembling the known network have the smallest cross-validation errors [∼5%]. The low training set [<4%] and cross validation [<7.2%] errors for models for the cadaveric specimen demonstrate what, to our knowledge, is the first experimental inference of the functional structure of complex anatomical networks. This work expands current bioinformatics inference approaches by demonstrating that sparse, yet informative interrogation of biological specimens holds significant computational advantages in accurate and efficient inference over random testing, or assuming model topology and only inferring parameter values. These findings also hold clues to both our evolutionary history and the development of versatile machines. PMID:23144601
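A toy version of the estimation-exploration loop, with a hidden linear "network" in place of a full tendon-network model: candidate models are refitted to the interrogation history, and the next force input is the one on which current models disagree most. Everything here is schematic:

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)                      # hidden structure
experiment = lambda u: w_true @ u + 0.01 * rng.normal()

U, Y = [], []                                    # interrogation history
models = rng.normal(size=(20, 5))                # candidate model population
for step in range(15):
    cand = rng.normal(size=(200, 5))             # candidate next force inputs
    preds = models @ cand.T                      # each model's prediction
    u = cand[np.argmax(preds.std(axis=0))]       # maximal-disagreement input
    U.append(u); Y.append(experiment(u))
    if len(U) >= 5:                              # refit population by LSQ
        w_fit, *_ = np.linalg.lstsq(np.array(U), np.array(Y), rcond=None)
        models = w_fit + 0.1 * rng.normal(size=models.shape)
print(np.abs(models.mean(axis=0) - w_true).max())  # small after convergence
```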
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
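The core distinction between the two error types can be seen in a few lines: classical error (the measurement scatters around the truth) attenuates a regression slope, while Berkson error (the truth scatters around the assigned value) leaves the slope unbiased in a linear model. A simulation sketch, with arbitrary variances:

```python
import numpy as np

# Linear dose-response y = b*dose + noise, b = 2.
rng = np.random.default_rng(0)
n, b = 100_000, 2.0
x = rng.normal(1.0, 1.0, n)                      # true doses

w_classical = x + rng.normal(0, 1.0, n)          # classical: W = X + U
y = b * x + rng.normal(0, 0.5, n)
slope_c = np.cov(w_classical, y)[0, 1] / np.var(w_classical)

w_assigned = rng.normal(1.0, 1.0, n)             # e.g., group-average dose
x_berkson = w_assigned + rng.normal(0, 1.0, n)   # Berkson: X = W + U
y_b = b * x_berkson + rng.normal(0, 0.5, n)
slope_b = np.cov(w_assigned, y_b)[0, 1] / np.var(w_assigned)

print(slope_c, slope_b)  # ~1.0 (attenuated by reliability 0.5) vs ~2.0
```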
RRAWFLOW: Rainfall-Response Aquifer and Watershed Flow Model (v1.15)
Long, Andrew J.
2015-01-01
The Rainfall-Response Aquifer and Watershed Flow Model (RRAWFLOW) is a lumped-parameter model that simulates streamflow, spring flow, groundwater level, or solute transport for a measurement point in response to a system input of precipitation, recharge, or solute injection. I introduce the first version of RRAWFLOW available for download and public use and describe additional options. The open-source code is written in the R language and is available at http://sd.water.usgs.gov/projects/RRAWFLOW/RRAWFLOW.html along with an example model of streamflow. RRAWFLOW includes a time-series process to estimate recharge from precipitation and simulates the response to recharge by convolution, i.e., the unit-hydrograph approach. Gamma functions are used for estimation of parametric impulse-response functions (IRFs); a combination of two gamma functions results in a double-peaked IRF. A spline fit to a set of control points is introduced as a new method for estimation of nonparametric IRFs. Several options are included to simulate time-variant systems. For many applications, lumped models simulate the system response with equal accuracy to that of distributed models, but moreover, the ease of model construction and calibration of lumped models makes them a good choice for many applications (e.g., estimating missing periods in a hydrologic record). RRAWFLOW provides professional hydrologists and students with an accessible and versatile tool for lumped-parameter modeling.
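A minimal sketch of the gamma-IRF convolution at the heart of RRAWFLOW's unit-hydrograph approach (shape and scale values, and the recharge series, are illustrative only):

```python
import numpy as np
from scipy.stats import gamma

dt = 1.0                                   # time step, days
t = np.arange(0, 120, dt)
irf1 = gamma.pdf(t, a=3.0, scale=5.0)      # single-peaked gamma IRF
# A mix of two gammas gives the double-peaked IRF described above.
irf2 = 0.7 * irf1 + 0.3 * gamma.pdf(t, a=8.0, scale=8.0)

rng = np.random.default_rng(0)
recharge = np.maximum(rng.normal(0, 1, 365), 0)  # toy recharge input
flow = np.convolve(recharge, irf2)[:365] * dt    # simulated system response
```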
Improved protein model quality assessments by changing the target function.
Uziela, Karolis; Menéndez Hurtado, David; Shu, Nanjiang; Wallner, Björn; Elofsson, Arne
2018-06-01
Assessment of protein model quality is an important part of protein structure prediction. We have for more than a decade developed a set of methods for this problem. We have used various types of description of the protein and different machine learning methodologies. However, common to all these methods has been the target function used for training. The target function in ProQ describes the local quality of a residue in a protein model. In all versions of ProQ the target function has been the S-score. However, other quality estimation functions also exist, which can be divided into superposition- and contact-based methods. The superposition-based methods, such as the S-score, are based on a rigid-body superposition of a protein model and the native structure, while the contact-based methods compare the local environment of each residue. Here, we examine the effects of retraining our latest predictor, ProQ3D, using identical inputs but different target functions. We find that the contact-based methods are easier to predict and that predictors trained on these measures provide some advantages when it comes to identifying the best model. One possible reason for this is that contact-based methods are better at estimating the quality of multi-domain targets. However, training on the S-score gives the best correlation with the GDT_TS score, which is commonly used in CASP to score global model quality. To take advantage of both of these features, we provide an updated version of ProQ3D that predicts local and global model quality based on different quality estimates. © 2018 Wiley Periodicals, Inc.
Hamilton, Matthew B; Tartakovsky, Maria; Battocletti, Amy
2018-05-01
The genetic effective population size, Ne, can be estimated from the average gametic disequilibrium (r̂²) between pairs of loci, but such estimates require evaluation of assumptions and currently have few methods to estimate confidence intervals. speed-ne is a suite of matlab computer code functions to estimate N̂e from r̂² with a graphical user interface and a rich set of outputs that aid in understanding data patterns and comparing multiple estimators. speed-ne includes functions to either generate or input simulated genotype data to facilitate comparative studies of N̂e estimators under various population genetic scenarios. speed-ne was validated with data simulated under both time-forward and time-backward coalescent models of genetic drift. Three classes of estimators were compared with simulated data to examine several general questions: what are the impacts of microsatellite null alleles on N̂e, how should missing data be treated, and does disequilibrium contributed by reduced recombination among some loci in a sample impact N̂e. Estimators differed greatly in precision in the scenarios examined, and a widely employed N̂e estimator exhibited the largest variances among replicate data sets. speed-ne implements several jackknife approaches to estimate confidence intervals, and simulated data showed that jackknifing over loci and jackknifing over individuals provided ~95% confidence interval coverage for some estimators and should be useful for empirical studies. speed-ne provides an open-source extensible tool for estimation of Ne from empirical genotype data and to conduct simulations of both microsatellite and single nucleotide polymorphism (SNP) data types to develop expectations and to compare Ne estimators. © 2018 John Wiley & Sons Ltd.
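For orientation, a moment-based skeleton of the disequilibrium estimator (after Hill 1981): with sample size S and unlinked loci, E[r̂²] ≈ 1/(3Ne) + 1/S. speed-ne implements bias-corrected variants and jackknife confidence intervals; this uncorrected form is only a sketch:

```python
import numpy as np

def ne_from_r2(r2_values, sample_size):
    """Uncorrected moment estimator: Ne ~ 1 / (3 * (mean r^2 - 1/S))."""
    r2_adj = np.mean(r2_values) - 1.0 / sample_size
    return np.inf if r2_adj <= 0 else 1.0 / (3.0 * r2_adj)

print(ne_from_r2(np.array([0.012, 0.010, 0.015]), sample_size=200))
```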
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor); Awwal, Abdul A. S. (Inventor); Karim, Mohammad A. (Inventor)
1993-01-01
An inner-product array processor is provided with thresholding of the inner product during each iteration to make more significant the inner product employed in estimating a vector to be used as the input vector for the next iteration. While stored vectors and estimated vectors are represented in bipolar binary (1,-1), only those elements of an initial partial input vector that are believed to be common with those of a stored vector are represented in bipolar binary; the remaining elements of a partial input vector are set to 0. This mode of representation, in which the known elements of a partial input vector are in bipolar binary form and the remaining elements are set equal to 0, is referred to as trinary representation. The initial inner products corresponding to the partial input vector will then be equal to the number of known elements. Inner-product thresholding is applied to accelerate convergence and to avoid convergence to a negative input product.
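A numpy sketch of the thresholded inner-product recall described above, with bipolar stored vectors and a trinary partial input (known elements ±1, unknown elements 0); the store size and threshold fraction are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.choice([-1, 1], size=(8, 64))            # stored bipolar vectors

x = np.zeros(64)
x[:24] = M[3, :24]                               # 24 known elements of vector 3

for _ in range(5):
    ip = M @ x                                   # inner products with store
    # Initially ip[3] equals the number of known elements (24), as the
    # abstract notes; weak matches are zeroed before the next estimate.
    ip[ip < 0.8 * ip.max()] = 0
    x = np.sign(M.T @ ip)                        # next bipolar estimate
print(np.array_equal(x, M[3]))                   # converged to stored vector?
```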
How much to trust the senses: Likelihood learning
Sato, Yoshiyuki; Kording, Konrad P.
2014-01-01
Our brain often needs to estimate unknown variables from imperfect information. Our knowledge about the statistical distributions of quantities in our environment (called priors) and currently available information from sensory inputs (called likelihood) are the basis of all Bayesian models of perception and action. While we know that priors are learned, most studies of prior-likelihood integration simply assume that subjects know about the likelihood. However, as the quality of sensory inputs changes over time, we also need to learn about new likelihoods. Here, we show that human subjects readily learn the distribution of visual cues (likelihood function) in a way that can be predicted by models of statistically optimal learning. Using a likelihood that depended on color context, we found that a learned likelihood generalized to new priors. Thus, we conclude that subjects learn about likelihood. PMID:25398975
Speech versus manual control of camera functions during a telerobotic task
NASA Technical Reports Server (NTRS)
Bierschwale, John M.; Sampaio, Carlos E.; Stuart, Mark A.; Smith, Randy L.
1989-01-01
Voice input for control of camera functions was investigated in this study. Objectives were to (1) assess the feasibility of a voice-commanded camera control system, and (2) identify factors that differ between voice and manual control of camera functions. Subjects participated in a remote manipulation task that required extensive camera-aided viewing. Each subject was exposed to two conditions, voice and manual input, with a counterbalanced administration order. Voice input was found to be significantly slower than manual input for this task. However, in terms of remote manipulator performance errors and subject preference, there was no difference between modalities. Voice control of continuous camera functions is not recommended. It is believed that the use of voice input for discrete functions, such as multiplexing or camera switching, could aid performance. Hybrid mixes of voice and manual input may provide the best use of both modalities. This report contributes to a better understanding of the issues that affect the design of an efficient human/telerobot interface.
Estimating Fluctuating Pressures From Distorted Measurements
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Leondes, Cornelius T.
1994-01-01
Two algorithms extract estimates of time-dependent input (upstream) pressures from outputs of pressure sensors located at downstream ends of pneumatic tubes. The algorithms effect deconvolutions that account for the distorting effects of the tube upon the pressure signal. Distortion of pressure measurements by pneumatic tubes is also discussed in "Distortion of Pressure Signals in Pneumatic Tubes" (ARC-12868). The time-varying input pressure is estimated from the measured time-varying output pressure by one of two deconvolution algorithms that take account of measurement noise. The algorithms are based on minimum-covariance (Kalman filtering) theory.
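A minimal Kalman-filter deconvolution in the spirit of the approach described above, under an assumed first-order tube lag and a random-walk model for the unknown input; this illustrates the idea only and is not the ARC algorithm itself:

```python
import numpy as np

# Tube model (assumed): y[k+1] = a*y[k] + (1-a)*u[k]; measure z[k] = y[k] + v.
a = 0.9
F = np.array([[a, 1 - a], [0.0, 1.0]])    # state [y, u]; u is a random walk
H = np.array([[1.0, 0.0]])                # only downstream pressure is measured
Q = np.diag([1e-6, 1e-2])                 # process noise (input random walk)
R = np.array([[1e-2]])                    # measurement noise variance

def kalman_deconvolve(z):
    s, P = np.zeros(2), np.eye(2)
    u_hat = []
    for zk in z:
        s, P = F @ s, F @ P @ F.T + Q                    # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
        s = s + K @ (zk - H @ s)                         # update
        P = (np.eye(2) - K @ H) @ P
        u_hat.append(s[1])                               # input estimate
    return np.array(u_hat)

# Demo: a step input of 1.0 passed through the tube model plus sensor noise.
rng = np.random.default_rng(0)
y, u, zs = 0.0, 1.0, []
for _ in range(300):
    y = a * y + (1 - a) * u
    zs.append(y + 0.1 * rng.normal())
print(kalman_deconvolve(np.array(zs))[-1])   # approaches the true input 1.0
```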
Direct system parameter identification of mechanical structures with application to modal analysis
NASA Technical Reports Server (NTRS)
Leuridan, J. M.; Brown, D. L.; Allemang, R. J.
1982-01-01
In this paper a method is described to estimate mechanical structure characteristics in terms of mass, stiffness and damping matrices using measured force input and response data. The estimated matrices can be used to calculate a consistent set of damped natural frequencies and damping values, mode shapes and modal scale factors for the structure. The proposed technique is attractive as an experimental modal analysis method since the estimation of the matrices does not require previous estimation of frequency responses and since the method can be used, without any additional complications, for multiple force input structure testing.
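A least-squares sketch of the direct identification step: stack the measured acceleration, velocity, and displacement histories and solve f = M·ẍ + C·ẋ + K·x for the three matrices. The demo uses algebraically consistent synthetic data; real use requires multiple independent force inputs and sufficient excitation, as the abstract notes.

```python
import numpy as np

def identify_mck(xdd, xd, x, f):
    """xdd, xd, x, f: arrays of shape (n_samples, n_dof)."""
    A = np.hstack([xdd, xd, x])                    # (n_samples, 3*n_dof)
    theta, *_ = np.linalg.lstsq(A, f, rcond=None)  # (3*n_dof, n_dof)
    n = x.shape[1]
    return theta[:n].T, theta[n:2*n].T, theta[2*n:].T   # M, C, K

rng = np.random.default_rng(0)
Mt = np.array([[2.0, 0.0], [0.0, 1.0]])
Ct = np.array([[0.4, -0.1], [-0.1, 0.2]])
Kt = np.array([[50.0, -20.0], [-20.0, 30.0]])
x, xd, xdd = rng.normal(size=(3, 500, 2))
f = xdd @ Mt.T + xd @ Ct.T + x @ Kt.T
M, C, K = identify_mck(xdd, xd, x, f)
print(np.allclose(K, Kt))                          # True
```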
NASA Astrophysics Data System (ADS)
Sjoeholm, K. R.
1981-02-01
The dual approach to the theory of production is used to estimate factor demand functions of the Swedish manufacturing industry. Two approximations of the cost function, the translog and the generalized Leontief models, are used. The price elasticities of factor demand do not seem to depend on the choice of model, at least with respect to the sign pattern and to the inputs capital, labor, total energy, and other materials. Total energy is separated into solid fuels, gasoline, fuel oil, electricity, and a residual. Fuel oil and electricity are found to be substitutes by both models. Capital and energy are shown to be substitutes; this implies that Swedish industry will save more energy if the cost of capital can be reduced. Both models are, in their best versions, able to detect an inappropriate variable. The assumption of perfect competition in the product market is shown to be inadequate by both models. When this assumption is relaxed, the normal substitution pattern among the inputs is resumed.
Sun, Kangfeng; Ji, Fenzhu; Yan, Xiaoyu; Jiang, Kai; Yang, Shichun
2018-01-01
As NOx emissions legislation for Diesel engines is becoming more stringent than ever before, aftertreatment systems have been widely adopted in many countries. Specifically, to reduce NOx emissions, the selective catalytic reduction (SCR) system has become one of the most promising techniques for Diesel-engine vehicle applications. In the SCR system, the input ammonia concentration and the ammonia coverage ratio are regarded as essential states in the control-oriented model. Currently, placing an ammonia sensor upstream of the SCR can is a practical way to obtain the input ammonia concentration. However, a physical sensor increases the SCR system cost, and the ammonia coverage ratio cannot be measured directly by a physical sensor. To tackle this problem, an observer based on a particle filter (PF) is investigated to estimate the input ammonia concentration and the ammonia coverage ratio. Simulation results obtained with the experimentally validated full-vehicle simulator cX-Emission show that the performance of the PF-based observer is outstanding, and the estimation error is very small.
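A bootstrap particle filter sketch for the coverage-ratio state, with a toy one-state SCR storage model; the rate constants, noise levels, and measurement model are placeholders, not values from the paper or from cX-Emission:

```python
import numpy as np

rng = np.random.default_rng(0)
k_ads, k_des, k_red, dt = 2.0, 0.1, 0.5, 0.1
n_p = 1000
theta = rng.uniform(0, 1, n_p)                   # coverage-ratio particles

def step(theta, c_in):
    # Toy storage dynamics: adsorption minus desorption and reduction.
    d = k_ads * c_in * (1 - theta) - (k_des + k_red) * theta
    return np.clip(theta + dt * d + rng.normal(0, 0.01, theta.shape), 0, 1)

def update(theta, z, sigma=0.05):
    pred = k_red * theta                         # assumed measurement model
    w = np.exp(-0.5 * ((z - pred) / sigma) ** 2)
    w /= w.sum()
    return theta[rng.choice(n_p, n_p, p=w)]      # multinomial resampling

for k in range(100):                             # filtering loop, toy data
    theta = step(theta, c_in=0.5)
    z = k_red * 0.6 + rng.normal(0, 0.05)        # synthetic measurement
    theta = update(theta, z)
print(theta.mean())                              # coverage-ratio estimate
```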
Simplifications in analyzing positron emission tomography data: effects on outcome measures.
Logan, Jean; Alexoff, David; Kriplani, Aarti
2007-10-01
Initial validation studies of new radiotracers generally involve kinetic models that require a measured arterial input function. This allows for the separation of tissue binding from delivery and blood flow effects. However, when using a tracer in a clinical setting, it is necessary to eliminate arterial blood sampling due to its invasiveness and the extra burden of counting and analyzing the blood samples for metabolites. In some cases, it may also be necessary to replace dynamic scanning with a shortened scanning period some time after tracer injection, as is done with FDG (F-18 fluorodeoxyglucose). These approximations represent loss of information. In this work, we considered several questions related to this: (1) Do differences in experimental conditions (drug treatments) or populations affect the input function, and what effect, if any, does this have on the final outcome measure? (2) How do errors in metabolite measurements enter into results? (3) What errors are incurred if the uptake ratio is used in place of the distribution volume ratio? (4) Is one- or two-point blood sampling any better for FDG data than the standardized uptake value? and (5) If blood sampling is necessary, what alternatives are there to arterial blood sampling? The first three questions were considered in terms of data from human dynamic positron emission tomography (PET) studies under conditions of baseline and drug pretreatment. Data from [11C]raclopride studies and those from the norepinephrine transporter tracer (S,S)-[11C]O-methyl reboxetine were used. Calculation of a metabolic rate for FDG using the operational equation requires a measured input function. We tested a procedure based on two blood samples to estimate the plasma integral and convolution that occur in the operational equation. There are some tracers for which blood sampling is necessary. Strategies for brain studies involve using the internal carotids in estimating the radioactivity after correcting for partial volume and spillover in order to eliminate arterial sampling. Some venous blood samples are still required for metabolite measurements. The ultimate solution to the problem of arterial sampling may be a wrist scanner, which acts as a small PET camera for imaging the arteries in the wrist. This is currently under development.
Mu, Zhijian; Huang, Aiying; Ni, Jiupai; Xie, Deti
2014-01-01
Organic soils are an important source of N2O, but global estimates of these fluxes remain uncertain because measurements are sparse. We tested the hypothesis that N2O fluxes can be predicted from estimates of mineral nitrogen input, calculated from readily-available measurements of CO2 flux and soil C/N ratio. From studies of organic soils throughout the world, we compiled a data set of annual CO2 and N2O fluxes which were measured concurrently. The input of soil mineral nitrogen in these studies was estimated from applied fertilizer nitrogen and organic nitrogen mineralization. The latter was calculated by dividing the rate of soil heterotrophic respiration by soil C/N ratio. This index of mineral nitrogen input explained up to 69% of the overall variability of N2O fluxes, whereas CO2 flux or soil C/N ratio alone explained only 49% and 36% of the variability, respectively. Including water table level in the model, along with mineral nitrogen input, further improved the model with the explanatory proportion of variability in N2O flux increasing to 75%. Unlike grassland or cropland soils, forest soils were evidently nitrogen-limited, so water table level had no significant effect on N2O flux. Our proposed approach, which uses the product of soil-derived CO2 flux and the inverse of soil C/N ratio as a proxy for nitrogen mineralization, shows promise for estimating regional or global N2O fluxes from organic soils, although some further enhancements may be warranted.
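The proxy itself is one line, mineral-N input ≈ fertilizer N + Rh/(C/N); a synthetic-data sketch of building the index and fitting a log-log response:

```python
import numpy as np

rng = np.random.default_rng(0)
rh = rng.uniform(100, 900, 60)      # heterotrophic CO2-C flux, g C m-2 yr-1
cn = rng.uniform(12, 30, 60)        # soil C/N ratio
fert = rng.uniform(0, 20, 60)       # fertilizer N, g N m-2 yr-1

n_input = fert + rh / cn            # mineral-N input index (the proxy above)
n2o = 0.02 * n_input ** 1.1 * rng.lognormal(0, 0.3, 60)   # fake fluxes

slope, intercept = np.polyfit(np.log(n_input), np.log(n2o), 1)
r2 = np.corrcoef(np.log(n_input), np.log(n2o))[0, 1] ** 2
print(slope, np.exp(intercept), r2)   # explanatory power of the proxy
```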
GDF v2.0, an enhanced version of GDF
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Gavrilis, Dimitris; Dermatas, Evangelos
2007-12-01
An improved version of the function estimation program GDF is presented. The main enhancements of the new version include: multi-output function estimation, the capability of defining custom functions in the grammar, and selection of the error function. The new version has been evaluated on a series of classification and regression datasets that are widely used for the evaluation of such methods. It is compared to two known neural networks and outperforms them in 5 (out of 10) datasets.
Program summary
Title of program: GDF v2.0
Catalogue identifier: ADXC_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXC_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 98 147
No. of bytes in distributed program, including test data, etc.: 2 040 684
Distribution format: tar.gz
Programming language: GNU C++
Computer: The program is designed to be portable to all systems running the GNU C++ compiler
Operating system: Linux, Solaris, FreeBSD
RAM: 200000 bytes
Classification: 4.9
Does the new version supersede the previous version?: Yes
Nature of problem: Function estimation tries to discover, from a series of input data, a functional form that best describes them. This can be performed with the use of parametric models whose parameters can adapt according to the input data.
Solution method: Functional forms are created by genetic programming as approximations for the symbolic regression problem.
Reasons for new version: The GDF package was extended in order to be more flexible and user-customizable than the old package. The user can extend the package by defining custom error functions and can extend the grammar of the package by adding new functions to the function repertoire. The new version can also perform estimation of multi-output functions and can be used for classification problems.
Summary of revisions: (1) Multi-output function approximation: the package can now approximate any function f: R^n → R^m, which also gives it the capability of performing classification and not only regression. (2) User-defined functions can be added to the repertoire of the grammar, extending the regression capabilities of the package; this feature is limited to 3 functions, but this number can easily be increased. (3) Capability of selecting the error function: apart from the mean square error, the package now offers other error functions, such as the mean absolute square error and the maximum square error, and user-defined error functions can be added to the set. (4) More verbose output: the main program displays more information to the user, as well as the default values of the parameters, and the package lets the user define an output file where the output of the gdf program for the testing set will be stored after the termination of the process.
Additional comments: A technical report describing the revisions, experiments and test runs is packaged with the source code.
Running time: Depends on the training data.
NASA Astrophysics Data System (ADS)
Kitikun, Medhawin
This dissertation provides a new method of measuring efforts by manufacturing industries to reduce their emissions by curtailing electricity consumption. Employing comprehensive firm-level data from the National Manufacture Annual Surveys of South Korea and Thailand, I construct the measure from estimates of revenue functions by industry. The data consist of firms from more than 20 industries in each year from 1982 to 2005 for Korea and from 2001 to 2008 for Thailand. With a total of more than two million observations, I estimate revenue functions for each industry and year, using three inputs: number of employees (L), fixed asset stock (K), and electricity consumption (E), and two functional forms to represent each industry's revenue function. Second, under competitive market conditions, I find that profit-maximizing firms deviated their level of electricity usage in production from the profit-maximizing level during the time period in both countries, and I develop a theoretical framework to explain this behavior. I then test the theory using my empirical models. Results support the notion of a hidden environmental value expressed by firms in the form of voluntary deviations from profit-maximizing levels of input demand. The measure used is the gap between the marginal revenue product of electricity and its price. This gap should increase with income, consistent with the Environmental Kuznets Curve literature. My current model provides considerable support for this proposition. Estimates indicate, in most industries, a negative relationship between per-capita income and emissions. In the final section of the dissertation, I consider the equitable distribution of the emissions reduction burden under an international agreement such as the Kyoto Protocol, in which both developed and developing countries have to cut their emissions to a specific percentage reduction target. Domestically, I present two extreme scenarios. In the first scenario, manufacturing industries take full responsibility for emissions reductions by curtailing their use of energy without any subsidies from the government. Revenue function estimates provide measures of the differential costs imposed on different industries by emissions reductions. In the second scenario, emissions reductions are achieved by changing the mix of electricity generation technologies used by the power generation sector within the country. For the international case, I focus on the fairness of emission reduction responsibility among countries. To be fair to countries at different levels of development and with different rates of carbon emissions, I propose a new method to adjust the timing and rates of emission reductions based on lifetime cumulative emissions per capita.
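The central measure reduces to simple arithmetic once an output elasticity has been estimated; a sketch under a Cobb-Douglas revenue function (one illustrative functional form; all values are hypothetical):

```python
import numpy as np

# Gap between the marginal revenue product of electricity and its price,
# assuming R = A * L^a * K^b * E^c so that dR/dE = c * R / E.
def mrp_gap(revenue, electricity, price_e, c_hat):
    mrp_e = c_hat * revenue / electricity
    return mrp_e - price_e        # > 0: firm uses less electricity than optimal

# c_hat would come from an OLS fit of log R on log L, log K, log E by industry.
print(mrp_gap(revenue=1e6, electricity=2e4, price_e=12.0, c_hat=0.3))
```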
Katz, Matthew L.; Viney, Tim J.; Nikolic, Konstantin
2016-01-01
Sensory stimuli are encoded by diverse kinds of neurons but the identities of the recorded neurons that are studied are often unknown. We explored in detail the firing patterns of eight previously defined genetically-identified retinal ganglion cell (RGC) types from a single transgenic mouse line. We first introduce a new technique of deriving receptive field vectors (RFVs) which utilises a modified form of mutual information (“Quadratic Mutual Information”). We analysed the firing patterns of RGCs during presentation of short duration (~10 second) complex visual scenes (natural movies). We probed the high dimensional space formed by the visual input for a much smaller dimensional subspace of RFVs that give the most information about the response of each cell. The new technique is very efficient and fast and the derivation of novel types of RFVs formed by the natural scene visual input was possible even with limited numbers of spikes per cell. This approach enabled us to estimate the 'visual memory' of each cell type and the corresponding receptive field area by calculating Mutual Information as a function of the number of frames and radius. Finally, we made predictions of biologically relevant functions based on the RFVs of each cell type. RGC class analysis was complemented with results for the cells’ response to simple visual input in the form of black and white spot stimulation, and their classification on several key physiological metrics. Thus RFVs lead to predictions of biological roles based on limited data and facilitate analysis of sensory-evoked spiking data from defined cell types. PMID:26845435
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
Density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum-likelihood approach to nonparametric density estimation that fuses hard information (sample data) with soft information (qualitative knowledge such as shape and support constraints).
Estimates of Storage Capacity of Multilayer Perceptron with Threshold Logic Hidden Units.
Kowalczyk, Adam
1997-11-01
We estimate the storage capacity of a multilayer perceptron with n inputs, h1 threshold logic units in the first hidden layer, and one output. We show that if the network can memorize 50% of all dichotomies of a randomly selected N-tuple of points of R^n with probability 1, then N = 2(n h1 + 1), while at 100% memorization N = n h1 + 1. Furthermore, if the bounds are reached, then the first hidden layer must be fully connected to the input. It is shown that such a network has memory capacity (in the sense of Cover) between n h1 + 1 and 2(n h1 + 1) input patterns, and for the most efficient networks in this class between 1 and 2 input patterns per connection. Comparing these results with the recent estimates of VC-dimension, we find that, in contrast to the single-neuron case, the VC-dimension exceeds the capacity for sufficiently large n and h1.
Net anthropogenic nitrogen inputs and nitrogen fluxes from Indian watersheds: An initial assessment
NASA Astrophysics Data System (ADS)
Swaney, D. P.; Hong, B.; Paneer Selvam, A.; Howarth, R. W.; Ramesh, R.; Purvaja, R.
2015-01-01
In this paper, we apply an established methodology for estimating Net Anthropogenic Nitrogen Inputs (NANI) to India and its major watersheds. Our primary goal is to provide initial estimates of the major nitrogen inputs comprising NANI for India, at the country level and for major Indian watersheds, including data sources and parameter estimates, making some assumptions as needed where data availability is limited. Despite data limitations, we believe it is clear that the main anthropogenic N source is agricultural fertilizer, which is being produced and applied at a growing rate, followed by N fixation associated with rice, leguminous crops, and sugar cane. While India appears to be a net exporter of N in food/feed as reported elsewhere (Lassaletta et al., 2013b), the balance of N associated with exports and imports of protein in food and feedstuffs is sensitive to protein content and somewhat uncertain. While correlating watershed N inputs with riverine N fluxes is problematic due in part to limited available riverine data, we have assembled some data for comparative purposes. We also suggest possible improvements in methods for future studies, and the potential for estimating riverine N fluxes to coastal waters.
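As commonly defined in the NANI literature, the budget is a simple sum of four components; the small Python sketch below makes that arithmetic explicit. The component values are made up for illustration, and the function name is mine, not from the paper.

```python
def nani_kg_n_per_km2_yr(fertilizer, ag_fixation, noy_deposition,
                         net_food_feed_import):
    """Net anthropogenic nitrogen inputs: the sum of the four standard
    components, each in kg N per km^2 per year. Net food/feed import is
    negative where a watershed exports more protein N than it consumes."""
    return fertilizer + ag_fixation + noy_deposition + net_food_feed_import

# Illustrative (made-up) values for a single watershed
print(nani_kg_n_per_km2_yr(3200.0, 1100.0, 650.0, -400.0))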
Yim, Sunghoon; Jeon, Seokhee; Choi, Seungmoon
2016-01-01
In this paper, we present an extended data-driven haptic rendering method capable of reproducing force responses during pushing and sliding interaction on a large surface area. The main part of the approach is a novel input variable set for the training of an interpolation model, which incorporates the position of a proxy - an imaginary contact point on the undeformed surface. This allows us to estimate friction in both sliding and sticking states in a unified framework. Estimating the proxy position is done in real-time based on simulation using a sliding yield surface - a surface defining a border between the sliding and sticking regions in the external force space. During modeling, the sliding yield surface is first identified via an automated palpation procedure. Then, through manual palpation on a target surface, input data and resultant force data are acquired. The data are used to build a radial basis interpolation model. During rendering, this input-output mapping interpolation model is used to estimate force responses in real-time in accordance with the interaction input. Physical performance evaluation demonstrates that our approach achieves reasonably high estimation accuracy. A user study also shows plausible perceptual realism under diverse and extensive exploration.
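A minimal sketch of the interpolation step described above, assuming scipy's RBFInterpolator as the radial basis model; the four-component input vector (proxy position, tool depth, sliding velocity) and the synthetic force responses are illustrative stand-ins for the palpation-acquired data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Illustrative training data: each row is an input sample
# [proxy_x, proxy_y, tool_depth, slide_velocity]; outputs are 3-D forces.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 4))
F = np.column_stack([                      # made-up ground-truth responses
    0.3 * X[:, 3],                         # friction grows with velocity
    0.1 * X[:, 1],
    2.0 * np.maximum(X[:, 2], 0.0) ** 1.5  # nonlinear normal force vs depth
])

model = RBFInterpolator(X, F, kernel="thin_plate_spline", smoothing=1e-3)

# A rendering loop would query the model at haptic rates with current inputs
f = model(np.array([[0.1, -0.2, 0.05, 0.3]]))
print(f)
```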
Operational Retrievals of Evapotranspiration: Are we there yet?
NASA Astrophysics Data System (ADS)
Neale, C. M. U.; Anderson, M. C.; Hain, C.; Schull, M.; Isidro, C., Sr.; Goncalves, I. Z.
2017-12-01
Remote sensing based retrievals of evapotranspiration (ET) have progressed significantly over the last two decades with the improvement of methods and algorithms and the availability of multiple satellite sensors with shortwave and thermal infrared bands on polar-orbiting platforms. The modeling approaches include simpler vegetation index (VI) based methods, such as the reflectance-based crop coefficient approach coupled with surface reference evapotranspiration estimates to derive actual evapotranspiration of crops, or direct inputs to the Penman-Monteith equation through VI relationships with certain input variables. More complex methods include one-layer or two-layer energy balance approaches that make use of both shortwave and longwave spectral band information to estimate different inputs to the energy balance equation; these models mostly differ in the estimation of sensible heat fluxes. For continental and global scale applications, other satellite-based products such as solar radiation, vegetation leaf area and cover are used as inputs, along with gridded re-analysis weather information. This presentation will review the state of the art in satellite-based evapotranspiration estimation, giving examples of existing efforts to obtain operational ET retrievals over continental and global scales and discussing difficulties and challenges.
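The reflectance-based crop coefficient approach mentioned above reduces to a one-line calculation once a VI-to-Kc relation is assumed; the sketch below uses a linear NDVI scaling in FAO-56 style, with all bounds illustrative.

```python
def reflectance_based_et(ndvi, et0, kc_min=0.15, kc_max=1.15,
                         ndvi_min=0.15, ndvi_max=0.85):
    """Actual crop ET from a reflectance-based crop coefficient: Kc is
    scaled linearly with NDVI between bare soil and full cover, then
    multiplied by reference ET. All bounds here are illustrative."""
    frac = min(max((ndvi - ndvi_min) / (ndvi_max - ndvi_min), 0.0), 1.0)
    kc = kc_min + frac * (kc_max - kc_min)
    return kc * et0

print(reflectance_based_et(ndvi=0.65, et0=6.0))  # mm/day
```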
NASA Astrophysics Data System (ADS)
Bu, Xiangwei; Wu, Xiaoyan; He, Guangjun; Huang, Jiaqi
2016-03-01
This paper investigates the design of a novel adaptive neural controller for the longitudinal dynamics of a flexible air-breathing hypersonic vehicle with control input constraints. To reduce the complexity of controller design, the vehicle dynamics is decomposed into the velocity subsystem and the altitude subsystem. For each subsystem, only one neural network is utilized to approximate the lumped unknown function. By employing a minimal-learning-parameter method to estimate the norm of the ideal weight vectors rather than their elements, only two adaptive parameters are required for neural approximation, so the computational burden is lower than that of neural back-stepping schemes. To deal with the control input constraints, auxiliary systems are exploited to compensate for the actuators. Lyapunov synthesis proves that all the closed-loop signals involved are uniformly ultimately bounded. Finally, simulation results show that the adopted compensation scheme handles the actuator constraints effectively and that velocity and altitude stably track their reference trajectories even when the physical limitations on the control inputs are in effect.
Gronberg, Jo Ann M.; Spahr, Norman E.
2012-01-01
The U.S. Geological Survey's National Water-Quality Assessment program requires nutrient input data for analysis of national and regional assessments of water quality. Detailed information on nutrient inputs to the environment is needed to understand and address the many serious problems that arise from excess nutrients in the streams and groundwater of the Nation. This report updates estimated county-level farm and nonfarm nitrogen and phosphorus inputs from commercial fertilizer sales for the conterminous United States for 1987 through 2006. Estimates were calculated from the Association of American Plant Food Control Officials fertilizer sales data, Census of Agriculture fertilizer expenditures, and U.S. Census Bureau county population. A previous national approach for deriving farm and nonfarm fertilizer nutrient estimates was evaluated, and a revised method for selecting representative states to calculate national farm and nonfarm proportions was developed. A national approach was used to estimate farm and nonfarm fertilizer inputs because not all states distinguish between farm and nonfarm use, and the quality of fertilizer reporting varies from year to year. For states that distinguish between farm and nonfarm use, the spatial distribution of the ratios of nonfarm-to-total fertilizer estimates for nitrogen and phosphorus calculated using the national-based farm and nonfarm proportions was similar to the spatial distribution of the ratios generated using state-based farm and nonfarm proportions. In addition, the relative highs and lows in the temporal distribution of farm and nonfarm nitrogen and phosphorus input at the state level were maintained: the periods of high and low usage coincide between national- and state-based values. With a few exceptions, nonfarm nitrogen estimates were found to be reasonable when compared to the amounts that would result if the lawn application rates recommended by state and university agricultural agencies were used. Also, states with higher nonfarm-to-total fertilizer ratios for nitrogen and phosphorus tended to have higher urban land-use percentages.
An integrated pan-tropical biomass map using multiple reference datasets.
Avitabile, Valerio; Herold, Martin; Heuvelink, Gerard B M; Lewis, Simon L; Phillips, Oliver L; Asner, Gregory P; Armston, John; Ashton, Peter S; Banin, Lindsay; Bayol, Nicolas; Berry, Nicholas J; Boeckx, Pascal; de Jong, Bernardus H J; DeVries, Ben; Girardin, Cecile A J; Kearsley, Elizabeth; Lindsell, Jeremy A; Lopez-Gonzalez, Gabriela; Lucas, Richard; Malhi, Yadvinder; Morel, Alexandra; Mitchard, Edward T A; Nagy, Laszlo; Qie, Lan; Quinones, Marcela J; Ryan, Casey M; Ferry, Slik J W; Sunderland, Terry; Laurin, Gaia Vaglio; Gatti, Roberto Cazzolla; Valentini, Riccardo; Verbeeck, Hans; Wijaya, Arief; Willcock, Simon
2016-04-01
We combined two existing datasets of vegetation aboveground biomass (AGB) (Proceedings of the National Academy of Sciences of the United States of America, 108, 2011, 9899; Nature Climate Change, 2, 2012, 182) into a pan-tropical AGB map at 1-km resolution using an independent reference dataset of field observations and locally calibrated high-resolution biomass maps, harmonized and upscaled to 14 477 1-km AGB estimates. Our data fusion approach uses bias removal and weighted linear averaging that incorporates and spatializes the biomass patterns indicated by the reference data. The method was applied independently in areas (strata) with homogeneous error patterns of the input (Saatchi and Baccini) maps, which were estimated from the reference data and additional covariates. Based on the fused map, we estimated an AGB stock for the tropics (23.4 N-23.4 S) of 375 Pg dry mass, 9-18% lower than the Saatchi and Baccini estimates. The fused map also showed differing spatial patterns of AGB over large areas, with higher AGB density in the dense forest areas in the Congo basin, Eastern Amazon and South-East Asia, and lower values in Central America and in most dry vegetation areas of Africa than either of the input maps. The validation exercise, based on 2118 estimates from the reference dataset not used in the fusion process, showed that the fused map had an RMSE 15-21% lower than that of the input maps and, most importantly, nearly unbiased estimates (mean bias 5 Mg dry mass ha^-1 vs. 21 and 28 Mg ha^-1 for the input maps). The fusion method can be applied at any scale including the policy-relevant national level, where it can provide improved biomass estimates by integrating existing regional biomass maps as input maps and additional, country-specific reference datasets.
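A toy version of the bias-removal and weighted-averaging step, in Python; the per-map biases and inverse-variance weights are invented for illustration (the paper estimates them per stratum from the reference data).

```python
import numpy as np

def fuse_maps(maps, biases, weights):
    """Bias removal followed by weighted linear averaging, applied per
    stratum. maps: list of AGB arrays; biases: per-map additive bias from
    reference data; weights: per-map weights (e.g. inverse error variance)."""
    corrected = [m - b for m, b in zip(maps, biases)]
    w = np.asarray(weights, dtype=float)
    return sum(wi * ci for wi, ci in zip(w, corrected)) / w.sum()

# Illustrative 1-km tiles from two input maps (values in Mg ha^-1)
saatchi = np.array([[310.0, 280.0], [150.0, 90.0]])
baccini = np.array([[350.0, 300.0], [120.0, 70.0]])
fused = fuse_maps([saatchi, baccini], biases=[21.0, 28.0],
                  weights=[1 / 400, 1 / 900])
print(fused)
```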
Thermomechanical conditions and stresses on the friction stir welding tool
NASA Astrophysics Data System (ADS)
Atthipalli, Gowtam
Friction stir welding has been commercially used as a joining process for aluminum and other soft materials. However, the use of this process in joining of hard alloys is still developing, primarily because of the lack of cost-effective, long-lasting tools. Here I have developed numerical models to understand the thermomechanical conditions experienced by the FSW tool and to improve its reusability. A heat transfer and visco-plastic flow model is used to calculate the torque and traverse force on the tool during FSW. The computed values of torque and traverse force are validated using the experimental results for FSW of AA7075, AA2524, AA6061 and Ti-6Al-4V alloys. The computed torque components are used to determine the optimum tool shoulder diameter based on the maximum use of torque and maximum grip of the tool on the plasticized workpiece material. The estimation of the optimum tool shoulder diameter for FSW of AA6061 and AA7075 was verified with experimental results. The computed values of traverse force and torque are used to calculate the maximum shear stress on the tool pin to determine the load-bearing ability of the tool pin. The load-bearing ability calculations are used to explain the failure of an H13 steel tool during welding of AA7075 and of commercially pure tungsten during welding of L80 steel. Artificial neural network (ANN) models are developed to predict the important FSW output parameters as functions of selected input parameters. These ANNs take tool shoulder radius, pin radius, pin length, welding velocity, tool rotational speed and axial pressure as input parameters. The total torque, sliding torque, sticking torque, peak temperature, traverse force, maximum shear stress and bending stress are considered as the outputs of the ANN models. These output parameters are selected since they define the thermomechanical conditions around the tool during FSW. The developed ANN models are used to understand the effect of various input parameters on the total torque and traverse force during FSW of AA7075 and 1018 mild steel. The ANN models are also used to determine the tool safety factor for a wide range of input parameters. A numerical model is developed to calculate the strain and strain rates along streamlines during FSW; the strain and strain rate values are calculated for FSW of AA2524. Three simplified models are also developed for quick estimation of output parameters such as the material velocity field, torque and peak temperature. The material velocity fields are computed by adopting an analytical method for the flow of an incompressible fluid between two discs, one rotating and the other stationary. The peak temperature is estimated from a non-dimensional correlation with dimensionless heat input, which is computed using known welding parameters and material properties. The torque is computed using an analytical function based on the shear strength of the workpiece material. These simplified models are shown to predict the output parameters successfully.
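A sketch of the six-input ANN surrogate described above, using scikit-learn's MLPRegressor; the training data here are synthetic smooth functions standing in for the flow-model outputs, and all ranges are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Illustrative training set: rows are [shoulder_radius, pin_radius, pin_length,
# weld_speed, rpm, axial_pressure]; targets are [total_torque, traverse_force].
rng = np.random.default_rng(2)
X = rng.uniform([8, 2, 3, 1, 200, 10], [14, 5, 7, 8, 1200, 80], size=(300, 6))
y = np.column_stack([                 # made-up smooth responses
    50 + 3.0 * X[:, 0] - 0.02 * X[:, 4],
    2 + 0.8 * X[:, 3] + 0.05 * X[:, 5],
])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict(X[:2]))
```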
Nitrogen balance of a boreal Scots pine forest
NASA Astrophysics Data System (ADS)
Korhonen, J. F. J.; Pihlatie, M.; Pumpanen, J.; Aaltonen, H.; Hari, P.; Levula, J.; Kieloaho, A.-J.; Nikinmaa, E.; Vesala, T.; Ilvesniemi, H.
2013-02-01
The productivity of boreal forests is considered to be limited by low nitrogen (N) availability. Increased atmospheric N deposition has altered the functioning and N cycling of these N-sensitive ecosystems by increasing the availability of reactive nitrogen. The most important components of N pools and fluxes were measured in a boreal Scots pine stand in Hyytiälä, Southern Finland. The measurements at the site allowed direct estimations of nutrient pools in the soil and biomass, inputs from the atmosphere and outputs as drainage flow and gaseous losses from two micro-catchments. N was accumulating in the system, mainly in woody biomass, at a rate of 7 kg N ha^-1 yr^-1. Nitrogen input as atmospheric deposition was 7.4 kg N ha^-1 yr^-1. Dry deposition and organic N in wet deposition contributed over half of the inputs in deposition. Total outputs were 0.4 kg N ha^-1 yr^-1, the most important outputs being N2O emission to the atmosphere and organic N flux in drainage flow. Nitrogen uptake and retranslocation were equally important sources of N for plant growth. Most of the assimilated N originated from decomposition of organic matter, and the fraction of N that could originate directly from deposition was about 30%. In conclusion, atmospheric N deposition fertilizes the site considerably, but there are no signs of N saturation. Further research is needed to estimate soil N2 fluxes (emission and fixation), which may amount to several kg N ha^-1 yr^-1.
NASA Astrophysics Data System (ADS)
Cook, Ellyn J.; van der Kaars, Sander
2006-10-01
We review attempts to derive quantitative climatic estimates from Australian pollen data, including the climatic envelope, climatic indicator and modern analogue approaches, and outline the need to pursue alternatives for use as input to, or validation of, simulations by models of past, present and future climate patterns. To this end, we have constructed and tested modern pollen-climate transfer functions for mainland southeastern Australia and Tasmania using the existing southeastern Australian pollen database, and for northern Australia using a new pollen database we are developing. After testing for statistical significance, 11 parameters were selected for mainland southeastern Australia, seven for Tasmania and six for northern Australia. The functions are based on weighted-averaging partial least squares regression, and their predictive ability was evaluated against modern observational climate data using leave-one-out cross-validation. Functions for summer, annual and winter rainfall and temperatures are most robust for southeastern Australia, while in Tasmania functions for minimum temperature of the coldest period, mean winter and mean annual temperature are the most reliable. In northern Australia, annual and summer rainfall and annual and summer moisture indexes are the strongest. The validation of all functions means that they can be applied with confidence to Quaternary pollen records from these three areas.
Cost characteristics of hospitals.
Smet, Mike
2002-09-01
Modern hospitals are complex multi-product organisations. The analysis of a hospital's production and/or cost structure should therefore use the appropriate techniques. Flexible functional forms based on the neo-classical theory of the firm seem to be most suitable. Using neo-classical cost functions implicitly assumes minimisation of (variable) costs given that input prices and outputs are exogenous. Local and global properties of flexible functional forms and short-run versus long-run equilibrium are further issues that require thorough investigation. In order to put results based on econometric estimations of cost functions in the right perspective, it is important to keep these considerations in mind when using flexible functional forms. The more recent studies seem to agree that hospitals generally do not operate in their long-run equilibrium (they tend to over-invest in capital, both capacity and equipment) and that it is therefore appropriate to estimate a short-run variable cost function. However, few studies explicitly take into account the implicit assumptions and restrictions embedded in the models they use. An alternative method for explaining differences in costs uses management accounting techniques to identify the cost drivers of overhead costs. Related issues, such as the cost-shifting and cost-adjusting behaviour of hospitals and the influence of market structure on competition, prices and costs, are also discussed briefly.
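For concreteness, the workhorse flexible functional form in this literature is the translog. A short-run variable cost specification along the lines discussed above, with outputs y_i, variable input prices w_j, and a fixed capital stock K, might be written as follows (a sketch of the standard form, not a specification taken from the article):

```latex
\ln VC = \alpha_0 + \sum_i \alpha_i \ln y_i + \sum_j \beta_j \ln w_j
       + \tfrac{1}{2} \sum_i \sum_k \gamma_{ik} \ln y_i \ln y_k
       + \tfrac{1}{2} \sum_j \sum_l \delta_{jl} \ln w_j \ln w_l
       + \sum_i \sum_j \rho_{ij} \ln y_i \ln w_j + \theta \ln K
```

Symmetry (gamma_ik = gamma_ki, delta_jl = delta_lj) and linear homogeneity in input prices (sum_j beta_j = 1, sum_l delta_jl = 0, sum_i rho_ij = 0) are the restrictions typically imposed; treating K as fixed is what makes this a short-run function.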
Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates
NASA Astrophysics Data System (ADS)
Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry
2018-01-01
Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but such studies are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. An important key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter estimates were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ
Acton, Jennifer H; Molik, Bablin; Binns, Alison; Court, Helen; Margrain, Tom H
2016-02-24
Visual Rehabilitation Officers help people with a visual impairment maintain their independence. This intervention adopts a flexible, goal-centred approach, which may include training in mobility, use of optical and non-optical aids, and performance of activities of daily living. Although Visual Rehabilitation Officers are an integral part of the low vision service in the United Kingdom, evidence that they are effective is lacking. The purpose of this exploratory trial is to estimate the impact of a Visual Rehabilitation Officer on self-reported visual function, psychosocial and quality-of-life outcomes in individuals with low vision. In this exploratory, assessor-masked, parallel group, randomised controlled trial, participants will be allocated either to receive home visits from a Visual Rehabilitation Officer (n = 30) or to a waiting list control group (n = 30) in a 1:1 ratio. Adult volunteers with a visual impairment, who have been identified as needing rehabilitation officer input by a social worker, will take part. Those with an urgent need for a Visual Rehabilitation Officer or who have a cognitive impairment will be excluded. The primary outcome measure will be self-reported visual function (48-item Veterans Affairs Low Vision Visual Functioning Questionnaire). Secondary outcome measures will include psychological and quality-of-life metrics: the Patient Health Questionnaire (PHQ-9), the Warwick-Edinburgh Mental Well-being Scale (WEMWBS), the Adjustment to Age-related Visual Loss Scale (AVL-12), the Standardised Health-related Quality of Life Questionnaire (EQ-5D) and the UCLA Loneliness Scale. The interviewer collecting the outcomes will be masked to the group allocations. The analysis will be undertaken on a complete case and intention-to-treat basis. Analysis of covariance (ANCOVA) will be applied to follow-up questionnaire scores, with the baseline score as a covariate. This trial is expected to provide robust effect size estimates of the intervention effect. The data will be used to design a large-scale randomised controlled trial to evaluate fully the Visual Rehabilitation Officer intervention. A rigorous evaluation of Rehabilitation Officer input is vital to direct a future low vision rehabilitation strategy and to help direct government resources. The trial was registered (ISRCTN44807874) on 9 March 2015.
Counting Jobs and Economic Impacts from Distributed Wind in the United States (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tegen, S.
This conference poster describes the distributed wind Jobs and Economic Development Impacts (JEDI) model. The goal of this work is to provide a model that estimates jobs and other economic effects associated with the domestic distributed wind industry. The distributed wind JEDI model is a free input-output model that estimates employment and other impacts resulting from an investment in distributed wind installations. Default inputs are from installers and industry experts and are based on existing projects. User input can be minimal (use defaults) or very detailed for more precise results. JEDI can help evaluate potential scenarios, current or future; inform stakeholders and decision-makers; assist businesses in evaluating economic development impacts and estimating jobs; and assist government organizations with planning, evaluation, and community development.
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.; Wang, Kon-Sheng Charles
1999-01-01
The subsonic, lateral-directional, stability and control derivatives of the thrust-vectoring F-18 High Angle of Attack Research Vehicle (HARV) are extracted from flight data using a maximum likelihood parameter identification technique. State noise is accounted for in the identification formulation and is used to model the uncommanded forcing functions caused by unsteady aerodynamics. Preprogrammed maneuvers provided independent control surface inputs, eliminating problems of identifiability related to correlations between the aircraft controls and states. The HARV derivatives are plotted as functions of angles of attack between 10 deg and 70 deg and compared to flight estimates from the basic F-18 aircraft and to predictions from ground and wind tunnel tests. Unlike maneuvers of the basic F-18 aircraft, the HARV maneuvers were very precise and repeatable, resulting in tightly clustered estimates with small uncertainty levels. Significant differences were found between flight and prediction; however, some of these differences may be attributed to differences in the range of sideslip or input amplitude over which a given derivative was evaluated, and to differences between the HARV external configuration and that of the basic F-18 aircraft, upon which most of the prediction was based. Some HARV derivative fairings have been adjusted using basic F-18 derivatives (with low uncertainties) to help account for differences in variable ranges and the lack of HARV maneuvers at certain angles of attack.
Analysis of a Split-Plot Experimental Design Applied to a Low-Speed Wind Tunnel Investigation
NASA Technical Reports Server (NTRS)
Erickson, Gary E.
2013-01-01
A procedure to analyze a split-plot experimental design featuring two input factors, two levels of randomization, and two error structures in a low-speed wind tunnel investigation of a small-scale model of a fighter airplane configuration is described in this report. Standard commercially-available statistical software was used to analyze the test results obtained in a randomization-restricted environment often encountered in wind tunnel testing. The input factors were differential horizontal stabilizer incidence and the angle of attack. The response variables were the aerodynamic coefficients of lift, drag, and pitching moment. Using split-plot terminology, the whole plot, or difficult-to-change, factor was the differential horizontal stabilizer incidence, and the subplot, or easy-to-change, factor was the angle of attack. The whole plot and subplot factors were both tested at three levels. Degrees of freedom for the whole plot error were provided by replication in the form of three blocks, or replicates, which were intended to simulate three consecutive days of wind tunnel facility operation. The analysis was conducted in three stages, which yielded the estimated mean squares, multiple regression function coefficients, and corresponding tests of significance for all individual terms at the whole plot and subplot levels for the three aerodynamic response variables. The estimated regression functions included main effects and two-factor interaction for the lift coefficient, main effects, two-factor interaction, and quadratic effects for the drag coefficient, and only main effects for the pitching moment coefficient.
Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model
NASA Astrophysics Data System (ADS)
Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.
2009-05-01
Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Imaging using the dynamic susceptibility contrast method was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar imaging sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For the quantitative analysis, CBF was calculated by deconvolution with an AIF obtained from the 10 voxels with the greatest contrast enhancement. For the semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curve. We observed that when the AIFs obtained in the three different ROIs (whole brain, hemisphere without lesion and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from the quantitative and semi-quantitative analyses showed a similar trend across operative time points; when the AIFs differed, the CBF ratios could differ as well. We concluded that, using local maxima, one can define a proper AIF without knowing the anatomical location of arteries in a stroke rat model.
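The semi-quantitative index used above has a direct one-line implementation; a minimal Python sketch, with a made-up relaxivity curve standing in for measured data:

```python
import numpy as np

def relative_cbf(delta_r2, t):
    """Semi-quantitative relative CBF for one voxel: the integral of the
    relaxivity time curve divided by its first moment (the moment-based
    index described above; no AIF deconvolution involved)."""
    area = np.trapz(delta_r2, t)
    first_moment = np.trapz(t * delta_r2, t) / area
    return area / first_moment

t = np.arange(0, 42, 0.7)                   # s, one sample per TR
curve = np.exp(-((t - 18.0) / 4.0) ** 2)    # made-up bolus passage
print(relative_cbf(curve, t))
```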
NASA Astrophysics Data System (ADS)
Harrison, Benjamin; Sandiford, Mike; McLaren, Sandra
2016-04-01
Supervised machine learning algorithms attempt to build a predictive model using empirical data. Their aim is to take a known set of input data along with known responses to the data, and adaptively train a model to generate predictions for new data inputs. A key attraction to their use is the ability to perform as function approximators where the definition of an explicit relationship between variables is infeasible. We present a novel means of estimating thermal conductivity using a supervised self-organising map algorithm, trained on about 150 thermal conductivity measurements, and using a suite of five electric logs common to 14 boreholes. A key motivation of the study was to supplement the small number of direct measurements of thermal conductivity with the decades of borehole data acquired in the Gippsland Basin to produce more confident calculations of surface heat flow. A previous attempt to generate estimates from well-log data in the Gippsland Basin using classic petrophysical log interpretation methods was able to produce reasonable synthetic thermal conductivity logs for only four boreholes. The current study has extended this to a further ten boreholes. Interesting outcomes from the study are: the method appears stable at very low sample sizes (< ~100); the SOM permits quantitative analysis of essentially qualitative uncalibrated well-log data; and the method's moderate success at prediction with minimal effort tuning the algorithm's parameters.
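A compact stand-in for the supervised self-organising map: the target (thermal conductivity) is appended to the log vector during training, and prediction matches a new sample's best-matching unit using only the log components. Everything below (data, map size, learning schedules) is synthetic and illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 5))               # five wireline logs per sample
k = X @ rng.normal(size=5) * 0.3 + 2.5      # made-up conductivity, W/(m K)
train = np.column_stack([X, k])

m, n_feat = 10, train.shape[1]              # 10x10 map of weight vectors
W = rng.normal(size=(m, m, n_feat))
ii, jj = np.mgrid[0:m, 0:m]

for t in range(3000):
    x = train[rng.integers(len(train))]
    d = ((W - x) ** 2).sum(axis=2)          # BMU on the full vector
    bi, bj = np.unravel_index(d.argmin(), d.shape)
    sigma = 3.0 * np.exp(-t / 1500)
    lr = 0.5 * np.exp(-t / 1500)
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    W += lr * h[:, :, None] * (x - W)       # neighborhood update

def predict(logs):
    d = ((W[:, :, :5] - logs) ** 2).sum(axis=2)  # BMU on logs only
    bi, bj = np.unravel_index(d.argmin(), d.shape)
    return W[bi, bj, 5]                          # conductivity component

print(predict(X[0]), k[0])
```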
Output feedback control of a quadrotor UAV using neural networks.
Dierks, Travis; Jagannathan, Sarangapani
2010-01-01
In this paper, a new nonlinear controller for a quadrotor unmanned aerial vehicle (UAV) is proposed using neural networks (NNs) and output feedback. The assumption on the availability of UAV dynamics is not always practical, especially in an outdoor environment. Therefore, in this work, an NN is introduced to learn the complete dynamics of the UAV online, including uncertain nonlinear terms like aerodynamic friction and blade flapping. Although a quadrotor UAV is underactuated, a novel NN virtual control input scheme is proposed which allows all six degrees of freedom (DOF) of the UAV to be controlled using only four control inputs. Furthermore, an NN observer is introduced to estimate the translational and angular velocities of the UAV, and an output feedback control law is developed in which only the position and the attitude of the UAV are considered measurable. It is shown using Lyapunov theory that the position, orientation, and velocity tracking errors, the virtual control and observer estimation errors, and the NN weight estimation errors for each NN are all semiglobally uniformly ultimately bounded (SGUUB) in the presence of bounded disturbances and NN functional reconstruction errors while simultaneously relaxing the separation principle. The effectiveness of the proposed output feedback control scheme is then demonstrated in the presence of unknown nonlinear dynamics and disturbances, and simulation results are included to verify the theoretical results.
NASA Astrophysics Data System (ADS)
Samal, N. R.; Pierson, D. C.; Staehr, P. A.; Pradhanang, S. M.; Smith, D. G.
2013-12-01
Episodic inputs of dissolved and particulate material during storm events can have important effects on lake and reservoir ecosystem function and can also impact reservoir drinking water quality. We evaluate the impacts of storm events using vertical profiles of temperature, dissolved oxygen, turbidity, conductivity and chlorophyll automatically collected at 6-hour intervals in Ashokan Reservoir, which is part of the New York City drinking water supply. Storm-driven inputs to the reservoir periodically deliver large loads of suspended sediment that raise reservoir turbidity above 25 NTU and substantially reduce the euphotic depth. Dissolved materials associated with these same storms would be expected to stimulate bacterial production. This study uses a conceptual model to calculate depth-specific estimates of gross primary production (GPP) and ecosystem respiration (R) from three years of data that included 777 events in which reservoir turbidity rose above 25 NTU. Using data from before, during and after storm events, we examine how the balance between GPP and R is influenced by storm-related increases in turbidity and dissolved organic matter, which in turn influence light attenuation and bacterial production. Key words: metabolism, primary production, GPP, respiration, euphotic depth, storm event, reservoir
Measuring Changes in the Economics of Medical Practice.
Fleming, Christopher; Rich, Eugene; DesRoches, Catherine; Reschovsky, James; Kogan, Rachel
2015-08-01
For the latter third of the twentieth century, researchers estimated production and cost functions for physician practices. Today, those attempting to measure the inputs and outputs of physician practice must account for many recent changes in models of care delivery. In this paper, we review practice inputs and outputs as typically described in research on the economics of medical practice, and consider the implications of the changing organization of medical practice and nature of physician work. This evolving environment has created conceptual challenges as to what are the appropriate measures of output from physician work, as well as what inputs should be measured. Likewise, the increasing complexity of physician practice organizations has introduced challenges to finding the appropriate data sources for measuring these constructs. Both these conceptual and data challenges pose measurement issues that must be overcome to study the economics of modern medical practice. Despite these challenges, there are several promising initiatives involving data sharing at the organizational level that could provide a starting point for developing the needed new data sources and metrics for physician inputs and outputs. However, additional efforts will be required to establish data collection approaches and measurements applicable to smaller and single specialty practices. Overcoming these measurement and data challenges will be key to supporting policy-relevant research on the changing economics of medical practice.
Estimating atmospheric parameters and reducing noise for multispectral imaging
Conger, James Lynn
2014-02-25
A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out change in average deviations of temperatures.
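A highly simplified Python sketch of the two phases described: per-band correction by inverting the at-sensor model, then noise suppression by damping deviations from a local mean (a stand-in for the patent's average-deviation smoothing; all values and the damping factor are illustrative).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def correct_band(observed, radiance, transmittance):
    """Phase 1 (sketch): invert the standard at-sensor model
    observed = transmittance * surface + radiance, per spectral band."""
    return (observed - radiance) / transmittance

def denoise(surface, size=5, damping=0.5):
    """Phase 2 (sketch): shrink local deviations toward the local mean."""
    local_mean = uniform_filter(surface, size=size)
    return local_mean + damping * (surface - local_mean)

obs = np.random.default_rng(4).normal(300.0, 5.0, size=(64, 64))
surf = denoise(correct_band(obs, radiance=40.0, transmittance=0.8))
print(surf.shape, surf.mean())
```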
Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided from a shore-based observer or from remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R2 = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when given uncertainty in the inputs (e.g., depth and tuning parameters). Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave height dependence consistent with results of previous studies but the uncertainty estimates of the tuning parameters also explain previously reported variations in the model parameters.
Inferring neural activity from BOLD signals through nonlinear optimization.
Vakorin, Vasily A; Krakovska, Olga O; Borowsky, Ron; Sarty, Gordon E
2007-11-01
The blood oxygen level-dependent (BOLD) fMRI signal does not measure neuronal activity directly. This fact is a key concern for interpreting functional imaging data based on BOLD. Mathematical models describing the path from neural activity to the BOLD response allow us to numerically solve the inverse problem of estimating the timing and amplitude of the neuronal activity underlying the BOLD signal. In fact, these models can be viewed as an advanced substitute for the impulse response function. In this work, the issue of estimating the dynamics of neuronal activity from the observed BOLD signal is considered within the framework of optimization problems. The model is based on the extended "balloon" model and describes the conversion of neuronal signals into the BOLD response through the transitional dynamics of the blood flow-inducing signal, cerebral blood flow, cerebral blood volume and deoxyhemoglobin concentration. Global optimization techniques are applied to find a control input (the neuronal activity and/or the biophysical parameters in the model) that causes the system to follow an admissible solution to minimize discrepancy between model and experimental data. As an alternative to a local linearization (LL) filtering scheme, the optimization method escapes the linearization of the transition system and provides a possibility to search for the global optimum, avoiding spurious local minima. We have found that the dynamics of the neural signals and the physiological variables as well as the biophysical parameters can be robustly reconstructed from the BOLD responses. Furthermore, it is shown that spiking off/on dynamics of the neural activity is the natural mathematical solution of the model. Incorporating, in addition, the expansion of the neural input by smooth basis functions, representing a low-pass filtering, allows us to model local field potential (LFP) solutions instead of spiking solutions.
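A runnable miniature of this inverse problem: the classic balloon/hemodynamic forward model (with typical literature constants, not the authors' exact extended model) driven by a boxcar neural input, with the input's amplitude and onset recovered from a noisy BOLD trace by direct local optimization rather than the paper's global search.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Typical balloon-model constants (Friston-style); illustrative only.
KAPPA, GAMMA, TAU, ALPHA, E0, V0 = 0.65, 0.41, 0.98, 0.32, 0.34, 0.02
K1, K2, K3 = 7 * E0, 2.0, 2 * E0 - 0.2

def bold_response(u_amp, onset, dur, t_eval):
    def rhs(t, y):
        s, f, v, q = y
        u = u_amp if onset <= t < onset + dur else 0.0
        ef = 1 - (1 - E0) ** (1 / f)            # oxygen extraction fraction
        return [u - KAPPA * s - GAMMA * (f - 1),          # flow-inducing signal
                s,                                        # blood flow
                (f - v ** (1 / ALPHA)) / TAU,             # blood volume
                (f * ef / E0 - q * v ** (1 / ALPHA - 1)) / TAU]  # dHb
    sol = solve_ivp(rhs, (0, t_eval[-1]), [0, 1, 1, 1],
                    t_eval=t_eval, max_step=0.1)
    s, f, v, q = sol.y
    return V0 * (K1 * (1 - q) + K2 * (1 - q / v) + K3 * (1 - v))

t = np.linspace(0, 30, 61)
data = (bold_response(0.6, 5.0, 3.0, t)
        + np.random.default_rng(5).normal(0, 2e-4, t.size))

# Recover the neuronal input's amplitude and onset from the noisy trace
loss = lambda p: np.sum((bold_response(p[0], p[1], 3.0, t) - data) ** 2)
fit = minimize(loss, x0=[0.3, 3.0], method="Nelder-Mead")
print(fit.x)   # approximately [0.6, 5.0]
```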
An adaptive learning control system for large flexible structures
NASA Technical Reports Server (NTRS)
Thau, F. E.
1985-01-01
The objective of the research has been to study the design of adaptive/learning control systems for the control of large flexible structures. In the first activity an adaptive/learning control methodology for flexible space structures was investigated. The approach was based on using a modal model of the flexible structure dynamics and an output-error identification scheme to identify modal parameters. In the second activity, a least-squares identification scheme was proposed for estimating both modal parameters and modal-to-actuator and modal-to-sensor shape functions. The technique was applied to experimental data obtained from the NASA Langley beam experiment. In the third activity, a separable nonlinear least-squares approach was developed for estimating the number of excited modes, shape functions, modal parameters, and modal amplitude and velocity time functions for a flexible structure. In the final research activity, a dual-adaptive control strategy was developed for regulating the modal dynamics and identifying modal parameters of a flexible structure. A min-max approach was used for finding an input to provide modal parameter identification while not exceeding reasonable bounds on modal displacement.
'spup' - an R package for uncertainty propagation in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2016-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2017-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability or the ability to deal with case studies involving spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
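The propagation core of such a tool is small; the Python sketch below mirrors it (spup itself is R) with Latin hypercube sampling, an imposed cross-correlation between two uncertain inputs, and Monte Carlo summary statistics. The model and all distributions are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc, norm

def model(theta):
    a, b = theta
    return a * np.exp(-b)              # stand-in environmental model

n = 1000
lhs = qmc.LatinHypercube(d=2, seed=0).random(n)   # stratified [0,1)^2 samples
z = norm.ppf(lhs)                                 # to standard normal scores
L = np.linalg.cholesky([[1.0, 0.6], [0.6, 1.0]])
z = z @ L.T                                       # impose cross-correlation
samples = np.column_stack([2.0 + 0.3 * z[:, 0],   # uncertain input 1
                           0.5 + 0.1 * z[:, 1]])  # uncertain input 2

out = np.array([model(s) for s in samples])       # propagate through model
print(out.mean(), out.std(), np.percentile(out, [5, 95]))
```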
NASA Astrophysics Data System (ADS)
Maltz, Jonathan S.
2000-11-01
We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5×10^5 total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
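The basis-construction pipeline described (exponential modes, SVD compression, convolution with a measured input function, pseudoinverse fit) can be sketched in a few lines of Python; the rate range, input function, and energy threshold are all illustrative.

```python
import numpy as np

t = np.linspace(0, 60, 121)                 # minutes
rates = np.logspace(-3, 0, 200)             # anticipated k range, 1/min
modes = np.exp(-np.outer(rates, t))         # one decaying mode per rate

# SVD picks a few orthogonal time functions spanning all modes
U, S, Vt = np.linalg.svd(modes, full_matrices=False)
energy = np.cumsum(S**2) / np.sum(S**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
basis = Vt[:r]

input_fn = np.exp(-t / 3) * t               # made-up measured input function
dt = t[1] - t[0]
conv_basis = np.array([np.convolve(b, input_fn)[: t.size] * dt
                       for b in basis])

# Fit a noiseless tissue activity curve (true k = 0.1/min) by pseudoinverse
tac = np.convolve(np.exp(-0.1 * t), input_fn)[: t.size] * dt
coeffs = np.linalg.pinv(conv_basis.T) @ tac
print(r, np.abs(conv_basis.T @ coeffs - tac).max())
```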
RRAWFLOW: Rainfall-Response Aquifer and Watershed Flow Model (v1.11)
NASA Astrophysics Data System (ADS)
Long, A. J.
2014-09-01
The Rainfall-Response Aquifer and Watershed Flow Model (RRAWFLOW) is a lumped-parameter model that simulates streamflow, springflow, groundwater level, solute transport, or cave drip for a measurement point in response to a system input of precipitation, recharge, or solute injection. The RRAWFLOW open-source code is written in the R language and is included in the Supplement to this article along with an example model of springflow. RRAWFLOW includes a time-series process to estimate recharge from precipitation and simulates the response to recharge by convolution, i.e., the unit hydrograph approach. Gamma functions are used for estimation of parametric impulse-response functions (IRFs); a combination of two gamma functions results in a double-peaked IRF. A spline fit to a set of control points is introduced as a new method for estimation of nonparametric IRFs. Other options include the use of user-defined IRFs and different methods to simulate time-variant systems. For many applications, lumped models simulate the system response as accurately as distributed models; moreover, their ease of construction and calibration makes them a good choice for many applications. RRAWFLOW provides professional hydrologists and students with an accessible and versatile tool for lumped-parameter modeling.
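The convolution core of this approach fits in a few lines; the Python sketch below (RRAWFLOW itself is in R) builds a double-peaked IRF from two gamma densities and convolves it with a synthetic recharge series. All parameters are illustrative.

```python
import numpy as np
from scipy.stats import gamma

t = np.arange(0, 365)                                # days
irf = (0.7 * gamma.pdf(t, a=2.0, scale=10.0)         # fast response peak
       + 0.3 * gamma.pdf(t, a=8.0, scale=30.0))      # slow response peak

rng = np.random.default_rng(6)
recharge = np.maximum(rng.normal(0, 1, 3 * 365), 0)  # made-up daily recharge

# Unit-hydrograph-style response: springflow = recharge (*) IRF
springflow = np.convolve(recharge, irf)[: recharge.size]
print(springflow[:5])
```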
Decision Aids for Multiple-Decision Disease Management as Affected by Weather Input Errors
USDA-ARS?s Scientific Manuscript database
Many disease management decision support systems (DSS) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation or estimation from off-site sources, may affect model calculations and manage...
Input-variable sensitivity assessment for sediment transport relations
NASA Astrophysics Data System (ADS)
Fernández, Roberto; Garcia, Marcelo H.
2017-09-01
A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
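MVFOSM itself is a first-order Taylor propagation: Var(q) is approximately the sum of (dq/dx_i)^2 Var(x_i) over independent inputs, and the per-term contributions give the ranking described above. A generic Python sketch, with an invented power-law transport relation standing in for the two bed load equations:

```python
import numpy as np

def mvfosm_variance(f, mean, var):
    """First-order second-moment propagation: gradient at the input means
    (central differences) squared, times each input variance."""
    mean = np.asarray(mean, float)
    var = np.asarray(var, float)
    grad = np.empty_like(mean)
    for i in range(mean.size):
        h = 1e-4 * max(abs(mean[i]), 1.0)
        up, lo = mean.copy(), mean.copy()
        up[i] += h
        lo[i] -= h
        grad[i] = (f(up) - f(lo)) / (2 * h)
    contrib = grad**2 * var                 # per-variable contribution
    return contrib.sum(), contrib / contrib.sum()

# Illustrative relation q = a * slope^1.5 * d50^-1.0 * Q^1.2
q = lambda x: 0.05 * x[0]**1.5 * x[1]**-1.0 * x[2]**1.2
total, share = mvfosm_variance(q, mean=[0.02, 0.03, 15.0],
                               var=[1e-6, 1e-4, 4.0])
print(total, share)   # ranking of input-variable contributions
```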
Hill, Mary Catherine
1992-01-01
This report documents a new version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW) which, with the new Parameter-Estimation Package that also is documented in this report, can be used to estimate parameters by nonlinear regression. The new version of MODFLOW is called MODFLOWP (pronounced MOD-FLOW-P), and functions nearly identically to MODFLOW when the Parameter-Estimation Package is not used. Parameters are estimated by minimizing a weighted least-squares objective function by the modified Gauss-Newton method or by a conjugate-direction method. Parameters used to calculate the following MODFLOW model inputs can be estimated: transmissivity and storage coefficient of confined layers; hydraulic conductivity and specific yield of unconfined layers; vertical leakance; vertical anisotropy (used to calculate vertical leakance); horizontal anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge rates; maximum evapotranspiration; pumpage rates; and the hydraulic head at constant-head boundaries. Any spatial variation in parameters can be defined by the user. Data used to estimate parameters can include existing independent estimates of parameter values, observed hydraulic heads or temporal changes in hydraulic heads, and observed gains and losses along head-dependent boundaries (such as streams). Model output includes statistics for analyzing the parameter estimates and the model; these statistics can be used to quantify the reliability of the resulting model, to suggest changes in model construction, and to compare results of models constructed in different ways.
NASA Astrophysics Data System (ADS)
Koven, C. D.; Schuur, E.; Schaedel, C.; Bohn, T. J.; Burke, E.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J. W.; Hayes, D. J.; Hugelius, G.; Jafarov, E. E.; Krinner, G.; Kuhry, P.; Lawrence, D. M.; MacDougall, A.; Marchenko, S. S.; McGuire, A. D.; Natali, S.; Nicolsky, D.; Olefeldt, D.; Peng, S.; Romanovsky, V. E.; Schaefer, K. M.; Strauss, J.; Treat, C. C.; Turetsky, M. R.
2015-12-01
We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a 3-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100.
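A toy version of the decomposition step in this scaling approach: once thawed, each of three carbon pools decays at a base rate scaled by a Q10 temperature response, while frozen soil does not decompose. The rates, reference temperature, and Q10 below are illustrative, not the fitted incubation values.

```python
import numpy as np

K_BASE = np.array([0.05, 0.005, 0.0005])  # 1/yr at 5 degC: fast/slow/passive
Q10 = 2.5

def annual_c_loss(pools, soil_temp_c):
    """C lost from each pool over one year at a given soil temperature."""
    if soil_temp_c <= 0.0:
        return np.zeros_like(pools)        # frozen: no decomposition
    k = K_BASE * Q10 ** ((soil_temp_c - 5.0) / 10.0)
    return pools * (1 - np.exp(-k))        # first-order decay over one year

pools = np.array([2.0, 20.0, 40.0])        # kg C m^-2 in each pool
print(annual_c_loss(pools, soil_temp_c=3.0))
```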
Implementation of a Wavefront-Sensing Algorithm
NASA Technical Reports Server (NTRS)
Smith, Jeffrey S.; Dean, Bruce; Aronstein, David
2013-01-01
A computer program has been written as a unique implementation of an image-based wavefront-sensing algorithm reported in "Iterative-Transform Phase Retrieval Using Adaptive Diversity" (GSC-14879-1), NASA Tech Briefs, Vol. 31, No. 4 (April 2007), page 32. This software was originally intended for application to the James Webb Space Telescope, but is also applicable to other segmented-mirror telescopes. The software is capable of determining optical-wavefront information using, as input, a variable number of irradiance measurements collected in defocus planes about the best focal position. The software also uses input of the geometrical definition of the telescope exit pupil (otherwise denoted the pupil mask) to identify the locations of the segments of the primary telescope mirror. From the irradiance data and mask information, the software calculates an estimate of the optical wavefront (a measure of performance) of the telescope generally and across each primary mirror segment specifically. The software is capable of generating irradiance data, wavefront estimates, and basis functions for the full telescope and for each primary-mirror segment. Optionally, each of these pieces of information can be measured or computed outside of the software and incorporated during execution of the software.
Closed-loop model identification of cooperative manipulators holding deformable objects
NASA Astrophysics Data System (ADS)
Alkathiri, A. A.; Akmeliawati, R.; Azlan, N. Z.
2017-11-01
This paper presents system identification to obtain closed-loop models of two cooperative manipulators that together hold deformable objects. The system works on the master-slave principle: one manipulator is position-controlled through encoder feedback, while a force sensor provides feedback to the other, force-controlled manipulator. Using the closed-loop input and output data, the closed-loop models, which are useful for model-based control design, are estimated. The criteria for model validation are a 95% fit between the measured and simulated outputs of the estimated models and residual analysis. The results show that for position and force control respectively, the fits are 95.73% and 95.88%.
A pdf-Free Change Detection Test Based on Density Difference Estimation.
Bu, Li; Alippi, Cesare; Zhao, Dongbin
2018-02-01
The ability to detect online changes in stationarity or time variance in a data stream is an active research topic with significant practical implications. In this paper, we propose a novel probability-density-function-free change detection test, which is based on the least squares density-difference estimation method and operates online on multidimensional inputs. The test does not require any assumption about the underlying data distribution, and is able to operate immediately after having been configured, by adopting a reservoir sampling mechanism. The thresholds required to detect a change are derived automatically once the application designer sets a false positive rate. Comprehensive experiments validate the effectiveness of the proposed method in terms of both detection promptness and accuracy.
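The least-squares density-difference (LSDD) estimator at the core of the test has a closed-form solution. Below is a minimal sketch for two sample batches, with the Gaussian kernel width and regulariser fixed rather than cross-validated as a production implementation would do.

```python
import numpy as np

def lsdd(X, Y, sigma=1.0, lam=1e-3):
    """Least-squares density-difference estimate of the L2 distance
    between the distributions of samples X and Y (rows = points)."""
    C = np.vstack([X, Y])                        # kernel centres
    d = C.shape[1]

    def gauss(A, B, s):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * s ** 2))

    # Cross-term matrix: the integral of two Gaussians is analytic
    H = (np.pi * sigma ** 2) ** (d / 2) * gauss(C, C, np.sqrt(2) * sigma)
    h = gauss(C, X, sigma).mean(1) - gauss(C, Y, sigma).mean(1)
    theta = np.linalg.solve(H + lam * np.eye(len(C)), h)
    return 2.0 * theta @ h - theta @ H @ theta   # estimated squared L2 distance
```

In a streaming change-detection setting, X would hold the reservoir-sampled reference window and Y the most recent window, with a change flagged when the statistic exceeds the automatically derived threshold.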
The Aviation System Analysis Capability Airport Capacity and Delay Models
NASA Technical Reports Server (NTRS)
Lee, David A.; Nelson, Caroline; Shapiro, Gerald
1998-01-01
The ASAC Airport Capacity Model and the ASAC Airport Delay Model support analyses of technologies addressing airport capacity. NASA's Aviation System Analysis Capability (ASAC) Airport Capacity Model estimates the capacity of an airport as a function of weather, Federal Aviation Administration (FAA) procedures, traffic characteristics, and the level of technology available. Airport capacity is presented as a Pareto frontier of arrivals per hour versus departures per hour. The ASAC Airport Delay Model allows the user to estimate the minutes of arrival delay for an airport, given its (weather dependent) capacity. Historical weather observations and demand patterns are provided by ASAC as inputs to the delay model. The ASAC economic models can translate a reduction in delay minutes into benefit dollars.
Space-variant restoration of images degraded by camera motion blur.
Sorel, Michal; Flusser, Jan
2008-02-01
We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.
A cross-correlation-based estimate of the galaxy luminosity function
NASA Astrophysics Data System (ADS)
van Daalen, Marcel P.; White, Martin
2018-06-01
We extend existing methods for using cross-correlations to derive redshift distributions for photometric galaxies, without using photometric redshifts. The model presented in this paper simultaneously yields highly accurate and unbiased redshift distributions and, for the first time, redshift-dependent luminosity functions, using only clustering information and the apparent magnitudes of the galaxies as input. In contrast to many existing techniques for recovering unbiased redshift distributions, the output of our method is not degenerate with the galaxy bias b(z), which is achieved by modelling the shape of the luminosity bias. We successfully apply our method to a mock galaxy survey and discuss improvements to be made before applying our model to real data.
Radial Basis Function Neural Network Application to Power System Restoration Studies
Sadeghkhani, Iman; Ketabi, Abbas; Feuillet, Rene
2012-01-01
One of the most important issues in power system restoration is overvoltages caused by transformer switching. These overvoltages might damage some equipment and delay power system restoration. This paper presents a radial basis function neural network (RBFNN) to study transformer switching overvoltages. To achieve good generalization capability for the developed RBFNN, equivalent network parameters are added to the RBFNN inputs. The developed RBFNN is trained with the worst-case scenario of switching angle and remanent flux and tested for typical cases. The simulated results for a portion of the 39-bus New England test system show that the proposed technique can estimate the peak values and duration of switching overvoltages with good accuracy. PMID:22792093
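With fixed Gaussian centres, training an RBF network reduces to a linear least-squares problem for the output weights; the sketch below illustrates that step. The feature choices implied by the abstract (switching angle, remanent flux, equivalent network parameters) and the kernel width are assumptions for illustration.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix: one basis function per centre."""
    sq = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * width ** 2))

def train_rbf(X, y, centers, width):
    """Solve the linear output layer by least squares."""
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, width, w):
    """Evaluate the trained network, e.g. to predict peak overvoltage."""
    return rbf_design(X, centers, width) @ w
```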
Layer-specific input to distinct cell types in layer 6 of monkey primary visual cortex.
Briggs, F; Callaway, E M
2001-05-15
Layer 6 of monkey V1 contains a physiologically and anatomically diverse population of excitatory pyramidal neurons. Distinctive arborization patterns of axons and dendrites within the functionally specialized cortical layers define eight types of layer 6 pyramidal neurons and suggest unique information processing roles for each cell type. To address how input sources contribute to cellular function, we examined the laminar sources of functional excitatory input onto individual layer 6 pyramidal neurons using scanning laser photostimulation. We find that excitatory input sources correlate with cell type. Class I neurons with axonal arbors selectively targeting magnocellular (M) recipient layer 4Calpha receive input from M-dominated layer 4B, whereas class I neurons whose axonal arbors target parvocellular (P) recipient layer 4Cbeta receive input from P-dominated layer 2/3. Surprisingly, these neuronal types do not differ significantly in the inputs they receive directly from layers 4Calpha or 4Cbeta. Class II cells, which lack dense axonal arbors within layer 4C, receive excitatory input from layers targeted by their local axons. Specifically, type IIA cells project axons to and receive input from the deep but not superficial layers. Type IIB neurons project to and receive input from the deepest and most superficial, but not middle layers. Type IIC neurons arborize throughout the cortical layers and tend to receive inputs from all cortical layers. These observations have implications for the functional roles of different layer 6 cell types in visual information processing.
Assessing the effect of elevated carbon dioxide on soil carbon: a comparison of four meta-analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hungate, B. A.; van Groenigen, K.; Six, J.
2009-08-01
Soil is the largest reservoir of organic carbon (C) in the terrestrial biosphere and soil C has a relatively long mean residence time. Rising atmospheric carbon dioxide (CO2) concentrations generally increase plant growth and C input to soil, suggesting that soil might help mitigate atmospheric CO2 rise and global warming. But to what extent mitigation will occur is unclear. The large size of the soil C pool not only makes it a potential buffer against rising atmospheric CO2, but also makes it difficult to measure changes amid the existing background. Meta-analysis is one tool that can overcome the limited power of single studies. Four recent meta-analyses addressed this issue but reached somewhat different conclusions about the effect of elevated CO2 on soil C accumulation, especially regarding the role of nitrogen (N) inputs. Here, we assess the extent of differences between these conclusions and propose a new analysis of the data. The four meta-analyses included different studies, derived different effect size estimates from common studies, used different weighting functions and metrics of effect size, and used different approaches to address nonindependence of effect sizes. Although all factors influenced the mean effect size estimates and subsequent inferences, the approach to independence had the largest influence. We recommend that meta-analysts critically assess and report choices about effect size metrics and weighting functions, and criteria for study selection and independence. Such decisions need to be justified carefully because they affect the basis for inference. Our new analysis, with a combined data set, confirms that the effect of elevated CO2 on net soil C accumulation increases with the addition of N fertilizers. Although the effect at low N inputs was not significant, statistical power to detect biogeochemically important effect sizes at low N is limited, even with meta-analysis, suggesting the continued need for long-term experiments.
Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media
Cooley, R.L.; Christensen, S.
2006-01-01
Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed lower-dimensional approximation Yβ*, where Y is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that model function f(Yβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Yβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Carrasco, D.; Trenti, M.; Mutch, S.; Oesch, P. A.
2018-06-01
The luminosity function is a fundamental observable for characterising how galaxies form and evolve throughout cosmic history. One key ingredient to derive this measurement from the number counts in a survey is the characterisation of the completeness and redshift selection functions for the observations. In this paper, we present GLACiAR, an open Python tool available on GitHub to estimate the completeness and selection functions in galaxy surveys. The code is tailored for multiband imaging surveys aimed at searching for high-redshift galaxies through the Lyman-break technique, but it can be applied broadly. The code generates artificial galaxies that follow Sérsic profiles with different indices and with customisable size, redshift, and spectral energy distribution properties, adds them to input images, and measures the recovery rate. To illustrate this new software tool, we apply it to quantify the completeness and redshift selection functions for J-dropout sources (redshift z ≈ 10 galaxies) in the Hubble Space Telescope Brightest of Reionizing Galaxies Survey. Our comparison with a previous completeness analysis on the same dataset shows overall agreement, but also highlights how different modelling assumptions for the artificial sources can impact completeness estimates.
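The injection-recovery bookkeeping behind a completeness estimate can be illustrated with a toy Monte Carlo. GLACiAR itself injects full Sérsic-profile sources into the survey images and re-runs source extraction; the detection model below (a fixed magnitude limit plus photometric scatter) is purely an assumption for illustration.

```python
import numpy as np

def completeness_curve(mags, depth_mag=29.0, noise_mag=0.3,
                       n_trials=2000, seed=0):
    """Toy injection-recovery completeness: an injected source counts as
    recovered when its 'measured' magnitude scatters brighter than an
    assumed detection limit. Returns the recovered fraction per input
    magnitude."""
    rng = np.random.default_rng(seed)
    frac = []
    for m in mags:
        measured = m + rng.normal(0.0, noise_mag, n_trials)
        frac.append(np.mean(measured < depth_mag))
    return np.array(frac)
```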
The series-elastic shock absorber: tendons attenuate muscle power during eccentric actions
Azizi, Emanuel
2010-01-01
Elastic tendons can act as muscle power amplifiers or energy-conserving springs during locomotion. We used an in situ muscle-tendon preparation to examine the mechanical function of tendons during lengthening contractions, when muscles absorb energy. Force, length, and power were measured in the lateral gastrocnemius muscle of wild turkeys. Sonomicrometry was used to measure muscle fascicle length independently from muscle-tendon unit (MTU) length, as measured by a muscle lever system (servomotor). A series of ramp stretches of varying velocities was applied to the MTU in fully activated muscles. Fascicle length changes were decoupled from length changes imposed on the MTU by the servomotor. Under most conditions, muscle fascicles shortened on average, while the MTU lengthened. Energy input to the MTU during the fastest lengthenings was −54.4 J/kg, while estimated work input to the muscle fascicles during this period was only −11.24 J/kg. This discrepancy indicates that energy was first absorbed by elastic elements, then released to do work on muscle fascicles after the lengthening phase of the contraction. The temporary storage of energy by elastic elements also resulted in a significant attenuation of power input to the muscle fascicles. At the fastest lengthening rates, peak instantaneous power input to the MTU reached −2,143.9 W/kg, while peak power input to the fascicles was only −557.6 W/kg. These results demonstrate that tendons may act as mechanical buffers by limiting peak muscle forces, lengthening rates, and power inputs during energy-absorbing contractions. PMID:20507964
NASA Technical Reports Server (NTRS)
Zacharias, G. L.; Young, L. R.
1981-01-01
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a nonlinear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
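The parallel-channel model, with visual cues dominating at low frequency and vestibular cues at high frequency, is essentially a complementary filter. A minimal sketch, assuming first-order Butterworth filters and an illustrative crossover frequency rather than fitted values:

```python
import numpy as np
from scipy.signal import butter, lfilter

def complementary_velocity_estimate(visual, vestibular, fs, f_c=0.07):
    """Sum a low-passed visual-field cue and a high-passed vestibular
    cue into one self-rotation velocity estimate. f_c (Hz) is an
    illustrative crossover frequency, not a value from the paper."""
    b_lo, a_lo = butter(1, f_c, fs=fs)
    b_hi, a_hi = butter(1, f_c, fs=fs, btype='high')
    return lfilter(b_lo, a_lo, visual) + lfilter(b_hi, a_hi, vestibular)
```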
Circular Regression in a Dual-Phase Lock-In Amplifier for Coherent Detection of Weak Signal
Wang, Gaoxuan; Reboul, Serge; Fertein, Eric
2017-01-01
Lock-in amplification (LIA) is an effective approach for recovering weak signals buried in noise. Determination of the input signal amplitude in a classical dual-phase LIA is based on incoherent detection, which leads to a biased estimate at low signal-to-noise ratio (SNR). This article presents, for the first time to our knowledge, a new LIA architecture involving phase estimation with a linear-circular regression for coherent detection. The proposed estimate of the phase delay between the input signal and a reference is defined as the maximum likelihood of a set of observations distributed according to a von Mises distribution. In our implementation this maximum is obtained with a Newton-Raphson algorithm. We show that the proposed LIA architecture provides an unbiased estimate of the input signal amplitude. Simulations with synthetic data demonstrate that the classical LIA estimates are biased for input SNR lower than −20 dB, while the proposed LIA is able to accurately recover the weak signal amplitude. The novel approach is applied to an optical sensor for accurate measurement of NO2 concentrations at the sub-ppbv level in the atmosphere. Side-by-side intercomparison measurements with a commercial LIA (SR830, Stanford Research Inc., Sunnyvale, CA, USA) demonstrate that the proposed LIA has identical performance in terms of measurement accuracy and precision but with a simplified hardware architecture. PMID:29135951
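The coherent read-out idea can be sketched as follows: demodulate into in-phase and quadrature components, form phase observations from short blocks, take their von Mises maximum-likelihood mean direction (which in this simple unweighted case is just the circular mean, so no Newton-Raphson step is needed), and project the demodulated signal onto that phase. This is a simplified stand-in for the published linear-circular regression, not its implementation.

```python
import numpy as np

def coherent_lockin(x, f0, fs, n_blocks=16):
    """Dual-phase lock-in with a coherent amplitude estimate."""
    t = np.arange(len(x)) / fs
    i = 2.0 * x * np.cos(2 * np.pi * f0 * t)   # in-phase demodulation
    q = 2.0 * x * np.sin(2 * np.pi * f0 * t)   # quadrature demodulation
    # Phase observations from short blocks of the record
    ib = np.array([b.mean() for b in np.array_split(i, n_blocks)])
    qb = np.array([b.mean() for b in np.array_split(q, n_blocks)])
    phis = np.arctan2(qb, ib)
    # ML mean direction of von Mises-distributed angles = circular mean
    phi = np.arctan2(np.sin(phis).sum(), np.cos(phis).sum())
    # Coherent detection: project the demodulated means onto that phase
    amp = i.mean() * np.cos(phi) + q.mean() * np.sin(phi)
    return amp, phi
```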
Estimating Environmental Compliance Costs for Industry (1981)
The paper discusses the pros and cons of existing approaches to compliance cost estimation, such as ex post survey estimation and ex ante estimation techniques (input cost accounting methods, engineering process models, and econometric models).
RCHILD - an R-package for flexible use of the landscape evolution model CHILD
NASA Astrophysics Data System (ADS)
Dietze, Michael
2014-05-01
Landscape evolution models provide powerful approaches to numerically assess earth surface processes, quantify rates of landscape change, infer sediment transfer rates, estimate sediment budgets, investigate the consequences of changes in external drivers on a geomorphic system, provide spatio-temporal interpolations between known landscape states, or test conceptual hypotheses. CHILD (Channel-Hillslope Integrated Landscape Development Model) is one of the most widely used models of landscape change, particularly for interacting tectonic and geomorphologic processes. Running CHILD from the command line and working with the model output can be a rather awkward task (static model control via a text input file, only numeric output in text files). The package RCHILD is a collection of functions for the free statistical software R that helps to use CHILD in a flexible, dynamic, and user-friendly way. The included functions allow creating maps, real-time scenes, animations, and further thematic plots from model output. The model input files can be modified dynamically and, hence, (feedback-related) changes in external factors can be implemented iteratively. Output files can be written to common formats that can be readily imported into standard GIS software. This contribution presents the basic functionality of the model CHILD as visualised and modified by the package, gives a rough overview of the available functions, and illustrates the potential of numeric modelling of geomorphologic processes with application examples.
Powers, Randall K.; Türker, Kemal S.
2010-01-01
The amplitude and time course of synaptic potentials in human motoneurons can be estimated in tonically discharging motor units by measuring stimulus-evoked changes in the rate and probability of motor unit action potentials. However, in spite of the fact that some of these techniques have been used for over thirty years, there is still no consensus on the best way to estimate the characteristics of synaptic potentials or on the accuracy of these estimates. In this review, we compare different techniques for estimating synaptic potentials from human motor unit discharge and also discuss relevant animal models in which estimated synaptic potentials can be compared to those directly measured from intracellular recordings. We also review the experimental evidence on how synaptic noise and intrinsic motoneuron properties influence their responses to synaptic inputs. Finally, we consider to what extent recordings of single motor unit discharge in humans can be used to distinguish the contribution of changes in synaptic inputs versus changes in intrinsic motoneuron properties to altered motoneuron responses following CNS injury. PMID:20427230
NASA Astrophysics Data System (ADS)
Ali, Abebe Mohammed; Darvishzadeh, Roshanak; Skidmore, Andrew K.; Duren, Iris van; Heiden, Uta; Heurich, Marco
2016-03-01
Assessments of ecosystem functioning rely heavily on quantification of vegetation properties. The search is on for methods that produce reliable and accurate baseline information on plant functional traits. In this study, the inversion of the PROSPECT radiative transfer model was used to estimate two functional leaf traits: leaf dry matter content (LDMC) and specific leaf area (SLA). Inversion of PROSPECT usually aims at quantifying its direct input parameters. This is the first time the technique has been used to indirectly model LDMC and SLA. Biophysical parameters of 137 leaf samples were measured in July 2013 in the Bavarian Forest National Park, Germany. Spectra of the leaf samples were measured using an ASD FieldSpec3 equipped with an integrating sphere. PROSPECT was inverted using a look-up table (LUT) approach. The LUTs were generated with and without using prior information. The effect of incorporating prior information on the retrieval accuracy was studied before and after stratifying the samples into broadleaf and conifer categories. The estimated values were evaluated using R2 and normalized root mean square error (nRMSE). Among the retrieved variables the lowest nRMSE (0.0899) was observed for LDMC. For both traits higher R2 values (0.83 for LDMC and 0.89 for SLA) were discovered in the pooled samples. The use of prior information improved accuracy of the retrieved traits. The strong correlation between the estimated traits and the NIR/SWIR region of the electromagnetic spectrum suggests that these leaf traits could be assessed at canopy level by using remotely sensed data.
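The look-up-table inversion step reduces to a nearest-spectrum search, sketched below. The trait conversions follow one common convention (SLA as the inverse of the PROSPECT dry matter per leaf area Cm; LDMC as dry mass over fresh mass via Cm and the equivalent water thickness Cw); treat these formulas as assumptions rather than the paper's exact derivation.

```python
import numpy as np

def lut_invert(measured, lut_spectra, lut_params):
    """Return the parameter set of the LUT entry whose simulated
    spectrum minimises RMSE against the measured leaf spectrum.
    lut_spectra: (n_entries, n_bands); lut_params: (n_entries, n_params)."""
    rmse = np.sqrt(((lut_spectra - measured) ** 2).mean(axis=1))
    return lut_params[np.argmin(rmse)]

def traits_from_prospect(cm, cw):
    """Illustrative trait conversions from retrieved PROSPECT parameters
    (cm: dry matter per area, g/cm^2; cw: equivalent water thickness, cm)."""
    sla = 1.0 / cm           # specific leaf area, cm^2/g
    ldmc = cm / (cm + cw)    # dry over fresh mass, water density ~1 g/cm^3
    return sla, ldmc
```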
Analysis and selection of optimal function implementations in massively parallel computer
Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN
2011-05-31
An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
Horsager, Jacob; Munk, Ole Lajord; Sørensen, Michael
2015-01-01
Metabolic liver function can be measured by dynamic PET/CT with the radio-labelled galactose-analogue 2-[(18)F]fluoro-2-deoxy-D-galactose ((18)F-FDGal) in terms of hepatic systemic clearance of (18)F-FDGal (K, ml blood/ml liver tissue/min). The method requires arterial blood sampling from a radial artery (arterial input function), and the aim of this study was to develop a method for extracting an image-derived, non-invasive input function from a volume of interest (VOI). Dynamic (18)F-FDGal PET/CT data from 16 subjects without liver disease (healthy subjects) and 16 patients with liver cirrhosis were included in the study. Five different input VOIs were tested: four in the abdominal aorta and one in the left ventricle of the heart. Arterial input function from manual blood sampling was available for all subjects. K*-values were calculated using time-activity curves (TACs) from each VOI as input and compared to the K-value calculated using arterial blood samples as input. Each input VOI was tested on PET data reconstructed with and without resolution modelling. All five image-derived input VOIs yielded K*-values that correlated significantly with K calculated using arterial blood samples. Furthermore, TACs from two different VOIs yielded K*-values that did not statistically deviate from K calculated using arterial blood samples. A semicircle drawn in the posterior part of the abdominal aorta was the only VOI that was successful for both healthy subjects and patients as well as for PET data reconstructed with and without resolution modelling. Metabolic liver function using (18)F-FDGal PET/CT can be measured without arterial blood samples by using input data from a semicircle VOI drawn in the posterior part of the abdominal aorta.
Estimating Return on Investment in Translational Research: Methods and Protocols
Trochim, William; Dilts, David M.; Kirk, Rosalind
2014-01-01
Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health and its Clinical and Translational Awards (CTSA). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This paper provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities. PMID:23925706
Estimating return on investment in translational research: methods and protocols.
Grazier, Kyle L; Trochim, William M; Dilts, David M; Kirk, Rosalind
2013-12-01
Assessing the value of clinical and translational research funding on accelerating the translation of scientific knowledge is a fundamental issue faced by the National Institutes of Health (NIH) and its Clinical and Translational Awards (CTSAs). To address this issue, the authors propose a model for measuring the return on investment (ROI) of one key CTSA program, the clinical research unit (CRU). By estimating the economic and social inputs and outputs of this program, this model produces multiple levels of ROI: investigator, program, and institutional estimates. A methodology, or evaluation protocol, is proposed to assess the value of this CTSA function, with specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered, analyzed, and evaluated. This article provides an approach CTSAs could use to assess the economic and social returns on NIH and institutional investments in these critical activities.
Probabilistic Criticality Consequence Evaluation (SCPB:N/A)
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. Gottlieb; J.W. Davis; J.R. Massari
1996-09-04
This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development (WPD) department with the objective of providing a comprehensive, conservative estimate of the consequences of the criticality which could possibly occur as the result of commercial spent nuclear fuel emplaced in the underground repository at Yucca Mountain. The consequences of criticality are measured principally in terms of the resulting changes in radionuclide inventory as a function of the power level and duration of the criticality. The purpose of this analysis is to extend the prior estimates of increased radionuclide inventory (Refs. 5.52 and 5.54), for both internal and external criticality. This analysis, and similar estimates and refinements to be completed before the end of fiscal year 1997, will be provided as input to Total System Performance Assessment-Viability Assessment (TSPA-VA) to demonstrate compliance with the repository performance objectives.
Learning Spatially-Smooth Mappings in Non-Rigid Structure from Motion
Hamsici, Onur C.; Gotardo, Paulo F.U.; Martinez, Aleix M.
2013-01-01
Non-rigid structure from motion (NRSFM) is a classical underconstrained problem in computer vision. A common approach to make NRSFM more tractable is to constrain 3D shape deformation to be smooth over time. This constraint has been used to compress the deformation model and reduce the number of unknowns that are estimated. However, temporal smoothness cannot be enforced when the data lacks temporal ordering and its benefits are less evident when objects undergo abrupt deformations. This paper proposes a new NRSFM method that addresses these problems by considering deformations as spatial variations in shape space and then enforcing spatial, rather than temporal, smoothness. This is done by modeling each 3D shape coefficient as a function of its input 2D shape. This mapping is learned in the feature space of a rotation invariant kernel, where spatial smoothness is intrinsically defined by the mapping function. As a result, our model represents shape variations compactly using custom-built coefficient bases learned from the input data, rather than a pre-specified set such as the Discrete Cosine Transform. The resulting kernel-based mapping is a by-product of the NRSFM solution and leads to another fundamental advantage of our approach: for a newly observed 2D shape, its 3D shape is recovered by simply evaluating the learned function. PMID:23946937
Mu, Zhijian; Huang, Aiying; Ni, Jiupai; Xie, Deti
2014-01-01
Organic soils are an important source of N2O, but global estimates of these fluxes remain uncertain because measurements are sparse. We tested the hypothesis that N2O fluxes can be predicted from estimates of mineral nitrogen input, calculated from readily-available measurements of CO2 flux and soil C/N ratio. From studies of organic soils throughout the world, we compiled a data set of annual CO2 and N2O fluxes which were measured concurrently. The input of soil mineral nitrogen in these studies was estimated from applied fertilizer nitrogen and organic nitrogen mineralization. The latter was calculated by dividing the rate of soil heterotrophic respiration by soil C/N ratio. This index of mineral nitrogen input explained up to 69% of the overall variability of N2O fluxes, whereas CO2 flux or soil C/N ratio alone explained only 49% and 36% of the variability, respectively. Including water table level in the model, along with mineral nitrogen input, further improved the model with the explanatory proportion of variability in N2O flux increasing to 75%. Unlike grassland or cropland soils, forest soils were evidently nitrogen-limited, so water table level had no significant effect on N2O flux. Our proposed approach, which uses the product of soil-derived CO2 flux and the inverse of soil C/N ratio as a proxy for nitrogen mineralization, shows promise for estimating regional or global N2O fluxes from organic soils, although some further enhancements may be warranted. PMID:24798347
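The proposed proxy is simple enough to state in a few lines: mineralised N is approximated as soil heterotrophic respiration divided by the soil C/N ratio, plus any fertiliser N. A minimal sketch, with the heterotrophic share of the measured CO2-C flux as an assumed parameter (the paper works directly with heterotrophic respiration):

```python
def mineral_n_input(co2_c_flux, cn_ratio, fert_n=0.0, rh_fraction=1.0):
    """Proxy for mineral N input (e.g., g N m-2 yr-1): fertiliser N plus
    organic N mineralisation, the latter taken as heterotrophic
    respiration divided by soil C/N. rh_fraction converts a total
    soil CO2-C efflux to its heterotrophic share (an assumption here)."""
    return fert_n + rh_fraction * co2_c_flux / cn_ratio
```

The index would then enter a regression for annual N2O flux, optionally alongside water table level as the abstract describes.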
Zavou, Christina; Kkoushi, Antria; Koutsou, Achilleas; Christodoulou, Chris
2017-11-01
The aim of the current work is twofold: first, to adapt an existing method for measuring the input synchrony of a neuron driven only by excitatory inputs so that it also accounts for inhibitory inputs; and second, to adapt this measure further so that it can be applied correctly to experimentally recorded data. The existing method uses the normalized pre-spike slope (NPSS) of the membrane potential, obtained by observing the slope of depolarization of the membrane potential within a short period before the threshold crossing, to identify the response-relevant input synchrony and thereby infer the operational mode of a neuron. The first adaptation of the NPSS makes its upper-bound calculation accommodate the higher slope values caused by the lower average and minimum membrane potential values that result from inhibitory inputs. Results indicate that when the input spike trains arrive randomly, the modified NPSS works as expected, inferring that the neuron operates as a temporal integrator. When the input spike trains arrive in perfect synchrony, however, the modified NPSS works as expected only when the level of inhibition is much higher than the level of excitation. This suggests that the upper bound of the NPSS should be calculated as a function of the ratio between excitatory and inhibitory inputs in order to correctly capture perfect synchrony at a neuron's input. In addition, we demonstrate a process to be followed when applying the NPSS to real neuron recordings. This process, which relies on empirical observations of the slope of depolarization to estimate the bounds over the range of observed interspike interval lengths, is successfully applied to experimentally recorded data, showing that both a real neuron's operational mode and the amount of input synchrony that caused its firing can be inferred. Copyright © 2017 Elsevier B.V. All rights reserved.
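A minimal sketch of the basic NPSS computation, assuming the lower and upper slope bounds are supplied externally (the paper derives them from the range of observed interspike intervals and, in the adapted measure, from the excitation/inhibition ratio):

```python
import numpy as np

def npss(v, spike_idx, dt, window, s_min, s_max):
    """Normalised pre-spike slope: the depolarisation slope of the
    membrane potential v (sampled every dt) over a short window before
    each threshold crossing, normalised into [0, 1] by slope bounds.
    Values near 1 suggest coincidence detection; near 0, temporal
    integration."""
    w = int(round(window / dt))
    slopes = np.array([(v[i] - v[i - w]) / window
                       for i in spike_idx if i >= w])
    return np.clip((slopes - s_min) / (s_max - s_min), 0.0, 1.0)
```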
A channel estimation scheme for MIMO-OFDM systems
NASA Astrophysics Data System (ADS)
He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen
2017-08-01
In view of the trade-off between the performance of time-domain least squares (LS) channel estimation and its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) is obtained. This approach transforms the MIMO-OFDM channel estimation problem into a simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problem, so no large matrix pseudo-inverse is needed, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the obtained method with time-orthogonal training sequences and linear minimum mean square error (LMMSE) criteria is better than that of the time-domain LS estimator and achieves nearly optimal performance.
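With time-orthogonal pilots, the MIMO estimation problem decouples into per-antenna SISO problems, and the frequency-domain LS estimate at each pilot subcarrier is a single division. A minimal sketch, with linear interpolation across the FFT grid as an assumed (not paper-specified) interpolation scheme:

```python
import numpy as np

def ls_channel_estimate(rx_pilots, tx_pilots, pilot_idx, n_fft):
    """Per-antenna LS channel estimate: divide received by transmitted
    pilot symbols, then interpolate real and imaginary parts separately
    to all n_fft subcarriers."""
    h_p = rx_pilots / tx_pilots                  # LS at pilot subcarriers
    k = np.arange(n_fft)
    return (np.interp(k, pilot_idx, h_p.real)
            + 1j * np.interp(k, pilot_idx, h_p.imag))
```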
Estimating Most Productive Scale Size in Data Envelopment Analysis with Integer Value Data
NASA Astrophysics Data System (ADS)
Dwi Sari, Yunita; Angria S, Layla; Efendi, Syahril; Zarlis, Muhammad
2018-01-01
The most productive scale size (MPSS) is a measurement that states how resources should be organized and utilized to achieve optimal results, and it can serve as a benchmark for the success of an industry or company in producing goods or services. To estimate MPSS, each decision-making unit (DMU) must attend to its level of input-output efficiency. With the data envelopment analysis (DEA) method, a DMU can identify the units it should use as references, which helps locate the causes of and remedies for inefficiency and thereby optimize productivity; this is DEA's main advantage in managerial applications. DEA is therefore chosen for estimating MPSS, with a focus on integer-valued input data and the CCR and BCC models. The purpose of this research is to find the best solution for estimating MPSS with integer-valued input data using the DEA method.
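For reference, the input-oriented CCR efficiency of one DMU is a small linear program; a minimal sketch with scipy is below. The BCC model adds the convexity constraint sum(lambda) = 1, and an MPSS check compares the two scores; truly integer-valued inputs would in practice call for a mixed-integer formulation, which this sketch omits.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
    Decision variables are [theta, lambda_1, ..., lambda_n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]              # minimise theta
    A_ub = np.vstack([
        np.c_[-X[:, [j0]], X],               # sum_j lam_j x_ij <= theta x_i0
        np.c_[np.zeros((s, 1)), -Y],         # sum_j lam_j y_rj >= y_r0
    ])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun                           # efficiency score theta*
```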
Real-Time Stability and Control Derivative Extraction From F-15 Flight Data
NASA Technical Reports Server (NTRS)
Smith, Mark S.; Moes, Timothy R.; Morelli, Eugene A.
2003-01-01
A real-time, frequency-domain, equation-error parameter identification (PID) technique was used to estimate stability and control derivatives from flight data. This technique is being studied to support adaptive control system concepts currently being developed by NASA (National Aeronautics and Space Administration), academia, and industry. This report describes the basic real-time algorithm used for this study and implementation issues for onboard usage as part of an indirect-adaptive control system. A confidence measures system for automated evaluation of PID results is discussed. Results calculated using flight data from a modified F-15 aircraft are presented. Test maneuvers included pilot input doublets and automated inputs at several flight conditions. Estimated derivatives are compared to aerodynamic model predictions. Data indicate that the real-time PID used for this study performs well enough to be used for onboard parameter estimation. For suitable test inputs, the parameter estimates converged rapidly to sufficient levels of accuracy. The devised confidence measures used were moderately successful.
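The core of a frequency-domain equation-error step is a discrete Fourier transform of the output and regressors at a handful of analysis frequencies, followed by a complex least-squares solve. A minimal batch sketch (the flight implementation updates the transforms recursively in real time, which this sketch does not reproduce):

```python
import numpy as np

def freq_domain_equation_error(z, X, freqs, t):
    """Estimate parameters theta in z ~ X @ theta from frequency-domain
    data. z: output time series (N,), e.g. pitch acceleration;
    X: regressor matrix (N, p), e.g. angle of attack, pitch rate,
    elevator deflection; freqs: analysis frequencies in Hz; t: sample
    times (N,). Requires len(freqs) >= p for a well-posed solve."""
    E = np.exp(-1j * 2 * np.pi * np.outer(freqs, t))   # DFT kernels
    Z = E @ z                                          # transformed output
    Xf = E @ X                                         # transformed regressors
    theta, *_ = np.linalg.lstsq(Xf, Z, rcond=None)
    return theta.real
```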
Wu, Xianhua; Wei, Guo; Yang, Lingjuan; Guo, Ji; Lu, Huaguo; Chen, Yunfeng; Sun, Jian
2014-01-01
Concentrating on the consuming coefficient, the partition coefficient, and the Leontief inverse matrix, relevant concepts and algorithms are developed for estimating the impact of meteorological services, including the associated (indirect, complete) economic effects. Quantitative estimates are then obtained for the meteorological services in Jiangxi province using the input-output method. It is found that considerable economic losses are averted through preventive strategies developed from meteorological information and from the internal interdependencies of the industrial economic system. Another finding is that the ratio of input to the complete economic effect of meteorological services ranges from about 1:108.27 to 1:183.06, remarkably different from a previous estimate based on the Delphi method (1:30 to 1:51). In particular, the economic effects of meteorological services are higher for nontraditional users in manufacturing, wholesale and retail trades, the services sector, and tourism, culture, and art, and lower for traditional users in agriculture, forestry, livestock, fishery, and construction.
Variance-based interaction index measuring heteroscedasticity
NASA Astrophysics Data System (ADS)
Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom
2016-06-01
This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to first-order sensitivity indices by Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower-dimensional functions which may then be analyzed separately.
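Since the abstract notes the computation resembles Sobol' first-order indices, a pick-freeze Monte Carlo sketch of those indices is shown below for orientation; this is the standard Sobol'/Saltelli estimator, not the heteroscedasticity-based interaction index itself. A large gap between sum(S) and 1 flags interactions among the inputs.

```python
import numpy as np

def first_order_sobol(f, d, n=20000, seed=0):
    """Pick-freeze estimator of first-order Sobol indices S_i for a
    function f that acts row-wise on inputs in [0, 1]^d."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.r_[fA, fB])
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # freeze all inputs except i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S
```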
Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers
Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.
2004-01-01
LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
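A stripped-down version of the calibration step is an ordinary least-squares rating-curve regression on log load. The seven-term model below mirrors the kind of explanatory variables LOADEST uses (functions of streamflow and decimal time); AMLE/MLE/LAD estimation, censored-data handling, and retransformation bias corrections are deliberately omitted.

```python
import numpy as np

def fit_load_model(q, t, c):
    """OLS sketch: ln(load) ~ lnQ, lnQ^2, sin/cos(2*pi*t), t, t^2,
    with lnQ and t centred. q: streamflow, t: decimal time,
    c: concentration (consistent units assumed; conversions omitted)."""
    lnq = np.log(q) - np.log(q).mean()
    dt = t - t.mean()
    Xd = np.c_[np.ones_like(q), lnq, lnq ** 2,
               np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), dt, dt ** 2]
    y = np.log(c * q)                        # instantaneous load
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta
```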
NASA Astrophysics Data System (ADS)
Wu, Z. Y.; Zhang, L.; Wang, X. M.; Munger, J. W.
2015-07-01
Small pollutant concentration gradients between levels above a plant canopy result in large uncertainties in estimated air-surface exchange fluxes when using existing micrometeorological gradient methods, including the aerodynamic gradient method (AGM) and the modified Bowen ratio method (MBR). A modified micrometeorological gradient method (MGM) is proposed in this study for estimating O3 dry deposition fluxes over a forest canopy using concentration gradients between a level above and a level below the canopy top, taking advantage of relatively large gradients between these levels due to significant pollutant uptake in the top layers of the canopy. The new method is compared with the AGM and MBR methods and is also evaluated using eddy-covariance (EC) flux measurements collected at the Harvard Forest Environmental Measurement Site, Massachusetts, during 1993-2000. All three gradient methods (AGM, MBR, and MGM) produced similar diurnal cycles of O3 dry deposition velocity (Vd(O3)) to the EC measurements, with the MGM method being the closest in magnitude to the EC measurements. The multi-year average Vd(O3) differed significantly between these methods, with the AGM, MBR, and MGM method being 2.28, 1.45, and 1.18 times that of the EC, respectively. Sensitivity experiments identified several input parameters for the MGM method as first-order parameters that affect the estimated Vd(O3). A 10% uncertainty in the wind speed attenuation coefficient or canopy displacement height can cause about 10% uncertainty in the estimated Vd(O3). An unrealistic leaf area density vertical profile can cause an uncertainty of a factor of 2.0 in the estimated Vd(O3). Other input parameters or formulas for stability functions only caused an uncertainty of a few percent. The new method provides an alternative approach to monitoring/estimating long-term deposition fluxes of similar pollutants over tall canopies.
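For comparison with the methods discussed, the classical aerodynamic gradient calculation in its simplest (neutral-stratification) form is shown below; stability correction functions, displacement height, and the below-canopy level used by the proposed MGM are all omitted from this sketch.

```python
import numpy as np

def agm_deposition_velocity(c1, z1, c2, z2, ustar, k=0.4):
    """Aerodynamic gradient method, neutral case: flux from a two-level
    concentration difference above the canopy, then deposition velocity
    referenced to the upper-level concentration. For deposition,
    concentration increases with height, so the flux is downward
    (negative) and Vd is positive."""
    flux = -k * ustar * (c2 - c1) / np.log(z2 / z1)
    return -flux / c2
```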
Satellite Estimates of Surface Short-wave Fluxes: Issues of Implementation
NASA Technical Reports Server (NTRS)
Wang, H.; Pinker, Rachel; Minnis, Patrick
2006-01-01
Surface solar radiation reaching the Earth's surface is the primary forcing function of the land surface energy and water cycle. Therefore, there is a need for information on this parameter, preferably at global scale. Satellite-based estimates are now available at accuracies that meet the demands of many scientific objectives. Selection of an approach to estimate such fluxes requires consideration of trade-offs between the use of multi-spectral observations of cloud optical properties, which are more difficult to implement at large scales, and methods that are simplified but easier to implement. In this study, an evaluation of such trade-offs will be performed. The University of Maryland Surface Radiation Model (UMD/SRB) has been used to reprocess five years of GOES-8 satellite observations over the United States to ensure updated calibration and improved cloud detection over snow. The UMD/SRB model was subsequently modified to allow input of information on aerosol and cloud optical depth from independent satellite sources. Specifically, the cloud properties from the Atmospheric Radiation Measurement (ARM) Satellite Data Analysis Program (Minnis et al., 1995) are used to drive the modified version of the model to estimate surface short-wave fluxes over the Southern Great Plains ARM sites for a twelve-month period. The auxiliary data needed as model inputs, such as aerosol optical depth, spectral surface albedo, water vapor, and total column ozone amount, were kept the same for both versions of the model. The estimated shortwave fluxes are evaluated against ground observations at the ARM Central Facility and four satellite ARM sites. During summer, the estimated fluxes based on cloud properties derived from the multi-spectral approach were in better agreement with ground measurements than those derived from the UMD/SRB model. However, in winter, the fluxes derived with the UMD/SRB model were in better agreement with ground observations than those estimated from cloud properties provided by the ARM Satellite Data Analysis Program. During the transition periods, the results were comparable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Yoojin; Doughty, Christine
Input and output files used for fault characterization through numerical simulation using iTOUGH2. The synthetic data for the push period are generated by running a forward simulation (input parameters are provided in iTOUGH2 Brady GF6 Input Parameters.txt [InvExt6i.txt]). In general, the permeability of the fault gouge, damage zone, and matrix are assumed to be unknown. The input and output files are for the inversion scenario where only pressure transients are available at the monitoring well located 200 m above the injection well and only the fault gouge permeability is estimated. The input files are named InvExt6i, INPUT.tpl, FOFT.ins, CO2TAB, and the output files are InvExt6i.out, pest.fof, and pest.sav (names below are display names). The table graphic in the data files below summarizes the inversion results, and indicates the fault gouge permeability can be estimated even if imperfect guesses are used for matrix and damage zone permeabilities, and permeability anisotropy is not taken into account.
Microbial risk assessment in heterogeneous aquifers: 1. Pathogen transport
NASA Astrophysics Data System (ADS)
Molin, S.; Cvetkovic, V.
2010-05-01
Pathogen transport in heterogeneous aquifers is investigated for microbial risk assessment. A point source with time-dependent input of pathogens is assumed, exemplified as a simple on-site sanitation installation, intermingled with water supply wells. Any pathogen transmission pathway (realization) to the receptor from a postulated infection hazard is viewed as a random event, with the hydraulic conductivity varying spatially. For aquifers where VAR[lnK] < 1 and the integral scale is finite, we provide relatively simple semianalytical expressions for pathogen transport that incorporate the colloid filtration theory. We test a wide range of Damkohler numbers in order to assess the significance of rate limitations on the aquifer barrier function. Even slow immobile inactivation may notably affect the retention of pathogens. Analytical estimators for microbial peak discharge are evaluated and are shown to be applicable using parameters representative of rotavirus and Hepatitis A with input of 10-20 days duration.
Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2012-01-01
This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.