Perceptual Calibration for Immersive Display Environments
Ponto, Kevin; Gleicher, Michael; Radwin, Robert G.; Shin, Hyun Joon
2013-01-01
The perception of objects, depth, and distance has been repeatedly shown to be divergent between virtual and physical environments. We hypothesize that many of these discrepancies stem from incorrect geometric viewing parameters, specifically that physical measurements of eye position are insufficiently precise to provide proper viewing parameters. In this paper, we introduce a perceptual calibration procedure derived from geometric models. While most research has used geometric models to predict perceptual errors, we instead use these models inversely to determine perceptually correct viewing parameters. We study the advantages of these new psychophysically determined viewing parameters compared to the commonly used measured viewing parameters in an experiment with 20 subjects. The perceptually calibrated viewing parameters for the subjects generally produced new virtual eye positions that were wider and deeper than standard practices would estimate. Our study shows that perceptually calibrated viewing parameters can significantly improve depth acuity, distance estimation, and the perception of shape. PMID:23428454
View Estimation Based on Value System
NASA Astrophysics Data System (ADS)
Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru
Estimating a caregiver's view is one of the most important capabilities a child needs in order to understand behavior demonstrated by the caregiver, that is, to infer the intention behind the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: he/she updates the model of his/her own estimated view during imitation of the caregiver's demonstrated behavior by minimizing the estimation error of the reward during the behavior. From this viewpoint, this paper presents a method for acquiring such a capability based on a value system whose values are obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process, parallel to young children's estimation of their own view during imitation of the caregiver's observed behavior, is discussed.
Use of geographic information management systems (GIMS) for nitrogen management
NASA Astrophysics Data System (ADS)
Diker, Kenan
1998-11-01
Geographic information management systems (GIMS) were investigated in this study to develop an efficient nitrogen management scheme for corn. The study was conducted on two experimental corn sites. The first site consisted of six non-replicated plots in which the canopy reflectance of corn at six nitrogen fertilizer levels was investigated. The reflectance measurements were conducted for nadir and 75° view angles. Data from these plots were used to develop relationships between reflectance data and soil and plant parameters. The second site had four corn plots fertilized by different methods, such as spoon-fed, pre-plant, and side-dress, which created nitrogen variability within the field. Soil and plant nitrogen, as well as leaf area, biomass, percent cover measurements, and canopy reflectance data, were collected at various growth stages from both sites during the 1995 and 1996 growing seasons. Relationships were developed between the Nitrogen Reflectance Index (NRI) developed by Bausch et al. (1994) and soil and plant variables. Spatial dependence of the data was determined by geostatistical methods; variability was mapped in ArcView. Results of this study indicated that the NRI is a better estimator of plant nitrogen status than chlorophyll meter measurements. The NRI can successfully be used to estimate the spatial distribution of soil nitrogen through the plant nitrogen status, as well as plant parameters and yield potential. GIS mapping of measured and estimated soil nitrogen agreed except in locations where hot spots were measured. An NRI value of 0.95 appeared to be the critical value for plant nitrogen status, especially for the 75° view. The nadir view tended to underestimate plant and soil parameters, whereas the 75° view slightly overestimated them. If available, 75° view data should be used before the tasseling stage to reduce the soil background effect; however, this view is sensitive to windy conditions. After tasseling, the nadir view should be used because the 75° view is obstructed by tassels. Total soil nitrogen at the V6 growth stage was underestimated by the NRI for both view angles. Results also indicated that a nitrogen prescription could be estimated at various growth stages.
Simulation studies of wide and medium field of view earth radiation data analysis
NASA Technical Reports Server (NTRS)
Green, R. N.
1978-01-01
A parameter estimation technique is presented to estimate the radiative flux distribution over the earth from radiometer measurements at satellite altitude. The technique analyzes measurements from a wide field of view (WFOV), horizon to horizon, nadir pointing sensor with a mathematical technique to derive the radiative flux estimates at the top of the atmosphere for resolution elements smaller than the sensor field of view. A computer simulation of the data analysis technique is presented for both earth-emitted and reflected radiation. Zonal resolutions are considered as well as the global integration of plane flux. An estimate of the equator-to-pole gradient is obtained from the zonal estimates. Sensitivity studies of the derived flux distribution to directional model errors are also presented. In addition to the WFOV results, medium field of view results are presented.
HEART: an automated beat-to-beat cardiovascular analysis package using Matlab.
Schroeder, Mark J.; Perreault, Bill; Ewert, Daniel L.; Koenig, Steven C.
2004-07-01
A computer program is described for beat-to-beat analysis of cardiovascular parameters from high-fidelity pressure and flow waveforms. The Hemodynamic Estimation and Analysis Research Tool (HEART) is a post-processing analysis software package developed in Matlab that enables scientists and clinicians to document, load, view, calibrate, and analyze experimental data that have been digitally saved in ASCII or binary format. Analysis routines include traditional hemodynamic parameter estimates as well as more sophisticated analyses such as lumped arterial model parameter estimation and vascular impedance frequency spectra. Cardiovascular parameter values of all analyzed beats can be viewed and statistically analyzed. An attractive feature of the HEART program is the ability to analyze data with visual quality assurance throughout the process, thus establishing a framework within which Good Laboratory Practice (GLP) compliance can be achieved. Additionally, the development of HEART on the Matlab platform provides users with the flexibility to adapt or create study-specific analysis files according to their needs. Copyright 2003 Elsevier Ltd.
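HEART itself is a Matlab package; as a generic illustration of what "beat-to-beat analysis" of a pressure waveform involves (detect beats, then report per-beat systolic, diastolic, and mean values), here is a minimal Python sketch on a synthetic waveform. The threshold-crossing detector and the 1.2 Hz test signal are illustrative assumptions, not part of the published tool.

```python
import numpy as np

def detect_beats(pressure, fs, min_rr=0.4):
    """Mark beat onsets at rising threshold crossings, with a refractory
    period of min_rr seconds. Real packages use more robust detectors."""
    thresh = 0.5 * (pressure.max() + pressure.min())
    above = pressure > thresh
    edges = np.flatnonzero(~above[:-1] & above[1:])   # rising edges
    beats, last = [], -np.inf
    for e in edges:
        if (e - last) / fs >= min_rr:
            beats.append(e)
            last = e
    return np.array(beats)

def per_beat_params(pressure, fs):
    """Return (systolic, diastolic, mean) pressure for each complete beat."""
    beats = detect_beats(pressure, fs)
    rows = []
    for b0, b1 in zip(beats[:-1], beats[1:]):
        seg = pressure[b0:b1]
        rows.append((seg.max(), seg.min(), seg.mean()))
    return np.array(rows)

# synthetic 120/80-like waveform: 5 s at 250 Hz, 1.2 Hz "heart rate"
fs = 250
t = np.arange(0, 5, 1 / fs)
pressure = 100 + 20 * np.sin(2 * np.pi * 1.2 * t)
params = per_beat_params(pressure, fs)
print(params.shape[0])        # number of complete beats analyzed
print(params[:, 0].mean())    # mean systolic pressure, ~120 for this signal
```

The same per-beat loop is where a real tool would attach its more sophisticated routines (e.g., lumped-model fits) instead of simple max/min/mean statistics.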
Andújar, Dionisio; Fernández-Quintanilla, César; Dorado, José
2015-06-04
In energy crops for biomass production, a proper plant structure is important to optimize wood yields. Precise crop characterization in early stages may contribute to the choice of proper cropping techniques. This study assesses the potential of the Microsoft Kinect for Windows v.1 sensor to determine the best viewing angle for estimating plant biomass based on poplar seedling geometry. Kinect Fusion algorithms were used to generate a 3D point cloud from the depth video stream. The sensor was mounted in different positions facing the tree in order to obtain depth (RGB-D) images from different angles. Individuals of two different ages, i.e., one month and one year old, were scanned. Four viewing angles were compared: top view (0°), 45° downward view, front view (90°), and ground upward view (-45°). The ground truth used to validate the sensor readings consisted of destructive sampling in which the height, leaf area, and biomass (dry weight basis) were measured for each individual plant. The depth image models agreed well with the 45°, 90°, and -45° measurements in one-year-old poplar trees. Good correlations (0.88 to 0.92) between dry biomass and the area measured with the Kinect were found. In addition, plant height was accurately estimated, with an error of a few centimeters. The comparison between viewing angles revealed that top views showed poorer results because the top leaves occluded the rest of the tree, whereas the other views led to good results. Conversely, small poplars showed better correlations with actual parameters from the top view (0°). Therefore, although the Microsoft Kinect for Windows v.1 sensor provides good opportunities for biomass estimation, the viewing angle must be chosen taking into account the developmental stage of the crop and the desired parameters.
The results of this study indicate that Kinect is a promising tool for a rapid canopy characterization, i.e., for estimating crop biomass production, with several important advantages: low cost, low power needs and a high frame rate (frames per second) when dynamic measurements are required.
Parameter Estimation for Geoscience Applications Using a Measure-Theoretic Approach
NASA Astrophysics Data System (ADS)
Dawson, C.; Butler, T.; Mattis, S. A.; Graham, L.; Westerink, J. J.; Vesselinov, V. V.; Estep, D.
2016-12-01
Effective modeling of complex physical systems arising in the geosciences depends on knowing parameters that are often difficult or impossible to measure in situ. In this talk we focus on two such problems: estimating parameters for groundwater flow and contaminant transport, and estimating parameters within a coastal ocean model. The approach we will describe, proposed by collaborators D. Estep, T. Butler, and others, is a novel stochastic inversion technique grounded in measure theory. In this approach, given a probability space on certain observable quantities of interest, one searches for the sets of highest probability in parameter space that give rise to these observables. When viewed as mappings between sets, the stochastic inversion problem is well-posed in certain settings, but there are computational challenges related to the set construction. We will focus the talk on estimating scalar parameters and fields in a contaminant transport setting, and on estimating bottom friction in a complicated near-shore coastal application.
Robust gaze-steering of an active vision system against errors in the estimated parameters
NASA Astrophysics Data System (ADS)
Han, Youngmo
2015-01-01
Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.
Overview and benchmark analysis of fuel cell parameters estimation for energy management purposes
NASA Astrophysics Data System (ADS)
Kandidayeni, M.; Macias, A.; Amamou, A. A.; Boulon, L.; Kelouwani, S.; Chaoui, H.
2018-03-01
Proton exchange membrane fuel cells (PEMFCs) have become the center of attention for energy conversion in many areas, such as the automotive industry, where they confront highly dynamic operating conditions that cause their characteristics to vary. To ensure appropriate modeling of PEMFCs, accurate parameter estimation is required. However, parameter estimation of PEMFC models is highly challenging due to their multivariate, nonlinear, and complex nature. This paper comprehensively reviews PEMFC model parameter estimation methods, with a specific focus on online identification algorithms, which are considered the basis of global energy management strategy design, for estimating the linear and nonlinear parameters of a PEMFC model in real time. In this respect, different PEMFC models with different categories and purposes are discussed first. Subsequently, a thorough investigation of PEMFC parameter estimation methods in the literature is conducted in terms of applicability. Three potential algorithms for online applications, namely Recursive Least Squares (RLS), the Kalman filter, and the extended Kalman filter (EKF), which have escaped attention in previous works, are then utilized to identify the parameters of two well-known semi-empirical models in the literature, those of Squadrito et al. and Amphlett et al. Ultimately, the achieved results and future challenges are discussed.
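Of the three online algorithms named above, recursive least squares is the simplest to sketch. The fragment below fits a model that is linear in its parameters; the voltage expression and its regressors are hypothetical stand-ins for the semi-empirical PEMFC models (which are likewise linear in several of their coefficients), not the paper's actual equations.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares step with forgetting factor lam.
    theta: parameter estimate, P: covariance, phi: regressor, y: measurement."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
    theta = theta + (k * (y - phi.T @ theta)).ravel()
    P = (P - k @ phi.T @ P) / lam                  # covariance update
    return theta, P

# toy "fuel-cell-like" model linear in its parameters (hypothetical form):
# V = a0 + a1*T + a2*log(i), with temperature T and current density i
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -0.002, -0.05])
theta = np.zeros(3)
P = 1e3 * np.eye(3)
for _ in range(500):
    T, i = rng.uniform(300, 350), rng.uniform(0.1, 1.0)
    phi = np.array([1.0, T, np.log(i)])
    y = phi @ theta_true + 1e-4 * rng.standard_normal()   # noisy measurement
    theta, P = rls_update(theta, P, phi, y)
print(theta)   # converges toward theta_true
```

The forgetting factor (here 0.99) is what makes the scheme track slowly varying characteristics online rather than averaging over the entire history.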
Measurement of surface physical properties and radiation balance for KUREX-91 study
NASA Technical Reports Server (NTRS)
Walter-Shea, Elizabeth A.; Blad, Blaine L.; Mesarch, Mark A.; Hays, Cynthia J.
1992-01-01
Biophysical properties and radiation balance components were measured at the Streletskaya Steppe Reserve of the Russian Republic in July 1991. Steppe vegetation parameters characterized include leaf area index (LAI), leaf angle distribution, mean tilt angle, canopy height, leaf spectral properties, leaf water potential, fraction of absorbed photosynthetically active radiation (APAR), and incoming and outgoing shortwave and longwave radiation. Research results, biophysical parameters, radiation balance estimates, and sun-view geometry effects on estimating APAR are discussed. Incoming and outgoing radiation streams are estimated using bidirectional spectral reflectances and bidirectional thermal emittances. Good agreement between measured and modeled estimates of the radiation balance was obtained.
NASA Astrophysics Data System (ADS)
Chen, B.; Su, J. H.; Guo, L.; Chen, J.
2017-06-01
This paper puts forward a maximum power estimation method based on a photovoltaic array (PVA) model to solve optimization problems in the group control of PV water pumping systems (PVWPS) at the maximum power point (MPP). The method uses an improved genetic algorithm (GA) for model parameter estimation and identification over multiple P-V characteristic curves of a PVA model, and then corrects the identification results through the least squares method. On this basis, the irradiation level and operating temperature under any condition can be estimated, so an accurate PVA model is established and disturbance-free MPP estimation is achieved. The simulation adopts the proposed GA to determine the parameters, and the results verify the accuracy and practicability of the method.
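The two-stage idea, a genetic algorithm for the global parameter search followed by a least-squares correction, can be sketched generically. The quadratic power-current curve below is an invented stand-in for a real PVA characteristic, and the GA is a plain textbook version, not the paper's improved variant.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(i, params):
    """Hypothetical P-I curve stand-in: power quadratic in current."""
    a, b = params
    return a * i - b * i**2          # peaks at i = a / (2b)

# synthetic "measured" power curve
i_meas = np.linspace(0.1, 5.0, 50)
params_true = np.array([10.0, 1.0])
p_meas = model(i_meas, params_true) + 0.01 * rng.standard_normal(i_meas.size)

def fitness(params):
    return -np.mean((model(i_meas, params) - p_meas) ** 2)

# plain generational GA: tournament selection, blend crossover, Gaussian mutation
pop = rng.uniform([0, 0.1], [20, 5], size=(60, 2))
for gen in range(80):
    fit = np.array([fitness(p) for p in pop])
    new = []
    for _ in range(len(pop)):
        a, b = rng.integers(0, len(pop), 2)
        p1 = pop[a] if fit[a] > fit[b] else pop[b]     # tournament pick 1
        c, d = rng.integers(0, len(pop), 2)
        p2 = pop[c] if fit[c] > fit[d] else pop[d]     # tournament pick 2
        w = rng.uniform(0, 1)
        child = w * p1 + (1 - w) * p2                  # blend crossover
        child += 0.05 * rng.standard_normal(2)         # mutation
        new.append(child)
    pop = np.array(new)
best = max(pop, key=fitness)

# least-squares correction of the GA result (this toy model is linear in a, b)
A = np.column_stack([i_meas, -i_meas**2])
refined, *_ = np.linalg.lstsq(A, p_meas, rcond=None)
print(best, refined)   # refined should land very close to params_true
```

The refinement step is why the hybrid works: the GA only needs to reach the right basin, after which least squares pins the parameters down precisely.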
On robust parameter estimation in brain-computer interfacing
NASA Astrophysics Data System (ADS)
Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert
2017-12-01
Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.
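As a flavor of robust parameter estimation under outlier contamination, the sketch below down-weights samples far from the current mean estimate with exponentially decaying (Welsch-type) weights, a fixed-point scheme related in spirit to minimum-divergence estimation, though not the structured hierarchical estimator derived in the paper. The dimensions and outlier model are invented for illustration.

```python
import numpy as np

def robust_mean(X, beta=0.5, iters=50):
    """Fixed-point robust mean: samples far from the current estimate get
    exponentially down-weighted; the scale is set from the median squared
    distance so gross outliers receive essentially zero weight."""
    mu = np.median(X, axis=0)                  # robust initialization
    for _ in range(iters):
        d2 = np.sum((X - mu) ** 2, axis=1)
        w = np.exp(-beta * d2 / (2 * np.median(d2) + 1e-12))
        mu = (w[:, None] * X).sum(0) / w.sum()
    return mu

rng = np.random.default_rng(0)
clean = rng.normal(0, 1, size=(200, 4))        # inlier feature samples
outliers = rng.normal(20, 1, size=(20, 4))     # e.g. loose-electrode artifacts
X = np.vstack([clean, outliers])
print(np.linalg.norm(X.mean(0)))               # plain mean pulled toward outliers
print(np.linalg.norm(robust_mean(X)))          # stays near the true mean (origin)
```

A structured (trial-aware) estimator would apply such weights per trial rather than per sample, which is precisely the kind of information the paper argues plain robust estimators ignore.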
Feature Extraction for Pose Estimation. A Comparison Between Synthetic and Real IR Imagery
1991-12-01
[Fragment of the report's list of figures, recovered from garbled front matter:] ... determine the orientation of the sensor relative to the target. Effects of changing sensor and target parameters: the reference object is a T-62 tank facing the viewer (sensor/target parameters set equal to zero). NOTE: changing the target parameters produces anomalous results; for these images, the field of view (FOV) was not changed. Image anomalies from changing the target ...
Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting
NASA Astrophysics Data System (ADS)
Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang
In this paper we studied the geometry of a three-dimensional tableau from a single realist painting, Scott Fraser’s Three Way Vanitas (2006). The tableau contains a carefully chosen, complex arrangement of objects, including a moth, egg, cup, strand of string, glass of water, bone, and hand mirror. Each of the three plane mirrors presents a different view of the tableau, from a virtual camera behind each mirror and symmetric to the artist’s viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and for the images in the three plane mirrors depicted within the painting.
Aircraft Engine Thrust Estimator Design Based on GSA-LSSVM
NASA Astrophysics Data System (ADS)
Sheng, Hanlin; Zhang, Tianhong
2017-08-01
In view of the need for a highly precise and reliable thrust estimator to achieve direct thrust control of aircraft engines, a GSA-LSSVM-based thrust estimator design is proposed. It builds on support vector regression (SVR), specifically the least squares support vector machine (LSSVM), together with a new optimization algorithm, the gravitational search algorithm (GSA), performing integrated modelling and parameter optimization. The results show that, compared to the particle swarm optimization (PSO) algorithm, GSA finds the unknown optimization parameters better and yields a model with better prediction and generalization ability. The model can better predict aircraft engine thrust and thus fulfills the needs of direct thrust control of aircraft engines.
Statistical Constraints on Station Clock Parameters in the NRCAN PPP Estimation Process
2008-12-01
e.g., Two-Way Satellite Time and Frequency Transfer (TWSTFT), GPS Common View (CV), and GPS P3 [9]. Finally, PPP shows a 2-times improvement in ... the collocated Two-Way Satellite Time and Frequency Technique (TWSTFT) estimates for the same baseline. The TWSTFT estimates are available every 2 ... periodicity is due to the thermal variations described in the previous section, while the divergence between both PPP solutions and TWSTFT estimates is due ...
Parameter estimation using meta-heuristics in systems biology: a comprehensive review.
Sun, Jianyong; Garibaldi, Jonathan M; Hodgman, Charlie
2012-01-01
This paper gives a comprehensive review of the application of meta-heuristics to optimization problems in systems biology, mainly focussing on the parameter estimation problem (also called the inverse problem or model calibration). It is intended for either the system biologist who wishes to learn more about the various optimization techniques available and/or the meta-heuristic optimizer who is interested in applying such techniques to problems in systems biology. First, the parameter estimation problems emerging from different areas of systems biology are described from the point of view of machine learning. Brief descriptions of various meta-heuristics developed for these problems follow, along with outlines of their advantages and disadvantages. Several important issues in applying meta-heuristics to the systems biology modelling problem are addressed, including the reliability and identifiability of model parameters, optimal design of experiments, and so on. Finally, we highlight some possible future research directions in this field.
NASA Astrophysics Data System (ADS)
Lim, Sungsoo; Lee, Seohyung; Kim, Jun-geon; Lee, Daeho
2018-01-01
The around-view monitoring (AVM) system is one of the major applications of advanced driver assistance systems and intelligent transportation systems. We propose an on-line calibration method that can compensate for misalignments in AVM systems. Most AVM systems use fisheye undistortion, inverse perspective transformation, and geometrical registration methods. To perform these procedures, the parameters for each process must be known; the procedure by which the parameters are estimated is referred to as the initial calibration. However, when using only the initial calibration data, we cannot compensate for misalignments caused by changes in a car's equilibrium. Moreover, even small changes, such as tire pressure levels, passenger weight, or road conditions, can affect a car's equilibrium. Therefore, to compensate for this misalignment, an additional technique is necessary, specifically an on-line calibration method. On-line calibration can recalculate the homographies, which can correct any degree of misalignment, using the unique features of ordinary parking lanes. To extract features from the parking lanes, this method uses corner detection and a pattern matching algorithm. From the extracted features, homographies are estimated using random sample consensus (RANSAC) and parameter estimation. Finally, the misaligned epipolar geometries are compensated via the estimated homographies. Thus, the proposed method can render image planes parallel to the ground. This method does not require any designated patterns and can be used whenever cars are placed in a parking lot. The experimental results show the robustness and efficiency of the method.
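The core step of such an on-line calibration, estimating a homography from feature correspondences with RANSAC, can be sketched with a standard DLT-plus-RANSAC routine. The synthetic correspondences below stand in for detected parking-lane corners; this is a generic sketch, not the paper's pipeline.

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform: H such that dst ~ H @ src in homogeneous coords."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, iters=500, tol=2.0, seed=0):
    """Sample 4 correspondences, fit, count inliers; refit on the best set."""
    rng = np.random.default_rng(seed)
    n, best = len(src), None
    for _ in range(iters):
        idx = rng.choice(n, 4, replace=False)
        H = dlt_homography(src[idx], dst[idx])
        proj = np.column_stack([src, np.ones(n)]) @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        inl = np.linalg.norm(proj - dst, axis=1) < tol
        if best is None or inl.sum() > best.sum():
            best = inl
    return dlt_homography(src[best], dst[best]), best

# synthetic ground-plane correspondences with some gross mismatches
rng = np.random.default_rng(1)
H_true = np.array([[1.0, 0.1, 5.0],
                   [0.05, 1.0, -3.0],
                   [1e-4, 2e-4, 1.0]])
src = rng.uniform(0, 100, size=(40, 2))
ph = np.column_stack([src, np.ones(40)]) @ H_true.T
dst = ph[:, :2] / ph[:, 2:3]
dst[:8] += rng.uniform(20, 50, size=(8, 2))    # simulated matching failures
H_est, inliers = ransac_homography(src, dst)
print(inliers.sum())           # 32 of 40 correspondences kept as inliers
print(np.round(H_est, 4))      # close to H_true
```

RANSAC is what makes the calibration tolerant of the inevitable false lane-corner matches: the homography is refit only on the consensus set.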
NASA Astrophysics Data System (ADS)
Konno, S.; Mita, A.
2014-03-01
Recently, demand is increasing for building spaces that respond to the rise of single aged households and the diversification of lifestyles. The smart house is one response, but such spaces are difficult to change and renovate. Therefore, we propose the Biofied building. In a biofied building, a mobile robot gathers conscious and unconscious information about residents, aiming to make building spaces more secure and comfortable by realizing interaction between residents and building spaces. Walking parameters are among the most important pieces of unconscious information about residents. They are an indicator of autonomy in the elderly, and changes in stride length and walking speed may be predictive of a future fall or cognitive impairment. By observing residents' walking and informing them of their walking state, residents can forestall such dangers, which helps them live more securely and autonomously. Many methods for estimating walking parameters have been studied; the best known use accelerometers or a motion-capture camera. Walking parameters estimated by these methods are highly precise, but the sensors must be attached to the body, which can make a person's walk differ from their natural gait; furthermore, some elderly people feel it is invasive. In this work, a Kinect, which can sense a person without contact, was mounted on the mobile robot. Stride time, stride length, and walking speed were estimated from the back view of a person by following him or her. Evaluation was done for 10 m, 5 m, 4 m, and 3 m walking distances. As a result, the proposed system can estimate walking parameters for walks longer than 3 m.
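A minimal sketch of how stride time, stride length, and walking speed could be recovered from tracked positions: the hip forward position and ankle-separation signals below are synthetic stand-ins for Kinect skeleton output, and the peak-picking step detector is an illustrative assumption, not the paper's back-view pipeline.

```python
import numpy as np

def walking_params(hip_x, ankle_gap, fs):
    """Estimate walking speed, stride time, and stride length from tracked
    positions: hip forward position and left-right ankle separation (metres)."""
    speed = (hip_x[-1] - hip_x[0]) / (len(hip_x) / fs)   # net displacement / time
    g = ankle_gap
    # step events: local maxima of ankle separation (one maximum per step)
    peaks = np.flatnonzero((g[1:-1] > g[:-2]) & (g[1:-1] >= g[2:])) + 1
    step_time = np.diff(peaks).mean() / fs
    stride_time = 2 * step_time            # one stride = two steps
    return speed, stride_time, speed * stride_time

fs = 30.0                                  # Kinect-like frame rate (Hz)
t = np.arange(0, 5, 1 / fs)
hip_x = 1.2 * t                            # subject walking at 1.2 m/s
ankle_gap = 0.3 * np.abs(np.sin(2 * np.pi * 0.9 * t))   # ~1.8 steps per second
speed, stride_t, stride_len = walking_params(hip_x, ankle_gap, fs)
print(speed, stride_t, stride_len)
```

With real skeleton data, the same quantities would be computed after smoothing and after compensating for the robot's own motion while it follows the person.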
Sayseng, Vincent; Grondin, Julien; Konofagou, Elisa E
2018-05-01
Coherent compounding methods using the full or partial transmit aperture have been investigated as a possible means of increasing strain measurement accuracy in cardiac strain imaging; however, the optimal transmit parameters in either compounding approach have yet to be determined. The relationship between strain estimation accuracy and transmit parameters (specifically the subaperture, angular aperture, tilt angle, number of virtual sources, and frame rate) in partial-aperture (subaperture compounding) and full-aperture (steered compounding) fundamental-mode cardiac imaging was thus investigated and compared. A Field II simulation of a 3-D cylindrical annulus undergoing deformation and twist was developed to evaluate the accuracy of 2-D strain estimation in cross-sectional views. The tradeoff between frame rate and number of virtual sources was then investigated via transthoracic imaging in the parasternal short-axis view of five healthy human subjects, using the strain filter to quantify estimation precision. Finally, the optimized subaperture compounding sequence (25-element subaperture, 90° angular aperture, 10 virtual sources, 300-Hz frame rate) was compared to the optimized steered compounding sequence (60° angular aperture, 15° tilt, 10 virtual sources, 300-Hz frame rate) via transthoracic imaging of five healthy subjects. Both approaches were determined to estimate cumulative radial strain with statistically equivalent precision (subaperture compounding E(SNRe %) = 3.56, steered compounding E(SNRe %) = 4.26).
A quasi-dense matching approach and its calibration application with Internet photos.
Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei
2015-03-01
This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level across multiple views with our neighboring-view selection strategy. A two-layer iteration algorithm is proposed to optimize the 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and the camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process for the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.
MATERIAL PARAMETER ESTIMATION USING TERAHERTZ TIME-DOMAIN SPECTROSCOPY. (R827122)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Havens, Timothy C; Roggemann, Michael C; Schulz, Timothy J; Brown, Wade W; Beyer, Jeff T; Otten, L John
2002-05-20
We discuss a method of data reduction and analysis that has been developed for a novel experiment to detect anisotropic turbulence in the tropopause and to measure the spatial statistics of these flows. The experimental concept is to make measurements of temperature at 15 points on a hexagonal grid for altitudes from 12,000 to 18,000 m while suspended from a balloon performing a controlled descent. From the temperature data, we estimate the index of refraction and study the spatial statistics of the turbulence-induced index of refraction fluctuations. We present and evaluate the performance of a processing approach to estimate the parameters of an anisotropic model for the spatial power spectrum of the turbulence-induced index of refraction fluctuations. A Gaussian correlation model and a least-squares optimization routine are used to estimate the parameters of the model from the measurements. In addition, we implemented a quick-look algorithm to have a computationally nonintensive way of viewing the autocorrelation function of the index fluctuations. The autocorrelation of the index of refraction fluctuations is binned and interpolated onto a uniform grid from the sparse points that exist in our experiment. This allows the autocorrelation to be viewed with a three-dimensional plot to determine whether anisotropy exists in a specific data slab. Simulation results presented here show that, in the presence of the anticipated levels of measurement noise, the least-squares estimation technique allows turbulence parameters to be estimated with low rms error.
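A Gaussian correlation model is convenient because taking logarithms makes the least-squares fit linear in the unknowns. The sketch below fits a variance and a correlation length to a noisy synthetic autocorrelation; the isotropic single-length model and all numerical values are simplifying assumptions, standing in for the anisotropic model fitted in the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_corr(r, sigma2, L):
    """Gaussian correlation model B(r) = sigma2 * exp(-(r/L)^2)."""
    return sigma2 * np.exp(-(r / L) ** 2)

# synthetic binned autocorrelation of index-of-refraction fluctuations
r = np.linspace(0.0, 10.0, 60)                # sensor-pair separations (m)
true_sigma2, true_L = 2.5, 3.0
B_noisy = gauss_corr(r, true_sigma2, true_L) + 0.05 * rng.standard_normal(r.size)

# log-linearization: ln B = ln(sigma2) - r^2 / L^2 is linear in the unknowns,
# so ordinary least squares suffices (use only bins well above the noise floor)
mask = (r < 5.0) & (B_noisy > 0.2)
A = np.column_stack([np.ones(mask.sum()), -r[mask] ** 2])
(c0, c1), *_ = np.linalg.lstsq(A, np.log(B_noisy[mask]), rcond=None)
sigma2_hat, L_hat = np.exp(c0), 1.0 / np.sqrt(c1)
print(sigma2_hat, L_hat)    # close to 2.5 and 3.0
```

An anisotropic fit would simply carry one correlation length per axis, e.g. B(r) = σ² exp(-x²/Lx² - y²/Ly²), which stays linear in (ln σ², 1/Lx², 1/Ly²) under the same trick.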
Earth orbital teleoperator visual system evaluation program
NASA Technical Reports Server (NTRS)
Frederick, P. N.; Shields, N. L., Jr.; Kirkpatrick, M., III
1977-01-01
Visual system parameters and stereoptic television component geometries were evaluated for optimum viewing. The accuracy of operator range estimation using a Fresnel stereo television system with a three-dimensional cursor was examined. An operator's ability to align three-dimensional targets using vidicon tube and solid-state television cameras as part of a Fresnel stereoptic system was evaluated. An operator's ability to discriminate between varied color samples viewed with a color television system was determined.
ORTHONORMAL RESIDUALS IN GEOSTATISTICS: MODEL CRITICISM AND PARAMETER ESTIMATION. (R825689C037)
A NEW VARIANCE ESTIMATOR FOR PARAMETERS OF SEMI-PARAMETRIC GENERALIZED ADDITIVE MODELS. (R829213)
ESTIMATION OF OCTANOL/WATER PARTITION COEFFICIENTS USING LSER PARAMETERS. (R825370C064)
Stretchy binary classification.
Toh, Kar-Ann; Lin, Zhiping; Sun, Lei; Li, Zhengguo
2018-01-01
In this article, we introduce an analytic formulation for compressive binary classification. The formulation seeks to solve the least ℓp-norm of the parameter vector subject to a classification error constraint. An analytic and stretchable estimation is conjectured where the estimation can be viewed as an extension of the pseudoinverse with left and right constructions. Our variance analysis indicates that the estimation based on the left pseudoinverse is unbiased and the estimation based on the right pseudoinverse is biased. Sparseness can be obtained for the biased estimation under certain mild conditions. The proposed estimation is investigated numerically using both synthetic and real-world data. Copyright © 2017 Elsevier Ltd. All rights reserved.
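The left and right pseudoinverse constructions mentioned here reduce, in the unconstrained ℓ2 case, to the classical normal-equation and minimum-norm solutions. A minimal NumPy sketch of those two special cases (the article's stretchable ℓp estimator itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdetermined case (more samples than features): left pseudoinverse
X = rng.normal(size=(50, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true
w_left = np.linalg.inv(X.T @ X) @ X.T @ y      # (X^T X)^{-1} X^T y

# Underdetermined case (more features than samples): right pseudoinverse
A = rng.normal(size=(5, 50))
b = A @ np.ones(50)
w_right = A.T @ np.linalg.inv(A @ A.T) @ b     # A^T (A A^T)^{-1} b, min-norm fit
```

The left construction recovers the unique least-squares solution exactly; the right construction returns the minimum-ℓ2-norm vector among the infinitely many that satisfy the constraints.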
Biophysical and spectral modeling for crop identification and assessment
NASA Technical Reports Server (NTRS)
Goel, N. S. (Principal Investigator)
1984-01-01
The development of a technique for estimating all canopy parameters occurring in a canopy reflectance model from the measured canopy reflectance data is summarized. The Suits and the SAIL model for a uniform and homogeneous crop canopy were used to determine if the leaf area index and the leaf angle distribution could be estimated. Optimal solar/view angles for measuring CR were also investigated. The use of CR in many wavelengths or spectral bands and of linear and nonlinear transforms of CRs for various solar/view angles and various spectral bands is discussed as well as the inversion of radiance data inside the canopy, angle transforms for filtering out terrain slope effects, and modification of one-dimensional models.
N-mixture models for estimating population size from spatially replicated counts
Royle, J. Andrew
2004-01-01
Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds, illustrating the sensitivity to choice of prior on p and substantially different estimates of abundance as a consequence.
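The core computation, the marginal likelihood obtained by integrating over the latent abundance N, can be sketched as follows. The Poisson mixing distribution and constant detection probability p match the model class described in the abstract; the function name, truncation bound, and example counts are my own:

```python
import numpy as np
from scipy.stats import binom, poisson

def site_marginal_loglik(counts, lam, p, n_max=200):
    """Marginal log-likelihood of repeated counts at one site:
    sum over latent abundance N of prod_t Binomial(y_t | N, p) * Poisson(N | lam).
    n_max truncates the infinite sum over N."""
    n = np.arange(max(counts), n_max + 1)
    lik = poisson.pmf(n, lam)
    for y in counts:
        lik = lik * binom.pmf(y, n, p)   # repeated counts, shared N
    return np.log(lik.sum())

# Example: three repeated counts at one site
ll = site_marginal_loglik([3, 5, 4], lam=10.0, p=0.5)
```

Summing this quantity over sites and maximizing with respect to (lam, p) yields the N-mixture estimates; abundance follows from the fitted lam.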
Borchers, D L; Langrock, R
2015-12-01
We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too. © 2015 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
Patient-specific bronchoscopy visualization through BRDF estimation and disocclusion correction.
Chung, Adrian J; Deligianni, Fani; Shah, Pallav; Wells, Athol; Yang, Guang-Zhong
2006-04-01
This paper presents an image-based method for virtual bronchoscopy with photo-realistic rendering. The technique is based on recovering bidirectional reflectance distribution function (BRDF) parameters in an environment where the choice of viewing positions, directions, and illumination conditions is restricted. Video images of bronchoscopy examinations are combined with patient-specific three-dimensional (3-D) computed tomography data through two-dimensional (2-D)/3-D registration and shading model parameters are then recovered by exploiting the restricted lighting configurations imposed by the bronchoscope. With the proposed technique, the recovered BRDF is used to predict the expected shading intensity, allowing a texture map independent of lighting conditions to be extracted from each video frame. To correct for disocclusion artefacts, statistical texture synthesis was used to recreate the missing areas. New views not present in the original bronchoscopy video are rendered by evaluating the BRDF with different viewing and illumination parameters. This allows free navigation of the acquired 3-D model with enhanced photo-realism. To assess the practical value of the proposed technique, a detailed visual scoring that involves both real and rendered bronchoscope images is conducted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; Mayes, Melanie; Parker, Jack C
2010-01-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
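One classical analytical solution that CXTFIT-style codes implement for forward prediction is the Ogata-Banks solution of the one-dimensional equilibrium convection-dispersion equation for continuous injection. A Python (rather than VBA) sketch, with illustrative parameter values:

```python
import numpy as np
from scipy.special import erfc

def cde_ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks solution of the 1-D equilibrium convection-dispersion
    equation: continuous injection at concentration c0 at x = 0, t >= 0."""
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

# Breakthrough curve at x = 10 cm for v = 1 cm/h, D = 0.5 cm^2/h
t = np.linspace(0.1, 30.0, 300)
c = cde_ogata_banks(10.0, t, v=1.0, D=0.5)
```

Wrapping such a forward solution in a least-squares routine (weighted, with a penalty term for prior information) gives the parameter estimation workflow the abstract describes.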
A general approach to double-moment normalization of drop size distributions
NASA Astrophysics Data System (ADS)
Lee, G. W.; Sempere-Torres, D.; Uijlenhoet, R.; Zawadzki, I.
2003-04-01
Normalization of drop size distributions (DSDs) is re-examined here. First, we present an extension of scaling normalization using one moment of the DSD as a parameter (as introduced by Sempere-Torres et al., 1994) to a scaling normalization using two moments as parameters of the normalization. It is shown that the normalization of Testud et al. (2001) is a particular case of the two-moment scaling normalization. This provides a unified view of DSD normalization and a good model representation of DSDs. Data analysis shows that, from the point of view of moment estimation, least-squares regression is slightly more effective than moment estimation from the normalized average DSD.
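The Testud et al. (2001) special case uses the third and fourth moments of the DSD. A numerical sketch (the binning, units, and example exponential DSD are my choices, not the paper's data):

```python
import numpy as np

def moment(D, N, k):
    """k-th moment of a binned DSD N(D) via trapezoidal integration."""
    f = N * D**k
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(D)))

def two_moment_normalize(D, N):
    """(3,4)-moment normalization, the Testud et al. (2001) special case
    of two-moment scaling normalization."""
    M3, M4 = moment(D, N, 3), moment(D, N, 4)
    Dm = M4 / M3                          # mean volume diameter [mm]
    Nw = (4.0**4 / 6.0) * M3 / Dm**4      # generalized intercept [m^-3 mm^-1]
    return D / Dm, N / Nw, Dm, Nw

# Example: exponential DSD N(D) = N0 exp(-Lambda * D)
D = np.linspace(0.01, 8.0, 500)
N = 8000.0 * np.exp(-2.0 * D)
x, g, Dm, Nw = two_moment_normalize(D, N)
```

For an exponential DSD N0 exp(-ΛD), this normalization gives Dm = 4/Λ and Nw = N0, so the normalized function g(D/Dm) collapses DSDs of different intensities onto one shape.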
Application of latent variable model in Rosenberg self-esteem scale.
Leung, Shing-On; Wu, Hui-Ping
2013-01-01
Latent Variable Models (LVM) are applied to the Rosenberg Self-Esteem Scale (RSES). Parameter estimates automatically carry negative signs, hence no recoding is necessary for negatively scored items. Bad items can be located through parameter estimates, item characteristic curves and other measures. Two factors are extracted, one on self-esteem and the other on the degree to which respondents take moderate views, with the latter not often being covered in previous studies. A goodness-of-fit measure based on two-way margins is used, but more work is needed. Results show that the scaling provided by models with a more formal statistical grounding correlates highly with the conventional method, which may provide justification for the usual practice.
Computer vision research with new imaging technology
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Liu, Fei; Sun, Zhenan
2015-12-01
Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both intensity values and directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces; the depth map contains numerous holes and large ambiguities on textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. Then the depth map is estimated through the epipolar plane image (EPI) method. Finally, the high-quality 3D face model is recovered via a fusing strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera with different poses.
Neural Net Gains Estimation Based on an Equivalent Model
Aguilar Cruz, Karen Alicia; Medel Juárez, José de Jesús; Fernández Muñoz, José Luis; Esmeralda Vigueras Velázquez, Midory
2016-01-01
A model of an Equivalent Artificial Neural Net (EANN) describes the gains set, viewed as parameters in a layer; this consideration is a reproducible process, applicable to a neuron in a neural net (NN). The EANN helps to estimate the NN gains or parameters, so we propose two methods to determine them. The first considers a fuzzy inference combined with the traditional Kalman filter, obtaining the equivalent model and estimating in a fuzzy sense the gains matrix A and the proper gain K in the traditional filter identification. The second develops a direct estimation in state space, describing an EANN using the expected value and the recursive description of the gains estimation. Finally, a comparison of both descriptions is performed, highlighting that the analytical method describes the neural net coefficients in a direct form, whereas the other technique requires selecting from the Knowledge Base (KB) the factors based on the functional error and the reference signal built with the past information of the system. PMID:27366146
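The paper's fuzzy-inference/Kalman combination is not reproduced here; as a plain illustration of estimating a gain recursively with a Kalman filter, consider a scalar first-order system whose gain is treated as the filter state (the model, noise levels, and seed are my choices):

```python
import numpy as np

def kalman_gain_estimate(y, a0=0.0, q=1e-6, r=0.01):
    """Recursively estimate a scalar gain a in y[k] = a*y[k-1] + w[k],
    running a Kalman filter on the parameter itself (an illustrative
    stand-in for the paper's fuzzy/Kalman scheme, not its algorithm)."""
    a, P = a0, 1.0
    for k in range(1, len(y)):
        P += q                               # predict: parameter random walk
        H = y[k - 1]                         # observation "matrix" is last output
        K = P * H / (H * H * P + r)          # Kalman gain
        a += K * (y[k] - H * a)              # update with innovation
        P *= (1.0 - K * H)
    return a

# Simulate a first-order system with true gain 0.8, then recover it
rng = np.random.default_rng(3)
y = np.empty(600)
y[0] = 1.0
for k in range(1, 600):
    y[k] = 0.8 * y[k - 1] + rng.normal(0.0, 0.1)
a_hat = kalman_gain_estimate(y)
```

With q set near zero this reduces to recursive least squares; a nonzero q lets the estimate track a slowly drifting gain, which is closer to the adaptive setting the paper targets.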
Examining view angle effects on leaf N estimation in wheat using field reflectance spectroscopy
NASA Astrophysics Data System (ADS)
Song, Xiao; Feng, Wei; He, Li; Xu, Duanyang; Zhang, Hai-Yan; Li, Xiao; Wang, Zhi-Jie; Coburn, Craig A.; Wang, Chen-Yang; Guo, Tian-Cai
2016-12-01
Real-time, nondestructive monitoring of crop nitrogen (N) status is a critical factor for precision N management during wheat production. Over a 3-year period, we analyzed different wheat cultivars grown under different experimental conditions in China and Canada and studied the effects of viewing angle on the relationships between various vegetation indices (VIs) and leaf nitrogen concentration (LNC) using hyperspectral data from 11 field experiments. The objective was to improve the prediction accuracy by minimizing the effects of viewing angle on LNC estimation to construct a novel vegetation index (VI) for use under different experimental conditions. We examined the stability of previously reported optimum VIs obtained from 13 traditional indices for estimating LNC at 13 viewing zenith angles (VZAs) in the solar principal plane (SPP). The backscattering direction showed better index performance than the forward scattering direction. Red-edge VIs including the modified normalized difference vegetation index (mND705), the ratio index within the red edge region (RI-1dB) and the normalized difference red edge index (NDRE) were highly correlated with LNC, as confirmed by high coefficients of determination (R2). However, these common VIs tended to saturate, as the relationships strongly depended on experimental conditions. To overcome the influence of VZA on VIs, the chlorophyll- and LNC-sensitive NDRE index was divided by the floating-position water band index (FWBI) to generate an integrated narrow-band vegetation index. The highest correlation between the novel NDRE/FWBI parameter and LNC (R2 = 0.852) occurred at -10°, while the lowest correlation (R2 = 0.745) occurred at 60°. NDRE/FWBI was more highly correlated with LNC than existing commonly used VIs at an identical viewing zenith angle.
Upon further analysis of angle combinations, our novel VI exhibited the best performance, with the best prediction accuracy at 0° to -20° (R2 = 0.838, RMSE = 0.360) and relatively good accuracy at 0° to -30° (R2 = 0.835, RMSE = 0.366). As it is possible to monitor plant N status over a wide range of angles using portable spectrometers, viewing angles of as much as 0° to -30° are common. Consequently, we developed a united model across angles of 0° to -30° to reduce the effects of viewing angle on LNC prediction in wheat. The proposed combined NDRE/FWBI parameter, designated the wide-angle-adaptability nitrogen index (WANI), is superior for estimating LNC in wheat on a regional scale in China and Canada.
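A sketch of the index construction, assuming NDRE uses the 720/790 nm red-edge bands and that FWBI divides R900 by the minimum reflectance within the 930-980 nm water absorption region; the exact band positions in the paper may differ, and the synthetic spectrum is purely illustrative:

```python
import numpy as np

def ndre(refl, wl):
    """Normalized difference red edge index (band centers are assumptions)."""
    r790 = refl[np.argmin(np.abs(wl - 790))]
    r720 = refl[np.argmin(np.abs(wl - 720))]
    return (r790 - r720) / (r790 + r720)

def fwbi(refl, wl):
    """Floating-position water band index: R900 over the minimum
    reflectance found within the 930-980 nm water absorption feature."""
    r900 = refl[np.argmin(np.abs(wl - 900))]
    band = refl[(wl >= 930) & (wl <= 980)]
    return r900 / band.min()

# Synthetic canopy-like spectrum over 400-1000 nm: red-edge ramp
# plus a Gaussian water absorption dip near 960 nm
wl = np.arange(400, 1001, 1.0)
refl = (0.05 + 0.45 / (1 + np.exp(-(wl - 715) / 15))
        - 0.05 * np.exp(-0.5 * ((wl - 960) / 20) ** 2))
index = ndre(refl, wl) / fwbi(refl, wl)
```

Because the water-band position "floats" to the local minimum, the denominator adapts to shifts in the absorption feature, which is the motivation for dividing NDRE by FWBI.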
Earth orbital teleoperator visual system evaluation program
NASA Technical Reports Server (NTRS)
Shields, N. L., Jr.; Kirkpatrick, M., III; Frederick, P. N.; Malone, T. B.
1975-01-01
Empirical tests of range estimation accuracy and resolution, via television, under monoptic and stereoptic viewing conditions are discussed. Test data are used to derive man-machine interface requirements and make design decisions for an orbital remote manipulator system. Remote manipulator system visual tasks are given and the effects of system parameters on these tasks are evaluated.
Area Estimation of Deep-Sea Surfaces from Oblique Still Images
Souto, Miguel; Afonso, Andreia; Calado, António; Madureira, Pedro; Campos, Aldino
2015-01-01
Estimating the area of seabed surfaces from pictures or videos is an important problem in seafloor surveys. This task is complex to achieve with moving platforms such as submersibles, towed or remotely operated vehicles (ROV), where the recording camera is typically not static and provides an oblique view of the seafloor. A new method for obtaining seabed surface area estimates is presented here, using the classical set up of two laser devices fixed to the ROV frame projecting two parallel lines over the seabed. By combining lengths measured directly from the image containing the laser lines, the area of seabed surfaces is estimated, as well as the camera’s distance to the seabed, pan and tilt angles. The only parameters required are the distance between the parallel laser lines and the camera’s horizontal and vertical angles of view. The method was validated with a controlled in situ experiment using a deep-sea ROV, yielding an area estimate error of 1.5%. Further applications and generalizations of the method are discussed, with emphasis on deep-sea applications. PMID:26177287
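The full method recovers the camera's distance, pan, and tilt from the oblique laser lines; the sketch below shows only the simpler fronto-parallel case, where the known laser-line separation fixes the image scale (all numbers, names, and the flat-seabed assumption are illustrative):

```python
import math

def focal_px(image_width_px, hfov_deg):
    """Focal length in pixels from the horizontal angle of view."""
    return (image_width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)

def frame_area(laser_sep_m, laser_sep_px, img_w_px, img_h_px,
               hfov_deg, vfov_deg):
    """Fronto-parallel approximation: the laser-line separation gives the
    camera-to-seabed distance, which then scales the whole frame."""
    fx = focal_px(img_w_px, hfov_deg)
    fy = (img_h_px / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)
    dist = laser_sep_m * fx / laser_sep_px      # camera-to-seabed distance [m]
    width_m = img_w_px * dist / fx              # frame footprint on seabed
    height_m = img_h_px * dist / fy
    return width_m * height_m, dist

# 10 cm laser spacing imaged 200 px apart in a 1920x1080 frame
area, dist = frame_area(0.10, 200, 1920, 1080, 70.0, 43.0)
```

The oblique case in the paper generalizes this by measuring lengths along the two projected laser lines, from which the pan and tilt angles are also recovered.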
2016-05-11
Development of new physically-based prediction models for all-weather path attenuation estimation at Ka, V and W band from multi-channel microwave radiometric data. Previous campaigns have characterized the medium behavior at these frequency bands from both a physical and a statistical point of view (e.g., [5]-[7]).
NASA Astrophysics Data System (ADS)
Odijk, Dennis; Zhang, Baocheng; Khodabandeh, Amir; Odolinski, Robert; Teunissen, Peter J. G.
2016-01-01
The concept of integer ambiguity resolution-enabled Precise Point Positioning (PPP-RTK) relies on appropriate network information for the parameters that are common between the single-receiver user that applies and the network that provides this information. Most of the current methods for PPP-RTK are based on forming the ionosphere-free combination using dual-frequency Global Navigation Satellite System (GNSS) observations. These methods are therefore restrictive in the light of the development of new multi-frequency GNSS constellations, as well as from the point of view that the PPP-RTK user requires ionospheric corrections to obtain integer ambiguity resolution results based on short observation time spans. The method for PPP-RTK that is presented in this article does not have the above limitations as it is based on the undifferenced, uncombined GNSS observation equations, thereby keeping all parameters in the model. Working with the undifferenced observation equations implies that the models are rank-deficient; not all parameters are unbiasedly estimable, but only combinations of them. By application of S-system theory the model is made of full rank by constraining a minimum set of parameters, or S-basis. The choice of this S-basis determines the estimability and the interpretation of the parameters that are transmitted to the PPP-RTK users. As this choice is not unique, one has to be very careful when comparing network solutions in different S-systems; in that case the S-transformation, which is provided by the S-system method, should be used to make the comparison. Knowing the estimability and interpretation of the parameters estimated by the network is shown to be crucial for a correct interpretation of the estimable PPP-RTK user parameters, among others the essential ambiguity parameters, which have the integer property that clearly follows from the interpretation of the satellite phase biases from the network.
The flexibility of the S-system method is furthermore demonstrated by the fact that all models in this article are derived in multi-epoch mode, allowing one to incorporate dynamic model constraints on all or subsets of the parameters.
Using Citizen Science Reports to Define the Equatorial Extent of Auroral Visibility
NASA Technical Reports Server (NTRS)
Case, N. A.; MacDonald, E. A.; Viereck, R.
2016-01-01
An aurora may often be viewed hundreds of kilometers equatorward of the auroral oval owing to its altitude. As such, the NOAA Space Weather Prediction Center (SWPC) Aurora Forecast product provides a "view line" to demonstrate the equatorial extent of auroral visibility, assuming that it is sufficiently bright and high in altitude. The view line in the SWPC product is based upon the latitude of the brightest aurora, for each hemisphere, as specified by the real-time oval variation, assessment, tracking, intensity, and online nowcasting (OVATION) Prime (2010) aurora precipitation model. In this study, we utilize nearly 500 citizen science auroral reports to compare with the view line provided by an updated SWPC aurora forecast product using auroral precipitation data from OVATION Prime (2013). The citizen science observations were recorded during March and April 2015 using the Aurorasaurus platform and cover one large geomagnetic storm and several smaller events. We find that this updated SWPC view line is conservative in its estimate and that the aurora is often viewable further equatorward than is indicated by the forecast. By using the citizen reports to modify the scaling parameters used to link the OVATION Prime (2013) model to the view line, we produce a new view line estimate that more accurately represents the equatorial extent of visible aurora. An OVATION Prime (2013) energy flux-based equatorial boundary view line is also developed and is found to provide the best overall agreement with the citizen science reports, with an accuracy of 91 percent.
Feng, Ssj; Sechopoulos, I
2012-06-01
To develop an objective model of the shape of the compressed breast undergoing mammographic or tomosynthesis acquisition. Automated thresholding and edge detection were performed on 984 anonymized digital mammograms (492 craniocaudal (CC) view mammograms and 492 mediolateral oblique (MLO) view mammograms) to extract the edge of each breast. Principal Component Analysis (PCA) was performed on these edge vectors to identify a limited set of parameters and eigenvectors. These parameters and eigenvectors comprise a model that can be used to describe the breast shapes present in acquired mammograms and to generate realistic models of breasts undergoing acquisition. Sample breast shapes were then generated from this model and evaluated. The mammograms in the database were previously acquired for a separate study and authorized for use in further research. The PCA successfully identified two principal components and their corresponding eigenvectors, forming the basis for the breast shape model. The simulated breast shapes generated from the model are reasonable approximations of clinically acquired mammograms. Using PCA, we have obtained models of the compressed breast undergoing mammographic or tomosynthesis acquisition based on objective analysis of a large image database. Until now, the breast in the CC view has been approximated as a semi-circular tube, while there has been no objectively obtained model for the MLO view breast shape. Such models can be used for various breast imaging research applications, such as x-ray scatter estimation and correction, dosimetry estimates, and computer-aided detection and diagnosis. © 2012 American Association of Physicists in Medicine.
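The PCA-on-edge-vectors pipeline can be sketched as follows, with toy synthetic contours standing in for the extracted mammogram edges (the two latent factors, contour parameterization, and noise level are my choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "edge vectors": each row is a contour resampled to 100 radii;
# two latent factors (overall size, elongation) drive the variation.
theta = np.linspace(0, np.pi, 100)
size = rng.normal(1.0, 0.15, 500)
elong = rng.normal(0.0, 0.10, 500)
edges = (size[:, None] * (1.0 + elong[:, None] * np.cos(2 * theta))
         + rng.normal(0, 0.01, (500, 100)))

# PCA via SVD of the centered data matrix
mean = edges.mean(axis=0)
U, S, Vt = np.linalg.svd(edges - mean, full_matrices=False)
explained = S**2 / np.sum(S**2)

# Per-image shape parameters, and a new shape sampled from the model
scores = (edges - mean) @ Vt[:2].T
sigma = scores.std(axis=0)
new_shape = mean + (sigma * rng.normal(size=2)) @ Vt[:2]
```

As in the paper, two components capture essentially all of the variation here, so a new realistic contour is just the mean edge plus a two-parameter perturbation along the leading eigenvectors.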
NASA Astrophysics Data System (ADS)
Xiong, Yan; Reichenbach, Stephen E.
1999-01-01
Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false, so Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov Random Field. MMIE provides a performance improvement over MLE in this application.
Measuring the Viewing Angle of GW170817 with Electromagnetic and Gravitational Waves
NASA Astrophysics Data System (ADS)
Finstad, Daniel; De, Soumi; Brown, Duncan A.; Berger, Edo; Biwer, Christopher M.
2018-06-01
The joint detection of gravitational waves (GWs) and electromagnetic (EM) radiation from the binary neutron star merger GW170817 ushered in a new era of multi-messenger astronomy. Joint GW–EM observations can be used to measure the parameters of the binary with better precision than either observation alone. Here, we use joint GW–EM observations to measure the viewing angle of GW170817, the angle between the binary’s angular momentum and the line of sight. We combine a direct measurement of the distance to the host galaxy of GW170817 (NGC 4993) of 40.7 ± 2.36 Mpc with the Laser Interferometer Gravitational-wave Observatory (LIGO)/Virgo GW data and find that the viewing angle is 32 (+10/−13) ± 1.7 degrees (90% confidence, statistical, and systematic errors). We place a conservative lower limit on the viewing angle of ≥13°, which is robust to the choice of prior. This measurement provides a constraint on models of the prompt γ-ray and radio/X-ray afterglow emission associated with the merger; for example, it is consistent with the off-axis viewing angle inferred for a structured jet model. We provide for the first time the full posterior samples from Bayesian parameter estimation of LIGO/Virgo data to enable further analysis by the community.
Pulkkinen, Aki; Cox, Ben T; Arridge, Simon R; Goh, Hwan; Kaipio, Jari P; Tarvainen, Tanja
2016-11-01
Estimation of optical absorption and scattering of a target is an inverse problem associated with quantitative photoacoustic tomography. Conventionally, the problem is split into two stages. First, images of the initial pressure distribution created by absorption of a light pulse are formed based on acoustic boundary measurements. Then, the optical properties are determined based on these photoacoustic images. The optical stage of the inverse problem can thus suffer from, for example, artefacts caused by the acoustic stage. These could be caused by imperfections in the acoustic measurement setting, of which an example is a limited view acoustic measurement geometry. In this work, the forward model of quantitative photoacoustic tomography is treated as a coupled acoustic and optical model and the inverse problem is solved by using a Bayesian approach. The spatial distribution of the optical properties of the imaged target is estimated directly from the photoacoustic time series in varying acoustic detection and optical illumination configurations. It is numerically demonstrated that estimation of the optical properties of the imaged target is feasible in a limited view acoustic detection setting.
Collaborative sparse priors for multi-view ATR
NASA Astrophysics Data System (ADS)
Li, Xuelu; Monga, Vishal
2018-04-01
Recent work has seen a surge of sparse representation based classification (SRC) methods applied to automatic target recognition problems. While traditional SRC approaches used l0 or l1 norm to quantify sparsity, spike and slab priors have established themselves as the gold standard for providing general tunable sparse structures on vectors. In this work, we employ collaborative spike and slab priors that can be applied to matrices to encourage sparsity for the problem of multi-view ATR. That is, target images captured from multiple views are expanded in terms of a training dictionary multiplied with a coefficient matrix. Ideally, for a test image set comprising of multiple views of a target, coefficients corresponding to its identifying class are expected to be active, while others should be zero, i.e. the coefficient matrix is naturally sparse. We develop a new approach to solve the optimization problem that estimates the sparse coefficient matrix jointly with the sparsity inducing parameters in the collaborative prior. ATR problems are investigated on the mid-wave infrared (MWIR) database made available by the US Army Night Vision and Electronic Sensors Directorate, which has a rich collection of views. Experimental results show that the proposed joint prior and coefficient estimation method (JPCEM) can: 1.) enable improved accuracy when multiple views vs. a single one are invoked, and 2.) outperform state of the art alternatives particularly when training imagery is limited.
In-theater piracy: finding where the pirate was
NASA Astrophysics Data System (ADS)
Chupeau, Bertrand; Massoudi, Ayoub; Lefèbvre, Frédéric
2008-02-01
Pirate copies of feature films are proliferating on the Internet. DVD rip or screener recording methods involve the duplication of officially distributed media whereas 'cam' versions are illicitly captured with handheld camcorders in movie theaters. Several, complementary, multimedia forensic techniques such as copy identification, forensic tracking marks or sensor forensics can deter those clandestine recordings. In the case of camcorder capture in a theater, the image is often geometrically distorted, the main artifact being the trapezoidal effect, also known as 'keystoning', due to a capture viewing axis not being perpendicular to the screen. In this paper we propose to analyze the geometric distortions in a pirate copy to determine the camcorder viewing angle to the screen perpendicular and derive the approximate position of the pirate in the theater. The problem is first of all geometrically defined, by describing the general projection and capture setup, and by identifying unknown parameters and estimates. The estimation approach based on the identification of an eight-parameter homographic model of the 'keystoning' effect is then presented. A validation experiment based on ground truth collected in a real movie theater is reported, and the accuracy of the proposed method is assessed.
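An eight-parameter homography can be estimated from four or more point correspondences with the Direct Linear Transform (DLT); the screen/capture correspondences below are invented for illustration, and the camcorder position would then follow by decomposing H against the projection geometry:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: eight-parameter homography mapping
    src points to dst points (the 'keystoning' model), via the null
    space of the stacked constraint matrix."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix scale so 8 free parameters remain

def apply(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Screen corners (16:9 units) and their keystoned positions in the copy
screen = [(0, 0), (16, 0), (16, 9), (0, 9)]
captured = [(1.2, 0.8), (14.5, 0.3), (15.1, 8.6), (0.4, 9.2)]
H = estimate_homography(screen, captured)
```

With exactly four generic correspondences the constraint matrix has a one-dimensional null space, so the homography is recovered exactly; more correspondences give a least-squares fit robust to localization noise.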
A large-scale, long-term study of scale drift: The micro view and the macro view
NASA Astrophysics Data System (ADS)
He, W.; Li, S.; Kingsbury, G. G.
2016-11-01
The development of measurement scales for use across years and grades in educational settings provides unique challenges, as instructional approaches, instructional materials, and content standards all change periodically. This study examined the measurement stability of a set of Rasch measurement scales that have been in place for almost 40 years. In order to investigate the stability of these scales, item responses were collected from a large set of students who took operational adaptive tests using items calibrated to the measurement scales. For the four scales that were examined, item samples ranged from 2183 to 7923 items. Each item was administered to at least 500 students in each grade level, resulting in approximately 3000 responses per item. Stability was examined at the micro level by analysing changes in item parameter estimates since the items were first calibrated. It was also examined at the macro level, involving groups of items and overall test scores for students. Results indicated that individual items showed changes in their parameter estimates, which require further analysis and possible recalibration. At the same time, the results at the total score level indicate substantial stability in the measurement scales over the span of their use.
Advanced interactive display formats for terminal area traffic control
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.
1996-01-01
This report describes the basic design considerations for perspective air traffic control displays. A software framework has been developed for manual viewing parameter setting (MVPS) in preparation for continued, ongoing developments on automated viewing parameter setting (AVPS) schemes. Two distinct modes of MVPS operations are considered, both of which utilize manipulation pointers embedded in the three-dimensional scene: (1) direct manipulation of the viewing parameters -- in this mode the manipulation pointers act like the control-input device, through which the viewing parameter changes are made. Some of the parameters are rate-controlled and some are position-controlled. This mode is intended for making fast, iterative small changes in the parameters. (2) Indirect manipulation of the viewing parameters -- this mode is intended primarily for introducing large, predetermined changes in the parameters. Requests for changes in viewing parameter setting are entered manually by the operator by moving viewing parameter manipulation pointers on the screen. The motion of these pointers, which are an integral part of the 3-D scene, is limited to the boundaries of the screen. This arrangement has been chosen in order to preserve the correspondence between the spatial layouts of the new and the old viewing parameter setting, a feature which contributes to preventing spatial disorientation of the operator. For all viewing operations, e.g. rotation, translation and ranging, the actual change is executed automatically by the system, through gradual transitions with an exponentially damped, sinusoidal velocity profile, in this work referred to as 'slewing' motions. The slewing functions, which eliminate discontinuities in the viewing parameter changes, are designed primarily for enhancing the operator's impression that he, or she, is dealing with an actually existing physical system, rather than an abstract computer-generated scene.
The proposed, continued research efforts will deal with the development of automated viewing parameter setting schemes. These schemes employ an optimization strategy, aimed at identifying the best possible vantage point, from which the air traffic control scene can be viewed for a given traffic situation. They determine whether a change in viewing parameter setting is required and determine the dynamic path along which the change to the new viewing parameter setting should take place.
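The 'slewing' transitions described above can be sketched as follows. The abstract specifies an exponentially damped, sinusoidal velocity profile but not its exact functional form, so the profile below (a damped half-sine, numerically rescaled to land exactly on the target value) is an assumption for illustration.

```python
import numpy as np

def slew(p0, p1, T=1.0, alpha=2.0, n=200):
    """Gradual transition of a viewing parameter from p0 to p1.

    Velocity profile v(t) ~ exp(-alpha*t) * sin(pi*t/T), which is zero
    at both ends of the interval, eliminating discontinuities in the
    parameter change.  The profile is integrated and scaled so the
    trajectory ends exactly at p1.
    """
    t = np.linspace(0.0, T, n)
    v = np.exp(-alpha * t) * np.sin(np.pi * t / T)   # damped sinusoid, v(0)=v(T)=0
    p = np.cumsum(v) * (T / n)                       # integrate velocity
    return p0 + (p1 - p0) * p / p[-1]                # rescale to reach p1
```

The resulting trajectory starts and stops with zero velocity, which is what gives the operator the impression of a physical system rather than an instantaneous jump.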
RENEW v3.2 user's manual, maintenance estimation simulation for Space Station Freedom Program
NASA Technical Reports Server (NTRS)
Bream, Bruce L.
1993-01-01
RENEW is a maintenance event estimation simulation program developed in support of the Space Station Freedom Program (SSFP). This simulation uses reliability and maintainability (R&M) and logistics data to estimate both average and time dependent maintenance demands. The simulation uses Monte Carlo techniques to generate failure and repair times as a function of the R&M and logistics parameters. The estimates are generated for a single type of orbital replacement unit (ORU). The simulation has been in use by the SSFP Work Package 4 prime contractor, Rocketdyne, since January 1991. The RENEW simulation gives closer estimates of performance since it uses a time dependent approach and depicts more factors affecting ORU failure and repair than steady state average calculations. RENEW gives both average and time dependent demand values. Graphs of failures over the mission period and yearly failure occurrences are generated. The average demand rate for the ORU over the mission period is also calculated. While RENEW displays the results in graphs, the results are also available in a data file for further use by spreadsheets or other programs. The process of using RENEW starts with keyboard entry of the R&M and operational data. Once entered, the data may be saved in a data file for later retrieval. The parameters may be viewed and changed after entry using RENEW. The simulation program runs the number of Monte Carlo simulations requested by the operator. Plots and tables of the results can be viewed on the screen or sent to a printer. The results of the simulation are saved along with the input data. Help screens are provided with each menu and data entry screen.
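A Monte Carlo failure/repair simulation in the spirit of RENEW can be sketched as below. The exponential failure and repair distributions and all parameter names are assumptions for illustration, not RENEW's actual models; RENEW's outputs (average demand plus yearly failure histograms) are mirrored in the return values.

```python
import random

def simulate_failures(mtbf, mttr, mission_years, n_runs=2000, seed=1):
    """Monte Carlo estimate of maintenance demand for one ORU type.

    Draws alternating failure and repair times from exponential
    distributions (a simplifying assumption), counts failures over the
    mission period, and returns the average demand per mission together
    with a per-year failure histogram.
    """
    rng = random.Random(seed)
    per_year = [0.0] * mission_years
    total = 0
    for _ in range(n_runs):
        t = rng.expovariate(1.0 / mtbf)              # time of first failure
        while t < mission_years:
            per_year[int(t)] += 1                    # tally failure in its year
            total += 1
            t += rng.expovariate(1.0 / mttr)         # unit down for repair
            t += rng.expovariate(1.0 / mtbf)         # time to next failure
    avg = total / n_runs
    return avg, [c / n_runs for c in per_year]
```

For example, an ORU with a 2-year MTBF over a 10-year mission yields roughly five expected failures, with the yearly histogram showing how demand is distributed over the mission.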
NASA Astrophysics Data System (ADS)
Flores, J. C.
2015-12-01
For ancient civilizations, the shift from disorder to organized urban settlements is viewed as a phase-transition simile. The number of monumental constructions, assumed to be a signature of civilization processes, corresponds to the order parameter, and effective connectivity becomes related to the control parameter. Based on parameter estimations from archaeological and paleo-climatological data, this study analyzes the rise and fall of the ancient Caral civilization on the South Pacific coast during a period of small ENSO fluctuations (approximately 4500 BP). Other examples considered include civilizations on Easter Island and the Maya Lowlands. This work considers a typical nonlinear third-order evolution equation and numerical simulations.
Realtime Reconstruction of an Animating Human Body from a Single Depth Camera.
Chen, Yin; Cheng, Zhi-Quan; Lai, Chao; Martin, Ralph R; Dang, Gang
2016-08-01
We present a method for realtime reconstruction of an animating human body, which produces a sequence of deforming meshes representing a given performance captured by a single commodity depth camera. We achieve realtime single-view mesh completion by enhancing the parameterized SCAPE model. Our method, which we call Realtime SCAPE, performs full-body reconstruction without the use of markers. In Realtime SCAPE, estimations of body shape parameters and pose parameters, needed for reconstruction, are decoupled. Intrinsic body shape is first precomputed for a given subject, by determining shape parameters with the aid of a body shape database. Subsequently, per-frame pose parameter estimation is performed by means of linear blending skinning (LBS); the problem is decomposed into separately finding skinning weights and transformations. The skinning weights are also determined offline from the body shape database, reducing online reconstruction to simply finding the transformations in LBS. Doing so is formulated as a linear variational problem; carefully designed constraints are used to impose temporal coherence and alleviate artifacts. Experiments demonstrate that our method can produce full-body mesh sequences with high fidelity.
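The linear blend skinning step at the core of the pose estimation can be written compactly. This is the textbook LBS formula (each vertex is a weighted blend of per-joint rigid transforms), not the paper's constrained variational solver for finding those transforms.

```python
import numpy as np

def lbs(vertices, weights, transforms):
    """Linear blend skinning: v'_i = sum_j w_ij * (R_j @ v_i + t_j).

    vertices: (N, 3) rest-pose positions.
    weights: (N, J) skinning weights, each row summing to 1.
    transforms: list of J (R, t) pairs, R a (3, 3) rotation, t a (3,) offset.
    """
    out = np.zeros_like(vertices, dtype=float)
    for j, (R, t) in enumerate(transforms):
        # each joint's rigid transform, weighted per vertex
        out += weights[:, j:j+1] * (vertices @ R.T + t)
    return out
```

Because the weights are fixed offline, the online problem reduces (as the abstract notes) to finding only the per-frame transforms, which is linear in their entries.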
NASA Astrophysics Data System (ADS)
Miller, D. L.; Roberts, D. A.; Clarke, K. C.; Peters, E. B.; Menzer, O.; Lin, Y.; McFadden, J. P.
2017-12-01
Gross primary productivity (GPP) is commonly estimated with remote sensing techniques over large regions of Earth; however, urban areas are typically excluded due to a lack of light use efficiency (LUE) parameters specific to urban vegetation and challenges stemming from the spatial heterogeneity of urban land cover. In this study, we estimated GPP during the middle of the growing season, both within and among vegetation and land use types, in the Minneapolis-Saint Paul, Minnesota metropolitan region (52.1% vegetation cover). We derived LUE parameters for specific urban vegetation types using estimates of GPP from eddy covariance and tree sap flow-based CO2 flux observations and fraction of absorbed photosynthetically active radiation derived from 2-m resolution WorldView-2 satellite imagery. We produced a pixel-based hierarchical land cover classification of built-up and vegetated urban land cover classes distinguishing deciduous broadleaf trees, evergreen needleleaf trees, turf grass, and golf course grass from impervious and soil surfaces. The overall classification accuracy was 80% (kappa = 0.73). The mapped GPP estimates were within 12% of estimates from independent tall tower eddy covariance measurements. Mean GPP estimates (± standard deviation; g C m^-2 day^-1) for the entire study area from highest to lowest were: golf course grass (11.77 ± 1.20), turf grass (6.05 ± 1.07), evergreen needleleaf trees (5.81 ± 0.52), and deciduous broadleaf trees (2.52 ± 0.25). Turf grass GPP had a larger coefficient of variation (0.18) than the other vegetation classes ( 0.10). Mean land use GPP for the full study area varied as a function of percent vegetation cover. Urban GPP in general, both including and excluding non-vegetated areas, was less than half that of literature estimates for nearby natural forests and grasslands.
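The approach above follows the standard light use efficiency formulation, GPP = LUE × fAPAR × PAR, applied per pixel with a class-specific LUE. A minimal sketch; the LUE values and class names used in the test are placeholders, not the paper's calibrated urban parameters.

```python
def map_gpp(lue_by_class, class_map, fapar_map, par):
    """Per-pixel GPP from a land cover classification.

    GPP_pixel = LUE(class) * fAPAR(pixel) * PAR, the standard LUE
    formulation.  Pixels whose class has no LUE entry (e.g. impervious
    surfaces) contribute zero GPP.

    lue_by_class: dict mapping class label -> LUE.
    class_map, fapar_map: equal-shaped nested lists (rows of pixels).
    par: incident photosynthetically active radiation (scalar here).
    """
    return [[lue_by_class.get(c, 0.0) * f * par
             for c, f in zip(class_row, fapar_row)]
            for class_row, fapar_row in zip(class_map, fapar_map)]
```

This also makes the paper's main obstacle concrete: without class-specific LUE entries for urban vegetation, every urban pixel falls through to the zero default.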
Satellite angular velocity estimation based on star images and optical flow techniques.
Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele
2013-09-25
An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus also be used to deliver angular rate information when attitude determination is not possible, as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than for the other two components.
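The least-squares step can be illustrated with the pure-rotation image motion model for a pinhole camera: each star's flow vector gives two linear equations in the three angular rate components. The sign convention and normalized focal length below are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def angular_velocity(points, flows, f=1.0):
    """Least-squares angular rate from star image flow.

    Pure-rotation image motion model (one common sign convention):
      u = (x*y/f)*wx - (f + x**2/f)*wy + y*wz
      v = (f + y**2/f)*wx - (x*y/f)*wy - x*wz
    Two equations per star are stacked and solved for w = (wx, wy, wz).
    points: list of (x, y) star positions; flows: list of (u, v) vectors.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(points, flows):
        A.append([x * y / f, -(f + x * x / f), y])
        A.append([f + y * y / f, -x * y / f, -x])
        b += [u, v]
    w, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return w
```

The model also hints at the paper's boresight finding: rotation about the optical axis moves stars only in proportion to their off-axis distance, so the wz column is poorly conditioned for stars near the image center.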
On the upper tail of Italian firms’ size distribution
NASA Astrophysics Data System (ADS)
Cirillo, Pasquale; Hüsler, Jürg
2009-04-01
In this paper we analyze the upper tail of the size distribution of Italian companies with limited liability belonging to the CEBI database. Size is defined in terms of net worth. In particular, we show that the largest firms follow a power law distribution, according to the well-known Pareto law, for which we give estimates of the shape parameter. This behavior seems quite persistent over time, given that for almost 20 years of observations the shape parameter remains in the vicinity of 1.8. The power law hypothesis is also positively tested using graphical and analytical methods.
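A common way to estimate the Pareto shape parameter from the upper tail of a sample is the Hill estimator; the abstract does not specify the paper's exact estimation method, so the sketch below is illustrative rather than a reproduction of it.

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of the Pareto tail index alpha.

    Uses the k largest observations relative to the (k+1)-th largest
    as threshold:
      alpha_hat = k / sum_{i=1..k} log(X_(i) / X_(k+1)),
    where X_(1) >= X_(2) >= ... are the descending order statistics.
    """
    x = np.sort(np.asarray(sample, float))
    tail = x[-k:]                    # k largest observations
    threshold = x[-k - 1]            # (k+1)-th largest
    return k / np.sum(np.log(tail / threshold))
```

Applied to firm net worth, a tail index near 1.8 (as the paper reports) indicates a heavy tail with finite mean but infinite variance.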
Test of a geometric model for the modification stage of simple impact crater development
NASA Technical Reports Server (NTRS)
Grieve, R. A. F.; Coderre, J. M.; Rupert, J.; Garvin, J. B.
1989-01-01
This paper presents a geometric model describing the geometry of the transient cavity of an impact crater and the subsequent collapse of its walls to form a crater filled by an interior breccia lens. The model is tested by comparing the volume of slump material calculated from known dimensional parameters with the volume of the breccia lens estimated on the basis of observational data. Results obtained from the model were found to be consistent with observational data, particularly in view of the highly sensitive nature of the model to input parameters.
Modeling depth from motion parallax with the motion/pursuit ratio
Nawrot, Mark; Ratzlaff, Michael; Leonard, Zachary; Stroyan, Keith
2014-01-01
The perception of unambiguous scaled depth from motion parallax relies on both retinal image motion and an extra-retinal pursuit eye movement signal. The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance. An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between these stimulus parameters and empirically determined perceived depth magnitude. Observers compared perceived depth magnitude of dynamic motion parallax stimuli to static binocular disparity comparison stimuli at three different viewing distances, in both head-moving and head-stationary conditions. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. For each motion parallax stimulus, a point of subjective equality (PSE) was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax had significant foreshortening. Head-moving conditions produced even greater foreshortening due to the differences in the compensatory eye movement signal. An empirical version of the motion/pursuit law, termed the empirical motion/pursuit ratio, which models perceived depth magnitude from these stimulus parameters, is proposed. PMID:25339926
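The geometric motion/pursuit law underlying this model relates relative depth to the ratio of retinal image motion to the pursuit signal, scaled by viewing distance. A first-order sketch (units and the small-angle approximation are assumptions; the paper's empirical version additionally fits foreshortening from the data):

```python
def depth_from_motion_pursuit(retinal_motion, pursuit_rate, viewing_distance):
    """Depth from the motion/pursuit ratio.

    To first order the geometric motion/pursuit law gives
      depth / viewing_distance ~= retinal_motion / pursuit_rate,
    so depth ~= (m / p) * f.  Both rates in the same units (e.g. deg/s).
    """
    return (retinal_motion / pursuit_rate) * viewing_distance
```

The study's finding of foreshortening means observers perceive less depth than this geometric prediction, which is what the empirical motion/pursuit ratio is introduced to capture.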
NASA Technical Reports Server (NTRS)
Abbas, M. M.; Kostiuk, T.; Ogilvie, K. W.
1975-01-01
The performance of an upconversion system is examined for observation of astronomical sources in the low to middle infrared spectral range. Theoretical values for the performance parameters of an upconversion system for astronomical observations are evaluated in view of the conversion efficiencies, spectral resolution, field of view, minimum detectable source brightness and source flux. Experimental results of blackbody measurements and molecular absorption spectrum measurements using a lithium niobate upconverter with an argon-ion laser as the pump are presented. Estimates of the expected optimum sensitivity of an upconversion device which may be built with the presently available components are given.
ERIC Educational Resources Information Center
Casstevens, Thomas W.; And Others
This document consists of five units, all of which present applications of mathematics to American politics. The first three involve applications of calculus, the last two applications of algebra. The first module is geared to teach a student how to: 1) compute estimates of the value of the parameters in negative exponential models; and draw…
Simultaneous emission and transmission scanning in PET oncology: the effect on parameter estimation
NASA Astrophysics Data System (ADS)
Meikle, S. R.; Eberl, S.; Hooper, P. K.; Fulham, M. J.
1997-02-01
The authors investigated potential sources of bias due to simultaneous emission and transmission (SET) scanning and their effect on parameter estimation in dynamic positron emission tomography (PET) oncology studies. The sources of bias considered include: i) variation in transmission spillover (into the emission window) throughout the field of view, ii) increased scatter arising from rod sources, and iii) inaccurate deadtime correction. Net bias was calculated as a function of the emission count rate and used to predict distortion in [18F]2-fluoro-2-deoxy-D-glucose (FDG) and [11C]thymidine tissue curves simulating the normal liver and metastatic involvement of the liver. The effect on parameter estimates was assessed by spectral analysis and compartmental modeling. The various sources of bias approximately cancel during the early part of the study when count rate is maximal. Scatter dominates in the latter part of the study, causing apparently decreased tracer clearance which is more marked for thymidine than for FDG. The irreversible disposal rate constant, K_i, was overestimated by <10% for FDG and >30% for thymidine. The authors conclude that SET has a potential role in dynamic FDG PET but is not suitable for 11C-labeled compounds.
Maximum likelihood estimation in calibrating a stereo camera setup.
Muijtjens, A M; Roos, J M; Arts, T; Hasman, A
1999-02-01
Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two x-ray cameras simultaneously. Typically, calibration of the position measurement system is obtained by registration of the images of a calibration object, containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of the position of a camera after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration. Thus, a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum Likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by application of a nonlinear least squares procedure. Compared to the standard unweighted least squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared well conditioned as long as the object aperture is larger than 0.1 rad. The angle between the two viewing directions appeared to be the parameter that was most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reduction of the error in this parameter.
Tyo, J Scott; LaCasse, Charles F; Ratliff, Bradley M
2009-10-15
Microgrid polarimeters operate by integrating a focal plane array with an array of micropolarizers. The Stokes parameters are estimated by comparing polarization measurements from pixels in a neighborhood around the point of interest. The main drawback is that the measurements used to estimate the Stokes vector are made at different locations, leading to a false polarization signature owing to instantaneous field-of-view (IFOV) errors. We demonstrate for the first time, to our knowledge, that spatially band-limited polarization images can be ideally reconstructed with no IFOV error by using a linear system framework.
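For a microgrid with analyzers at 0°, 45°, 90°, and 135°, the linear Stokes parameters follow from the standard intensity relations. In a real microgrid each of the four samples comes from a different pixel in the neighborhood, which is exactly the source of the IFOV error the abstract discusses; the sketch ignores that spatial offset.

```python
def stokes_from_microgrid(i0, i45, i90, i135):
    """Linear Stokes parameters from four micropolarizer channels.

    Standard relations for ideal analyzers at 0/45/90/135 degrees:
      S0 = (I0 + I45 + I90 + I135) / 2
      S1 = I0 - I90
      S2 = I45 - I135
    Returns (S0, S1, S2); circular polarization (S3) is not measured
    by a linear microgrid.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    return s0, i0 - i90, i45 - i135
```

Any mismatch between the four neighboring scene samples leaks directly into S1 and S2 as a false polarization signature, which is why the paper's band-limited reconstruction matters.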
Measuring and modeling near-surface reflected and emitted radiation fluxes at the FIFE site
NASA Technical Reports Server (NTRS)
Blad, Blaine L.; Walter-Shea, Elizabeth A.; Starks, Patrick J.; Vining, Roel C.; Hays, Cynthia J.; Mesarch, Mark A.
1990-01-01
Information is presented pertaining to the measurement and estimation of reflected and emitted components of the radiation balance. Information is included about reflectance and transmittance of solar radiation from and through the leaves of some grass and forb prairie species; bidirectional reflectance from a prairie canopy is discussed; and measured and estimated fluxes of incoming and outgoing longwave and shortwave radiation are described. Results of the study showed only very small differences in reflectances and transmittances for the adaxial and abaxial surfaces of grass species in the visible and infrared wavebands, but some differences in the infrared wavebands were noted for the forbs. Reflectance from the prairie canopy changed as a function of solar and view zenith angles in the solar principal plane with definite asymmetry about nadir. The surface temperature of prairie canopies was found to vary by as much as 5 C depending on view zenith and azimuth position and on the solar azimuth. Aerodynamic temperature calculated from measured sensible heat fluxes ranged from 0 to 3 C higher than nadir-viewed temperatures. Models were developed to estimate incoming and reflected shortwave radiation from data collected with a Barnes Modular Multiband Radiometer. Several algorithms for estimating incoming longwave radiation were evaluated and compared to actual measures of that parameter. Net radiation was calculated using the estimated components of the shortwave radiation streams, determined from the algorithms developed, and from the longwave radiation streams provided by the Brunt, modified Deacon, and the Stefan-Boltzmann models. Estimates of net radiation were compared to measured values and found to be within the measurement error of the net radiometers used in the study.
Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian
2014-01-01
Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For OSEM, image resolution convergence is local and influenced significantly by the number of iterations, the count density, and the background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction, the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-fluorodeoxyglucose dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation GTM PVC with PSF-based OSEM produced the kinetic parameter estimates with the lowest bias magnitude in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in CMRGlc estimates, although by less than 5% in most cases compared to the other PVC methods.
The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters. PMID:24052021
A Multistage Approach for Image Registration.
Bowen, Francis; Hu, Jianghai; Du, Eliza Yingzi
2016-09-01
Successful image registration is an important step for object recognition, target detection, remote sensing, multimodal content fusion, scene blending, and disaster assessment and management. The geometric and photometric variations between images adversely affect the ability for an algorithm to estimate the transformation parameters that relate the two images. Local deformations, lighting conditions, object obstructions, and perspective differences all contribute to the challenges faced by traditional registration techniques. In this paper, a novel multistage registration approach is proposed that is resilient to view point differences, image content variations, and lighting conditions. Robust registration is realized through the utilization of a novel region descriptor which couples with the spatial and texture characteristics of invariant feature points. The proposed region descriptor is exploited in a multistage approach. A multistage process allows the utilization of the graph-based descriptor in many scenarios thus allowing the algorithm to be applied to a broader set of images. Each successive stage of the registration technique is evaluated through an effective similarity metric which determines subsequent action. The registration of aerial and street view images from pre- and post-disaster provide strong evidence that the proposed method estimates more accurate global transformation parameters than traditional feature-based methods. Experimental results show the robustness and accuracy of the proposed multistage image registration methodology.
Direct determination of geometric alignment parameters for cone-beam scanners
Mennessier, C; Clackdoyle, R; Noo, F
2009-01-01
This paper describes a comprehensive method for determining the geometric alignment parameters for cone-beam scanners (often called calibrating the scanners or performing geometric calibration). The method is applicable to x-ray scanners using area detectors, or to SPECT systems using pinholes or cone-beam converging collimators. Images of an alignment test object (calibration phantom) fixed in the field of view of the scanner are processed to determine the nine geometric parameters for each view. The parameter values are found directly using formulae applied to the projected positions of the test object marker points onto the detector. Each view is treated independently, and no restrictions are made on the position of the cone vertex, or on the position or orientation of the detector. The proposed test object consists of 14 small point-like objects arranged with four points on each of three orthogonal lines, and two points on a diagonal line. This test object is shown to provide unique solutions for all possible scanner geometries, even when partial measurement information is lost by points superimposing in the calibration scan. For the many situations where the cone vertex stays reasonably close to a central plane (for circular, planar, or near-planar trajectories), a simpler version of the test object is appropriate. The simpler object consists of six points, two per orthogonal line, but with some restrictions on the positioning of the test object. This paper focuses on the principles and mathematical justifications for the method. Numerical simulations of the calibration process and reconstructions using estimated parameters are also presented to validate the method and to provide evidence of the robustness of the technique. PMID:19242049
NASA Astrophysics Data System (ADS)
Scharnagl, Benedikt; Durner, Wolfgang
2013-04-01
Models are inherently imperfect because they simplify processes that are themselves imperfectly known and understood. Moreover, the input variables and parameters needed to run a model are typically subject to various sources of error. As a consequence of these imperfections, model predictions will always deviate from corresponding observations. In most applications in soil hydrology, these deviations are clearly not random but rather show a systematic structure. From a statistical point of view, this systematic mismatch may be a reason for concern because it violates one of the basic assumptions made in inverse parameter estimation: the assumption of independence of the residuals. But what are the consequences of simply ignoring the autocorrelation in the residuals, as it is current practice in soil hydrology? Are the parameter estimates still valid even though the statistical foundation they are based on is partially collapsed? Theory and practical experience from other fields of science have shown that violation of the independence assumption will result in overconfident uncertainty bounds and that in some cases it may lead to significantly different optimal parameter values. In our contribution, we present three soil hydrological case studies, in which the effect of autocorrelated residuals on the estimated parameters was investigated in detail. We explicitly accounted for autocorrelated residuals using a formal likelihood function that incorporates an autoregressive model. The inverse problem was posed in a Bayesian framework, and the posterior probability density function of the parameters was estimated using Markov chain Monte Carlo simulation. In contrast to many other studies in related fields of science, and quite surprisingly, we found that the first-order autoregressive model, often abbreviated as AR(1), did not work well in the soil hydrological setting. 
We showed that a second-order autoregressive, or AR(2), model performs much better in these applications, leading to parameter and uncertainty estimates that satisfy all the underlying statistical assumptions. For theoretical reasons, these estimates are deemed more reliable than those based on the neglect of autocorrelation in the residuals. In compliance with theory and results reported in the literature, our results showed that parameter uncertainty bounds were substantially wider if autocorrelation in the residuals was explicitly accounted for, and the optimal parameter values were also slightly different in this case. We argue that the autoregressive model presented here should be used as a matter of routine in inverse modeling of soil hydrological processes.
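Accounting for AR(2) residual correlation amounts to whitening the residuals inside the likelihood. A minimal conditional-likelihood sketch (the study embeds a formal likelihood of this kind in a Bayesian MCMC scheme; conditioning on the first two residuals is a simplification here):

```python
import math

def ar2_loglik(residuals, phi1, phi2, sigma):
    """Conditional Gaussian log-likelihood under an AR(2) error model.

    Errors follow e_t = phi1*e_{t-1} + phi2*e_{t-2} + w_t with
    w_t ~ N(0, sigma^2).  Whitening each residual against its two
    predecessors restores the independence assumption that plain least
    squares violates when residuals are autocorrelated.
    """
    ll = 0.0
    for t in range(2, len(residuals)):
        w = residuals[t] - phi1 * residuals[t-1] - phi2 * residuals[t-2]
        ll += -0.5 * math.log(2 * math.pi * sigma**2) - w**2 / (2 * sigma**2)
    return ll
```

Maximizing (or sampling) this likelihood jointly over the model parameters, phi1, phi2, and sigma yields the wider, more honest uncertainty bounds the study reports.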
Advanced interactive display formats for terminal area traffic control
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.
1995-01-01
The basic design considerations for perspective Air Traffic Control displays are described. A software framework has been developed for manual viewing parameter setting (MVPS) in preparation for continued, ongoing developments on automated viewing parameter setting (AVPS) schemes. The MVPS system is based on indirect manipulation of the viewing parameters. Requests for changes in the viewing parameter setting are entered manually by the operator by moving viewing parameter manipulation pointers on the screen. The motion of these pointers, which are an integral part of the 3-D scene, is limited to the boundaries of the screen. This arrangement was chosen in order to preserve the correspondence between the new and the old viewing parameter settings, a feature which helps prevent spatial disorientation of the operator. For all viewing operations, e.g. rotation, translation and ranging, the actual change is executed automatically by the system through gradual transitions with an exponentially damped, sinusoidal velocity profile, referred to in this work as 'slewing' motions. The slewing functions, which eliminate discontinuities in the viewing parameter changes, are designed primarily to enhance the operator's impression that he or she is dealing with an actually existing physical system, rather than an abstract computer generated scene. Ongoing efforts deal with the development of automated viewing parameter setting schemes. These schemes employ an optimization strategy aimed at identifying the best possible vantage point from which the Air Traffic Control scene can be viewed for a given traffic situation.
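A "slewing" transition of this kind can be sketched numerically: the velocity profile below is an exponentially damped sinusoid (a single positive lobe), so the viewing parameter glides from its old to its new value without discontinuities. The damping constant and transition duration are illustrative choices, not values from the paper.

```python
import numpy as np

def slew_positions(p_old, p_new, T=2.0, tau=0.6, n=200):
    """Blend a viewing parameter from p_old to p_new with an exponentially
    damped, sinusoidal velocity profile (one positive half-period, so the
    motion starts and ends at zero velocity and is monotone in between)."""
    t = np.linspace(0.0, T, n)
    v = np.exp(-t / tau) * np.sin(np.pi * t / T)   # damped sinusoidal velocity
    s = np.cumsum(v)                               # crude cumulative integral
    s = (s - s[0]) / (s[-1] - s[0])                # normalize blend to [0, 1]
    return p_old + s * (p_new - p_old)
```

The same blend factor can drive rotation, translation, or ranging; only the interpolated quantity changes.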
Nonaka, Fumitaka; Hasebe, Satoshi; Ohtsuki, Hiroshi
2004-01-01
To evaluate the convergence accommodation to convergence (CA/C) ratio in strabismic patients and to clarify its clinical implications. Seventy-eight consecutive patients (mean age: 12.9 ± 6.0 years) with intermittent exotropia and decompensated exophoria who showed binocular fusion at least at near viewing were recruited. The CA/C ratio was estimated by measuring accommodative responses induced by horizontal prisms with different magnitudes under accommodation feedback open-loop conditions. The CA/C ratios were compared with accommodative convergence to accommodation (AC/A) ratios and other clinical parameters. A linear regression analysis indicated that the mean (±SD) CA/C ratio was 0.080 ± 0.043 D/prism diopter or 0.48 ± 0.26 D/meter angle. There was no inverse or reciprocal relationship between CA/C and AC/A ratios. The patients with lower CA/C ratios tended to have smaller tonic accommodation under binocular viewing conditions and larger exodeviation at near viewing. The CA/C ratio, like the AC/A ratio, is an independent parameter that characterizes clinical features. A lower CA/C may be beneficial for the vergence control system to compensate for ocular misalignment with minimum degradation of accommodation accuracy.
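The slope-based estimation of the CA/C ratio described above can be sketched as a simple linear regression; the data below are synthetic and illustrative, not the patients' measurements.

```python
import numpy as np

def ca_c_ratio(prism_diopters, accommodative_response_d):
    """CA/C ratio (diopters per prism diopter) as the regression slope of
    the accommodative response induced by prisms of different magnitude,
    measured under open-loop accommodation feedback conditions."""
    slope, _intercept = np.polyfit(prism_diopters, accommodative_response_d, 1)
    return slope
```

Dividing the slope by the meter-angle equivalent of a prism diopter converts between the two units reported in the abstract.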
Chlorophyll content retrieval from hyperspectral remote sensing imagery.
Yang, Xiguang; Yu, Ying; Fan, Wenyi
2015-07-01
Chlorophyll content is the essential parameter in the photosynthetic process determining leaf spectral variation in visible bands. Therefore, the accurate estimation of the forest canopy chlorophyll content is a significant foundation in assessing forest growth and stress affected by diseases. Hyperspectral remote sensing with high spatial resolution can be used for estimating chlorophyll content. In this study, the chlorophyll content was retrieved step by step using Hyperion imagery. Firstly, the spectral curve of the leaf was analyzed, 25 spectral characteristic parameters were identified through the correlation coefficient matrix, and a leaf chlorophyll content inversion model was established using a stepwise regression method. Secondly, the pixel reflectance was converted into leaf reflectance by a geometrical-optical model (4-scale). The three most important parameters of the reflectance conversion, namely the multiple scattering factor (M0), the probability of viewing the sunlit tree crown (PT), and the probability of viewing the background (PG), were each estimated from the leaf area index (LAI). The results indicated that M0, PT, and PG could be described as logarithmic functions of LAI, with all R² values above 0.9. Finally, leaf chlorophyll content was retrieved with RMSE = 7.3574 μg/cm², and canopy chlorophyll content per unit ground surface area was estimated based on leaf chlorophyll content and LAI. Chlorophyll content mapping can be useful for the assessment of forest growth stage and diseases.
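The reported logarithmic relationship between the conversion parameters and LAI can be sketched as a least-squares fit of y = a·ln(LAI) + b; the data below are synthetic, not the Hyperion values.

```python
import numpy as np

def log_fit(lai, y):
    """Least-squares fit of y = a*ln(LAI) + b, returning (a, b, r_squared).
    Illustrates the functional form the study reports for M0, PT and PG
    as functions of LAI."""
    x = np.log(np.asarray(lai, dtype=float))
    y = np.asarray(y, dtype=float)
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    yhat = a * x + b
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return a, b, 1.0 - ss_res / ss_tot
```

With one such fit per parameter, a LAI map converts directly into per-pixel M0, PT, and PG for the reflectance conversion.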
M-dwarf exoplanet surface density distribution. A log-normal fit from 0.07 to 400 AU
NASA Astrophysics Data System (ADS)
Meyer, Michael R.; Amara, Adam; Reggiani, Maddalena; Quanz, Sascha P.
2018-04-01
Aims: We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1-10 times that of Jupiter, from 0.07 to 400 AU. Methods: We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data given our assumed functional form. Results: This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well-motivated from theoretical and phenomenological points of view, and predicts results of future surveys. We present probability distributions for each parameter and a maximum likelihood estimate solution. Conclusions: We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results on the design of future exoplanet surveys.
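The fitted functional form is a log-normal in semimajor axis, which can be written down directly; the parameter values below are placeholders, not the published posterior estimates.

```python
import numpy as np

def lognormal_surface_density(a_au, mu, sigma, norm=1.0):
    """Log-normal planet surface-density profile in semimajor axis a (AU):
    dN/dln(a) = norm * exp(-(ln a - mu)^2 / (2 sigma^2)).
    mu, sigma and norm are the quantities an MCMC fit would explore."""
    return norm * np.exp(-(np.log(a_au) - mu) ** 2 / (2.0 * sigma ** 2))
```

The profile peaks at a = exp(mu) and is symmetric in ln(a), which is what makes it convenient for comparing radial velocity, microlensing, and direct-imaging constraints on a common axis.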
Airborne Doppler Wind Lidar Post Data Processing Software DAPS-LV
NASA Technical Reports Server (NTRS)
Kavaya, Michael J. (Inventor); Beyon, Jeffrey Y. (Inventor); Koch, Grady J. (Inventor)
2015-01-01
Systems, methods, and devices of the present invention enable post processing of airborne Doppler wind LIDAR data. In an embodiment, airborne Doppler wind LIDAR data software written in LabVIEW may be provided and may run two versions of different airborne wind profiling algorithms. A first algorithm may be the Airborne Wind Profiling Algorithm for Doppler Wind LIDAR ("APOLO") using airborne wind LIDAR data from two orthogonal directions to estimate wind parameters, and a second algorithm may be a five direction based method using pseudo inverse functions to estimate wind parameters. The various embodiments may enable wind profiles to be compared using different algorithms, may enable wind profile data for long haul color displays to be generated, may display long haul color displays, and/or may enable archiving of data at user-selectable altitudes over a long observation period for data distribution and population.
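The pseudo-inverse wind retrieval mentioned for the five-direction method can be sketched as follows. This is a generic multi-beam least-squares retrieval; the beam geometry and coordinate convention are illustrative assumptions, not the APOLO or five-direction algorithms' actual configuration.

```python
import numpy as np

def wind_from_los(az_deg, el_deg, v_los):
    """Estimate the (east, north, up) wind vector from line-of-sight Doppler
    velocities measured along several beams, via the Moore-Penrose
    pseudo-inverse of the beam-geometry matrix."""
    az = np.radians(np.asarray(az_deg, dtype=float))
    el = np.radians(np.asarray(el_deg, dtype=float))
    # Unit pointing vector of each beam in (east, north, up) coordinates.
    D = np.column_stack([np.sin(az) * np.cos(el),
                         np.cos(az) * np.cos(el),
                         np.sin(el)])
    return np.linalg.pinv(D) @ np.asarray(v_los, dtype=float)
```

With more beams than unknowns, the pseudo-inverse gives the least-squares wind estimate, which is why multi-direction scans are more robust to noisy radial velocities than the minimal two-beam case.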
Fabietti, P G; Calabrese, G; Iorio, M; Bistoni, S; Brunetti, P; Sarti, E; Benedetti, M M
2001-10-01
Nine type 1 diabetic patients were studied for 24 hours. During this period they were given three calibrated meals. Glycemia was feedback-controlled by means of an artificial pancreas. The blood concentration of glucose and the insulin infusion rate were measured every minute. The experimental data referring to each of the three meals were used to estimate the parameters of a mathematical model suitable for describing the glycemic response of diabetic patients to meals and to the i.v. infusion of exogenous insulin. The estimation revealed a marked dispersion of the parameters, both interindividual and intraindividual. Nevertheless, the models thus obtained appear usable for the synthesis of a feedback controller, especially in view of creating a portable artificial pancreas, which now seems possible owing to the realization (so far experimental) of sufficiently reliable glucose concentration sensors.
GNSS-derived Geocenter Coordinates Viewed by Perturbation Theory
NASA Astrophysics Data System (ADS)
Meindl, Michael; Beutler, Gerhard; Thaller, Daniela; Dach, Rolf; Jäggi, Adrian; Rothacher, Markus
2013-04-01
Time series of geocenter coordinates were determined with data of the two global navigation satellite systems (GNSS) GPS and GLONASS. The data was recorded in the years 2008-2011 by a global network of 92 combined GPS/GLONASS receivers. Two types of solutions were generated for each system, one including the estimation of geocenter coordinates and one without these parameters. A fair agreement for GPS and GLONASS estimates was found in the x- and y-coordinate series of the geocenter. Artifacts do, however, clearly show up in the z-coordinate. Large periodic variations in the GLONASS geocenter z-coordinates of about 40 cm peak-to-peak are related to the maximum elevation angles of the Sun above/below the orbital planes of the satellite system. A detailed analysis revealed that these artifacts are almost uniquely governed by the differences of the estimates of direct solar radiation pressure (SRP) in the two solution series (with and without geocenter estimation). This effect can be explained by first-order perturbation theory of celestial mechanics. The relation between the geocenter z-coordinate and the corresponding SRP parameters will be presented. Our theory is applicable to all satellite observing techniques. In addition to GNSS, we applied it to satellite laser ranging (SLR) solutions based on LAGEOS observations. The correlation between geocenter and SRP parameters is not a critical issue for SLR, because these parameters do not have to be estimated. This basic difference between SLR and GNSS analyses explains why SLR is an excellent tool to determine geodetic datum parameters like the geocenter coordinates. The correlation between orbit parameters and the z-component of the geocenter is not limited to a particular orbit model, e.g., that of CODE. The issue should be studied for alternative (e.g., box-wing) models: As soon as non-zero mean values (over one revolution) of the out-of-plane force component exist, one has to expect biased geocenter estimates. 
The insights gained here should be seriously taken into account in the orbit modeling discussion currently taking place within the IGS.
NASA Astrophysics Data System (ADS)
Chiu, Y.; Nishikawa, T.
2013-12-01
With the increasing complexity of parameter-structure identification (PSI) in groundwater modeling, there is a need for robust, fast, and accurate optimizers in the groundwater-hydrology field. For this work, PSI is defined as identifying parameter dimension, structure, and value. In this study, Voronoi tessellation and differential evolution (DE) are used to solve the optimal PSI problem. Voronoi tessellation is used for automatic parameterization, whereby stepwise regression and the error covariance matrix are used to determine the optimal parameter dimension. DE is a novel global optimizer that can be used to solve nonlinear, nondifferentiable, and multimodal optimization problems. It can be viewed as an improved version of genetic algorithms and employs a simple cycle of mutation, crossover, and selection operations. DE is used to estimate the optimal parameter structure and its associated values. A synthetic numerical experiment of continuous hydraulic conductivity distribution was conducted to demonstrate the proposed methodology. The results indicate that DE can identify the global optimum effectively and efficiently. A sensitivity analysis of the control parameters (i.e., the population size, mutation scaling factor, crossover rate, and mutation schemes) was performed to examine their influence on the objective function. The proposed DE was then applied to solve a complex parameter-estimation problem for a small desert groundwater basin in Southern California. Hydraulic conductivity, specific yield, specific storage, fault conductance, and recharge components were estimated simultaneously. Comparison of DE and a traditional gradient-based approach (PEST) shows DE to be more robust and efficient. The results of this work not only provide an alternative for PSI in groundwater models, but also extend DE applications towards solving complex, regional-scale water management optimization problems.
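The mutation, crossover, and selection cycle described above can be sketched as a minimal DE/rand/1/bin optimizer. This is a generic textbook implementation for illustration, not the authors' code or their control-parameter settings.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           n_gen=200, seed=0):
    """Minimal DE/rand/1/bin minimizer of f over box bounds, illustrating
    the simple cycle of mutation, crossover, and selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
            mask = rng.random(dim) < CR                    # binomial crossover
            mask[rng.integers(dim)] = True                 # keep >= 1 mutant gene
            trial = np.where(mask, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= cost[i]:                         # greedy selection
                pop[i], cost[i] = trial, f_trial
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```

The control parameters varied in the paper's sensitivity analysis (population size, mutation scaling factor F, crossover rate CR, and the mutation scheme) appear here as arguments.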
Cox, Louis Anthony Tony
2006-12-01
This article introduces an approach to estimating the uncertain potential effects on lung cancer risk of removing a particular constituent, cadmium (Cd), from cigarette smoke, given the useful but incomplete scientific information available about its modes of action. The approach considers normal cell proliferation; DNA repair inhibition in normal cells affected by initiating events; proliferation, promotion, and progression of initiated cells; and death or sparing of initiated and malignant cells as they are further transformed to become fully tumorigenic. Rather than estimating unmeasured model parameters by curve fitting to epidemiological or animal experimental tumor data, we attempt rough estimates of parameters based on their biological interpretations and comparison to corresponding genetic polymorphism data. The resulting parameter estimates are admittedly uncertain and approximate, but they suggest a portfolio approach to estimating impacts of removing Cd that gives usefully robust conclusions. This approach views Cd as creating a portfolio of uncertain health impacts that can be expressed as biologically independent relative risk factors having clear mechanistic interpretations. Because Cd can act through many distinct biological mechanisms, it appears likely (subjective probability greater than 40%) that removing Cd from cigarette smoke would reduce smoker risks of lung cancer by at least 10%, although it is possible (consistent with what is known) that the true effect could be much larger or smaller. Conservative estimates and assumptions made in this calculation suggest that the true impact could be greater for some smokers. This conclusion appears to be robust to many scientific uncertainties about Cd and smoking effects.
Kendall, W.L.; Nichols, J.D.
2002-01-01
Temporary emigration was identified some time ago as causing potential problems in capture-recapture studies, and in the last five years approaches have been developed for dealing with special cases of this general problem. Temporary emigration can be viewed more generally as involving transitions to and from an unobservable state, and frequently the state itself is one of biological interest (e.g., 'nonbreeder'). Development of models that permit estimation of relevant parameters in the presence of an unobservable state requires either extra information (e.g., as supplied by Pollock's robust design) or the following classes of model constraints: reducing the order of Markovian transition probabilities, imposing a degree of determinism on transition probabilities, removing state specificity of survival probabilities, and imposing temporal constancy of parameters. The objective of the work described in this paper is to investigate estimability of model parameters under a variety of models that include an unobservable state. Beginning with a very general model and no extra information, we used numerical methods to systematically investigate the use of ancillary information and constraints to yield models that are useful for estimation. The result is a catalog of models for which estimation is possible. An example analysis of sea turtle capture-recapture data under two different models showed similar point estimates but increased precision for the model that incorporated ancillary data (the robust design) when compared to the model with deterministic transitions only. This comparison and the results of our numerical investigation of model structures lead to design suggestions for capture-recapture studies in the presence of an unobservable state.
Spectral gap optimization of order parameters for sampling complex molecular systems
Tiwary, Pratyush; Berne, B. J.
2016-01-01
In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365
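The spectral gap that SGOOP maximizes can be illustrated on a small transition matrix; the function below computes only the gap itself, not the full maximum-caliber construction of the matrix from static and dynamic information.

```python
import numpy as np

def spectral_gap(transition_matrix, n_metastable):
    """Spectral gap of a row-stochastic transition matrix: the separation
    between the n-th and (n+1)-th largest eigenvalue magnitudes, where n is
    the assumed number of metastable states. A CV with a larger gap gives a
    cleaner timescale separation between slow and fast dynamics."""
    T = np.asarray(transition_matrix, dtype=float)
    vals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return float(vals[n_metastable - 1] - vals[n_metastable])
```

In the SGOOP setting, one such matrix is built per candidate combination of CVs, and the combination with the largest gap is selected.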
NASA Astrophysics Data System (ADS)
Ramoelo, Abel; Cho, M. A.; Mathieu, R.; Madonsela, S.; van de Kerchove, R.; Kaszta, Z.; Wolff, E.
2015-12-01
Land use and climate change could have huge impacts on food security and the health of various ecosystems. Leaf nitrogen (N) and above-ground biomass are some of the key factors limiting agricultural production and ecosystem functioning. Leaf N and biomass can be used as indicators of rangeland quality and quantity. Conventional methods for assessing these vegetation parameters at the landscape scale are time consuming and tedious. Remote sensing provides a bird's-eye view of the landscape, which creates an opportunity to assess these vegetation parameters over wider rangeland areas. Estimation of leaf N has been successful during peak productivity or high biomass, but few studies have estimated leaf N in the dry season. The estimation of above-ground biomass has been hindered by signal saturation problems when using conventional vegetation indices. The objective of this study is to monitor leaf N and above-ground biomass as indicators of rangeland quality and quantity using WorldView-2 satellite images and the random forest technique in the north-eastern part of South Africa. A series of field campaigns to collect samples for leaf N and biomass was undertaken in March 2013, April or May 2012 (end of wet season), and July 2012 (dry season). Several conventional and red edge based vegetation indices were computed. Overall results indicate that random forest and vegetation indices explained over 89% of leaf N concentrations for grass and trees, and less than 89% across all the years of assessment. The red edge based vegetation indices were among the important variables for predicting leaf N. For biomass, the random forest model explained over 84% of the biomass variation in all years, and visible bands as well as red edge based vegetation indices were found to be important. The study demonstrated that leaf N can be monitored using high spatial resolution imagery with red edge band capability, which is important for rangeland assessment and monitoring.
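Vegetation indices of the kind fed to the random forest models can be sketched directly from band reflectances. The red-edge pairing below is one common WorldView-2 choice, assumed here for illustration rather than taken from the paper's exact index list.

```python
import numpy as np

def ndvi(nir, red):
    """Conventional normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def red_edge_ndvi(nir, red_edge):
    """A red-edge-based normalized difference index, of the kind found
    important for predicting leaf N in this study."""
    return (nir - red_edge) / (nir + red_edge)
```

Both indices are bounded in [-1, 1]; the red-edge variant saturates less quickly over dense canopies, which is why red-edge indices help both leaf N and biomass estimation.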
Estimation of the state of solar activity type stars by virtual observations of CrAVO
NASA Astrophysics Data System (ADS)
Dolgov, A. A.; Shlyapnikov, A. A.
2012-05-01
This work presents the results of preprocessing negatives with direct images of the sky from the CrAO glass library, which has become a part of the on-line archive of the Crimean Astronomical Virtual Observatory (CrAVO). Based on the obtained data, parameters were estimated for the dwarf stars included in the catalog "Stars with solar-type activity" (GTSh10). The following matters are considered: the methodology for searching negatives containing the positions of the studied stars and for calculating the limiting magnitude; image viewing and reduction with the facilities of the International Virtual Observatory; and the preliminary results of the photometry of the studied objects.
NASA Astrophysics Data System (ADS)
Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu
2018-02-01
State of charge (SOC) estimation is generally acknowledged as one of the most important functions of the battery management system for lithium-ion batteries in new energy vehicles. Though every effort is made for the various online SOC estimation methods to increase estimation accuracy as much as possible within limited on-chip resources, little of the literature discusses the error sources of those SOC estimation methods. This paper first reviews the commonly studied SOC estimation methods using a conventional classification. A novel perspective focusing on the error analysis of SOC estimation methods is then proposed. SOC estimation methods are analyzed from the viewpoints of the measured values, models, algorithms, and state parameters. Subsequently, error flow charts are proposed to trace the error sources from signal measurement through the models and algorithms for the online SOC estimation methods widely used in new energy vehicles. Finally, considering working conditions, the choice of more reliable and applicable SOC estimation methods is discussed, and the future development of promising online SOC estimation methods is suggested.
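How a measurement error propagates into SOC can be sketched with the simplest method, ampere-hour (coulomb) counting, where a constant current-sensor bias accumulates into SOC drift. This is an illustrative sketch of one error source, not one of the paper's error flow charts.

```python
def coulomb_count_soc(soc0, currents_a, dt_s, capacity_ah, bias_a=0.0):
    """Ampere-hour (coulomb-counting) SOC estimation with an optional
    constant current-sensor bias. Positive current denotes discharge;
    the bias term shows how a measured-value error drifts the estimate."""
    soc = soc0
    for i_meas in currents_a:
        soc -= (i_meas + bias_a) * dt_s / 3600.0 / capacity_ah
    return soc
```

Because the drift grows without bound, pure coulomb counting is usually fused with a model-based corrector; the measured values, the model, and the algorithm each contribute their own term to the total error.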
Determining index of refraction from polarimetric hyperspectral radiance measurements
NASA Astrophysics Data System (ADS)
Martin, Jacob A.; Gross, Kevin C.
2015-09-01
Polarimetric hyperspectral imaging (P-HSI) combines two of the most common remote sensing modalities. This work leverages the combination of these techniques to improve material classification. Classifying and identifying materials requires parameters which are invariant to changing viewing conditions, and most often a material's reflectivity or emissivity is used. Measuring these most often requires that assumptions be made about the material and the atmospheric conditions. Combining both polarimetric and hyperspectral imaging, we propose a method to remotely estimate the index of refraction of a material. In general, this is an underdetermined problem because both the real and imaginary components of the index of refraction are unknown at every spectral point. By modeling the spectral variation of the index of refraction using a few parameters, however, the problem can be made overdetermined. A number of different functions can be used to describe this spectral variation, and some are discussed here. Reducing the number of spectral parameters to fit allows us to add parameters which estimate atmospheric downwelling radiance and transmittance. Additionally, the object temperature is added as a fit parameter. The set of these parameters that best replicates the measured data is then found using a bounded Nelder-Mead simplex search algorithm. Other search algorithms are also examined and discussed. Results show that this technique has promise but also some limitations, which are the subject of ongoing work.
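The bounded Nelder-Mead search over model parameters can be sketched with a generic forward model and a sum-of-squares cost; the actual forward model couples the complex index of refraction, object temperature, and atmospheric terms, none of which are reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def fit_to_measurement(model, x, measured, p0, bounds):
    """Find the parameter set that best replicates the measured data using
    a bounded Nelder-Mead simplex search, as in the retrieval above."""
    cost = lambda p: float(np.sum((model(x, *p) - measured) ** 2))
    result = minimize(cost, p0, method="Nelder-Mead", bounds=bounds)
    return result.x
```

Nelder-Mead needs no gradients, which suits forward models built from radiative-transfer terms; the bounds keep the simplex inside physically meaningful parameter ranges.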
NASA Astrophysics Data System (ADS)
Sramek, Benjamin Koerner
The ability to deliver conformal dose distributions in radiation therapy through intensity modulation and the potential for tumor dose escalation to improve treatment outcome has necessitated an increase in localization accuracy of inter- and intra-fractional patient geometry. Megavoltage cone-beam CT imaging using the treatment beam and onboard electronic portal imaging device is one option currently being studied for implementation in image-guided radiation therapy. However, routine clinical use is predicated upon continued improvements in image quality and patient dose delivered during acquisition. The formal statement of hypothesis for this investigation was that the conformity of planned to delivered dose distributions in image-guided radiation therapy could be further enhanced through the application of kilovoltage scatter correction and intermediate view estimation techniques to megavoltage cone-beam CT imaging, and that normalized dose measurements could be acquired and inter-compared between multiple imaging geometries. 
The specific aims of this investigation were to: (1) incorporate the Feldkamp, Davis and Kress filtered backprojection algorithm into a program to reconstruct a voxelized linear attenuation coefficient dataset from a set of acquired megavoltage cone-beam CT projections, (2) characterize the effects on megavoltage cone-beam CT image quality resulting from the application of Intermediate View Interpolation and Intermediate View Reprojection techniques to limited-projection datasets, (3) incorporate the Scatter and Primary Estimation from Collimator Shadows (SPECS) algorithm into megavoltage cone-beam CT image reconstruction and determine the set of SPECS parameters which maximize image quality and quantitative accuracy, and (4) evaluate the normalized axial dose distributions received during megavoltage cone-beam CT image acquisition using radiochromic film and thermoluminescent dosimeter measurements in anthropomorphic pelvic and head and neck phantoms. The conclusions of this investigation were: (1) the implementation of intermediate view estimation techniques in megavoltage cone-beam CT produced improvements in image quality, with the largest impact occurring for smaller numbers of initially-acquired projections, (2) the SPECS scatter correction algorithm could be successfully applied to projection data acquired using an electronic portal imaging device during megavoltage cone-beam CT image reconstruction, (3) a large range of SPECS parameters were shown to reduce cupping artifacts as well as improve reconstruction accuracy, with application to anthropomorphic phantom geometries improving the percent difference in reconstructed electron density for soft tissue from -13.6% to -2.0%, and for cortical bone from -9.7% to 1.4%, (4) dose measurements in the anthropomorphic phantoms showed consistent agreement between planar measurements using radiochromic film and point measurements using thermoluminescent dosimeters, and (5) a comparison of normalized dose measurements acquired with radiochromic film to those calculated using multiple treatment planning systems, accelerator-detector combinations, patient geometries and accelerator outputs produced relatively good agreement.
Estimating Function Approaches for Spatial Point Processes
NASA Astrophysics Data System (ADS)
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization from a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data.
Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood, however, is barely feasible due to the intense computation and high memory required to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we study the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.
The Area Coverage of Geophysical Fields as a Function of Sensor Field-of View
NASA Technical Reports Server (NTRS)
Key, Jeffrey R.
1994-01-01
In many remote sensing studies of geophysical fields such as clouds, land cover, or sea ice characteristics, the fractional area coverage of the field in an image is estimated as the proportion of pixels that have the characteristic of interest (i.e., are part of the field) as determined by some thresholding operation. The effect of sensor field-of-view on this estimate is examined by modeling the unknown distribution of subpixel area fraction with the beta distribution, whose two parameters depend upon the true fractional area coverage, the pixel size, and the spatial structure of the geophysical field. Since it is often not possible to relate digital number, reflectance, or temperature to subpixel area fraction, the statistical models described are used to determine the effect of pixel size and thresholding operations on the estimate of area fraction for hypothetical geophysical fields. Examples are given for simulated cumuliform clouds and linear openings in sea ice, whose spatial structures are described by an exponential autocovariance function. It is shown that the rate and direction of change in total area fraction with changing pixel size depends on the true area fraction, the spatial structure, and the thresholding operation used.
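The thresholding bias described above can be sketched directly from the beta model: a pixel is labeled "covered" when its subpixel area fraction exceeds the threshold, and the resulting pixel count generally differs from the true mean coverage. The parameter values used below are illustrative, not fits to any particular geophysical field.

```python
import numpy as np
from scipy.stats import beta

def thresholded_area_fraction(a, b, threshold=0.5):
    """Area fraction estimated by per-pixel thresholding when the subpixel
    area fraction follows Beta(a, b). Returns (estimated, true) fractions;
    their difference is the bias introduced by pixel size and thresholding."""
    estimated = float(beta.sf(threshold, a, b))   # P(subpixel fraction > threshold)
    true = a / (a + b)                            # Beta mean = true area fraction
    return estimated, true
```

Changing the pixel size changes (a, b) through the field's spatial autocovariance, which is how the model links sensor field-of-view to the direction and rate of the bias.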
Ionospheric effects during severe space weather events seen in ionospheric service data products
NASA Astrophysics Data System (ADS)
Jakowski, Norbert; Danielides, Michael; Mayer, Christoph; Borries, Claudia
Space weather effects are closely related to complex perturbation processes in the magnetosphere-ionosphere-thermosphere system, initiated by enhanced solar energy input. To understand and model complex space weather processes, different views on the same subject are helpful. One of the key ionospheric parameters is the Total Electron Content (TEC), which provides a first-order approximation of the ionospheric range error in Global Navigation Satellite System (GNSS) applications. Additionally, horizontal gradients and the time rate of change of TEC are important for estimating the perturbation degree of the ionosphere. TEC maps can effectively be generated using ground based GNSS measurements from global receiver networks. Whereas ground based GNSS measurements provide good horizontal resolution, space based radio occultation measurements can complete the view by providing information on the vertical plasma density distribution. The combination of ground based TEC and vertical sounding measurements provides essential information on the shape of the vertical electron density profile by computing the equivalent slab thickness at the ionosonde station site. Since radio beacon measurements at 150/400 MHz are well suited to trace the horizontal structure of Travelling Ionospheric Disturbances (TIDs), these data products essentially complete GNSS based TEC mapping results. Radio scintillation data products, characterising small scale irregularities in the ionosphere, are useful to estimate the continuity and availability of transionospheric radio signals. The different data products are addressed while discussing severe space weather events in the ionosphere, e.g. the events of October/November 2003. The complementary view of different near real time service data products is helpful to better understand the complex dynamics of ionospheric perturbation processes and to forecast the development of parameters customers are interested in.
The plant virus microscope image registration method based on mismatches removing.
Wei, Lifang; Zhou, Shucheng; Dong, Heng; Mao, Qianzhuo; Lin, Jiaxiang; Chen, Riqing
2016-01-01
Electron microscopy is one of the major means of observing viruses. The view in virus microscope images is limited by specimen preparation and the size of the camera's field of view. To solve this problem, the virus sample is prepared as multiple slices for information fusion, and image registration techniques are applied to obtain a large field of view and whole sections. Image registration techniques have been developed over the past decades to increase the camera's field of view. Nevertheless, these approaches typically work in batch mode and rely on motorized microscopes. Alternatively, the methods are conceived just to provide visually pleasant registration for image sequences with high overlap ratios. This work presents a method for virus microscope image registration with detailed visual information and subpixel accuracy, even when the overlap ratio of the image sequence is 10% or less. The proposed method focuses on the correspondence set and the interimage transformation. A mismatch removal strategy based on spatial consistency and keypoint components is proposed to enrich the correspondence set, and the translation model parameters as well as tonal inhomogeneities are corrected by hierarchical estimation and model selection. In the experiments performed, we tested different registration approaches and virus images, confirming that the translation model is not always stationary, despite the fact that the images of the sample come from the same sequence. The mismatch removal strategy makes registration of virus microscope images at subpixel accuracy easier, and the optional parameters chosen by the hierarchical estimation and model selection strategies make the proposed method precise and reliable for low overlap ratio image sequences. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kauzlaric, Martina; Schädler, Bruno; Weingartner, Rolf
2014-05-01
The main objective of the MontanAqua transdisciplinary project is to develop strategies for moving towards more sustainable water resources management in the Crans-Montana-Sierre region (Valais, Switzerland) in view of global change. Therefore, a detailed assessment of the water resources available in the study area today and in the future is needed. The study region is situated in the inner alpine zone, with strong altitudinal precipitation gradients: from the precipitation-rich alpine ridge down to the dry Rhône plain. A typical plateau glacier on top of the ridge is partly drained through karstic underground formations and linked to various springs on either side of the water divide. The main anthropogenic influences on the system are reservoirs and diversions to the irrigation channels. Thus, the study area does not cover a classical hydrological basin, as the water frequently flows across natural hydrographic boundaries. This is a big challenge from a hydrological point of view, as we cannot easily achieve a closed, measured water balance. Moreover, a lack of comprehensive historical data in the catchment limits the degree of process conceptualization possible and prohibits the usual parameter estimation procedures. The Penn State Integrated Hydrologic Model (PIHM) (Kumar, 2009) has been selected to estimate the available natural water resource for the whole study area. It is a semi-discrete, physically-based model which includes channel routing, overland flow, subsurface saturated and unsaturated flow, rainfall interception, snow melting and evapotranspiration. Its unstructured mesh offers a flexible domain decomposition strategy for efficient and accurate integration of the physiographic, climatic and hydrographic characteristics of the watershed.
The model was modified to be more suitable for a karstified mountainous catchment: it now includes the possibility of adding external point sources, and the temperature-index approach for estimating melt was adjusted to include the influence of solar radiation. No parameter calibration in a classical sense was used, as sufficient observations are missing. Hence, parameters are estimated with values obtained from the literature, and catchment boundaries were determined based on tracer experiments as well as the relationship between precipitation, spring discharge and river discharge. Historical data such as river discharge, infiltration experiments, and snow and glacier mass balance measurements were used to validate simulations. Some case studies are presented here, illustrating the difficulty of estimating snowmelt and icemelt parameters and judging their correctness, and the consequent sensitivity of the regional water balance. REFERENCES Kumar, M. 2009: Toward a hydrologic modeling system. PhD Thesis, Department of Civil and Environmental Engineering, Pennsylvania State University, USA.
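The radiation-adjusted temperature-index melt approach mentioned above can be sketched in a few lines. This is a minimal Hock-style degree-day sketch; the melt factor and radiation coefficient below are hypothetical placeholder values, not parameters from the MontanAqua study.

```python
# Radiation-adjusted temperature-index melt, a minimal sketch.
# mf (melt factor, mm w.e. per degC per day) and rad_coeff are hypothetical
# illustrative values, not calibrated MontanAqua parameters.
def daily_melt(temp_c, solar_wm2, mf=2.0, rad_coeff=0.01):
    """Daily melt (mm w.e.); zero at or below the 0 degC melt threshold."""
    if temp_c <= 0.0:
        return 0.0
    # Classical degree-day term, with an additive solar-radiation adjustment
    return (mf + rad_coeff * solar_wm2) * temp_c
```

The radiation term lets two days with equal air temperature but different insolation produce different melt, which is the adjustment the abstract describes.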
Validation and calibration of structural models that combine information from multiple sources.
Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A
2017-02-01
Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon, and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
SEPARABLE FACTOR ANALYSIS WITH APPLICATIONS TO MORTALITY DATA
Fosdick, Bailey K.; Hoff, Peter D.
2014-01-01
Human mortality data sets can be expressed as multiway data arrays, the dimensions of which correspond to categories by which mortality rates are reported, such as age, sex, country and year. Regression models for such data typically assume an independent error distribution or an error model that allows for dependence along at most one or two dimensions of the data array. However, failing to account for other dependencies can lead to inefficient estimates of regression parameters, inaccurate standard errors and poor predictions. An alternative to assuming independent errors is to allow for dependence along each dimension of the array using a separable covariance model. However, the number of parameters in this model increases rapidly with the dimensions of the array and, for many arrays, maximum likelihood estimates of the covariance parameters do not exist. In this paper, we propose a submodel of the separable covariance model that estimates the covariance matrix for each dimension as having factor analytic structure. This model can be viewed as an extension of factor analysis to array-valued data, as it uses a factor model to estimate the covariance along each dimension of the array. We discuss properties of this model as they relate to ordinary factor analysis, describe maximum likelihood and Bayesian estimation methods, and provide a likelihood ratio testing procedure for selecting the factor model ranks. We apply this methodology to the analysis of data from the Human Mortality Database, and show in a cross-validation experiment how it outperforms simpler methods. Additionally, we use this model to impute mortality rates for countries that have no mortality data for several years. Unlike other approaches, our methodology is able to estimate similarities between the mortality rates of countries, time periods and sexes, and use this information to assist with the imputations. PMID:25489353
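The separable factor-analytic covariance class described above can be written down compactly: each dimension of the array gets a covariance with factor structure, and the covariance of the vectorised array is their Kronecker product. The sketch below is a toy construction with random loadings and small dimensions, not the paper's maximum likelihood or Bayesian estimation procedure.

```python
import numpy as np

# Separable covariance with factor-analytic structure along each array mode.
# Toy dimensions and random loadings, for illustration only.
rng = np.random.default_rng(0)

def factor_cov(p, rank):
    """Sigma = Lambda Lambda^T + D, the factor-analytic covariance form."""
    lam = rng.normal(size=(p, rank))
    d = rng.uniform(0.5, 1.5, size=p)        # positive diagonal (uniquenesses)
    return lam @ lam.T + np.diag(d)

sigma_age = factor_cov(4, 1)    # e.g. an age dimension with 4 categories
sigma_ctry = factor_cov(3, 1)   # e.g. a country dimension with 3 categories
# Covariance of the vectorised 4 x 3 data array under the separable model
sigma = np.kron(sigma_age, sigma_ctry)
```

Note how the parameter count grows additively in the mode sizes (loadings plus diagonals per mode) rather than quadratically in their product, which is the motivation the abstract gives for the submodel.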
NASA Astrophysics Data System (ADS)
Melesse, Assefa; Hajigholizadeh, Mohammad; Blakey, Tara
2017-04-01
In this study, Landsat 8 and Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) data were used to model the spatiotemporal changes of water quality parameters: turbidity, chlorophyll-a (chl-a), total phosphate, and total nitrogen (TN) from Landsat 8, and algal blooms from SeaWiFS. The study was conducted in Florida Bay, south Florida, and model outputs were compared with in-situ observed data. The Landsat 8 based study found that the predictive models for estimating chl-a and turbidity concentrations, developed through stepwise multiple linear regression (MLR), gave high coefficients of determination in the dry (wet) season (R2 = 0.86 (0.66) for chl-a and R2 = 0.84 (0.63) for turbidity). Total phosphate and TN were estimated using best-fit multiple linear regression models as a function of Landsat TM and OLI and ground data, and also showed high coefficients of determination in the dry (wet) season (R2 = 0.74 (0.69) for total phosphate and R2 = 0.82 (0.82) for TN). Similarly, the ability of SeaWiFS to retrieve chl-a from optically shallow coastal waters by applying algorithms specific to the pixels' benthic class was evaluated. Benthic class was determined through satellite image-based classification methods. It was found that the benthic-class-based chl-a modeling algorithm outperformed the existing regionally-tuned approach. Evaluation of the residuals indicated the potential for further improvement to chl-a estimation through finer characterization of benthic environments. Key words: Landsat, SeaWiFS, water quality, Florida Bay, chl-a, turbidity
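The band-based multiple linear regression at the heart of the study above can be sketched as follows. The reflectances and the generating relation are entirely synthetic stand-ins, not Florida Bay observations or the paper's fitted coefficients; the sketch only shows the regression form.

```python
import numpy as np

# Multiple linear regression of chl-a on band reflectances, schematic only.
# Synthetic data: four "band" reflectances and a linear generating relation.
rng = np.random.default_rng(1)
n_obs = 100
bands = rng.uniform(0.01, 0.3, size=(n_obs, 4))      # four band reflectances
chl = 5.0 + 20.0 * bands[:, 1] - 12.0 * bands[:, 3] \
      + rng.normal(0.0, 0.1, n_obs)                  # assumed relation + noise

X = np.column_stack([np.ones(n_obs), bands])         # design matrix, intercept first
coef, *_ = np.linalg.lstsq(X, chl, rcond=None)       # least-squares fit
pred = X @ coef
r2 = 1.0 - np.sum((chl - pred) ** 2) / np.sum((chl - chl.mean()) ** 2)
```

A stepwise variant would add or drop band predictors based on a fit criterion; here all four are kept for brevity.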
NASA Astrophysics Data System (ADS)
Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian
2013-10-01
Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose (18F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. 
Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters.
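The geometric transfer matrix (GTM) idea referenced above reduces to a small linear-algebra step: observed regional means are modelled as a mixing matrix applied to the true means, and correction inverts that mixing. The numbers below are illustrative, not from the study, and the spill fractions would in practice come from the region spread functions.

```python
import numpy as np

# Geometric transfer matrix (GTM) partial volume correction, a toy sketch.
# Row i of G holds the fraction of each region's activity that spills into
# region i (hypothetical values; in practice derived from RSFs).
G = np.array([[0.85, 0.10],
              [0.15, 0.80]])
true_means = np.array([10.0, 4.0])        # true regional activity
observed = G @ true_means                 # PSF-blurred regional means
corrected = np.linalg.solve(G, observed)  # GTM correction recovers the truth
```

The frame-dependence discussed in the abstract enters through G: if the effective resolution changes between frames, a single G computed from one frame misrepresents the mixing in the others.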
NASA Astrophysics Data System (ADS)
Królak, Andrzej; Trzaskoma, Pawel
1996-05-01
Application of wavelet analysis to the estimation of parameters of the broad-band gravitational-wave signal emitted by a binary system is investigated. A method of instantaneous frequency extraction, first proposed in this context by Innocent and Vinet, is used. The gravitational-wave signal from a binary is investigated from the point of view of signal analysis theory, and it is shown that such a signal is characterized by a large time-bandwidth product. This property enables the extraction of frequency modulation from the wavelet transform of the signal. The wavelet transform of the chirp signal from a binary is calculated analytically. Numerical simulations with the noisy chirp signal are performed. The gravitational-wave signal from a binary is taken in the quadrupole approximation and is buried in noise corresponding to three different values of the signal-to-noise ratio, and the wavelet method is applied to extract the frequency modulation of the signal. Then, from the frequency modulation, the chirp mass parameter of the binary is estimated. It is found that the chirp mass can be estimated to a good accuracy, typically of the order of (20/ρ)%, where ρ is the optimal signal-to-noise ratio. It is also shown that the post-Newtonian effects in the gravitational-wave signal from a binary can be discriminated to a satisfactory accuracy.
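The frequency-modulation extraction described above can be illustrated by ridge-following a wavelet transform. The sketch below uses a simple linear chirp rather than a post-Newtonian binary waveform, and the sampling rate, Morlet-style wavelet, and scale grid are illustrative choices, not those of the paper.

```python
import numpy as np

# Instantaneous-frequency extraction from the ridge of a Morlet-style wavelet
# transform, sketched on a linear chirp (illustrative stand-in for a GW chirp).
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
f0, k = 50.0, 100.0                               # chirp: f(t) = f0 + k*t
sig = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

freqs = np.arange(30.0, 180.0, 2.0)               # candidate frequencies
power = np.empty((freqs.size, t.size))
for i, f in enumerate(freqs):
    scale = 1.0 / f                               # one carrier cycle per envelope std
    tw = np.arange(-3.0 * scale, 3.0 * scale, 1.0 / fs)
    # Complex (analytic) wavelet: carrier at f Hz under a Gaussian envelope
    w = np.exp(2j * np.pi * f * tw) * np.exp(-tw ** 2 / (2.0 * scale ** 2))
    w /= np.abs(w).sum()                          # L1-normalise across scales
    power[i] = np.abs(np.convolve(sig, w, mode="same"))

ridge = freqs[np.argmax(power, axis=0)]           # frequency-modulation estimate
```

Fitting the extracted ridge to the theoretical frequency evolution is what yields the chirp mass in the paper; here the ridge simply tracks the known linear law.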
Black-Litterman model on non-normal stock return (Case study four banks at LQ-45 stock index)
NASA Astrophysics Data System (ADS)
Mahrivandi, Rizki; Noviyanti, Lienda; Setyanto, Gatot Riwi
2017-03-01
The formation of an optimal portfolio is a method that can help investors minimize risk and optimize profitability. One model for the optimal portfolio is the Black-Litterman (BL) model. The BL model can incorporate historical data and the views of investors to form a new prediction of portfolio returns as a basis for constructing the asset-weighting model. The BL model has two fundamental problems: the assumption of normality, and the estimation of parameters in the market Bayesian prior framework when returns do not come from a normal distribution. This study provides an alternative solution in which the stock returns and investor views in the BL model are modelled with non-normal distributions.
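For reference, the standard normal-case BL posterior mean that the study generalises can be computed directly. All numbers below (tau, the covariance, the single relative view and its uncertainty) are illustrative, not taken from the LQ-45 case study.

```python
import numpy as np

# Black-Litterman posterior mean under the usual normal assumptions.
# Illustrative two-asset setup with one relative view.
tau = 0.05
sigma = np.array([[0.04, 0.01], [0.01, 0.09]])  # asset return covariance
pi = np.array([0.05, 0.07])                     # equilibrium (prior) returns
P = np.array([[1.0, -1.0]])                     # view: asset 1 outperforms asset 2
q = np.array([0.02])                            # ...by 2%
omega = np.array([[0.001]])                     # view uncertainty

ts_inv = np.linalg.inv(tau * sigma)
om_inv = np.linalg.inv(omega)
post_prec = ts_inv + P.T @ om_inv @ P           # posterior precision
mu_bl = np.linalg.solve(post_prec, ts_inv @ pi + P.T @ om_inv @ q)
```

The posterior pulls the prior return spread (here -2%) toward the stated view (+2%), with the balance set by tau*sigma versus omega; the non-normal extension in the study replaces the Gaussian prior and view distributions in this update.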
Kinetic characterisation of primer mismatches in allele-specific PCR: a quantitative assessment.
Waterfall, Christy M; Eisenthal, Robert; Cobb, Benjamin D
2002-12-20
A novel method of estimating the kinetic parameters of Taq DNA polymerase during rapid cycle PCR is presented. A model was constructed using a simplified sigmoid function to represent substrate accumulation during PCR, in combination with the general equation describing high-substrate inhibition for Michaelis-Menten enzymes. The PCR progress curve was viewed as a series of independent reactions in which initial rates were accurately measured for each cycle. Kinetic parameters were obtained for allele-specific PCR (AS-PCR) amplification to examine the effect of mismatches on amplification. A high degree of correlation was obtained, providing evidence of substrate inhibition as a major cause of the plateau phase that occurs in the later cycles of PCR.
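The substrate-inhibition rate law invoked above has the standard Michaelis-Menten-with-inhibition form v = Vmax·S / (Km + S + S²/Ki), which rises, peaks at S = sqrt(Km·Ki), and then falls, producing the plateau behaviour at high substrate. The parameter values below are illustrative, not the paper's fitted Taq kinetics.

```python
import math

# Michaelis-Menten rate with high-substrate inhibition (illustrative parameters).
def rate(s, vmax=1.0, km=0.5, ki=2.0):
    """Initial rate v at substrate concentration s."""
    return vmax * s / (km + s + s * s / ki)

s_opt = math.sqrt(0.5 * 2.0)   # rate maximum at sqrt(Km * Ki) = 1.0
```

Fitting this curve to the per-cycle initial rates, as the abstract describes, is what lets the inhibition constant be estimated from PCR progress data.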
Feasibility of pulse wave velocity estimation from low frame rate US sequences in vivo
NASA Astrophysics Data System (ADS)
Zontak, Maria; Bruce, Matthew; Hippke, Michelle; Schwartz, Alan; O'Donnell, Matthew
2017-03-01
The pulse wave velocity (PWV) is considered one of the most important clinical parameters for evaluating cardiovascular risk, vascular adaptation, etc. There has been substantial work attempting to measure the PWV in peripheral vessels using ultrasound (US). This paper presents a fully automatic algorithm for PWV estimation from the human carotid using US sequences acquired with a Logic E9 scanner (modified for RF data capture) and a 9L probe. Our algorithm samples the pressure wave in time by tracking wall displacements over the sequence, and estimates the PWV by calculating the temporal shift between two sampled waves at two distinct locations. Several recent studies have utilized similar ideas along with speckle tracking tools and high frame rate (above 1 kHz) sequences to estimate the PWV. To explore PWV estimation in a more typical clinical setting, we used focused-beam scanning, which yields relatively low frame rates and small fields of view (e.g., 200 Hz for a 16.7 mm field of view). For our application, a 200 Hz frame rate is low. In particular, the sub-frame temporal accuracy required for PWV estimation between locations 16.7 mm apart ranges from 0.82 of a frame for 4 m/s to 0.33 for 10 m/s. When the distance is further reduced (to 0.28 mm between two beams), the sub-frame precision is in parts per thousand (ppt) of the frame (about 5 ppt for 10 m/s). As such, the contributions of our algorithm and this paper are: 1. the ability to work with a low frame rate (~200 Hz) and a decreased lateral field of view; 2. fully automatic segmentation of the wall intima (using raw RF images); 3. collaborative speckle tracking of 2D axial and lateral carotid wall motion; 4. outlier-robust PWV calculation from multiple votes using RANSAC; and 5. algorithm evaluation on volunteers of different ages and health conditions.
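The sub-frame precision figures quoted above follow from simple arithmetic: the wave transit time between the two measurement sites divided by the frame period. This small sketch reproduces that calculation (the slow-wave case comes out near 0.84 of a frame by this arithmetic, close to the 0.82 quoted).

```python
# Sub-frame timing precision needed for PWV estimation at a 200 Hz frame rate:
# transit time between sites divided by the frame period.
frame_s = 1.0 / 200.0                  # 5 ms frame period

def transit_frames(distance_m, pwv_m_s):
    """Wave transit time between two sites, expressed in frames."""
    return (distance_m / pwv_m_s) / frame_s

slow = transit_frames(16.7e-3, 4.0)    # ~0.84 of a frame at 4 m/s
fast = transit_frames(16.7e-3, 10.0)   # ~0.33 of a frame at 10 m/s
close = transit_frames(0.28e-3, 10.0)  # ~5.6 parts per thousand of a frame
```

The last case shows why beam-to-beam estimation at 0.28 mm spacing demands parts-per-thousand timing resolution.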
Wavelet extractor: A Bayesian well-tie and wavelet extraction program
NASA Astrophysics Data System (ADS)
Gunning, James; Glinsky, Michael E.
2006-06-01
We introduce a new open-source toolkit for the well-tie or wavelet extraction problem of estimating seismic wavelets from seismic data, time-to-depth information, and well-log suites. The wavelet extraction model is formulated as a Bayesian inverse problem, and the software will simultaneously estimate wavelet coefficients, other parameters associated with uncertainty in the time-to-depth mapping, positioning errors in the seismic imaging, and useful amplitude-variation-with-offset (AVO) related parameters in multi-stack extractions. It is capable of multi-well, multi-stack extractions, and uses continuous seismic data-cube interpolation to cope with the problem of arbitrary well paths. Velocity constraints in the form of checkshot data, interpreted markers, and sonic logs are integrated in a natural way. The Bayesian formulation allows computation of full posterior uncertainties of the model parameters, and the important problem of the uncertain wavelet span is addressed using a multi-model posterior developed from Bayesian model selection theory. The wavelet extraction tool is distributed as part of the Delivery seismic inversion toolkit. A simple log and seismic viewing tool is included in the distribution. The code is written in Java, and is thus platform independent, but the Seismic Unix (SU) data model makes the inversion particularly suited to Unix/Linux environments. It is a natural companion piece of software to Delivery, having the capacity to produce maximum likelihood wavelet and noise estimates, but will also be of significant utility to practitioners wanting to produce wavelet estimates for other inversion codes or purposes. The generation of full parameter uncertainties is a crucial function for workers wishing to investigate questions of wavelet stability before proceeding to more advanced inversion studies.
NASA Astrophysics Data System (ADS)
Yadav, Vinod; Singh, Arbind Kumar; Dixit, Uday Shanker
2017-08-01
Flat rolling is one of the most widely used metal forming processes. For proper control and optimization of the process, modelling is essential, and modelling requires input data about material properties and friction. In batch-production rolling with newer materials, it may be difficult to determine the input parameters offline. In view of this, the present work experimentally verifies a methodology for determining these parameters online from measurements of exit temperature and slip. It is observed that the inverse prediction of input parameters can be done with reasonable accuracy. It was also found experimentally that there is a correlation between the micro-hardness and flow stress of the material; however, the correlation between surface roughness and reduction is less obvious.
NASA Technical Reports Server (NTRS)
McFarland, Shane M.
2008-01-01
Field of view has always been a design feature paramount to helmet design, and in particular space suit design, where the helmet must provide an adequate field of view for a large range of activities, environments, and body positions. For Project Constellation, a slightly different approach to helmet requirement maturation was utilized: one that was less a direct function of body position and suit pressure and more a function of the mission segment in which the field of view is required. Through a taxonomy of the various parameters that affect suited FOV, as well as consideration of possible nominal and contingency operations during each mission segment, a reduction process was able to condense the large number of possible outcomes to only six unique field of view angle requirements that still captured all necessary variables without sacrificing fidelity. The specific field of view angles were defined by considering mission segment activities, historical performance of other suits, comparison between similar requirements (pressure visor up versus down, etc.), estimated field of view requirements from other teams (Orion, Altair, EVA), previous field of view tests, medical data for shirtsleeve field of view performance, and mapping of visual field data to generate 45-degree off-axis field of view requirements. Full resolution of several specific field of view angle requirements warranted further work, which consisted of low- and medium-fidelity field of view testing in the rear-entry I-Suit and the DO27 helmet prototype. This paper serves to document this reduction process and the follow-up testing employed to write the Constellation requirements for helmet field of view.
Trojan War displayed as a full annihilation-diffusion-reaction model
NASA Astrophysics Data System (ADS)
Flores, J. C.
2017-02-01
The diffusive pair annihilation model, with embedded topological domains and archaeological data, is applied in an analysis of the hypothetical Trojan-Greek war during the late Bronze Age. Parameter estimations are made explicitly for the critical dynamics of the model. In particular, the 8-metre walls of Troy can be viewed as the effective shield that provided the technological difference between the two armies. Suggestively, the numbers in The Iliad are quite sound, being in accord with Lanchester's laws of warfare.
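Lanchester's laws, invoked above, have a compact form worth showing: under the square law dG/dt = -a·T, dT/dt = -b·G, the quantity b·G² - a·T² is conserved. The sketch below Euler-integrates a toy engagement with illustrative coefficients (an advantage such as a defensive wall could enter as a lower attrition coefficient against the defenders); it is not the paper's annihilation-diffusion model.

```python
# Lanchester square law for two opposing forces, toy Euler integration.
# a, b are illustrative effectiveness coefficients, not estimated parameters.
a, b = 0.8, 1.0            # effectiveness of T against G, and of G against T
g0, t0 = 100.0, 90.0       # initial force sizes
g, t = g0, t0
dt = 0.001
for _ in range(1000):      # integrate dG/dt = -a*T, dT/dt = -b*G to time 1.0
    g, t = g - dt * a * t, t - dt * b * g

inv0 = b * g0 ** 2 - a * t0 ** 2   # the square-law invariant b*G^2 - a*T^2
inv = b * g ** 2 - a * t ** 2      # approximately conserved by the integration
```

The conserved quadratic is what makes force *squared* (not force size) the decisive quantity, which is the sense in which the Iliad's numbers can be checked against the law.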
Coherent broadband sonar signal processing with the environmentally corrected matched filter
NASA Astrophysics Data System (ADS)
Camin, Henry John, III
The matched filter is the standard approach for coherently processing active sonar signals, where knowledge of the transmitted waveform is used in the detection and parameter estimation of received echoes. Matched filtering broadband signals provides higher levels of range resolution and reverberation noise suppression than can be realized through narrowband processing. Since theoretical processing gains are proportional to the signal bandwidth, it is typically desirable to utilize the widest band signals possible. However, as signal bandwidth increases, so do environmental effects that tend to decrease correlation between the received echo and the transmitted waveform. This is especially true for ultra wideband signals, where the bandwidth exceeds an octave or approximately 70% fractional bandwidth. This loss of coherence often results in processing gains and range resolution much lower than theoretically predicted. Wiener filtering, commonly used in image processing to improve distorted and noisy photos, is investigated in this dissertation as an approach to correct for these environmental effects. This improved signal processing, Environmentally Corrected Matched Filter (ECMF), first uses a Wiener filter to estimate the environmental transfer function and then again to correct the received signal using this estimate. This process can be viewed as a smarter inverse or whitening filter that adjusts behavior according to the signal to noise ratio across the spectrum. Though the ECMF is independent of bandwidth, it is expected that ultra wideband signals will see the largest improvement, since they tend to be more impacted by environmental effects. The development of the ECMF and demonstration of improved parameter estimation with its use are the primary emphases in this dissertation. Additionally, several new contributions to the field of sonar signal processing made in conjunction with the development of the ECMF are described. 
A new, nondimensional wideband ambiguity function is presented as a way to view the behavior of the matched filter with and without the decorrelating environmental effects; a new, integrated phase broadband angle estimation method is developed and compared to existing methods; and a new, asymptotic offset phase angle variance model is presented. Several data sets are used to demonstrate these new contributions. High fidelity Sonar Simulation Toolset (SST) synthetic data is used to characterize the theoretical performance. Two in-water data sets were used to verify assumptions that were made during the development of the ECMF. Finally, a newly collected in-air data set containing ultra wideband signals was used in lieu of a cost prohibitive underwater experiment to demonstrate the effectiveness of the ECMF at improving parameter estimates.
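The two-pass Wiener idea behind the ECMF can be sketched in the frequency domain: a Wiener filter first estimates the environmental transfer function from the known transmit waveform, and the estimate is then used to correct the received echo. The channel taps, noise level, and regularisation constant below are illustrative, not drawn from the dissertation's data sets.

```python
import numpy as np

# Two-pass Wiener processing in the spirit of the ECMF (illustrative setup).
rng = np.random.default_rng(2)
n = 512
tx = np.sin(2 * np.pi * 40 * np.arange(n) / n) * np.hanning(n)  # transmit pulse
h = np.zeros(n); h[0], h[6] = 1.0, 0.9                          # toy multipath channel
rx = np.real(np.fft.ifft(np.fft.fft(tx) * np.fft.fft(h)))       # distorted echo
rx += 0.01 * rng.normal(size=n)                                 # additive noise

TX, RX = np.fft.fft(tx), np.fft.fft(rx)
lam = 1e-3 * np.max(np.abs(TX)) ** 2                            # regularisation
H_est = np.conj(TX) * RX / (np.abs(TX) ** 2 + lam)              # Wiener channel estimate
corrected = np.real(np.fft.ifft(np.conj(H_est) * RX / (np.abs(H_est) ** 2 + lam)))

def ncc(x, y):
    """Normalised correlation at zero lag."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
```

After correction, the echo correlates with the transmit waveform far better than the raw multipath-distorted echo does, which is the coherence recovery the matched filter then exploits.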
NASA Astrophysics Data System (ADS)
Zubarev, A. E.; Nadezhdina, I. E.; Brusnikin, E. S.; Karachevtseva, I. P.; Oberst, J.
2016-09-01
A new technique for the generation of coordinate control point networks, based on photogrammetric processing of heterogeneous planetary images (obtained at different times and scales, with different illumination, or with oblique views), is developed. The technique is verified with the example of processing the heterogeneous information obtained by remote sensing of Ganymede by the Voyager-1, Voyager-2 and Galileo spacecraft. Using this technique, the first 3D control point network for Ganymede is formed: the error of the altitude coordinates obtained as a result of adjustment is less than 5 km. The new control point network makes it possible to obtain basic geodetic parameters of the body (axis sizes) and to estimate forced librations. On the basis of the control point network, digital terrain models (DTMs) with different resolutions are generated and used for mapping the surface of Ganymede at different levels of detail (Zubarev et al., 2015b).
Advanced multilateration theory, software development, and data processing: The MICRODOT system
NASA Technical Reports Server (NTRS)
Escobal, P. R.; Gallagher, J. F.; Vonroos, O. H.
1976-01-01
The process of geometric parameter estimation to accuracies of one centimeter, i.e., multilateration, is defined and applications are listed. A brief functional explanation of the theory is presented. Next, various multilateration systems are described in order of increasing system complexity. Expected systems accuracy is discussed from a general point of view and a summary of the errors is listed. An outline of the design of a software processing system for multilateration, called MICRODOT, is presented next. The links of this software, which can be used for multilateration data simulations or operational data reduction, are examined on an individual basis. Functional flow diagrams are presented to aid in understanding the software capability. MICRODOT capability is described with respect to vehicle configurations, interstation coordinate reduction, geophysical parameter estimation, and orbit determination. Numerical results obtained from MICRODOT via data simulations are displayed both for hypothetical and real world vehicle/station configurations such as used in the GEOS-3 Project. These simulations show the inherent power of the multilateration procedure.
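The multilateration principle described above reduces to nonlinear least squares: given measured ranges from several stations, iterate linearised corrections to the position estimate. The sketch below uses a synthetic station geometry and error-free ranges; it illustrates the principle only and is not the MICRODOT software.

```python
import numpy as np

# Gauss-Newton multilateration: position fix from station ranges (synthetic).
stations = np.array([[0.0, 0.0, 10.0], [100.0, 0.0, 0.0],
                     [0.0, 100.0, 5.0], [100.0, 100.0, 20.0]])
truth = np.array([40.0, 55.0, 30.0])
ranges = np.linalg.norm(stations - truth, axis=1)   # "measured" ranges

x = np.array([50.0, 50.0, 15.0])                    # initial position guess
for _ in range(25):                                 # Gauss-Newton iterations
    diff = x - stations
    pred = np.linalg.norm(diff, axis=1)             # predicted ranges
    J = diff / pred[:, None]                        # Jacobian d(range)/d(position)
    dx, *_ = np.linalg.lstsq(J, ranges - pred, rcond=None)
    x = x + dx                                      # linearised correction
```

With four stations and three unknowns the system is overdetermined, which is what lets real systems also absorb measurement error; centimetre-level accuracy in practice then hinges on range measurement quality and geometry, as the report discusses.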
Using diurnal temperature signals to infer vertical groundwater-surface water exchange
Irvine, Dylan J.; Briggs, Martin A.; Lautz, Laura K.; Gordon, Ryan P.; McKenzie, Jeffrey M.; Cartwright, Ian
2017-01-01
Heat is a powerful tracer to quantify fluid exchange between surface water and groundwater. Temperature time series can be used to estimate pore water fluid flux, and techniques can be employed to extend these estimates to produce detailed plan-view flux maps. Key advantages of heat tracing include cost-effective sensors and ease of data collection and interpretation, without the need for expensive and time-consuming laboratory analyses or induced tracers. While the collection of temperature data in saturated sediments is relatively straightforward, several factors influence the reliability of flux estimates that are based on time series analysis (diurnal signals) of recorded temperatures. Sensor resolution and deployment are particularly important in obtaining robust flux estimates in upwelling conditions. Also, processing temperature time series data involves a sequence of complex steps, including filtering temperature signals, selection of appropriate thermal parameters, and selection of the optimal analytical solution for modeling. This review provides a synthesis of heat tracing using diurnal temperature oscillations, including details on optimal sensor selection and deployment, data processing, model parameterization, and an overview of computing tools available. Recent advances in diurnal temperature methods also provide the opportunity to determine local saturated thermal diffusivity, which can improve the accuracy of fluid flux modeling and sensor spacing, which is related to streambed scour and deposition. These parameters can also be used to determine the reliability of flux estimates from the use of heat as a tracer.
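The core signal-processing step described above, isolating the diurnal component at each sensor depth, can be sketched with a direct Fourier projection at one cycle per day. The synthetic records below use an assumed damping and lag, not field data; amplitude ratio and phase lag are the quantities the analytical flux solutions then consume.

```python
import numpy as np

# Extracting diurnal amplitude and phase from paired temperature records.
# Synthetic shallow/deep series with assumed damping (x0.25) and lag (3 h).
dt_h = 0.25                                   # 15-minute sampling, in hours
t = np.arange(0.0, 96.0, dt_h)                # four full days
omega = 2.0 * np.pi / 24.0                    # one cycle per day
shallow = 15.0 + 4.0 * np.sin(omega * t)
deep = 15.0 + 1.0 * np.sin(omega * (t - 3.0))

def diurnal_component(series):
    """Complex amplitude of the 1 cycle/day component (DFT projection)."""
    return 2.0 * np.mean(series * np.exp(-1j * omega * t))

a_sh, a_dp = diurnal_component(shallow), diurnal_component(deep)
amp_ratio = abs(a_dp) / abs(a_sh)             # input to amplitude-ratio flux models
lag_h = np.angle(a_sh / a_dp) / omega         # phase lag in hours
```

Projecting over an integer number of days makes the mean temperature and harmonics drop out exactly, which is why record length and windowing matter in the processing sequence the review describes.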
Estimating Soil Moisture Using Polsar Data: a Machine Learning Approach
NASA Astrophysics Data System (ADS)
Khedri, E.; Hasanlou, M.; Tabatabaeenejad, A.
2017-09-01
Soil moisture is an important parameter that affects several environmental processes. It has many important functions in numerous sciences including agriculture, hydrology, aerology, flood prediction, and drought occurrence. However, field measurement of soil moisture is not feasible across vast agricultural territories, owing to its difficulty and high cost as well as the spatial and temporal variability of soil moisture. Polarimetric synthetic aperture radar (PolSAR) imaging is a powerful tool for estimating soil moisture: these images provide a wide field of view and high spatial resolution. In this study, a support vector regression (SVR) model for estimating soil moisture is proposed based on data acquired by AIRSAR in 2003 in the C, L, and P channels. Sequential forward selection (SFS) and sequential backward selection (SBS) are evaluated to select suitable features of the polarimetric image dataset for efficient modeling. The estimates are compared with in-situ data; results show that the SBS-SVR method achieves higher modeling accuracy than the SFS-SVR model. Statistical parameters obtained from this method show an R2 of 97% and an RMSE lower than 0.00041 (m3/m3) for the P, L, and C channels, providing better accuracy than other feature selection algorithms.
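The sequential forward selection loop can be sketched generically (a minimal sketch; in the paper the scoring function would wrap SVR cross-validation accuracy, whereas the toy additive weights in the test below are invented for illustration):

```python
def sequential_forward_selection(features, score, k):
    """Greedy SFS: grow the feature subset one feature at a time, at each
    step keeping the candidate that maximizes score(subset)."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Sequential backward selection is the mirror image: start from the full set and greedily drop the feature whose removal hurts the score least.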
GF-7 Imaging Simulation and Dsm Accuracy Estimate
NASA Astrophysics Data System (ADS)
Yue, Q.; Tang, X.; Gao, X.
2017-05-01
The GF-7 satellite is a two-line-array stereo imaging satellite for surveying and mapping scheduled for launch in 2018. Its resolution is about 0.8 m at the subastral point, corresponding to a swath width of 20 km, and the viewing angles of its forward and backward cameras are 5 and 26 degrees. This paper proposes an imaging simulation method for GF-7 stereo images. WorldView-2 stereo images were used as the basic data for simulation; that is, instead of using a DSM and DOM as basic data (an "ortho-to-stereo" method), we used a "stereo-to-stereo" method, which better reflects the geometric and radiometric differences between looking angles. Its shortcoming is that geometric error arises from two factors: the different looking angles of the basic and simulated images, and inaccurate or absent ground reference data. We generated a DSM from the WorldView-2 stereo images. This WorldView-2 DSM was used not only as the reference DSM to assess the accuracy of the DSM generated from simulated GF-7 stereo images, but also as "ground truth" to establish the relationship between WorldView-2 image points and simulated image points. Static MTF was simulated on the instantaneous focal-plane "image" by filtering. SNR was simulated in the electronic sense: the digital value of a WorldView-2 image point was converted to radiance and used as the radiance seen by the simulated GF-7 camera. This radiance was converted to an electron count n according to the physical parameters of the GF-7 camera, and a noise electron count n1 was drawn as a random number between -√n and √n. The overall electron count collected by the TDI CCD was then accumulated and converted to the digital value of the simulated GF-7 image. Sinusoidal curves with different amplitudes, frequencies and initial phases were used as attitude curves, and geometric installation errors of the CCD tiles were also simulated, considering rotation and translation factors. 
An accuracy estimate was made for the DSM generated from the simulated images.
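The SNR step of the simulation chain (radiance to electrons to noisy digital number) can be sketched for a single pixel as follows (a hedged toy model of the abstract's noise rule; the gain, full-well and quantization values are invented, not GF-7 camera parameters):

```python
import math, random

def simulate_pixel_dn(radiance, gain_e, full_well, rng, dn_levels=1024):
    """Radiance -> mean signal electrons n -> noise drawn uniformly in
    [-sqrt(n), sqrt(n)] (the noise model stated in the abstract) ->
    clipped, quantized digital number. All constants are illustrative."""
    n = radiance * gain_e                              # linear camera model assumed
    noisy = n + rng.uniform(-math.sqrt(n), math.sqrt(n))
    electrons = min(max(noisy, 0.0), full_well)        # clip to the full well
    return round(electrons / full_well * (dn_levels - 1))
```

A physically rigorous simulator would use Poisson shot noise plus read noise; the uniform ±√n draw above mirrors the simplified rule described in the abstract.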
Brandsch, Rainer
2017-10-01
Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants based on mass-transfer parameters like diffusion and partition coefficients related to individual materials. In most cases, mass-transfer parameters are not readily available from the literature and for this reason are estimated with a given uncertainty. Historically, uncertainty was accounted for by upper-limit concepts, which turned out to be of limited applicability because they grossly overestimate migration. Probabilistic migration modelling makes it possible to consider the uncertainty of the mass-transfer parameters as well as of other model inputs. With respect to a functional barrier, the most important parameters among others are the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and applies Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (diffusion coefficient and layer thickness), predicts migration results with associated uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented in view of three case studies: (1) sensitivity analysis, (2) functional barrier efficiency and (3) validation by experimental testing. Based on the migration predicted by probabilistic modelling and the related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts is possible, and associated migration risks and potential safety concerns can be identified at an early stage of packaging development. Furthermore, dedicated selection of materials exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
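A minimal Monte Carlo sketch of the idea: sample the barrier thickness and diffusion coefficient from assumed input distributions and propagate them to a derived quantity, here the classical diffusion lag time t = L^2/(6D). The distributions, parameter values and units below are illustrative assumptions, not validated FCM data:

```python
import math, random

def lag_time_days(n, rng):
    """Monte Carlo samples of the functional-barrier diffusion lag time
    t = L^2 / (6 D), returned in days. Input distributions are assumed:
    thickness L ~ Normal(100 um, 5 um), D ~ Lognormal(median 1e-14 m^2/s)."""
    samples = []
    for _ in range(n):
        L = rng.gauss(100e-6, 5e-6)                       # thickness, m
        D = math.exp(rng.gauss(math.log(1e-14), 0.5))     # diffusion coeff., m^2/s
        samples.append(L*L / (6.0*D) / 86400.0)           # seconds -> days
    return samples
```

Percentiles of the output distribution (median, 95th) then play the role of the confidence intervals described in the abstract.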
Leaf Area Index Estimation Using Chinese GF-1 Wide Field View Data in an Agriculture Region.
Wei, Xiangqin; Gu, Xingfa; Meng, Qingyan; Yu, Tao; Zhou, Xiang; Wei, Zheng; Jia, Kun; Wang, Chunmei
2017-07-08
Leaf area index (LAI) is an important vegetation parameter that characterizes leaf density and canopy structure, and plays an important role in global change studies, land surface process simulation and agriculture monitoring. The wide field view (WFV) sensor on board the Chinese GF-1 satellite can acquire multi-spectral data with decametric spatial resolution, high temporal resolution and wide coverage, which are valuable data sources for dynamic monitoring of LAI. Therefore, an automatic LAI estimation algorithm for GF-1 WFV data was developed based on a radiative transfer model, and its estimation accuracy was assessed in an agriculture region with maize as the dominant crop type. The radiative transfer model was first used to simulate the physical relationship between canopy reflectance and LAI under different soil and vegetation conditions, forming a training sample dataset. Neural networks (NNs) were then used to develop the LAI estimation algorithm from this dataset, with the green, red and near-infrared band reflectances of GF-1 WFV data as the input variables and the corresponding LAI as the output variable. Validation against field LAI measurements in the agriculture region indicated that the algorithm achieved satisfactory results (R² = 0.818, RMSE = 0.50). In addition, the developed algorithm has the potential to operationally generate LAI datasets from GF-1 WFV land surface reflectance data, providing high spatial and temporal resolution LAI data for agriculture, ecosystem and environmental management research.
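The train-on-simulations workflow can be sketched in miniature. Here a toy reflectance generator stands in for the radiative transfer model, and a one-variable least-squares fit of LAI on NDVI stands in for the neural network; every coefficient below is invented for illustration:

```python
import random

def toy_reflectance(lai, soil, rng):
    """Toy red/NIR generator standing in for a radiative-transfer
    simulation (NOT a real canopy model; coefficients are invented)."""
    red = 0.10 - 0.015*lai + 0.06*soil + rng.gauss(0.0, 0.002)
    nir = 0.15 + 0.060*lai + 0.03*soil + rng.gauss(0.0, 0.002)
    return red, nir

def fit_line(xs, ys):
    """Ordinary least-squares line y = a + b*x (stand-in for the NN)."""
    n = len(xs)
    mx, my = sum(xs)/n, sum(ys)/n
    b = sum((x - mx)*(y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx)**2 for x in xs)
    return my - b*mx, b

# Build a simulated training set over varying LAI and soil background,
# then regress LAI on NDVI = (NIR - red) / (NIR + red).
rng = random.Random(0)
ndvi, lai_true = [], []
for _ in range(500):
    lai, soil = rng.uniform(0.0, 6.0), rng.uniform(0.2, 0.8)
    red, nir = toy_reflectance(lai, soil, rng)
    ndvi.append((nir - red)/(nir + red))
    lai_true.append(lai)
a, b = fit_line(ndvi, lai_true)
```

The real algorithm replaces both stand-ins (physical simulations for the forward model, an NN for the inverse mapping) but keeps this simulate-then-fit structure.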
Hakim, Alex D.
2011-01-01
To record sleep, actigraph devices are worn on the wrist and record movements that can be used to estimate sleep parameters with specialized algorithms in computer software programs. With the recent establishment of a Current Procedural Terminology code for wrist actigraphy, this technology is being used increasingly in clinical settings as actigraphy has the advantage of providing objective information on sleep habits in the patient’s natural sleep environment. Actigraphy has been well validated for the estimation of nighttime sleep parameters across age groups, but the validity of the estimation of sleep-onset latency and daytime sleeping is limited. Clinical guidelines and research suggest that wrist actigraphy is particularly useful in the documentation of sleep patterns prior to a multiple sleep latency test, in the evaluation of circadian rhythm sleep disorders, to evaluate treatment outcomes, and as an adjunct to home monitoring of sleep-disordered breathing. Actigraphy has also been well studied in the evaluation of sleep in the context of depression and dementia. Although actigraphy should not be viewed as a substitute for clinical interviews, sleep diaries, or overnight polysomnography when indicated, it can provide useful information about sleep in the natural sleep environment and/or when extended monitoring is clinically indicated. PMID:21652563
Zan, Yunlong; Long, Yong; Chen, Kewei; Li, Biao; Huang, Qiu; Gullberg, Grant T
2017-07-01
Our previous work has found that quantitative analysis of 123I-MIBG kinetics in the rat heart with dynamic single-photon emission computed tomography (SPECT) offers the potential to quantify innervation integrity at an early stage of left ventricular hypertrophy. However, conventional protocols involving a long acquisition time for dynamic imaging reduce the animal survival rate and thus make longitudinal analysis difficult. The goal of this work was to develop a procedure to reduce the total acquisition time by selecting nonuniform acquisition times for projection views while maintaining the accuracy and precision of the estimated physiologic parameters. Taking dynamic cardiac imaging with 123I-MIBG in rats as an example, we generated time activity curves (TACs) of regions of interest (ROIs) as ground truths based on a direct four-dimensional reconstruction of experimental data acquired from a rotating SPECT camera, where TACs represented as the coefficients of B-spline basis functions were used to estimate compartmental model parameters. By iteratively adjusting the knots (i.e., control points) of the B-spline basis functions, new TACs were created according to two criteria: accuracy and precision. The accuracy criterion allocates the knots to achieve low relative entropy between the estimated left ventricular blood pool TAC and its ground truth, so that the estimated input function approximates its real value and the procedure yields an accurate estimate of the model parameters. The precision criterion, via the D-optimal method, forces the estimated parameters to be as precise as possible, with minimum variances. Based on the final knots obtained, a new 30-min protocol was built with a shorter acquisition time that kept the error in estimating the rate constants of the compartment model within 5%. This was evaluated through digital simulations. 
The simulation results showed that our method was able to reduce the acquisition time from 100 to 30 min for the cardiac study of rats with 123I-MIBG. Compared to a uniform-interval dynamic SPECT protocol (1 s acquisition interval, 30 min acquisition time), the newly proposed nonuniform-interval protocol achieved comparable (P = 0.5745 for K1 and P = 0.0604 for k2) or better (distribution volume, DV, P = 0.0004) performance for parameter estimates with less storage and shorter computational time. In this study, a procedure was devised to shorten the acquisition time while maintaining the accuracy and precision of estimated physiologic parameters in dynamic SPECT imaging. The procedure was designed for 123I-MIBG cardiac imaging in rat studies; however, it has the potential to be extended to other applications, including patient studies involving the acquisition of dynamic SPECT data. © 2017 American Association of Physicists in Medicine.
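The accuracy criterion relies on the relative entropy between an estimated TAC and its ground truth. For discretized, strictly positive curves this is the Kullback-Leibler divergence of the area-normalized samples (a minimal sketch of the metric only, not the authors' reconstruction code; the curve values in the test are invented):

```python
import math

def relative_entropy(p_tac, q_tac):
    """KL divergence between two time-activity curves after normalizing
    each to unit area. Both curves must be positive and equally sampled."""
    sp, sq = float(sum(p_tac)), float(sum(q_tac))
    return sum((p/sp) * math.log((p/sp) / (q/sq))
               for p, q in zip(p_tac, q_tac))
```

The knot-placement loop would evaluate this divergence for each candidate knot set and keep the allocation that drives it toward zero.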
NASA Technical Reports Server (NTRS)
Lisano, Michael E.
2007-01-01
Recent literature in applied estimation theory reflects growing interest in the sigma-point (also called 'unscented') formulation for optimal sequential state estimation, often describing performance comparisons with extended Kalman filters as applied to specific dynamical problems (cf. [1, 2, 3]). Favorable attributes of sigma-point filters are described as including a lower expected error for nonlinear, even non-differentiable, dynamical systems, and a straightforward formulation that does not require derivation or implementation of any partial-derivative Jacobian matrices. These attributes are particularly attractive, e.g. in terms of enabling simplified code architecture and streamlined testing, in the formulation of estimators for nonlinear spaceflight mechanics systems, such as filter software onboard deep-space robotic spacecraft. As presented in [4], the Sigma-Point Consider Filter (SPCF) algorithm extends the sigma-point filter algorithm to the problem of consider covariance analysis. Considering parameters in a dynamical system, while estimating its state, provides an upper bound on the estimated state covariance, which is viewed as a conservative approach to designing estimators for problems of general guidance, navigation and control. This is because, whether a parameter in the system model is observable or not, error in the knowledge of the value of a non-estimated parameter will increase the actual uncertainty of the estimated state of the system beyond the level formally indicated by the covariance of an estimator that neglects errors or uncertainty in that parameter. The equations for SPCF covariance evolution are obtained in a fashion similar to the derivation approach taken with standard (i.e. linearized or extended) consider-parameterized Kalman filters (cf. [5]). 
While in [4] the SPCF and the linear-theory consider filter (LTCF) were applied to an illustrative linear-dynamics/linear-measurement problem, the present work examines the SPCF as applied to nonlinear sequential consider covariance analysis, i.e., in the presence of nonlinear dynamics and nonlinear measurements. A simple SPCF for orbit determination, exemplifying an algorithm hosted in the guidance, navigation and control (GN&C) computer processor of a hypothetical robotic spacecraft, was implemented and compared with an identically parameterized (standard) extended, consider-parameterized Kalman filter. The onboard filtering scenario examined is a hypothetical spacecraft orbit about a small natural body with imperfectly known mass. The formulations, relative complexities, and performances of the filters are compared and discussed.
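The sigma-point machinery underlying the SPCF can be illustrated with the plain unscented transform (a minimal sketch: consider-parameter augmentation is not shown, and the scaling defaults here are chosen for clarity rather than taken from [4]):

```python
import math

def cholesky(A):
    """Lower-triangular L with L L^T = A (A symmetric positive-definite)."""
    n = len(A)
    L = [[0.0]*n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k]*L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s)/L[j][j]
    return L

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through f using 2n+1 deterministically
    chosen sigma points; no Jacobians of f are required."""
    n = len(mean)
    lam = alpha*alpha*(n + kappa) - n
    L = cholesky([[(n + lam)*c for c in row] for row in cov])
    pts = [list(mean)]
    for j in range(n):
        pts.append([m + L[i][j] for i, m in enumerate(mean)])
        pts.append([m - L[i][j] for i, m in enumerate(mean)])
    wm = [lam/(n + lam)] + [0.5/(n + lam)]*(2*n)
    wc = [wm[0] + (1.0 - alpha*alpha + beta)] + wm[1:]
    ys = [f(p) for p in pts]
    d = len(ys[0])
    m_out = [sum(w*y[i] for w, y in zip(wm, ys)) for i in range(d)]
    c_out = [[sum(w*(y[i] - m_out[i])*(y[j] - m_out[j]) for w, y in zip(wc, ys))
              for j in range(d)] for i in range(d)]
    return m_out, c_out
```

For a linear map the transform reproduces the exact propagated mean and covariance, which makes a convenient sanity check.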
Variability-aware compact modeling and statistical circuit validation on SRAM test array
NASA Astrophysics Data System (ADS)
Qiao, Ying; Spanos, Costas J.
2016-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated in a collaborating foundry's 28 nm FDSOI technology. Our in-house customized Monte Carlo simulation bench incorporates these statistical compact models, and the simulated distribution of SRAM writability performance closely matches measurements. Our proposed statistical compact model parameter extraction methodology also has the potential to predict non-Gaussian behavior in statistical circuit performance through mixtures of Gaussian distributions.
NASA Astrophysics Data System (ADS)
Hou, W. Z.; Li, Z. Q.; Zheng, F. X.; Qie, L. L.
2018-04-01
This paper evaluates the information content for the retrieval of key aerosol microphysical and surface properties from multispectral single-viewing satellite polarimetric measurements centered at 410, 443, 555, 670, 865, 1610 and 2250 nm over bright land. To conduct the information content analysis, synthetic data are simulated by the Unified Linearized Vector Radiative Transfer Model (UNLVTM), with intensity and polarization together, over bare soil surfaces for various scenarios. Following optimal estimation theory, a principal component analysis method is employed to reconstruct the multispectral surface reflectance from 410 nm to 2250 nm, and then integrated with a linear one-parametric BPDF model to represent the contribution of polarized surface reflectance, thus further decoupling the surface-atmosphere contribution from the TOA measurements. Focusing on two different aerosol models with an aerosol optical depth of 0.8 at 550 nm, the total DFS and the DFS component of each retrieved aerosol and surface parameter are analysed. The DFS results show that the key aerosol microphysical properties, such as the fine- and coarse-mode columnar volume concentration, the effective radius and the real part of the complex refractive index at 550 nm, could be well retrieved simultaneously with the surface parameters over the bare soil surface type. The findings of this study can provide guidance for inversion algorithm development over bright land surfaces by making full use of single-viewing satellite polarimetric measurements.
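The DFS quantities discussed above follow from linear-Gaussian optimal estimation theory: DFS = trace(A), with averaging kernel A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K. The sketch below assumes diagonal measurement and prior covariances and uses an invented toy Jacobian, not UNLVTM output:

```python
def mat_mul(A, B):
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_inv(A):
    """Gauss-Jordan inverse with partial pivoting (small matrices only)."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c:
                M[r] = [v - M[r][c]*w for v, w in zip(M[r], M[c])]
    return [row[n:] for row in M]

def degrees_of_freedom(K, se_diag, sa_diag):
    """DFS = trace(A), A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K,
    with Se and Sa diagonal (variances given as lists)."""
    KT = [list(col) for col in zip(*K)]
    KtSeK = mat_mul([[v/s for v, s in zip(row, se_diag)] for row in KT], K)
    B = [[KtSeK[i][j] + (1.0/sa_diag[i] if i == j else 0.0)
          for j in range(len(sa_diag))] for i in range(len(sa_diag))]
    A = mat_mul(mat_inv(B), KtSeK)
    return sum(A[i][i] for i in range(len(A)))
```

A weak prior lets the measurements determine the parameters (DFS near the number of parameters); a very tight prior drives DFS toward zero.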
Analysis of Seasonal Chlorophyll-a Using An Adjoint Three-Dimensional Ocean Carbon Cycle Model
NASA Astrophysics Data System (ADS)
Tjiputra, J.; Winguth, A.; Polzin, D.
2004-12-01
The misfit between a numerical ocean model and observations can be reduced using data assimilation, achieved by optimizing the model parameter values with an adjoint model. The adjoint model minimizes the model-data misfit by estimating the sensitivity, or gradient, of the cost function with respect to initial conditions, boundary conditions, or parameters. The adjoint technique was used to assimilate seasonal chlorophyll-a data from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) satellite into the marine biogeochemical model HAMOCC5.1. An Identical Twin Experiment (ITE) was conducted to test the robustness of the model and the degree of non-linearity of the forward model. The ITE successfully recovered most of the perturbed parameters to their initial values, and identified the most sensitive ecosystem parameters, which contribute significantly to model-data bias. The regional assimilations of SeaWiFS chlorophyll-a data into the model were able to reduce the model-data misfit (i.e. the cost function) significantly. The cost function reduction mostly occurred in the high latitudes (e.g. the model-data misfit in the northern region during the summer season was reduced by 54%). On the other hand, the equatorial regions appear to be relatively stable, with no strong reduction in cost function. The optimized parameter set is used to forecast the carbon fluxes between marine ecosystem compartments (e.g. phytoplankton, zooplankton, nutrients, particulate organic carbon, and dissolved organic carbon). The a posteriori model run using the regional best-fit parameterization yields approximately 36 PgC/yr of global net primary production in the euphotic zone.
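The adjoint's role is to supply the gradient of the cost function with respect to the parameters. The idea can be sketched on a toy one-harmonic "chlorophyll" model where the exact gradient is available in closed form (illustrative only, not HAMOCC5.1; a real adjoint computes the same gradient for a full 3-D simulator):

```python
import math

def model(p, t):
    """Toy seasonal chlorophyll model: annual mean plus one harmonic."""
    return p[0] + p[1]*math.sin(2.0*math.pi*t/12.0)

def cost_and_gradient(p, data):
    """Quadratic model-data misfit J and its exact gradient, which is
    the quantity an adjoint model provides for a real simulator."""
    J, g0, g1 = 0.0, 0.0, 0.0
    for t, obs in data:
        r = model(p, t) - obs
        J += r*r
        g0 += 2.0*r
        g1 += 2.0*r*math.sin(2.0*math.pi*t/12.0)
    return J, (g0, g1)

def optimize(p, data, lr=0.01, steps=2000):
    """Steepest-descent minimization of the cost function."""
    for _ in range(steps):
        _, (g0, g1) = cost_and_gradient(p, data)
        p = (p[0] - lr*g0, p[1] - lr*g1)
    return p
```

With noise-free synthetic observations the descent recovers the generating parameters, which is exactly the structure of an identical twin experiment.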
NASA Astrophysics Data System (ADS)
Ermida, Sofia; DaCamara, Carlos C.; Trigo, Isabel F.; Pires, Ana C.; Ghent, Darren
2017-04-01
Land Surface Temperature (LST) is a key climatological variable and a diagnostic parameter of land surface conditions. Remote sensing constitutes the most effective method to observe LST over large areas and on a regular basis. Although LST estimation from remote sensing instruments operating in the infrared (IR) is widely used and has been performed for nearly 3 decades, there is still a number of open issues. One of these is the dependence of LST on viewing and illumination geometry. This effect introduces significant discrepancies among LST estimations from different sensors, overlapping in space and time, that are not related to uncertainties in the methodologies or input data used. Furthermore, these directional effects deviate LST products from an ideally defined LST, which should represent the ensemble of directional radiometric temperatures of all surface elements within the FOV. Angular effects on LST are here conveniently estimated by means of a kernel model of the surface thermal emission, which describes the angular dependence of LST as a function of viewing and illumination geometry. The model is calibrated using LST data provided by a wide range of sensors to optimize spatial coverage, namely: 1) a LEO sensor, the Moderate Resolution Imaging Spectroradiometer (MODIS) on board NASA's TERRA and AQUA; and 2) 3 GEO sensors, the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board EUMETSAT's Meteosat Second Generation (MSG), the Japanese Advanced Meteorological Imager (JAMI) on board the Japan Meteorological Agency (JMA) Multifunction Transport SATellite (MTSAT-2), and NASA's Geostationary Operational Environmental Satellites (GOES). As shown in our previous feasibility studies, the sampling of illumination and view angles has a high impact on the obtained model parameters. This impact may be mitigated when the sampling size is increased by aggregating pixels with similar surface conditions. 
Here we propose a methodology where the land surface is stratified by means of a cluster analysis using information on land cover type, fraction of vegetation cover and topography. The kernel model is then adjusted to the LST data corresponding to each cluster. It is shown that the quality of the cluster-based kernel model is very close to the pixel-based one. Furthermore, the reduced number of parameters (limited to the number of identified clusters, instead of a pixel-by-pixel model calibration) allows the kernel model to be improved through the incorporation of a seasonal component. The application of the procedure discussed here to the harmonization of LST products from multiple sensors is carried out in the framework of the ESA DUE GlobTemperature project.
Detection, Identification, Location, and Remote Sensing Using SAW RFID Sensor Tags
NASA Technical Reports Server (NTRS)
Barton, Richard J.; Kennedy, Timothy F.; Williams, Robert M.; Fink, Patrick W.; Ngo, Phong H.
2009-01-01
The Electromagnetic Systems Branch (EV4) of the Avionic Systems Division at NASA Johnson Space Center in Houston, TX is studying the utility of surface acoustic wave (SAW) radiofrequency identification (RFID) tags for multiple wireless applications including detection, identification, tracking, and remote sensing of objects on the lunar surface, monitoring of environmental test facilities, structural shape and health monitoring, and nondestructive test and evaluation of assets. For all of these applications, it is anticipated that the system utilized to interrogate the SAW RFID tags may need to operate at fairly long range and in the presence of considerable multipath and multiple-access interference. Towards that end, EV4 is developing a prototype SAW RFID wireless interrogation system for use in such environments called the Passive Adaptive RFID Sensor Equipment (PARSEQ) system. The system utilizes a digitally beam-formed planar receiving antenna array to extend range and provide direction-of-arrival information, coupled with an approximate maximum-likelihood signal processing algorithm to provide near-optimal estimation of both range and temperature. The system is capable of forming a large number of beams within the field of view and resolving the information from several tags within each beam. The combination of both spatial and waveform discrimination provides the capability to track and monitor telemetry from a large number of objects appearing simultaneously within the field of view of the receiving array. In this paper, we will consider the application of the PARSEQ system to the problem of simultaneous detection, identification, localization, and temperature estimation for multiple objects. We will summarize the overall design of the PARSEQ system and present a detailed description of the design and performance of the signal detection and estimation algorithms incorporated in the system. 
The system is currently configured only to measure temperature (jointly with range and tag ID), but future versions will be revised to measure parameters other than temperature as SAW tags capable of interfacing with external sensors become available. It is anticipated that the estimation of arbitrary parameters measured using SAW-based sensors will be based on techniques very similar to the joint range and temperature estimation techniques described in this paper.
An analytically based numerical method for computing view factors in real urban environments
NASA Astrophysics Data System (ADS)
Lee, Doo-Il; Woo, Ju-Wan; Lee, Sang-Hyun
2018-01-01
A view factor is an important morphological parameter used in parameterizing the in-canyon radiative energy exchange process as well as in characterizing local climate over urban environments. For realistic representation of the in-canyon radiative processes, a complete set of view factors at the horizontal and vertical surfaces of urban facets is required. Various analytical and numerical methods have been suggested to determine the view factors for urban environments, but most of the methods provide only the sky-view factor at the ground level of a specific location or assume a simplified morphology of complex urban environments. In this study, a numerical method that can determine the sky-view factors (ψ_ga and ψ_wa) and wall-view factors (ψ_gw and ψ_ww) at the horizontal and vertical surfaces is presented for application to real urban morphology; it is derived from an analytical formulation of the view factor between two blackbody surfaces of arbitrary geometry. The established numerical method is validated against analytical sky-view factor estimates for ideal street canyon geometries, showing good accuracy with errors of less than 0.2%. Using a three-dimensional building database, the numerical method is also demonstrated to be applicable in determining the sky-view factors at the horizontal (roofs and roads) and vertical (walls) surfaces in real urban environments. The results suggest that this analytically based numerical method can be used for the radiative process parameterization of urban numerical models as well as for the characterization of local urban climate.
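The double-area view-factor integral F12 = (1/A1) ∫∫ cosθ1 cosθ2 / (π s²) dA2 dA1 can be estimated numerically by Monte Carlo sampling. The sketch below treats two coaxial parallel unit squares, for which the analytic value at unit separation is about 0.1998 (a simplified check case, not the paper's urban-morphology code):

```python
import math, random

def view_factor_mc(h, n, rng):
    """Monte Carlo estimate of F12 between two coaxial parallel unit
    squares a distance h apart. For parallel planes cos(theta1) =
    cos(theta2) = h/s, so the integrand reduces to h^2 / (pi * s^4)."""
    acc = 0.0
    for _ in range(n):
        x1, y1 = rng.random(), rng.random()   # uniform point on surface 1
        x2, y2 = rng.random(), rng.random()   # uniform point on surface 2
        s2 = (x1 - x2)**2 + (y1 - y2)**2 + h*h
        acc += h*h / (math.pi * s2 * s2)
    return acc / n   # both areas are 1, so the sample mean is F12
```

The same sampling idea extends to arbitrary facet pairs once visibility (occlusion by other buildings) is tested for each point pair.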
Kotasidis, F A; Mehranian, A; Zaidi, H
2016-05-07
Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. 
Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.
NASA Astrophysics Data System (ADS)
Poudyal, R.; Singh, M. K.; Gatebe, C. K.; Gautam, R.; Varnai, T.
2015-12-01
In this study, airborne Cloud Absorption Radiometer (CAR) reflectance measurements of smoke are used to establish an empirical relationship between reflectances measured at different sun-view geometries. It is observed that the reflectance of smoke aerosol at any viewing zenith angle can be computed using a linear combination of reflectances at two viewing zenith angles, one less than 30° and the other greater than 60°. We found that the coefficients of this linear combination follow a third-order polynomial function of the viewing geometry. Similar relationships were also established for different relative azimuth angles: reflectance at any azimuth angle can be written as a linear combination of measurements at two azimuth angles, one in the forward scattering direction and the other in backward scattering, with both close to the principal plane. These relationships allowed us to create an Angular Distribution Model (ADM) for smoke, which can estimate reflectances in any direction based on measurements taken in four view directions. The model was tested by calculating the ADM parameters using CAR data from the SCAR-B campaign, and applying these parameters to different smoke cases at three spectral channels (340 nm, 380 nm and 470 nm). We also tested our modelled smoke ADM formulas with the Absorbing Aerosol Index (AAI) computed directly from the CAR data at 340 nm and 380 nm, which is probably the first study to analyze the complete multi-angular distribution of AAI for smoke aerosols. The RMSE (and mean error) of predicted reflectance for the SCAR-B and ARCTAS smoke ADMs were found to be 0.002 (1.5%) and 0.047 (6%), respectively. The accuracy of the ADM formulation is also tested through radiative transfer simulations for a wide variety of situations (varying smoke loading, underlying surface types, etc.).
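The two-angle linear combination can be fitted in closed form by least squares (a minimal sketch on synthetic reflectance vectors; the third-order polynomial dependence of the coefficients on geometry described in the abstract is not modelled here):

```python
def fit_two_angle_weights(r_target, r_low, r_high):
    """Least-squares a, b minimizing sum (r_target - a*r_low - b*r_high)^2,
    solved via the closed-form 2x2 normal equations."""
    s11 = sum(x*x for x in r_low)
    s22 = sum(x*x for x in r_high)
    s12 = sum(x*y for x, y in zip(r_low, r_high))
    t1 = sum(x*y for x, y in zip(r_low, r_target))
    t2 = sum(x*y for x, y in zip(r_high, r_target))
    det = s11*s22 - s12*s12
    return (s22*t1 - s12*t2)/det, (s11*t2 - s12*t1)/det
```

In the ADM, one such pair of weights would be fitted per intermediate viewing angle, and the weights then modelled as polynomials in geometry.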
Anisotropic universe with magnetized dark energy
NASA Astrophysics Data System (ADS)
Goswami, G. K.; Dewangan, R. N.; Yadav, Anil Kumar
2016-04-01
In the present work we have investigated the late-time acceleration of a Universe filled with cosmic fluid and a uniform magnetic field as the source of matter in an anisotropic Heckmann-Schucking space-time. The observed acceleration of the universe is explained by introducing a positive cosmological constant Λ in Einstein's field equations, which is mathematically equivalent to vacuum energy with equation-of-state (EOS) parameter equal to -1. The present values of the matter and dark energy density parameters (Ωm)0 & (Ω_{Λ})0 are estimated from the latest 287 high-redshift (0.3 ≤ z ≤ 1.4) SN Ia supernova observations of apparent magnitude, with their possible errors, taken from the Union 2.1 compilation. It is found that the best-fit values for (Ωm)0 & (Ω_{Λ})0 are 0.2820 & 0.7177 respectively, in good agreement with recent astrophysical observations from the latest surveys such as WMAP [2001-2013], Planck [2015] & BOSS. Various physical parameters, such as the matter and dark energy densities, the present age of the universe and the deceleration parameter, have been obtained on the basis of these values of (Ωm)0 & (Ω_{Λ})0. We also estimate that the acceleration began at z = 0.71131, about 6.2334 Gyr before the present.
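In a flat ΛCDM limit, the onset of acceleration and its lookback time follow directly from the quoted best-fit densities. A quick consistency check, assuming flat ΛCDM and a Hubble constant of 70 km/s/Mpc (the abstract does not quote H0, so the exact numbers differ slightly from the anisotropic model's fit):

```python
import numpy as np

Om0, OL0 = 0.2820, 0.7177          # best-fit densities quoted in the abstract
H0 = 70.0                          # km/s/Mpc; an assumed value (not quoted)
hubble_time_gyr = 9.778 / (H0 / 100.0)   # 1/H0 expressed in Gyr

# Acceleration begins where the deceleration parameter changes sign:
# q(z) = Om(z)/2 - OL(z) = 0  =>  Om0*(1+z)^3 = 2*OL0
z_acc = (2.0 * OL0 / Om0) ** (1.0 / 3.0) - 1.0

# Lookback time to z_acc: integral of dz / ((1+z) E(z)), with E = H/H0.
z = np.linspace(0.0, z_acc, 200001)
integrand = 1.0 / ((1.0 + z) * np.sqrt(Om0 * (1.0 + z) ** 3 + OL0))
lookback_gyr = hubble_time_gyr * np.sum(
    0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))
```

This reproduces the abstract's z ≈ 0.71 and ≈ 6.2 Gyr to within a few percent; the residual offset reflects the assumed H0 and the anisotropic and magnetic-field terms of the full model.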
NASA Astrophysics Data System (ADS)
Belov, S. Yu.; Belova, I. N.
2017-11-01
Monitoring of the earth's surface by remote sensing in the short-wave band can provide quick identification of some characteristics of natural systems. This band allows one to diagnose subsurface features of the earth, as the scattering parameter is affected by irregularities in the dielectric permittivity of subsurface structures. Monitoring probes based on this method can detect changes in these environments, for example to assess seismic hazard, hazardous natural phenomena such as earthquakes, and some man-made hazards. The problem of measuring and accounting for the scattering power of the earth's surface in the short-wave range of radio waves is important for a number of purposes, such as diagnosing properties of the medium, which is of interest for geological and environmental studies. In this paper, we propose a new method for estimating the parameters of the incoherent signal/noise ratio and compare it with the standard method in terms of admissible relative analytical errors. Analysis of the analytical error of this parameter estimate shows that the accuracy of the new method exceeds that of the widely used standard method by about an order of magnitude, and we therefore recommend the new method in its place.
Mclean, Elizabeth L; Forrester, Graham E
2018-04-01
We tested whether fishers' local ecological knowledge (LEK) of two fish life-history parameters, size at maturity (SAM) and maximum body size (MS), was comparable to scientific estimates (SEK) of the same parameters, and whether LEK influenced fishers' perceptions of sustainability. Local ecological knowledge was documented for 82 fishers from a small-scale fishery in Samaná Bay, Dominican Republic, whereas SEK was compiled from the scientific literature. Size at maturity estimates derived from LEK and SEK overlapped for most of the 15 commonly harvested species (10 of 15). In contrast, fishers' maximum size estimates were usually lower than (eight species), or overlapped with (five species), scientific estimates. Fishers' size-based estimates of catch composition indicate greater potential for overfishing than estimates based on SEK. Fishers' estimates of size at capture relative to size at maturity suggest routine inclusion of juveniles in the catch (9 of 15 species), and fishers' estimates suggest that harvested fish are substantially smaller than maximum body size for most species (11 of 15 species). Scientific estimates also suggest that harvested fish are generally smaller than maximum body size (13 of 15), but suggest that the catch is dominated by adults for most species (9 of 15 species), and that juveniles are present in the catch for fewer species (6 of 15). Most Samaná fishers characterized the current state of their fishery as poor (73%) and as having changed for the worse over the past 20 yr (60%). Fishers stated that concern about overfishing, catching small fish, and catching immature fish contributed to these perceptions, indicating a possible influence of catch-size composition on their views.
Future work should test this link more explicitly because we found no evidence that the minority of fishers with more positive perceptions of their fishery reported systematically different estimates of catch-size composition than those with the more negative majority view. Although fishers' and scientific estimates of size at maturity and maximum size parameters sometimes differed, the fact that fishers make routine quantitative assessments of maturity and body size suggests potential for future collaborative monitoring efforts to generate estimates usable by scientists and meaningful to fishers. © 2017 by the Ecological Society of America.
Pazzola, Michele; Cipolat-Gotet, Claudio; Bittante, Giovanni; Cecchinato, Alessio; Dettori, Maria L; Vacca, Giuseppe M
2018-04-01
The present study investigated the effect of somatic cell count, lactose, and pH on sheep milk composition, coagulation properties (MCP), and curd firming (CF) parameters. Individual milk samples were collected from 1,114 Sarda ewes reared in 23 farms. Milk composition, somatic cell count, single-point MCP (rennet coagulation time, RCT; curd firming time, k20; and curd firmness, a30, a45, and a60), and CF model parameters were measured. Phenotypic traits were statistically analyzed using a mixed model to estimate the effects of the different levels of milk somatic cell score (SCS), lactose, and pH, respectively. Additive genetic, herd, and residual correlations among these 3 traits, and with milk composition, MCP, and CF parameters, were inferred using a Bayesian approach. From a phenotypic point of view, higher SCS levels caused a delayed gelification of milk. Lactose concentration and pH were significant for many milk quality traits, with a very intense effect on both coagulation times and curd firming. These traits (RCT, RCT estimated using the curd firming over time equation, and k20) showed an unfavorable increase of about 20% from the highest to the lowest level of lactose. Milk samples with pH values lower than 6.56, versus higher than 6.78, were characterized by an increase of RCT (from 6.00 to 14.3 min) and k20 (from 1.65 to 2.65 min) and a decrease of all 3 curd firmness traits. From a genetic point of view, the marginal posterior distributions of the heritability estimates evidenced a large and exploitable variability for all 3 phenotypes. The mean intra-farm heritability estimates were 0.173 for SCS, 0.418 for lactose content, and 0.206 for pH. Lactose (favorably), and SCS and pH (unfavorably), at phenotypic and genetic levels, were correlated mainly with RCT and with RCT estimated using the curd firming over time equation, and scarcely with the other curd firming traits. The SCS, lactose, and pH were significantly correlated with each other.
In conclusion, the results reported in the present study suggest that SCS, pH, and lactose affect milk quality and MCP simultaneously and independently. These phenotypes, easily available during milk recording schemes via infrared spectra prediction, could be used as potential indicator traits for improving the cheese-making ability of ovine milk. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Vision-guided gripping of a cylinder
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1991-01-01
The motivation for vision-guided servoing is taken from tasks in automated or telerobotic space assembly and construction. Vision-guided servoing requires the ability to perform rapid pose estimates and provide predictive feature tracking. Monocular information from a gripper-mounted camera is used to servo the gripper to grasp a cylinder. The procedure is divided into recognition and servo phases. The recognition phase verifies the presence of a cylinder in the camera field of view; an initial pose estimate is then computed and uncluttered scan regions are selected. The servo phase processes only the selected scan regions of the image. Given the knowledge from the recognition phase that there is a cylinder in the image, and knowing the radius of the cylinder, 4 of the 6 pose parameters can be estimated with minimal computation. The relative motion of the cylinder is obtained by using the current and prior pose estimates. The motion information is then used to generate a predictive feature-based trajectory for the path of the gripper.
Using GLONASS signal for clock synchronization
NASA Technical Reports Server (NTRS)
Gouzhva, Yuri G.; Gevorkyan, Arvid G.; Bogdanov, Pyotr P.; Ovchinnikov, Vitaly V.
1994-01-01
Although GLONASS is comparable to GPS in its accuracy parameters, the use of GLONASS signals for high-precision clock synchronization was until recently of limited utility due to the lack of specialized time receivers. To improve this situation, in late 1992 the Russian Institute of Radionavigation and Time (RMT) began to develop a GLONASS time receiver based on the airborne ASN-16 receiver. This paper presents results of estimating user clock synchronization accuracy via GLONASS signals using the ASN-16 receiver in the direct synchronization and common-view modes.
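The common-view mode rests on a simple cancellation: two stations observe the same satellite clock at the same epochs, so differencing their measured offsets removes the satellite clock error (and much of the common propagation error). A minimal sketch with synthetic numbers (all offsets and noise levels are illustrative assumptions, not GLONASS specifications):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                                   # common-view epochs

t_sat = rng.normal(0.0, 50e-9, n)         # satellite clock error vs system time (s)
clk_a = 120e-9                            # station A clock offset (assumed)
clk_b = -35e-9                            # station B clock offset (assumed)
noise = 5e-9                              # per-station measurement noise (s)

# Each station measures (own clock - satellite clock) plus noise.
meas_a = clk_a - t_sat + rng.normal(0.0, noise, n)
meas_b = clk_b - t_sat + rng.normal(0.0, noise, n)

# Common-view difference: the satellite clock error cancels epoch by epoch.
cv = meas_a - meas_b                      # ~ clk_a - clk_b at every epoch
est_ns = float(np.mean(cv) * 1e9)         # estimated A-B offset in ns
```

Averaging over epochs then beats down the uncorrelated receiver noise, which is why common view outperforms direct (one-way) synchronization when the satellite clock is the dominant error.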
Liu, Dengyuan; Rao, Yunshuang; Zeng, Huan; Zhang, Fan; Wang, Lu; Xie, Yaojie; Sharma, Manoj; Zhao, Yong
2018-01-01
Objectives: This study aimed to assess the prevalence of prolonged television, computer, and mobile phone viewing times and examined related sociodemographic factors among Chinese pregnant women. Methods: A cross-sectional survey was implemented among 2400 Chinese pregnant women in 16 hospitals of 5 provinces from June to August 2015, with a response rate of 97.76%. We excluded women with serious complications and cognitive disorders. The women were asked about their television, computer, and mobile phone viewing during pregnancy. Prolonged television or computer viewing was defined as spending more than two hours per day on television or computer viewing; prolonged mobile phone viewing was defined as more than one hour per day on a mobile phone. Results: Among 2345 pregnant women, about 25.1% reported prolonged television viewing, 20.6% reported prolonged computer viewing, and 62.6% reported prolonged mobile phone viewing. Pregnant women with long mobile phone viewing times were likely to have long TV (Estimate = 0.080, Standard Error (SE) = 0.016, p < 0.001) and computer viewing times (Estimate = 0.053, SE = 0.022, p = 0.015). Pregnant women with long TV (Estimate = 0.134, SE = 0.027, p < 0.001) and long computer viewing times (Estimate = 0.049, SE = 0.020, p = 0.015) were likely to have long mobile phone viewing times. Pregnant women with long TV viewing times were less likely to have long computer viewing times (Estimate = −0.032, SE = 0.015, p = 0.035), and pregnant women with long computer viewing times were less likely to have long TV viewing times (Estimate = −0.059, SE = 0.028, p = 0.035). Pregnant women in their second pregnancy were less likely to report prolonged computer viewing than those in their first pregnancy (Odds Ratio (OR) 0.56, 95% Confidence Interval (CI) 0.42–0.74), but more likely to report prolonged mobile phone viewing (OR 1.25, 95% CI 1.01–1.55).
Conclusions: Prolonged TV, computer, and mobile phone viewing was common among pregnant women in their first and second pregnancies. This study preliminarily explored the relationship between sociodemographic factors and prolonged screen time to provide some indication for future interventions aimed at decreasing screen-viewing times during pregnancy in China. PMID:29495439
Near Shore Wave Modeling and applications to wave energy estimation
NASA Astrophysics Data System (ADS)
Zodiatis, G.; Galanis, G.; Hayes, D.; Nikolaidis, A.; Kalogeri, C.; Adam, A.; Kallos, G.; Georgiou, G.
2012-04-01
The estimation of the wave energy potential at the European coastline has received increased attention in recent years as a result of the adoption of novel policies in the energy market, concerns about global warming, and nuclear energy security problems. Within this framework, numerical wave modeling systems play a primary role in the accurate description of the wave climate and microclimate that is a prerequisite for any wave energy assessment study. In the present work, two of the most popular wave models are used for the estimation of the wave parameters at the coastline of Cyprus: the latest parallel version of the wave model WAM (ECMWF version), which employs a new parameterization of shallow water effects, and the SWAN model, classically used for near-shore wave simulations. The near-shore results of the two models are studied from an energy estimation point of view: the wave parameters that mainly affect the temporal and spatial distribution of energy, that is, the significant wave height and the mean wave period, are statistically analyzed, focusing on possible differences between the two models. Moreover, the wave spectrum distributions prevailing in different areas are discussed, contributing in this way to the wave energy assessment in the area. This work is part of two European projects focusing on the estimation of the wave energy distribution around Europe: the MARINA Platform (http://www.marina-platform.info/index.aspx) and the Ewave (http://www.oceanography.ucy.ac.cy/ewave/) projects.
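The two parameters highlighted above, significant wave height and wave period, feed directly into the standard deep-water estimate of wave power per metre of wave crest, P = ρ g² Hs² Te / (64π), with Te the energy period. A minimal sketch (deep-water approximation only; substituting the mean period for Te, as is sometimes done as a rough proxy, strictly requires a spectral correction factor):

```python
import math

def wave_power_kw_per_m(hs_m: float, te_s: float,
                        rho: float = 1025.0, g: float = 9.81) -> float:
    """Deep-water wave energy flux per unit crest length, in kW/m.

    P = rho * g^2 * Hs^2 * Te / (64 * pi)
    hs_m: significant wave height (m); te_s: energy period (s).
    """
    return rho * g**2 * hs_m**2 * te_s / (64.0 * math.pi) / 1000.0

# Example: a moderate sea state of Hs = 2 m, Te = 6 s.
p = wave_power_kw_per_m(2.0, 6.0)   # roughly 12 kW per metre of crest
```

The quadratic dependence on Hs and linear dependence on Te is why the statistical analysis of exactly these two model outputs dominates any wave energy assessment.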
Using Diurnal Temperature Signals to Infer Vertical Groundwater-Surface Water Exchange.
Irvine, Dylan J; Briggs, Martin A; Lautz, Laura K; Gordon, Ryan P; McKenzie, Jeffrey M; Cartwright, Ian
2017-01-01
Heat is a powerful tracer to quantify fluid exchange between surface water and groundwater. Temperature time series can be used to estimate pore water fluid flux, and techniques can be employed to extend these estimates to produce detailed plan-view flux maps. Key advantages of heat tracing include cost-effective sensors and ease of data collection and interpretation, without the need for expensive and time-consuming laboratory analyses or induced tracers. While the collection of temperature data in saturated sediments is relatively straightforward, several factors influence the reliability of flux estimates that are based on time series analysis (diurnal signals) of recorded temperatures. Sensor resolution and deployment are particularly important in obtaining robust flux estimates in upwelling conditions. Also, processing temperature time series data involves a sequence of complex steps, including filtering temperature signals, selection of appropriate thermal parameters, and selection of the optimal analytical solution for modeling. This review provides a synthesis of heat tracing using diurnal temperature oscillations, including details on optimal sensor selection and deployment, data processing, model parameterization, and an overview of computing tools available. Recent advances in diurnal temperature methods also provide the opportunity to determine local saturated thermal diffusivity, which can improve the accuracy of fluid flux modeling and sensor spacing, which is related to streambed scour and deposition. These parameters can also be used to determine the reliability of flux estimates from the use of heat as a tracer. © 2016, National Ground Water Association.
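Time-series methods of the kind reviewed above typically begin by extracting the amplitude of the diurnal signal at paired sensor depths; the amplitude ratio (and phase lag) then enters an analytical solution for the vertical flux. The extraction step can be sketched with an ordinary least-squares sinusoid fit on synthetic temperatures (the flux solutions themselves, e.g. those implemented in VFLUX-style tools, are not reproduced here, and the amplitudes and noise level are assumptions):

```python
import numpy as np

def diurnal_amplitude(t_days, temp):
    """Fit temp ~ C + A*cos(w t) + B*sin(w t), w = 2*pi/day; return amplitude."""
    w = 2.0 * np.pi                      # rad/day, diurnal frequency
    X = np.column_stack([np.ones_like(t_days),
                         np.cos(w * t_days), np.sin(w * t_days)])
    c, a, b = np.linalg.lstsq(X, temp, rcond=None)[0]
    return float(np.hypot(a, b))

# Synthetic shallow and deep records: the diurnal signal is damped and
# phase-lagged with depth (amplitudes of 1.0 and 0.4 degC are assumptions).
rng = np.random.default_rng(2)
t = np.arange(0.0, 5.0, 1.0 / 96.0)      # 5 days at 15-min intervals
shallow = 15.0 + 1.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.02, t.size)
deep = 15.0 + 0.4 * np.sin(2 * np.pi * (t - 0.1)) + rng.normal(0, 0.02, t.size)

ratio = diurnal_amplitude(t, deep) / diurnal_amplitude(t, shallow)  # ~0.4
```

In practice the filtering step discussed in the review (isolating the diurnal band before fitting) matters because storm events and weather fronts add non-periodic variance that biases a naive fit.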
Robust estimation of mammographic breast density: a patient-based approach
NASA Astrophysics Data System (ADS)
Heese, Harald S.; Erhard, Klaus; Gooßen, Andre; Bulow, Thomas
2012-02-01
Breast density has become an established risk indicator for developing breast cancer. Current clinical practice reflects this by grading mammograms patient-wise as entirely fat, scattered fibroglandular, heterogeneously dense, or extremely dense based on visual perception. Existing (semi-)automated methods work on a per-image basis and mimic clinical practice by calculating an area fraction of fibroglandular tissue (mammographic percent density). We suggest a method that follows clinical practice more strictly by segmenting the fibroglandular tissue portion directly from the joint data of all four available mammographic views (cranio-caudal and medio-lateral oblique, left and right), and by subsequently calculating a consistently patient-based mammographic percent density estimate. In particular, each mammographic view is first processed separately to determine a region of interest (ROI) for segmentation into fibroglandular and adipose tissue. ROI determination includes breast outline detection via edge-based methods, peripheral tissue suppression via geometric breast height modeling, and - for medio-lateral oblique views only - pectoral muscle outline detection based on optimizing a three-parameter analytic curve with respect to local appearance. Intensity harmonization based on separately acquired calibration data is performed with respect to compression height and tube voltage to facilitate joint segmentation of the available mammographic views. A Gaussian mixture model (GMM) on the joint histogram data, with a posteriori calibration-guided plausibility correction, is finally employed for tissue separation. The proposed method was tested on patient data from 82 subjects. Results show excellent correlation (r = 0.86) with radiologists' grading, with deviations ranging between -28% (q = 0.025) and +16% (q = 0.975).
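The Gaussian mixture step can be illustrated in one dimension. The sketch below fits a two-component GMM to pixel intensities by expectation-maximization and labels each pixel by its most responsible component; the intensity values, component count and initialization are illustrative assumptions, not the paper's calibrated multi-view pipeline:

```python
import numpy as np

def fit_gmm2(x, iters=100):
    """EM for a two-component 1D Gaussian mixture; returns (weights, means, vars)."""
    mu = np.percentile(x, [25.0, 75.0])          # crude initialization
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = np.exp(-((x[:, None] - mu) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        n = r.sum(axis=0)
        w = n / x.size
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

# Synthetic intensities: darker adipose-like vs brighter fibroglandular-like
# pixels (means, spreads and counts are assumptions for illustration).
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(50.0, 5.0, 4000),
                    rng.normal(120.0, 10.0, 2000)])
w, mu, var = fit_gmm2(x)

# Hard tissue labels: assign each pixel to the most responsible component.
pdf = np.exp(-((x[:, None] - mu) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)
labels = np.argmax(w * pdf, axis=1)
percent_density = 100.0 * np.mean(labels == np.argmax(mu))  # ~ area fraction
```

The paper's plausibility correction exists precisely because plain EM like this can converge to implausible components on harder, overlapping histograms.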
Forest canopy height estimation using double-frequency repeat pass interferometry
NASA Astrophysics Data System (ADS)
Karamvasis, Kleanthis; Karathanassi, Vassilia
2015-06-01
In recent years, many efforts have been made to assess forest stand parameters from remote sensing data, as a means to estimate the above-ground carbon stock of forests in the context of the Kyoto protocol. Synthetic aperture radar interferometry (InSAR) techniques have gained traction in the last decade as a viable technology for vegetation parameter estimation. Many works have shown that forest canopy height, which is a critical parameter for quantifying the terrestrial carbon cycle, can be estimated with InSAR. However, research is still needed to further understand the interaction of SAR signals with the forest canopy and to develop an operational method for forestry applications. This work discusses the use of repeat-pass interferometry with ALOS PALSAR (L-band) HH-polarized and COSMO-SkyMed (X-band) HH-polarized acquisitions over the Taxiarchis forest (Chalkidiki, Greece) in order to produce accurate digital elevation models (DEMs) and estimate canopy height with interferometric processing. The effect of wavelength-dependent penetration depth into the canopy is known to be strong, and could potentially enable forest canopy height mapping using dual-wavelength SAR interferometry at X- and L-band. The method is based on the separation of scattering phase centers at different wavelengths. It involves the generation of a terrain elevation model underneath the forest canopy from repeat-pass L-band InSAR data, as well as the generation of a canopy surface elevation model from repeat-pass X-band InSAR data. The terrain model is then used to remove the terrain component from the repeat-pass interferometric X-band elevation model, so as to enable the forest canopy height estimation. The canopy height results were compared to a field survey, yielding a 6.9 m root mean square error (RMSE). The effects of vegetation characteristics, SAR incidence angle and view geometry, and terrain slope on the accuracy of the results have also been studied in this work.
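The core of the dual-wavelength scheme is a per-pixel subtraction: the X-band phase center sits near the canopy top, the L-band phase center near the ground, so their difference approximates canopy height. A minimal sketch with synthetic rasters (the terrain, canopy and noise magnitudes are assumptions; real processing also requires co-registration, phase unwrapping and calibration):

```python
import numpy as np

rng = np.random.default_rng(4)
shape = (50, 50)

terrain = 300.0 + 20.0 * rng.random(shape)      # "true" ground elevation (m)
canopy = 8.0 + 6.0 * rng.random(shape)          # "true" canopy height (m)

# Simulated InSAR elevation models with wavelength-dependent phase centers
# (2 m elevation noise per band is an assumed, optimistic figure).
dem_l = terrain + rng.normal(0.0, 2.0, shape)           # L-band: near ground
dem_x = terrain + canopy + rng.normal(0.0, 2.0, shape)  # X-band: canopy surface

height = dem_x - dem_l                          # canopy height estimate (m)
rmse = float(np.sqrt(np.mean((height - canopy) ** 2)))
```

Note that the differencing adds the two bands' elevation errors in quadrature, which is one reason the field-validated RMSE (6.9 m here) is larger than either DEM's individual accuracy.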
Mohammadi, Mohammad Hossein; Vanclooster, Marnik
2012-05-01
Solute transport in partially saturated soils is largely affected by the fluid velocity distribution and the pore size distribution within the solute transport domain. Hence, it is possible to describe the solute transport process in terms of the pore size distribution of the soil, and indirectly in terms of the soil hydraulic properties. In this paper, we present a conceptual approach that allows predicting the parameters of the Convective Lognormal Transfer (CLT) model from knowledge of soil moisture and the Soil Moisture Characteristic (SMC), parameterized by means of the closed-form model of Kosugi (1996). It is assumed that in partially saturated conditions the air-filled pore volume acts as an inert solid phase, allowing the use of the pragmatic approach of Arya et al. (1999) to estimate solute travel time statistics from the saturation degree and SMC parameters. The approach is evaluated using a set of partially saturated transport experiments as presented by Mohammadi and Vanclooster (2011). Experimental results showed that the mean solute travel time, μ(t), increases proportionally with depth (travel distance) and decreases with flow rate. The variance of solute travel time, σ²(t), first decreases with flow rate up to 0.4-0.6 Ks and subsequently increases. For all tested breakthrough curves (BTCs), solute transport predicted with μ(t) estimated from the conceptual model performed much better than predictions with μ(t) and σ²(t) estimated from calibration of solute transport at shallow soil depths. The use of μ(t) estimated from the conceptual model therefore increases the robustness of the CLT model in predicting solute transport in heterogeneous soils at larger depths. In view of the fact that reasonable indirect estimates of the SMC can be made from basic soil properties using pedotransfer functions, the presented approach may be useful for predicting solute transport at field or watershed scales. Copyright © 2012 Elsevier B.V. All rights reserved.
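In the CLT model the travel time to a given depth is lognormally distributed, and under the usual assumption of a depth-invariant log-variance the mean of the log travel time grows with the logarithm of depth. A sketch of the travel-time density (the parameter values and reference depth are illustrative, not taken from the paper):

```python
import numpy as np

def clt_travel_time_pdf(t, z, mu_l, sigma, l=0.1):
    """Lognormal travel-time density at depth z (m).

    mu_l: mean of ln(travel time) at reference depth l; sigma is assumed
    depth-invariant, so mu(z) = mu_l + ln(z / l).
    """
    mu = mu_l + np.log(z / l)
    return np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2)) / (
        t * sigma * np.sqrt(2.0 * np.pi))

# The mean travel time E[t] = exp(mu + sigma^2/2) then scales linearly with
# depth, matching the reported proportional increase of mu(t) with distance.
t = np.linspace(1e-6, 200.0, 400001)
pdf = clt_travel_time_pdf(t, z=0.2, mu_l=1.0, sigma=0.5)
mass = float(np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(t)))       # ~1.0
tp = t * pdf
mean_t = float(np.sum(0.5 * (tp[1:] + tp[:-1]) * np.diff(t)))
```

Convolving this density with the input concentration signal yields the predicted breakthrough curve at depth z, which is how the model's μ(t) and σ²(t) are confronted with the measured BTCs.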
NASA Astrophysics Data System (ADS)
Rainieri, Carlo; Fabbrocino, Giovanni
2015-08-01
In the last few decades large research efforts have been devoted to the development of methods for automated detection of damage and degradation phenomena at an early stage. Modal-based damage detection techniques are well-established methods, whose effectiveness for Level 1 (existence) and Level 2 (location) damage detection has been demonstrated by several studies. The indirect estimation of tensile loads in cables and tie-rods is another attractive application of vibration measurements. It provides interesting opportunities for cheap and fast quality checks in the construction phase, as well as for safety evaluations and structural maintenance over the structure's lifespan. However, the lack of automated modal identification and tracking procedures has long been a relevant obstacle to the extensive application of the above-mentioned techniques in engineering practice. An increasing number of field applications of modal-based structural health and performance assessment have appeared following the development of several automated output-only modal identification procedures in recent years. Nevertheless, additional efforts are still needed to enhance the robustness of automated modal identification algorithms, control the computational effort and improve the reliability of modal parameter estimates (in particular, damping). This paper deals with an original algorithm for automated output-only modal parameter estimation. Particular emphasis is given to the extensive validation of the algorithm on simulated and real datasets in view of continuous monitoring applications. The results point out that the algorithm is fairly robust and demonstrate its ability to provide accurate and precise estimates of the modal parameters, including damping ratios. As a result, it has been used to develop systems for vibration-based estimation of tensile loads in cables and tie-rods.
Promising results have been achieved for non-destructive testing as well as continuous monitoring purposes. They are documented in the last sections of the paper.
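The vibration-based tensile load estimation mentioned above is, in its simplest form, an inversion of the taut-string relation f_n = (n / 2L) * sqrt(T / m): once the automated procedure tracks the natural frequencies, tension follows. The sketch below uses only this textbook approximation (practical identification algorithms additionally account for bending stiffness and boundary conditions):

```python
import math

def tension_from_frequency(f1_hz: float, length_m: float,
                           mass_per_m: float) -> float:
    """Taut-string estimate of axial tension (N) from the fundamental frequency.

    f_n = (n / (2 L)) * sqrt(T / m)  =>  T = 4 * m * L^2 * f_1^2
    """
    return 4.0 * mass_per_m * length_m**2 * f1_hz**2

# Example (assumed values): a 10 m tie-rod of 12 kg/m with a tracked
# fundamental frequency of 8 Hz.
T = tension_from_frequency(8.0, 10.0, 12.0)   # 4 * 12 * 10^2 * 8^2 = 307200 N
```

Because tension scales with the square of the tracked frequency, the accuracy and stability of the automated frequency estimates directly bound the accuracy of the load estimate, which is why robust automated tracking is emphasized in the paper.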
The joined wing - An overview. [aircraft tandem wings in diamond configurations]
NASA Technical Reports Server (NTRS)
Wolkovitch, J.
1985-01-01
The joined wing is a new type of aircraft configuration which employs tandem wings arranged to form diamond shapes in both plan view and front view. Wind-tunnel tests and finite-element structural analyses have shown that the joined wing provides the following advantages over a comparable wing-plus-tail system: lighter weight and higher stiffness, higher span-efficiency factor, higher trimmed maximum lift coefficient, and lower wave drag, plus built-in direct lift and direct sideforce control capability. A summary is given of research performed on the joined wing. Calculated joined-wing weights are correlated with geometric parameters to provide simple weight estimation methods. The results of low-speed and transonic wind-tunnel tests are summarized, and guidelines for the design of joined-wing aircraft are given. Some example joined-wing designs are presented, and related configurations having connected wings are reviewed.
Possible influences on color constancy by motion of color targets and by attention-controlled gaze.
Wan, Lifang; Shinomori, Keizo
2018-04-01
We investigated the influence of motion on color constancy using a chromatic stimulus presented in various conditions (static, motion, and rotation). Attention to the stimulus and background was also controlled via two gaze modes: constant fixation of the stimulus and random viewing of the stimulus. Color constancy was examined in six young observers using a haploscopic view of a computer monitor. The target and background were illuminated in simulation by red, green, blue, and yellow, shifted from daylight (D65) by specific color differences along the L - M or S - (L + M) axes on the equiluminance plane. The standard pattern (under D65) and test pattern (under the color illuminant), each a 5-deg square, were presented side by side, consisting of 1.2-deg square targets with one of 12 colors at each center, surrounded by 230 background ellipses consisting of eight other colors. The central color targets in both patterns flipped between top and bottom locations at a rate of 3 deg/s in the motion condition. The results indicated an average reduction of color constancy over the 12 test colors due to motion. Random viewing, which allows more attention to the background, yielded somewhat better color constancy, although the difference was not significant. Color constancy under the four illuminant colors ranked, from best to worst: green, red, yellow, and blue. The reduction of color constancy by motion could be explained by a smaller contribution of the illumination-estimation effect to color constancy. In the motion with constant fixation condition, the retina strongly adapted to the mean chromaticity of the background; however, motion resulted in less attention to the color of the background, causing a weaker illumination-estimation effect. Conversely, in the static state with random viewing, more attention to the background colors caused a stronger illumination-estimation effect, and color constancy was improved overall.
Disturbance observer based active and adaptive synchronization of energy resource chaotic system.
Wei, Wei; Wang, Meng; Li, Donghai; Zuo, Min; Wang, Xiaoyi
2016-11-01
In this paper, synchronization of a three-dimensional energy resource chaotic system is considered. For the sake of achieving the synchronization between the drive and response systems, two different nonlinear control approaches, i.e. active control with known parameters and adaptive control with unknown parameters, have been designed. In order to guarantee the transient performance, finite-time boundedness (FTB) and finite-time stability (FTS) are introduced in the design of active control and adaptive control, respectively. Simultaneously, in view of the existence of disturbances, a new disturbance observer is proposed to estimate the disturbance. The conditions of the asymptotic stability for the closed-loop system are obtained. Numerical simulations are provided to illustrate the proposed approaches. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
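The active-control scheme with known parameters can be illustrated compactly. Since the abstract does not reproduce the equations of the energy resource system, the sketch below substitutes the Lorenz system as a stand-in chaotic drive, and the gain k is an arbitrary assumption; the controller structure (cancel the nonlinear mismatch, add linear error feedback) is the generic active-control recipe, without the paper's FTB/FTS transient-performance design or the disturbance observer:

```python
import numpy as np

def f(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Stand-in chaotic vector field (Lorenz); the paper's three-dimensional
    energy resource system would be substituted here."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def synchronize(k=5.0, dt=1e-3, steps=20000):
    """Active-control synchronization with known parameters.

    Control u = f(drive) - f(response) - k*e cancels the nonlinear mismatch
    and leaves linear error dynamics e' = -k*e, so e decays exponentially.
    """
    drive = np.array([1.0, 1.0, 1.0])
    resp = np.array([-5.0, 8.0, 20.0])
    e0 = float(np.linalg.norm(resp - drive))
    for _ in range(steps):
        e = resp - drive
        u = f(drive) - f(resp) - k * e
        drive = drive + dt * f(drive)       # forward-Euler integration
        resp = resp + dt * (f(resp) + u)
    return e0, float(np.linalg.norm(resp - drive))

e0, e_final = synchronize()
```

When parameters are unknown or disturbances act on the response system, this exact cancellation is unavailable, which is what motivates the paper's adaptive law and disturbance observer.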
NASA Astrophysics Data System (ADS)
Gajek, Z.
2004-05-01
The electronic properties of the actinide ions in the series of semiconducting, antiferromagnetic compounds: dioxides, AnO2, and oxychalcogenides, AnOY, where An=U, Np and Y=S, Se, are re-examined from the point of view of the consistency of the crystal field (CF) model. The discussion is based on the supposition that the effective metal-ligand interaction solely determines the net CF effect in non-metallic compounds. The main question we address here is whether a reliable, consistent description of the CF effect in terms of the intrinsic parameters can be achieved for this particular family of compounds. Encouraging calculations reported previously for the AnO2 and UOY series serve as reference data in the present estimation of electronic structure parameters for neptunium oxychalcogenides.
An automatic calibration procedure for remote eye-gaze tracking systems.
Model, Dmitri; Guestrin, Elias D; Eizenman, Moshe
2009-01-01
Remote gaze estimation systems use calibration procedures to estimate subject-specific parameters that are needed for the calculation of the point-of-gaze. In these procedures, subjects are required to fixate on a specific point or points at specific time instances. Advanced remote gaze estimation systems can estimate the optical axis of the eye without any personal calibration procedure, but use a single calibration point to estimate the angle between the optical axis and the visual axis (line-of-sight). This paper presents a novel automatic calibration procedure that does not require active user participation. To estimate the angles between the optical and visual axes of each eye, this procedure minimizes the distance between the intersections of the visual axes of the left and right eyes with the surface of a display while subjects look naturally at the display (e.g., watching a video clip). Simulation results demonstrate that the performance of the algorithm improves as the range of viewing angles increases. For a subject sitting 75 cm in front of an 80 cm x 60 cm display (40" TV) the standard deviation of the error in the estimation of the angles between the optical and visual axes is 0.5 degrees.
Simulation and Correction of Triana-Viewed Earth Radiation Budget with ERBE/ISCCP Data
NASA Technical Reports Server (NTRS)
Huang, Jian-Ping; Minnis, Patrick; Doelling, David R.; Valero, Francisco P. J.
2002-01-01
This paper describes the simulation of the earth radiation budget (ERB) as viewed by Triana and the development of correction models for converting Triana-viewed radiances into a complete ERB. A full range of Triana views and global radiation fields are simulated using a combination of datasets from ERBE (Earth Radiation Budget Experiment) and ISCCP (International Satellite Cloud Climatology Project) and analyzed with a set of empirical correction factors specific to the Triana views. The results show that the accuracy of global correction factors to estimate ERB from Triana radiances is a function of the Triana position relative to the Lagrange-1 (L1) or the Sun location. Spectral analysis of the global correction factor indicates that both shortwave (SW; 0.2-5.0 microns) and longwave (LW; 5-50 microns) parameters undergo seasonal and diurnal cycles that dominate the periodic fluctuations. The diurnal cycle, especially its amplitude, is also strongly dependent on the seasonal cycle. Based on these results, models are developed to correct the radiances for unviewed areas and anisotropic emission and reflection. A preliminary assessment indicates that these correction models can be applied to Triana radiances to produce the most accurate global ERB to date.
NASA Technical Reports Server (NTRS)
Weaver, W. L.; Green, R. N.
1980-01-01
Geometric shape factors were computed and applied to satellite simulated irradiance measurements to estimate Earth emitted flux densities for global and zonal scales and for areas smaller than the detector field of view (FOV). Wide field of view flat plate detectors were emphasized, but spherical detectors were also studied. The radiation field was modeled after data from the Nimbus 2 and 3 satellites. At a satellite altitude of 600 km, zonal estimates were in error by 1.0 to 1.2 percent and global estimates were in error by less than 0.2 percent. Estimates with unrestricted field of view (UFOV) detectors were about the same for Lambertian and limb darkening radiation models. The opposite was found for restricted field of view detectors. The UFOV detectors are found to be poor estimators of flux density from the total FOV and are shown to be much better as estimators of flux density from a circle centered in the FOV with an area significantly smaller than that of the total FOV.
NASA Technical Reports Server (NTRS)
Weaver, W. L.; Green, R. N.
1980-01-01
A study was performed on the use of geometric shape factors to estimate earth-emitted flux densities from radiation measurements with wide field-of-view flat-plate radiometers on satellites. Sets of simulated irradiance measurements were computed for unrestricted and restricted field-of-view detectors. In these simulations, the earth radiation field was modeled using data from Nimbus 2 and 3. Geometric shape factors were derived and applied to these data to estimate flux densities on global and zonal scales. For measurements at a satellite altitude of 600 km, estimates of zonal flux density were in error by 1.0 to 1.2%, and global flux density errors were less than 0.2%. Estimates with unrestricted field-of-view detectors were about the same for Lambertian and non-Lambertian radiation models, but were affected by satellite altitude. The opposite was found for the restricted field-of-view detectors.
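A minimal worked example of the shape-factor idea, under the simplest assumption of a nadir-facing flat-plate detector above a uniformly emitting Lambertian sphere (the irradiance value is illustrative, and the simplified geometry is an assumption, not the study's actual radiation model):

```python
# For a nadir-facing flat plate at altitude h above a uniform Lambertian
# sphere of radius R, the irradiance is E = M * F with shape factor
#   F = (R / (R + h))^2,
# so the Earth-emitted flux density is recovered as M = E / F.
R = 6371.0   # Earth radius, km
h = 600.0    # satellite altitude, km (as in the abstract)

F = (R / (R + h)) ** 2   # geometric shape factor for the whole-disk FOV
E = 200.0                # simulated measured irradiance, W m^-2 (made up)
M = E / F                # estimated Earth-emitted flux density, W m^-2
print(round(F, 3), round(M, 1))
```

Estimating flux from areas smaller than the total FOV, as in the study, replaces this whole-disk factor with a shape factor integrated over the smaller viewed region.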
Using Diffraction Tomography to Estimate Marine Animal Size
NASA Astrophysics Data System (ADS)
Jaffe, J. S.; Roberts, P.
In this article we consider the development of acoustic methods that have the potential to size marine animals. The proposed technique uses scattered sound in order to invert for both animal size and shape. The technique uses the Distorted Wave Born Approximation (DWBA) in order to model sound scattered from these organisms. The use of the DWBA also provides a valuable context for formulating data analysis techniques in order to invert for parameters of the animal. Although 3-dimensional observations can be obtained from a complete set of views, due to the difficulty of collecting full 3-dimensional scatter, it is useful to simplify the inversion by approximating the animal by a few parameters. Here, the animals are modeled as 3-dimensional ellipsoids. This reduces the complexity of the problem to a determination of the three semi-axes in the x, y, and z dimensions from just a few radial spokes through the 3-dimensional Fourier transform. In order to test the idea, simulated scatter data are taken from a 3-dimensional model of a marine animal, and the resultant data are inverted in order to estimate animal shape.
NASA Astrophysics Data System (ADS)
Becker, M.; Bour, O.; Le Borgne, T.; Longuevergne, L.; Lavenant, N.; Cole, M. C.; Guiheneuf, N.
2017-12-01
Determining hydraulic and transport connectivity in fractured bedrock has long been an important objective in contaminant hydrogeology, petroleum engineering, and geothermal operations. A persistent obstacle to making this determination is that the characteristic length scale is nearly impossible to determine in sparsely fractured networks. Both flow and transport occur through an unknown structure of interconnected fractures and/or fracture zones, leaving the actual length that water or solutes travel undetermined. This poses difficulties for flow and transport models. For example, hydraulic equations require a separation distance between pumping and observation wells to determine hydraulic parameters. When well pairs are close, the structure of the network can influence the interpretation of well separation and the flow dimension of the tested system. This issue is explored using hydraulic tests conducted in a shallow fractured crystalline rock. Periodic (oscillatory) slug tests were performed at the Ploemeur fractured rock test site located in Brittany, France. Hydraulic connectivity was examined between three zones in one well and four zones in another, located 6 m apart in map view. The wells are sufficiently close, however, that the tangential distance between the tested zones ranges between 6 and 30 m. Using standard periodic formulations of radial flow, estimates of storativity scale inversely with the square of the separation distance and hydraulic diffusivity directly with the square of the separation distance. Uncertainty in the connection paths between the two wells leads to an order of magnitude uncertainty in estimates of storativity and hydraulic diffusivity, although estimates of transmissivity are unaffected. The assumed flow dimension results in alternative estimates of hydraulic parameters. In general, one is faced with the prospect of assuming the hydraulic parameter and inverting the separation distance, or vice versa.
Similar uncertainties exist, for instance, when trying to invert transport parameters from tracer mean residence time. This field test illustrates that when dealing with fracture networks, there is a need for analytic methods of complexity that lie between simple radial solutions and discrete fracture network models.
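The r-squared scaling described above can be made concrete with a diffusive phase-lag approximation (a sketch, not the full periodic radial-flow solution used in the study; the frequency, phase lag, and transmissivity below are hypothetical numbers):

```python
import math

# Diffusive phase-lag approximation: for a periodic head signal of angular
# frequency omega, the phase lag at radial distance r is roughly
#   phi = r * sqrt(omega / (2 D)),  with D = T / S the hydraulic diffusivity.
omega = 2.0 * math.pi / 60.0   # one cycle per minute (hypothetical test)
phi_obs = 0.8                  # observed phase lag in radians (hypothetical)

def diffusivity(r):
    """Invert the phase-lag relation for D given an assumed separation r."""
    return omega * r * r / (2.0 * phi_obs ** 2)

T = 1e-4  # transmissivity, m^2/s, assumed known from amplitude fitting

for r in (6.0, 30.0):  # map-view distance vs. longest plausible flow path, m
    D = diffusivity(r)
    print(f"r = {r:5.1f} m  ->  D = {D:.3e} m^2/s,  S = {T / D:.3e}")
```

Because D scales with r squared, storativity S = T/D scales with 1/r squared: a factor-of-5 uncertainty in the flow-path length becomes a factor-of-25 (order-of-magnitude) uncertainty in storativity, exactly the trade-off the abstract describes.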
Automated comprehensive Adolescent Idiopathic Scoliosis assessment using MVC-Net.
Wu, Hongbo; Bailey, Chris; Rasoulinejad, Parham; Li, Shuo
2018-05-18
Automated quantitative estimation of spinal curvature is an important task for the ongoing evaluation and treatment planning of Adolescent Idiopathic Scoliosis (AIS). It solves the widely accepted disadvantage of manual Cobb angle measurement (time-consuming and unreliable), which is currently the gold standard for AIS assessment. Attempts have been made to improve the reliability of automated Cobb angle estimation. However, it is very challenging to achieve accurate and robust estimation of Cobb angles due to the need for correctly identifying all the required vertebrae in both Anterior-posterior (AP) and Lateral (LAT) view x-rays. The challenge is especially evident in LAT x-rays, where occlusion of vertebrae by the ribcage occurs. We therefore propose a novel Multi-View Correlation Network (MVC-Net) architecture that can provide a fully automated end-to-end framework for spinal curvature estimation in multi-view (both AP and LAT) x-rays. The proposed MVC-Net uses our newly designed multi-view convolution layers to incorporate joint features of multi-view x-rays, which allows the network to mitigate the occlusion problem by utilizing the structural dependencies of the two views. The MVC-Net consists of three closely-linked components: (1) a series of X-modules for joint representation of spinal structure, (2) a Spinal Landmark Estimator network for robust spinal landmark estimation, and (3) a Cobb Angle Estimator network for accurate Cobb angle estimation. By utilizing an iterative multi-task training algorithm to train the Spinal Landmark Estimator and Cobb Angle Estimator in tandem, the MVC-Net leverages the multi-task relationship between landmark and angle estimation to reliably detect all the required vertebrae for accurate Cobb angle estimation.
Experimental results on 526 x-ray images from 154 patients show an impressive 4.04° Circular Mean Absolute Error (CMAE) in AP Cobb angle and 4.07° CMAE in LAT Cobb angle estimation, which demonstrates the MVC-Net's capability of robust and accurate estimation of Cobb angles in multi-view x-rays. Our method therefore provides clinicians with a framework for efficient, accurate, and reliable estimation of spinal curvature for comprehensive AIS assessment. Copyright © 2018. Published by Elsevier B.V.
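The error metric quoted above can be sketched as follows (a generic circular MAE definition, wrapping angular differences to at most 180 degrees; the paper's exact formulation may differ in detail):

```python
def cmae(pred_deg, true_deg):
    """Circular mean absolute error in degrees: each angular difference is
    wrapped so that, e.g., 359 degrees vs. 1 degree counts as a 2-degree error."""
    errs = []
    for p, t in zip(pred_deg, true_deg):
        d = abs(p - t) % 360.0
        errs.append(min(d, 360.0 - d))
    return sum(errs) / len(errs)

# Toy usage with made-up angles: both pairs differ by 2 degrees circularly.
print(cmae([10.0, 359.0], [12.0, 1.0]))
```

The wrapping matters for angle regression because a naive MAE would score the 359-vs-1 pair as a 358-degree error.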
Cai, Xi; Han, Guang; Song, Xin; Wang, Jinkuan
2017-11-01
Single-camera-based gait monitoring is unobtrusive, inexpensive, and easy to use for monitoring the daily gait of seniors in their homes. However, most studies require subjects to walk perpendicularly to the camera's optical axis or along specified routes, which limits its application in elderly home monitoring. To build unconstrained monitoring environments, we propose a method to measure the step length symmetry ratio (a useful gait parameter representing gait symmetry without a significant relationship with age) from unconstrained straight walking using a single camera, without strict restrictions on walking directions or routes. According to projective geometry theory, we first develop a calculation formula of the step length ratio for the case of unconstrained straight-line walking. Then, to adapt to general cases, we propose to modify noncollinear footprints, and accordingly provide a general procedure for step length ratio extraction from unconstrained straight walking. Our method achieves a mean absolute percentage error (MAPE) of 1.9547% for 15 subjects' normal and abnormal side-view gaits, and also obtains satisfactory MAPEs for non-side-view gaits (2.4026% for 45°-view gaits and 3.9721% for 30°-view gaits). The performance is much better than that of a well-established monocular gait measurement system suitable only for side-view gaits, which has a MAPE of 3.5538%. Independently of walking directions, our method can accurately estimate step length ratios from unconstrained straight walking. This demonstrates that our method is applicable to elders' daily gait monitoring, providing valuable information for elderly health care, such as abnormal gait recognition, fall risk assessment, etc.
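For reference, the MAPE figure used above can be computed as follows (a generic definition applied to step-length ratios; the paper may differ in detail):

```python
def mape(measured, estimated):
    """Mean absolute percentage error between reference values (e.g. motion-
    capture step-length ratios) and estimates from the camera method."""
    terms = [abs((e - m) / m) for m, e in zip(measured, estimated)]
    return 100.0 * sum(terms) / len(terms)

# Toy usage with made-up ratios: a 2% error on a single measurement.
print(mape([1.0], [1.02]))
```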
NASA Astrophysics Data System (ADS)
Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.
1992-10-01
The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, in the case when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates the radiological measurements with atmospheric dispersion modeling. This results in a more accurate particle-size distribution and particle injection height estimation when compared with measurements of high explosive dispersal of (239)Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimation of ADPIC model input parameters such as the ADPIC particle size mean aerodynamic diameter, the geometric standard deviation, and largest size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainties. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation of the particle size, as well as a slightly weaker particle to cloud coupling than previously reported.
A switched systems approach to image-based estimation
NASA Astrophysics Data System (ADS)
Parikh, Anup
With the advent of technological improvements in imaging systems and computational resources, as well as the development of image-based reconstruction techniques, it is necessary to understand algorithm performance when subject to real world conditions. Specifically, this dissertation focuses on the stability and performance of a class of image-based observers in the presence of intermittent measurements, caused by e.g., occlusions, limited FOV, feature tracking losses, communication losses, or finite frame rates. Observers or filters that are exponentially stable under persistent observability may have unbounded error growth during intermittent sensing, even while providing seemingly accurate state estimates. In Chapter 3, dwell time conditions are developed to guarantee state estimation error convergence to an ultimate bound for a class of observers while undergoing measurement loss. Bounds are developed on the unstable growth of the estimation errors during the periods when the object being tracked is not visible. A Lyapunov-based analysis for the switched system is performed to develop an inequality in terms of the duration of time the observer can view the moving object and the duration of time the object is out of the field of view. In Chapter 4, a motion model is used to predict the evolution of the states of the system while the object is not visible. This reduces the growth rate of the bounding function to an exponential and enables the use of traditional switched systems Lyapunov analysis techniques. The stability analysis results in an average dwell time condition to guarantee state error convergence with a known decay rate. In comparison with the results in Chapter 3, the estimation errors converge to zero rather than a ball, with relaxed switching conditions, at the cost of requiring additional information about the motion of the feature. In some applications, a motion model of the object may not be available. 
Numerous adaptive techniques have been developed to compensate for unknown parameters or functions in system dynamics; however, persistent excitation (PE) conditions are typically required to ensure parameter convergence, i.e., learning. Since the motion model is needed in the predictor, model learning is desired; however, PE is difficult to ensure a priori and infeasible to check online for nonlinear systems. Concurrent learning (CL) techniques have been developed to use recorded data and a relaxed excitation condition to ensure convergence. In CL, excitation is only required for a finite period of time, and the recorded data can be checked to determine if they are sufficiently rich. However, traditional CL requires knowledge of state derivatives, which are typically not measured and require extensive filter design and tuning to develop satisfactory estimates. In Chapter 5 of this dissertation, a novel formulation of CL is developed in terms of an integral (ICL), removing the need to estimate state derivatives while preserving parameter convergence properties. Using ICL, an estimator is developed in Chapter 6 for simultaneously estimating the pose of an object as well as learning a model of its motion for use in a predictor when the object is not visible. A switched systems analysis is provided to demonstrate the stability of the estimation and prediction with learning scheme. Dwell time conditions as well as excitation conditions are developed to ensure estimation errors converge to an arbitrarily small bound. Experimental results are provided to illustrate the performance of each of the developed estimation schemes. The dissertation concludes with a discussion of the contributions and limitations of the developed techniques, as well as avenues for future extensions.
Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji
2016-01-01
For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken with four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracy of 3D model reconstruction of the leaf and stem was highest with a 28-mm lens at the first and third camera positions, which also yielded the largest number of reconstructed fine-scale surface shapes for the leaf and stem. The results confirmed the practicability of our new method for reconstructing fine-scale plant models and accurately estimating plant parameters. They also showed that our system can capture high-resolution 3D images of nursery plants with high efficiency. PMID:27314348
Adjustment and validation of a simulation tool for CSP plants based on parabolic trough technology
NASA Astrophysics Data System (ADS)
García-Barberena, Javier; Ubani, Nora
2016-05-01
This work presents the validation process carried out for a simulation tool especially designed for the energy yield assessment of concentrating solar plants based on parabolic trough (PT) technology. The validation has been carried out by comparing the model estimations with real data collected from a commercial CSP plant. In order to adjust the model parameters used for the simulation, 12 different days were selected from one year of operational data measured at the real plant. The 12 days were simulated and the estimations compared with the measured data, focusing on the most important variables from the simulation point of view: temperatures, pressures, and mass flow of the solar field, gross power, parasitic power, and net power delivered by the plant. Based on these 12 days, the key parameters of the model were properly fixed and the simulation of a whole year performed. The results obtained for a complete year of simulation showed very good agreement for the gross and net total electric production, with biases of 1.47% and 2.02%, respectively. The results proved that the simulation software describes the real operation of the power plant with great accuracy and correctly reproduces its transient behavior.
Analysis of Spin Financial Market by GARCH Model
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2013-08-01
A spin model is used for simulations of financial markets. To determine return volatility in the spin financial market we use the GARCH model, often used for volatility estimation in empirical finance. We apply Bayesian inference, performed by the Markov Chain Monte Carlo method, to the parameter estimation of the GARCH model. It is found that volatility determined by the GARCH model exhibits "volatility clustering", also observed in real financial markets. Using volatility determined by the GARCH model, we examine the mixture-of-distribution hypothesis (MDH) suggested for the asset return dynamics. We find that the returns standardized by volatility are approximately standard normal random variables. Moreover, we find that the absolute standardized returns show no significant autocorrelation. These findings are consistent with the view of the MDH for the return dynamics.
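The GARCH(1,1) recursion and the standardized-return check described above can be sketched as follows. The parameters here are assumed for illustration (the paper fits them by Bayesian MCMC), and ordinary Gaussian returns stand in for the spin-market series:

```python
import math
import random

random.seed(1)

# GARCH(1,1) recursion with assumed (not fitted) parameters:
#   sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
omega, alpha, beta = 0.05, 0.10, 0.85

n = 20000
sigma2 = omega / (1.0 - alpha - beta)   # start at the unconditional variance
returns, sigmas = [], []
for _ in range(n):
    sigma = math.sqrt(sigma2)
    r = sigma * random.gauss(0.0, 1.0)  # return with conditional volatility sigma
    returns.append(r)
    sigmas.append(sigma)
    sigma2 = omega + alpha * r * r + beta * sigma2

# The MDH check from the abstract: standardizing returns by the GARCH
# volatility should leave approximately standard normal variables.
std = [r / s for r, s in zip(returns, sigmas)]
mean = sum(std) / n
var = sum(x * x for x in std) / n - mean * mean
print(round(mean, 2), round(var, 2))  # close to 0 and 1
```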
NASA Astrophysics Data System (ADS)
Svehla, D.; Rothacher, M.
2016-12-01
Is it possible to process Lunar Laser Ranging (LLR) measurements in the geocentric frame in a similar way to how SLR measurements are modelled for GPS satellites, and estimate all global reference frame parameters as in the case of GPS? The answer is yes. We managed to process Lunar laser measurements to Apollo and Luna retro-reflectors on the Moon in a similar way to how we process SLR measurements to GPS satellites. We make use of the latest Lunar libration models and DE430 ephemerides given in the Solar System barycentric frame, and model uplink and downlink Lunar laser ranges in the geocentric frame as one-way measurements, similar to SLR measurements to GPS satellites. In the first part of this contribution we present the estimation of the Lunar orbit as well as the Earth orientation parameters (including UT1 or UT0) with this new formulation. In the second part, we form common-view double-difference LLR measurements between two Lunar retro-reflectors and two LLR telescopes to show the actual noise of the LLR measurements. Since, by forming double-differences of LLR measurements, all range biases are removed and orbit errors are significantly reduced (the Lunar orbit is much farther away than the GPS orbits), one can consider double-difference LLR as an "orbit-free" and "bias-free" differential approach. In the end, we make a comparison with the SLR double-difference approach with Galileo satellites, where we already demonstrated submillimeter precision, and discuss a possible combination of LLR and SLR to GNSS satellites using the double-difference approach.
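The bias-cancelling property of double differences can be shown with a toy calculation. All ranges, biases, and noise levels below are made up for illustration; they are not real LLR data:

```python
import random

random.seed(4)

# One-way ranges rho[i, j] from telescope i to reflector j carry a
# per-telescope range bias b[i] and a per-reflector bias c[j];
# double-differencing cancels both.
rho = {(1, 1): 385000.0010, (1, 2): 385000.5020,
       (2, 1): 384999.8030, (2, 2): 385000.3050}   # "true" ranges, km
b = {1: 0.004, 2: -0.002}    # telescope range biases, km
c = {1: 0.001, 2: 0.003}     # reflector-dependent biases, km
noise = 1e-6                 # 1 mm measurement noise, km

obs = {(i, j): r + b[i] + c[j] + random.gauss(0.0, noise)
       for (i, j), r in rho.items()}

dd_obs = (obs[1, 1] - obs[1, 2]) - (obs[2, 1] - obs[2, 2])
dd_true = (rho[1, 1] - rho[1, 2]) - (rho[2, 1] - rho[2, 2])
print(dd_obs - dd_true)  # residual is pure measurement noise: biases cancelled
```

The same cancellation is why the abstract can call double-difference LLR a "bias-free" differential approach; orbit errors are only reduced, not removed, so they would appear as a small additional residual in a fuller model.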
Quantile regression models of animal habitat relationships
Cade, Brian S.
2003-01-01
Typically, all factors that limit an organism are not measured and included in statistical models used to investigate relationships with their environment. If important unmeasured variables interact multiplicatively with the measured variables, the statistical models often will have heterogeneous response distributions with unequal variances. Quantile regression is an approach for estimating the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Chapter 1 introduces quantile regression and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of estimates for homogeneous and heterogeneous regression models. Chapter 2 evaluates performance of quantile rankscore tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). A permutation F test maintained better Type I errors than the Chi-square T test for models with smaller n, greater number of parameters p, and more extreme quantiles τ. Both versions of the test required weighting to maintain correct Type I errors when there was heterogeneity under the alternative model. An example application related trout densities to stream channel width:depth. Chapter 3 evaluates a drop in dispersion, F-ratio like permutation test for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). Chapter 4 simulates from a large (N = 10,000) finite population representing grid areas on a landscape to demonstrate various forms of hidden bias that might occur when the effect of a measured habitat variable on some animal was confounded with the effect of another unmeasured variable (spatially and not spatially structured). 
Depending on whether interactions of the measured habitat and unmeasured variable were negative (interference interactions) or positive (facilitation interactions), either upper (τ > 0.5) or lower (τ < 0.5) quantile regression parameters were less biased than mean rate parameters. Sampling (n = 20 - 300) simulations demonstrated that confidence intervals constructed by inverting rankscore tests provided valid coverage of these biased parameters. Quantile regression was used to estimate effects of physical habitat resources on a bivalve mussel (Macomona liliana) in a New Zealand harbor by modeling the spatial trend surface as a cubic polynomial of location coordinates.
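A minimal illustration of quantile regression on heterogeneous data, using a coarse grid search over the check (pinball) loss rather than the linear-programming estimators used in practice (the data-generating model and grids are illustrative assumptions):

```python
import random

random.seed(2)

def pinball(u, tau):
    """Check (pinball) loss whose minimizer is the tau-th conditional quantile."""
    return tau * u if u >= 0 else (tau - 1.0) * u

# Heterogeneous data: the spread of y grows with x (an unmeasured
# multiplicative factor), so upper and lower quantiles have different slopes.
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [1.0 + 0.5 * x + x * random.uniform(-1, 1) for x in xs]

def fit(tau):
    """Brute-force search for the line b0 + b1*x minimizing total pinball loss."""
    best = None
    for b0 in [i * 0.2 for i in range(-10, 21)]:
        for b1 in [i * 0.05 for i in range(-20, 41)]:
            loss = sum(pinball(y - (b0 + b1 * x), tau) for x, y in zip(xs, ys))
            if best is None or loss < best[0]:
                best = (loss, b0, b1)
    return best[1], best[2]

b0_lo, b1_lo = fit(0.1)
b0_hi, b1_hi = fit(0.9)
print(b1_lo, b1_hi)  # upper-quantile slope well above lower-quantile slope
```

The unequal slopes are exactly the heterogeneous-response pattern the chapters above exploit: mean regression would report only the single average slope of about 0.5.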
Rouse, William A.; Houseknecht, David W.
2016-02-11
In 2012, the U.S. Geological Survey completed an assessment of undiscovered, technically recoverable oil and gas resources in three source rocks of the Alaska North Slope, including the lower part of the Jurassic to Lower Cretaceous Kingak Shale. In order to identify organic shale potential in the absence of a robust geochemical dataset from the lower Kingak Shale, we introduce two quantitative parameters, $\Delta DT_{\bar{x}}$ and $\Delta DT_z$, estimated from wireline logs from exploration wells and based in part on the commonly used delta-log resistivity ($\Delta \log R$) technique. Calculation of $\Delta DT_{\bar{x}}$ and $\Delta DT_z$ is intended to produce objective parameters that may be proportional to the quality and volume, respectively, of potential source rocks penetrated by a well and can be used as mapping parameters to convey the spatial distribution of source-rock potential. Both the $\Delta DT_{\bar{x}}$ and $\Delta DT_z$ mapping parameters show increased source-rock potential from north to south across the North Slope, with the largest values at the toe of clinoforms in the lower Kingak Shale. Because thermal maturity is not considered in the calculation of $\Delta DT_{\bar{x}}$ or $\Delta DT_z$, total organic carbon values for individual wells cannot be calculated on the basis of $\Delta DT_{\bar{x}}$ or $\Delta DT_z$ alone. Therefore, the $\Delta DT_{\bar{x}}$ and $\Delta DT_z$ mapping parameters should be viewed as first-step reconnaissance tools for identifying source-rock potential.
Ensemble-Based Parameter Estimation in a Coupled General Circulation Model
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-09-10
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
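A toy scalar analogue of ensemble-based parameter estimation, using a perturbed-observation ensemble Kalman update; the model, the parameter (standing in loosely for something like solar penetration depth), and all numbers are purely illustrative, not the study's assimilation system:

```python
import random

random.seed(3)

# Observations follow y_t = p_true * f_t + noise for a known "forcing" f_t.
# An ensemble of parameter values is nudged toward the data each cycle.
p_true, obs_sd, n_ens = 2.5, 0.3, 50
ens = [random.gauss(1.0, 1.0) for _ in range(n_ens)]   # deliberately biased prior

for _ in range(200):
    f = 1.0 + 0.5 * random.random()                    # known forcing this cycle
    y = p_true * f + random.gauss(0.0, obs_sd)         # the noisy observation
    h = [p * f for p in ens]                           # predicted observations
    pm, hm = sum(ens) / n_ens, sum(h) / n_ens
    cov_ph = sum((p - pm) * (hh - hm) for p, hh in zip(ens, h)) / (n_ens - 1)
    var_h = sum((hh - hm) ** 2 for hh in h) / (n_ens - 1)
    k = cov_ph / (var_h + obs_sd ** 2)                 # ensemble Kalman gain
    ens = [p + k * (y + random.gauss(0.0, obs_sd) - hh)  # perturbed-obs update
           for p, hh in zip(ens, h)]

p_est = sum(ens) / n_ens
print(round(p_est, 2))  # ensemble mean converges toward p_true = 2.5
```

The same logic, with the coupled GCM in place of the linear toy model and state augmentation for the parameters, underlies the single- and multiple-parameter experiments summarized above.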
Spaceborne SAR Data for Aboveground-Biomass Retrieval of Indian Tropical Forests
NASA Astrophysics Data System (ADS)
Khati, U.; Singh, G.; Musthafa, M.
2017-12-01
Forests are an important and indispensable part of terrestrial ecosystems and have a direct impact on the global carbon cycle. Forest biophysical parameters such as forest stand height and forest above-ground biomass (AGB) are forest health indicators. Measuring forest biomass using traditional ground survey techniques is labor-intensive and offers very low spatial coverage. Satellite-based remote sensing techniques provide a synoptic view of the earth with continuous measurements over large, inaccessible forest regions. Satellite Synthetic Aperture Radar (SAR) data have been shown to be sensitive to these forest biophysical parameters and have been extensively utilized over boreal and tropical forests. However, there are limited studies over Indian tropical forests due to a lack of auxiliary airborne data and difficulties in manual in situ data collection. In this research work we utilize spaceborne data from TerraSAR-X/TanDEM-X and ALOS-2/PALSAR-2 and implement both Polarimetric SAR and PolInSAR techniques for retrieval of the AGB of a managed tropical forest in India. TerraSAR-X/TanDEM-X provide single-baseline PolInSAR data robust to temporal decorrelation, which are used to accurately estimate forest stand height. The retrieved height is then an input parameter for modelling AGB using the L-band ALOS-2/PALSAR-2 data. The IWCM model is extensively utilized to estimate AGB from SAR observations. In this research we utilize the six-component scattering power decomposition (6SD) parameters and modify the IWCM-based technique for a better retrieval of forest AGB. PolInSAR data show a high height-estimation accuracy, with an r2 of 0.8 and an RMSE of 2 m. Providing this accurate height as input to the modified model, along with the 6SD parameters, yields promising results. The results are validated with extensive field-based measurements and are further analysed in detail.
Experimental criteria for the determination of fractal parameters of premixed turbulent flames
NASA Astrophysics Data System (ADS)
Shepherd, I. G.; Cheng, Robert K.; Talbot, L.
1992-10-01
The influence of spatial resolution, digitization noise, the number of records used for averaging, and the method of analysis on the determination of the fractal parameters of a high Damköhler number, methane/air, premixed, turbulent stagnation-point flame is investigated in this paper. The flow exit velocity was 5 m/s and the turbulent Reynolds number was 70, based on an integral scale of 3 mm and a turbulence intensity of 7%. The light source was a copper vapor laser which delivered 20 ns, 5 mJ pulses at 4 kHz, and the tomographic cross-sections of the flame were recorded by a high-speed movie camera. The spatial resolution of the images is 155 × 121 μm/pixel with a field of view of 50 × 65 mm. The stepping-caliper technique for obtaining the fractal parameters is found to give the clearest indication of the cutoffs and the effects of noise. It is necessary to ensemble average the results from more than 25 statistically independent images to sufficiently reduce the scatter in the fractal parameters. The effects of reduced spatial resolution on fractal plots are estimated by artificial degradation of the resolution of the digitized flame boundaries. The effect of pixel resolution, an apparent increase in flame length below the inner-scale rolloff, appears in the fractal plots when the measurement scale is less than approximately twice the pixel resolution. Although a clearer determination of fractal parameters is obtained by local averaging of the flame boundaries, which removes digitization noise, at low spatial resolution this technique can reduce the fractal dimension. The degree of fractal isotropy of the flame surface can have a significant effect on the estimation of the flame surface area, and hence burning rate, from two-dimensional images. To estimate this isotropy a determination of the outer cutoff is required, and three-dimensional measurements are probably also necessary.
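The stepping-caliper technique can be sketched as follows: walk the flame boundary with a divider of span r and read the fractal dimension D from the slope of log L(r) versus log r, since L(r) ∝ r^(1−D). This is an illustrative implementation, not the authors' code; the remainder shorter than one caliper span is simply dropped:

```python
import numpy as np

def caliper_length(points, r):
    """Total curve length measured with a divider ('caliper') of span r:
    walk the boundary, stepping to the first point at least r away.
    The leftover segment shorter than r is dropped."""
    points = np.asarray(points, dtype=float)
    i, steps = 0, 0
    while True:
        d = np.hypot(points[i:, 0] - points[i, 0],
                     points[i:, 1] - points[i, 1])
        # small tolerance so exact multiples survive floating-point rounding
        nxt = np.nonzero(d >= r * (1.0 - 1e-9))[0]
        if nxt.size == 0:
            break
        i += nxt[0]
        steps += 1
    return steps * r

def fractal_dimension(points, rulers):
    """Fit log L(r) vs log r; the slope equals 1 - D for a fractal boundary."""
    lengths = [caliper_length(points, r) for r in rulers]
    slope = np.polyfit(np.log(rulers), np.log(lengths), 1)[0]
    return 1.0 - slope
```

A smooth (non-fractal) boundary is a useful sanity check: its measured length is independent of the ruler, so the fit returns D ≈ 1.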
Variable disparity-motion estimation based fast three-view video coding
NASA Astrophysics Data System (ADS)
Bae, Kyung-Hoon; Kim, Seung-Cheol; Hwang, Yong Seok; Kim, Eun-Soo
2009-02-01
In this paper, variable disparity-motion estimation (VDME) based three-view video coding is proposed. In the encoding, key-frame coding (KFC) based motion estimation and variable disparity estimation (VDE) are processed for effectively fast three-view video encoding. The proposed algorithms enhance the performance of the 3-D video encoding/decoding system in terms of disparity-estimation accuracy and computational overhead. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm's PSNRs are 37.66 and 40.55 dB, and its processing times 0.139 and 0.124 sec/frame, respectively.
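The PSNR figures quoted above follow the standard definition for 8-bit frames; a minimal sketch (assuming a peak value of 255):

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two frames:
    10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(reference, dtype=float)
                   - np.asarray(reconstructed, dtype=float)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```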
Estimating corresponding locations in ipsilateral breast tomosynthesis views
NASA Astrophysics Data System (ADS)
van Schie, Guido; Tanner, Christine; Karssemeijer, Nico
2011-03-01
To improve cancer detection in mammography, breast exams usually consist of two views per breast. To combine information from both views, radiologists and multiview computer-aided detection (CAD) systems need to match corresponding regions in the two views. In digital breast tomosynthesis (DBT), finding corresponding regions in ipsilateral volumes may be a difficult and time-consuming task for radiologists, because many slices have to be inspected individually. In this study we developed a method to quickly estimate corresponding locations in ipsilateral tomosynthesis views by applying a mathematical transformation. First, a compressed breast model is matched to the tomosynthesis view containing a point of interest. Then we decompress, rotate, and compress again to estimate the location of the corresponding point in the ipsilateral view. In this study we use a simple elastically deformable sphere model to obtain an analytical solution for the transformation in a given DBT case. The model is matched to the volume by using automatic segmentation of the pectoral muscle, breast tissue, and nipple. For validation we annotated 181 landmarks in both views and applied our method to each location. Results show a median 3D distance of 1.5 cm between the actual and estimated locations; a good starting point for a feature-based local search method to link lesions in a multiview CAD system. Half of the estimated locations were at most one slice away from the actual location, making our method useful as a tool in mammographic workstations to interactively find corresponding locations in ipsilateral tomosynthesis views.
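The decompress-rotate-recompress idea can be illustrated in 2-D with a purely linear toy model, in which compression is a simple thickness scaling rather than the paper's elastically deformable sphere (all names and parameters below are illustrative):

```python
import numpy as np

def map_between_views(p, k, theta):
    """Toy 2-D view-to-view mapping: undo the compression scaling k
    along the thickness axis, rotate by theta, then recompress."""
    S = np.diag([1.0, k])            # compression: scale thickness axis by k
    S_inv = np.diag([1.0, 1.0 / k])  # decompression
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])          # rotation between acquisition angles
    return S @ R @ S_inv @ np.asarray(p, dtype=float)
```

Because the mapping is a composition of invertible linear maps, applying it with the opposite rotation angle recovers the original point, which is a handy consistency check.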
A head motion estimation algorithm for motion artifact correction in dental CT imaging
NASA Astrophysics Data System (ADS)
Hernandez, Daniel; Elsayed Eldib, Mohamed; Hegazy, Mohamed A. A.; Hye Cho, Myung; Cho, Min Hyoung; Lee, Soo Yeol
2018-03-01
A small head motion of the patient can compromise the image quality in a dental CT, in which a slow cone-beam scan is adopted. We introduce a retrospective head motion estimation method by which we can estimate the motion waveform from the projection images without employing any external motion monitoring devices. We compute the cross-correlation between every two successive projection images, which results in a sinusoid-like displacement curve over the projection view when there is no patient motion. However, the displacement curve deviates from the sinusoid-like form when patient motion occurs. We develop a method to estimate the motion waveform with a single parameter derived from the displacement curve, with the aid of image entropy minimization. To verify the motion estimation method, we use a lab-built micro-CT that can emulate major head motions during dental CT scans, such as tilting and nodding, in a controlled way. We find that the estimated motion waveform conforms well to the actual motion waveform. To further verify the motion estimation method, we correct the motion artifacts with the estimated motion waveform. After motion artifact correction, the corrected images look almost identical to the reference images, with structural similarity index values greater than 0.81 in the phantom and rat imaging studies.
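The displacement curve between successive projections can be sketched with a simple 1-D cross-correlation of column-sum profiles; this is an illustrative stand-in for the paper's 2-D image cross-correlation, with hypothetical function names:

```python
import numpy as np

def shift_between(proj_a, proj_b):
    """Estimate the lateral shift between two successive projection
    images from the peak of the cross-correlation of their column sums."""
    pa = proj_a.sum(axis=0)
    pb = proj_b.sum(axis=0)
    pa = pa - pa.mean()              # zero-mean to suppress the DC offset
    pb = pb - pb.mean()
    corr = np.correlate(pb, pa, mode="full")
    return int(np.argmax(corr)) - (len(pa) - 1)

def displacement_curve(projections):
    """Shift between every pair of successive projections; this curve is
    sinusoid-like for a static object and deviates when motion occurs."""
    return [shift_between(a, b) for a, b in zip(projections, projections[1:])]
```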
Precise interferometric tracking of the DSCS II geosynchronous orbiter
NASA Astrophysics Data System (ADS)
Border, J. S.; Donivan, F. F., Jr.; Shiomi, T.; Kawano, N.
1986-01-01
A demonstration of the precise tracking of a geosynchronous orbiter by radio metric techniques based on very-long-baseline interferometry (VLBI) has been jointly conducted by the Jet Propulsion Laboratory and Japan's Radio Research Laboratory. Simultaneous observations of a U.S. Air Force communications satellite from tracking stations in California, Australia, and Japan have determined the satellite's position with an accuracy of a few meters. Accuracy claims are based on formal statistics, which include the effects of errors in non-estimated parameters and which are supported by a chi-squared of less than one, and on the consistency of orbit solutions from disjoint data sets. A study made to assess the impact of shorter baselines and reduced data noise concludes that with a properly designed system, similar accuracy could be obtained for either a satellite viewed from stations located within the continental U.S. or for a satellite viewed from stations within Japanese territory.
Scanning laser ophthalmoscopy: optimized testing strategies for psychophysics
NASA Astrophysics Data System (ADS)
Van de Velde, Frans J.
1996-12-01
Retinal function can be evaluated with the scanning laser ophthalmoscope (SLO). The main advantage is precise localization of the psychophysical stimulus on the retina. Four-alternative forced choice (4AFC) and parameter estimation by sequential testing (PEST) are classic adaptive algorithms that have been optimized for use with the SLO and combined with strategies to correct for small eye movements. Efficient calibration procedures are essential for quantitative microperimetry. These techniques precisely measure visual acuity and retinal sensitivity at distinct locations on the retina. A combined 632 nm and IR Maxwellian-view illumination provides maximal transmittance through the ocular media and minimal interference with xanthophyll or hemoglobin. Future modifications of the instrument include the possibility of binocular evaluation, Maxwellian-view control, fundus tracking using normalized gray-scale correlation, and microphotocoagulation. The techniques are useful in low-vision rehabilitation and the application of laser to the retina.
NASA Astrophysics Data System (ADS)
Jayachandra Babu, M.; Sandeep, N.; Ali, M. E.; Nuhait, Abdullah O.
The boundary layer flow across a slendering stretching sheet has received considerable attention owing to its abundant practical applications in nuclear reactor technology, acoustical components, and chemical and manufacturing processes such as polymer extrusion and machine design. With this in view, we analyze the two-dimensional MHD flow across a slendering stretching sheet in the presence of variable viscosity and viscous dissipation. The sheet is assumed to be convectively heated, and convective boundary conditions for heat and mass are employed. Similarity transformations are used to convert the governing nonlinear partial differential equations into a set of nonlinear ordinary differential equations. A Runge-Kutta based shooting technique is utilized to solve the converted equations. Numerical estimates of the physical parameters involved in the problem are calculated for the friction factor and the local Nusselt and Sherwood numbers. The viscosity variation parameter and the chemical reaction parameter show opposite effects on the concentration profile. The heat and mass transfer Biot numbers are helpful in enhancing the temperature and concentration, respectively.
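A Runge-Kutta based shooting technique of the kind used here can be illustrated on a much simpler two-point boundary value problem, y'' = −y with y(0) = 0 and y(1) = 1, where the unknown initial slope is found by secant iteration (a sketch, not the paper's transformed equations):

```python
import numpy as np

def rk4(f, y0, xs):
    """Classical fourth-order Runge-Kutta integration of y' = f(x, y)."""
    y = np.array(y0, dtype=float)
    out = [y.copy()]
    for x0, x1 in zip(xs, xs[1:]):
        h = x1 - x0
        k1 = f(x0, y)
        k2 = f(x0 + h / 2, y + h / 2 * k1)
        k3 = f(x0 + h / 2, y + h / 2 * k2)
        k4 = f(x1, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(y.copy())
    return np.array(out)

def shoot(slope, xs):
    """Integrate y'' = -y with y(0) = 0 and a trial slope y'(0); return y(1)."""
    f = lambda x, y: np.array([y[1], -y[0]])
    return rk4(f, [0.0, slope], xs)[-1, 0]

def shooting_solve(target=1.0, n=200):
    """Secant iteration on the unknown initial slope until y(1) = target."""
    xs = np.linspace(0.0, 1.0, n + 1)
    s0, s1 = 0.5, 2.0
    f0, f1 = shoot(s0, xs) - target, shoot(s1, xs) - target
    for _ in range(20):
        s0, s1, f0 = s1, s1 - f1 * (s1 - s0) / (f1 - f0), f1
        f1 = shoot(s1, xs) - target
        if abs(f1) < 1e-12:
            break
    return s1
```

The exact solution is y(x) = sin(x)/sin(1), so the converged slope should equal 1/sin(1) to within the RK4 truncation error.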
Helioseismology: some current issues concerning model calibration
NASA Astrophysics Data System (ADS)
Gough, D. O.
2002-01-01
Aspects of helioseismic model calibration pertinent to asteroseismological inference are reviewed, with a view to establishing the uncertainties associated with some of the properties of the structure of distant stars that can be inferred from the asteroseismic data to be obtained by Eddington. It is shown that the seismic data to be accrued by Eddington will enormously improve our ability to diagnose the structure of stars, even though some previous estimates of the errors in the derived stellar parameters appear likely to have been somewhat optimistic, because the contribution from imperfect knowledge of the underlying physics was not accounted for.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horne, Steve M.; Thoreson, Greg G.; Theisen, Lisa A.
2016-05-01
The Gamma Detector Response and Analysis Software–Detector Response Function (GADRAS-DRF) application computes the response of gamma-ray and neutron detectors to incoming radiation. This manual provides step-by-step procedures to acquaint new users with the use of the application. The capabilities include characterization of detector response parameters, plotting and viewing measured and computed spectra, analyzing spectra to identify isotopes, and estimating source energy distributions from measured spectra. GADRAS-DRF can compute and provide detector responses quickly and accurately, giving users the ability to obtain usable results in a timely manner (a matter of seconds or minutes).
NASA Astrophysics Data System (ADS)
Xu, Peiliang
2018-06-01
The numerical integration method has been routinely used by major institutions worldwide, for example, NASA Goddard Space Flight Center and the German Research Center for Geosciences (GFZ), to produce global gravitational models from satellite tracking measurements of CHAMP and/or GRACE types. Such gravitational products have found wide multidisciplinary application in the Earth sciences. The method is essentially implemented by solving the differential equations of the partial derivatives of the orbit of a satellite with respect to the unknown harmonic coefficients under the condition of zero initial values. From the mathematical and statistical point of view, satellite gravimetry from satellite tracking is essentially the problem of estimating unknown parameters in Newton's nonlinear differential equations from satellite tracking measurements. We prove that zero initial values for the partial derivatives are incorrect mathematically and not permitted physically. The numerical integration method, as currently implemented and used in mathematics and statistics, chemistry and physics, and satellite gravimetry, is groundless, mathematically and physically. Given Newton's nonlinear governing differential equations of satellite motion with unknown equation parameters and unknown initial conditions, we develop three methods to derive new local solutions around a nominal reference orbit, which are linked to measurements to estimate the unknown corrections to approximate values of the unknown parameters and the unknown initial conditions. Bearing in mind that satellite orbits can now be tracked almost continuously at unprecedented accuracy, we propose the measurement-based perturbation theory and derive globally uniformly convergent solutions to Newton's nonlinear governing differential equations of satellite motion for the next generation of global gravitational models.
Since the solutions are globally uniformly convergent, theoretically speaking, they are able to extract the smallest possible gravitational signals from modern and future satellite tracking measurements, leading to the production of global high-precision, high-resolution gravitational models. By directly turning the nonlinear differential equations of satellite motion into nonlinear integral equations, and recognizing the fact that satellite orbits are measured with random errors, we further reformulate the links between satellite tracking measurements and the globally uniformly convergent solutions to Newton's governing differential equations as a condition adjustment model with unknown parameters, or equivalently, as the weighted least-squares estimation of unknown differential equation parameters with equality constraints, for the reconstruction of global high-precision, high-resolution gravitational models from modern (and future) satellite tracking measurements.
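The core idea of estimating unknown parameters in Newton's differential equations from tracked orbits can be illustrated with a toy linear analogue, x'' = −μx, where μ is recovered by least squares from finite-difference accelerations of the sampled trajectory (illustrative only; the paper's measurement-based perturbation theory is far more general):

```python
import numpy as np

def estimate_mu(t, x):
    """Least-squares estimate of mu in x'' = -mu * x from a sampled
    trajectory, using central finite differences for the acceleration."""
    h = t[1] - t[0]                                  # uniform sampling step
    acc = (x[2:] - 2.0 * x[1:-1] + x[:-2]) / h ** 2  # x''(t) approximation
    xin = x[1:-1]
    # minimize sum (acc + mu * x)^2  =>  mu = -sum(acc * x) / sum(x^2)
    return -np.sum(acc * xin) / np.sum(xin ** 2)
```

For x(t) = cos(2t) the true parameter is μ = 4, and the finite-difference bias is O(h²), so a dense sampling recovers it to high accuracy.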
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Liu, Z.; Zhang, S.
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
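State augmentation, the standard way such ensemble-based parameter estimation is implemented, can be sketched in a scalar twin experiment: the unknown parameter is appended to the state and updated through its ensemble correlation with the observed variable. Everything below is illustrative (model, noise levels, and the small parameter jitter used to prevent ensemble collapse), not the paper's coupled-GCM setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_parameter(a_true=0.9, n_ens=200, n_steps=300, q=0.1, r=0.05):
    """Toy twin experiment: a perturbed-observation ensemble Kalman filter
    with state augmentation recovers the parameter a of x_{t+1} = a x_t + w_t."""
    x_true = 1.0
    x_ens = rng.normal(1.0, 0.5, n_ens)
    a_ens = rng.normal(0.5, 0.2, n_ens)           # biased first guess of a
    for _ in range(n_steps):
        x_true = a_true * x_true + rng.normal(0.0, q)
        y = x_true + rng.normal(0.0, r)           # observe x only, noisily
        x_ens = a_ens * x_ens + rng.normal(0.0, q, n_ens)   # ensemble forecast
        a_ens = a_ens + rng.normal(0.0, 0.005, n_ens)       # jitter vs. collapse
        dx = x_ens - x_ens.mean()                 # forecast anomalies
        da = a_ens - a_ens.mean()
        s = np.mean(dx * dx) + r ** 2             # innovation variance
        innov = y + rng.normal(0.0, r, n_ens) - x_ens       # perturbed obs
        x_ens = x_ens + np.mean(dx * dx) / s * innov        # state update
        a_ens = a_ens + np.mean(da * dx) / s * innov        # parameter update
    return a_ens.mean()
```

The parameter is never observed directly; it is corrected only through its sampled covariance with the observed state, which is exactly the mechanism the twin experiments above rely on.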
Attitude Estimation for Large Field-of-View Sensors
NASA Technical Reports Server (NTRS)
Cheng, Yang; Crassidis, John L.; Markley, F. Landis
2005-01-01
The QUEST measurement noise model for unit vector observations has been widely used in spacecraft attitude estimation for more than twenty years. It was derived under the approximation that the noise lies in the tangent plane of the respective unit vector and is axially symmetrically distributed about the vector. For large field-of-view sensors, however, this approximation may be poor, especially when the measurement falls near the edge of the field of view. In this paper a new measurement noise model is derived based on a realistic noise distribution in the focal-plane of a large field-of-view sensor, which shows significant differences from the QUEST model for unit vector observations far away from the sensor boresight. An extended Kalman filter for attitude estimation is then designed with the new measurement noise model. Simulation results show that with the new measurement model the extended Kalman filter achieves better estimation performance using large field-of-view sensor observations.
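The tangent-plane approximation of the QUEST model corresponds to a rank-deficient covariance σ²(I − bbᵀ) for a unit measurement vector b, with no noise along b itself; a minimal sketch:

```python
import numpy as np

def quest_noise_cov(b, sigma):
    """QUEST measurement-noise covariance: noise confined to the tangent
    plane of the unit vector b and axially symmetric about it."""
    b = np.asarray(b, dtype=float)
    b = b / np.linalg.norm(b)        # ensure a unit vector
    return sigma ** 2 * (np.eye(3) - np.outer(b, b))
```

The defining property is that b lies in the null space of the covariance, which is why the approximation degrades for measurements far from the boresight of a wide field-of-view sensor.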
Arbitrage and Volatility in Chinese Stock Markets
NASA Astrophysics Data System (ADS)
Lu, Shu Quan; Ito, Takao; Zhang, Jianbo
From the point of view of no-arbitrage pricing, what matters is how much volatility a stock has, since volatility measures the amount of profit that can be made from shorting stocks and purchasing options. Under short-sales constraints, or in the absence of options, however, high volatility is likely to signal arbitrage opportunities in the stock market. As China's stock markets are emerging markets, investors are increasingly concerned about their volatility. We estimate volatility models for Chinese stock market indexes using the Markov chain Monte Carlo (MCMC) method and GARCH. We find that the estimated values of the volatility parameters are very high at all data frequencies, suggesting that stock returns are extremely volatile even over long intervals in the Chinese markets. This result also suggests that arbitrage opportunities may exist in the Chinese stock markets.
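The GARCH side of the estimation rests on the GARCH(1,1) conditional-variance recursion, which can be sketched in a few lines (parameter values in the test are illustrative; high persistence, α + β near 1, is what the high estimated volatility parameters reported above correspond to):

```python
import numpy as np

def garch_variance(returns, omega, alpha, beta):
    """Conditional-variance filter of a GARCH(1,1) model:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    initialized at the long-run variance omega / (1 - alpha - beta)."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```

In a full estimation this recursion would sit inside a likelihood that MCMC or maximum likelihood then optimizes over (ω, α, β).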
Analytical Estimation of the Scale of Earth-Like Planetary Magnetic Fields
NASA Astrophysics Data System (ADS)
Bologna, Mauro; Tellini, Bernardo
2014-10-01
In this paper we analytically estimate the magnetic field scale of planets whose physical core conditions are similar to those of Earth, from a statistical physics point of view. We evaluate the magnetic field on the basis of the physical parameters of the center of the planet, such as density, temperature, and core size. We examine the contribution of the Seebeck effect to the magnetic field, showing that a thermally induced electrical current can exist in a rotating fluid sphere. We apply our calculations to Earth, where the currents would be driven by the temperature difference at the outer-inner core boundary, to Jupiter, and to Jupiter's satellite Ganymede. In each case we show that the thermal generation of currents leads to a magnetic field scale comparable to the observed field of the considered celestial body.
Effects of additional data on Bayesian clustering.
Yamazaki, Keisuke
2017-10-01
Hierarchical probabilistic models, such as mixture models, are used for cluster analysis. These models have two types of variables: observable and latent. In cluster analysis, the latent variable is estimated, and it is expected that additional information will improve the accuracy of the estimation of the latent variable. Many proposed learning methods are able to use additional data; these include semi-supervised learning and transfer learning. However, from a statistical point of view, a complex probabilistic model that encompasses both the initial and additional data might be less accurate due to having a higher-dimensional parameter. The present paper presents a theoretical analysis of the accuracy of such a model and clarifies which factor has the greatest effect on its accuracy, the advantages of obtaining additional data, and the disadvantages of increasing the complexity.
Estimating Right Atrial Pressure Using Ultrasounds: An Old Issue Revisited With New Methods.
De Vecchis, Renato; Baldi, Cesare; Giandomenico, Giuseppe; Di Maio, Marco; Giasi, Anna; Cioppa, Carmela
2016-08-01
Knowledge of right atrial pressure (RAP) values is critical to ascertain the existence of a state of hemodynamic congestion, irrespective of the possible presence of signs and symptoms of clinical congestion and cardiac overload, which can be lacking in some conditions of concealed or clinically misleading cardiac decompensation. In addition, a more reliable estimate of RAP would make it possible to determine the systolic pulmonary arterial pressure more accurately with echocardiographic methods alone. The authors briefly illustrate some of the criteria that have been implemented to obtain a non-invasive RAP estimate, some of which have been approved by current guidelines while others still await official endorsement from the scientific societies of cardiology. The sometimes opposing views of researchers who have studied the problem are presented, and the prospects for development of new diagnostic criteria are outlined, in particular those derived from the matched use of two- and three-dimensional echocardiographic parameters.
Wegleitner, Eric J.; Isermann, Daniel A.
2017-01-01
Many biologists use digital images for estimating ages of fish, but the use of images could lead to differences in age estimates and precision because image capture can produce changes in light and clarity compared to directly viewing structures through a microscope. We used sectioned sagittal otoliths from 132 Largemouth Bass Micropterus salmoides and sectioned dorsal spines and otoliths from 157 Walleyes Sander vitreus to determine whether age estimates and among-reader precision were similar when annuli were enumerated directly through a microscope or from digital images. Agreement of ages between viewing methods for three readers was highest for Largemouth Bass otoliths (75-89% among readers), followed by Walleye otoliths (63-70%) and Walleye dorsal spines (47-64%). Most discrepancies (72-96%) were ±1 year, and differences were more prevalent for age-5 and older fish. With few exceptions, mean ages estimated from digital images were similar to ages estimated by directly viewing the structures through the microscope, and among-reader precision did not vary between viewing methods for each structure. However, the number of disagreements we observed suggests that biologists should assess potential differences in age structure that could arise if images of calcified structures are used in the age estimation process.
Review of estimation of oceanic primary productivity using remote sensing methods.
Xu, Hong Yun; Zhou, Wei Feng; Ji, Shi Jian
2016-09-01
Accurate estimation of oceanic primary productivity is of great significance in the assessment and management of fisheries resources, marine ecosystems, global change, and other fields. Traditional measurement and estimation of oceanic primary productivity has to rely on in situ sample data collected by vessels. Satellite remote sensing has the advantages of providing dynamic and eco-environmental parameters of the ocean surface at large scale in real time. Thus, satellite remote sensing has increasingly become an important means of estimating oceanic primary productivity on large spatio-temporal scales. Along with the development of ocean color sensors, models to estimate oceanic primary productivity by satellite remote sensing have been developed that can be broadly summarized as chlorophyll-based, carbon-based, and phytoplankton-absorption-based approaches. The flexibility and complexity of the three kinds of models are presented in this paper. On this basis, the current research status of global estimation of oceanic primary productivity is analyzed and evaluated. In view of these, four research directions need to be strengthened in further study: 1) segmenting and studying global oceanic primary productivity estimation; 2) deepening research on the absorption coefficient of phytoplankton; 3) advancing oceanic remote sensing technology; and 4) improving the in situ measurement of primary productivity.
Harvesting rockfall hazard evaluation parameters from Google Earth Street View
NASA Astrophysics Data System (ADS)
Partsinevelos, Panagiotis; Agioutantis, Zacharias; Tripolitsiotis, Achilles; Steiakakis, Chrysanthos; Mertikas, Stelios
2015-04-01
Rockfall incidents along highways and railways are extremely dangerous for property, infrastructure, and human lives. Several qualitative metrics, such as the Rockfall Hazard Rating System (RHRS) and the Colorado Rockfall Hazard Rating System (CRHRS), have been established to estimate rockfall potential and provide risk maps in order to control and monitor rockfall incidents. The implementation of such metrics for efficient and reliable risk modeling requires accurate knowledge of multi-parametric attributes such as the geological, geotechnical, and topographic parameters of the study area. The Missouri Rockfall Hazard Rating System (MORH RS) identifies the most potentially problematic areas using digital video logging for the determination of parameters such as slope height and angle, face irregularities, etc. This study aims to harvest, in a semi-automated manner, geometric and qualitative measures through open-source platforms that provide 3-dimensional views of the areas of interest. More specifically, the Street View platform from Google Maps is used here to provide essential information towards the 3-dimensional reconstruction of slopes along highways. The potential of image capturing along a programmable virtual route to provide the input data for photogrammetric processing is also evaluated. Moreover, qualitative characterization of the geological and geotechnical status, based on the Street View images, is performed. These attributes are then integrated to deliver a GIS-based rockfall hazard map. The 3-dimensional models are compared to actual photogrammetric measurements in a rockfall-prone area in Crete, Greece, while in-situ geotechnical characterization is also used to compare and validate the hazard risk.
This work is considered as the first step towards the exploitation of open source platforms to improve road safety and the development of an operational system where authorized agencies (i.e., civil protection) will be able to acquire near-real time hazard maps based on video images retrieved either by open source platforms, operational unmanned aerial vehicles, and/or simple video recordings from users. This work has been performed under the framework of the "Cooperation 2011" project ISTRIA (11_SYN_9_13989) funded from the Operational Program "Competitiveness and Entrepreneurship" (co-funded by the European Regional Development Fund (ERDF)) and managed by the Greek General Secretariat for Research and Technology.
CAUSAL INFERENCE WITH A GRAPHICAL HIERARCHY OF INTERVENTIONS
Shpitser, Ilya; Tchetgen, Eric Tchetgen
2017-01-01
Identifying causal parameters from observational data is fraught with subtleties due to the issues of selection bias and confounding. In addition, more complex questions of interest, such as effects of treatment on the treated and mediated effects may not always be identified even in data where treatment assignment is known and under investigator control, or may be identified under one causal model but not another. Increasingly complex effects of interest, coupled with a diversity of causal models in use resulted in a fragmented view of identification. This fragmentation makes it unnecessarily difficult to determine if a given parameter is identified (and in what model), and what assumptions must hold for this to be the case. This, in turn, complicates the development of estimation theory and sensitivity analysis procedures. In this paper, we give a unifying view of a large class of causal effects of interest, including novel effects not previously considered, in terms of a hierarchy of interventions, and show that identification theory for this large class reduces to an identification theory of random variables under interventions from this hierarchy. Moreover, we show that one type of intervention in the hierarchy is naturally associated with queries identified under the Finest Fully Randomized Causally Interpretable Structure Tree Graph (FFRCISTG) model of Robins (via the extended g-formula), and another is naturally associated with queries identified under the Non-Parametric Structural Equation Model with Independent Errors (NPSEM-IE) of Pearl, via a more general functional we call the edge g-formula. Our results motivate the study of estimation theory for the edge g-formula, since we show it arises both in mediation analysis, and in settings where treatment assignment has unobserved causes, such as models associated with Pearl’s front-door criterion. PMID:28919652
NASA Astrophysics Data System (ADS)
Kusaka, Takashi; Miyazaki, Go
2014-10-01
When monitoring target areas covered with vegetation from a satellite, it is very useful to estimate the vegetation index using the surface anisotropic reflectance, which depends on both solar and viewing geometries, from satellite data. In this study, an algorithm is described for estimating optical properties of atmospheric aerosols, such as the optical thickness (τ), the refractive index (Nr), the mixing ratio of small particles in the bimodal log-normal distribution function (C), and the bidirectional reflectance (R), from only the radiance and polarization at the 865 nm channel received by PARASOL/POLDER. The parameters of the bimodal log-normal distribution function (mean radius r1 and standard deviation σ1 of fine aerosols, and r2, σ2 of coarse aerosols) were fixed, with values estimated from the monthly averaged size distribution at AERONET sites managed by NASA near the target area. Moreover, it is assumed that the contribution of the surface reflectance with directional anisotropy to the polarized radiance received by the satellite is small, because our ground-based polarization measurements of light reflected by grassland show that the degrees of polarization of the reflected light are very low at the 865 nm channel. First, aerosol properties were estimated from only the polarized radiance, and then the bidirectional reflectance given by the Ross-Li BRDF model was estimated from only the total radiance, at target areas in PARASOL/POLDER data over the Japanese islands taken on April 28, 2012 and April 25, 2010. The estimated optical thickness of aerosols was checked against values given at AERONET sites, and the estimated BRDF parameters were compared with those of vegetation measured from a radio-controlled helicopter. Consequently, it is shown that the algorithm described in the present study provides reasonable values for aerosol properties and surface bidirectional reflectance.
NASA Astrophysics Data System (ADS)
Tong, M.; Xue, M.
2006-12-01
An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of the drop size distributions of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow, and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. OSS experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with the end result of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be caused by reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
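The state-augmentation idea behind simultaneous state and parameter estimation can be sketched on a scalar toy model. The sketch below uses a stochastic (perturbed-observation) EnKF rather than the EnSRF of the study, and a one-parameter decay model in place of the storm simulation; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, x_true, R = 0.9, 10.0, 0.01    # true parameter, true state, obs variance

# Ensemble of augmented states [x, a]; the parameter prior is deliberately wrong
N = 200
ens = np.column_stack([rng.normal(10.0, 1.0, N),   # state x
                       rng.normal(0.5, 0.2, N)])   # parameter a, initial guess 0.5

for _ in range(30):
    x_true *= a_true                                 # truth evolves
    y = x_true + rng.normal(0.0, np.sqrt(R))         # noisy observation of x
    ens[:, 0] *= ens[:, 1]                           # forecast with each member's own a
    Hx = ens[:, 0]                                   # observation operator selects x
    A = ens - ens.mean(axis=0)
    P_xy = A.T @ (Hx - Hx.mean()) / (N - 1)          # cov of [x, a] with predicted obs
    K = P_xy / (np.var(Hx, ddof=1) + R)              # Kalman gain for state AND parameter
    innov = y + rng.normal(0.0, np.sqrt(R), N) - Hx  # perturbed observations
    ens += np.outer(innov, K)                        # analysis update

a_est = ens[:, 1].mean()
```

Because the parameter is carried in the ensemble alongside the state, its cross-covariance with the predicted observation is what corrects it; the same mechanism drives the multi-parameter experiments described above.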
Structural parameters of young star clusters: fractal analysis
NASA Astrophysics Data System (ADS)
Hetem, A.
2017-07-01
A unified view of star formation in the Universe demands detailed and in-depth studies of young star clusters. This work extends our previous study of fractal statistics estimated for a sample of young stellar clusters (Gregorio-Hetem et al. 2015, MNRAS 448, 2504). The structural properties can lead to significant conclusions about the early stages of cluster formation: 1) virial conditions can be used to distinguish warm collapse; 2) bound or unbound behaviour can lead to conclusions about expansion; and 3) fractal statistics are correlated with the dynamical evolution and age. The error-bar estimation technique most used in the literature is to adopt inferential methods (such as bootstrap) to estimate deviation and variance, which are valid only for an artificially generated cluster. In this paper, we expanded the number of studied clusters in order to enhance the investigation of the cluster properties and dynamical evolution. The structural parameters were compared with fractal statistics and reveal that the clusters' radial density profiles show a tendency for the mean separation of the stars to increase with the average surface density. The sample can be divided into two groups showing different dynamic behaviour, but they have the same dynamical evolution, since the entire sample was revealed to consist of expanding objects in which the substructures do not seem to have been completely erased. These results are in agreement with simulations adopting low surface densities and supervirial conditions.
Contribution to Estimating Bearing Capacity of Pile in Clayey Soils
NASA Astrophysics Data System (ADS)
Drusa, Marián; Gago, Filip; Vlček, Jozef
2016-12-01
The estimation of real geotechnical parameters is a key factor in the safe and economic design of geotechnical structures. One such structure is the pile foundation, which requires proper design and evaluation because it accesses deeper foundation soil and because remediation of inadequately bearing or broken piles is a difficult operation. For this reason, geotechnical field tests such as the cone penetration test (CPT), standard penetration test (SPT), or dynamic penetration test (DP) are carried out in order to obtain continuous information about the soil strata. Compared with a rotary core drilling survey with sampling, these methods are more progressive. From the engineering geologist's point of view, the geological characterization of a locality is most important, but geotechnical engineers are more interested in the real geotechnical parameters of the foundation soils. The role of the engineering geologist cannot be underestimated, because important geological processes at the origin of, or during the history of, a geological environment can explain its behaviour. In an effort to streamline the survey, investigation by penetration tests is performed, as it is able to provide enough information for designers. This paper deals with actual trends in pile foundation design, because there are no new standards and the usable standards are very old. Estimation of the bearing capacity of a single pile is demonstrated through the example of determining the cone factor Nk from CPT testing. The results were then compared with other common methods.
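The cone-factor relation at the heart of the CPT-based estimate can be written down directly: s_u = (q_c − σ_v0) / N_k. The sketch below pairs it with a total-stress (alpha-method) capacity check; N_k = 15, α = 0.5, and N_c = 9 are textbook-style illustrative values, not values from the paper, and the pile geometry is made up.

```python
def undrained_shear_strength(q_c, sigma_v0, n_k=15.0):
    """s_u = (q_c - sigma_v0) / N_k; stresses in kPa, N_k dimensionless."""
    return (q_c - sigma_v0) / n_k

def pile_capacity(s_u, shaft_area, base_area, alpha=0.5, n_c=9.0):
    """Total-stress (alpha) method in clay: shaft plus base resistance (kN)."""
    return alpha * s_u * shaft_area + n_c * s_u * base_area

s_u = undrained_shear_strength(q_c=1500.0, sigma_v0=150.0, n_k=15.0)
q_ult = pile_capacity(s_u, shaft_area=30.0, base_area=0.25)
print(s_u, q_ult)   # 90.0 1552.5
```

In practice N_k is calibrated locally (values of roughly 10 to 20 are common in clays), which is exactly the comparison exercise the abstract describes.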
Bussières, Philippe
2014-05-12
Because it is difficult to obtain transverse views of the plant phloem sieve plate pores, which are short tubes, a method based on longitudinal views is proposed for estimating their number and diameters. This method uses recent techniques for estimating the number and sizes of approximately circular objects from images given by slices perpendicular to the objects. Moreover, because such longitudinal views are obtained from slices that are rather close to the plate centres, whereas pore size may vary with the pore's distance from the plate edge, a sieve plate reconstruction model was developed and incorporated in the method to account for this bias. The method was successfully tested with published longitudinal views of soybean phloem and an exceptional entire transverse view from the same tissue. The method was also validated with simulated slices in two sieve plates from Cucurbita and Phaseolus. This method will likely be useful for estimating and modelling the hydraulic conductivity and architecture of the plant phloem, and it could have applications for other materials with approximately cylindrical structures.
Military display performance parameters
NASA Astrophysics Data System (ADS)
Desjardins, Daniel D.; Meyer, Frederick
2012-06-01
The military display market is analyzed in terms of four of its segments: avionics, vetronics, dismounted soldier, and command and control. Requirements are summarized for a number of technology-driving parameters, to include luminance, night vision imaging system compatibility, gray levels, resolution, dimming range, viewing angle, video capability, altitude, temperature, shock and vibration, etc., for direct-view and virtual-view displays in cockpits and crew stations. Technical specifications are discussed for selected programs.
Incremental Multi-view 3D Reconstruction Starting from Two Images Taken by a Stereo Pair of Cameras
NASA Astrophysics Data System (ADS)
El hazzat, Soulaiman; Saaidi, Abderrahim; Karam, Antoine; Satori, Khalid
2015-03-01
In this paper, we present a new method for multi-view 3D reconstruction based on the use of a binocular stereo vision system, composed of two unattached cameras, to initialize the reconstruction process. Afterwards, the second camera of the stereo vision system (characterized by varying parameters) moves to capture more images at different times, which are used to obtain an almost complete 3D reconstruction. The first two projection matrices are estimated by using a 3D pattern with known properties. After that, 3D scene points are recovered by triangulation of the matched interest points between these two images. The proposed approach is incremental. At each insertion of a new image, the camera projection matrix is estimated using the 3D information already calculated, and new 3D points are recovered by triangulation from the result of matching interest points between the inserted image and the previous one. For the refinement of the new projection matrix and the new 3D points, a local bundle adjustment is performed. First, all projection matrices are estimated, the matches between consecutive images are detected, and a Euclidean sparse 3D reconstruction is obtained. Then, to increase the number of matches and obtain a denser reconstruction, the match propagation algorithm, well suited to this type of camera movement, is applied to the pairs of consecutive images. The experimental results show the power and robustness of the proposed approach.
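The two-view initialization step, recovering 3D points by triangulation from two known projection matrices, can be sketched with the standard linear (DLT) construction. The camera matrices and the test point below are synthetic; the paper's calibration from a 3D pattern is assumed to have already produced P1 and P2.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) pixel coordinates."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)          # null vector of A = homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check: two known cameras observing a known 3D point
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # 1-unit baseline
X_true = np.array([0.3, -0.2, 5.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_hat, X_true))   # True
```

Each newly inserted image repeats the same recipe with its freshly estimated projection matrix, which is why the reconstruction grows incrementally.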
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
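The q-method step described above can be sketched directly: the optimal attitude quaternion minimizing Wahba's loss is the eigenvector of Davenport's K matrix associated with its largest eigenvalue. The quaternion convention (vector part first, scalar last) and the two-vector test case are illustrative; the paper's iterative estimation of the additional parameters is not shown.

```python
import numpy as np

def q_method(body_vecs, ref_vecs, weights):
    """Davenport q-method: optimal quaternion q = [qx, qy, qz, qw] such that
    A(q) rotates reference vectors into the body frame."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1],      # z = sum of w * (b x r)
                  B[2, 0] - B[0, 2],
                  B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)        # K is symmetric
    q = vecs[:, np.argmax(vals)]          # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)

# Two reference vectors rotated 90 degrees about z into the body frame
ref = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
body = [np.array([0.0, -1.0, 0.0]), np.array([1.0, 0.0, 0.0])]
q = q_method(body, ref, weights=[1.0, 1.0])
```

As the abstract notes, no a priori attitude is needed: the eigendecomposition yields the exact optimum for the given parameter values.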
NASA Astrophysics Data System (ADS)
Lee, Harim; Moon, Y.-J.; Na, Hyeonock; Jang, Soojeong; Lee, Jae-Ok
2015-12-01
To prepare for when only single-view observations are available, we have tested whether the 3-D parameters (radial velocity, angular width, and source location) of halo coronal mass ejections (HCMEs) from single-view observations are consistent with those from multiview observations. For this test, we select 44 HCMEs from December 2010 to June 2011 with the following conditions: partial and full HCMEs observed by SOHO and limb CMEs observed by the twin STEREO spacecraft while they were approximately in quadrature. In this study, we compare the 3-D parameters of the HCMEs from three different methods: (1) a geometrical triangulation method, the STEREO CAT tool developed by NASA/CCMC, for multiview observations using STEREO/SECCHI and SOHO/LASCO data; (2) the graduated cylindrical shell (GCS) flux rope model for multiview observations using STEREO/SECCHI data; and (3) an ice cream cone model for single-view observations using SOHO/LASCO data. We find that the radial velocities and source locations of the HCMEs from the three methods are consistent with one another, with high correlation coefficients (≥0.9). However, the angular widths by the ice cream cone model are noticeably underestimated for broad CMEs larger than 100° and for several partial HCMEs. A comparison between the 3-D CME parameters directly measured from the twin STEREO spacecraft and the above 3-D parameters shows that the parameters from multiview observations are more consistent with the STEREO measurements than those from a single view.
NASA Astrophysics Data System (ADS)
Moreno, H. A.; Ogden, F. L.; Steinke, R. C.; Alvarez, L. V.
2015-12-01
Triangulated Irregular Networks (TINs) are increasingly popular for terrain representation in high-performance surface and hydrologic modeling because of their ability to capture significant changes in surface form, such as topographical summits, slope breaks, ridges, valley floors, pits, and cols. This work presents a methodology for estimating slope, aspect, and the components of the incoming solar radiation using a vectorial approach within a topocentric coordinate system, establishing geometric relations between groups of TIN elements and the sun position. A normal vector to the surface of each TIN element describes slope and aspect, while spherical trigonometry allows computing a unit vector defining the position of the sun at each hour and day of year (DOY). Thus, a dot product determines the radiation flux at each TIN element. Remote shading is computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector. Sky view fractions are computed by a simplified scanning algorithm in prescribed directions and are used to determine diffuse radiation. Finally, remote radiation scattering is computed from the sky-view-factor complementary functions for prescribed albedo values of the surrounding terrain, only for significant angles above the horizon. This methodology improves on current algorithms for computing terrain and radiation parameters on TINs in an efficient manner. All terrain features (e.g., slope, aspect, sky view factors, and remote shading) can be pre-computed and stored for easy access in a subsequent ground-surface or hydrologic simulation.
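The per-facet geometry, a normal vector for slope and aspect, and a dot product with the sun unit vector for the direct-beam flux factor, can be sketched for a single TIN element. The aspect convention (clockwise from the +y axis taken as north) is an assumption for illustration.

```python
import numpy as np

def facet_geometry(v0, v1, v2, sun_vec):
    """Slope, aspect, and direct-beam factor for one TIN facet."""
    n = np.cross(v1 - v0, v2 - v0)
    if n[2] < 0:                         # orient the normal upward
        n = -n
    n = n / np.linalg.norm(n)
    slope = np.degrees(np.arccos(n[2]))
    aspect = np.degrees(np.arctan2(n[0], n[1])) % 360.0  # clockwise from +y ("north")
    cos_inc = max(np.dot(n, sun_vec), 0.0)  # zero if the facet faces away from the sun
    return slope, aspect, cos_inc

# Horizontal facet; sun due south at 60 degrees elevation (unit vector)
sun = np.array([0.0, -np.cos(np.radians(60.0)), np.sin(np.radians(60.0))])
v0, v1, v2 = (np.array([0.0, 0.0, 0.0]),
              np.array([1.0, 0.0, 0.0]),
              np.array([0.0, 1.0, 0.0]))
slope, aspect, f = facet_geometry(v0, v1, v2, sun)
print(round(slope, 1), round(f, 3))   # 0.0 0.866
```

Because the normal and the hourly sun vectors are fixed by geometry alone, these quantities can be pre-computed and stored exactly as the abstract describes.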
Documentation of a spreadsheet for time-series analysis and drawdown estimation
Halford, Keith J.
2006-01-01
Drawdowns during aquifer tests can be obscured by barometric pressure changes, earth tides, regional pumping, and recharge events in the water-level record. These stresses can create water-level fluctuations that should be removed from observed water levels prior to estimating drawdowns. Simple models have been developed for estimating unpumped water levels during aquifer tests that are referred to as synthetic water levels. These models sum multiple time series such as barometric pressure, tidal potential, and background water levels to simulate non-pumping water levels. The amplitude and phase of each time series are adjusted so that synthetic water levels match measured water levels during periods unaffected by an aquifer test. Differences between synthetic and measured water levels are minimized with a sum-of-squares objective function. Root-mean-square errors during fitting and prediction periods were compared multiple times at four geographically diverse sites. Prediction error equaled fitting error when fitting periods were greater than or equal to four times prediction periods. The proposed drawdown estimation approach has been implemented in a spreadsheet application. Measured time series are independent so that collection frequencies can differ and sampling times can be asynchronous. Time series can be viewed selectively and magnified easily. Fitting and prediction periods can be defined graphically or entered directly. Synthetic water levels for each observation well are created with earth tides, measured time series, moving averages of time series, and differences between measured and moving averages of time series. Selected series and fitting parameters for synthetic water levels are stored and drawdowns are estimated for prediction periods. Drawdowns can be viewed independently and adjusted visually if an anomaly skews initial drawdowns away from 0. 
The number of observations in a drawdown time series can be reduced by averaging across user-defined periods. Raw or reduced drawdown estimates can be copied from the spreadsheet application or written to tab-delimited ASCII files.
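The fitting step, adjusting the amplitude of each time series so the synthetic water level matches measurements, reduces to linear least squares when phase shifts are folded into the regressors. The sketch below uses stand-in barometric and tidal series (the 1.9324 cycles/day frequency is merely M2-like), not the spreadsheet's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 30.0, 0.25)                 # days, four samples per day
baro = 0.3 * np.sin(2 * np.pi * t / 5.0)       # stand-in barometric series (m)
tide = np.cos(2 * np.pi * 1.9324 * t)          # stand-in earth-tide series

# "Measured" water level: scaled stresses plus noise (no pumping in this record)
wl = 12.0 - 0.8 * baro + 0.05 * tide + rng.normal(0.0, 0.002, t.size)

# Synthetic water level = c0 + c1*baro + c2*tide, amplitudes fit by least squares
G = np.column_stack([np.ones_like(t), baro, tide])
coef, *_ = np.linalg.lstsq(G, wl, rcond=None)
synthetic = G @ coef
drawdown = wl - synthetic   # residual; nonzero only during an actual aquifer test
```

During a real test, the coefficients are fit over unaffected periods and the synthetic series is extrapolated through the pumping period, so the residual there is the drawdown estimate.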
A control theoretic model of driver steering behavior
NASA Technical Reports Server (NTRS)
Donges, E.
1977-01-01
A quantitative description of driver steering behavior, in the form of a mathematical model, is presented. The steering task is divided into two levels: (1) the guidance level, involving the perception of the instantaneous and future course of the forcing function provided by the forward view of the road, and the response to it in an anticipatory open-loop control mode; (2) the stabilization level, whereby any occurring deviations from the forcing function are compensated for in a closed-loop control mode. This concept of the duality of the driver's steering activity led to a newly developed two-level model of driver steering behavior. Its parameters are identified on the basis of data measured in driving-simulator experiments. The parameter estimates for both levels of the model show significant dependence on the experimental situation, which can be characterized by variables such as vehicle speed and desired path curvature.
Nishino, Ko; Lombardi, Stephen
2011-01-01
We introduce a novel parametric bidirectional reflectance distribution function (BRDF) model that can accurately encode a wide variety of real-world isotropic BRDFs with a small number of parameters. The key observation we make is that a BRDF may be viewed as a statistical distribution on a unit hemisphere. We derive a novel directional statistics distribution, which we refer to as the hemispherical exponential power distribution, and model real-world isotropic BRDFs as mixtures of it. We derive a canonical probabilistic method for estimating the parameters, including the number of components, of this novel directional statistics BRDF model. We show that the model captures the full spectrum of real-world isotropic BRDFs with high accuracy, but a small footprint. We also demonstrate the advantages of the novel BRDF model by showing its use for reflection component separation and for exploring the space of isotropic BRDFs.
Improved national growth rate method: a comment.
Begum, N
1991-09-01
Rahman's 1987 paper on an improvement to the National Growth Rate Method (NGRM) is discussed. Rahman's assumption is that migration into or out of a city or region is constant; because the method requires minimal data, it is suitable for application in developing countries. This assumption, however, makes the model inappropriate for developing countries, which are known to have nonuniform rates of population change. City size also affects the migration pattern: larger cities, with greater numbers of industrial and business concerns and social services, receive a rapid influx of new migrants. This view is also reflected in Rahman's paper. As an example, Dhaka SMA, Bangladesh received about 60% more migrants across the 2 periods: 130,000 in-migrants/year from 1974 to 1981 vs. 82,000/year from 1961 to 1974. The Chittagong, Khulna, and Rajshahi SMAs had similar growth from 1961 to 1981, but at a slower rate in the 2nd period. A positive contribution of the Rahman paper is its identification of the problem of the nuisance parameter. Rahman points out that the definition of the migration rate is flawed by the traditional NGRM parameter describing the natural increase of migrants. Recognition of this flaw and the development of the simple case of uniform migration are a good beginning for a more realistic model of migration. It is suggested that an extra parameter be introduced to represent departure from uniformity in the estimation. More data would be required. If the task is to use only 2 censuses for estimation of a single parameter, then there is a seemingly insurmountable problem.
Global model for the lithospheric strength and effective elastic thickness
NASA Astrophysics Data System (ADS)
Tesauro, Magdala; Kaban, Mikhail K.; Cloetingh, Sierd A. P. L.
2013-08-01
Global distributions of the strength and effective elastic thickness (Te) of the lithosphere are estimated using physical parameters from recent crustal and lithospheric models. For the Te estimation we apply a new approach, which makes it possible to take into account variations of the Young modulus (E) within the lithosphere. In view of the large uncertainties affecting strength estimates, we evaluate global strength and Te distributions for possible end-member 'hard' (HRM) and 'soft' (SRM) rheology models of the continental crust. Temperature within the lithosphere has been estimated using a recent tomography model of Ritsema et al. (2011), which has much higher horizontal resolution than previous global models. Most of the strength is localized in the crust for the HRM and in the mantle for the SRM. These results contribute to the long debate on the applicability of the "crème brûlée" or "jelly-sandwich" model of lithospheric structure. Changing from the SRM to the HRM turns most of the continental areas from a totally decoupled mode to a fully coupled mode of the lithospheric layers. However, in areas characterized by a high thermal regime and thick crust, the layers remain decoupled even for the HRM. At the same time, for the inner parts of the cratons the lithospheric layers are coupled in both models. Therefore, rheological variations lead to large changes in the integrated strength and Te distribution in regions characterized by intermediate thermal conditions. In these areas temperature uncertainties have a greater effect, since this parameter principally determines rheological behavior. Comparison of the Te estimates for both models with those determined from flexural loading and spectral analysis shows that the 'hard' rheology is likely applicable to cratonic areas, whereas the 'soft' rheology is more representative of young orogens.
Generalized estimators of avian abundance from count survey data
Royle, J. Andrew
2004-01-01
I consider modeling avian abundance from spatially referenced bird count data collected according to common protocols such as capture-recapture, multiple observer, removal sampling, and simple point counts. Small sample sizes and large numbers of parameters have motivated many analyses that disregard the spatial indexing of the data and thus do not provide an adequate treatment of spatial structure. I describe a general framework for modeling spatially replicated data that regards local abundance as a random process, motivated by the view that the set of spatially referenced local populations (at the sample locations) constitutes a metapopulation. Under this view, attention can be focused on developing a model for the variation in local abundance independent of the sampling protocol being considered. The metapopulation model structure, when combined with the data-generating model, defines a simple hierarchical model that can be analyzed using conventional methods. The proposed modeling framework is completely general in the sense that broad classes of metapopulation models may be considered, site-level covariates on detection and abundance may be included, and estimates of abundance and related quantities may be obtained for sample locations, groups of locations, and unsampled locations. Two brief examples are given, the first involving simple point counts and the second based on temporary removal counts. Extension of these models to open systems is briefly discussed.
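For the simple point-count protocol, this hierarchical structure reduces to the binomial/Poisson N-mixture: counts y_it ~ Binomial(N_i, p) with N_i ~ Poisson(λ), and the likelihood marginalizes the unobserved local abundances N_i. A minimal sketch, with made-up counts and a fixed detection probability:

```python
import math

def nmixture_loglik(counts, lam, p, n_max=80):
    """Log-likelihood of the binomial/Poisson N-mixture model.
    counts: per-site lists of repeated point counts; N is marginalized out."""
    ll = 0.0
    for site in counts:
        s = 0.0
        for n in range(max(site), n_max + 1):        # N cannot be below max count
            pois = math.exp(-lam) * lam ** n / math.factorial(n)
            binom = 1.0
            for y in site:
                binom *= math.comb(n, y) * p ** y * (1 - p) ** (n - y)
            s += pois * binom
        ll += math.log(s)
    return ll

# Toy data: 3 sites, 2 visits each; detection probability fixed at 0.5
counts = [[2, 3], [0, 1], [4, 2]]
best_lam = max([1.0, 2.0, 4.0, 8.0],
               key=lambda lam: nmixture_loglik(counts, lam, 0.5))
print(best_lam)
```

In practice λ and p are estimated jointly (and λ may depend on site-level covariates); the grid over λ here only illustrates how the marginal likelihood ranks abundance models.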
Flight-path estimation in passive low-altitude flight by visual cues
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Kohn, S.
1993-01-01
A series of experiments was conducted, in which subjects had to estimate the flight path while passively being flown in straight or in curved motion over several types of nominally flat, textured terrain. Three computer-generated terrain types were investigated: (1) a random 'pole' field, (2) a flat field consisting of random rectangular patches, and (3) a field of random parallelepipeds. Experimental parameters were the velocity-to-height (V/h) ratio, the viewing distance, and the terrain type. Furthermore, the effect of obscuring parts of the visual field was investigated. Assumptions were made about the basic visual-field information by analyzing the pattern of line-of-sight (LOS) rate vectors in the visual field. The experimental results support these assumptions and show that, for both a straight as well as a curved flight path, the estimation accuracy and estimation times improve with the V/h ratio. Error scores for the curved flight path are found to be about 3 deg in visual angle higher than for the straight flight path, and the sensitivity to the V/h ratio is found to be considerably larger. For the straight motion, the flight path could be estimated successfully from local areas in the far field. Curved flight-path estimates have to rely on the entire LOS rate pattern.
APPLICATION OF RADIOISOTOPES TO THE QUANTITATIVE CHROMATOGRAPHY OF FATTY ACIDS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budzynski, A.Z.; Zubrzycki, Z.J.; Campbell, I.G.
1959-10-31
The paper reports work on the use of I-131, Zn-65, Sr-90, Zr-95, and Ce-144 for the quantitative estimation of fatty acids on paper chromatograms, and for determination of the degree of unsaturation of components of resolved fatty acid mixtures. I-131 is used to iodinate unsaturated fatty acids, and the amount of such acids is determined from the radiochromatogram. The degree of unsaturation of fatty acids is determined by estimation of the specific activity of spots. The other isotopes have been examined from the point of view of their suitability for estimation of total amounts of fatty acids by formation of insoluble radioactive soaps held on the chromatogram. In particular, work is reported on the quantitative estimation of saturated fatty acids by measurement of the activity of their insoluble soaps with radioactive metals. Various quantitative relationships are described between the amount of fatty acid in a spot and such parameters as radiometrically estimated spot length, width, maximum intensity, and integrated spot activity. A convenient detection apparatus for taking radiochromatograms is also described. In conjunction with conventional chromatographic methods for resolving fatty acids, the method permits estimation of the composition of fatty acid mixtures obtained from biological material. (auth)
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for the estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for the reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest were estimated by nonlinear regression. Optimal parameter estimates range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
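The regression machinery referred to above can be sketched with a Gauss-Newton iteration, reading parameter correlations off the final (JᵀJ)⁻¹. The exponential "head recession" model and its starting values are stand-ins for the aquifer model, chosen only so the example is self-contained.

```python
import numpy as np

def gauss_newton(f, jac, p0, x, y_obs, n_iter=30):
    """Gauss-Newton nonlinear least squares; also returns the parameter
    correlation matrix derived from the final (J^T J)^-1."""
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r = y_obs - f(x, p)                       # residuals
        J = jac(x, p)                             # sensitivity (Jacobian) matrix
        p = p + np.linalg.solve(J.T @ J, J.T @ r) # normal-equations step
    cov = np.linalg.inv(J.T @ J)                  # scaled parameter covariance
    corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
    return p, corr

# Toy stand-in "model": exponential head recession h = a * exp(-b * t)
f = lambda t, p: p[0] * np.exp(-p[1] * t)
jac = lambda t, p: np.column_stack([np.exp(-p[1] * t),
                                    -p[0] * t * np.exp(-p[1] * t)])
t = np.linspace(0.0, 10.0, 40)
y = f(t, [5.0, 0.3])                     # synthetic noise-free observations
p_hat, corr = gauss_newton(f, jac, [4.0, 0.5], t, y)
```

High off-diagonal entries of `corr` signal exactly the insensitivity/correlation problem that prevented estimating all zone parameters independently.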
ERIC Educational Resources Information Center
Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.
2016-01-01
The aim of this study is to determine the variance difference between maximum likelihood and expected a posteriori estimation methods, viewed from the number of test items of an aptitude test. The variance represents the accuracy achieved by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…
An improved method for nonlinear parameter estimation: a case study of the Rössler model
NASA Astrophysics Data System (ADS)
He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan
2016-08-01
Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicate that its estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use all of the known time series for all of the components of a dynamical system to estimate the parameters in a single component at a time, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme can greatly improve the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
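The component-by-component idea can be illustrated without the EA itself: given time series for all components of the Rössler system (dx/dt = −y − z, dy/dt = x + ay, dz/dt = b + z(x − c)), the parameter a follows from the y-equation alone and (b, c) from the z-equation alone. The linear least-squares fit on finite-difference derivatives below is a stand-in for the paper's evolutionary search.

```python
import numpy as np

def rossler(s, a=0.2, b=0.2, c=5.7):
    x, y, z = s
    return np.array([-y - z, x + a * y, b + z * (x - c)])

# Reference trajectory with the true parameters (RK4 integration)
dt, n = 0.01, 20000
traj = np.empty((n, 3))
traj[0] = (1.0, 1.0, 1.0)
for k in range(n - 1):
    s = traj[k]
    k1 = rossler(s)
    k2 = rossler(s + 0.5 * dt * k1)
    k3 = rossler(s + 0.5 * dt * k2)
    k4 = rossler(s + dt * k3)
    traj[k + 1] = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x, y, z = traj.T
dy = np.gradient(y, dt)          # finite-difference derivatives of the
dz = np.gradient(z, dt)          # "observed" components

# y-equation alone: dy = x + a*y  ->  a by one-dimensional least squares
a_hat = np.sum((dy - x) * y) / np.sum(y * y)
# z-equation alone: dz = b + z*x - c*z  ->  (b, c) by linear least squares
G = np.column_stack([np.ones_like(z), -z])
b_hat, c_hat = np.linalg.lstsq(G, dz - z * x, rcond=None)[0]
print(round(a_hat, 2), round(b_hat, 2), round(c_hat, 2))
```

Each stage involves only the unknowns of one component, which is exactly why the staged search space stays small.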
NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT)-specific parameters at sites with measurement data such as NEE, and application of the parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or with a time series of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters reproduced the NEE measurement data better in the verification periods, including the annual NEE sum (23% improvement), the annual NEE cycle, and the average diurnal NEE course (error reduction by a factor of 1.6); ii) parameters estimated from seasonal NEE data outperformed parameters estimated from yearly data; iii) those seasonal parameters were also often significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters significantly improve land surface model predictions at independent verification sites and for independent verification periods, demonstrating their potential for upscaling.
However, the simulation results also indicate that the estimated parameters may mask other model errors. This would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
NASA Astrophysics Data System (ADS)
Verrelst, Jochem; Rivera, J. P.; Alonso, L.; Guanter, L.; Moreno, J.
2012-04-01
ESA’s upcoming satellites Sentinel-2 (S2) and Sentinel-3 (S3) aim to ensure continuity with Landsat 5/7, SPOT-5, SPOT-Vegetation, and Envisat MERIS observations by providing superspectral images of high spatial and temporal resolution. S2 and S3 will deliver near-real-time operational products with high accuracy for land monitoring. This unprecedented data availability leads to an urgent need for developing robust and accurate retrieval methods. Machine learning regression algorithms are powerful candidates for the estimation of biophysical parameters from satellite reflectance measurements because of their ability to perform adaptive, nonlinear data fitting. Using data from the ESA-led field campaign SPARC (Barrax, Spain), it was recently found [1] that Gaussian process regression (GPR) outperformed competing machine learning algorithms such as neural networks, support vector regression, and kernel ridge regression, both in terms of accuracy and computational speed. For various Sentinel configurations (S2-10m, S2-20m, S2-60m, and S3-300m), three important biophysical parameters were estimated: leaf chlorophyll content (Chl), leaf area index (LAI), and fractional vegetation cover (FVC). GPR was the only method that reached the 10% precision required by end users in the estimation of Chl. In view of implementing the regressor in operational monitoring applications, here the portability of locally trained GPR models to other images was evaluated. The associated confidence maps proved to be a good indicator for evaluating the robustness of the trained models. Consistent retrievals were obtained across the different images, particularly over agricultural sites. To make the method suitable for operational use, however, the poorer confidences over bare-soil areas suggest that the training dataset should be expanded with inputs from various land cover types.
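A minimal GPR sketch shows the two outputs the abstract relies on: a predictive mean (the retrieval) and a predictive variance (the per-pixel confidence map). The RBF kernel, one-dimensional input, and all hyperparameter values are illustrative, not those of the SPARC-trained models.

```python
import numpy as np

def gpr_predict(X, y, X_star, length=1.0, sigma_f=1.0, sigma_n=0.1):
    """GP regression with an RBF kernel: predictive mean and variance.
    The predictive variance plays the role of the per-pixel confidence map."""
    def k(A, B):
        return sigma_f ** 2 * np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2
                                     / length ** 2)
    K = k(X, X) + sigma_n ** 2 * np.eye(X.size)   # noisy training covariance
    Ks = k(X, X_star)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    cov = k(X_star, X_star) - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

# 1-D toy: train on a smooth curve, query one point inside and one far outside
X = np.linspace(0.0, 5.0, 25)
y = np.sin(X)
mean, var = gpr_predict(X, y, np.array([2.5, 10.0]))
print(var[0] < var[1])   # True: far from the training data, confidence is low
```

This is the mechanism behind the portability check above: pixels whose spectra lie far from the training inputs (e.g., bare soil) receive high predictive variance, flagging unreliable retrievals.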
SU-E-I-25: Determining Tube Current, Tube Voltage and Pitch Suitable for Low-Dose Lung Screening CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, K; Matthews, K
2014-06-01
Purpose: The quality of a computed tomography (CT) image and the dose delivered during its acquisition depend upon the acquisition parameters used. Tube current, tube voltage, and pitch are acquisition parameters that potentially affect image quality and dose. This study investigated physicians' abilities to characterize small, solid nodules in low-dose CT images for combinations of current, voltage and pitch, for three CT scanner models. Methods: Lung CT images of a Data Spectrum anthropomorphic torso phantom were acquired with various combinations of pitch, tube current, and tube voltage; this phantom was used because acrylic beads of various sizes could be placed within the lung compartments to simulate nodules. The phantom was imaged on two 16-slice scanners and a 64-slice scanner. The acquisition parameters spanned a range of estimated CTDI levels; the CTDI estimates from the acquisition software were verified by measurement. Several experienced radiologists viewed the phantom lung CT images and noted nodule location, size and shape, as well as the acceptability of overall image quality. Results: Image quality for assessment of nodules was deemed unsatisfactory for all scanners at 80 kV (any tube current) and at 35 mA (any tube voltage). Tube current of 50 mA or more at 120 kV resulted in similar assessments from all three scanners. Physician-measured sphere diameters were closer to actual diameters for larger spheres, higher tube current, and higher kV. Pitch influenced size measurements less for larger spheres than for smaller spheres. CTDI was typically overestimated by the scanner software compared to measurement. Conclusion: Based on this survey of acquisition parameters, a low-dose CT protocol of 120 kV, 50 mA, and pitch of 1.4 is recommended to balance patient dose and acceptable image quality. For three models of scanners, this protocol resulted in estimated CTDIs from 2.9–3.6 mGy.
A Neural-Dynamic Architecture for Concurrent Estimation of Object Pose and Identity
Lomp, Oliver; Faubel, Christian; Schöner, Gregor
2017-01-01
Handling objects or interacting with a human user about objects on a shared tabletop requires that objects be identified after learning from a small number of views and that object pose be estimated. We present a neurally inspired architecture that learns object instances by storing features extracted from a single view of each object. Input features are color and edge histograms from a localized area that is updated during processing. The system finds the best-matching view for the object in a novel input image while concurrently estimating the object’s pose, aligning the learned view with current input. The system is based on neural dynamics, computationally operating in real time, and can handle dynamic scenes directly off live video input. In a scenario with 30 everyday objects, the system achieves recognition rates of 87.2% from a single training view for each object, while also estimating pose quite precisely. We further demonstrate that the system can track moving objects, and that it can segment the visual array, selecting and recognizing one object while suppressing input from another known object in the immediate vicinity. Evaluation on the COIL-100 dataset, in which objects are depicted from different viewing angles, revealed recognition rates of 91.1% on the first 30 objects, each learned from four training views. PMID:28503145
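The view-matching step above can be illustrated in miniature: represent each learned object view as a color histogram and pick the best match for a new input by histogram intersection. The neural-dynamic machinery of the paper is not modeled; the object names and pixel data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def color_histogram(pixels, bins=8):
    # normalized 3-D (R, G, B) histogram of pixel colors in [0, 1]
    h, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0.0, 1.0)] * 3)
    return h.ravel() / h.sum()

def intersection(h1, h2):
    return np.minimum(h1, h2).sum()            # 1.0 means identical histograms

# One stored view per "object": a cluster of pixel colors around a hue
views = {name: color_histogram(rng.normal(mu, 0.05, (500, 3)) % 1.0)
         for name, mu in [("mug", 0.2), ("book", 0.5), ("phone", 0.8)]}

query = color_histogram(rng.normal(0.5, 0.05, (500, 3)) % 1.0)
best = max(views, key=lambda name: intersection(views[name], query))
```

In the full architecture this comparison runs concurrently with pose alignment, so the histogram is recomputed over a localized, continually updated image region rather than over fixed pixels as here.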
Estimability of geodetic parameters from space VLBI observables
NASA Technical Reports Server (NTRS)
Adam, Jozsef
1990-01-01
The feasibility of space very long baseline interferometry (VLBI) observables for geodesy and geodynamics is investigated. A brief review of space VLBI systems from the point of view of potential geodetic application is given. A selected notational convention is used to jointly treat the VLBI observables of different types of baselines within a combined ground/space VLBI network. The basic equations of the space VLBI observables appropriate for covariance analysis are derived and included. The corresponding equations for the ground-to-ground baseline VLBI observables are also given for comparison. The simplified expressions of the mathematical models for both space VLBI observables (time delay and delay rate) include the ground station coordinates, the satellite orbital elements, the Earth rotation parameters, the radio source coordinates, and clock parameters. The observation equations with these parameters were examined in order to determine which of them are separable or nonseparable. Singularity problems arising from the coordinate system definition and critical configurations are studied. Linear dependencies between partials are analytically derived. The mathematical models for ground-space baseline VLBI observables were tested with simulated data in a series of numerical experiments. Singularity due to datum defect is confirmed.
Devbhuti, Pritesh; Sikdar, Debasis; Saha, Achintya; Sengupta, Chandana
2011-01-01
A drug may alter the blood-lipid profile and induce lipid peroxidation on administration in the body. Antioxidants may play a beneficial role in controlling such negative alterations in lipid profile and lipid peroxidation. In this context, the present in vivo study was carried out to evaluate the role of ascorbic acid as an antioxidant on netilmicin-induced alteration of blood lipid profile and peroxidation parameters. Rabbits were used as experimental animals and blood was collected to estimate blood-lipid profiles, such as total cholesterol (TCh), high density lipoprotein cholesterol (HDL-Ch), low density lipoprotein cholesterol (LDL-Ch), very low density lipoprotein cholesterol (VLDL-Ch), triglycerides (Tg), phospholipids (PL), and total lipids (TL), as well as peroxidation parameters, such as malondialdehyde (MDA), 4-hydroxy-2-nonenal (HNE), reduced glutathione (GSH) and nitric oxide (NO). The results revealed that netilmicin caused significant enhancement of MDA, HNE, TCh, LDL-Ch, VLDL-Ch, and Tg levels and reduction in GSH, NO, HDL-Ch, PL, and TL levels. On co-administration, ascorbic acid was found to be effective in reducing netilmicin-induced negative alterations of the above parameters.
[DPOAE in tinnitus patients with cochlear hearing loss considering hyperacusis and misophonia].
Sztuka, Aleksandra; Pośpiech, Lucyna; Gawron, Wojciech; Dudek, Krzysztof
2006-01-01
The most probable site generating tinnitus in the auditory pathway is the outer hair cells (OHC) of the cochlea, whose activity can be assessed with otoacoustic emissions. The goals of the investigation were to characterize DPOAE otoemissions in tinnitus patients with cochlear hearing loss, to estimate the diagnostic value of DPOAE parameters for analyzing cochlear function in these patients (emphasizing the parameters most useful for localizing tinnitus generators), and to assess the possible influence of hyperacusis and misophonia on DPOAE parameters in tinnitus patients with cochlear hearing loss. The study material comprised 42 tinnitus patients with cochlear hearing loss; the control group consisted of 21 patients without tinnitus with the same type of hearing loss. The tinnitus patients were further divided into three subgroups based on audiologic findings: with hyperacusis, with misophonia, and without either. After a tinnitus history and physical examination, all patients underwent pure tone and impedance audiometry, suprathreshold tests, ABR, and evaluation of the audiometric average and discomfort level. DPOAE otoemissions were then measured in three procedures. In the first, amplitudes were assessed at two points per octave; in the second ("fine structure" method), at 16-20 points per octave (f2/f1 = 1.2, L1 = L2 = 70 dB). The third procedure recorded the growth rate function in three series for input tones of f2 = 2002, 4004, and 6006 Hz (f2/f1 = 1.22) and levels L1 = L2 increasing in 5 dB steps in each series. DPOAE amplitudes recorded at two points per octave and with the fine structure method are very valuable parameters for estimating cochlear function in tinnitus patients with cochlear hearing loss. The decreased DPOAE amplitudes in patients with cochlear hearing loss and tinnitus suggest a significant role of OHC pathology, unbalanced by IHC injury, in the generation of tinnitus in patients with hearing loss of cochlear origin.
DPOAE fine structure provides additional information beyond the amplitudes recorded at two points per octave, extending the number of f2 frequencies at which differences between the tinnitus and control groups can be observed. The growth rate function cannot be the only parameter used to evaluate DPOAE in tinnitus patients with cochlear hearing loss, including subjects with hyperacusis and misophonia. Hyperacusis has an important influence on DPOAE amplitude, substantially increasing it in the examined group of tinnitus patients.
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. 
Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
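The core of OED/PE can be sketched with a D-optimality criterion: choose the sampling times that maximize the determinant of the Fisher information matrix for a pair of parameters. The two-parameter saturation model and nominal parameter values below are stand-ins for the CTMI model, not the authors' design.

```python
import numpy as np
from itertools import combinations

a, b = 2.0, 0.5                                # nominal (presumed known) values

def jac(t):
    # sensitivities dy/da and dy/db for the model y(t) = a * (1 - exp(-b t))
    return np.array([1.0 - np.exp(-b * t), a * t * np.exp(-b * t)])

def d_criterion(times):
    # Fisher information for unit measurement noise; D-optimal = max det
    F = sum(np.outer(jac(t), jac(t)) for t in times)
    return np.linalg.det(F)

candidates = np.linspace(0.5, 10.0, 20)        # feasible sampling times
best = max(combinations(candidates, 2), key=d_criterion)
```

In the sequential strategy of the paper this design step would be repeated, updating the nominal parameter values after each implemented experiment; in the global strategy all pairwise designs share the same initial estimates.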
Overview of NASA's Carbon Monitoring System Flux-Pilot Project
NASA Technical Reports Server (NTRS)
Pawson, Steven; Gunson, Michael R.; Jucks, Kenneth
2011-01-01
NASA's space-based observations of physical, chemical and biological parameters in the Earth System along with state-of-the-art modeling capabilities provide unique capabilities for analyses of the carbon cycle. The Carbon Monitoring System is developing an exploratory framework for detecting carbon in the environment and its changes, with a view towards contributing to national and international monitoring activities. The Flux-Pilot Project aims to provide a unified view of land-atmosphere and ocean-atmosphere carbon exchange, using observation-constrained models. Central to the project is the application of NASA's satellite observations (especially MODIS), the ACOS retrievals of the JAXA-GOSAT observations, and the "MERRA" meteorological reanalysis produced with GEOS-5. With a primary objective of estimating uncertainty in computed fluxes, two land- and two ocean-systems are run for 2009-2010 and compared with existing flux estimates. A transport model is used to evaluate simulated CO2 concentrations against in-situ and space-based observations, in order to assess the realism of the fluxes and how uncertainties in fluxes propagate into atmospheric concentrations that can be more readily evaluated. Finally, the atmospheric partial CO2 columns observed from space are inverted to give new estimates of surface fluxes, which are evaluated using the bottom-up estimates and independent datasets. The focus of this presentation will be on the science goals and current achievements of the pilot project, with emphasis on how policy-relevant questions help focus the scientific direction. Examples include the issue of what spatio-temporal resolution of fluxes can be detected from polar-orbiting satellites and whether it is possible to use space-based observations to separate contributions to atmospheric concentrations of (say) fossil-fuel emissions and biological activity.
NGSLR's Measurement of the Retro-Reflector Array Response of Various LEO to GNSS Satellites
NASA Technical Reports Server (NTRS)
McGarry, Jan; Clarke, Christopher; Degnan, John; Donovan, Howard; Hall, Benjamin; Hovarth, Julie; Zagwodzki, Thomas
2012-01-01
NASA's Next Generation Satellite Laser Ranging System (NGSLR) has successfully demonstrated daylight and nighttime tracking this year to satellites from LEO to GNSS orbits, using a 7-8 arcsecond beam divergence, a 43% QE Hamamatsu MCP-PMT with single photon detection, a narrow field of view (11 arcseconds), and a 1 mJ per pulse, 2 kHz repetition rate laser. We have compared the actual return rates against the theoretical link calculations, using the known system configuration parameters, an estimate of the sky transmission from locally measured visibility, and signal processing to extract the signal from the background noise. We can achieve good agreement between theory and measurement in most passes by using an estimated pointing error. We will show the results of this comparison along with our conclusions.
A technique for estimating 4D-CBCT using prior knowledge and limited-angle projections.
Zhang, You; Yin, Fang-Fang; Segars, W Paul; Ren, Lei
2013-12-01
To develop a technique to estimate onboard 4D-CBCT using prior information and limited-angle projections for potential 4D target verification of lung radiotherapy. Each phase of onboard 4D-CBCT is considered as a deformation from one selected phase (prior volume) of the planning 4D-CT. The deformation field maps (DFMs) are solved using a motion modeling and free-form deformation (MM-FD) technique. In the MM-FD technique, the DFMs are estimated using a motion model which is extracted from planning 4D-CT based on principal component analysis (PCA). The motion model parameters are optimized by matching the digitally reconstructed radiographs of the deformed volumes to the limited-angle onboard projections (data fidelity constraint). Afterward, the estimated DFMs are fine-tuned using a FD model based on data fidelity constraint and deformation energy minimization. The 4D digital extended-cardiac-torso phantom was used to evaluate the MM-FD technique. A lung patient with a 30 mm diameter lesion was simulated with various anatomical and respirational changes from planning 4D-CT to onboard volume, including changes of respiration amplitude, lesion size and lesion average-position, and phase shift between lesion and body respiratory cycle. The lesions were contoured in both the estimated and "ground-truth" onboard 4D-CBCT for comparison. 3D volume percentage-difference (VPD) and center-of-mass shift (COMS) were calculated to evaluate the estimation accuracy of three techniques: MM-FD, MM-only, and FD-only. Different onboard projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. For all simulated patient and projection acquisition scenarios, the mean VPD (±S.D.)/COMS (±S.D.) between lesions in prior images and "ground-truth" onboard images were 136.11% (±42.76%)/15.5 mm (±3.9 mm). 
Using orthogonal-view 15°-each scan angle, the mean VPD/COMS between the lesion in estimated and "ground-truth" onboard images for MM-only, FD-only, and MM-FD techniques were 60.10% (±27.17%)/4.9 mm (±3.0 mm), 96.07% (±31.48%)/12.1 mm (±3.9 mm) and 11.45% (±9.37%)/1.3 mm (±1.3 mm), respectively. For orthogonal-view 30°-each scan angle, the corresponding results were 59.16% (±26.66%)/4.9 mm (±3.0 mm), 75.98% (±27.21%)/9.9 mm (±4.0 mm), and 5.22% (±2.12%)/0.5 mm (±0.4 mm). For single-view scan angles of 3°, 30°, and 60°, the results for the MM-FD technique were 32.77% (±17.87%)/3.2 mm (±2.2 mm), 24.57% (±18.18%)/2.9 mm (±2.0 mm), and 10.48% (±9.50%)/1.1 mm (±1.3 mm), respectively. For projection angular-sampling-intervals of 0.6°, 1.2°, and 2.5° with the orthogonal-view 30°-each scan angle, the MM-FD technique generated similar VPD (maximum deviation 2.91%) and COMS (maximum deviation 0.6 mm), while sparser sampling yielded larger VPD/COMS. With an equal number of projections, the estimation results using a scattered 360° scan angle were slightly better than those using the orthogonal-view 30°-each scan angle. The estimation accuracy of the MM-FD technique declined as the noise level increased. The MM-FD technique substantially improves the estimation accuracy for onboard 4D-CBCT using prior planning 4D-CT and limited-angle projections, compared to the MM-only and FD-only techniques. It can potentially be used for inter/intrafractional 4D-localization verification.
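The motion-model step can be illustrated in miniature: build a PCA basis from training deformation fields, then recover a new field's weights from a limited subset of observations (standing in for matching DRRs to limited-angle projections). All data below are synthetic; this is not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(6)
n_train, n_vox = 10, 300
modes = rng.normal(size=(2, n_vox))              # two latent motion modes
dfms = rng.normal(size=(n_train, 2)) @ modes     # training deformation fields

mean = dfms.mean(axis=0)
_, _, Vt = np.linalg.svd(dfms - mean, full_matrices=False)
pcs = Vt[:2]                                     # principal motion components

dfm_new = rng.normal(size=2) @ modes             # unseen "onboard" deformation
obs = rng.choice(n_vox, size=40, replace=False)  # limited observations only

# Least-squares fit of the PCA weights from the observed subset,
# then reconstruction of the full deformation field
coeffs, *_ = np.linalg.lstsq(pcs[:, obs].T, (dfm_new - mean)[obs], rcond=None)
dfm_est = mean + coeffs @ pcs
```

Because the new field lies in the span of the training motions, a small number of observations suffices here; the FD fine-tuning stage of the paper exists precisely to handle deviations from that low-dimensional model.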
Can we simultaneously calibrate groundwater recharge and aquifer hydrodynamic parameters?
NASA Astrophysics Data System (ADS)
Hassane Maina, Fadji; Ackerer, Philippe; Bildstein, Olivier
2017-04-01
By groundwater model calibration, we mean here fitting the measured piezometric heads by estimating the hydrodynamic parameters (storage term and hydraulic conductivity) and the recharge. It is traditionally recommended to avoid simultaneous calibration of groundwater recharge and flow parameters because of the correlation between them. From a physical point of view, little recharge associated with low hydraulic conductivity can produce piezometric changes very similar to those of higher recharge with higher hydraulic conductivity. While this correlation holds under steady-state conditions, we assume that it is much weaker under transient conditions because recharge varies in time while the parameters do not. Moreover, recharge is negligible during summer for many climatic conditions due to reduced precipitation and increased evaporation and transpiration by the vegetation cover. We analyze our hypothesis through global sensitivity analysis (GSA) in conjunction with the polynomial chaos expansion (PCE) methodology. We perform GSA by calculating the Sobol indices, which provide a variance-based 'measure' of the effects of the uncertain parameters (storage and hydraulic conductivity) and recharge on the piezometric heads computed by the flow model. The choice of PCE has two benefits: (i) it provides the global sensitivity indices in a straightforward manner, and (ii) it can serve as a surrogate model for the calibration of parameters. The coefficients of the PCE are computed by probabilistic collocation. We perform the GSA on simplified real conditions derived from an existing groundwater model of a subdomain of the Upper Rhine aquifer (geometry, boundary conditions, climatic data). GSA shows that the simultaneous calibration of recharge and flow parameters is possible if the calibration is performed over at least one year. 
It also provides valuable information on the sensitivity versus time, depending on the aquifer inertia and climatic conditions. The groundwater level variations during recharge (increase) are sensitive to the storage coefficient, whereas the variations after recharge (decrease) are sensitive to the hydraulic conductivity. Model calibration performed on synthetic data sets shows that the parameters and recharge are estimated quite accurately.
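The variance-based measure used above can be sketched with a plain Monte Carlo (Saltelli-type) estimator of first-order Sobol indices. The toy head model h = R/K + 0.1·S and the parameter ranges are illustrative assumptions, not the Upper Rhine model.

```python
import numpy as np

def head(p):
    # toy piezometric head: recharge R over conductivity K, plus a weak
    # storage term S (illustrative, not a real flow model)
    K, S, R = p[:, 0], p[:, 1], p[:, 2]
    return R / K + 0.1 * S

rng = np.random.default_rng(1)
n, d = 100_000, 3
low = np.array([1.0, 0.1, 0.5])                 # K, S, R lower bounds
high = np.array([5.0, 0.5, 2.0])                # K, S, R upper bounds
A = rng.uniform(low, high, (n, d))
B = rng.uniform(low, high, (n, d))

fA, fB = head(A), head(B)
var = np.var(np.concatenate([fA, fB]))
sobol = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                         # swap in column i from B
    sobol.append(np.mean(fB * (head(ABi) - fA)) / var)  # first-order index
```

Here the storage index comes out near zero while K and R dominate, mirroring the abstract's point that sensitivities (and hence calibratability) depend on which terms actually drive the head variations over the calibration period.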
Knotts, Thomas A.
2017-01-01
Molecular simulation has the ability to predict various physical properties that are difficult to obtain experimentally. For example, we implement molecular simulation to predict the critical constants (i.e., critical temperature, critical density, critical pressure, and critical compressibility factor) for large n-alkanes that thermally decompose experimentally (as large as C48). Historically, molecular simulation has been viewed as a tool that is limited to providing qualitative insight. One key reason for this perceived weakness is the difficulty of quantifying the uncertainty in the results. This is because molecular simulations have many sources of uncertainty that propagate and are difficult to quantify. We investigate one of the most important sources of uncertainty, namely, the intermolecular force field parameters. Specifically, we quantify the uncertainty in the Lennard-Jones (LJ) 12-6 parameters for the CH4, CH3, and CH2 united-atom interaction sites. We then demonstrate how the uncertainties in the parameters lead to uncertainties in the saturated liquid density and critical constant values obtained from Gibbs Ensemble Monte Carlo simulation. Our results suggest that the uncertainties attributed to the LJ 12-6 parameters are small enough that quantitatively useful estimates of the saturated liquid density and the critical constants can be obtained from molecular simulation. PMID:28527455
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
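The effect the abstract describes can be sketched for ordinary least squares: compare the naive parameter covariance, which assumes white residuals, with a sandwich estimate built from the residual autocovariance. The linear model and AR(1) noise below are illustrative, not the authors' maximum likelihood formulation for flight data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
t = np.linspace(0.0, 10.0, n)
X = np.column_stack([np.ones(n), t])           # simple two-parameter model
e = np.zeros(n)
for k in range(1, n):                          # strongly colored AR(1) noise
    e[k] = 0.95 * e[k - 1] + rng.normal(0.0, 0.1)
y = X @ np.array([1.0, 0.5]) + e

theta, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ theta                              # colored residuals
XtX_inv = np.linalg.inv(X.T @ X)
naive_cov = (r @ r / (n - 2)) * XtX_inv        # valid only for white residuals

# Sandwich correction using the (truncated) residual autocovariance matrix
maxlag = 50
R = np.mean(r * r) * np.eye(n)
for k in range(1, maxlag):
    c = np.mean(r[:n - k] * r[k:])
    R += c * (np.eye(n, k=k) + np.eye(n, k=-k))
corrected_cov = XtX_inv @ X.T @ R @ X @ XtX_inv
```

The corrected variances come out far larger than the naive ones here, which is the paper's central point: ignoring residual color makes parameter accuracy measures look much better than they are.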
Geocenter coordinates estimated from GNSS data as viewed by perturbation theory
NASA Astrophysics Data System (ADS)
Meindl, Michael; Beutler, Gerhard; Thaller, Daniela; Dach, Rolf; Jäggi, Adrian
2013-04-01
Time series of geocenter coordinates were determined with data of two global navigation satellite systems (GNSSs), namely the U.S. GPS (Global Positioning System) and the Russian GLONASS (Global'naya Nawigatsionnaya Sputnikowaya Sistema). The data was recorded in the years 2008-2011 by a global network of 92 permanently observing GPS/GLONASS receivers. Two types of daily solutions were generated independently for each GNSS, one including the estimation of geocenter coordinates and one without these parameters. A fair agreement for GPS and GLONASS was found in the geocenter x- and y-coordinate series. Our tests, however, clearly reveal artifacts in the z-component determined with the GLONASS data. Large periodic excursions in the GLONASS geocenter z-coordinates of about 40 cm peak-to-peak are related to the maximum elevation angles of the Sun above/below the orbital planes of the satellite system and thus have a period of about 4 months (a third of a year). A detailed analysis revealed that the artifacts are almost uniquely governed by the differences of the estimates of direct solar radiation pressure (SRP) in the two solution series (with and without geocenter estimation). A simple formula is derived, describing the relation between the geocenter z-coordinate and the corresponding parameter of the SRP. The effect can be explained by first-order perturbation theory of celestial mechanics. The theory also predicts a heavy impact on the GNSS-derived geocenter if once-per-revolution SRP parameters are estimated in the direction of the satellite's solar panel axis. Specific experiments using GPS observations revealed that this is indeed the case. Although the main focus of this article is on GNSS, the theory developed is applicable to all satellite observing techniques. We applied the theory to satellite laser ranging (SLR) solutions using LAGEOS. It turns out that the correlation between geocenter and SRP parameters is not a critical issue for the SLR solutions. 
The reasons are threefold: the direct SRP is about a factor of 30-40 smaller for typical geodetic SLR satellites than for GNSS satellites, allowing one in most cases not to solve for SRP parameters (ruling out the correlation between these parameters and the geocenter coordinates); the orbital arc length of 7 days (typically used in SLR analysis) contains more than 50 revolutions of the LAGEOS satellites, compared to about two revolutions of GNSS satellites for the daily arcs used in GNSS analysis; and the orbit geometry is not as critical for LAGEOS as for GNSS satellites, because the elevation angle of the Sun w.r.t. the orbital plane usually changes significantly over 7 days.
Bayesian Parameter Estimation for Heavy-Duty Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history will give a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
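A minimal version of the accept/reject scheme described above, using a simplified road-load model and a synthetic drive cycle; the vehicle values, flat positivity priors, and noise level are assumptions, not the report's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
g, rho = 9.81, 1.2                              # gravity, air density
v = rng.uniform(5.0, 25.0, 200)                 # logged speed (m/s)
a = rng.uniform(-1.0, 1.0, 200)                 # logged acceleration (m/s^2)

def road_load(mass, CdA, crr):
    # simplified road load: inertia + rolling resistance + aerodynamic drag
    return mass * a + mass * g * crr + 0.5 * rho * CdA * v ** 2

truth = (15000.0, 5.0, 0.007)                   # synthetic "true" vehicle
F_meas = road_load(*truth) + rng.normal(0.0, 200.0, 200)

def log_prob(p):
    mass, CdA, crr = p
    if mass <= 0 or CdA <= 0 or crr <= 0:       # flat positivity prior
        return -np.inf
    resid = F_meas - road_load(mass, CdA, crr)
    return -0.5 * np.sum((resid / 200.0) ** 2)

chain = [np.array([10000.0, 3.0, 0.01])]        # deliberately wrong start
lp = log_prob(chain[-1])
for _ in range(5000):
    prop = chain[-1] + rng.normal(0.0, [100.0, 0.05, 0.0003])
    lp_prop = log_prob(prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis acceptance ratio
        chain.append(prop); lp = lp_prop
    else:
        chain.append(chain[-1])
mass_est = float(np.mean([p[0] for p in chain[2500:]]))
```

The post-burn-in chain gives a posterior distribution for each parameter, which is the "distribution of possible values" the abstract contrasts with a single point estimate.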
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
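Two of the factors studied above (height threshold and point density) can be illustrated with a toy example: estimate canopy height from a synthetic point cloud at full and thinned density, applying a height threshold to discard ground returns. The densities, threshold, and percentile metric are illustrative choices, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(4)
n_points = 5000                                 # stand-in for a dense plot
ground = rng.normal(0.0, 0.05, n_points // 2)   # ground returns (m)
canopy = rng.uniform(0.3, 2.4, n_points // 2)   # corn canopy returns (m)
cloud = np.concatenate([ground, canopy])

def canopy_height(points, threshold=0.2):
    veg = points[points > threshold]            # height threshold drops ground
    return np.percentile(veg, 95)               # robust top-of-canopy metric

h_full = canopy_height(cloud)                   # full point density
thinned = rng.choice(cloud, size=len(cloud) // 10, replace=False)
h_thin = canopy_height(thinned)                 # 10x reduced point density
```

For a simple height metric the thinned estimate stays close to the full-density one, consistent with the abstract's finding that reduced point density can still deliver reasonable results; a badly chosen threshold, by contrast, would mix ground returns into the vegetation statistics.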
Depth interval estimates from motion parallax and binocular disparity beyond interaction space.
Gillam, Barbara; Palmisano, Stephen A; Govan, Donovan G
2011-01-01
Static and dynamic observers provided binocular and monocular estimates of the depths between real objects lying well beyond interaction space. On each trial, pairs of LEDs were presented inside a dark railway tunnel. The nearest LED was always 40 m from the observer, with the depth separation between LED pairs ranging from 0 up to 248 m. Dynamic binocular viewing was found to produce the greatest (i.e., most veridical) estimates of depth magnitude, followed next by static binocular viewing, and then by dynamic monocular viewing. (No significant depth was seen with static monocular viewing.) We found evidence that both binocular and monocular dynamic estimates of depth were scaled for the observation distance when the ground plane and walls of the tunnel were visible up to the nearest LED. We conclude that both motion parallax and stereopsis provide useful long-distance depth information and that motion-parallax information can enhance the degree of stereoscopic depth seen.
NASA Technical Reports Server (NTRS)
McWilliams, Sean T.; Lang, Ryan N.; Baker, John G.; Thorpe, James Ira
2011-01-01
We investigate the capability of LISA to measure the sky position of equal-mass, nonspinning black hole binaries, including for the first time the entire inspiral-merger-ringdown signal, the effect of the LISA orbits, and the complete three-channel LISA response. For an ensemble of systems near the peak of LISA's sensitivity band, with total rest mass of 2 x 10(exp 6) solar masses at a redshift of z = 1 with random orientations and sky positions, we find median sky localization errors of approximately 3 arcminutes. This is comparable to the field of view of powerful electromagnetic telescopes, such as the James Webb Space Telescope, that could be used to search for electromagnetic signals associated with merging black holes. We investigate the way in which parameter errors decrease with measurement time, focusing specifically on the additional information provided during the merger-ringdown segment of the signal. We find that this information improves all parameter estimates directly, rather than through diminishing correlations with any subset of well-determined parameters.
A software package for evaluating the performance of a star sensor operation
NASA Astrophysics Data System (ADS)
Sarpotdar, Mayuresh; Mathew, Joice; Sreejith, A. G.; Nirmal, K.; Ambily, S.; Prakash, Ajin; Safonova, Margarita; Murthy, Jayant
2017-02-01
We have developed a low-cost off-the-shelf component star sensor (StarSense) for use in minisatellites and CubeSats to determine the attitude of a satellite in orbit. StarSense is an imaging camera with a limiting magnitude of 6.5, which extracts information from star patterns it records in the images. The star sensor implements a centroiding algorithm to find the centroids of the stars in the image, a Geometric Voting algorithm for star pattern identification, and a QUEST algorithm for attitude quaternion calculation. Here, we describe the software package used to evaluate the performance of these algorithms operating together as a single star-sensor system. We simulate the ideal case, where sky background and instrument errors are omitted, and a more realistic case, where noise and camera parameters are added to the simulated images. We evaluate performance parameters of the algorithms such as attitude accuracy, calculation time, required memory, star catalog size, and sky coverage, and estimate the errors introduced by each algorithm. This software package is written for use in MATLAB. The testing is parametrized for different hardware parameters, such as the focal length of the imaging setup, the field of view (FOV) of the camera, angle measurement accuracy, and distortion effects, and can therefore be applied to evaluate the performance of such algorithms in any star sensor. For its hardware implementation on our StarSense, we are currently porting the code in the form of functions written in C, with a view to easy implementation on any star sensor electronics hardware.
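The centroiding step can be illustrated with a minimal intensity-weighted mean over thresholded pixels. This is a generic sketch, not necessarily StarSense's exact algorithm; the background-subtraction scheme is an assumption.

```python
# Hedged sketch of star centroiding: intensity-weighted mean of pixels
# above a background threshold (sub-pixel accuracy on a synthetic star).
def centroid(image, threshold):
    """Return (x, y) intensity-weighted centroid of pixels above threshold."""
    sx = sy = total = 0.0
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            if val > threshold:
                w = val - threshold          # background-subtracted weight
                sx += w * x
                sy += w * y
                total += w
    if total == 0:
        return None
    return sx / total, sy / total

# A synthetic 5x5 star image symmetric about pixel (2, 2)
img = [[0, 0, 0, 0, 0],
       [0, 1, 2, 1, 0],
       [0, 2, 9, 2, 0],
       [0, 1, 2, 1, 0],
       [0, 0, 0, 0, 0]]
cx, cy = centroid(img, threshold=0)
```

On the symmetric test image the centroid lands exactly at (2.0, 2.0); on real frames the weighting yields the sub-pixel star positions the pattern-identification stage consumes.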
NASA Technical Reports Server (NTRS)
Braverman, Amy; Nguyen, Hai; Olsen, Edward; Cressie, Noel
2011-01-01
Space-time Data Fusion (STDF) is a methodology for combining heterogeneous remote sensing data to optimally estimate the true values of a geophysical field of interest and to obtain uncertainties for those estimates. The input data sets may have different observing characteristics, including different footprints, spatial resolutions and fields of view, orbit cycles, biases, and noise characteristics. Despite these differences, all observed data can be linked to the underlying field, and therefore to each other, by a statistical model. Differences in footprints and other geometric characteristics are accounted for by parameterizing pixel-level remote sensing observations as spatial integrals of true field values lying within pixel boundaries, plus measurement error. Both spatial and temporal correlations in the true field and in the observations are estimated and incorporated through the use of a space-time random effects (STRE) model. Once the model's parameters are estimated, we use the model to derive expressions for optimal (minimum mean squared error and unbiased) estimates of the true field at any arbitrary location of interest, computed from the observations. Standard errors of these estimates are also produced, allowing confidence intervals to be constructed. The procedure is carried out on a fine spatial grid to approximate a continuous field. We demonstrate STDF by applying it to the problem of estimating CO2 concentration in the lower atmosphere using data from the Atmospheric Infrared Sounder (AIRS) and the Japanese Greenhouse Gases Observing Satellite (GOSAT) over one year for the continental US.
Homography-based control scheme for mobile robots with nonholonomic and field-of-view constraints.
López-Nicolás, Gonzalo; Gans, Nicholas R; Bhattacharya, Sourabh; Sagüés, Carlos; Guerrero, Josechu J; Hutchinson, Seth
2010-08-01
In this paper, we present a visual servo controller that effects optimal paths for a nonholonomic differential drive robot with field-of-view constraints imposed by the vision system. The control scheme relies on the computation of homographies between current and goal images, but unlike previous homography-based methods, it does not use the homography to compute estimates of pose parameters. Instead, the control laws are directly expressed in terms of individual entries in the homography matrix. In particular, we develop individual control laws for the three path classes that define the language of optimal paths: rotations, straight-line segments, and logarithmic spirals. These control laws, as well as the switching conditions that define how to sequence path segments, are defined in terms of the entries of homography matrices. The selection of the corresponding control law requires the homography decomposition before starting the navigation. We provide a controllability and stability analysis for our system and give experimental results.
Ayaz, Shirazi Muhammad; Kim, Min Young
2018-01-01
In this article, a multi-view registration approach for a 3D handheld profiling system based on the multiple-shot structured light technique is proposed. The multi-view registration approach is divided into coarse registration and point cloud refinement using the iterative closest point (ICP) algorithm. Coarse registration of multiple point clouds was performed using relative orientation and translation parameters estimated via homography-based visual navigation. The proposed system was evaluated using an artificial human skull and a paper box object. For the quantitative evaluation of the accuracy of a single 3D scan, a paper box was reconstructed, and the mean errors in its height and breadth were found to be 9.4 μm and 23 μm, respectively. A comprehensive quantitative evaluation and comparison of the proposed algorithm was performed against other variants of ICP. The root mean square error for the ICP algorithm in registering a pair of point clouds of the skull object was also found to be less than 1 mm. PMID:29642552
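The ICP refinement stage can be sketched generically. This is the textbook point-to-point variant with a closed-form SVD (Kabsch) alignment on a toy cloud, not the paper's implementation; the iteration count and test transform are assumptions.

```python
# Hedged sketch of point-to-point ICP: alternate brute-force nearest-neighbour
# matching with a closed-form rigid alignment until the clouds coincide.
import numpy as np

def best_rigid(src, dst):
    """Closed-form rotation R and translation t minimizing ||R@src + t - dst||."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for each current point (brute force)
        idx = np.argmin(((cur[:, None, :] - dst[None, :, :])**2).sum(-1), axis=1)
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Toy example: recover a small known rotation + translation of a random cloud
rng = np.random.default_rng(0)
dst = rng.normal(size=(40, 3))
angle = 0.1
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
src = dst @ R_true.T + np.array([0.05, -0.03, 0.02])  # rigidly moved copy
aligned = icp(src, dst)
rmse = np.sqrt(((aligned - dst)**2).sum(1).mean())
```

As in the paper, the coarse step matters: plain ICP like this converges reliably only when the initial misalignment is small, which is what the homography-based coarse registration provides.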
NASA Astrophysics Data System (ADS)
Díaz, Elkin; Arguello, Henry
2016-05-01
Urban ecosystem studies require monitoring, control, and planning to analyze building density, urban density, urban planning, atmospheric modeling, and land use. In urban planning, there are many methods for building height estimation using optical remote sensing images. These methods, however, depend heavily on sun illumination and cloud-free weather. In contrast, high-resolution synthetic aperture radar provides images independent of daytime and weather conditions, although these images rely on special hardware and expensive acquisition. Most of the biggest cities around the world have been photographed by Google Street View under different conditions, so thousands of images from the principal streets of a city can be accessed online. The availability of this and similar rich city imagery, such as StreetSide from Microsoft, represents a huge opportunity in computer vision, because these images can be used as input for many applications such as 3D modeling, segmentation, recognition, and stereo correspondence. This paper proposes a novel algorithm to estimate building heights using public Google Street View imagery. The objective of this work is to obtain thousands of geo-referenced images from Google Street View using a representational state transfer system, and to estimate average building heights using single view metrology. Furthermore, the resulting measurements and image metadata are used to derive a layer of heights in a Google map available online. The experimental results show that the proposed algorithm can estimate an accurate average building height map from thousands of Google Street View images of any city.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Lau, William K. M. (Technical Monitor)
2002-01-01
Validation of satellite remote-sensing methods for estimating rainfall against rain-gauge data is attractive because of the direct nature of the rain-gauge measurements. Comparisons of satellite estimates to rain-gauge data are difficult, however, because of the extreme variability of rain and the fact that satellites view large areas over a short time while rain gauges monitor small areas continuously. In this paper, a statistical model of rainfall variability developed for studies of sampling error in averages of satellite data is used to examine the impact of spatial and temporal averaging of satellite and gauge data on intercomparison results. The model parameters were derived from radar observations of rain, but the model appears to capture many of the characteristics of rain-gauge data as well. The model predicts that many months of data from areas containing a few gauges are required to validate satellite estimates over the areas, and that the areas should be of the order of several hundred km in diameter. Over gauge arrays of sufficiently high density, the optimal areas and averaging times are reduced. The possibility of using time-weighted averages of gauge data is explored.
Current interventions in the management of knee osteoarthritis
Bhatia, Dinesh; Bejarano, Tatiana; Novo, Mario
2013-01-01
Osteoarthritis (OA) is a progressive joint disease characterized by joint inflammation and a reparative bone response. It is one of the top five most disabling conditions, affecting more than one-third of persons >65 years of age, with an estimated 30 million Americans currently affected by the disease. Global estimates reveal more than 100 million people are affected by OA. Annual national expenditures for the care of persons with OA are estimated at $15.5-$28.6 billion. As the number of people >65 years of age increases, so do the prevalence of OA and the need for cost-effective treatment and care. Developing a treatment strategy that addresses the underlying physiology of degenerative joint disease is crucial, but it should take account of different age ranges and different population needs. This paper focuses on different exercise and treatment protocols (pharmacological and non-pharmacological), comparing the outcomes of a rehabilitation-center, clinician-directed program versus an at-home, self-directed individual program, to determine which parameters are best at reducing pain, increasing functional independence, and reducing cost for persons diagnosed with knee OA. PMID:23559821
BGFit: management and automated fitting of biological growth curves.
Veríssimo, André; Paixão, Laura; Neves, Ana Rute; Vinga, Susana
2013-09-25
Existing tools to model cell growth curves do not offer a flexible, integrative approach to managing large datasets and automatically estimating parameters. With the increase of experimental time-series from microbiology and oncology, software that allows researchers to easily organize experimental data and simultaneously extract relevant parameters in an efficient way is crucial. BGFit provides a web-based unified platform where a rich set of dynamic models can be fitted to experimental time-series data, and the results can be efficiently managed in a structured and hierarchical way. The data management system allows users to organize projects, experiments, and measurement data, and also to define teams with different editing and viewing permissions. Several dynamic and algebraic models are already implemented, such as polynomial regression, Gompertz, Baranyi, Logistic, and Live Cell Fraction models, and users can easily add new models, thus expanding the current set. BGFit allows users to easily manage their data and models in an integrated way, even if they are not familiar with databases or existing computational tools for parameter estimation. BGFit is designed with a flexible architecture that focuses on extensibility and leverages free software, along with existing tools and methods, allowing different data modeling techniques to be compared and evaluated. The application is described in the context of fitting bacterial and tumor cell growth data, but it is also applicable to any type of two-dimensional data, e.g., physical chemistry and macroeconomic time series, and is fully scalable to a high number of projects, data, and model complexity.
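Of the models listed, the Gompertz curve is a representative example. The sketch below fits it to synthetic growth data with a dependency-free random search; the parameterization follows the common Zwietering form (asymptote, maximum growth rate, lag time), which is an assumption about how BGFit parameterizes it, and the fitting strategy is purely illustrative.

```python
# Hedged sketch: Gompertz growth curve and a crude least-squares fit by
# random search (no external dependencies; BGFit's fitter differs).
import math, random

def gompertz(t, A, mu, lam):
    """Zwietering Gompertz curve: asymptote A, max growth rate mu, lag lam."""
    return A * math.exp(-math.exp(mu * math.e / A * (lam - t) + 1))

def fit(ts, ys, trials=20000):
    random.seed(0)
    best, best_err = None, float("inf")
    for _ in range(trials):
        A, mu, lam = (random.uniform(0.5, 3),
                      random.uniform(0.1, 2),
                      random.uniform(0, 10))
        err = sum((y - gompertz(t, A, mu, lam))**2 for t, y in zip(ts, ys))
        if err < best_err:
            best, best_err = (A, mu, lam), err
    return best

# Noise-free synthetic growth data from known parameters (A=1.8, mu=0.6, lam=4.0)
ts = [i * 0.5 for i in range(40)]
ys = [gompertz(t, 1.8, 0.6, 4.0) for t in ts]
A, mu, lam = fit(ts, ys)
```

In practice a gradient-based optimizer replaces the random search; the point here is only the shape of the model-fitting loop a platform like BGFit automates across many time series.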
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information of the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
View-limiting shrouds for insolation radiometers
NASA Technical Reports Server (NTRS)
Dennison, E. W.; Trentelman, G. F.
1985-01-01
Insolation radiometers (normal incidence pyrheliometers) are used to measure the solar radiation incident on solar concentrators for calibrating thermal power generation measurements. The measured insolation value is dependent on the atmospheric transparency, solar elevation angle, circumsolar radiation, and radiometer field of view. The radiant energy entering the thermal receiver is dependent on the same factors. The insolation value and the receiver input will be proportional if the concentrator and the radiometer have similar fields of view. This report describes one practical method for matching the field of view of a radiometer to that of a solar concentrator. The concentrator field of view can be calculated by optical ray tracing methods and the field of view of a radiometer with a simple shroud can be calculated by using geometric equations. The parameters for the shroud can be adjusted to provide an acceptable match between the respective fields of view. Concentrator fields of view have been calculated for a family of paraboloidal concentrators and receiver apertures. The corresponding shroud parameters have also been determined.
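The geometric equations for a simple tubular shroud can be sketched directly. The two quantities below (slope and limit angles for a shroud of length L, aperture radius R_a, and detector radius R_d) are the standard view-limiting definitions for pyrheliometer-style instruments; the report's exact parameterization may differ, and the numbers are illustrative.

```python
# Hedged sketch of view-limiting shroud geometry: the slope angle bounds the
# fully-illuminated cone, the limit angle bounds any direct ray reaching the
# detector. Variable names (L, R_a, R_d) are assumptions.
import math

def shroud_angles(L, R_a, R_d):
    """Return (slope_angle, limit_angle) of a tubular shroud, in degrees."""
    slope = math.degrees(math.atan((R_a - R_d) / L))  # fully-illuminated cone
    limit = math.degrees(math.atan((R_a + R_d) / L))  # no direct rays beyond this
    return slope, limit

# Example: 200 mm shroud, 10 mm aperture radius, 5 mm detector radius
slope, limit = shroud_angles(L=200.0, R_a=10.0, R_d=5.0)
```

Adjusting L and R_a shifts both angles, which is the knob the report describes for matching the radiometer's field of view to a given concentrator's.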
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter.
This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
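The underlying selection problem can be illustrated with a toy linear model. This is an illustration of the idea only, not the developed technique (which searches over a general tuning subspace with a full Kalman filter): with more health parameters p than sensors q, each candidate q-subset of parameters induces an estimator of the full health vector whose theoretical mean-squared error can be computed and minimized. The matrix sizes, prior covariance, and noise-free measurement model are assumptions.

```python
# Hedged toy sketch of tuner selection: pick the q-subset of p health
# parameters whose induced estimate of ALL p parameters has minimal MSE.
import itertools
import numpy as np

rng = np.random.default_rng(0)
p, q = 5, 3                               # 5 health parameters, 3 sensors
H = rng.normal(size=(q, p))               # sensor sensitivity to health params
P0 = np.diag([1.0, 0.8, 0.6, 0.4, 0.2])  # prior covariance of health params

def subset_mse(cols):
    """Theoretical MSE over all p parameters when only `cols` are tuned."""
    cols = list(cols)
    V = np.eye(p)[:, cols]                # selection matrix (p x q)
    A = V @ np.linalg.pinv(H @ V) @ H     # x_hat = A @ x for noise-free y = H x
    E = np.eye(p) - A                     # estimation-error map
    return np.trace(E @ P0 @ E.T)

best = min(itertools.combinations(range(p), q), key=subset_mse)
```

The brute-force minimum over subsets plays the role of the iterative search in the developed technique; since the selection runs at design time, its cost adds nothing to the onboard filter.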
NASA Astrophysics Data System (ADS)
Radun, Jenni; Leisti, Tuomas; Virtanen, Toni; Nyman, Göte; Häkkinen, Jukka
2014-11-01
To understand the viewing strategies employed in a quality estimation task, we compared two visual tasks: quality estimation and difference estimation. The estimation was done for pairs of natural images having small global changes in quality. Two groups of observers estimated the same set of images, but with different instructions: one group estimated the difference in quality and the other the difference between image pairs. The results demonstrated the use of different visual strategies in the two tasks. The quality estimation was found to include more visual planning during the first fixation than the difference estimation, but afterward needed only a few long fixations on the semantically important areas of the image. The difference estimation used many short fixations. Salient image areas were mainly attended to when these areas were also semantically important. The results support the hypothesis that these tasks' general characteristics (evaluation time, number of fixations, area fixated on) show differences in processing, but also suggest that examining only single fixations when comparing tasks is too narrow a view. When planning a subjective experiment, one must remember that a small change in the instructions might lead to a noticeable change in viewing strategy.
Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation
ERIC Educational Resources Information Center
Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting
2011-01-01
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
Robust estimation of fetal heart rate from US Doppler signals
NASA Astrophysics Data System (ADS)
Voicu, Iulian; Girault, Jean-Marc; Roussel, Catherine; Decock, Aliette; Kouame, Denis
2010-01-01
Introduction: In utero, monitoring of fetal wellbeing or suffering remains an open challenge because of the high number of clinical parameters to be considered. Automatic monitoring of fetal activity, dedicated to quantifying fetal wellbeing, thus becomes necessary. For this purpose, and with a view to supplying an alternative to the Manning test, we used an ultrasound multi-transducer, multi-gate Doppler system. One important issue (and the first step in our investigation) is the accurate estimation of fetal heart rate (FHR). An estimate of the FHR is obtained by evaluating the autocorrelation function of the Doppler signals for sick and healthy fetuses. However, this estimator is not robust enough, since about 20% of FHR values go undetected in comparison with a reference system. These non-detections are principally due to the fact that the Doppler signal generated by fetal movement is strongly disturbed by the presence of several other Doppler sources (the mother's movements, pseudo-breathing, etc.). By modifying the existing autocorrelation method and by proposing new time and frequency estimators used in the audio domain, we reduce the probability of non-detection of the fetal heart rate to 5%. These results are encouraging and allow us to plan the use of automatic classification techniques to discriminate between healthy and suffering fetuses.
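The baseline autocorrelation estimator can be sketched generically: find the lag that maximizes the signal's autocorrelation within a physiological band and convert it to beats per minute. The sampling rate, bpm search band, and synthetic signal below are assumptions for illustration; the paper's estimator and its robustifications differ.

```python
# Hedged sketch of autocorrelation-based heart-rate estimation on a
# synthetic Doppler envelope with slow interference.
import math

def estimate_rate(signal, fs, min_bpm=90, max_bpm=240):
    """Return the rate (bpm) whose lag maximizes the autocorrelation."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    lo, hi = int(fs * 60 / max_bpm), int(fs * 60 / min_bpm)
    best_lag, best_r = lo, -float("inf")
    for lag in range(lo, hi + 1):
        r = sum(x[i] * x[i + lag] for i in range(n - lag))
        if r > best_r:
            best_lag, best_r = lag, r
    return 60.0 * fs / best_lag

# Synthetic envelope: 140 bpm beat plus a slow "maternal" interference term
fs = 100.0
sig = [math.sin(2 * math.pi * (140 / 60) * t) + 0.4 * math.sin(2 * math.pi * 0.3 * t)
       for t in (i / fs for i in range(1000))]
bpm = estimate_rate(sig, fs)
```

Restricting the lag search to a physiological band is one simple way to reject slow interferers; the non-detections the paper addresses arise when competing Doppler sources fall inside that band.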
Lagrangian speckle model and tissue-motion estimation--theory.
Maurice, R L; Bertrand, M
1999-07-01
It is known that when a tissue is subjected to movements such as rotation, shearing, scaling, etc., the resulting changes in speckle patterns act as a noise source, often responsible for most of the displacement-estimate variance. From a modeling point of view, these changes can be thought of as resulting from two mechanisms: one is the motion of the speckles and the other, the alteration of their morphology. In this paper, we propose a new tissue-motion estimator to counteract these speckle decorrelation effects. The estimator is based on a Lagrangian description of the speckle motion. This description allows us to follow local characteristics of the speckle field as if they were a material property. This method leads to an analytical description of the decorrelation in a way which enables the derivation of an appropriate inverse filter for speckle restoration. The filter is appropriate for a linear geometrical transformation of the scattering function (LT), i.e., a constant-strain region of interest (ROI). As the LT itself is a parameter of the filter, a tissue-motion estimator can be formulated as a nonlinear minimization problem, seeking the best match between the pre-tissue-motion image and a restored-speckle post-motion image. The method is tested using simulated radio-frequency (RF) images of tissue undergoing axial shear.
Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E.; Park, Frank Chongwoo
2017-01-01
Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect some phenomena like friction, nonlinear elasticity, and cross section deformation, it is also not clear if model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments. PMID:28717554
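The idea of treating slowly varying model parameters as states of an extended Kalman filter can be shown in a scalar sketch. The measurement model below is invented purely for illustration (the paper's models are the mechanics-based concentric-tube kinematics); the parameter is modeled as a random-walk state and updated from nonlinear measurements via the linearized Jacobian.

```python
# Hedged scalar sketch of EKF-based parameter estimation: the unknown
# parameter theta enters a nonlinear measurement z = sin(theta*u) + noise.
import math, random

def ekf_estimate(meas, inputs, theta0, P0, q=1e-6, r=0.01**2):
    theta, P = theta0, P0
    for z, u in zip(meas, inputs):
        P += q                               # predict: random-walk parameter
        h = math.sin(theta * u)              # measurement model (assumed)
        H = u * math.cos(theta * u)          # Jacobian dh/dtheta
        K = P * H / (H * P * H + r)          # Kalman gain
        theta += K * (z - h)                 # update with the innovation
        P *= (1 - K * H)
    return theta

random.seed(0)
true_theta = 0.7
inputs = [random.uniform(0.5, 2.0) for _ in range(500)]
meas = [math.sin(true_theta * u) + random.gauss(0, 0.01) for u in inputs]
theta_hat = ekf_estimate(meas, inputs, theta0=0.4, P0=1.0)
```

The nonzero process noise q is what lets the filter track slowly time-varying parameters, matching the on-line re-estimation motivation above.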
Bibliography for aircraft parameter estimation
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.; Maine, Richard E.
1986-01-01
An extensive bibliography in the field of aircraft parameter estimation has been compiled. This list contains definitive works related to most aircraft parameter estimation approaches. Theoretical studies as well as practical applications are included. Many of these publications are pertinent to subjects peripherally related to parameter estimation, such as aircraft maneuver design or instrumentation considerations.
Two-dimensional advective transport in ground-water flow parameter estimation
Anderman, E.R.; Hill, M.C.; Poeter, E.P.
1996-01-01
Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow, parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. 
In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.
NASA Astrophysics Data System (ADS)
Gaci, Said; Hachay, Olga; Zaourar, Naima
2017-04-01
One of the key elements in hydrocarbon reservoir characterization is the S-wave velocity (Vs). Since the traditional estimating methods often fail to accurately predict this physical parameter, a new approach that takes into account its non-stationary and non-linear properties is needed. With this in mind, a prediction model based on complete ensemble empirical mode decomposition (CEEMD) and a multiple layer perceptron artificial neural network (MLP ANN) is suggested to compute Vs from P-wave velocity (Vp). Using a fine-to-coarse reconstruction algorithm based on CEEMD, the Vp log data is decomposed into a high-frequency (HF) component, a low-frequency (LF) component, and a trend component. Then, different combinations of these components are used as inputs of the MLP ANN algorithm for estimating the Vs log. Applications on well logs taken from different geological settings illustrate that the Vs values predicted using the MLP ANN with the combinations of HF, LF, and trend as inputs are more accurate than those obtained with the traditional estimating methods. Keywords: S-wave velocity, CEEMD, multilayer perceptron neural networks.
NASA Astrophysics Data System (ADS)
Coubard, F.; Brédif, M.; Paparoditis, N.; Briottet, X.
2011-04-01
Terrestrial geolocalized images are nowadays widely used on the Internet, mainly in urban areas, through immersion services such as Google Street View. On the long run, we seek to enhance the visualization of these images; for that purpose, radiometric corrections must be performed to free them from illumination conditions at the time of acquisition. Given the simultaneously acquired 3D geometric model of the scene with LIDAR or vision techniques, we face an inverse problem where the illumination and the geometry of the scene are known and the reflectance of the scene is to be estimated. Our main contribution is the introduction of a symbolic ray-tracing rendering to generate parametric images, for quick evaluation and comparison with the acquired images. The proposed approach is then based on an iterative estimation of the reflectance parameters of the materials, using a single rendering pre-processing. We validate the method on synthetic data with linear BRDF models and discuss the limitations of the proposed approach with more general non-linear BRDF models.
Jongerling, Joran; Laurenceau, Jean-Philippe; Hamaker, Ellen L
2015-01-01
In this article we consider a multilevel first-order autoregressive [AR(1)] model with random intercepts, random autoregression, and random innovation variance (i.e., the level 1 residual variance). Including random innovation variance is an important extension of the multilevel AR(1) model for two reasons. First, between-person differences in innovation variance are important from a substantive point of view, in that they capture differences in sensitivity and/or exposure to unmeasured internal and external factors that influence the process. Second, using simulation methods we show that modeling the innovation variance as fixed across individuals, when it should be modeled as a random effect, leads to biased parameter estimates. Additionally, we use simulation methods to compare maximum likelihood estimation to Bayesian estimation of the multilevel AR(1) model and investigate the trade-off between the number of individuals and the number of time points. We provide an empirical illustration by applying the extended multilevel AR(1) model to daily positive affect ratings from 89 married women over the course of 42 consecutive days.
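The extended multilevel AR(1) model is straightforward to simulate, which also makes the person-level random effects concrete. A minimal sketch, with hypothetical population values for the random intercepts, autoregressions, and (log) innovation variances; the sample sizes mirror the empirical illustration (89 persons, 42 days):

```python
import numpy as np

rng = np.random.default_rng(1)

n_person, n_time = 89, 42
# Person-level random effects (hypothetical population values):
mu = rng.normal(5.0, 1.0, n_person)                          # random intercepts
phi = np.clip(rng.normal(0.4, 0.1, n_person), -0.95, 0.95)   # random AR(1) coefficients
log_s2 = rng.normal(np.log(0.5), 0.3, n_person)              # random log innovation variances
sigma = np.sqrt(np.exp(log_s2))

# Simulate each person's series from its stationary distribution:
y = np.empty((n_person, n_time))
y[:, 0] = mu + sigma / np.sqrt(1 - phi ** 2) * rng.standard_normal(n_person)
for t in range(1, n_time):
    y[:, t] = mu + phi * (y[:, t - 1] - mu) + sigma * rng.standard_normal(n_person)

# Naive per-person AR(1) estimates (lag-1 regression of the centered series):
phi_hat = np.array([
    np.polyfit(y[i, :-1] - y[i].mean(), y[i, 1:] - y[i].mean(), 1)[0]
    for i in range(n_person)
])
```

Fitting the model properly requires multilevel (e.g., Bayesian) estimation as in the article; the per-person lag-1 regressions shown are only a naive baseline, and with T = 42 they exhibit the familiar small-sample downward bias in the autoregression.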
Syryńska, Maria; Szyszka, Liliana; Post, Marcin
2008-01-01
Recognised and unrecognised bone diseases involving the maxilla and/or mandible may influence the formation of malocclusions. In the first stages of such diseases, patients are referred or report for orthodontic treatment, which requires additional examinations before starting, mainly pantomographic views. Despite performing the necessary additional examinations, the disorder sometimes cannot be recognised, as in the patient presented in our study. In such cases the patient can be observed and, if the changes begin to disturb function, surgical intervention can be undertaken. The aims were to establish an orthodontic treatment plan, to determine whether the degree of asymmetry resulting from abnormal growth of the right and left parts of the mandible changed over three years, and to estimate the rate of the changes occurring in this time. In this study we used our own asymmetry index to evaluate the pantomographic views of a patient who reported for orthodontic treatment because of occlusion disorders, facial asymmetry, and discomfort in mastication and speech. Telerentgenographic views in lateral and posterior-anterior (PA) projection were also taken. We measured and evaluated our asymmetry index on the pantomographic views. The radiographs revealed asymmetry of the left part of the mandible. Comparative analysis of the pantomographic views enabled estimation of the changes occurring over time, and the telerentgenographic lateral and PA views and computed tomography (CT) confirmed changes that increased the asymmetry. The asymmetry index is an instrument that enables the estimation of growth changes in a mandible with uncertain aetiology and no histopathological diagnosis, allows the growth rate to be determined, and facilitates continuous monitoring of the degree of mandibular asymmetry.
Advances in parameter estimation techniques applied to flexible structures
NASA Technical Reports Server (NTRS)
Maben, Egbert; Zimmerman, David C.
1994-01-01
In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimations schemes will be contrasted using the NASA Mini-Mast as the focus structure.
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
Rapid production of optimal-quality reduced-resolution representations of very large databases
Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.
2001-01-01
View space representation data is produced in real time from a world space database representing terrain features. The world space database is first preprocessed. A database is formed having one element for each spatial region corresponding to a finest selected level of detail. A multiresolution database is then formed by merging elements, and a strict error metric, independent of the parameters defining the view space, is computed for each element at each level of detail. The multiresolution database and associated strict error metrics are then processed in real time to produce real-time frame representations. View parameters for a view volume, comprising a view location and field of view, are selected. The strict error metric is combined with the view parameters to produce a view-dependent error metric. Elements with the coarsest resolution are chosen for an initial representation. First elements that are at least partially within the view volume are selected from the initial representation data set. The first elements are placed in a split queue ordered by the value of the view-dependent error metric. It is then determined whether the number of first elements in the queue meets or exceeds a predetermined number of elements, or whether the largest error metric is less than or equal to a selected upper error metric bound; if not, the element at the head of the queue is force split and the resulting elements are inserted into the queue. Force splitting continues until the determination is positive, forming a first multiresolution set of elements. The first multiresolution set of elements is then output as reduced-resolution view space data representing the terrain features.
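The split-queue refinement loop described above can be sketched with a priority queue. Everything here is a toy stand-in: the quadtree split, the halving of the strict error per level, and the distance-based view-dependent metric are all invented for illustration.

```python
import heapq

def view_error(elem, view):
    """Hypothetical view-dependent metric: the strict (view-independent)
    error divided by distance from the view location to the element."""
    (cx, cy), strict_err = elem["center"], elem["strict_err"]
    vx, vy = view["location"]
    dist = max(((cx - vx) ** 2 + (cy - vy) ** 2) ** 0.5, 1e-6)
    return strict_err / dist

def split(elem):
    """Force-split one element into four children at the next finer level,
    halving the strict error (a stand-in for a real multiresolution database)."""
    cx, cy = elem["center"]; h = elem["size"] / 2
    return [{"center": (cx + dx, cy + dy), "size": h,
             "strict_err": elem["strict_err"] / 2}
            for dx in (-h / 2, h / 2) for dy in (-h / 2, h / 2)]

def refine(coarse, view, max_elems, err_bound):
    # heapq is a min-heap, so store the negated error to pop the worst element.
    heap = [(-view_error(e, view), i, e) for i, e in enumerate(coarse)]
    heapq.heapify(heap)
    counter = len(heap)                      # unique tiebreaker for equal errors
    while len(heap) < max_elems and -heap[0][0] > err_bound:
        _, _, worst = heapq.heappop(heap)
        for child in split(worst):
            heapq.heappush(heap, (-view_error(child, view), counter, child))
            counter += 1
    return [e for _, _, e in heap]

# 4x4 coarse grid of unit elements, refined for a viewer at the origin.
coarse = [{"center": (x + 0.5, y + 0.5), "size": 1.0, "strict_err": 1.0}
          for x in range(4) for y in range(4)]
out = refine(coarse, {"location": (0.0, 0.0)}, max_elems=64, err_bound=0.05)
```

Elements near the viewer are split more deeply than distant ones, which is the intended behavior of the view-dependent metric: a fixed element budget is spent where the screen-space error is largest.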
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
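The underlying estimation machinery can be illustrated on a generic underdetermined linear problem. This is not the paper's tuner-selection routine, just the standard minimum-variance (maximum a posteriori) estimator it builds on, with invented dimensions and covariances:

```python
import numpy as np

rng = np.random.default_rng(2)

n_p, n_m = 8, 4                      # 8 health parameters, 4 sensors (underdetermined)
H = rng.standard_normal((n_m, n_p))  # linearized influence of parameters on sensors
P = np.diag(rng.uniform(0.5, 2.0, n_p))   # prior covariance of health parameters
R = 0.1 * np.eye(n_m)                     # sensor noise covariance

# MAP (minimum-variance) gain for the underdetermined problem:
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Theoretical posterior error covariance and mean squared estimation error:
P_post = P - K @ H @ P
mse_map = float(np.trace(P_post))

# Monte-Carlo check of the theoretical MSE:
N = 20000
p_true = rng.multivariate_normal(np.zeros(n_p), P, size=N)
y = p_true @ H.T + rng.multivariate_normal(np.zeros(n_m), R, size=N)
p_hat = y @ K.T
mse_mc = float(np.mean(np.sum((p_hat - p_true) ** 2, axis=1)))
```

The Monte-Carlo run confirms that the trace of the posterior covariance matches the achieved mean squared estimation error, which is the theoretical quantity the paper's tuner-selection routine is designed to minimize.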
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
Improved Estimates of Thermodynamic Parameters
NASA Technical Reports Server (NTRS)
Lawson, D. D.
1982-01-01
Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by improved method and compared with previously reported values. Technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
Bayesian LASSO, scale space and decision making in association genetics.
Pasanen, Leena; Holmström, Lasse; Sillanpää, Mikko J
2015-01-01
LASSO is a penalized regression method that facilitates model fitting in situations where there are as many, or even more explanatory variables than observations, and only a few variables are relevant in explaining the data. We focus on the Bayesian version of LASSO and consider four problems that need special attention: (i) controlling false positives, (ii) multiple comparisons, (iii) collinearity among explanatory variables, and (iv) the choice of the tuning parameter that controls the amount of shrinkage and the sparsity of the estimates. The particular application considered is association genetics, where LASSO regression can be used to find links between chromosome locations and phenotypic traits in a biological organism. However, the proposed techniques are relevant also in other contexts where LASSO is used for variable selection. We separate the true associations from false positives using the posterior distribution of the effects (regression coefficients) provided by Bayesian LASSO. We propose to solve the multiple comparisons problem by using simultaneous inference based on the joint posterior distribution of the effects. Bayesian LASSO also tends to distribute an effect among collinear variables, making detection of an association difficult. We propose to solve this problem by considering not only individual effects but also their functionals (i.e. sums and differences). Finally, whereas in Bayesian LASSO the tuning parameter is often regarded as a random variable, we adopt a scale space view and consider a whole range of fixed tuning parameters, instead. The effect estimates and the associated inference are considered for all tuning parameters in the selected range and the results are visualized with color maps that provide useful insights into data and the association problem considered. The methods are illustrated using two sets of artificial data and one real data set, all representing typical settings in association genetics.
NASA Astrophysics Data System (ADS)
Van der Auweraer, H.; Steinbichler, H.; Vanlanduit, S.; Haberstok, C.; Freymann, R.; Storer, D.; Linet, V.
2002-04-01
Accurate structural models are key to the optimization of the vibro-acoustic behaviour of panel-like structures. However, at the frequencies of relevance to the acoustic problem, the structural modes are very complex, requiring high-spatial-resolution measurements. The present paper discusses a vibration testing system based on pulsed-laser holographic electronic speckle pattern interferometry (ESPI) measurements. It is a characteristic of the method that time-triggered (and not time-averaged) vibration images are obtained. Its integration into a practicable modal testing and analysis procedure is reviewed. The accumulation of results at multiple excitation frequencies allows one to build up frequency response functions. A novel parameter extraction approach using spline-based data reduction and maximum-likelihood parameter estimation was developed. Specific extensions have been added in view of the industrial application of the approach. These include the integration of geometry and response information, the integration of multiple views into one single model, the integration with finite-element model data and the prior identification of the critical panels and critical modes. A global procedure was hence established. The approach has been applied to several industrial case studies, including car panels, the firewall of a monovolume car, a full vehicle, panels of a light truck and a household product. The research was conducted in the context of the EUREKA project HOLOMODAL and the Brite-Euram project SALOME.
NASA Astrophysics Data System (ADS)
Li, Zhengji; Teng, Qizhi; He, Xiaohai; Yue, Guihua; Wang, Zhengyong
2017-09-01
The parameter evaluation of reservoir rocks can help us to identify components and calculate the permeability and other parameters, and it plays an important role in the petroleum industry. Until now, computed tomography (CT) has remained an irreplaceable way to acquire the microstructure of reservoir rocks. During evaluation and analysis, large samples and high-resolution images are required in order to obtain accurate results. Owing to the inherent limitations of CT, however, a large field of view results in low-resolution images, and high-resolution images entail a smaller field of view. Our method is a promising solution to these data collection limitations. In this study, a framework for sparse-representation-based 3D volumetric super-resolution is proposed to enhance the resolution of 3D voxel images of reservoirs scanned with CT. A single reservoir structure and its downgraded model are divided into a large number of 3D cubes of voxel pairs, and these cube pairs are used to calculate two overcomplete dictionaries and the sparse-representation coefficients in order to estimate the high-frequency component. Furthermore, to obtain better results, a new feature extraction method combining BM4D with a Laplacian filter is introduced. In addition, we conducted a visual evaluation of the method, and used the PSNR and FSIM to evaluate it quantitatively.
Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation
NASA Astrophysics Data System (ADS)
Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei
2018-04-01
Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.
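The core idea of augmenting the state with a parameter and letting an ensemble filter update it can be shown on a toy scalar model. Everything here is a stand-in for the CM2.1 setup: the damped, forced model, the inflation factor, and all the numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

lam_true, dt, n_steps, n_ens, obs_sd = 0.5, 0.1, 400, 50, 0.05

def step(x, lam, k):
    """Toy damped, periodically forced model; lam is the 'physics' parameter."""
    return x + dt * (-lam * x + np.cos(0.3 * k * dt))

x_true = 1.0
x_ens = np.full(n_ens, 1.0)
lam_ens = rng.normal(1.0, 0.3, n_ens)    # biased, uncertain first guess of lam

for k in range(n_steps):
    x_true = step(x_true, lam_true, k)
    x_ens = step(x_ens, lam_ens, k)
    y = x_true + obs_sd * rng.standard_normal()

    # Perturbed-observation EnKF update of the augmented state (x, lam):
    y_pert = y + obs_sd * rng.standard_normal(n_ens)
    dx = x_ens - x_ens.mean()
    dlam = lam_ens - lam_ens.mean()
    innov_var = dx @ dx / (n_ens - 1) + obs_sd ** 2
    k_x = (dx @ dx / (n_ens - 1)) / innov_var
    k_lam = (dlam @ dx / (n_ens - 1)) / innov_var
    resid = y_pert - x_ens
    x_ens = x_ens + k_x * resid
    lam_ens = lam_ens + k_lam * resid
    # Mild inflation keeps the parameter spread from collapsing prematurely.
    lam_ens = lam_ens.mean() + 1.01 * (lam_ens - lam_ens.mean())

lam_hat = float(lam_ens.mean())
```

Here the state is observed every step, so the parameter update frequency equals the observation frequency; the paper's point is that for slow-response parameters the parameter update should instead be tied to the model's sensitivity response time scale.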
Seeing mountains in mole hills: geographical-slant perception
NASA Technical Reports Server (NTRS)
Proffitt, D. R.; Creem, S. H.; Zosh, W. D.; Kaiser, M. K. (Principal Investigator)
2001-01-01
When observers face directly toward the incline of a hill, their awareness of the slant of the hill is greatly overestimated, but motoric estimates are much more accurate. The present study examined whether similar results would be found when observers were allowed to view the side of a hill. Observers viewed the cross-sections of hills in real (Experiment 1) and virtual (Experiment 2) environments and estimated the inclines with verbal estimates, by adjusting the cross-section of a disk, and by adjusting a board with their unseen hand to match the inclines. We found that the results for cross-section viewing replicated those found when observers directly face the incline. Even though the angles of hills are directly evident when viewed from the side, slant perceptions are still grossly overestimated.
Punzo, Antonio; Ingrassia, Salvatore; Maruotti, Antonello
2018-04-22
A time-varying latent variable model is proposed to jointly analyze multivariate mixed-support longitudinal data. The proposal can be viewed as an extension of hidden Markov regression models with fixed covariates (HMRMFCs), which is the state of the art for modelling longitudinal data, with a special focus on the underlying clustering structure. HMRMFCs are inadequate for applications in which a clustering structure can be identified in the distribution of the covariates, as the clustering is independent from the covariates distribution. Here, hidden Markov regression models with random covariates are introduced by explicitly specifying state-specific distributions for the covariates, with the aim of improving the recovery of the clusters in the data relative to the fixed covariates paradigm. The hidden Markov regression models with random covariates class is defined focusing on the exponential family, in a generalized linear model framework. Model identifiability conditions are sketched, an expectation-maximization algorithm is outlined for parameter estimation, and various implementation and operational issues are discussed. Properties of the estimators of the regression coefficients, as well as of the hidden path parameters, are evaluated through simulation experiments and compared with those of HMRMFCs. The method is applied to physical activity data. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Moore, C. E.; Cardelino, B. H.; Frazier, D. O.; Niles, J.; Wang, X.-Q.
1998-01-01
The static third-order polarizabilities (gamma) of C60, C70, five isomers of C78 and two isomers of C84 were analyzed in terms of three properties, from a geometric point of view: symmetry, aromaticity and size. The polarizability values were based on the finite field approximation using a semiempirical Hamiltonian (AM1) and applied to molecular structures obtained from density functional theory calculations. Symmetry was characterized by the molecular group order. The selection of 6-member rings as aromatic was determined from an analysis of bond lengths. Maximum interatomic distance and surface area were the parameters considered with respect to size. Based on triple linear regression analysis, it was found that the static linear polarizability (alpha) and gamma in these molecules respond differently to geometrical properties: alpha depends almost exclusively on surface area while gamma is affected by a combination of number of aromatic rings, length and group order, in decreasing importance. In the case of alpha, valence electron contributions provide the same information as all-electron estimates. For gamma, the best correlation coefficients are obtained when all-electron estimates are used and when the dependent parameter is ln(gamma) instead of gamma.
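A triple linear regression of ln(gamma) on the three geometric descriptors can be sketched as follows, using synthetic descriptor values and a synthetic ln(gamma) in place of the paper's computed polarizabilities (the descriptor ranges and coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical descriptors for 30 fullerene-like molecules:
n_mol = 30
n_aromatic = rng.integers(10, 30, n_mol).astype(float)        # aromatic 6-rings
length = rng.uniform(7.0, 12.0, n_mol)                        # max interatomic distance
order = rng.choice([1, 2, 4, 6, 12, 120], n_mol).astype(float)  # symmetry group order

# Synthetic ln(gamma) generated from the three descriptors plus noise:
true_coef = np.array([0.08, 0.15, -0.002])
ln_gamma = (4.0 + true_coef[0] * n_aromatic + true_coef[1] * length
            + true_coef[2] * order + 0.05 * rng.standard_normal(n_mol))

# Triple linear regression (intercept + three descriptors) via least squares:
A = np.column_stack([np.ones(n_mol), n_aromatic, length, order])
coef, *_ = np.linalg.lstsq(A, ln_gamma, rcond=None)
fit = A @ coef
r2 = float(1 - np.sum((ln_gamma - fit) ** 2)
           / np.sum((ln_gamma - ln_gamma.mean()) ** 2))
```

Regressing ln(gamma) rather than gamma itself mirrors the paper's finding that the logarithm gives the better correlation; with real data, the correlation coefficient would of course reflect the actual descriptor dependence rather than a planted one.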
Joint Multi-Fiber NODDI Parameter Estimation and Tractography Using the Unscented Information Filter
Reddy, Chinthala P.; Rathi, Yogesh
2016-01-01
Tracing white matter fiber bundles is an integral part of analyzing brain connectivity. An accurate estimate of the underlying tissue parameters is also paramount in several neuroscience applications. In this work, we propose to use a joint fiber model estimation and tractography algorithm that uses the NODDI (neurite orientation dispersion diffusion imaging) model to estimate fiber orientation dispersion consistently and smoothly along the fiber tracts along with estimating the intracellular and extracellular volume fractions from the diffusion signal. While the NODDI model has been used in earlier works to estimate the microstructural parameters at each voxel independently, for the first time, we propose to integrate it into a tractography framework. We extend this framework to estimate the NODDI parameters for two crossing fibers, which is imperative to trace fiber bundles through crossings as well as to estimate the microstructural parameters for each fiber bundle separately. We propose to use the unscented information filter (UIF) to accurately estimate the model parameters and perform tractography. The proposed approach has significant computational performance improvements as well as numerical robustness over the unscented Kalman filter (UKF). Our method not only estimates the confidence in the estimated parameters via the covariance matrix, but also provides the Fisher-information matrix of the state variables (model parameters), which can be quite useful to measure model complexity. Results from in-vivo human brain data sets demonstrate the ability of our algorithm to trace through crossing fiber regions, while estimating orientation dispersion and other biophysical model parameters in a consistent manner along the tracts. PMID:27147956
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
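The recursive Fourier transform at the heart of such frequency-domain methods can be sketched in a few lines: the running sum at each analysis frequency is updated once per new sample, and at any time equals the batch transform of the data seen so far. The signal, sampling rate, and analysis frequencies below are invented for illustration.

```python
import numpy as np

dt = 0.02                        # 50 Hz sampling, a stand-in for flight-data rates
t = np.arange(0, 10, dt)
x = (np.sin(2 * np.pi * 0.8 * t)
     + 0.1 * np.random.default_rng(5).standard_normal(t.size))

freqs = np.array([0.5, 0.8, 1.2])            # analysis frequencies (Hz)
w = 2 * np.pi * freqs

# Recursive (one sample at a time) discrete Fourier transform:
X = np.zeros(freqs.size, dtype=complex)
for i, xi in enumerate(x):
    X += xi * np.exp(-1j * w * i * dt)       # one cheap update per new sample
X *= dt                                      # scale to approximate the Fourier integral

# Batch reference: the same sum computed all at once.
X_batch = dt * (x @ np.exp(-1j * np.outer(np.arange(t.size) * dt, w)))
```

In a real-time equation-error scheme, the same recursive update would be applied to the measured states and inputs, and the model parameters obtained by least squares in the frequency domain at each update.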
Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine
2002-01-01
The design of a linear parameter varying (LPV) controller for an aircraft at actuator failure cases is presented. The controller synthesis for actuator failure cases is formulated into linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of an LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block which represents estimated parameter uncertainties. The fault parameter is estimated using the two-stage Kalman filter. The simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.
Multi-objective optimization in quantum parameter estimation
NASA Astrophysics Data System (ADS)
Gong, BeiLi; Cui, Wei
2018-04-01
We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.
Cooley, Richard L.
1983-01-01
This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
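The MSE-versus-ridge-parameter plot recommended above can be reproduced on a synthetic linear problem. Here the theoretical MSE (variance plus squared bias) is computed with the true parameters known, which is only possible in a synthetic setting; the design matrix and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

n, p = 40, 6
# Ill-conditioned design: the last three columns carry little information.
X = rng.standard_normal((n, p)) @ np.diag([1, 1, 1, 0.3, 0.2, 0.1])
beta = np.array([1.0, -0.5, 0.8, 0.3, -0.2, 0.1])
sigma = 1.0

def mse_ridge(k):
    """Theoretical MSE of the ridge estimator: variance + squared bias."""
    A = np.linalg.inv(X.T @ X + k * np.eye(p))
    var = sigma ** 2 * np.trace(A @ X.T @ X @ A)
    bias = A @ X.T @ X @ beta - beta         # equals -k * A @ beta
    return var + bias @ bias

ks = np.linspace(0.0, 5.0, 101)              # k = 0 is ordinary least squares
mses = np.array([mse_ridge(k) for k in ks])
```

The curve starts at the OLS mean square error at k = 0 and dips below it for moderate k, illustrating the paper's point that incorporating (possibly biased) prior information can greatly reduce the MSE of properly scaled parameter estimates.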
Symbolic enhancement of perspective displays
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Hacisalihzade, Selim S.
1990-01-01
Two exocentric azimuth judgment experiments with a perspective display were conducted with 16 subjects. Previous work has shown these judgments to exhibit a bias possibly due to misinterpretation of the viewing parameters used to generate the display. Though geometric compensations may be used to correct for the bias, an alternate technique selected in the following 2 experiments was the introduction of symbolic enhancements in the form of compass roses. It is suggested that a compass rose with 30 deg divisions results in overall optimal azimuth estimation accuracy when accuracy and decision time are both considered. The data also suggest that the added radial lines on the compass roses may interact with normalization processes that influence the judgment errors.
Wolters, Mark A; Dean, C B
2017-01-01
Remote sensing images from Earth-orbiting satellites are a potentially rich data source for monitoring and cataloguing atmospheric health hazards that cover large geographic regions. A method is proposed for classifying such images into hazard and nonhazard regions using the autologistic regression model, which may be viewed as a spatial extension of logistic regression. The method includes a novel and simple approach to parameter estimation that makes it well suited to handling the large and high-dimensional datasets arising from satellite-borne instruments. The methodology is demonstrated on both simulated images and a real application to the identification of forest fire smoke.
NASA Technical Reports Server (NTRS)
Scialdone, J. J.
1985-01-01
Experimentally measured outgassing as a function of time is presented for 14 space systems including several spacecraft instruments, spacecraft, the shuttle bay, and a spent solid fuel motor. The weights, volumes, and some of the scientific functions of the instruments involved are indicated. The methods used to obtain the data are briefly described. General indications on how to use the data to obtain the internal pressure versus time for a payload, its self-contamination, the gaseous flow in its vicinity, the column densities in its field of view, and other environmental parameters which are dependent on the outgassing of a payload are provided.
A Brief Survey of Modern Optimization for Statisticians
Lange, Kenneth; Chi, Eric C.; Zhou, Hua
2014-01-01
Modern computational statistics is turning more and more to high-dimensional optimization to handle the deluge of big data. Once a model is formulated, its parameters can be estimated by optimization. Because model parsimony is important, models routinely include nondifferentiable penalty terms such as the lasso. This sober reality complicates minimization and maximization. Our broad survey stresses a few important principles in algorithm design. Rather than view these principles in isolation, it is more productive to mix and match them. A few well chosen examples illustrate this point. Algorithm derivation is also emphasized, and theory is downplayed, particularly the abstractions of the convex calculus. Thus, our survey should be useful and accessible to a broad audience. PMID:25242858
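As a concrete instance of the principles the survey stresses (a nondifferentiable lasso penalty handled by combining a gradient step with a proximal map), here is a minimal proximal-gradient (ISTA) solver on synthetic sparse-regression data; the problem sizes and true coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sparse regression: 100 observations, 20 predictors, 3 truly nonzero.
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[[0, 5, 12]] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for 0.5*||y - Xb||^2 + lam*||b||_1."""
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the smooth part
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        b = soft_threshold(b - grad / L, lam / L)
    return b

b_hat = ista(X, y, lam=5.0)
```

The soft-thresholding step sets small coefficients exactly to zero, which is how the lasso performs variable selection; mixing this proximal idea with acceleration or majorization, as the survey suggests, yields faster variants such as FISTA.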
Adaptive compressed sensing of multi-view videos based on the sparsity estimation
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-11-01
Conventional compressive sensing for videos relies on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, reconstruction quality suffers. First, block-based compressed sensing (BCS) with the conventional choice of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed based on the two-dimensional discrete wavelet transform (2D DWT). Given an energy threshold, the DWT coefficients are energy-normalized and sorted in descending order, and the sparsity of the multi-view video is obtained from the proportion of dominant coefficients. Finally, simulation results show that the method effectively estimates the sparsity of video frames and provides a sound basis for selecting the number of compressive observations. Because the number of observations is chosen from the sparsity estimated under the given energy threshold, the proposed method ensures the reconstruction quality of multi-view videos.
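The sparsity-estimation step described above (normalize coefficient energies, sort in descending order, count dominant coefficients against an energy threshold) can be sketched as follows; the threshold value and coefficient vector are illustrative:

```python
def estimate_sparsity(coeffs, energy_threshold=0.99):
    """Fraction of (sorted) coefficients needed to capture the given
    share of total signal energy -- a proxy for frame sparsity."""
    energies = sorted((c * c for c in coeffs), reverse=True)
    total = sum(energies)
    cumulative, k = 0.0, 0
    for e in energies:
        cumulative += e
        k += 1
        if cumulative >= energy_threshold * total:
            break
    return k / len(coeffs)

# A nearly sparse coefficient vector: 2 dominant entries out of 8.
coeffs = [10.0, -8.0, 0.1, 0.05, -0.02, 0.01, 0.0, 0.0]
s = estimate_sparsity(coeffs)  # 2 of 8 coefficients dominate
```

The estimated fraction would then drive the number of BCS measurements allocated to the frame.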
NASA Astrophysics Data System (ADS)
Smeltzer, C. D.; Wang, Y.; Boersma, F.; Celarier, E. A.; Bucsela, E. J.
2013-12-01
We investigate the effects of retrieval radiation schemes and parameters on trend analysis using tropospheric nitrogen dioxide (NO2) vertical column density (VCD) measurements over the United States. Ozone Monitoring Instrument (OMI) observations from 2005 through 2012 are used in this analysis. We investigated two radiation schemes, provided by the National Aeronautics and Space Administration (NASA TOMRAD) and the Koninklijk Nederlands Meteorologisch Instituut (KNMI DAK). In addition, we analyzed trend dependence on radiation parameters, including surface albedo and viewing geometry. The cross-track mean VCD average difference is 10-15% between the two radiation schemes in 2005. As the OMI row anomaly developed and progressively worsened, the difference between the two schemes became larger. Furthermore, applying surface albedo measurements from the Moderate Resolution Imaging Spectroradiometer (MODIS) leads to increases in estimated NO2 VCD trends over high-emission regions. We find that the uncertainties of OMI-derived NO2 VCD trends can be reduced by up to a factor of 3 by selecting OMI cross-track rows on the basis of their performance over the ocean [see abstract figure]. Comparison of OMI tropospheric VCD trends to those estimated from the EPA surface NO2 observations indicates that using MODIS surface albedo data and a narrower selection of OMI cross-track rows greatly improves the agreement of estimated trends between satellite and surface data. The figure shows the reduction of uncertainty in the OMI NO2 trend obtained by selecting OMI cross-track rows based on their performance over the ocean. With this technique, uncertainties within the seasonal trend may be reduced by a factor of 3 or more (blue) compared with only removing the anomalous rows, i.e., considering OMI cross-track rows 4-24 (red).
NASA Astrophysics Data System (ADS)
Ewerlöf, Maria; Larsson, Marcus; Salerud, E. Göran
2017-02-01
Hyperspectral imaging (HSI) can estimate the spatial distribution of skin blood oxygenation using visible to near-infrared light. HSI oximeters often use a liquid-crystal tunable filter, an acousto-optic tunable filter, or mechanically adjustable filter wheels, whose response and switching times are too long to monitor tissue hemodynamics. This work aims to evaluate a multispectral snapshot imaging system to estimate skin blood volume and oxygen saturation with high temporal and spatial resolution. We use a snapshot imager, the xiSpec camera (MQ022HG-IM-SM4X4-VIS, XIMEA), having 16 wavelength-specific Fabry-Perot filters overlaid on the custom CMOS chip. The spectral distributions of the bands, however, overlap substantially, which must be taken into account for accurate analysis. An inverse Monte Carlo analysis is performed using a two-layered skin tissue model, defined by epidermal thickness, hemoglobin concentration and oxygen saturation, melanin concentration, and a spectrally dependent reduced-scattering coefficient, all parameters relevant for human skin. The analysis takes into account the spectral detector response of the xiSpec camera. At each spatial location in the field of view, we compare the simulated output to the detected diffusely backscattered spectra to find the best fit. The imager is evaluated for spatial and temporal variations during arterial and venous occlusion protocols applied to the forearm. Estimated blood volume changes and oxygenation maps at 512x272 pixels show values comparable to reference measurements performed in contact with the skin tissue. We conclude that the snapshot xiSpec camera, paired with an inverse Monte Carlo algorithm, permits spatial and temporal measurement of varying physiological parameters, such as skin tissue blood volume and oxygenation.
NASA Astrophysics Data System (ADS)
van Leth, Thomas C.; Verstraeten, Willem W.; Sanders, Abram F. J.
2014-05-01
Mapping terrestrial chlorophyll fluorescence is a crucial activity for obtaining information on the functional status of vegetation and for improving estimates of light-use efficiency (LUE) and gross primary productivity (GPP). GPP quantifies carbon fixation by plant ecosystems and is therefore an important parameter for budgeting terrestrial carbon cycles. Satellite remote sensing offers an excellent tool for investigating GPP in a spatially explicit fashion across different scales of observation. GPP estimates, however, remain largely uncertain due to biotic and abiotic factors that influence plant production. Sun-induced fluorescence can enhance our knowledge of how environmentally induced changes affect the LUE; it can be linked to optically derived remote sensing parameters, thereby reducing the uncertainty in GPP estimates. Satellite measurements provide a relatively new perspective on global sun-induced fluorescence, enabling us to quantify spatial distributions and changes over time. Techniques have recently been developed to retrieve fluorescence emissions from hyperspectral satellite measurements. We use data from the Global Ozone Monitoring Experiment-2 (GOME2) to infer terrestrial fluorescence. The spectral signatures of three basic components (atmospheric absorption, surface reflectance, and fluorescence radiance) are separated using reference measurements of non-fluorescent surfaces (deserts, deep oceans, and ice) to solve for the atmospheric absorption. An empirically based principal component analysis (PCA) approach is applied, similar to that of Joiner et al. (2013, ACP). Here we show our first global maps of the GOME2 retrievals of chlorophyll fluorescence. First results indicate fluorescence distributions similar to those obtained by GOSAT and GOME2 as reported by Joiner et al. (2013, ACP), although we find slightly higher values.
In view of optimizing the fluorescence retrieval, we also show the effect of the reference selection procedure on the retrieval product.
The Radial Speed - Expansion Speed Relation for Earth-Directed CMEs
NASA Astrophysics Data System (ADS)
Makela, P. A.; Gopalswamy, N.; Yashiro, S.
2013-12-01
The propagation speed of Earth-directed coronal mass ejections (CMEs) is an essential parameter for space weather forecasting. However, the true propagation speed of Earth-directed CMEs cannot be measured accurately from coronagraph images taken from Earth's view. To circumvent the inaccuracies of speed measurements due to projection effects, empirical relations expressing the radial speed (Vrad) of the CME as a function of the CME expansion speed (Vexp) have been suggested. Vexp is defined as the apparent speed at which the CME spreads in the coronagraph's field of view. During 2010-2012 the STEREO spacecraft provided a side view of Earth-directed CMEs, allowing measurements of true CME speeds and widths. In a case study of the 2011 February 15 CME, Gopalswamy et al. (2012) compared three Vrad-Vexp relations (flat cone, and full or shallow ice cream cone; Gopalswamy et al., 2009) and found the closest match with the observations for the full ice cream cone relation Vrad = 1/2(1 + cot w)Vexp, where w is the half width of the CME. Using the STEREO/SECCHI and SOHO/LASCO observations during this opportune period, we expand this analysis to a larger set of Earth-directed CMEs. We compare the computed CME speed estimates with the measured true speeds and estimate the accuracy of the Vrad-Vexp relations. References: Gopalswamy, N. et al. (2009), The expansion and radial speeds of coronal mass ejections, Cent. Eur. Astrophys. Bull., 33, 115. Gopalswamy, N. et al. (2012), The relationship between the expansion speed and radial speed of CMEs confirmed using quadrature observations of the 2011 February 15 CME, Sun and Geosphere, 7(1), 7.
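The full ice cream cone relation quoted above is straightforward to evaluate; the speeds and half-width below are illustrative numbers, not values from the study:

```python
import math

def radial_speed(v_exp_kms, half_width_deg):
    """Full ice cream cone model (Gopalswamy et al., 2009):
    Vrad = 0.5 * (1 + cot(w)) * Vexp, with w the CME half-width."""
    w = math.radians(half_width_deg)
    return 0.5 * (1.0 + 1.0 / math.tan(w)) * v_exp_kms

# Illustrative: for a 45-degree half-width CME, cot(w) = 1,
# so Vrad equals Vexp.
v = radial_speed(600.0, 45.0)
```

Note the geometric limits: as w approaches 90 degrees (a halo filling the plane of sky), cot(w) tends to 0 and Vrad tends to Vexp/2, while narrow CMEs (small w) have Vrad much larger than Vexp.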
Waller, Niels G; Feuerstahler, Leah
2017-01-01
In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler item response theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in mirt (Chalmers, 2012).
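The 4PM item response function underlying the study above is a Barton-Lord-style curve with discrimination a, difficulty b, lower asymptote c (guessing), and upper asymptote d (slipping). A minimal sketch, with illustrative parameter values:

```python
import math

def irf_4pm(theta, a, b, c, d):
    """Four-parameter model item response function:
    P(theta) = c + (d - c) / (1 + exp(-a * (theta - b)))."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the curve sits exactly halfway between the
# asymptotes c and d; far from b it approaches c or d.
p_mid = irf_4pm(0.0, a=1.5, b=0.0, c=0.2, d=0.9)
```

Setting d = 1 recovers the three-parameter model, and c = 0, d = 1 the two-parameter model, which is how the "three alternative models" nest inside the 4PM.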
Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol
2017-12-01
Exploratory preclinical and clinical trials may involve small numbers of patients, making it difficult to calculate and analyze pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of the classical first-order conditional estimation with interaction (FOCE-I) method and expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distributions from data sets having a low number of subjects. In this study, 100 data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distributions. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data of theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter estimates (fixed and random effects) showed that all four methods performed equally well at the lower IIV levels, while the FOCE-I method performed better than the EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV.
Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
Control system estimation and design for aerospace vehicles
NASA Technical Reports Server (NTRS)
Stefani, R. T.; Williams, T. L.; Yakowitz, S. J.
1972-01-01
The selection of an estimator which is unbiased when applied to structural parameter estimation is discussed. The mathematical relationships for structural parameter estimation are defined. It is shown that a conventional weighted least squares (CWLS) estimate is biased when applied to structural parameter estimation. Two approaches to bias removal are suggested: (1) change the CWLS estimator or (2) change the objective function. The advantages of each approach are analyzed.
NASA Astrophysics Data System (ADS)
Choi, Hon-Chit; Wen, Lingfeng; Eberl, Stefan; Feng, Dagan
2006-03-01
Dynamic single photon emission computed tomography (SPECT) has the potential to quantitatively estimate physiological parameters by fitting compartment models to the tracer kinetics. The generalized linear least squares method (GLLS) is an efficient method to estimate unbiased kinetic parameters and parametric images. However, due to the low sensitivity of SPECT, noisy data can cause voxel-wise parameter estimation by GLLS to fail. Fuzzy C-means (FCM) clustering and a modified FCM, which also utilizes information from the immediate neighboring voxels, are proposed to improve the voxel-wise parameter estimation of GLLS. Monte Carlo simulations were performed to generate dynamic SPECT data with different noise levels, which were processed by general and modified FCM clustering. Parametric images were estimated by Logan and Yokoi graphical analysis and GLLS. The influx rate (Ki) and volume of distribution (Vd) were estimated for the cerebellum, thalamus, and frontal cortex. Our results show that (1) FCM reduces the bias and improves the reliability of parameter estimates for noisy data, (2) GLLS provides estimates of micro parameters (K1-k4) as well as macro parameters, such as volume of distribution (Vd) and binding potential (BPI & BPII), and (3) FCM clustering incorporating neighboring voxel information does not improve the parameter estimates, but reduces noise in the parametric images. These findings indicate that pre-segmentation with traditional FCM clustering is desirable for generating voxel-wise parametric images with GLLS from dynamic SPECT data.
ERIC Educational Resources Information Center
Finch, Holmes; Edwards, Julianne M.
2016-01-01
Standard approaches for estimating item response theory (IRT) model parameters generally work under the assumption that the latent trait being measured by a set of items follows the normal distribution. Estimation of IRT parameters in the presence of nonnormal latent traits has been shown to generate biased person and item parameter estimates. A…
An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.
2012-01-01
A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
NASA Astrophysics Data System (ADS)
Sedaghat, A.; Bayat, H.; Safari Sinegani, A. A.
2016-03-01
The saturated hydraulic conductivity (Ks) of the soil is one of the main soil physical properties. Indirect estimation of this parameter using pedo-transfer functions (PTFs) has received considerable attention. The purpose of this study was to improve the estimation of Ks using fractal parameters of particle and micro-aggregate size distributions in smectitic soils. In this study, 260 disturbed and undisturbed soil samples were collected from Guilan province in northern Iran. The fractal model of Bird and Perrier was used to compute the fractal parameters of particle and micro-aggregate size distributions. The PTFs were developed by an artificial neural network (ANN) ensemble to estimate Ks from available soil data and fractal parameters. Significant correlations were found between Ks and the fractal parameters of particles and micro-aggregates. Estimation of Ks was improved significantly by using fractal parameters of soil micro-aggregates as predictors, whereas using the geometric mean and geometric standard deviation of particle diameter did not improve Ks estimates significantly. Using fractal parameters of particles and micro-aggregates simultaneously had the greatest effect on the estimation of Ks. Overall, fractal parameters can be used successfully as input parameters to improve the estimation of Ks in PTFs for smectitic soils, and the ANN ensemble successfully related the fractal parameters of particles and micro-aggregates to Ks.
Adaptive Modal Identification for Flutter Suppression Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Drew, Michael; Swei, Sean S.
2016-01-01
In this paper, we develop an adaptive modal identification method for identifying the frequencies and damping of a flutter mode based on model-reference adaptive control (MRAC) and least-squares methods. The least-squares parameter estimation achieves parameter convergence in the presence of persistent excitation, whereas the MRAC parameter estimation does not guarantee parameter convergence. Two adaptive flutter suppression control approaches are developed: one based on MRAC and the other based on the least-squares method. The MRAC flutter suppression control is designed as an integral part of the parameter estimation, where the feedback signal is used to estimate the modal information. The least-squares approach, in contrast, applies the separation principle of control and estimation, with least-squares modal identification performing the parameter estimation.
Extremes in ecology: Avoiding the misleading effects of sampling variation in summary analyses
Link, W.A.; Sauer, J.R.
1996-01-01
Surveys such as the North American Breeding Bird Survey (BBS) produce large collections of parameter estimates. One's natural inclination when confronted with lists of parameter estimates is to look for the extreme values: in the BBS, these correspond to the species that appear to have the greatest changes in population size through time. Unfortunately, extreme estimates are liable to correspond to the most poorly estimated parameters. Consequently, the most extreme parameters may not match up with the most extreme parameter estimates. Ranking parameter values on the basis of their estimates is a difficult statistical problem. We use data from the BBS and simulations to illustrate the potentially misleading effects of sampling variation on rankings of parameters. We describe empirical Bayes and constrained empirical Bayes procedures which provide partial solutions to the problem of ranking in the presence of sampling variation.
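The empirical Bayes idea described above can be sketched in a simple normal-normal form: each estimate is shrunk toward the grand mean in proportion to its sampling variance, so poorly estimated parameters lose their extremeness. The trend values and variances below are invented for illustration; the BBS procedures are more elaborate:

```python
def eb_shrink(estimates, sampling_vars):
    """Normal-normal empirical Bayes: shrink each estimate toward the
    grand mean; noisier estimates (larger sampling variance) shrink more."""
    n = len(estimates)
    grand_mean = sum(estimates) / n
    # Method-of-moments estimate of the between-parameter variance tau^2:
    # observed spread minus the average sampling variance, floored at 0.
    total_var = sum((x - grand_mean) ** 2 for x in estimates) / (n - 1)
    tau2 = max(total_var - sum(sampling_vars) / n, 0.0)
    return [grand_mean + tau2 / (tau2 + v) * (x - grand_mean)
            for x, v in zip(estimates, sampling_vars)]

trends = [5.0, 1.0, 0.0, -1.0, -5.0]   # raw trend estimates
svars  = [9.0, 0.1, 0.1, 0.1, 0.1]     # the extreme one is noisiest
shrunk = eb_shrink(trends, svars)
```

After shrinkage, the noisy extreme estimate is pulled strongly toward the mean while precisely estimated values barely move, which changes the apparent ranking.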
NASA Astrophysics Data System (ADS)
Wu, Fang-Xiang; Mu, Lei; Shi, Zhong-Ke
2010-01-01
The models of gene regulatory networks are often derived from the principles of statistical thermodynamics or from Michaelis-Menten kinetics. As a result, the models contain rational reaction rates that are nonlinear in both parameters and states. Estimating parameters that enter a model nonlinearly is challenging, despite many traditional nonlinear estimation methods such as the Gauss-Newton iteration and its variants. In this article, we develop a two-step method to estimate the parameters in the rational reaction rates of gene regulatory networks via weighted linear least squares. The method exploits the special structure of rational reaction rates: both the numerator and the denominator are linear in the parameters. By designing a special weight matrix for the linear least squares, parameters in the numerator and the denominator can be estimated by solving two linear least squares problems. The main advantage of the developed method is that it produces analytical solutions to what is originally a nonlinear parameter estimation problem. The developed method is applied to a couple of gene regulatory networks. The simulation results show superior performance over the Gauss-Newton method.
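As an illustration of the linearization idea (not the paper's weighted scheme), a Michaelis-Menten rate v = Vmax*x/(Km + x) can be rearranged into Vmax*x - Km*v = v*x, which is linear in (Vmax, Km) and solvable by ordinary least squares; with noise-free data the true parameters are recovered exactly:

```python
def fit_michaelis_menten(xs, vs):
    """Rearrange v = Vmax*x / (Km + x) into Vmax*x - Km*v = v*x,
    linear in (Vmax, Km), and solve the 2x2 normal equations.
    (Unweighted here; the paper designs a special weight matrix.)"""
    # Design rows are [x, -v], targets are v*x.
    s11 = sum(x * x for x in xs)
    s12 = sum(-x * v for x, v in zip(xs, vs))
    s22 = sum(v * v for v in vs)
    r1 = sum(x * (v * x) for x, v in zip(xs, vs))
    r2 = sum(-v * (v * x) for x, v in zip(xs, vs))
    det = s11 * s22 - s12 * s12
    vmax = (r1 * s22 - s12 * r2) / det
    km = (s11 * r2 - r1 * s12) / det
    return vmax, km

# Noise-free data generated from Vmax = 10, Km = 2 is recovered exactly.
xs = [0.5, 1.0, 2.0, 4.0, 8.0]
vs = [10 * x / (2 + x) for x in xs]
vmax, km = fit_michaelis_menten(xs, vs)
```

With noisy v the naive rearrangement distorts the error structure, which is precisely why the paper introduces a special weight matrix for the two linear solves.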
A new Bayesian recursive technique for parameter estimation
NASA Astrophysics Data System (ADS)
Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis
2006-08-01
The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
Early Warning for Large Magnitude Earthquakes: Is it feasible?
NASA Astrophysics Data System (ADS)
Zollo, A.; Colombelli, S.; Kanamori, H.
2011-12-01
The mega-thrust, Mw 9.0, 2011 Tohoku earthquake has re-opened the discussion in the scientific community about the effectiveness of earthquake early warning (EEW) systems when applied to such large events. Many EEW systems are now under testing or development worldwide, and most of them are based on real-time measurement of ground motion parameters in a window of a few seconds after the P-wave arrival. Currently, we use the initial peak displacement (Pd) and the predominant period (τc), among other parameters, to rapidly estimate the earthquake magnitude and damage potential. A well known problem with real-time estimation of the magnitude is parameter saturation. Several authors have shown that the scaling laws between early warning parameters and magnitude are robust and effective up to magnitude 6.5-7; the correlation, however, has not yet been verified for larger events. The Tohoku earthquake occurred near the east coast of Honshu, Japan, on the subduction boundary between the Pacific and Okhotsk plates. The high quality KiK-net and K-NET networks provided a large quantity of strong motion records of the mainshock, with wide azimuthal coverage both along the Japan coast and inland. More than 300 3-component accelerograms were available, with epicentral distances ranging from about 100 km to more than 500 km. This earthquake thus presents an optimal case study for testing the physical basis of early warning and for investigating the feasibility of real-time estimation of earthquake size and damage potential even for M > 7 earthquakes. In the present work we used the acceleration waveform data of the main shock for stations along the coast, up to 200 km epicentral distance. We measured the early warning parameters, Pd and τc, within different time windows, starting from 3 seconds and expanding the testing window up to 30 seconds.
The aim is to verify the correlation of these parameters with peak ground velocity (PGV) and magnitude, respectively, as a function of the length of the P-wave window. The entire rupture process of the Tohoku earthquake lasted more than 120 seconds, as shown by the source time functions obtained by several authors. When a 3 second window is used to measure Pd and τc, the result is an obvious underestimation of the event size and final PGV. However, as the time window increases to 27-30 seconds, the measured values of Pd and τc become comparable with those expected for a magnitude M≥8.5 earthquake, according to the τc vs. M and PGV vs. Pd relationships obtained in a previous work. Since we did not observe any saturation effect for the predominant period and peak displacement measured within a 30-second P-wave window, we infer that, at least from a theoretical point of view, the estimation of earthquake damage potential through the early warning parameters is still feasible for large events, provided that a longer time window is used for parameter measurement. The off-line analysis of the Tohoku event records shows that reliable estimates of the damage potential could have been obtained 40-50 seconds after the origin time, by updating the measurements of the early warning parameters in progressively enlarged P-wave time windows from 3 to 30 seconds.
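The two early warning parameters can be sketched as follows, using the common definitions Pd = max|u| and τc = 2π√(∫u²dt / ∫u̇²dt) over the P-wave window (a Kanamori-style definition; the synthetic waveform below is illustrative, not a Tohoku record):

```python
import math

def pd_tauc(u, dt):
    """Peak displacement Pd and predominant period
    tau_c = 2*pi*sqrt( integral(u^2) / integral(udot^2) )
    computed over a P-wave window of displacement samples u."""
    # Central-difference estimate of the displacement derivative.
    udot = [(u[i + 1] - u[i - 1]) / (2 * dt) for i in range(1, len(u) - 1)]
    num = sum(x * x for x in u) * dt
    den = sum(x * x for x in udot) * dt
    pd = max(abs(x) for x in u)
    return pd, 2.0 * math.pi * math.sqrt(num / den)

# Synthetic 1 Hz, 3 cm amplitude displacement over 3 s:
# tau_c should recover the 1 s period.
dt = 0.001
u = [0.03 * math.sin(2 * math.pi * 1.0 * t * dt) for t in range(3000)]
pd, tauc = pd_tauc(u, dt)
```

For a pure sinusoid τc recovers the signal period, which is why longer-period (larger) events push τc upward until the window is long enough to capture them.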
Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model
ERIC Educational Resources Information Center
Custer, Michael
2015-01-01
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
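A minimal sketch of the quantities involved in the study above: the Rasch (one-parameter) item response function and the RMSD criterion used to compare item parameter estimates against their "true" values. The difficulty values below are invented, standing in for estimates from a small versus a large calibration sample:

```python
import math

def rasch_prob(theta, b):
    """Rasch (1PL) probability of a correct response for a person
    with ability theta on an item with difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def rmsd(estimates, true_values):
    """Root mean squared deviation of item parameter estimates."""
    n = len(estimates)
    return math.sqrt(
        sum((e - t) ** 2 for e, t in zip(estimates, true_values)) / n)

# Illustrative: item difficulty estimates from a small vs. a large sample.
true_b = [-1.0, 0.0, 1.0]
small_sample_b = [-1.4, 0.3, 0.7]
large_sample_b = [-1.05, 0.02, 0.97]
```

As sample size grows, the RMSD of the estimated difficulties against the true values declines, which is the pattern the study tracks along with the count of outliers.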
A Comparative Study of Distribution System Parameter Estimation Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems; therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
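A minimal scalar sketch of the state-vector augmentation idea: an unknown constant gain is appended to the state as a random-walk component and estimated by a Kalman filter from measurements y = a·u + noise. The system, noise levels, and data below are illustrative, not the paper's distribution-system model:

```python
def kalman_parameter_estimate(us, ys, q=1e-6, r=0.01):
    """Treat an unknown constant gain 'a' (y = a*u + noise) as an
    augmented random-walk state and estimate it with a scalar
    Kalman filter."""
    a_hat, p = 0.0, 1.0               # initial estimate and variance
    for u, y in zip(us, ys):
        p += q                        # predict: random-walk parameter
        k = p * u / (u * u * p + r)   # gain for measurement y = u*a
        a_hat += k * (y - u * a_hat)  # update with the innovation
        p *= (1.0 - k * u)            # posterior variance
    return a_hat

# Noise-free measurements generated from a true gain of 2.5.
us = [1.0, 2.0, 0.5, 1.5, 3.0, 1.0]
ys = [2.5 * u for u in us]
a_hat = kalman_parameter_estimate(us, ys)
```

Feeding the filter snapshots taken at different operating points (different u) is what makes the parameter observable, mirroring the combined-snapshot strategy in the abstract.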
A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes
2011-01-01
Background: A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. Methods: We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse to fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results: Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom.
In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than estimates from single view reconstructions. Conclusions: Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low-cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views. PMID:21251284
A 3D freehand ultrasound system for multi-view reconstructions from sparse 2D scanning planes.
Yu, Honggang; Pattichis, Marios S; Agurto, Carla; Beth Goens, M
2011-01-20
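The coarse initialization step based on the Hotelling transform can be illustrated with a small sketch: align two point clouds by their centroids and principal axes. This is a minimal illustration under assumed inputs, not the paper's implementation; the residual sign/order ambiguity of principal axes is exactly what the subsequent fine, feature-based registration stage would resolve.

```python
import numpy as np

def hotelling_align(points):
    """Centroid and principal axes (Hotelling/PCA transform) of an (N, 3)
    point cloud, used to coarsely initialize multi-view registration."""
    centroid = points.mean(axis=0)
    _, axes = np.linalg.eigh(np.cov((points - centroid).T))
    return centroid, axes

def coarse_register(src, dst):
    """Map src onto dst by aligning centroids and principal axes. Axes may
    still be flipped (PCA sign ambiguity); a fine feature-based registration
    stage is expected to resolve that."""
    c_s, A_s = hotelling_align(src)
    c_d, A_d = hotelling_align(dst)
    R = A_d @ A_s.T                   # rotation aligning the two axis sets
    return (src - c_s) @ R.T + c_d

# Toy check: dst is a rotated and translated copy of src.
rng = np.random.default_rng(0)
src = rng.normal(size=(200, 3)) * np.array([3.0, 2.0, 1.0])
theta = 0.4
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
dst = src @ Rz.T + np.array([5.0, -2.0, 1.0])
aligned = coarse_register(src, dst)
```

After the coarse step, the mapped cloud matches the target in centroid and covariance, which is the invariant the fine registration then refines pointwise.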
Hydrograph structure informed calibration in the frequency domain with time localization
NASA Astrophysics Data System (ADS)
Kumarasamy, K.; Belmont, P.
2015-12-01
Complex models with large numbers of parameters are commonly used to estimate sediment yields and predict changes in sediment loads resulting from changes in management or conservation practice at large watershed (>2000 km2) scales. Because sediment yield is a strongly non-linear function of channel (peak or mean) velocity or flow depth, it is critical to represent flows accurately. Calibration of such models (e.g., SWAT) generally involves adjusting several parameters to improve goodness-of-fit metrics such as the Nash-Sutcliffe Efficiency (NSE). However, such indicators only provide a global view of model performance, potentially obscuring the accuracy of the timing or magnitude of specific flows of interest. We describe an approach to streamflow calibration that greatly reduces the black-box nature of calibration, in which the response to a parameter adjustment is not clearly known. The Fourier transform or the short-time Fourier transform could also be used to characterize model performance in the frequency domain; however, the Fourier transform's ambiguity with regard to time localization makes it nearly useless in a model calibration setting. Brief and sudden changes in a signal (e.g., streamflow peaks) carry the most interesting information about parameter adjustments, and this information is completely lost in a transform without time localization. The wavelet transform captures the frequency content of a signal without sacrificing time localization, and we apply it to contrast changes in signal response to parameter adjustments. Here we employ the Mexican hat mother wavelet and apply a continuous wavelet transform to analyze the signal in the frequency domain. Further, using the cross-wavelet spectrum, we examine the relationship between the two signals (prior and post parameter adjustment) in the time-scale plane (e.g., lower scales correspond to higher frequencies).
The non-stationarity of the streamflow signal does not hinder this assessment, and regions of change, called boundaries of influence (the seasons or times when such change occurs in the hydrograph), are delineated for each parameter. In addition, we can identify the structural component of the signal (e.g., shifts or amplitude changes) that has changed.
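The time-localized frequency comparison described above can be sketched with a hand-rolled Mexican hat continuous wavelet transform. The hydrographs, scales, and cross-wavelet product below are illustrative stand-ins, not the study's data or code.

```python
import numpy as np

def mexican_hat(t, s):
    """Mexican hat (Ricker) wavelet at scale s."""
    x = t / s
    return (2 / (np.sqrt(3 * s) * np.pi ** 0.25)) * (1 - x**2) * np.exp(-x**2 / 2)

def cwt(signal, scales):
    """Continuous wavelet transform by direct convolution with the wavelet."""
    n = len(signal)
    out = np.empty((len(scales), n))
    t = np.arange(-n // 2, n // 2)
    for i, s in enumerate(scales):
        out[i] = np.convolve(signal, mexican_hat(t, s), mode='same')
    return out

# Two synthetic hydrographs: a sharp peak plus a broad seasonal bump; the
# "simulated" run has its peak shifted by a parameter adjustment.
t = np.linspace(0, 365, 365)
obs = np.exp(-((t - 100) / 5) ** 2) + 0.3 * np.exp(-((t - 250) / 30) ** 2)
sim = np.exp(-((t - 103) / 5) ** 2) + 0.3 * np.exp(-((t - 250) / 30) ** 2)

scales = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
W_obs, W_sim = cwt(obs, scales), cwt(sim, scales)
# Cross-wavelet spectrum: large magnitude where both signals carry power at
# the same scale and time; small scales isolate the brief peak event.
xws = W_obs * W_sim
```

Unlike a plain Fourier spectrum, the rows of `W_obs` keep the time axis, so the sharp peak remains localized near day 100 at the small scales.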
NASA Astrophysics Data System (ADS)
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring model parameter values from sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources constrain the complexity of the hydrological model that can be used in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that need to be run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with only approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed over the prior parameter space perform poorly. It is thus impractical to replace the hydrological model directly with a ROM in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate.
We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure for the hydrological problem considered. This work was supported, in part, by the U.S. Dept. of Energy under Contract No. DE-AC02-05CH11231
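A minimal sketch of implicit sampling with a linear map, on a toy 2-D posterior where the MAP and Hessian are known in closed form (the real application computes them for the TOUGH2-based posterior): samples are drawn in the high-probability region around the MAP and reweighted.

```python
import numpy as np

# Negative log-posterior F: a Gaussian part plus a quartic (non-Gaussian) term.
# The MAP is x* = 0 here, and the Hessian of F at the MAP is A, so the
# "linear map" is a Cholesky factor of the inverse Hessian.
A = np.array([[2.0, 0.5], [0.5, 1.0]])

def F(x):
    return 0.5 * x @ A @ x + 0.1 * np.sum(x**4)

x_map = np.zeros(2)
phi = F(x_map)                                # minimum of F
L = np.linalg.cholesky(np.linalg.inv(A))

rng = np.random.default_rng(1)
n = 5000
xi = rng.standard_normal((n, 2))              # reference Gaussian samples
x = x_map + xi @ L.T                          # mapped near the MAP
# Importance weights: w_j ∝ exp(phi - F(x_j) + 0.5 * |xi_j|^2)
logw = np.array([phi - F(xj) for xj in x]) + 0.5 * np.sum(xi**2, axis=1)
w = np.exp(logw - logw.max())
w /= w.sum()
post_mean = w @ x                             # weighted posterior mean
ess = 1.0 / np.sum(w**2)                      # effective sample size
```

A large effective sample size relative to n indicates the proposals sit in the posterior's high-probability region, which is the efficiency gain the abstract describes.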
NASA Astrophysics Data System (ADS)
Tubino, Federica
2018-03-01
The effect of human-structure interaction in the vertical direction for footbridges is studied based on a probabilistic approach. The bridge is modeled as a continuous dynamic system, while pedestrians are schematized as moving single-degree-of-freedom systems with random dynamic properties. The non-dimensional form of the equations of motion allows us to obtain results that can be applied to a very wide set of cases. An extensive Monte Carlo simulation campaign is performed, varying the main non-dimensional parameters identified, and the mean values and coefficients of variation of the damping ratio and of the non-dimensional natural frequency of the coupled system are reported. The results obtained can be interpreted from two different points of view. If the characterization of pedestrians' equivalent dynamic parameters is assumed to be uncertain, as a review of the current literature suggests, then the paper provides a range of possible variations of the coupled system damping ratio and natural frequency as a function of pedestrians' parameters. Assuming that a reliable characterization of pedestrians' dynamic parameters is available (which is not the case at present, but could be in the future), the results presented can be adopted to estimate the damping ratio and natural frequency of the coupled footbridge-pedestrian system for a very wide range of real structures.
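The Monte Carlo procedure can be sketched with a two-degree-of-freedom eigenvalue analysis: a bridge mode with an attached pedestrian oscillator, repeated over random pedestrian properties. The parameter distributions below are hypothetical placeholders, not the calibrated ones discussed in the paper.

```python
import numpy as np

def coupled_mode(mb, wb, zb, mp, wp, zp):
    """Eigen-analysis of a bridge mode coupled to a pedestrian SDOF system.
    Returns (damping ratio, natural frequency) of the bridge-dominated,
    i.e. lightly damped, mode."""
    kb, cb = mb * wb**2, 2 * zb * mb * wb
    kp, cp = mp * wp**2, 2 * zp * mp * wp
    M = np.diag([mb, mp])
    C = np.array([[cb + cp, -cp], [-cp, cp]])
    K = np.array([[kb + kp, -kp], [-kp, kp]])
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    lam = np.linalg.eigvals(A)
    lam = lam[lam.imag > 1e-9]            # one eigenvalue per oscillatory mode
    z = -lam.real / np.abs(lam)
    i = np.argmin(z)                      # bridge mode: the lightly damped one
    return z[i], np.abs(lam[i])

# Monte Carlo over random pedestrian properties (illustrative distributions).
rng = np.random.default_rng(8)
mb, wb, zb = 1.0, 2 * np.pi * 2.0, 0.005
zetas, freqs = [], []
for _ in range(2000):
    mp = rng.uniform(0.02, 0.08) * mb     # pedestrian-to-bridge mass ratio
    wp = rng.normal(wb, 0.15 * wb)        # pedestrian natural frequency
    zp = rng.uniform(0.2, 0.5)            # pedestrians are heavily damped
    z, w = coupled_mode(mb, wb, zb, mp, wp, zp)
    zetas.append(z)
    freqs.append(w)
mean_zeta, cov_zeta = np.mean(zetas), np.std(zetas) / np.mean(zetas)
```

Under these assumed inputs the coupled bridge mode is, on average, more damped than the bare bridge, the qualitative effect the probabilistic campaign quantifies.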
Testing optimum viewing conditions for mammographic image displays.
Waynant, R W; Chakrabarti, K; Kaczmarek, R A; Dagenais, I
1999-05-01
The viewbox luminance and viewing-room light level are important parameters in medical film display, but they have received little attention. Spatial variations and excessive room illumination can mask a real signal or create the false perception of a signal. This presentation examines whether scotopic light sources and dark-adapted radiologists may identify more real disease.
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set; this is done with parameter identifiability analysis. A kinetic model of sucrose accumulation in sugar cane culm tissue developed by Rohwer et al. was taken as a test-case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than the more common observability-based method, which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 of the 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions, the UKF proved more consistent than the estimation algorithms used for comparison. PMID:21989173
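An orthogonal-style identifiability check can be sketched as greedy pivoted Gram-Schmidt selection on a sensitivity matrix: parameters whose sensitivity columns add no independent information are flagged as non-identifiable. This is a generic illustration of the idea, not the paper's exact algorithm.

```python
import numpy as np

def identifiable_params(S, tol=1e-6):
    """Greedy orthogonal (pivoted Gram-Schmidt) selection on the sensitivity
    matrix S (rows: observations, columns: parameters): repeatedly pick the
    parameter with the largest residual sensitivity, then project it out of
    the rest; parameters left with negligible residual are non-identifiable."""
    S = S.astype(float).copy()
    thresh = tol * np.linalg.norm(S, axis=0).max()
    order, remaining = [], list(range(S.shape[1]))
    while remaining:
        norms = np.linalg.norm(S[:, remaining], axis=0)
        if norms.max() <= thresh:
            break
        j = remaining[int(np.argmax(norms))]
        order.append(j)
        remaining.remove(j)
        q = S[:, j] / np.linalg.norm(S[:, j])
        for k in remaining:
            S[:, k] -= (q @ S[:, k]) * q
    return order

# Toy sensitivity matrix: column 2 duplicates column 0 (perfectly correlated
# parameter) and column 3 has near-zero sensitivity; both should be rejected.
rng = np.random.default_rng(7)
base = rng.normal(size=(50, 2))
S = np.column_stack([base[:, 0], base[:, 1], base[:, 0],
                     1e-12 * rng.normal(size=50)])
idx = identifiable_params(S)
```

Only two of the four toy parameters survive: one of the correlated pair and the independent one, mirroring the 10-of-12 outcome described above in spirit.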
Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior
NASA Technical Reports Server (NTRS)
Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.
2017-01-01
A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
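The dual-filter idea can be sketched on a scalar toy system. The paper's filters estimate equalization dynamics, neuromuscular parameters, and time delay; here a single drifting gain `a` stands in for the time-varying parameter, modeled, as in the abstract, as a random walk.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated plant: x_k = a_k * x_{k-1} + w_k, measured with noise; the gain
# a drifts mid-run, mimicking time-varying control behaviour.
n = 400
a_true = np.where(np.arange(n) < 200, 0.9, 0.6)
x = np.zeros(n)
y = np.zeros(n)
for k in range(1, n):
    x[k] = a_true[k] * x[k - 1] + rng.normal(0, 0.1)
    y[k] = x[k] + rng.normal(0, 0.05)

# Dual filter: a state filter for x and a parameter filter for a, with the
# parameter modelled as a random walk (variance Q_a per step).
Q_x, Q_a, R = 0.1**2, 1e-4, 0.05**2
a_hat, Pa = 0.5, 1.0
x_hat, Px = 0.0, 1.0
a_est = np.zeros(n)
for k in range(1, n):
    # Parameter filter: measurement model y_k ≈ a * x_hat (linear in a).
    Pa += Q_a
    H = x_hat
    S = H * Pa * H + Q_x + R
    Kp = Pa * H / S
    a_hat += Kp * (y[k] - a_hat * x_hat)
    Pa *= (1 - Kp * H)
    # State filter, using the current parameter estimate.
    x_pred = a_hat * x_hat
    Px = a_hat**2 * Px + Q_x
    Kx = Px / (Px + R)
    x_hat = x_pred + Kx * (y[k] - x_pred)
    Px *= (1 - Kx)
    a_est[k] = a_hat
```

`Q_a` plays the tuning role the abstract's guidelines address: larger values track parameter changes faster at the cost of noisier estimates.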
NASA Astrophysics Data System (ADS)
van Schie, Guido; Tanner, Christine; Snoeren, Peter; Samulski, Maurice; Leifland, Karin; Wallis, Matthew G.; Karssemeijer, Nico
2011-08-01
To improve cancer detection in mammography, breast examinations usually consist of two views per breast. In order to combine information from both views, corresponding regions in the views need to be matched. In 3D digital breast tomosynthesis (DBT), this may be a difficult and time-consuming task for radiologists, because many slices have to be inspected individually. For multiview computer-aided detection (CAD) systems, matching corresponding regions is an essential step that needs to be automated. In this study, we developed an automatic method to quickly estimate corresponding locations in ipsilateral tomosynthesis views by applying a spatial transformation. First we match a model of a compressed breast to the tomosynthesis view containing a point of interest. Then we estimate the location of the corresponding point in the ipsilateral view by assuming that this model was decompressed, rotated and compressed again. In this study, we use a relatively simple, elastically deformable sphere model to obtain an analytical solution for the transformation in a given DBT case. We investigate three different methods to match the compression model to the data by using automatic segmentation of the pectoral muscle, breast tissue and nipple. For validation, we annotated 208 landmarks in both views of a total of 146 imaged breasts of 109 different patients and applied our method to each location. The best results are obtained by using the centre of gravity of the breast to define the central axis of the model, around which the breast is assumed to rotate between views. Results show a median 3D distance between the actual location and the estimated location of 14.6 mm, a good starting point for a registration method or a feature-based local search method to link suspicious regions in a multiview CAD system. 
Approximately half of the estimated locations are at most one slice away from the actual location, which makes the method useful as a mammographic workstation tool for radiologists to interactively find corresponding locations in ipsilateral tomosynthesis views.
NASA Astrophysics Data System (ADS)
Munbodh, R.; Moseley, D. J.
2014-03-01
We report results of an intensity-based 2D-3D rigid registration framework for patient positioning and monitoring during brain radiotherapy. We evaluated two intensity-based similarity measures, the Pearson Correlation Coefficient (ICC) and Maximum Likelihood with Gaussian noise (MLG) derived from the statistics of transmission images. A useful image frequency band was identified from the bone-to-no-bone ratio. Validation was performed on gold-standard data consisting of 3D kV CBCT scans and 2D kV radiographs of an anthropomorphic head phantom acquired at 23 different poses with parameter variations along six degrees of freedom. At each pose, a single limited field of view kV radiograph was registered to the reference CBCT. The ground truth was determined from markers affixed to the phantom and visible in the CBCT images. The mean (and standard deviation) of the absolute errors in recovering each of the six transformation parameters along the x, y and z axes for ICC were φx: 0.08(0.04)°, φy: 0.10(0.09)°, φz: 0.03(0.03)°, tx: 0.13(0.11) mm, ty: 0.08(0.06) mm and tz: 0.44(0.23) mm. For MLG, the corresponding results were φx: 0.10(0.04)°, φy: 0.10(0.09)°, φz: 0.05(0.07)°, tx: 0.11(0.13) mm, ty: 0.05(0.05) mm and tz: 0.44(0.31) mm. It is feasible to accurately estimate all six transformation parameters from a 3D CBCT of the head and a single 2D kV radiograph within an intensity-based registration framework that incorporates the physics of transmission images.
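The correlation-based similarity measure at the core of such a framework can be sketched as follows. The images and the one-parameter exhaustive search are illustrative only; the paper optimizes six rigid-body parameters against digitally reconstructed radiographs.

```python
import numpy as np

def pearson_cc(a, b):
    """Pearson correlation coefficient between two images (flattened)."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def best_shift(fixed, moving, shifts):
    """Exhaustive one-parameter search: translate `moving` along x and keep
    the shift that maximizes the similarity measure."""
    scores = [pearson_cc(fixed, np.roll(moving, s, axis=1)) for s in shifts]
    return shifts[int(np.argmax(scores))]

rng = np.random.default_rng(3)
fixed = rng.normal(size=(64, 64))
fixed[20:40, 20:40] += 3.0                 # a bright, bone-like structure
moving = np.roll(fixed, -4, axis=1)        # the same image, shifted by 4 px
recovered = best_shift(fixed, moving, list(range(-8, 9)))
```

In the full 2D-3D problem this scalar score is evaluated over all six pose parameters, and a noise-model-based measure such as MLG replaces the plain correlation.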
Dynamical Behavior of Meteor in AN Atmosphere: Theory vs Observations
NASA Astrophysics Data System (ADS)
Gritsevich, Maria
Up to now, the only quantities that directly follow from available meteor observations are the meteor's brightness, its height above sea level, its length along the trajectory, and, as a consequence, its velocity as a function of time. Other important parameters, such as the meteoroid's mass, shape, bulk and grain density, and temperature, remain unknown and must be found based on physical theories and special experiments. In this study I will consider modern methods for evaluating meteoroid parameters from observational data, and some of their applications. In particular, the study takes an approach to modelling the meteoroids' mass and other properties from the aerodynamical point of view, e.g. from the rate of body deceleration in the atmosphere, as opposed to the conventionally used luminosity [1]. An analytical model of the atmospheric entry is calculated for registered meteors using published observational data and evaluating parameters describing drag, ablation and the rotation rate of the meteoroid along the luminous segment of the trajectory. One of the special features of this approach is the possibility of considering a change in body shape during its motion in the atmosphere. The correct mathematical modelling of meteor events is necessary for further studies of the consequences of collisions of cosmic bodies with the Earth [2]. It also helps us to estimate the key parameters of the meteoroids, including deceleration, pre-entry mass, terminal mass, ablation coefficient, effective destruction enthalpy, and heat-transfer coefficient. With this information, one can use models for the dust influx onto Earth to estimate the number of meteors detected by a camera of a given sensitivity. References 1. Gritsevich M. I. Determination of Parameters of Meteor Bodies based on Flight Observational Data // Advances in Space Research, 44, p. 323-334, 2009. 2. Gritsevich M. I., Stulov V. P. and Turchak L. I.
Classification of Consequences for Collisions of Natural Cosmic Bodies with the Earth // Doklady Physics, 54, p. 499-503, 2009.
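The deceleration-based ("dynamic") mass estimate mentioned above follows directly from the drag equation; the sketch below uses hypothetical observation values, not data from the cited papers.

```python
import numpy as np

def mass_from_deceleration(v, dvdt, rho_air, area, cd=1.0):
    """Dynamic mass estimate from the drag equation
    m * dv/dt = -0.5 * cd * rho_air * A * v**2, solved for m.
    v, dvdt in SI units; rho_air is the local atmospheric density."""
    return 0.5 * cd * rho_air * area * v**2 / abs(dvdt)

# Hypothetical fireball observation: velocity and deceleration read off the
# observed trajectory at one instant, with an assumed cross-section.
m = mass_from_deceleration(v=15000.0, dvdt=-3000.0, rho_air=1e-4, area=0.01)
```

This is the aerodynamic alternative to photometric (luminosity-based) mass estimation: it needs only the kinematics of the trajectory, not an assumed luminous efficiency.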
Ultrasonic Evaluation of Fatigue Damage
NASA Astrophysics Data System (ADS)
Bayer, P.; Singher, L.; Notea, A.
2004-02-01
Although most engineers and designers are aware of fatigue, many severe breakdowns of industrial plant and machinery still occur because of it. Indeed, it has been estimated that fatigue causes at least 80% of the failures in modern engineering components. From an operational point of view, detecting fatigue damage, preferably at a very early stage, is critically important for preventing catastrophic equipment failure and the associated losses. This paper describes an investigation of ultrasonic waves as a potential tool for early detection of fatigue damage. The parameters investigated were the ultrasonic wave velocities (longitudinal and transverse waves) and the attenuation coefficient, measured before fatigue damage and after progressive stages of fatigue. Although the measurement uncertainties were comparatively small, the feasibility of using ultrasonic wave velocity as a fatigue monitor was only weakly supported under the experimental conditions. However, careful measurements of the ultrasonic attenuation parameter demonstrated its potential to provide an early assessment of damage during fatigue.
Potential of hydraulically induced fractures to communicate with existing wellbores
NASA Astrophysics Data System (ADS)
Montague, James A.; Pinder, George F.
2015-10-01
The probability that new hydraulically fractured wells drilled within the area of New York underlain by the Marcellus Shale will intersect an existing wellbore is calculated using a statistical model, which incorporates the depth of a new fracturing well, the vertical growth of induced fractures, and the depths and locations of existing nearby wells. The model first calculates the probability of encountering an existing well in plan view and combines this with the probability of an existing well being at sufficient depth to intersect the fractured region. Average probability estimates for the entire region of New York underlain by the Marcellus Shale range from 0.00% to 3.45%, depending on the input parameters used. The parameter contributing most to the calculated probability is the density of nearby wells, meaning that due diligence by oil and gas companies in identifying all nearby wells during construction will have the greatest effect in reducing the probability of interwellbore communication.
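The probability combination described above can be sketched as follows. The spatial Poisson model for plan-view hits and all input numbers are assumptions made here for illustration, not necessarily the paper's fitted model.

```python
import math

def p_plan_view(well_density_per_km2, frac_area_km2):
    """Probability that the fracture footprint contains at least one existing
    well, modelling well locations as a spatial Poisson process (an
    illustrative assumption)."""
    return 1.0 - math.exp(-well_density_per_km2 * frac_area_km2)

def p_intersect(well_density_per_km2, frac_area_km2, p_depth):
    """Combine the plan-view hit probability with the probability that the
    nearby well reaches sufficient depth, assumed independent."""
    return p_plan_view(well_density_per_km2, frac_area_km2) * p_depth

# Hypothetical inputs: 0.5 wells/km^2, a 0.2 km^2 fracture footprint, and a
# 30% chance that a nearby well is deep enough to reach the fractured region.
p = p_intersect(0.5, 0.2, 0.3)
```

The structure makes the abstract's conclusion visible: the plan-view term, driven by nearby well density, scales the final probability directly.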
Hill, Mary Catherine
1992-01-01
This report documents a new version of the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model (MODFLOW) which, with the new Parameter-Estimation Package that also is documented in this report, can be used to estimate parameters by nonlinear regression. The new version of MODFLOW is called MODFLOWP (pronounced MOD-FLOW-P), and functions nearly identically to MODFLOW when the Parameter-Estimation Package is not used. Parameters are estimated by minimizing a weighted least-squares objective function by the modified Gauss-Newton method or by a conjugate-direction method. Parameters used to calculate the following MODFLOW model inputs can be estimated: Transmissivity and storage coefficient of confined layers; hydraulic conductivity and specific yield of unconfined layers; vertical leakance; vertical anisotropy (used to calculate vertical leakance); horizontal anisotropy; hydraulic conductance of the River, Streamflow-Routing, General-Head Boundary, and Drain Packages; areal recharge rates; maximum evapotranspiration; pumpage rates; and the hydraulic head at constant-head boundaries. Any spatial variation in parameters can be defined by the user. Data used to estimate parameters can include existing independent estimates of parameter values, observed hydraulic heads or temporal changes in hydraulic heads, and observed gains and losses along head-dependent boundaries (such as streams). Model output includes statistics for analyzing the parameter estimates and the model; these statistics can be used to quantify the reliability of the resulting model, to suggest changes in model construction, and to compare results of models constructed in different ways.
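The weighted least-squares minimization by modified Gauss-Newton can be sketched generically; the toy two-parameter "head" model below merely stands in for MODFLOWP's real model inputs, and the damping term is a common stabilization choice, not necessarily the report's exact modification.

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, weights, n_iter=20, damp=1e-6):
    """Minimize sum_i w_i * r_i(theta)^2 by a (modified) Gauss-Newton
    iteration; `damp` is a small Marquardt-style term that keeps the
    normal equations well conditioned."""
    theta = np.asarray(theta0, float)
    W = np.diag(weights)
    for _ in range(n_iter):
        r = residual(theta)
        J = jacobian(theta)
        A = J.T @ W @ J + damp * np.eye(len(theta))
        theta = theta - np.linalg.solve(A, J.T @ W @ r)
    return theta

# Toy head model h(x) = T1*exp(-x) + T2*x, fitted to noisy observations.
x = np.linspace(0, 2, 15)
theta_true = np.array([3.0, 1.5])
rng = np.random.default_rng(4)
obs = theta_true[0] * np.exp(-x) + theta_true[1] * x + rng.normal(0, 0.01, x.size)
res = lambda th: th[0] * np.exp(-x) + th[1] * x - obs
jac = lambda th: np.column_stack([np.exp(-x), x])
theta_hat = gauss_newton(res, jac, [1.0, 1.0], np.ones(x.size))
```

The weights let independent prior estimates and observations of differing reliability enter the same objective, which is how the package mixes heads, head changes, and boundary flows.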
Nonlinear adaptive control system design with asymptotically stable parameter estimation error
NASA Astrophysics Data System (ADS)
Mishkov, Rumen; Darmonski, Stanislav
2018-01-01
The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown-parameter estimation without persistent excitation and the capability to directly control the estimates' transient response time. The proposed method modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown-parameter estimation. It relies on the parametric identifiability system property introduced. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied in nonlinear adaptive speed tracking vector control of a three-phase induction motor.
Data-Adaptive Bias-Reduced Doubly Robust Estimation.
Vermeulen, Karel; Vansteelandt, Stijn
2016-05-01
Doubly robust estimators have now been proposed for a variety of target parameters in the causal inference and missing data literature. These consistently estimate the parameter of interest under a semiparametric model when one of two nuisance working models is correctly specified, regardless of which. The recently proposed bias-reduced doubly robust estimation procedure aims to partially retain this robustness in more realistic settings where both working models are misspecified. These so-called bias-reduced doubly robust estimators make use of special (finite-dimensional) nuisance parameter estimators that are designed to locally minimize the squared asymptotic bias of the doubly robust estimator in certain directions of these finite-dimensional nuisance parameters under misspecification of both parametric working models. In this article, we extend this idea to incorporate the use of data-adaptive estimators (infinite-dimensional nuisance parameters), by exploiting the bias reduction estimation principle in the direction of only one nuisance parameter. We additionally provide an asymptotic linearity theorem which gives the influence function of the proposed doubly robust estimator under correct specification of a parametric nuisance working model for the missingness mechanism/propensity score but a possibly misspecified (finite- or infinite-dimensional) outcome working model. Simulation studies confirm the desirable finite-sample performance of the proposed estimators relative to a variety of other doubly robust estimators.
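The double-robustness property is easy to demonstrate numerically. The simulation below is a generic augmented IPW (AIPW) example for E[Y(1)], shown to illustrate the property the abstract builds on; it is not the bias-reduced estimator the paper develops.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
x = rng.standard_normal(n)
e_true = 1 / (1 + np.exp(-x))                     # true propensity P(A=1|X)
a = (rng.random(n) < e_true).astype(float)
y = 1 + 2 * x + 1.5 * a + rng.normal(0, 0.5, n)   # so the true E[Y(1)] is 2.5

def aipw(y, a, e, m1):
    """Augmented IPW (doubly robust) estimate of E[Y(1)]: consistent when
    either the propensity model e or the outcome model m1 is correct."""
    return np.mean(a * (y - m1) / e + m1)

# Correct propensity model, badly misspecified outcome model (m1 = 0):
est1 = aipw(y, a, e_true, np.zeros(n))
# Misspecified propensity (constant 0.5), correct outcome model:
est2 = aipw(y, a, np.full(n, 0.5), 1 + 2 * x + 1.5)
```

Both estimates land near the truth even though one working model is wrong in each case; the bias-reduced and data-adaptive variants discussed above target the realistic case where both models are (mildly) misspecified.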
NASA Astrophysics Data System (ADS)
Heimbach, P.; Bugnion, V.
2008-12-01
We present a new and original approach to understanding the sensitivity of the Greenland ice sheet to key model parameters and environmental conditions. At the heart of this approach is the use of an adjoint ice sheet model. MacAyeal (1992) introduced adjoints in the context of applying control theory to estimate basal sliding parameters (basal shear stress, basal friction) of an ice stream model which minimize a least-squares model vs. observation misfit. Since then, this method has become widespread to fit ice stream models to the increasing number and diversity of satellite observations, and to estimate uncertain model parameters. However, no attempt has been made to extend this method to comprehensive ice sheet models. Here, we present a first step toward moving beyond limiting the use of control theory to ice stream models. We have generated an adjoint of the three-dimensional thermo-mechanical ice sheet model SICOPOLIS of Greve (1997). The adjoint was generated using the automatic differentiation (AD) tool TAF. TAF generates exact source code representing the tangent linear and adjoint model of the parent model provided. Model sensitivities are given by the partial derivatives of a scalar-valued model diagnostic or "cost function" with respect to the controls, and can be efficiently calculated via the adjoint. An effort to generate an efficient adjoint with the newly developed open-source AD tool OpenAD is also under way. To gain insight into the adjoint solutions, we explore various cost functions, such as local and domain-integrated ice temperature, total ice volume or the velocity of ice at the margins of the ice sheet. Elements of our control space include initial cold ice temperatures, surface mass balance, as well as parameters such as appear in Glen's flow law, or in the surface degree-day or basal sliding parameterizations. 
Sensitivity maps provide a comprehensive view and allow a quantification of where, and to which variables, the ice sheet model is most sensitive. The model used in the present study includes simplifications in the model physics and parameterizations that rely on uncertain empirical constants, and it is unable to capture fast ice streams. Nevertheless, as a proof of concept, this method can readily be extended to incorporate higher-order physics or parameterizations (or be applied to other models). It also opens the door to ice sheet state estimation: using the model's physics jointly with field and satellite observations to produce a best estimate of the state of the ice sheets.
da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G
2016-07-08
Process modeling can lead to advantages such as improved process control, reduced process costs and improved product quality. This work proposes a solid-state fermentation distributed-parameter model composed of seven differential equations with seventeen parameters to represent the process. Parameter estimation with a parameter identifiability analysis (PIA) is performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters under different assumptions. The results show that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were non-identifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful to the estimation procedure, since it may reduce the number of parameters to be evaluated; it also improved the model results, showing it to be an important step. © 2016 American Institute of Chemical Engineers. Biotechnol. Prog., 32:905-917, 2016.
Estimation of the Parameters in a Two-State System Coupled to a Squeezed Bath
NASA Astrophysics Data System (ADS)
Hu, Yao-Hua; Yang, Hai-Feng; Tan, Yong-Gang; Tao, Ya-Ping
2018-04-01
Estimation of the phase and weight parameters of a two-state system in a squeezed bath is investigated by calculating the quantum Fisher information. The results show that, both for the phase estimation and for the weight estimation, the quantum Fisher information always decays with time and changes periodically with the phases. The estimation precision can be enhanced by choosing proper values of the phases and the squeezing parameter. These results can serve as a reference for the practical application of parameter estimation in a squeezed bath.
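For a single qubit the quantum Fisher information has a closed form in terms of the Bloch vector r(θ), F_Q = |∂θ r|² + (r·∂θ r)²/(1−|r|²). The sketch below evaluates it for a pure equatorial state and for the same state shrunk toward the Bloch-sphere center, as bath-induced decay would do; the states are illustrative, not the paper's specific two-state system.

```python
import numpy as np

def qfi_bloch(r, dr, eps=1e-12):
    """Quantum Fisher information for a qubit with Bloch vector r(theta):
    F_Q = |dr|^2 + (r . dr)^2 / (1 - |r|^2) for mixed states, reducing to
    |dr|^2 when the state is pure (|r| = 1)."""
    r, dr = np.asarray(r, float), np.asarray(dr, float)
    r2 = r @ r
    if r2 >= 1 - eps:                     # pure state
        return dr @ dr
    return dr @ dr + (r @ dr) ** 2 / (1 - r2)

# Pure equatorial state, phase imprinted by rotation about z:
# r = (cos t, sin t, 0), dr/dt = (-sin t, cos t, 0)  ->  F_Q = 1.
t = 0.7
fq = qfi_bloch([np.cos(t), np.sin(t), 0], [-np.sin(t), np.cos(t), 0])
# The same state with the Bloch vector shrunk to half length by decay:
fq_damped = qfi_bloch([0.5 * np.cos(t), 0.5 * np.sin(t), 0],
                      [-0.5 * np.sin(t), 0.5 * np.cos(t), 0])
```

Since the attainable precision is bounded by the quantum Cramér-Rao bound, Var(θ̂) ≥ 1/(ν F_Q), the drop of F_Q under decay is exactly the loss of estimation precision the abstract describes.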
Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.
Ette, E I; Howie, C A; Kelman, A W; Whiting, B
1995-05-01
A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimating population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide a basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three- and four-time-point designs was evaluated in terms of the percent prediction error, design number, individual and joint confidence-interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while inter-animal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time points for the three- and four-time-point designs, respectively, was not critical to the efficiency of the overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
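A minimal sketch of such a simulation: a one-compartment IV bolus model with log-normal inter-animal variability, destructive one-sample-per-animal sampling on a three-time-point design, and a naive pooled log-linear fit. The study used proper population estimation methods; all numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

# One-compartment IV bolus: C(t) = (Dose/V) * exp(-(CL/V) * t).
dose, cl_pop, v_pop = 100.0, 2.0, 10.0
omega, sigma = 0.2, 0.1         # inter-animal (log-normal) and residual error
times = np.array([0.25, 1.0, 4.0])   # a three-time-point design

# Destructive ("quantic") sampling: each animal yields one concentration.
obs_t, obs_c = [], []
for t in times:
    for _ in range(100):
        cl = cl_pop * np.exp(rng.normal(0, omega))
        v = v_pop * np.exp(rng.normal(0, omega))
        c = dose / v * np.exp(-cl / v * t) * (1 + rng.normal(0, sigma))
        obs_t.append(t)
        obs_c.append(c)

# Naive pooled estimate of the typical parameters via log-linear regression.
slope, intercept = np.polyfit(obs_t, np.log(obs_c), 1)
v_est = dose / np.exp(intercept)
cl_est = -slope * v_est
```

Repeating this over many designs (moving the third time point, adding a fourth) and scoring prediction error and interval coverage reproduces the kind of comparison the abstract reports.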
An Evaluation of Hierarchical Bayes Estimation for the Two- Parameter Logistic Model.
ERIC Educational Resources Information Center
Kim, Seock-Ho
Hierarchical Bayes procedures for the two-parameter logistic item response model were compared for estimating item parameters. Simulated data sets were analyzed using two different Bayes estimation procedures, the two-stage hierarchical Bayes estimation (HB2) and the marginal Bayesian with known hyperparameters (MB), and marginal maximum…
Estimation Methods for One-Parameter Testlet Models
ERIC Educational Resources Information Center
Jiao, Hong; Wang, Shudong; He, Wei
2013-01-01
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.
Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B
2005-06-01
This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment-model-dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and Visual Basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained with those from a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of the NONMEM estimation.
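For the one-compartment IV-bolus case the conversion is available in closed form, which conveys the idea behind the spreadsheet (the dose and non-compartmental values below are hypothetical, and this is not the BA/Solver implementation itself):

```python
import math

def back_analysis_1cmt(dose, auc, t_half):
    """Convert non-compartmental variables (AUC, terminal half-life)
    to one-compartment parameters (CL, V, k)."""
    cl = dose / auc            # clearance from dose and total exposure
    k = math.log(2) / t_half   # elimination rate constant
    v = cl / k                 # volume of distribution
    return cl, v, k

# Hypothetical values: dose 100 mg, AUC 20 mg*h/L, half-life 2.772 h
cl, v, k = back_analysis_1cmt(100.0, 20.0, 2.772)
```

The two-compartment case has no such simple inversion, which is where an iterative solver (as in the spreadsheet's Solver-based approach) becomes necessary.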
SBML-PET-MPI: a parallel parameter estimation tool for Systems Biology Markup Language based models.
Zi, Zhike
2011-04-01
Parameter estimation is crucial for the modeling and dynamic analysis of biological systems. However, implementing parameter estimation is time consuming and computationally demanding. Here, we introduced a parallel parameter estimation tool for Systems Biology Markup Language (SBML)-based models (SBML-PET-MPI). SBML-PET-MPI allows the user to perform parameter estimation and parameter uncertainty analysis by collectively fitting multiple experimental datasets. The tool is developed and parallelized using the message passing interface (MPI) protocol, which provides good scalability with the number of processors. SBML-PET-MPI is freely available for non-commercial use at http://www.bioss.uni-freiburg.de/cms/sbml-pet-mpi.html or http://sites.google.com/site/sbmlpetmpi/.
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
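The calibration-error component being targeted can be sketched with a parametric bootstrap over item parameters: perturb the item parameters by their calibration error, re-estimate ability, and take the SD of the replicates. The 2PL items, their assumed calibration SE, and the response pattern below are invented for illustration:

```python
import math
import random

def p2pl(theta, a, b):
    # Two-parameter logistic item response function
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mle_theta(resp, items):
    # Grid-search maximum-likelihood ability estimate
    grid = [-4.0 + 0.02 * i for i in range(401)]
    def loglik(th):
        return sum(u * math.log(p2pl(th, a, b))
                   + (1 - u) * math.log(1.0 - p2pl(th, a, b))
                   for u, (a, b) in zip(resp, items))
    return max(grid, key=loglik)

items = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.0), (0.9, -0.5)]
resp = [1, 1, 0, 0, 1]
se_item = 0.15   # assumed calibration SE of each item parameter

rng = random.Random(7)
thetas = []
for _ in range(200):
    pert = [(a + rng.gauss(0.0, se_item), b + rng.gauss(0.0, se_item))
            for a, b in items]
    thetas.append(mle_theta(resp, pert))
mean_t = sum(thetas) / len(thetas)
boot_se = (sum((t - mean_t) ** 2 for t in thetas) / (len(thetas) - 1)) ** 0.5
```

In a full correction, this calibration-driven variability would be combined with the usual sampling variability of the ML ability estimate.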
NASA Technical Reports Server (NTRS)
Suit, W. T.; Cannaday, R. L.
1979-01-01
The longitudinal and lateral stability and control parameters for a high-wing general aviation airplane are examined. Estimates using flight data obtained at various flight conditions within the normal range of the aircraft are presented. The estimation techniques, an output error technique (maximum likelihood) and an equation error technique (linear regression), are presented. The longitudinal static parameters are estimated from climbing, descending, and quasi-steady-state flight data. The lateral excitations involve a combination of rudder and ailerons. The sensitivity of the aircraft modes of motion to variations in the parameter estimates is discussed.
NASA Technical Reports Server (NTRS)
Klein, V.
1979-01-01
Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.
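The equation-error technique reduces to linear regression of a measured state derivative on measured states and controls. A toy sketch for a pitching-moment equation, with invented derivative values and synthetic "flight data":

```python
import random

rng = random.Random(5)
# q_dot = M_alpha*alpha + M_q*q + M_de*delta_e  (hypothetical true values)
M_true = (-8.0, -2.5, -12.0)
rows, qdot = [], []
for _ in range(300):
    x = (rng.gauss(0, 0.05), rng.gauss(0, 0.10), rng.gauss(0, 0.05))
    rows.append(x)
    qdot.append(sum(m * xi for m, xi in zip(M_true, x)) + rng.gauss(0, 0.02))

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Least squares via the 3x3 normal equations (A^T A) m = A^T qdot
ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
aty = [sum(r[i] * y for r, y in zip(rows, qdot)) for i in range(3)]
d = det3(ata)
M_hat = []
for i in range(3):               # Cramer's rule: replace column i
    mi = [row[:] for row in ata]
    for k in range(3):
        mi[k][i] = aty[k]
    M_hat.append(det3(mi) / d)
```

The output-error (maximum likelihood) method, by contrast, integrates the model forward and iteratively adjusts the parameters to match the measured outputs, which is why the two methods agree only when the measured states and derivatives are sufficiently accurate.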
Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-05-29
Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting "good" values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final globally uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has superior performance: faster convergence and an enhanced signal-to-noise ratio.
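The selection-then-average step can be sketched as follows; the grid-point estimates and spreads are made up, and the kept fraction is an assumed tuning choice:

```python
def adaptive_spatial_average(values, spreads, frac=0.5):
    """Average the 'good' (lowest ensemble spread) posterior parameter
    values to form one globally uniform estimate."""
    ranked = sorted(zip(spreads, values))          # sort by spread
    keep = ranked[:max(1, int(len(ranked) * frac))]
    return sum(v for _, v in keep) / len(keep)

# Hypothetical grid points: spatially varying posterior values and spreads;
# the fourth point has a large spread and is excluded from the average
vals = [1.00, 1.10, 0.90, 3.00, 1.05]
sprd = [0.10, 0.20, 0.15, 2.00, 0.12]
est = adaptive_spatial_average(vals, sprd, frac=0.6)
```

Screening by spread keeps the outlying grid-point estimate (3.00) from dragging the global parameter away from the well-constrained values.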
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
Estimating Soil Hydraulic Parameters using Gradient Based Approach
NASA Astrophysics Data System (ADS)
Rai, P. K.; Tripathi, S.
2017-12-01
The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations when the forward solution is obtained numerically. Gaussian-process-based approaches such as Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require explicitly setting up initial and boundary conditions, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
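Stripped of the Gaussian-process machinery, gradient matching can be illustrated on a toy ODE: differentiate the (here noise-free, synthetic) data directly and regress the slopes on the model right-hand side, with no forward solve and no initial condition:

```python
import math

# Noise-free synthetic observations of dy/dt = -theta * y with theta = 0.5
ts = [0.1 * i for i in range(50)]
ys = [math.exp(-0.5 * t) for t in ts]

# Gradient matching: approximate dy/dt from the data itself (central
# differences stand in for the GP interpolant), then regress those slopes
# on the model right-hand side -theta * y.
slopes = [(ys[i + 1] - ys[i - 1]) / (ts[i + 1] - ts[i - 1])
          for i in range(1, len(ts) - 1)]
inner = ys[1:-1]
theta_hat = (-sum(s * y for s, y in zip(slopes, inner))
             / sum(y * y for y in inner))
```

In GPODE/AGM the interpolation and differentiation are done by a Gaussian process, which additionally propagates the uncertainty of the fitted derivatives into the parameter estimates; the central-difference version above conveys only the structural idea.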
A variational approach to parameter estimation in ordinary differential equations.
Kaschek, Daniel; Timmer, Jens
2012-08-14
Ordinary differential equations are widely-used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, both not being constrained by the reaction network itself. Our method is based on variational calculus which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined allowing for future transfer of methods between the two fields.
Helsel, Dennis R.; Gilliom, Robert J.
1986-01-01
Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.
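A compact sketch of log-probability regression (regression on order statistics) for a singly censored sample; the data and the Blom-type plotting positions are illustrative choices, not the exact procedure of the study:

```python
import math
from statistics import NormalDist

def ros_estimates(detects, n_censored):
    # Log-probability regression (ROS): regress log(detected values) on
    # normal scores, then impute the censored tail from the fitted line.
    nd = NormalDist()
    n = len(detects) + n_censored
    detects = sorted(detects)
    # Blom-type plotting positions; censored values occupy the lowest ranks
    pos = lambda rank: (rank - 0.375) / (n + 0.25)
    zs = [nd.inv_cdf(pos(n_censored + i + 1)) for i in range(len(detects))]
    ys = [math.log(d) for d in detects]
    mz = sum(zs) / len(zs)
    my = sum(ys) / len(ys)
    b = (sum((z - mz) * (y - my) for z, y in zip(zs, ys))
         / sum((z - mz) ** 2 for z in zs))
    a = my - b * mz
    imputed = [math.exp(a + b * nd.inv_cdf(pos(r + 1)))
               for r in range(n_censored)]
    full = imputed + detects
    mean = sum(full) / n
    sd = (sum((x - mean) ** 2 for x in full) / (n - 1)) ** 0.5
    return mean, sd

# Five detected values; three observations censored below the detection limit
mean_hat, sd_hat = ros_estimates([2.0, 3.0, 5.0, 8.0, 13.0], n_censored=3)
```

The imputed values fall below the smallest detected value, so the ROS mean is pulled below the mean of the detections alone, which is the bias the method is designed to remove.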
Parameter estimation of qubit states with unknown phase parameter
NASA Astrophysics Data System (ADS)
Suzuki, Jun
2015-02-01
We discuss a problem of parameter estimation for a quantum two-level (qubit) system in the presence of an unknown phase parameter. We analyze trade-off relations for mean square errors (MSEs) when estimating relevant parameters with separable measurements, based on known precision bounds: the symmetric logarithmic derivative (SLD) Cramér-Rao (CR) bound and the Hayashi-Gill-Massar (HGM) bound. We investigate the optimal measurement that attains the HGM bound and discuss its properties. We show that the HGM bound for the relevant parameters can be attained asymptotically by using some fraction of the given n quantum states to estimate the phase parameter. We also discuss the Holevo bound, which can be attained asymptotically by a collective measurement.
Earth-viewing satellite perspectives on the Chelyabinsk meteor event.
Miller, Steven D; Straka, William C; Bachmeier, A Scott; Schmit, Timothy J; Partain, Philip T; Noh, Yoo-Jeong
2013-11-05
Large meteors (or superbolides [Ceplecha Z, et al. (1999) Meteoroids 1998:37-54]), although rare in recorded history, give sobering testimony to civilization's inherent vulnerability. A not-so-subtle reminder came on the morning of February 15, 2013, when a large meteoroid hurtled into the Earth's atmosphere, forming a superbolide near the city of Chelyabinsk, Russia, ∼1,500 km east of Moscow, Russia [Ivanova MA, et al. (2013) Abstracts of the 76th Annual Meeting of the Meteoritical Society, 5366]. The object exploded in the stratosphere, and the ensuing shock wave blasted the city of Chelyabinsk, damaging structures and injuring hundreds. Details of trajectory are important for determining its specific source, the likelihood of future events, and potential mitigation measures. Earth-viewing environmental satellites can assist in these assessments. Here we examine satellite observations of the Chelyabinsk superbolide debris trail, collected within minutes of its entry. Estimates of trajectory are derived from differential views of the significantly parallax-displaced [e.g., Hasler AF (1981) Bull Am Meteor Soc 52:194-212] debris trail. The 282.7 ± 2.3° azimuth of trajectory, 18.5 ± 3.8° slope to the horizontal, and 17.7 ± 0.5 km/s velocity derived from these satellites agree well with parameters inferred from the wealth of surface-based photographs and amateur videos. More importantly, the results demonstrate the general ability of Earth-viewing satellites to provide valuable insight on trajectory reconstruction in the more likely scenario of sparse or nonexistent surface observations.
Control parameters of the martian dune field positions at planetary scale: tests by the MCD
NASA Astrophysics Data System (ADS)
allemand, pascal
2016-04-01
The surface of Mars is occupied by more than 500 dune fields, mainly located inside impact craters of the southern hemisphere and near the north polar cap. The questions of the activity of martian dunes and of the localization of the martian dune fields are not completely solved. It has been demonstrated recently, by image observation and image correlation, that some of these dune fields are clearly active; the sand flux of one of them has even been estimated. But there is no global view of the degree of activity of each of the dune fields. The topography of the impact craters in which dune fields are localized is an important factor in their position, but there is no consensus on the effect of global atmospheric circulation on dune field localization. These two questions are addressed using the results of the Mars Climate Database 5.2 (MCD) (Millour, 2015; Forget et al., 1999). The wind fields of the MCD were first validated against observations made on active dune fields. Using a classical transport law, the Drift Potential (DP) and the Relative Drift Potential (RDP) have been computed for each dune field. A good correlation exists between the position of dune fields and specific values of these two parameters. The activity of each dune field is estimated from these parameters and tested on some examples by image observations. Finally, a map of sand flow has been computed at the scale of the planet. This map shows that sand and dust are trapped in specific regions, which correspond to the areas of dune field concentration.
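The DP/RDP computation follows a Fryberger-style transport law; the threshold speed and the two-bin wind record below are invented for illustration:

```python
import math

def drift_potential(winds, ut=12.0):
    """Fryberger-style drift potential from a wind record.
    winds: list of (speed_knots, direction_deg, frequency_fraction);
    ut is the assumed threshold speed for sand transport."""
    dp, rx, ry = 0.0, 0.0, 0.0
    for u, d, t in winds:
        if u <= ut:
            continue                      # below-threshold winds move no sand
        q = u * u * (u - ut) * t          # transport-law weighting
        dp += q
        rad = math.radians(d)
        rx += q * math.sin(rad)           # vector (resultant) components
        ry += q * math.cos(rad)
    rdp = math.hypot(rx, ry)
    return dp, rdp, (rdp / dp if dp else 0.0)

# Near-unidirectional winds -> RDP/DP close to 1 (strong net transport)
dp, rdp, ratio = drift_potential([(20, 90, 0.3), (18, 95, 0.2)])
```

DP measures total sand-moving power, while the RDP/DP ratio indicates directional variability: values near 1 imply net transport in one direction, values near 0 imply opposing winds that rework dunes in place.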
Star Classification for the Kepler Input Catalog: From Images to Stellar Parameters
NASA Astrophysics Data System (ADS)
Brown, T. M.; Everett, M.; Latham, D. W.; Monet, D. G.
2005-12-01
The Stellar Classification Project is a ground-based effort to screen stars within the Kepler field of view, to allow removal of stars with large radii (and small potential transit signals) from the target list. Important components of this process are: (1) An automated photometry pipeline estimates observed magnitudes both for target stars and for stars in several calibration fields. (2) Data from calibration fields yield extinction-corrected AB magnitudes (with g, r, i, z magnitudes transformed to the SDSS system). We merge these with 2MASS J, H, K magnitudes. (3) The Basel grid of stellar atmosphere models yields synthetic colors, which are transformed to our photometric system by calibration against observations of stars in M67. (4) We combine the r magnitude and stellar galactic latitude with a simple model of interstellar extinction to derive a relation connecting {Teff, luminosity} to distance and reddening. For models satisfying this relation, we compute a chi-squared statistic describing the match between each model and the observed colors. (5) We create a merit function based on the chi-squared statistic, and on a Bayesian prior probability distribution which gives probability as a function of Teff, luminosity, log(Z), and height above the galactic plane. The stellar parameters ascribed to a star are those of the model that maximizes this merit function. (6) Parameter estimates are merged with positional and other information from extant catalogs to yield the Kepler Input Catalog, from which targets will be chosen. Testing and validation of this procedure are underway, with encouraging initial results.
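Steps (4)-(5) amount to maximizing a chi-squared-plus-prior merit function over a model grid. The toy models, colors, and prior weight below are invented (the real pipeline uses the Basel grid and a full Bayesian prior over Teff, luminosity, log(Z), and galactic height):

```python
# Hypothetical grid of stellar models: effective temperature, luminosity
# (solar units), and synthetic colors (all numbers invented)
models = [
    {"teff": 5800, "lum": 1.0, "colors": [0.65, 0.40]},
    {"teff": 4500, "lum": 0.1, "colors": [1.10, 0.75]},   # cool dwarf
    {"teff": 4800, "lum": 60.0, "colors": [1.05, 0.70]},  # giant: large radius
]
observed = [1.06, 0.71]   # observed colors
sigma = [0.05, 0.05]      # photometric uncertainties

def merit(model):
    # Chi-squared match of observed to synthetic colors...
    chi2 = sum(((o - c) / s) ** 2
               for o, c, s in zip(observed, model["colors"], sigma))
    # ...plus a toy Bayesian log-prior: dwarfs far outnumber giants
    log_prior = -2.0 if model["lum"] > 10.0 else 0.0
    return -0.5 * chi2 + log_prior

best = max(models, key=merit)
```

Even though the giant's colors match the observations slightly better, the prior demotes it and the dwarf wins, illustrating why the prior matters for screening out large-radius stars with small potential transit signals.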
NASA Astrophysics Data System (ADS)
Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.
2012-12-01
The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with core MAD software. This presentation gives an example of adapting the MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed and it is expected that the open source nature of the project will engender the development of additional model drivers by 3rd party scientists.
Image informative maps for component-wise estimating parameters of signal-dependent noise
NASA Astrophysics Data System (ADS)
Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem
2013-01-01
We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results obtained rest on the assumption that the image texture and noise parameters estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for the purpose of describing the signal-dependent noise variance dependence on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.
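The variance-versus-intensity regression at the heart of such estimators can be sketched on synthetic homogeneous patches; the linear noise model and its coefficients below are assumptions for illustration (the paper uses a polynomial model with an fBm texture model and ML estimation):

```python
import random

rng = random.Random(3)
a_true, b_true = 4.0, 0.5   # assumed noise model: var(I) = a + b * I

# Simulate homogeneous patches at several intensities and record the
# local sample mean and variance of each patch
patches = []
for intensity in [10, 40, 80, 120, 160, 200]:
    pix = [intensity + rng.gauss(0.0, (a_true + b_true * intensity) ** 0.5)
           for _ in range(2000)]
    m = sum(pix) / len(pix)
    v = sum((p - m) ** 2 for p in pix) / (len(pix) - 1)
    patches.append((m, v))

# Least-squares fit of the (here linear) variance-vs-intensity model
mx = sum(m for m, _ in patches) / len(patches)
mv = sum(v for _, v in patches) / len(patches)
b_hat = (sum((m - mx) * (v - mv) for m, v in patches)
         / sum((m - mx) ** 2 for m, _ in patches))
a_hat = mv - b_hat * mx
```

The Fisher-information argument in the abstract says, in effect, that some intensities constrain (a, b) much more than others, so patches should be drawn preferentially from the most informative parts of the intensity range.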
Implementing an Automated Antenna Measurement System
NASA Technical Reports Server (NTRS)
Valerio, Matthew D.; Romanofsky, Robert R.; VanKeuls, Fred W.
2003-01-01
We developed an automated measurement system using a PC running a LabView application, a Velmex BiSlide X-Y positioner, and a HP85l0C network analyzer. The system provides high positioning accuracy and requires no user supervision. After the user inputs the necessary parameters into the LabView application, LabView controls the motor positioning and performs the data acquisition. Current parameters and measured data are shown on the PC display in two 3-D graphs and updated after every data point is collected. The final output is a formatted data file for later processing.
Modeling and parameter identification of impulse response matrix of mechanical systems
NASA Astrophysics Data System (ADS)
Bordatchev, Evgueni V.
1998-12-01
A method for studying the problem of modeling, identification and analysis of mechanical system dynamic characteristics, in view of the impulse response matrix, for the purpose of adaptive control is developed here. Two types of impulse response matrices are considered: (i) on displacement, which describes the space-coupled relationship between the vectors of the force and the simulated displacement, and (ii) on acceleration, which describes the space-coupled relationship between the vectors of the force and the measured acceleration. The idea of identification consists of: (a) practical acquisition of the impulse response matrix on acceleration by the 'impact-response' technique; (b) modeling and parameter estimation of each impulse response function on acceleration through the fundamental representation of the impulse response function on displacement as a sum of damped sine curves, applying linear and non-linear least-squares methods; (c) simulating the impulse, which provides the additional possibility of calculating masses and damper and spring constants. The damped natural frequencies are used as a priori information and are found through standard FFT analysis. The problem of double numerical integration is avoided by differentiating twice the fundamental dynamic model of a mechanical system as a linear combination of mass-damper-spring subsystems. The identified impulse response matrix on displacement represents the dynamic properties of the mechanical system. From the engineering point of view, this matrix can also be understood as a 'dynamic passport' of the mechanical system and can be used for dynamic certification and analysis of dynamic quality. In addition, the suggested approach mathematically reproduces the amplitude-frequency response matrix in a low-frequency band and at zero frequency.
This allows the possibility of determining the matrix of static stiffness through dynamic testing over a period of 10-15 minutes. As a practical example, the dynamic properties, in view of the impulse and frequency response matrices, of a lathe spindle are obtained, identified and investigated. The developed approach for modeling and parameter identification appears promising for a wide range of industrial applications, for example, rotary systems.
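Step (b) can be illustrated for a single mass-damper-spring subsystem: identify the damped natural frequency from peak spacing and the damping ratio from the logarithmic decrement. The parameter values are invented, and this shortcut stands in for the paper's least-squares and FFT-based procedure:

```python
import math

# Hypothetical single mass-damper-spring: wn = 10 rad/s, zeta = 0.05
m, wn, zeta = 2.0, 10.0, 0.05
wd = wn * math.sqrt(1.0 - zeta ** 2)     # damped natural frequency
dt = 0.001
h = [math.exp(-zeta * wn * t) * math.sin(wd * t) / (m * wd)
     for t in (i * dt for i in range(20000))]

# Damped natural frequency from the mean spacing of positive peaks
peaks = [i for i in range(1, len(h) - 1) if h[i - 1] < h[i] > h[i + 1]]
period = (peaks[-1] - peaks[0]) * dt / (len(peaks) - 1)
wd_hat = 2.0 * math.pi / period

# Damping ratio from the logarithmic decrement of successive peaks
delta = math.log(h[peaks[0]] / h[peaks[1]])
z_hat = delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)
```

With wd and zeta recovered for each mode, the remaining amplitude coefficients of the damped-sine sum follow from a linear least-squares fit, which is the split between the non-linear and linear stages mentioned in the abstract.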
In-plane elastic properties of auxetic multilattices
NASA Astrophysics Data System (ADS)
Berinskii, Igor E.
2018-07-01
Numerous studies have proposed the possible use of auxetic periodic structures in engineering applications. Regular cellular structures with several nodes in a unit cell of the lattice are referred to as multilattices. In this work, a homogenization procedure was applied to three types of plane multilattices constructed from elastic ribs: conventional and re-entrant honeycombs (REH), double arrowheads, and semi-REH. It was shown that for all considered lattices the components of the effective elasticity tensors can be obtained explicitly within the same approach, taking stretching, bending and shear of the ribs into account. As a result, equivalent elastic in-plane properties were found analytically as functions of the geometrical parameters of the lattices and the elastic parameters of the ribs. The limits of the elastic properties were also estimated. It was investigated how the condition of constant density changes the dependence of the elastic constants on the angles between the nodes. Different lattices were also investigated at the same reference density, taken equal to the density of the honeycomb lattice. The cases most typical from the practical point of view were considered and the corresponding elastic parameters were calculated for them.
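As a flavor of such closed-form in-plane results, the classical Gibson-Ashby bending-only expression for the honeycomb Poisson's ratio (a textbook formula, not one of this paper's own results, which also include stretching and shear) already shows the auxetic effect of re-entrance:

```python
import math

def honeycomb_nu12(h_over_l, theta_deg):
    """Gibson-Ashby bending-only in-plane Poisson's ratio nu_12 of a
    honeycomb with vertical-wall/inclined-wall length ratio h/l and
    wall angle theta; theta < 0 gives the re-entrant (auxetic) case."""
    th = math.radians(theta_deg)
    return math.cos(th) ** 2 / ((h_over_l + math.sin(th)) * math.sin(th))

nu_regular = honeycomb_nu12(1.0, 30.0)      # conventional: positive
nu_reentrant = honeycomb_nu12(2.0, -30.0)   # re-entrant: negative
```

For these particular geometries both ratios come out at magnitude 1, with the sign flipping purely because of the negative wall angle; adding rib stretching and shear, as the paper does, shifts these values and bounds them.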
The Dynamic Characteristic and Hysteresis Effect of an Air Spring
NASA Astrophysics Data System (ADS)
Löcken, F.; Welsch, M.
2015-02-01
In many applications of vibration technology, especially in chassis, air springs present a common alternative to steel spring concepts. A design-independent and therefore universal approach is presented to describe the dynamic characteristics of such springs. Differential and constitutive equations based on energy balances of the enclosed volume and the mountings are given to describe the nonlinear and dynamic characteristics. All parameters can therefore be estimated directly from physical and geometrical properties, without parameter fitting. The numerically solved equations fit very well to measurements of a passenger car air spring. In a second step, a simplification of this model leads to a purely mechanical equation. While in principle the same parameters are used, only an empirical correction of the effective heat transfer coefficient is needed to compensate for the simplifications made. Finally, a linearization of this equation leads to an analogous mechanical model that can be assembled from two common spring elements and one dashpot element in a specific arrangement. This transfer into "mechanical language" enables a system description with a simple force-displacement law and a consideration of the non-obvious hysteresis and stiffness increase of an air spring from a mechanical point of view.
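A two-spring/one-dashpot arrangement of this kind is a Zener-type element whose complex dynamic stiffness can be evaluated directly; the stiffness and damping values below are invented, and the specific arrangement assumed (one spring in parallel with a spring-dashpot series branch) is only one of the possible topologies:

```python
def zener_dynamic_stiffness(k1, k2, c, omega):
    """Complex dynamic stiffness of k1 in parallel with
    (k2 in series with a dashpot c), at angular frequency omega."""
    branch = (k2 * 1j * omega * c) / (k2 + 1j * omega * c)
    return k1 + branch

# Invented values: N/m for stiffnesses, N*s/m for damping
k_low = abs(zener_dynamic_stiffness(50e3, 30e3, 2e3, 0.1))   # quasi-static
k_high = abs(zener_dynamic_stiffness(50e3, 30e3, 2e3, 1e3))  # high frequency
```

The magnitude rises from roughly k1 at low frequency toward k1 + k2 at high frequency, with a loss angle peaking in between: exactly the stiffness increase and hysteresis of an air spring (isothermal vs. adiabatic behavior) recast in mechanical terms.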
NASA Astrophysics Data System (ADS)
Tumanov, Sergiu
A test of goodness of fit based on rank statistics was applied to prove the applicability of the Eggenberger-Polya discrete probability law to hourly SO2 concentrations measured in the vicinity of single sources. With this end in view, the pollutant concentration was considered an integral quantity, which may be accepted if one properly chooses the unit of measurement (in this case μg m-3) and if account is taken of the limited accuracy of measurements. The results of the test being satisfactory, even in the range of upper quantiles, the Eggenberger-Polya law was used in association with numerical modelling to estimate statistical parameters, e.g. quantiles, cumulative probabilities of threshold concentrations being exceeded, and so on, at the grid points of a network covering the area of interest. This only requires accurate estimates of the means and variances of the concentration series, which can readily be obtained through routine air pollution dispersion modelling.
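Since the Eggenberger-Polya law is the negative binomial distribution, fitting it by moments and reading off quantiles needs only a mean and a variance, which is exactly why routine dispersion-model output suffices. A sketch with invented mean/variance values:

```python
def negbin_quantile(mean, var, q):
    """Quantile of the Eggenberger-Polya (negative binomial) law fitted
    by moments to a concentration series in integer units (e.g. ug/m3).
    Requires overdispersion: var > mean."""
    assert var > mean > 0
    r = mean * mean / (var - mean)   # shape parameter
    p = mean / var                   # success probability
    pmf = p ** r                     # P(X = 0)
    cdf, k = pmf, 0
    while cdf < q:                   # pmf recurrence, accumulate the CDF
        pmf *= (k + r) * (1.0 - p) / (k + 1)
        cdf += pmf
        k += 1
    return k

# e.g. the 95th percentile of hourly concentrations at one grid point,
# given a (hypothetical) modelled mean of 30 and variance of 300
q95 = negbin_quantile(30.0, 300.0, 0.95)
```

Exceedance probabilities for a regulatory threshold follow the same way, as one minus the accumulated CDF at the threshold value.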
Progressive 3D shape abstraction via hierarchical CSG tree
NASA Astrophysics Data System (ADS)
Chen, Xingyou; Tang, Jin; Li, Chenglong
2017-06-01
A constructive solid geometry (CSG) tree model is proposed to progressively abstract the 3D geometric shape of a general object from a 2D image. Unlike conventional methods, ours applies to general objects without the need for massive CAD model collections, and represents object shapes in a coarse-to-fine manner that allows users to view temporal shape representations at any time. It stands in a transitional position between 2D image features and CAD models: it benefits from state-of-the-art object detection approaches, better initializes a CAD model for finer fitting, and estimates the 3D shape and pose parameters of the object at different levels according to a visual perception objective, in a coarse-to-fine manner. The two main contributions are the application of the CSG building-up procedure to visual perception, and the ability to extend the object estimation result into a model more flexible and expressive than 2D/3D primitive shapes. Experimental results demonstrate the feasibility and effectiveness of the proposed approach.
Application of glas laser altimetry to detect elevation changes in East Antarctica
NASA Astrophysics Data System (ADS)
Scaioni, M.; Tong, X.; Li, R.
2013-10-01
In this paper the use of the ICESat/GLAS laser altimeter for estimating multi-temporal elevation changes on polar ice sheets is addressed. Because laser spots do not overlap during repeat passes, interpolation methods are required to make comparisons. After reviewing the main methods described in the literature (crossover point analysis, cross-track DEM projection, space-temporal regressions), the last has been chosen for its capability of providing more elevation-change-rate measurements. The standard implementation of the space-temporal linear regression technique has been revisited and improved to better cope with outliers and to check the estimability of the model's parameters. GLAS data over the PANDA route in East Antarctica have been used for testing. The results obtained are physically meaningful, confirming the trend reported in the literature of constant snow accumulation in the area during the past two decades, unlike most of the continent, which has been losing mass.
Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.
2006-01-01
The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.
Transit Project Planning Guidance : Estimation of Transit Supply Parameters
DOT National Transportation Integrated Search
1984-04-01
This report discusses techniques applicable to the estimation of transit vehicle fleet requirements, vehicle-hours and vehicle-miles, and other related transit supply parameters. These parameters are used for estimating operating costs and certain ca...
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values while keeping the arterial resistance constant. This last value was obtained for each subject from the arterial flow, and was a necessary constraint in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
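A minimal sketch of the Monte Carlo Windkessel fit (illustrative only: the forward-Euler integration, half-sine flow waveform, parameter ranges, and fixed characteristic resistance Zc are assumptions, not the paper's protocol):

```python
import numpy as np

def windkessel_pressure(q, dt, Zc, R, C, p0=80.0):
    """Three-element Windkessel: P = Zc*q + Pc, with C*dPc/dt = q - Pc/R.
    Forward-Euler integration of the compliance pressure Pc."""
    pc = np.empty_like(q)
    pc[0] = p0
    for i in range(len(q) - 1):
        pc[i + 1] = pc[i] + dt * (q[i] - pc[i] / R) / C
    return Zc * q + pc

def mc_estimate(q, p_obs, dt, Zc, n=3000, seed=0):
    """Monte Carlo search: draw (R, C) uniformly, keep the pair that
    minimises the RMSE between simulated and observed pressure."""
    rng = np.random.default_rng(seed)
    best = (np.inf, None, None)
    for _ in range(n):
        R = rng.uniform(0.5, 3.0)   # peripheral resistance (mmHg*s/mL)
        C = rng.uniform(0.5, 3.0)   # compliance (mL/mmHg)
        p_sim = windkessel_pressure(q, dt, Zc, R, C)
        err = np.sqrt(np.mean((p_sim - p_obs) ** 2))
        if err < best[0]:
            best = (err, R, C)
    return best

# synthetic beat: half-sine systolic inflow, zero flow in diastole
dt = 0.01
t = np.arange(0.0, 0.8, dt)
q = np.where(t < 0.3, 100 * np.sin(np.pi * t / 0.3), 0.0)
p_true = windkessel_pressure(q, dt, Zc=0.1, R=1.2, C=1.5)
rmse, R_hat, C_hat = mc_estimate(q, p_true, dt, Zc=0.1)
```

Keeping Zc fixed, as the abstract describes for the arterial resistance, is what makes the remaining (R, C) pair identifiable from a single pressure trace.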
SBML-PET: a Systems Biology Markup Language-based parameter estimation tool.
Zi, Zhike; Klipp, Edda
2006-11-01
The estimation of model parameters from experimental data remains a bottleneck for a major breakthrough in systems biology. We present a Systems Biology Markup Language (SBML) based Parameter Estimation Tool (SBML-PET). The tool is designed to enable parameter estimation for biological models including signaling pathways, gene regulation networks and metabolic pathways. SBML-PET supports import and export of the models in the SBML format. It can estimate the parameters by fitting a variety of experimental data from different experimental conditions. SBML-PET has a unique feature of supporting event definitions in the SBML model. SBML models can also be simulated in SBML-PET. The Stochastic Ranking Evolution Strategy (SRES) is incorporated in SBML-PET for parameter estimation jobs, and the classic ODE solver ODEPACK is used to solve the Ordinary Differential Equation (ODE) system. SBML-PET is available at http://sysbio.molgen.mpg.de/SBML-PET/; the website also contains detailed documentation.
USDA-ARS?s Scientific Manuscript database
We proposed a method to estimate the error variance among non-replicated genotypes, thus to estimate the genetic parameters by using replicated controls. We derived formulas to estimate sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...
Space Shuttle propulsion parameter estimation using optimal estimation techniques, volume 1
NASA Technical Reports Server (NTRS)
1983-01-01
The mathematical developments and their computer program implementation for the Space Shuttle propulsion parameter estimation project are summarized. The estimation approach chosen is extended Kalman filtering with a modified Bryson-Frazier smoother. Its use here is motivated by the objective of obtaining better estimates than those available from filtering alone and of eliminating the lag associated with filtering. The estimation technique uses as the dynamical process the six-degree-of-freedom equations of motion, resulting in twelve state vector elements; in addition, mass and solid propellant burn depth are included as "system" state elements. The "parameter" state elements can include deviations from reference values in aerodynamic coefficients, inertia, center of gravity, atmospheric wind, etc. Propulsion parameter state elements are included not as options like those just discussed, but as the main parameter states to be estimated. The mathematical developments were completed for all these parameters. Since the system dynamics and measurement processes are nonlinear functions of the states, the mathematical developments are taken up almost entirely by the linearization of these equations as required by the estimation algorithms.
76 FR 49469 - Agency Information Collection Activities; Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-10
... visitor log. All visitor bags are processed through an X-ray machine and subject to search. Visitors will... comments: 1. Explain your views as clearly as possible and provide specific examples. 2. Describe any... your views. 4. If you estimate potential burden or costs, explain how you arrived at the estimate that...
Bayesian source term estimation of atmospheric releases in urban areas using LES approach.
Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo
2018-05-05
The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process between the observed and predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained by the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the prediction of both airflow and dispersion. Therefore, it is important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on the LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of an existing method using a RANS model. The results show that the proposed method reduces the errors in source location and release strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
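The Bayesian source-term idea can be illustrated with a toy stand-in for the LES-derived source-receptor relationship (the Gaussian plume kernel, receptor layout, grid, and noise scale below are all assumptions, not the paper's adjoint formulation):

```python
import numpy as np

def kernel(xs, ys, xr, yr, u=2.0, K=0.5):
    """Toy source-receptor relationship: concentration at a receptor per
    unit release rate, from a steady 2-D advection-diffusion plume.
    Stands in for the adjoint fields the paper derives from the LES mean flow."""
    dx, dy = xr - xs, yr - ys
    sig2 = 2.0 * K * np.maximum(dx, 1e-9) / u
    c = np.exp(-dy**2 / (2 * sig2)) / (u * np.sqrt(2 * np.pi * sig2))
    return np.where(dx > 0, c, 0.0)

# receptors and synthetic observations from a "true" source
recv = np.array([[5.0, 0.0], [5.0, 1.0], [10.0, 0.0], [10.0, -1.0], [15.0, 0.5]])
xs_t, ys_t, q_t = 0.0, 0.2, 5.0
obs = q_t * kernel(xs_t, ys_t, recv[:, 0], recv[:, 1])

# grid posterior over (source y-position, release strength), xs fixed at 0
ys_g = np.linspace(-1.0, 1.0, 81)
q_g = np.linspace(0.1, 10.0, 100)
sigma = 0.01                      # assumed observation noise scale
logp = np.empty((len(ys_g), len(q_g)))
for i, ys in enumerate(ys_g):
    phi = kernel(0.0, ys, recv[:, 0], recv[:, 1])
    for j, q in enumerate(q_g):
        logp[i, j] = -np.sum((obs - q * phi) ** 2) / (2 * sigma**2)
i, j = np.unravel_index(np.argmax(logp), logp.shape)
ys_map, q_map = ys_g[i], q_g[j]
```

With a flat prior the maximum a posteriori estimate coincides with the least-squares match between observed and predicted concentrations, which is the assimilation view the abstract describes.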
NASA Astrophysics Data System (ADS)
Arora, B. S.; Morgan, J.; Ord, S. M.; Tingay, S. J.; Hurley-Walker, N.; Bell, M.; Bernardi, G.; Bhat, N. D. R.; Briggs, F.; Callingham, J. R.; Deshpande, A. A.; Dwarakanath, K. S.; Ewall-Wice, A.; Feng, L.; For, B.-Q.; Hancock, P.; Hazelton, B. J.; Hindson, L.; Jacobs, D.; Johnston-Hollitt, M.; Kapińska, A. D.; Kudryavtseva, N.; Lenc, E.; McKinley, B.; Mitchell, D.; Oberoi, D.; Offringa, A. R.; Pindor, B.; Procopio, P.; Riding, J.; Staveley-Smith, L.; Wayth, R. B.; Wu, C.; Zheng, Q.; Bowman, J. D.; Cappallo, R. J.; Corey, B. E.; Emrich, D.; Goeke, R.; Greenhill, L. J.; Kaplan, D. L.; Kasper, J. C.; Kratzenberg, E.; Lonsdale, C. J.; Lynch, M. J.; McWhirter, S. R.; Morales, M. F.; Morgan, E.; Prabu, T.; Rogers, A. E. E.; Roshi, A.; Shankar, N. Udaya; Srivani, K. S.; Subrahmanyan, R.; Waterson, M.; Webster, R. L.; Whitney, A. R.; Williams, A.; Williams, C. L.
2015-08-01
We compare first-order (refractive) ionospheric effects seen by the MWA with the ionosphere as inferred from GPS data. The first-order ionosphere manifests itself as a bulk position shift of the observed sources across an MWA field of view. These effects can be computed from global ionosphere maps provided by GPS analysis centres, namely CODE. However, for precision radio astronomy applications, data from local GPS networks need to be incorporated into ionospheric modelling. For GPS observations, the ionospheric parameters are biased by GPS receiver instrument delays (receiver DCBs), among other effects. The receiver DCBs need to be estimated for any non-CODE GPS station used for ionosphere modelling. In this work, single GPS station-based ionospheric modelling is performed at a time resolution of 10 min, and the receiver DCBs are estimated for selected Geoscience Australia GPS receivers located at Murchison Radio Observatory, Yarragadee, Mount Magnet and Wiluna. The ionospheric gradients estimated from GPS are compared with those inferred from the MWA. The ionospheric gradients at all the GPS stations show a correlation with the gradients observed with the MWA. The ionosphere estimates obtained using GPS measurements show promise in terms of providing calibration information for the MWA.
Chang, Howard H; Peng, Roger D; Dominici, Francesca
2011-10-01
In air pollution epidemiology, there is a growing interest in estimating the health effects of coarse particulate matter (PM) with aerodynamic diameter between 2.5 and 10 μm. Coarse PM concentrations can exhibit considerable spatial heterogeneity because the particles travel shorter distances and do not remain suspended in the atmosphere for an extended period of time. In this paper, we develop a modeling approach for estimating the short-term effects of air pollution in time series analysis when the ambient concentrations vary spatially within the study region. Specifically, our approach quantifies the error in the exposure variable by characterizing, on any given day, the disagreement in ambient concentrations measured across monitoring stations. This is accomplished by viewing monitor-level measurements as error-prone repeated measurements of the unobserved population average exposure. Inference is carried out in a Bayesian framework to fully account for uncertainty in the estimation of model parameters. Finally, by using different exposure indicators, we investigate the sensitivity of the association between coarse PM and daily hospital admissions based on a recent national multisite time series analysis. Among Medicare enrollees from 59 US counties between the period 1999 and 2005, we find a consistent positive association between coarse PM and same-day admission for cardiovascular diseases.
A method to estimate the biomass of Spirulina platensis cultivated on a solid medium.
Pelizer, Lúcia Helena; Moraes, Iracema de Oliveira
2014-01-01
This paper presents a method to estimate the biomass of Spirulina cultivated on solid medium with sugarcane bagasse as a support, in view of the difficulty in determining biomass concentrations in bioprocesses, particularly those conducted in semi-solid or solid media. The genus Spirulina of the family Oscillatoriaceae comprises the group of multicellular filamentous cyanobacteria (blue-green microalgae). Spirulina is used as fish feed in aquaculture, as a food supplement, a source of vitamins, pigments, antioxidants and fatty acids. Therefore, its growth parameters are extremely important in studies of the development and optimization of bioprocesses. For studies of biomass growth, Spirulina platensis was cultured on solid medium using sugarcane bagasse as a support. The biomass thus produced was estimated by determining the protein content of the material grown during the process, based on the ratio of dry weight to protein content obtained in the surface growth experiments. The protein content of the biomass grown in Erlenmeyer flasks on surface medium was examined daily to check the influence of culture time on the protein content of the biomass. The biomass showed an average protein content of 42.2%. This methodology enabled the concentration of biomass adhering to the sugarcane bagasse to be estimated from the indirect measurement of the protein content associated with cell growth.
An overabundance of low-density Neptune-like planets
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Erkaev, Nikolai V.; Juvan, Ines; Fossati, Luca; Johnstone, Colin P.; Lammer, Helmut; Lendl, Monika; Odert, Petra; Kislyakova, Kristina G.
2017-04-01
We present a uniform analysis of the atmospheric escape rate of Neptune-like planets with estimated radius and mass (restricted to Mp < 30 M⊕). For each planet, we compute the restricted Jeans escape parameter, Λ, for a hydrogen atom evaluated at the planetary mass, radius, and equilibrium temperature. Values of Λ ≲ 20 suggest extremely high mass-loss rates. We identify 27 planets (out of 167) that are simultaneously consistent with hydrogen-dominated atmospheres and are expected to exhibit extreme mass-loss rates. We further estimate the mass-loss rates (Lhy) of these planets with tailored atmospheric hydrodynamic models. We compare Lhy to the energy-limited (maximum-possible high-energy-driven) mass-loss rates. We confirm that 25 planets (15 per cent of the sample) exhibit extremely high mass-loss rates (Lhy > 0.1 M⊕ Gyr-1), well in excess of the energy-limited mass-loss rates. This constitutes a contradiction, since the hydrogen envelopes could not be retained given such high mass-loss rates. We hypothesize that these planets are not truly subject to such high mass-loss rates. Instead, either hydrodynamic models overestimate the mass-loss rates, transit-timing-variation measurements underestimate the planetary masses, optical transit observations overestimate the planetary radii (due to high-altitude clouds), or Neptunes have consistently higher albedos than Jupiter-like planets. We conclude that at least one of these established estimation techniques is consistently producing biased values for Neptune-like planets. Such an important fraction of exoplanets with misinterpreted parameters can significantly bias population studies, such as the observed mass-radius distribution of exoplanets.
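The restricted Jeans escape parameter itself is straightforward to evaluate (a sketch; the physical constants are standard, but the example planet is hypothetical):

```python
# restricted Jeans escape parameter for atomic hydrogen:
# Lambda = G * Mp * m_H / (k_B * Teq * Rp)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23      # Boltzmann constant, J/K
m_H = 1.673e-27      # hydrogen atom mass, kg
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def jeans_lambda(mp_earth, rp_earth, teq_kelvin):
    """Restricted Jeans escape parameter evaluated at the planetary
    mass, radius, and equilibrium temperature."""
    return (G * mp_earth * M_EARTH * m_H
            / (k_B * teq_kelvin * rp_earth * R_EARTH))

# hypothetical low-density Neptune-like planet: 5 M_E, 4 R_E, Teq = 1000 K
lam = jeans_lambda(5.0, 4.0, 1000.0)
```

For these example values Λ falls below 20, the regime the abstract flags as implying extreme hydrogen escape.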
NASA Astrophysics Data System (ADS)
Baatz, D.; Kurtz, W.; Hendricks Franssen, H. J.; Vereecken, H.; Kollet, S. J.
2017-12-01
Parameter estimation for physically based, distributed hydrological models becomes increasingly challenging with increasing model complexity. The number of parameters is usually large and the number of observations relatively small, which results in large uncertainties. Catchment tomography, a moving transmitter-receiver concept for estimating spatially distributed hydrological parameters, is presented. In this concept, precipitation, highly variable in time and space, serves as a moving transmitter. In response to precipitation, runoff and stream discharge are generated along different paths and time scales, depending on surface and subsurface flow properties. Stream water levels are thus an integrated signal of upstream parameters, measured by stream gauges, which serve as the receivers. These stream water level observations are assimilated into a distributed hydrological model, which is forced with high-resolution, radar-based precipitation estimates. Applying a joint state-parameter update with the Ensemble Kalman Filter, the spatially distributed Manning's roughness coefficient and saturated hydraulic conductivity are estimated jointly. The sequential data assimilation continuously integrates new information into the parameter estimation problem, especially during precipitation events; every precipitation event constrains the possible parameter space. In this approach, forward simulations are performed with ParFlow, a variably saturated subsurface and overland flow model, coupled to the Parallel Data Assimilation Framework for the data assimilation and the joint state-parameter update. In synthetic, 3-dimensional experiments including surface and subsurface flow, hydraulic conductivity and the Manning's coefficient are efficiently estimated with the catchment tomography approach.
A joint update of the Manning's coefficient and hydraulic conductivity tends to improve the parameter estimation compared to a single parameter update, especially in cases of biased initial parameter ensembles. The computational experiments additionally show to which degree of spatial heterogeneity and to which degree of uncertainty of subsurface flow parameters the Manning's coefficient and hydraulic conductivity can be estimated efficiently.
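A joint state-parameter EnKF update can be sketched on a toy linear-reservoir model (a twin experiment under assumed dynamics, forcing, and noise levels, not the ParFlow setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def forecast(ens, p, dt=1.0):
    """Forward model: linear reservoir S' = S + dt*(P - k*S); the
    parameter row k rides along unchanged (augmented state)."""
    s, k = ens
    k = np.clip(k, 0.01, 1.99)   # keep all members in the stable range
    return np.vstack([s + dt * (p - k * s), k])

def analysis(ens, y, r=0.05 ** 2):
    """Stochastic EnKF update of the augmented [S, k] ensemble given an
    observation y of the storage S (perturbed-observations form)."""
    hx = ens[0]
    a = ens - ens.mean(axis=1, keepdims=True)
    d = hx - hx.mean()
    gain = (a @ d) / (d @ d + (ens.shape[1] - 1) * r)   # Pxy / (Pyy + R)
    y_pert = y + rng.normal(0.0, np.sqrt(r), ens.shape[1])
    return ens + np.outer(gain, y_pert - hx)

# twin experiment: recover the true recession parameter k from noisy
# storage observations under intermittent "precipitation" forcing
k_true, n_ens = 0.3, 200
s_true = 1.0
ens = np.vstack([np.full(n_ens, 1.0), rng.uniform(0.05, 0.8, n_ens)])
for step in range(100):
    p = 0.5 if step % 10 < 3 else 0.0
    s_true = s_true + 1.0 * (p - k_true * s_true)
    ens = forecast(ens, p)
    ens = analysis(ens, s_true + rng.normal(0.0, 0.05))
k_est = ens[1].mean()
```

The parameter is never observed directly: it is corrected through its ensemble cross-covariance with the observed state, which is exactly the mechanism of a joint state-parameter update, and each forcing event sharpens that correlation.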
Comparison of CME three-dimensional parameters derived from single and multi-spacecraft
NASA Astrophysics Data System (ADS)
LEE, Harim; Moon, Yong-Jae; Na, Hyeonock; Jang, Soojeong
2014-06-01
Several geometrical models (e.g., cone and flux rope models) have been suggested to infer three-dimensional parameters of CMEs using multi-view observations (STEREO/SECCHI) and single-view observations (SOHO/LASCO). To prepare for when only single-view observations are available, we have tested whether the cone model parameters from single-view observations are consistent with those from multi-view ones. For this test, we select 35 CMEs that are identified as CMEs with angular widths larger than 180 degrees by one spacecraft and as limb CMEs by the other. We use SOHO/LASCO and STEREO/SECCHI data during the period from 2010 December to 2011 July, when the two spacecraft were separated by 90±10 degrees. In this study, we compare the 3-D parameters of these CMEs from three different methods: (1) a triangulation method using STEREO/SECCHI and SOHO/LASCO data, (2) a Graduated Cylindrical Shell (GCS) flux rope model using STEREO/SECCHI data, and (3) an ice cream cone model using SOHO/LASCO data. The parameters used for comparison are radial velocities, angular widths and source location (the angle γ between the propagation direction and the plane of the sky). We find that the radial velocities and the γ-values from the three methods are well correlated with one another (CC > 0.8). However, angular widths from the three methods are somewhat different, with correlation coefficients of CC > 0.4. We also find that the correlation coefficients between the locations from the three methods and the active region locations are larger than 0.9, implying that most of the CMEs are radially ejected.
Quantifying Transmission Heterogeneity Using Both Pathogen Phylogenies and Incidence Time Series
Li, Lucy M.; Grassly, Nicholas C.; Fraser, Christophe
2017-01-01
Abstract Heterogeneity in individual-level transmissibility can be quantified by the dispersion parameter k of the offspring distribution. Quantifying heterogeneity is important as it affects other parameter estimates, it modulates the degree of unpredictability of an epidemic, and it needs to be accounted for in models of infection control. Aggregated data such as incidence time series are often not sufficiently informative to estimate k. Incorporating phylogenetic analysis can help to estimate k concurrently with other epidemiological parameters. We have developed an inference framework that uses particle Markov Chain Monte Carlo to estimate k and other epidemiological parameters using both incidence time series and the pathogen phylogeny. Using the framework to fit a modified compartmental transmission model that includes the parameter k to simulated data, we found that more accurate and less biased estimates of the reproductive number were obtained by combining epidemiological and phylogenetic analyses. However, k was most accurately estimated using pathogen phylogeny alone. Accurately estimating k was necessary for unbiased estimates of the reproductive number, but it did not affect the accuracy of reporting probability and epidemic start date estimates. We further demonstrated that inference was possible in the presence of phylogenetic uncertainty by sampling from the posterior distribution of phylogenies. Finally, we used the inference framework to estimate transmission parameters from epidemiological and genetic data collected during a poliovirus outbreak. Despite the large degree of phylogenetic uncertainty, we demonstrated that incorporating phylogenetic data in parameter inference improved the accuracy and precision of estimates. PMID:28981709
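Estimating the dispersion parameter k from offspring counts alone can be sketched with a negative binomial fit (a grid-search profile likelihood, not the paper's particle MCMC framework; all numbers are illustrative):

```python
import math
import numpy as np

def nb_loglik(k, x, R):
    """Log-likelihood of offspring counts x under a negative binomial
    with mean R and dispersion k (smaller k = stronger superspreading)."""
    return sum(math.lgamma(xi + k) - math.lgamma(k) - math.lgamma(xi + 1)
               + k * math.log(k / (k + R)) + xi * math.log(R / (k + R))
               for xi in x)

# simulate offspring counts via the Poisson-gamma mixture representation
rng = np.random.default_rng(3)
R_true, k_true = 2.0, 0.3
rates = rng.gamma(shape=k_true, scale=R_true / k_true, size=2000)
offspring = rng.poisson(rates)

# profile likelihood: the MLE of the mean is the sample mean, then
# grid-search the dispersion parameter
R_hat = offspring.mean()
ks = np.linspace(0.05, 1.0, 96)
k_hat = max(ks, key=lambda k: nb_loglik(k, offspring, R_hat))
```

This illustrates why aggregated incidence alone can be weakly informative about k: the likelihood depends on the full distribution of individual offspring counts, which incidence time series summarize away but phylogenies partially preserve.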
Corredor, Germán; Whitney, Jon; Arias, Viviana; Madabhushi, Anant; Romero, Eduardo
2017-01-01
Abstract. Computational histomorphometric approaches typically use low-level image features for building machine learning classifiers. However, these approaches usually ignore high-level expert knowledge. A computational model (M_im) combines low-, mid-, and high-level image information to predict the likelihood of cancer in whole slide images. Handcrafted low- and mid-level features are computed from area, color, and spatial nuclei distributions. High-level information is implicitly captured from the recorded navigations of pathologists while exploring whole slide images during diagnostic tasks. This model was validated by predicting the presence of cancer in a set of unseen fields of view. The available database was composed of 24 cases of basal-cell carcinoma, from which 17 served to estimate the model parameters and the remaining 7 comprised the evaluation set. A total of 274 fields of view of size 1024×1024 pixels were extracted from the evaluation set. Then 176 patches from this set were used to train a support vector machine classifier to predict the presence of cancer on a patch-by-patch basis while the remaining 98 image patches were used for independent testing, ensuring that the training and test sets do not comprise patches from the same patient. A baseline model (M_ex) estimated the cancer likelihood for each of the image patches. M_ex uses the same visual features as M_im, but its weights are estimated from nuclei manually labeled as cancerous or noncancerous by a pathologist. M_im achieved an accuracy of 74.49% and an F-measure of 80.31%, while M_ex yielded corresponding accuracy and F-measures of 73.47% and 77.97%, respectively. PMID:28382314
On techniques for angle compensation in nonideal iris recognition.
Schuckers, Stephanie A C; Schmid, Natalia A; Abhyankar, Aditya; Dorairaj, Vivekanand; Boyce, Christopher K; Hornak, Lawrence A
2007-10-01
The popularity of the iris biometric has grown considerably over the past two to three years. Most research has been focused on the development of new iris processing and recognition algorithms for frontal view iris images. However, a few challenging directions in iris research have been identified, including processing of a nonideal iris and iris at a distance. In this paper, we describe two nonideal iris recognition systems and analyze their performance. The word "nonideal" is used in the sense of compensating for off-angle occluded iris images. The system is designed to process nonideal iris images in two steps: 1) compensation for off-angle gaze direction and 2) processing and encoding of the rotated iris image. Two approaches are presented to account for angular variations in the iris images. In the first approach, we use Daugman's integrodifferential operator as an objective function to estimate the gaze direction. After the angle is estimated, the off-angle iris image undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal view image. The encoding technique developed for a frontal image is based on the application of the global independent component analysis. The second approach uses an angular deformation calibration model. The angular deformations are modeled, and calibration parameters are calculated. The proposed method consists of a closed-form solution, followed by an iterative optimization procedure. The images are projected on the plane closest to the base calibrated plane. Biorthogonal wavelets are used for encoding to perform iris recognition. We use a special dataset of the off-angle iris images to quantify the performance of the designed systems. A series of receiver operating characteristics demonstrate various effects on the performance of the nonideal-iris-based recognition system.
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
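The Cramér-Rao lower bound used above to compare input forms can be sketched for a generic output-sensitivity matrix (the exponential response model below is a hypothetical stand-in for the aircraft dynamics):

```python
import numpy as np

def cramer_rao_bounds(J, sigma):
    """Cramér-Rao lower bounds on parameter standard deviations for a
    model with additive Gaussian noise of std sigma:
    sqrt(diag((J^T J / sigma^2)^-1)), J being the output sensitivity
    (Jacobian) matrix with respect to the parameters."""
    fisher = J.T @ J / sigma**2
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

# toy response y = a*exp(-b*t); the control input shape would enter
# through which portions of the response are actually excited
t = np.linspace(0.0, 5.0, 50)
a, b, sigma = 2.0, 0.7, 0.05
J = np.column_stack([np.exp(-b * t),            # dy/da
                     -a * t * np.exp(-b * t)])  # dy/db
se_a, se_b = cramer_rao_bounds(J, sigma)
```

An input that excites the response more strongly enlarges J^T J and hence shrinks the bound, which is why the bound serves as a consistency yardstick across input forms.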
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
A technique for automatically extracting useful field of view and central field of view images.
Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar
2016-01-01
It is essential to ensure the uniform response of a single photon emission computed tomography gamma camera system before using it for clinical studies, by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood source image varies from 10 to 35 min, depending on both the activity of the Cobalt-57 flood source and the counts prespecified in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified total counts, the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique for reading the flood source image, removing unwanted information, and automatically extracting and saving the useful field of view and central field of view images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by subjecting it to simulated and real flood source images. The accuracy of the technique was found to be encouraging, especially in view of practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in less time and with fewer constraints.
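A sketch of extracting UFOV/CFOV crops and computing the uniformity parameters (the 95%/75% crop fractions follow common NEMA practice, but the window size and simulated flood are assumptions, and this is not the authors' MATLAB code):

```python
import numpy as np

def central_crop(img, frac):
    """Return the central `frac` fraction (per axis) of a 2-D image."""
    h, w = img.shape
    dh = int(round(h * (1 - frac) / 2))
    dw = int(round(w * (1 - frac) / 2))
    return img[dh:h - dh, dw:w - dw]

def integral_uniformity(img):
    """Integral uniformity: 100*(max-min)/(max+min) over the region."""
    return 100.0 * (img.max() - img.min()) / (img.max() + img.min())

def differential_uniformity(img, win=5):
    """Differential uniformity: worst `win`-pixel sliding window along
    rows and columns."""
    worst = 0.0
    for arr in (img, img.T):
        for row in arr:
            for i in range(len(row) - win + 1):
                seg = row[i:i + win]
                worst = max(worst, 100.0 * (seg.max() - seg.min())
                                   / (seg.max() + seg.min()))
    return worst

# simulated flood image: uniform counts with Poisson noise
rng = np.random.default_rng(7)
flood = rng.poisson(10000, size=(64, 64)).astype(float)
ufov = central_crop(flood, 0.95)   # useful field of view
cfov = central_crop(ufov, 0.75)    # central field of view
iu_ufov = integral_uniformity(ufov)
iu_cfov = integral_uniformity(cfov)
du_cfov = differential_uniformity(cfov)
```

Because the CFOV is a subset of the UFOV, its integral uniformity can never exceed the UFOV value, which is a useful sanity check on the extraction step.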
Generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test.
Munir, Mohammad
2018-06-01
Generalized sensitivity functions characterize the sensitivity of the parameter estimates with respect to the nominal parameters. We observe from the generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test that the measurements of insulin, 62 min after the administration of the glucose bolus into the experimental subject's body, possess no information about the parameter estimates. The glucose measurements possess the information about the parameter estimates up to three hours. These observations have been verified by the parameter estimation of the minimal model. The standard errors of the estimates and crude Monte Carlo process also confirm this observation. Copyright © 2018 Elsevier Inc. All rights reserved.
Liang, Yuzhen; Torralba-Sanchez, Tifany L; Di Toro, Dominic M
2018-04-18
Polyparameter Linear Free Energy Relationships (pp-LFERs) using Abraham system parameters have many useful applications. However, developing the Abraham system parameters depends on the availability and quality of the Abraham solute parameters. Using Quantum Chemically estimated Abraham solute Parameters (QCAP) is shown to produce pp-LFERs that have lower root mean square errors (RMSEs) of prediction for solvent-water partition coefficients than parameters estimated using other presently available methods. pp-LFER system parameters are estimated for solvent-water and plant cuticle-water systems, and for novel compounds, using QCAP solute parameters and experimental partition coefficients. Refitting the system parameters improves the calculation accuracy and eliminates the bias. Refitted models for solvent-water partition coefficients using QCAP solute parameters give better results (RMSE = 0.278 to 0.506 log units for 24 systems) than those based on ABSOLV (0.326 to 0.618) and QSPR (0.294 to 0.700) solute parameters. For munition constituents and munition-like compounds not included in the calibration of the refitted model, QCAP solute parameters produce pp-LFER models with much lower RMSEs for solvent-water partition coefficients (RMSE = 0.734 and 0.664 for the original and refitted models, respectively) than ABSOLV (4.46 and 5.98) and QSPR (2.838 and 2.723). Refitting the plant cuticle-water pp-LFER including munition constituents using QCAP solute parameters also results in a lower RMSE (RMSE = 0.386) than that using ABSOLV (0.778) and QSPR (0.512) solute parameters. Therefore, for fitting a model in situations for which experimental data exist and system parameters can be re-estimated, or for which system parameters do not exist and need to be developed, QCAP is the quantum chemical method of choice.
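Once solute descriptors are available, fitting pp-LFER system parameters is ordinary least squares on the Abraham equation log K = c + eE + sS + aA + bB + vV (a sketch with synthetic descriptors and coefficients; no real Abraham parameter values are used):

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical solute descriptors [E, S, A, B, V] for 12 compounds and
# hypothetical "true" system parameters [c, e, s, a, b, v]
X = rng.uniform(0.0, 1.5, size=(12, 5))
true_sys = np.array([0.3, 0.5, -0.8, -3.4, 0.1, 3.6])

# synthetic "measured" partition coefficients with 0.05 log-unit noise
A = np.column_stack([np.ones(len(X)), X])
logK = A @ true_sys + rng.normal(0.0, 0.05, len(X))

# refit the system parameters by least squares, as done when
# calibrating a pp-LFER against experimental partition coefficients
coef, *_ = np.linalg.lstsq(A, logK, rcond=None)
rmse = np.sqrt(np.mean((A @ coef - logK) ** 2))
```

The quality of the recovered system parameters is limited by the quality of the solute descriptors in X, which is the paper's central point about QCAP versus ABSOLV and QSPR inputs.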
Visualization and processing of computed solid-state NMR parameters: MagresView and MagresPython.
Sturniolo, Simone; Green, Timothy F G; Hanson, Robert M; Zilka, Miri; Refson, Keith; Hodgkinson, Paul; Brown, Steven P; Yates, Jonathan R
2016-09-01
We introduce two open source tools to aid the processing and visualisation of ab initio computed solid-state NMR parameters; both support the Magres file format for computed NMR parameters (as implemented in CASTEP v8.0 and QuantumEspresso v5.0.0). MagresView is built upon the widely used Jmol crystal viewer, and provides an intuitive environment to display computed NMR parameters. It can provide simple pictorial representations of one- and two-dimensional NMR spectra as well as output a selected spin system for exact simulations with dedicated spin-dynamics software. MagresPython provides a simple scripting environment for manipulating large numbers of computed NMR parameters to search for structural correlations. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Nonlinear features for classification and pose estimation of machined parts from single views
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-10-01
A new nonlinear feature extraction method is presented for classification and pose estimation of objects from single views. The feature extraction method is called the maximum representation and discrimination feature (MRDF) method. The nonlinear MRDF transformations to use are obtained in closed form, and offer significant advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We consider MRDFs on image data, provide a new 2-stage nonlinear MRDF solution, and show it specializes to well-known linear and nonlinear image processing transforms under certain conditions. We show the use of MRDF in estimating the class and pose of images of rendered solid CAD models of machine parts from single views using a feature-space trajectory neural network classifier. We show new results with better classification and pose estimation accuracy than are achieved by standard principal component analysis and Fukunaga-Koontz feature extraction methods.
NASA Technical Reports Server (NTRS)
Hodges, D. B.
1976-01-01
An iterative method is presented to retrieve single field of view (FOV) tropospheric temperature profiles directly from cloud-contaminated radiance data. A well-defined temperature profile may be calculated from the radiative transfer equation (RTE) for a partly cloudy atmosphere when the average fractional cloud amount and cloud-top height for the FOV are known. A cloud model is formulated to calculate the fractional cloud amount from an estimated cloud-top height. The method is then examined through use of simulated radiance data calculated through vertical integration of the RTE for a partly cloudy atmosphere using known values of cloud-top height(s) and fractional cloud amount(s). Temperature profiles are retrieved from the simulated data assuming various errors in the cloud parameters. Temperature profiles are retrieved from NOAA-4 satellite-measured radiance data obtained over an area dominated by an active cold front and with considerable cloud cover and compared with radiosonde data. The effects of using various guessed profiles and the number of iterations are considered.
Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly
ERIC Educational Resources Information Center
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.
2013-01-01
Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu
2016-02-22
We present the optical characteristics of view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which become very important in a display system that uses a PB, are considered in a one-dimensional model of the 3D display, in which light from the display panel pixels is numerically propagated through the PB slits to the viewing zone. The simulation results are then compared with the corresponding experimental measurements and discussed. We demonstrate that the Fresnel number, as a main parameter for evaluating view image quality, can be used to determine the PB slit aperture giving the best performance of the display system. It is revealed that a set of display parameters giving a Fresnel number of ∼0.7 maximizes the brightness of the view images, while a Fresnel number of 0.4-0.5 minimizes image crosstalk. Trading off brightness against crosstalk enables optimization of their relative magnitudes and leads to a choice of display parameters for the PB-based HD-MVA3D in which the Fresnel number lies between 0.4 and 0.7.
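The Fresnel-number criterion above can be made concrete. A common definition is N_F = a^2 / (lambda * L), with a the slit half-width, lambda the wavelength, and L the propagation distance (here the panel-to-barrier gap); the abstract does not state the authors' exact convention, so this is a sketch under that standard definition, with all numeric values placeholders:

```python
def fresnel_number(aperture_width_m, wavelength_m, distance_m):
    """Fresnel number N_F = a^2 / (lambda * L), with a the slit half-width."""
    a = aperture_width_m / 2.0
    return a * a / (wavelength_m * distance_m)

# Example: a 100-micron slit, green light, 1 mm panel-to-barrier gap
n_f = fresnel_number(100e-6, 550e-9, 1e-3)
```

Given a target Fresnel number (e.g. the 0.4-0.7 band reported above), the same relation can be inverted to pick the slit width for a fixed gap and wavelength.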
Karr, Jonathan R; Williams, Alex H; Zucker, Jeremy D; Raue, Andreas; Steiert, Bernhard; Timmer, Jens; Kreutz, Clemens; Wilkinson, Simon; Allgood, Brandon A; Bot, Brian M; Hoff, Bruce R; Kellen, Michael R; Covert, Markus W; Stolovitzky, Gustavo A; Meyer, Pablo
2015-05-01
Whole-cell models that explicitly represent all cellular components at the molecular level have the potential to predict phenotype from genotype. However, even for simple bacteria, whole-cell models will contain thousands of parameters, many of which are poorly characterized or unknown. New algorithms are needed to estimate these parameters and enable researchers to build increasingly comprehensive models. We organized the Dialogue for Reverse Engineering Assessments and Methods (DREAM) 8 Whole-Cell Parameter Estimation Challenge to develop new parameter estimation algorithms for whole-cell models. We asked participants to identify a subset of parameters of a whole-cell model given the model's structure and in silico "experimental" data. Here we describe the challenge, the best performing methods, and new insights into the identifiability of whole-cell models. We also describe several valuable lessons we learned toward improving future challenges. Going forward, we believe that collaborative efforts supported by inexpensive cloud computing have the potential to solve whole-cell model parameter estimation.
Adaptive Parameter Estimation of Person Recognition Model in a Stochastic Human Tracking Process
NASA Astrophysics Data System (ADS)
Nakanishi, W.; Fuse, T.; Ishikawa, T.
2015-05-01
This paper aims at the estimation of the parameters of person recognition models using a sequential Bayesian filtering method. In many human tracking methods, the parameters of the models used to recognize the same person in successive frames are set in advance of the tracking process. In real situations, however, these parameters may change with the observation conditions and with the difficulty of predicting human positions. In this paper we therefore formulate an adaptive parameter estimation using a general state-space model. First, we explain how to formulate human tracking as a general state-space model and describe its components. Then, following previous research, we use the Bhattacharyya coefficient to formulate the observation model of the general state-space model, which corresponds to the person recognition model. The observation model in this paper is a function of the Bhattacharyya coefficient with one unknown parameter. Finally, we sequentially estimate this parameter on a real dataset under several settings. Results show that the sequential parameter estimation succeeded and that the estimates were consistent with observation conditions such as occlusions.
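The Bhattacharyya coefficient between two normalized appearance histograms is the core similarity measure here. The abstract does not give the paper's exact observation-model form, so the likelihood below uses a common exponential form from the tracking literature, with sigma standing in for the one unknown parameter:

```python
import math

def bhattacharyya_coefficient(p, q):
    """BC(p, q) = sum_i sqrt(p_i * q_i) for two normalized histograms.
    BC = 1 when the histograms are identical, 0 when they do not overlap."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def observation_likelihood(p, q, sigma):
    """A common likelihood form: exp(-(1 - BC) / sigma^2).
    sigma plays the role of the single unknown parameter to be estimated."""
    return math.exp(-(1.0 - bhattacharyya_coefficient(p, q)) / sigma ** 2)
```

In a sequential Bayesian filter, this likelihood weights each particle or state hypothesis, and sigma itself can be carried in the state vector and updated frame by frame.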
Tube-Load Model Parameter Estimation for Monitoring Arterial Hemodynamics
Zhang, Guanqun; Hahn, Jin-Oh; Mukkamala, Ramakrishna
2011-01-01
A useful model of the arterial system is the uniform, lossless tube with parametric load. This tube-load model is able to account for wave propagation and reflection (unlike lumped-parameter models such as the Windkessel) while being defined by only a few parameters (unlike comprehensive distributed-parameter models). As a result, the parameters may be readily estimated by accurate fitting of the model to available arterial pressure and flow waveforms so as to permit improved monitoring of arterial hemodynamics. In this paper, we review tube-load model parameter estimation techniques that have appeared in the literature for monitoring wave reflection, large artery compliance, pulse transit time, and central aortic pressure. We begin by motivating the use of the tube-load model for parameter estimation. We then describe the tube-load model, its assumptions and validity, and approaches for estimating its parameters. We next summarize the various techniques and their experimental results while highlighting their advantages over conventional techniques. We conclude the review by suggesting future research directions and describing potential applications. PMID:22053157
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
ERIC Educational Resources Information Center
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Impacts of different types of measurements on estimating unsaturated flow parameters
NASA Astrophysics Data System (ADS)
Shi, Liangsheng; Song, Xuehang; Tong, Juxiu; Zhu, Yan; Zhang, Qiuru
2015-05-01
This paper assesses the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data, and groundwater level data. This study investigates the performance of the EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that, for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher-accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level data are one valuable source of information for inferring soil hydraulic parameters. However, with the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. The combined use of two or more types of data helps improve the parameter estimation.
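The abstract does not give the filter equations; as a rough sketch of the stochastic (perturbed-observation) EnKF analysis step used for joint state-parameter assimilation, where the unknown parameters are appended to the state vector so the same update corrects both, assuming numpy is available:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_var, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_ens, n_aug) array of augmented [state, parameters] members.
    obs: (n_obs,) observation vector; obs_op maps one member to predicted obs.
    obs_var: scalar observation-error variance (assumed uncorrelated)."""
    n_ens = ensemble.shape[0]
    Hx = np.array([obs_op(m) for m in ensemble])        # predicted obs, (n_ens, n_obs)
    X = ensemble - ensemble.mean(axis=0)                # state anomalies
    Y = Hx - Hx.mean(axis=0)                            # predicted-obs anomalies
    P_xy = X.T @ Y / (n_ens - 1)                        # cross covariance
    P_yy = Y.T @ Y / (n_ens - 1) + obs_var * np.eye(len(obs))
    K = P_xy @ np.linalg.inv(P_yy)                      # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=(n_ens, len(obs)))
    return ensemble + (perturbed - Hx) @ K.T
```

Because parameters enter the update only through their sampled correlation with the observed quantities, a badly deviated initial ensemble can corrupt that correlation, which is consistent with the degradation the abstract reports.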
Impacts of Different Types of Measurements on Estimating Unsaturated Flow Parameters
NASA Astrophysics Data System (ADS)
Shi, L.
2015-12-01
This study evaluates the value of different types of measurements for estimating soil hydraulic parameters. A numerical method based on the ensemble Kalman filter (EnKF) is presented to solely or jointly assimilate point-scale soil water head data, point-scale soil water content data, surface soil water content data, and groundwater level data. This study investigates the performance of the EnKF under different types of data, the potential worth contained in these data, and the factors that may affect estimation accuracy. Results show that, for all types of data, smaller measurement errors lead to faster convergence to the true values. Higher-accuracy measurements are required to improve the parameter estimation if a large number of unknown parameters need to be identified simultaneously. The data worth implied by the surface soil water content data and groundwater level data is prone to corruption by a deviated initial guess. Surface soil moisture data are capable of identifying soil hydraulic parameters for the top layers, but exert less or no influence on deeper layers, especially when estimating multiple parameters simultaneously. Groundwater level data are one valuable source of information for inferring soil hydraulic parameters. However, with the approach used in this study, the estimates from groundwater level data may suffer severe degradation if a large number of parameters must be identified. The combined use of two or more types of data helps improve the parameter estimation.
Accuracy Estimation and Parameter Advising for Protein Multiple Sequence Alignment
DeBlasio, Dan
2013-01-01
We develop a novel and general approach to estimating the accuracy of multiple sequence alignments without knowledge of a reference alignment, and use our approach to address a new task that we call parameter advising: the problem of choosing values for alignment scoring function parameters from a given set of choices to maximize the accuracy of a computed alignment. For protein alignments, we consider twelve independent features that contribute to a quality alignment. An accuracy estimator is learned that is a polynomial function of these features; its coefficients are determined by minimizing its error with respect to true accuracy using mathematical optimization. Compared to prior approaches for estimating accuracy, our new approach (a) introduces novel feature functions that measure nonlocal properties of an alignment yet are fast to evaluate, (b) considers more general classes of estimators beyond linear combinations of features, and (c) develops new regression formulations for learning an estimator from examples; in addition, for parameter advising, we (d) determine the optimal parameter set of a given cardinality, which specifies the best parameter values from which to choose. Our estimator, which we call Facet (for "feature-based accuracy estimator"), yields a parameter advisor that on the hardest benchmarks provides more than a 27% improvement in accuracy over the best default parameter choice, and for parameter advising significantly outperforms the best prior approaches to assessing alignment quality. PMID:23489379
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is the change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for estimating model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters even when the initial sets of parameter values deviated substantially from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about the design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front, near a time chosen by adding the inverse of a hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to the passage of the solute front.
ASSESSMENT OF INTAKE ACCORDING TO IDEAS GUIDANCE: CASE STUDY.
Bitar, A; Maghrabi, M
2018-04-01
Estimation of radiation intake and internal dose can be carried out through direct or indirect measurements during routine or special monitoring programs. In the case of iodine-131 contamination, direct measurements, such as thyroid counting, are fast and efficient ways to obtain quick results. Generally, the calculation method uses suitable values for known parameters, whereas default values are used when no information is available. However, to avoid significant discrepancies, the IDEAS guidelines set out a comprehensive method for evaluating the monitoring data for one or several types of monitoring. This article deals with a case of internal contamination of a worker who inhaled aerosols containing 131I during the production of radiopharmaceuticals. The data obtained were interpreted by following the IDEAS guidelines.
Parameter Estimation of Partial Differential Equation Models.
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab
2013-01-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
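The expensive baseline the abstract describes (repeatedly solving the PDE under candidate parameter values and picking the least-squares best) can be sketched for a 1-D heat equation u_t = D u_xx with an explicit finite-difference scheme; all names and grid values here are hypothetical, not the paper's setup:

```python
import numpy as np

def solve_heat(D, u0, dx, dt, n_steps):
    """Explicit finite-difference solution of u_t = D u_xx, fixed boundary values.
    Stable only when D * dt / dx**2 <= 0.5."""
    u = u0.copy()
    for _ in range(n_steps):
        u[1:-1] += D * dt / dx ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

def estimate_D(data, u0, dx, dt, n_steps, candidates):
    """Repeated-solve least-squares search over candidate diffusivities."""
    errors = [np.sum((solve_heat(D, u0, dx, dt, n_steps) - data) ** 2)
              for D in candidates]
    return candidates[int(np.argmin(errors))]
```

One full PDE solve per candidate is exactly the cost the parameter cascading and Bayesian methods above are designed to avoid, by replacing the solver with a basis-function representation of the solution.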
Reconstructing the hidden states in time course data of stochastic models.
Zimmer, Christoph
2015-11-01
Parameter estimation is central for analyzing models in systems biology. The relevance of stochastic modeling in the field is increasing, and with it the need for tailored parameter estimation techniques. Challenges for parameter estimation are partial observability, measurement noise, and the computational complexity arising from the dimension of the parameter space. This article extends the multiple shooting for stochastic systems method, developed for inference in intrinsically stochastic systems. The treatment of extrinsic noise and the estimation of the unobserved states are improved by taking into account the correlation between unobserved and observed species. This article demonstrates the power of the method on different scenarios of a Lotka-Volterra model, including cases in which the prey population dies out or explodes, and on a calcium oscillation system. Besides showing how the new extension improves the accuracy of the parameter estimates, this article analyzes the accuracy of the state estimates. In contrast to previous approaches, the new approach is well able to estimate states and parameters in all the scenarios. As it does not need stochastic simulations, it is of the same order of speed as conventional least squares parameter estimation methods with respect to computational time.
Estimation of correlation functions by stochastic approximation.
NASA Technical Reports Server (NTRS)
Habibi, A.; Wintz, P. A.
1972-01-01
The estimation of the autocorrelation function of a zero-mean stationary random process is considered. The techniques are applicable to processes with nonzero mean, provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both based on the method of stochastic approximation; both assume a functional form for the correlation function that depends on a number of parameters recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
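The first technique (point estimates plus stochastic approximation) can be sketched as a Robbins-Monro recursion: after each record, nudge the parameter toward the value whose parametric correlation best matches the record's point estimates. The exact estimator in the paper is not given in the abstract, so this is a generic sketch with hypothetical names:

```python
def fit_correlation_param(records, taus, model, grad, theta0, gain=1.0):
    """Robbins-Monro style recursion for a one-parameter correlation model.
    records: list of sample sequences; taus: lags at which to match;
    model(tau, theta): parametric R(tau); grad(tau, theta): d model / d theta."""
    theta = theta0
    for k, rec in enumerate(records, start=1):
        a_k = gain / k            # step sizes: sum a_k diverges, sum a_k^2 converges
        n = len(rec)
        for tau in taus:
            # standard point estimate of R(tau) from this record
            r_hat = sum(rec[i] * rec[i + tau] for i in range(n - tau)) / (n - tau)
            # gradient step on the squared error (model - r_hat)^2
            theta -= a_k * 2.0 * (model(tau, theta) - r_hat) * grad(tau, theta)
    return theta
```

For an exponential model R(tau; theta) = exp(-theta * tau), one would pass model=lambda tau, th: math.exp(-th * tau) and grad=lambda tau, th: -tau * math.exp(-th * tau).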
Wagner, Brian J.; Gorelick, Steven M.
1986-01-01
A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for dispersion coefficient and zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mountain stream in northern California.
The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions
NASA Astrophysics Data System (ADS)
Loaiciga, Hugo A.; Mariño, Miguel A.
1987-01-01
The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and perform hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
Estimation of teleported and gained parameters in a non-inertial frame
NASA Astrophysics Data System (ADS)
Metwally, N.
2017-04-01
Quantum Fisher information is introduced as a measure for estimating the teleported information between two users, one of whom is uniformly accelerated. We show that the final teleported state depends on the initial parameters, in addition to the parameters gained during the teleportation process. The degree to which these parameters can be estimated depends on the value of the acceleration, the single-mode approximation used (within/beyond), the type of information encoded in the teleported state (classical/quantum), and the entanglement of the initial communication channel. The estimation degree of the parameters can be maximized if the partners teleport classical information.
Sequential Feedback Scheme Outperforms the Parallel Scheme for Hamiltonian Parameter Estimation.
Yuan, Haidong
2016-10-14
Measurement and estimation of parameters are essential for science and engineering, where the main quest is to find the highest achievable precision with the given resources and design schemes to attain it. Two schemes, the sequential feedback scheme and the parallel scheme, are usually studied in the quantum parameter estimation. While the sequential feedback scheme represents the most general scheme, it remains unknown whether it can outperform the parallel scheme for any quantum estimation tasks. In this Letter, we show that the sequential feedback scheme has a threefold improvement over the parallel scheme for Hamiltonian parameter estimations on two-dimensional systems, and an order of O(d+1) improvement for Hamiltonian parameter estimation on d-dimensional systems. We also show that, contrary to the conventional belief, it is possible to simultaneously achieve the highest precision for estimating all three components of a magnetic field, which sets a benchmark on the local precision limit for the estimation of a magnetic field.
Stock market speculation: Spontaneous symmetry breaking of economic valuation
NASA Astrophysics Data System (ADS)
Sornette, Didier
2000-09-01
Firm foundation theory estimates a security's firm fundamental value based on four determinants: expected growth rate, expected dividend payout, the market interest rate and the degree of risk. In contrast, other views of decision-making in the stock market, using alternatives such as human psychology and behavior, bounded rationality, agent-based modeling and evolutionary game theory, expound that speculative and crowd behavior of investors may play a major role in shaping market prices. Here, we propose that the two views refer to two classes of companies connected through a "phase transition". Our theory is based on (1) the identification of the fundamental parity symmetry of prices (p→-p), which results from the relative direction of payment flux compared to commodity flux and (2) the observation that a company's risk-adjusted growth rate discounted by the market interest rate behaves as a control parameter for the observable price. We find a critical value of this control parameter at which a spontaneous symmetry-breaking of prices occurs, leading to a spontaneous valuation in absence of earnings, similarly to the emergence of a spontaneous magnetization in Ising models in absence of a magnetic field. The low growth rate phase is described by the firm foundation theory while the large growth rate phase is the regime of speculation and crowd behavior. In practice, while large "finite-time horizon" effects round off the predicted singularities, our symmetry-breaking speculation theory accounts for the apparent over-pricing and the high volatility of fast growing companies on the stock markets.
Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.
Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya
2018-05-05
This paper proposes a novel filtering design, from the viewpoint of identification rather than the conventional nonlinear estimation schemes (NESs), to improve the performance of orbit state estimation for a space target. First, a nonlinear perturbation is viewed or modeled as an unknown input (UI) coupled with the orbit state, to avoid the intractable nonlinear perturbation integral (INPI) required by NESs. Then, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation-maximization (EM) framework, is proposed to simply and analytically fit or identify the first two moments (FTM) of the perturbation (viewed as UI), instead of directly computing the INPI as in NESs. Orbit estimation performance is greatly improved by utilizing the fitted UI-FTM to simultaneously correct the state estimate and its covariance. Third, provided enough information is mined, SMCCF should outperform existing NESs and the standard identification algorithms (which view the UI as a constant independent of the state and utilize only the identified UI mean to correct the state estimate, regardless of its covariance), since it further incorporates the useful covariance information in addition to the mean of the UI. Finally, our simulations demonstrate the superior performance of SMCCF via an orbit estimation example.
Sun, Xiaodian; Jin, Li; Xiong, Momiao
2008-01-01
It is system dynamics that determines the function of cells, tissues, and organisms. Developing mathematical models and estimating their parameters are essential for studying the dynamic behavior of biological systems (metabolic networks, genetic regulatory networks, and signal transduction pathways) under perturbation by external stimuli. In general, biological dynamic systems are only partially observed. Therefore, a natural way to model dynamic biological systems is to employ nonlinear state-space equations. Although statistical methods for parameter estimation of linear models in biological dynamic systems have been developed intensively in recent years, the estimation of both states and parameters of nonlinear dynamic systems remains a challenging task. In this report, we apply the extended Kalman filter (EKF) to the estimation of both states and parameters of nonlinear state-space models. To evaluate the performance of the EKF for parameter estimation, we apply the EKF to a simulation dataset and two real datasets: a JAK-STAT signal transduction pathway dataset and a Ras/Raf/MEK/ERK signaling pathway dataset. The preliminary results show that the EKF can accurately estimate the parameters and predict states in nonlinear state-space equations for modeling dynamic biochemical networks. PMID:19018286
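The standard trick for joint state-parameter estimation with an EKF is to augment the state vector with the unknown parameter and filter both together. The sketch below illustrates this on a toy scalar model x_{k+1} = a * x_k with unknown coefficient a, not on the paper's pathway models; all names are hypothetical:

```python
import numpy as np

def ekf_joint(ys, x0, a0, q, r, p0=1.0):
    """Joint EKF on the augmented state z = [x, a] for
    x_{k+1} = a * x_k,  y_k = x_k + measurement noise (variance r)."""
    z = np.array([x0, a0], dtype=float)
    P = np.eye(2) * p0
    Q = np.diag([q, 0.0])                 # parameter modeled as (nearly) constant
    H = np.array([[1.0, 0.0]])            # we observe x only
    for y in ys:
        # predict: f(z) = [a*x, a]; Jacobian F = [[a, x], [0, 1]]
        F = np.array([[z[1], z[0]], [0.0, 1.0]])
        z = np.array([z[0] * z[1], z[1]])
        P = F @ P @ F.T + Q
        # update with the scalar measurement
        S = float(H @ P @ H.T) + r
        K = (P @ H.T / S).ravel()
        z = z + K * (y - z[0])
        P = (np.eye(2) - np.outer(K, H.ravel())) @ P
    return z                               # [state estimate, parameter estimate]
```

The off-diagonal entries of P, generated by the x-dependence of the Jacobian, are what let measurements of x correct the parameter a.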
Theory and experimental validation of SPLASH (Single Panel Lamp and Shroud Helper).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, Marvin Elwood; Porter, Jason M.
2005-06-01
The radiant heat test facility develops test sets providing well-characterized thermal environments, often representing fires. Many of the components and procedures have become standardized to such an extent that the development of a specialized design tool was appropriate. SPLASH (Single Panel Lamp and Shroud Helper) is that tool. SPLASH is implemented as a user-friendly program that allows a designer to describe a test setup in terms of parameters such as lamp number, power, position, and separation distance. Thermal radiation is the dominant mechanism of heat transfer and the SPLASH model solves a radiation enclosure problem to estimate temperature distributions in a shroud providing the boundary condition of interest. Irradiance distribution on a specified viewing plane is also estimated. This document provides the theoretical development for the underlying model. A series of tests were conducted to characterize SPLASH's ability to analyze lamp and shroud systems. The comparison suggests that SPLASH succeeds as a design tool. Simplifications made to keep the model tractable are demonstrated to result in estimates that are only approximately as uncertain as many of the properties and characteristics of the operating environment.
NASA Technical Reports Server (NTRS)
Steffen, Konrad; Key, Jeff; Maslanik, Jim; Haefliger, Marcel; Fowler, Chuck
1992-01-01
Satellite data for the estimation of radiative and turbulent heat fluxes is becoming an increasingly important tool in large-scale studies of climate. One parameter needed in the estimation of these fluxes is surface temperature. To our knowledge, little effort has been directed to the retrieval of the sea ice surface temperature (IST) in the Arctic, an area where the first effects of a changing climate are expected to be seen. The reason is not one of methodology, but rather our limited knowledge of atmospheric temperature, humidity, and aerosol profiles, the microphysical properties of polar clouds, and the spectral characteristics of the wide variety of surface types found there. We have developed a means to correct for the atmospheric attenuation of satellite-measured clear sky brightness temperatures used in the retrieval of ice surface temperature from the split-window thermal channels of the advanced very high resolution radiometer (AVHRR) sensors on-board three of the NOAA series satellites. These corrections are specified for three different 'seasons' and as a function of satellite viewing angle, and are expected to be applicable to the perennial ice pack in the central Arctic Basin.
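A split-window IST retrieval of the kind described is, in essence, a regression on the two thermal channels plus a viewing-angle term. The function below has the standard split-window shape; the coefficient values are purely illustrative placeholders, not the seasonal coefficients derived in this study.

```python
import numpy as np

def ist_split_window(t11, t12, sat_zenith_deg, coeffs):
    """Estimate ice surface temperature (K) from AVHRR channel-4/5
    brightness temperatures (t11, t12, in K) using a split-window form
    with a viewing-angle correction.  coeffs = (a, b, c, d) are
    season-specific regression coefficients; the values used below are
    hypothetical, not the published ones."""
    a, b, c, d = coeffs
    sec = 1.0 / np.cos(np.radians(sat_zenith_deg))
    return a + b * t11 + c * (t11 - t12) + d * (t11 - t12) * sec

# Illustrative "winter" coefficients (hypothetical):
winter = (-1.5, 1.005, 1.2, 0.8)
ist = ist_split_window(250.3, 249.8, 30.0, winter)
```

The channel difference (t11 - t12) carries the water-vapor attenuation signal, and scaling it by the secant of the viewing angle accounts for the longer slant path at off-nadir geometries.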
Oviedo de la Fuente, Manuel; Febrero-Bande, Manuel; Muñoz, María Pilar; Domínguez, Àngela
2018-01-01
This paper proposes a novel approach that uses meteorological information to predict the incidence of influenza in Galicia (Spain). It extends the Generalized Least Squares (GLS) methods in the multivariate framework to functional regression models with dependent errors. These kinds of models are useful when the recent history of the incidence of influenza is not readily available (for instance, because of delays in communication with health informants) and the prediction must be constructed by correcting for the temporal dependence of the residuals and using more accessible variables. A simulation study shows that the GLS estimators yield better estimates of the regression-model parameters than the classical models do. They obtain extremely good results from the predictive point of view and are competitive with the classical time-series approach for the incidence of influenza. An iterative version of the GLS estimator (called iGLS) is also proposed, which can help to model complicated dependence structures. For constructing the model, the distance correlation measure [Formula: see text] was employed to select relevant information for predicting the influenza rate, mixing multivariate and functional variables. These kinds of models are extremely useful to health managers in allocating resources in advance to manage influenza epidemics.
Validation of ERS-1 environmental data products
NASA Technical Reports Server (NTRS)
Goodberlet, Mark A.; Swift, Calvin T.; Wilkerson, John C.
1994-01-01
Evaluation of the launch-version algorithms used by the European Space Agency (ESA) to derive wind field and ocean wave estimates from measurements of sensors aboard the European Remote Sensing satellite, ERS-1, has been accomplished through comparison of the derived parameters with coincident measurements made by 24 open ocean buoys maintained by the National Oceanic and Atmospheric Administration (NOAA). During the period from November 1, 1991 through February 28, 1992, databases with 577 and 485 pairs of coincident sensor/buoy wind and wave measurements were collected for the Active Microwave Instrument (AMI) and the Radar Altimeter (RA), respectively. Based on these data, algorithm retrieval accuracy is estimated to be plus or minus 4 m/s for AMI wind speed, plus or minus 3 m/s for RA wind speed, and plus or minus 0.6 m for RA wave height. After removing 180 degree ambiguity errors, the AMI wind direction retrieval accuracy was estimated at plus or minus 28 degrees. All of the ERS-1 wind and wave retrievals are relatively unbiased. These results should be viewed as interim since improved algorithms are under development. As final versions are implemented, additional assessments should be conducted to complete the validation.
Polarimetric, Two-Color, Photon-Counting Laser Altimeter Measurements of Forest Canopy Structure
NASA Technical Reports Server (NTRS)
Harding, David J.; Dabney, Philip W.; Valett, Susan
2011-01-01
Laser altimeter measurements of forest stands with distinct structures and compositions have been acquired at 532 nm (green) and 1064 nm (near-infrared) wavelengths and parallel and perpendicular polarization states using the Slope Imaging Multi-polarization Photon Counting Lidar (SIMPL). The micropulse, single photon ranging measurement approach employed by SIMPL provides canopy structure measurements with high vertical and spatial resolution. Using a height distribution analysis method adapted from conventional, 1064 nm, full-waveform lidar remote sensing, the sensitivity of two parameters commonly used for above-ground biomass estimation is compared as a function of wavelength. The results for the height of median energy (HOME) and canopy cover are for the most part very similar, indicating that biomass estimation using lidars operating at green and near-infrared wavelengths will yield comparable results. The expected detection of increasing depolarization with depth into the canopies due to volume multiple-scattering was not observed, possibly due to the small laser footprint and the small detector field of view used in the SIMPL instrument. The results of this work provide pathfinder information for NASA's ICESat-2 mission that will employ a 532 nm, micropulse, photon counting laser altimeter.
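HOME and canopy cover, the two biomass-related parameters compared above, can be computed from a (pseudo-)waveform height distribution roughly as follows. This is a generic textbook-style calculation, not SIMPL's processing chain; the bin layout and the linear interpolation are simplifying assumptions.

```python
import numpy as np

def height_of_median_energy(heights, energy):
    """Height Of Median Energy (HOME): the height at which the
    cumulative returned energy reaches half of the total (linear
    interpolation between bins is a simplifying assumption)."""
    heights = np.asarray(heights, dtype=float)
    cum = np.cumsum(np.asarray(energy, dtype=float))
    return float(np.interp(0.5 * cum[-1], cum, heights))

def canopy_cover(energy, ground_bin=0):
    """Fraction of returned energy from above the ground bin - a common
    lidar canopy-cover proxy (no reflectance correction applied)."""
    energy = np.asarray(energy, dtype=float)
    return float(1.0 - energy[ground_bin] / energy.sum())

# Toy 5-bin height distribution: strong ground return plus canopy returns.
heights = [0.0, 5.0, 10.0, 15.0, 20.0]
energy = [4.0, 1.0, 1.0, 1.0, 3.0]
home = height_of_median_energy(heights, energy)   # 5.0 m
cover = canopy_cover(energy)                      # 0.6
```

Comparing these two quantities computed from the 532 nm and 1064 nm height distributions of the same stand is the wavelength-sensitivity test the abstract describes.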
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
77 FR 2058 - Agency Information Collection Activities; Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-13
... through an X-ray machine and subject to search. Visitors will be provided an EPA/DC badge that must be... comments: 1. Explain your views as clearly as possible and provide specific examples. 2. Describe any... your views. 4. If you estimate potential burden or costs, explain how you arrived at the estimate that...
A Bayesian model for time-to-event data with informative censoring
Kaciroti, Niko A.; Raghunathan, Trivellore E.; Taylor, Jeremy M. G.; Julius, Stevo
2012-01-01
Randomized trials with dropouts or censored data and discrete time-to-event outcomes are frequently analyzed using the Kaplan–Meier or product limit (PL) estimation method. However, the PL method assumes that the censoring mechanism is noninformative, and when this assumption is violated the inferences may not be valid. We propose an expanded PL method using a Bayesian framework to incorporate an informative censoring mechanism and perform sensitivity analysis on estimates of the cumulative incidence curves. The expanded method uses a model, which can be viewed as a pattern mixture model, in which the odds of having an event during the follow-up interval (t_{k−1}, t_k], conditional on being at risk at t_{k−1}, differ across the patterns of missing data. The sensitivity parameters relate the odds of an event between subjects from a missing-data pattern and the observed subjects for each interval. The large number of sensitivity parameters is reduced by treating them as random, assumed to follow a log-normal distribution with prespecified mean and variance; the mean and variance are then varied to explore the sensitivity of inferences. The missing at random (MAR) mechanism is a special case of the expanded model, allowing exploration of how inferences change as departures from the MAR assumption are introduced. The proposed approach is applied to data from the TRial Of Preventing HYpertension. PMID:22223746
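The sensitivity mechanism can be sketched numerically: per-interval event odds for a missing-data pattern equal the observed odds multiplied by a log-normal factor, and a factor with zero log-mean and zero variance recovers MAR. The hazards and hyperparameters below are illustrative, not taken from the TROPHY data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Observed per-interval event probabilities (illustrative).
p_obs = np.array([0.05, 0.08, 0.10, 0.07])

def dropout_hazard(p_obs, delta_mean, delta_sd, n_draws=5000):
    """Per-interval event probability for the dropout pattern: observed
    odds times exp(delta), delta ~ Normal(delta_mean, delta_sd),
    averaged over draws (a Monte Carlo stand-in for the prior)."""
    odds = p_obs / (1.0 - p_obs)
    delta = rng.normal(delta_mean, delta_sd, size=(n_draws, 1))
    odds_mis = odds * np.exp(delta)
    return (odds_mis / (1.0 + odds_mis)).mean(axis=0)

mar = dropout_hazard(p_obs, 0.0, 0.0)            # MAR special case
pess = dropout_hazard(p_obs, np.log(2.0), 0.25)  # dropouts at ~2x the odds
```

Sweeping `delta_mean` (and `delta_sd`) and recomputing the cumulative incidence from these hazards is the sensitivity analysis the abstract describes.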
Omnidirectional Underwater Camera Design and Calibration
Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David
2015-01-01
This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure a complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
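The refraction effect that invalidates the pinhole model can be seen with a direct application of Snell's law at a flat port. The sketch below (a thin flat port, so the glass index cancels) is a minimal illustration of why per-ray modeling is needed, not the paper's full ray tracer.

```python
import numpy as np

def refract_flat_port(theta_air_deg, n_air=1.0, n_glass=1.49, n_water=1.33):
    """Trace a ray from the camera (in air) through a flat housing port
    into water using Snell's law n1*sin(t1) = n2*sin(t2) at each
    interface.  Returns the in-water ray angle in degrees.  For a thin
    flat port the glass index cancels; thick ports also shift the ray
    laterally, which this sketch ignores."""
    s_air = np.sin(np.radians(theta_air_deg))
    s_glass = s_air * n_air / n_glass
    s_water = s_glass * n_glass / n_water    # = s_air * n_air / n_water
    return float(np.degrees(np.arcsin(s_water)))

# A 45-degree in-air ray bends to roughly 32 degrees in water,
# so the effective field of view shrinks markedly.
theta_w = refract_flat_port(45.0)
```

Because the bending grows with the incidence angle, wide-angle lenses are distorted most, which is why the simulator must test whether a given housing still yields full hemisphere coverage.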
ERIC Educational Resources Information Center
Xu, Xueli; Jia, Yue
2011-01-01
Estimation of item response model parameters and ability distribution parameters has been, and will remain, an important topic in the educational testing field. Much research has been dedicated to addressing this task. Some studies have focused on item parameter estimation when the latent ability was assumed to follow a normal distribution,…
ERIC Educational Resources Information Center
Gugel, John F.
A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…
Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan
2012-01-01
Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
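As a minimal stand-in for the full ODE/MCMC machinery described above, the decay-phase estimation problem can be illustrated with a single-exponential viral-load model fit by log-linear least squares. All numbers are synthetic; the study itself fits multi-phase decay and rebound data with a Markov-Chain Monte-Carlo method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic decay-phase viral load: V(t) = V0 * exp(-d*t), measured with
# multiplicative (log-normal) noise - a toy stand-in for the ODE model.
V0_true, d_true = 1e5, 0.5
t = np.linspace(0.0, 10.0, 25)
v_obs = V0_true * np.exp(-d_true * t) * np.exp(rng.normal(0.0, 0.1, t.size))

# The log-transform makes the fit linear: log V = log V0 - d*t.
slope, intercept = np.polyfit(t, np.log(v_obs), 1)
d_hat, V0_hat = -slope, float(np.exp(intercept))
```

This recovers the decay rate from one phase; as the abstract notes, parameters such as drug efficacy and reservoir contribution rates only become identifiable when data from multiple decay and rebound phases are fit jointly.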
Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Boyle, Richard D.
2014-01-01
Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and nonlinear contribution. A technique to identify parameters of this model in discrete-time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for longterm space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. Due to this we conclude that alternative modeling strategies and more advanced estimation techniques be considered for future work.
Filter Function for Wavefront Sensing Over a Field of View
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2007-01-01
A filter function has been derived as a means of optimally weighting the wavefront estimates obtained in image-based phase retrieval performed at multiple points distributed over the field of view of a telescope or other optical system. When the data obtained in wavefront sensing and, more specifically, image-based phase retrieval, are used for controlling the shape of a deformable mirror or other optic used to correct the wavefront, the control law obtained by use of the filter function gives a more balanced optical performance over the field of view than does a wavefront-control law obtained by use of a wavefront estimate obtained from a single point in the field of view.
NASA Technical Reports Server (NTRS)
Iliff, Kenneth W.
1987-01-01
The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
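For Gaussian measurement noise, the maximum likelihood cost reduces to a weighted sum of squared residuals between the measured and model responses, minimized over the unknown derivatives. The sketch below estimates a single stability derivative of a hypothetical first-order roll model by a coarse line search over the cost; it illustrates the output-error idea, not the paper's aircraft models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy output-error setup: discretized first-order roll dynamics
#   p[k+1] = p[k] + dt*(Lp*p[k] + Ld*delta[k]),
# with the roll-damping derivative Lp unknown and Ld assumed known.
dt, Lp_true, Ld = 0.02, -2.0, 5.0
delta = np.sign(np.sin(0.5 * np.arange(500) * dt))   # doublet-like input

def simulate(Lp):
    p = np.zeros(delta.size)
    for k in range(delta.size - 1):
        p[k + 1] = p[k] + dt * (Lp * p[k] + Ld * delta[k])
    return p

z = simulate(Lp_true) + rng.normal(0.0, 0.02, delta.size)  # "flight data"

def cost(Lp, sigma=0.02):
    """ML cost for Gaussian noise: scaled sum of squared residuals."""
    r = z - simulate(Lp)
    return 0.5 * np.sum(r * r) / sigma**2

# Coarse line search over the single unknown derivative; real estimators
# use Newton-type minimization over many derivatives simultaneously.
grid = np.linspace(-4.0, 0.0, 401)
Lp_hat = float(grid[np.argmin([cost(g) for g in grid])])
```

Plotting `cost` over `grid` reproduces, in one dimension, the kind of cost-function picture the paper uses to illustrate the minimization process.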
ERIC Educational Resources Information Center
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun
2002-01-01
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies showed that data assimilation could reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
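The windowed weight-smoothing idea can be sketched for a toy dual state-parameter problem: particles carry both the state and the parameter, log-weights are accumulated over a window of W steps, and resampling (with a small parameter jitter) happens only at window boundaries, so a single outlying observation cannot dominate an update. This is a schematic stand-in, not the authors' hydrological model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy system: x[t+1] = theta*x[t] + u[t], observed as y[t] = x[t] + noise.
theta_true, T, W, N = 0.8, 120, 10, 2000
u = 0.5 * np.ones(T)
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = theta_true * x_true[t - 1] + u[t]
y = x_true + rng.normal(0.0, 0.1, T)

# Particles carry (state, parameter).
xs = rng.normal(0.0, 1.0, N)
thetas = rng.uniform(0.0, 1.0, N)
logw = np.zeros(N)
for t in range(1, T):
    xs = thetas * xs + u[t] + rng.normal(0.0, 0.02, N)  # propagate
    logw += -0.5 * ((y[t] - xs) / 0.1) ** 2             # accumulate weights
    if t % W == 0:                                      # windowed resample
        w = np.exp(logw - logw.max())
        idx = rng.choice(N, size=N, p=w / w.sum())
        xs = xs[idx]
        thetas = thetas[idx] + rng.normal(0.0, 0.01, N) # parameter jitter
        logw[:] = 0.0

theta_hat = float(thetas.mean())   # concentrates near theta_true
```

Setting `W = 1` recovers ordinary step-by-step SIR filtering; larger windows trade responsiveness for robustness to single bad observations.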
NASA Astrophysics Data System (ADS)
Patil, Prataprao; Vyasarayani, C. P.; Ramji, M.
2017-06-01
In this work, the digital photoelasticity technique is used to estimate the crack tip fracture parameters for different crack configurations. Conventionally, only isochromatic data surrounding the crack tip is used for SIF estimation, but with the advent of digital photoelasticity, the pixel-wise availability of both isoclinic and isochromatic data can be exploited for SIF estimation in a novel way. A linear least-squares approach is proposed to estimate the mixed-mode crack tip fracture parameters by solving the multi-parameter stress field equation. The stress intensity factor (SIF) is extracted from those estimated fracture parameters. The isochromatic and isoclinic data around the crack tip are estimated using the ten-step phase shifting technique. To obtain the unwrapped data, the adaptive quality guided phase unwrapping algorithm (AQGPU) is used. The mixed-mode fracture parameters, especially the SIF, are estimated for specimen configurations such as single edge notch (SEN), center crack, and straight crack ahead of an inclusion using the proposed algorithm. The experimental SIF values estimated using the proposed method are compared with analytical/finite element analysis (FEA) results and are found to be in good agreement.
NASA Technical Reports Server (NTRS)
Orme, John S.; Gilyard, Glenn B.
1992-01-01
Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.
Parameter estimation of kinetic models from metabolic profiles: two-phase dynamic decoupling method.
Jia, Gengjie; Stephanopoulos, Gregory N; Gunawan, Rudiyanto
2011-07-15
Time-series measurements of metabolite concentration have become increasingly more common, providing data for building kinetic models of metabolic networks using ordinary differential equations (ODEs). In practice, however, such time-course data are usually incomplete and noisy, and the estimation of kinetic parameters from these data is challenging. Practical limitations due to data and computational aspects, such as solving stiff ODEs and finding the global optimal solution to the estimation problem, motivate the development of a new estimation procedure that can circumvent some of these constraints. In this work, an incremental and iterative parameter estimation method is proposed that combines and iterates between two estimation phases. One phase involves a decoupling method, in which the subset of model parameters associated with measured metabolites is estimated using the minimization of slope errors. Another phase follows, in which the ODE model is solved one equation at a time and the remaining model parameters are obtained by minimizing concentration errors. The performance of this two-phase method was tested on a generic branched metabolic pathway and the glycolytic pathway of Lactococcus lactis. The results showed that the method is efficient in getting accurate parameter estimates, even when some information is missing.
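The decoupling phase can be illustrated in miniature: for a rate law linear in its parameters, finite-difference slopes of the measured profile turn parameter estimation into a linear least-squares problem, with no stiff ODE solve. The single-metabolite model and noiseless synthetic data below are a deliberately small sketch of the idea.

```python
import numpy as np

# Toy rate law: dx/dt = k1*s(t) - k2*x, with s(t) a known input.
k1_true, k2_true = 2.0, 0.5
t = np.linspace(0.0, 5.0, 201)
s = np.exp(-t)                        # known substrate profile
# Closed-form solution of this linear ODE with x(0) = 0:
x = k1_true * (np.exp(-k2_true * t) - np.exp(-t)) / (1.0 - k2_true)

slopes = np.gradient(x, t)            # finite-difference dx/dt
A = np.column_stack([s, -x])          # dx/dt = [s, -x] @ [k1, k2]
k1_hat, k2_hat = np.linalg.lstsq(A, slopes, rcond=None)[0]
```

With noisy data the slope estimates degrade, which is exactly why the method iterates with a second phase that minimizes concentration errors on the ODE solution itself.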
Quadratic semiparametric Von Mises calculus
Robins, James; Li, Lingling; Tchetgen, Eric
2009-01-01
We discuss a new method of estimation of parameters in semiparametric and nonparametric models. The method is based on U-statistics constructed from quadratic influence functions. The latter extend ordinary linear influence functions of the parameter of interest as defined in semiparametric theory, and represent second order derivatives of this parameter. For parameters for which the matching cannot be perfect, the method leads to a bias-variance trade-off, and results in estimators that converge at a slower than n^{-1/2} rate. In a number of examples the resulting rate can be shown to be optimal. We are particularly interested in estimating parameters in models with a nuisance parameter of high dimension or low regularity, where the parameter of interest cannot be estimated at the n^{-1/2} rate. PMID:23087487
Estimation of the ARNO model baseflow parameters using daily streamflow data
NASA Astrophysics Data System (ADS)
Abdulla, F. A.; Lettenmaier, D. P.; Liang, Xu
1999-09-01
An approach is described for estimation of baseflow parameters of the ARNO model, using historical baseflow recession sequences extracted from daily streamflow records. This approach allows four of the model parameters to be estimated without rainfall data, and effectively facilitates partitioning of the parameter estimation procedure so that parsimonious search procedures can be used to estimate the remaining storm response parameters separately. Three methods of optimization are evaluated for estimation of the four baseflow parameters. These methods are the downhill Simplex (S), Simulated Annealing combined with the Simplex method (SA), and Shuffled Complex Evolution (SCE). These estimation procedures are explored in conjunction with four objective functions: (1) ordinary least squares; (2) ordinary least squares with Box-Cox transformation; (3) ordinary least squares on prewhitened residuals; (4) ordinary least squares applied to prewhitened residuals with Box-Cox transformation. The effects of changing the seed of the random generator for both the SA and SCE methods are also explored, as are the effects of the bounds on the parameters. Although all schemes converge to the same values of the objective function, the SCE method was found to be less sensitive to these issues than both the SA and the Simplex schemes. Parameter uncertainty and interactions are investigated through estimation of the variance-covariance matrix and confidence intervals. As expected, the parameters were found to be correlated and the covariance matrix was found not to be diagonal. Furthermore, the linearized confidence interval theory failed for about one-fourth of the catchments, while the maximum likelihood theory did not fail for any of the catchments.
Information fusion methods based on physical laws.
Rao, Nageswara S V; Reister, David B; Barhen, Jacob
2005-01-01
We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
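The least-violation idea has a closed form in the simplest case of a single linear law: project the variance-weighted measurements onto the constraint surface. The law `p - c*rho = 0` and all numbers below are hypothetical; the paper handles general smooth and nonsmooth laws and gives distribution-free guarantees.

```python
import numpy as np

# Three parameters (p, v, rho), each with a noisy measurement/estimate
# and a known error variance, must satisfy the hypothetical linear law
# a @ x = 0, i.e. p - c*rho = 0.  The fused values minimize the
# variance-weighted distance to the measurements subject to the law
# (a constrained weighted least-squares projection).
c = 2.0
meas = np.array([4.3, 1.0, 2.4])     # measured p, v, rho
var = np.array([0.20, 0.05, 0.10])   # their error variances
a = np.array([1.0, 0.0, -c])         # law coefficients

# Closed form: x = meas - V a (a^T V a)^{-1} (a^T meas), V = diag(var).
V = np.diag(var)
lam = (a @ meas) / (a @ V @ a)
fused = meas - V @ a * lam           # exactly satisfies a @ fused = 0
```

Note how the correction is apportioned by variance: the less certain a measurement is, the more of the law violation it absorbs, while v, which the law does not constrain, is untouched.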
NASA Astrophysics Data System (ADS)
Nair, S. P.; Righetti, R.
2015-05-01
Recent elastography techniques focus on imaging properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain-versus-time relationships for tissues undergoing creep compression are non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method to assess non-linear LSE parameter estimate reliability, which we call Resimulation of Noise (RoN). RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is tested here only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
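The RoN procedure itself is simple to sketch: fit once, estimate the noise level from the residuals, re-add comparable noise to the fitted curve many times, refit each resimulation, and report the spread of the refitted parameters. The creep model and coarse grid search below are stand-ins for the article's non-linear LSE fit.

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_creep(t, strain):
    """Fit strain(t) = A*(1 - exp(-t/tau)): coarse grid over tau, with A
    solved in closed form (the model is linear in A given tau)."""
    best_tau, best_A, best_err = 0.0, 0.0, np.inf
    for tau in np.linspace(0.05, 5.0, 400):
        g = 1.0 - np.exp(-t / tau)
        A = (g @ strain) / (g @ g)
        err = float(np.sum((strain - A * g) ** 2))
        if err < best_err:
            best_tau, best_A, best_err = tau, A, err
    return best_tau, best_A

# One noisy creep realization (tau_true = 1.0, unit amplitude).
t = np.linspace(0.0, 4.0, 100)
strain = (1.0 - np.exp(-t / 1.0)) + rng.normal(0.0, 0.02, t.size)
tau_hat, A_hat = fit_creep(t, strain)

# Resimulation of Noise: re-add noise (std taken from the residuals) to
# the fitted curve, refit, and use the spread of refitted taus as the
# reliability measure for this single realization.
fitted = A_hat * (1.0 - np.exp(-t / tau_hat))
sigma = float(np.std(strain - fitted))
resim = [fit_creep(t, fitted + rng.normal(0.0, sigma, t.size))[0]
         for _ in range(50)]
tau_spread = float(np.std(resim))
```

Unlike a correlation coefficient, `tau_spread` is expressed in the units of the parameter itself, which is what makes it directly comparable to the estimator's actual precision.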
Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J
2018-07-01
Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
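The FIM-based selection step can be sketched as follows: form the output Jacobian at nominal parameter values, build the Fisher information matrix, and rank parameters by their Cramer-Rao uncertainty bounds relative to their nominal magnitudes. The model, nominal values, and noise level below are hypothetical, not the head-tracking model of the paper.

```python
import numpy as np

# Toy model with an analytic Jacobian: y(t) = p0*exp(-p1*t) + p2*t + p3.
p = np.array([2.0, 1.5, 0.3, 0.1])            # nominal parameter values
t = np.linspace(0.0, 3.0, 60)
sigma = 0.05                                  # measurement noise std

J = np.column_stack([
    np.exp(-p[1] * t),                        # dy/dp0
    -p[0] * t * np.exp(-p[1] * t),            # dy/dp1
    t,                                        # dy/dp2
    np.ones_like(t),                          # dy/dp3
])
fim = J.T @ J / sigma**2                      # Fisher information matrix
crlb = np.sqrt(np.diag(np.linalg.inv(fim)))   # Cramer-Rao std lower bounds
rel = crlb / np.abs(p)                        # relative uncertainty
order = np.argsort(rel)                       # best-determined first
# Parameters early in `order` are worth estimating; the rest can be
# fixed at nominal values, mirroring the paper's reduction from
# twelve parameters down to five.
```

The ranking depends on the nominal values and the experiment design (here, the time grid), so in practice it is recomputed after a preliminary estimation pass, as the paper does.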
On-line implementation of nonlinear parameter estimation for the Space Shuttle main engine
NASA Technical Reports Server (NTRS)
Buckland, Julia H.; Musgrave, Jeffrey L.; Walker, Bruce K.
1992-01-01
We investigate the performance of a nonlinear estimation scheme applied to the estimation of several parameters in a performance model of the Space Shuttle Main Engine. The nonlinear estimator is based upon the extended Kalman filter, which has been augmented to provide estimates of several key performance variables. The estimated parameters are directly related to the efficiency of both the low-pressure and high-pressure fuel turbopumps. Decreases in the parameter estimates may be interpreted as degradations in turbine and/or pump efficiencies, which can be useful measures for an online health monitoring algorithm. This paper extends previous work, which focused on off-line parameter estimation, by investigating the filter's on-line potential from a computational standpoint. In addition, we examine the robustness of the algorithm to unmodeled dynamics. The filter uses a reduced-order model of the engine that includes only fuel-side dynamics. The on-line results produced during this study are comparable to off-line results generated previously. The results show that the parameter estimates are sensitive to dynamics not included in the filter model. Off-line results using an extended Kalman filter with a full-order engine model to address the robustness problems of the reduced-order model are also presented.
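State augmentation is the standard way to fold parameters into an extended Kalman filter: the unknown parameter is appended to the state vector with trivial dynamics, and the filter's Jacobian couples it to the measurements. The scalar toy plant below is illustrative, not the engine model.

```python
def ekf_augmented(zs, us, dt, q=1e-8, r=1e-4):
    # augmented state s = [x, theta] for the scalar plant
    #   x' = -theta * x + u,   z = x + measurement noise
    x, th = 0.0, 0.5                       # initial guesses (theta unknown)
    P = [[1.0, 0.0], [0.0, 1.0]]
    for z, u in zip(zs, us):
        # --- predict (Euler discretization; F is the state Jacobian) ---
        xp = x + dt * (-th * x + u)
        F = [[1.0 - dt * th, -dt * x], [0.0, 1.0]]
        A = [[sum(F[i][k] * P[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
        P = [[sum(A[i][k] * F[j][k] for k in range(2)) for j in range(2)]
             for i in range(2)]
        P[0][0] += q
        P[1][1] += q
        # --- update with measurement matrix H = [1, 0] ---
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        innov = z - xp
        x = xp + K[0] * innov
        th = th + K[1] * innov
        P = [[P[i][j] - K[i] * P[0][j] for j in range(2)] for i in range(2)]
    return th
```

A decreasing `th` estimate would play the role of the efficiency degradations described in the abstract.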
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1994-01-01
Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperation for performing different types of tasks. Movable monitors that match the corresponding camera orientations (pan, tilt, and roll) are provided in order to align the coordinate systems of all the monitors with the operator's internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator, since operators differ in their individual characteristics. The optimal viewing arrangement and system parameter configuration is determined and stored for each operator and each of many types of tasks, in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand controllers for each type of task. Robot joint locations, forces, and torques, as well as the operator's identity, are used to identify the type of task currently being performed, in order to call up a stored optimal viewing arrangement and system parameter configuration.
Efficient design of multituned transmission line NMR probes: the electrical engineering approach.
Frydel, J A; Krzystyniak, M; Pienkowski, D; Pietrzak, M; de Sousa Amadeu, N; Ratajczyk, T; Idzik, K; Gutmann, T; Tietze, D; Voigt, S; Fenn, A; Limbach, H H; Buntkowsky, G
2011-01-01
Transmission line-based multi-channel solid-state NMR probes have many advantages regarding the cost of construction, the number of RF channels, and the achievable RF power levels. Nevertheless, these probes are only rarely employed in solid-state NMR labs, mainly owing to the difficult experimental determination of the necessary RF parameters. Here, the efficient design of multi-channel solid-state MAS-NMR probes employing transmission line theory and modern techniques of electrical engineering is presented. As a technical realization, a five-channel ((1)H, (31)P, (13)C, (2)H and (15)N) probe for operation at 7 Tesla is described. The design goal is a very cost-efficient, multi-port, single-coil transmission line probe based on the design developed by Schaefer and McKay. The electrical performance of the probe is determined by measuring the scattering matrix parameters (S-parameters) at particular input/output ports. These measured parameters are compared to the calculated parameters of the design obtained with the S-matrix formalism. It is shown that the S-matrix formalism provides an excellent tool for the examination of transmission line probes, and thus for their rational design. The resulting design also provides excellent electrical performance. From the point of view of Nuclear Magnetic Resonance (NMR), calibration spectra of the particular ports (channels) are of great importance; the estimation of the π/2 pulse lengths for all five NMR channels is presented. Copyright © 2011 Elsevier Inc. All rights reserved.
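The transmission-line calculations underlying such a design reduce to impedance transformations along line sections. A minimal lossless-line sketch (not the actual five-channel network):

```python
import cmath
import math

def input_impedance(z0, zl, beta_l):
    # lossless transmission-line impedance transformation:
    # Zin = Z0 * (ZL + j*Z0*tan(beta*l)) / (Z0 + j*ZL*tan(beta*l))
    t = complex(0.0, math.tan(beta_l))
    return z0 * (zl + z0 * t) / (z0 + zl * t)

def s11(z0, zl, beta_l):
    # reflection coefficient (S11) seen looking into the line section
    zin = input_impedance(z0, zl, beta_l)
    return (zin - z0) / (zin + z0)
```

For a matched load S11 vanishes for any line length, while a quarter-wave section (beta*l = π/2) transforms ZL into approximately Z0²/ZL, which is the basic building block of multi-tuned matching networks.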
Influence of speckle image reconstruction on photometric precision for large solar telescopes
NASA Astrophysics Data System (ADS)
Peck, C. L.; Wöger, F.; Marino, J.
2017-11-01
Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
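The reconstruction step is a deconvolution with a model transfer function. A 1D toy version (Gaussian stand-ins for the speckle transfer function and direct DFTs, purely illustrative) shows how a mis-estimated model parameter propagates to photometric error:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * k * m / n) for k in range(n))
            for m in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[m] * cmath.exp(2j * math.pi * k * m / n) for m in range(n)).real / n
            for k in range(n)]

def transfer(n, sigma):
    # Gaussian stand-in for an atmospheric / speckle transfer function
    return [math.exp(-(min(m, n - m) / sigma) ** 2) for m in range(n)]

def photometric_error(scene, t_true, t_model):
    # blur with the "true" transfer function, deconvolve with the model,
    # and report the worst-case relative intensity error
    blurred = [s * a for s, a in zip(dft(scene), t_true)]
    restored = idft([b / a for b, a in zip(blurred, t_model)])
    return max(abs(r - s) / s for r, s in zip(restored, scene))
```

With a perfectly known transfer function the error is at the level of floating-point noise; a few-percent error in the model width yields a photometric error of order one percent, mirroring the sensitivity analysis in the abstract.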
Efficient estimation of Pareto model: Some modified percentile estimators.
Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali
2018-01-01
The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on the median, the geometric mean, and the expectation of the empirical cumulative distribution function of the first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. The performance of the different estimators is assessed in terms of total mean square error and total relative deviation. The modified percentile estimator based on the expectation of the empirical cumulative distribution function of the first-order statistic provides efficient and precise parameter estimates compared to the other estimators considered. The simulation results were further confirmed using two real-life examples, where maximum likelihood and moment estimators were also considered.
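A percentile estimator in this spirit can be sketched by combining the median with the first-order-statistic expectation E[F(X(1))] = 1/(n+1); the formulas below are an illustration of the approach, not the authors' exact estimators.

```python
import math
import random

def pareto_sample(alpha, xm, n, rng):
    # inverse-CDF sampling: X = xm * U**(-1/alpha), U in (0, 1]
    return sorted(xm * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n))

def percentile_estimator(xs):
    # solve the two percentile equations of the Pareto CDF
    #   (xm / x(1))^alpha = n/(n+1)      [from E[F(X(1))] = 1/(n+1)]
    #   (xm / med)^alpha  = 1/2          [F(median) = 1/2]
    n = len(xs)
    med = (xs[n // 2] + xs[(n - 1) // 2]) / 2.0
    la = math.log(n / (n + 1.0))
    lb = math.log(0.5)
    lx1, lmed = math.log(xs[0]), math.log(med)
    alpha = (la - lb) / (lmed - lx1)
    xm = math.exp(lx1 + la / alpha)
    return alpha, xm
```

Subtracting the two log-equations eliminates the scale xm, so the shape alpha comes from the spread between the minimum and the median alone.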
Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean
NASA Astrophysics Data System (ADS)
Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.
2011-12-01
Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM) varying parameters that affect climate sensitivity, vertical ocean mixing, and effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting, and a Markov chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions and by the scaling parameter for the aerosols. The estimation method is computationally fast and can be used with more complex models where climate sensitivity is diagnosed rather than prescribed. The parameter estimates can be used to create probabilistic climate projections using the UVic ESCM model in future studies.
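The emulate-then-sample workflow can be sketched in a few lines; the toy simulator and the piecewise-linear interpolator below stand in for the UVic ESCM and the Gaussian process emulator, respectively, and all numbers are illustrative.

```python
import math
import random

def simulator(s):
    # stand-in for an expensive ESM run: warming as a saturating
    # function of climate sensitivity s (purely illustrative)
    return 1.2 * s / (1.0 + 0.1 * s)

design = [0.5 + 0.5 * k for k in range(16)]      # the "ensemble" of runs
runs = [simulator(s) for s in design]

def emulate(s):
    # piecewise-linear interpolation standing in for the GP emulator
    if s <= design[0]:
        return runs[0]
    if s >= design[-1]:
        return runs[-1]
    i = int((s - design[0]) / 0.5)
    w = (s - design[i]) / 0.5
    return (1.0 - w) * runs[i] + w * runs[i + 1]

def posterior_samples(y_obs, sigma, n, rng):
    # Metropolis sampling of p(s | y_obs) with a uniform prior on [0.5, 8]
    def logpost(s):
        if not 0.5 <= s <= 8.0:
            return -math.inf
        return -0.5 * ((y_obs - emulate(s)) / sigma) ** 2
    s = 3.0
    lp = logpost(s)
    out = []
    for _ in range(n):
        cand = s + rng.gauss(0.0, 0.5)
        lpc = logpost(cand)
        if math.log(rng.random() + 1e-300) < lpc - lp:
            s, lp = cand, lpc
        out.append(s)
    return out
```

The emulator makes each MCMC step cheap, which is what allows exploring prior sensitivity and observation skill without rerunning the full model.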
Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models
ERIC Educational Resources Information Center
Raykov, Tenko
2005-01-01
A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between the average of the conventional noncentrality parameter estimates across resamples and their sample counterpart. The…
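The generic bootstrap bias correction behind such an estimator (illustrated here on the plug-in variance rather than on noncentrality parameters) is: the average of the resample estimates minus the sample estimate approximates the bias, so subtracting it gives the corrected value 2·θ̂ − mean(θ̂*).

```python
import random

def plugin_var(xs):
    # plug-in variance, downward-biased by a factor (n-1)/n
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def bias_corrected(xs, estimator, n_boot, rng):
    # bootstrap bias correction:
    #   bias ~ mean(resample estimates) - theta_hat
    #   theta_bc = 2*theta_hat - mean(resample estimates)
    theta = estimator(xs)
    boot = []
    for _ in range(n_boot):
        res = [xs[rng.randrange(len(xs))] for _ in xs]
        boot.append(estimator(res))
    return 2.0 * theta - sum(boot) / n_boot
```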
NASA Astrophysics Data System (ADS)
Amiri-Simkooei, A. R.
2018-01-01
Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axis rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although some geodetic applications use simplified transformation models based on the assumption of small transformation parameters, in other fields of application such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology applies to the case in which the transformation parameters are large, and no approximate values of the parameters are required; direct linearization of the rotation and scale parameters is thus avoided. The WTLS formulation takes into consideration errors in both the start and target systems in the estimation of the transformation parameters. Two well-known families of 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated within standard least-squares theory with constraints, the covariance matrix of the transformation parameters is directly available. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using least-squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.
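For intuition, a 2D analogue of the similarity transformation can be estimated in closed form with ordinary least squares by writing the map as a complex-linear relation w = a·z + b; unlike the paper's WTLS, this sketch ignores errors in the source coordinates.

```python
import cmath

def similarity_2d(src, dst):
    # 2D similarity: w = a*z + b with complex a = scale * exp(i*rotation);
    # ordinary least squares over point pairs (errors assumed only in dst)
    n = len(src)
    z = [complex(x, y) for x, y in src]
    w = [complex(x, y) for x, y in dst]
    zm, wm = sum(z) / n, sum(w) / n
    a = (sum((wi - wm) * (zi - zm).conjugate() for zi, wi in zip(z, w))
         / sum(abs(zi - zm) ** 2 for zi in z))
    b = wm - a * zm
    return abs(a), cmath.phase(a), (b.real, b.imag)
```

No linearization or approximate values are needed here either, which is the property the WTLS formulation preserves for the full 3D case with large rotations.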
Cosmological parameter estimation using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Prasad, J.; Souradeep, T.
2014-03-01
Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models that demand a greater number of cosmological parameters than the standard model of cosmology uses, making the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), to estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
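A minimal PSO over a toy two-parameter "likelihood surface" (an illustrative quadratic, not a CMB likelihood) shows the mechanics: each particle tracks its personal best, the swarm shares a global best, and the velocity update blends inertia with attraction to both.

```python
import random

def pso(neg_log_like, bounds, n_particles=20, iters=200, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, social weights
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [neg_log_like(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = neg_log_like(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest

def nll(p):
    # hypothetical two-parameter surface with a single sharp minimum
    om, h = p
    return (om - 0.3) ** 2 / 0.001 + (h - 0.7) ** 2 / 0.002

best = pso(nll, [(0.0, 1.0), (0.4, 1.0)])
```

Unlike MCMC, PSO only seeks the maximum-likelihood point; it does not return a posterior, which is the trade-off the paper discusses.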
Nonlinear, discrete flood event models, 1. Bayesian estimation of parameters
NASA Astrophysics Data System (ADS)
Bates, Bryson C.; Townley, Lloyd R.
1988-05-01
In this paper (Part 1), a Bayesian procedure for parameter estimation is applied to discrete flood event models. The essence of the procedure is the minimisation of a sum-of-squares function for models in which the computed peak discharge is nonlinear in terms of the parameters. This objective function depends on the observed and computed peak discharges for several storms on the catchment, information on the structure of observation error, and prior information on parameter values. The posterior covariance matrix gives a measure of the precision of the estimated parameters. The procedure is demonstrated using rainfall and runoff data from seven Australian catchments. It is concluded that the procedure is a powerful alternative to conventional parameter estimation techniques in situations where a number of floods are available for parameter estimation. Parts 2 and 3 (Bates, this volume; Bates and Townley, this volume) will discuss the application of statistical nonlinearity measures and prediction uncertainty analysis to calibrated flood models.
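The objective described, a sum-of-squares misfit plus a Gaussian prior penalty, can be sketched with a hypothetical two-parameter peak-discharge model q = a·P^b (not the paper's catchment models); a coarse grid search finds the posterior mode, which a Newton step would refine in practice.

```python
def neg_log_posterior(theta, storms, sigma2, prior_mean, prior_var):
    # misfit in peak discharge plus a Gaussian prior penalty on (a, b)
    a, b = theta
    ssq = sum((q - a * p ** b) ** 2 for p, q in storms) / sigma2
    pen = sum((t - m) ** 2 / v for t, m, v in zip(theta, prior_mean, prior_var))
    return 0.5 * (ssq + pen)

def map_estimate(storms, sigma2, prior_mean, prior_var):
    # coarse grid search for the posterior mode
    best = None
    for i in range(1, 201):
        for j in range(1, 201):
            theta = (0.01 * i, 0.01 * j)
            f = neg_log_posterior(theta, storms, sigma2, prior_mean, prior_var)
            if best is None or f < best[0]:
                best = (f, theta)
    return best[1]
```

The curvature of `neg_log_posterior` at the mode (its Hessian) is what yields the posterior covariance matrix the abstract mentions.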
Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.
Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K
2011-01-01
We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Estimating the uncertainty of diffusion parameters from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residual rescaling and cannot be utilized directly for body diffusion parameter uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the unscented transform to compute the residual-rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the uncertainty of the body diffusion parameters. Validation through phantom and human subject experiments shows that our method correctly identifies the regions of higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
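A plain wild bootstrap for a mono-exponential diffusion fit looks as follows; note that the paper's contribution, the unscented-transform residual rescaling, is deliberately omitted from this sketch, and the b-values and noise levels are illustrative.

```python
import math
import random
import statistics

def fit_adc(bvals, sig):
    # log-linear LSE fit of the mono-exponential model S(b) = S0 * exp(-b*ADC)
    ys = [math.log(s) for s in sig]
    n = len(bvals)
    bm, ym = sum(bvals) / n, sum(ys) / n
    slope = (sum((b - bm) * (y - ym) for b, y in zip(bvals, ys))
             / sum((b - bm) ** 2 for b in bvals))
    return -slope, math.exp(ym - slope * bm)       # (ADC, S0)

def wild_bootstrap_adc(bvals, sig, n_boot=200, seed=0):
    rng = random.Random(seed)
    adc, s0 = fit_adc(bvals, sig)
    fit = [s0 * math.exp(-b * adc) for b in bvals]
    resid = [s - f for s, f in zip(sig, fit)]
    boot = []
    for _ in range(n_boot):
        # wild bootstrap: flip each residual's sign independently (Rademacher)
        sim = [f + r * (1.0 if rng.random() < 0.5 else -1.0)
               for f, r in zip(fit, resid)]
        boot.append(fit_adc(bvals, sim)[0])
    return adc, statistics.stdev(boot)
```

Because the residuals are reused in place rather than pooled, no repeated acquisitions are needed, which is the clinical motivation given in the abstract.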
Estimating parameter of influenza transmission using regularized least square
NASA Astrophysics Data System (ADS)
Nuraini, N.; Syukriah, Y.; Indratno, S. W.
2014-02-01
The transmission of influenza can be represented mathematically as a system of non-linear differential equations. In this model the transmission of influenza is governed by a parameter: the contact rate between infected and susceptible hosts. This parameter is estimated using a regularized least-squares method, in which the finite element method and the Euler method are used to approximate the solution of the SIR differential equations. New influenza infection data from the CDC are used to assess the effectiveness of the method. The estimated parameter represents the proportion of daily contacts that transmit infection, which influences the number of people infected by influenza. The relation between the estimated parameter and the number of people infected is measured by the correlation coefficient. The numerical results show a positive correlation between the estimated parameters and the number of infected people.
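A sketch with a grid search in place of the paper's finite element machinery (the parameter grid, time step, and regularization weight are arbitrary choices here):

```python
def sir_new_infections(beta, gamma, s0, i0, days, dt=0.1):
    # Euler integration of the SIR model (fractions of the population);
    # returns the new infections accumulated in each day
    s, i = s0, i0
    steps = int(round(1.0 / dt))
    daily = []
    for _ in range(days):
        acc = 0.0
        for _ in range(steps):
            new = beta * s * i * dt
            s -= new
            i += new - gamma * i * dt
            acc += new
        daily.append(acc)
    return daily

def estimate_beta(data, gamma, s0, i0, lam=1e-3):
    # regularized least squares: data misfit + lam * beta**2,
    # minimized by grid search over the contact-rate parameter
    best = None
    for k in range(1, 301):
        beta = 0.005 * k
        model = sir_new_infections(beta, gamma, s0, i0, len(data))
        obj = sum((m - d) ** 2 for m, d in zip(model, data)) + lam * beta ** 2
        if best is None or obj < best[0]:
            best = (obj, beta)
    return best[1]
```

The regularization term keeps the estimate stable when the incidence data only weakly constrain the contact rate.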
NASA Astrophysics Data System (ADS)
Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung
2017-04-01
Due to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements at groundwater monitoring wells, and related methods such as the extended Kalman filter and the ensemble Kalman filter are widely applied in groundwater research. However, Kalman filter methods are limited to linear systems. This study proposes a novel method, Bayesian maximum entropy filtering, which can account for the uncertainty of the data in parameter estimation. With these two methods, we can estimate parameters from both hard (certain) and soft (uncertain) data at the same time. We use Python and QGIS with the MODFLOW groundwater model, implementing the extended Kalman filter and Bayesian maximum entropy filtering in Python for parameter estimation. This approach provides a conventional filtering method while also considering the uncertainty of the data. The study was conducted through numerical experiments that combine the Bayesian maximum entropy filter with a hypothesized MODFLOW groundwater model architecture, using virtual observation wells to observe the simulated groundwater system periodically. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides an ideal result for real-time parameter estimation.
Parameter estimating state reconstruction
NASA Technical Reports Server (NTRS)
George, E. B.
1976-01-01
Parameter estimation is considered for systems whose entire state cannot be measured. Linear observers are designed to recover the unmeasured states to a sufficient accuracy to permit the estimation process. There are three distinct dynamics that must be accommodated in the system design: the dynamics of the plant, the dynamics of the observer, and the system updating of the parameter estimation. The latter two are designed to minimize interaction of the involved systems. These techniques are extended to weakly nonlinear systems. The application to a simulation of a space shuttle POGO system test is of particular interest. A nonlinear simulation of the system is developed, observers designed, and the parameters estimated.
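The observer-then-estimate pattern can be sketched on a second-order plant with unknown damping; the gains, signals, and regression step below are illustrative choices, not the paper's design. A Luenberger observer built with a nominal parameter recovers the unmeasured velocity, and the parameter is then fit by least squares on the recovered state.

```python
import math

def observer_based_theta(theta_true=0.8, theta_nom=1.0, dt=0.001, steps=20000):
    # plant: pos' = vel, vel' = -theta*vel + u; only pos is measured
    pos = vel = ph = vh = 0.0
    l1, l2 = 200.0, 10000.0       # observer gains: poles much faster than plant
    num = den = 0.0
    for k in range(steps):
        u = math.sin(2.0 * k * dt)            # persistent excitation
        # true plant (Euler step)
        pos_n = pos + dt * vel
        vel_n = vel + dt * (-theta_true * vel + u)
        # observer driven by the measured output pos, with nominal theta
        e = pos - ph
        ph_n = ph + dt * (vh + l1 * e)
        vh_n = vh + dt * (-theta_nom * vh + u + l2 * e)
        if k > 2000:                          # after the observer transient
            # regress (u - d(vh)/dt) = theta * vh on the recovered state
            dv = (vh_n - vh) / dt
            num += vh * (u - dv)
            den += vh * vh
        pos, vel, ph, vh = pos_n, vel_n, ph_n, vh_n
    return num / den
```

The fast observer poles keep the estimation dynamics weakly coupled to the plant, which is the interaction-minimization idea described in the abstract.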
Parameter Estimation in Atmospheric Data Sets
NASA Technical Reports Server (NTRS)
Wenig, Mark; Colarco, Peter
2004-01-01
In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation, from image sequences, of the physical parameters that govern the underlying processes. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario, this technique will be applied to modeled dust data: vertically integrated dust concentrations were used to derive wind information, and those results can be compared to the wind vector fields which served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
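For a uniform 1D drift, the structure-tensor/brightness-constancy estimate reduces to a least-squares ratio of image gradients; the drifting Gaussian "dust blob" below is a toy stand-in for the model data.

```python
import math

def estimate_velocity(frames, dt=1.0, dx=1.0):
    # brightness constancy Ix * v + It = 0, solved in least squares over
    # all pixels and frame pairs: v = -sum(Ix*It) / sum(Ix*Ix)
    num = den = 0.0
    for t in range(len(frames) - 1):
        f0, f1 = frames[t], frames[t + 1]
        for x in range(1, len(f0) - 1):
            ix = (f0[x + 1] - f0[x - 1]) / (2.0 * dx)   # spatial gradient
            it = (f1[x] - f0[x]) / dt                   # temporal gradient
            num += ix * it
            den += ix * ix
    return -num / den
```

The same accumulation generalizes to diffusion or decay rates by adding the corresponding terms to the brightness-change model, which is the extension the abstract describes.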
Estimation of delays and other parameters in nonlinear functional differential equations
NASA Technical Reports Server (NTRS)
Banks, H. T.; Lamm, P. K. D.
1983-01-01
A spline-based approximation scheme for nonlinear nonautonomous delay differential equations is discussed. Convergence results (using dissipative type estimates on the underlying nonlinear operators) are given in the context of parameter estimation problems which include estimation of multiple delays and initial data as well as the usual coefficient-type parameters. A brief summary of some of the related numerical findings is also given.
Improved battery parameter estimation method considering operating scenarios for HEV/EV applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Jufeng; Xia, Bing; Shang, Yunlong
2016-12-22
This study presents an improved battery parameter estimation method based on typical operating scenarios in hybrid electric vehicles and pure electric vehicles. Compared with the conventional estimation methods, the proposed method takes both the constant-current charging and the dynamic driving scenarios into account, and two separate sets of model parameters are estimated through different parts of the pulse-rest test. The model parameters for the constant-charging scenario are estimated from the data in the pulse-charging periods, while the model parameters for the dynamic driving scenario are estimated from the data in the rest periods, and the length of the fitted dataset is determined by the spectrum analysis of the load current. In addition, the unsaturated phenomenon caused by the long-term resistor-capacitor (RC) network is analyzed, and the initial voltage expressions of the RC networks in the fitting functions are improved to ensure a higher model fidelity. Simulation and experiment results validated the feasibility of the developed estimation method.
Lin, Jen-Jen; Cheng, Jung-Yu; Huang, Li-Fei; Lin, Ying-Hsiu; Wan, Yung-Liang; Tsui, Po-Hsiang
2017-05-01
The Nakagami distribution is a useful approximation for the statistics of ultrasound backscattered signals in tissue characterization. The choice of estimator may affect the Nakagami parameter's ability to detect changes in backscattered statistics. In particular, the moment-based estimator (MBE) and the maximum likelihood estimator (MLE) are the two primary methods used to estimate the Nakagami parameters of ultrasound signals. This study explored the effects of the MBE and different MLE approximations on Nakagami parameter estimation. Ultrasound backscattered signals of different scatterer number densities were generated using a simulation model, and phantom experiments and measurements of human liver tissues were also conducted to acquire real backscattered echoes. Envelope signals were employed to estimate the Nakagami parameters using the MBE, the first- and second-order approximations of the MLE (MLE1 and MLE2, respectively), and the Greenwood approximation (MLEgw) for comparison. The simulation results demonstrated that, compared with the MBE and MLE1, the MLE2 and MLEgw enabled more stable parameter estimation with small sample sizes. Notably, the required data length of the envelope signal was 3.6 times the pulse length. The phantom and tissue measurement results also showed that the Nakagami parameters estimated using the MLE2 and MLEgw could simultaneously differentiate various scatterer concentrations with lower standard deviations and reliably reflect the physical meanings associated with the backscattered statistics. Therefore, the MLE2 and MLEgw are suggested as estimators for the development of Nakagami-based methodologies for ultrasound tissue characterization. Copyright © 2017 Elsevier B.V. All rights reserved.
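The MBE is the simplest of the compared estimators; from envelope samples it reads, in sketch form (the synthetic Rayleigh envelope used below is illustrative):

```python
def nakagami_mbe(env):
    # moment-based estimator from the envelope samples R:
    #   Omega = E[R^2],  m = (E[R^2])^2 / Var(R^2)
    r2 = [x * x for x in env]
    mean2 = sum(r2) / len(r2)
    var2 = sum((v - mean2) ** 2 for v in r2) / len(r2)
    return mean2 ** 2 / var2, mean2        # (m, Omega)
```

For Rayleigh scattering (fully developed speckle) the estimator should return m near 1; the instability the paper reports appears when the sample size shrinks well below the lengths used here.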
NASA Astrophysics Data System (ADS)
Doury, Maxime; Dizeux, Alexandre; de Cesare, Alain; Lucidarme, Olivier; Pellot-Barakat, Claire; Bridal, S. Lori; Frouin, Frédérique
2017-02-01
Dynamic contrast-enhanced ultrasound has been proposed to monitor tumor therapy as a complement to volume measurements. To assess the variability of perfusion parameters under ideal conditions, four consecutive test-retest studies were acquired in a mouse tumor model using controlled injections. The impact of mathematical modeling on parameter variability was then investigated. Coefficients of variation (CV) of parameters based on tissue blood volume (BV) and tissue blood flow (BF) were estimated inside 32 sub-regions of the tumors, comparing the log-normal (LN) model with a one-compartment model fed by an arterial input function (AIF) and improved by the introduction of a time-delay parameter. Relative perfusion parameters were also estimated by normalizing the LN parameters, and by normalizing the one-compartment parameters estimated with the AIF, using a reference tissue (RT) region. A direct estimation (rRTd) of relative parameters, based on the one-compartment model without the AIF, was also obtained by using the kinetics inside the RT region. The test-retest results show that absolute regional parameters have high CV, whatever the approach, with median values of about 30% for BV and 40% for BF. The positive impact of normalization was established, showing coherent estimation of relative parameters with reduced CV (about 20% for BV and 30% for BF using the rRTd approach). These values were significantly lower (p < 0.05) than the CV of the absolute parameters. The rRTd approach provided the smallest CV and should be preferred for estimating relative perfusion parameters.
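The test-retest variability and reference-tissue normalization amount to simple computations; a sketch with hypothetical parameter values:

```python
import statistics

def cv_percent(estimates):
    # coefficient of variation of repeated (test-retest) estimates
    return 100.0 * statistics.stdev(estimates) / statistics.mean(estimates)

def normalize_by_reference(param_map, ref_value):
    # relative perfusion: each regional estimate divided by the value
    # estimated in a reference-tissue region of the same acquisition
    return {region: v / ref_value for region, v in param_map.items()}
```

Normalization cancels acquisition-to-acquisition gain and injection differences, which is why the relative parameters show the lower CVs reported above.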
Noise normalization and windowing functions for VALIDAR in wind parameter estimation
NASA Astrophysics Data System (ADS)
Beyon, Jeffrey Y.; Koch, Grady J.; Li, Zhiwen
2006-05-01
The wind parameter estimates from a state-of-the-art 2-μm coherent lidar system located at NASA Langley, Virginia, named VALIDAR (validation lidar), were compared after normalizing the noise by its estimated power spectrum via the periodogram and the linear predictive coding (LPC) scheme. The power spectrum and the Doppler shift estimates were the main parameter estimates compared. Different types of windowing functions were implemented in the VALIDAR data processing algorithm and their impact on the wind parameter estimates was observed. Time- and frequency-independent windowing functions, such as rectangular, Hanning, and Kaiser-Bessel, were compared with a time- and frequency-dependent apodized windowing function. A brief overview of ongoing nonlinear algorithm development for Doppler shift correction follows.
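Windowed periodogram estimation plus noise normalization can be sketched directly; the direct DFT and the flat noise floor below are illustrative simplifications of the actual processing chain.

```python
import cmath
import math

def hanning(i, n):
    return 0.5 - 0.5 * math.cos(2.0 * math.pi * i / (n - 1))

def rectangular(i, n):
    return 1.0

def periodogram(x, window):
    # windowed periodogram via a direct DFT (an FFT would be used in practice)
    n = len(x)
    xw = [x[i] * window(i, n) for i in range(n)]
    return [abs(sum(xw[k] * cmath.exp(-2j * math.pi * k * m / n)
                    for k in range(n))) ** 2 / n for m in range(n)]

def doppler_bin(x, window, noise_psd):
    # normalize by the estimated noise power spectrum, then peak-pick:
    # the whitened peak location is the Doppler-shift estimate
    p = periodogram(x, window)
    whitened = [pi / ni for pi, ni in zip(p, noise_psd)]
    return max(range(len(x) // 2), key=lambda m: whitened[m])
```

Swapping `hanning` for `rectangular` (or a Kaiser-Bessel window) is exactly the comparison the abstract describes: the window trades spectral leakage against main-lobe width in the Doppler peak.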
Estimation of Graded Response Model Parameters Using MULTILOG.
ERIC Educational Resources Information Center
Baker, Frank B.
1997-01-01
Describes an idiosyncrasy of the MULTILOG (D. Thissen, 1991) parameter estimation process discovered during a simulation study involving the graded response model. A misordering reflected in the boundary function location parameter estimates resulted in a large negative contribution to the true score followed by a large positive contribution. These…
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
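With a Gamma(a0, b0) prior (shape-rate), the Poisson posterior is conjugate, so the Bayes estimators under squared-error loss are closed-form; here exp(-λ) is used as a stand-in single-period reliability (the probability of zero failures), an illustrative choice rather than the paper's exact reliability function.

```python
def poisson_gamma_posterior(counts, a0, b0):
    # conjugate update: Gamma(a0, b0) prior with Poisson counts gives
    # posterior Gamma(a0 + sum(counts), b0 + n)
    a = a0 + sum(counts)
    b = b0 + len(counts)
    lam_bayes = a / b                       # posterior mean of lambda
    rel_bayes = (b / (b + 1.0)) ** a        # posterior mean of exp(-lambda)
    return lam_bayes, rel_bayes
```

The posterior mean of exp(-λ) follows from the Gamma moment-generating function evaluated at -1, which is why it is available without any numerical integration.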
NASA Astrophysics Data System (ADS)
Simon, E.; Bertino, L.; Samuelsen, A.
2011-12-01
Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the non-Gaussian distribution of the variables in which they result. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous works [1] demonstrated that the Gaussian anamorphosis extensions of ensemble-based Kalman filters are relevant tools to perform combined state-parameter estimation in such a non-Gaussian framework. In this study, we focus on the estimation of the grazing preference parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normally distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized in the 1D coupled model GOTM-NORWECOM with Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon, E., Bertino, L.: Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation: application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi:10.1016/j.jmarsys.2011.07.007 [2] Gelman, A.: Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4(1), 36-54, 1995.
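The spherical-coordinates change of variables can be written so that the filter updates unconstrained angles while the implied preferences are always positive and sum to one: a point on the unit sphere has squared coordinates that lie on the simplex. A sketch of the mapping (the specific angle values are arbitrary):

```python
import math

def angles_to_preferences(phis):
    # n angles -> n+1 grazing preferences; telescoping identity
    # cos^2(p1) + sin^2(p1)*cos^2(p2) + sin^2(p1)*sin^2(p2)*... = 1
    # guarantees positivity and the sum-to-one constraint by construction
    prefs = []
    s = 1.0
    for phi in phis:
        prefs.append(s * math.cos(phi) ** 2)
        s *= math.sin(phi) ** 2
    prefs.append(s)
    return prefs
```

A filter such as the DEnKF can then perturb and update the angles freely; every ensemble member maps back to a valid preference vector.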
Relative effects of survival and reproduction on the population dynamics of emperor geese
Schmutz, Joel A.; Rockwell, Robert F.; Petersen, Margaret R.
1997-01-01
Populations of emperor geese (Chen canagica) in Alaska declined sometime between the mid-1960s and the mid-1980s and have increased little since. To promote recovery of this species to former levels, managers need to know how much their perturbations of survival and/or reproduction would affect population growth rate (λ). We constructed an individual-based population model to evaluate the relative effect of altering mean values of various survival and reproductive parameters on λ and fall age structure (AS, defined as the proportion of juveniles), assuming additive rather than compensatory relations among parameters. Altering survival of adults had markedly greater relative effects on λ than did equally proportionate changes in either juvenile survival or reproductive parameters. We found the opposite pattern for relative effects on AS. Due to concerns about bias in the initial parameter estimates used in our model, we used 5 additional sets of parameter estimates with this model structure. We found that estimates of survival based on aerial survey data gathered each fall resulted in models that corresponded more closely to independent estimates of λ than did models that used mark-recapture estimates of survival. This disparity suggests that mark-recapture estimates of survival are biased low. To further explore how parameter estimates affected estimates of λ, we used values of survival and reproduction found in other goose species, and we examined the effect of a hypothesized correlation between an individual's clutch size and the subsequent survival of her young. The rank order of parameters in their relative effects on λ was consistent for all 6 parameter sets we examined. The observed variation in relative effects on λ among the 6 parameter sets is indicative of how relative effects on λ may vary among goose populations.
With this knowledge of the relative effects of survival and reproductive parameters on λ, managers can make more informed decisions about which parameters to influence through management or to target for future study.
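The pattern described above, that proportional changes in adult survival move λ more than equal proportional changes in juvenile survival or fecundity, can be illustrated with a toy two-stage matrix model. The parameter values below are generic goose-like numbers chosen for illustration, not the paper's estimates.

```python
import numpy as np

def growth_rate(s_juv, s_ad, fecundity):
    """Dominant eigenvalue (lambda) of a toy two-stage projection matrix:
    stage 1 = juveniles, stage 2 = adults."""
    A = np.array([[0.0, fecundity],
                  [s_juv, s_ad]])
    return max(abs(np.linalg.eigvals(A)))

base = growth_rate(0.4, 0.85, 0.9)
# Proportional (+10%) perturbations, one parameter at a time:
d_adult = growth_rate(0.4, 0.85 * 1.1, 0.9) - base
d_juv   = growth_rate(0.4 * 1.1, 0.85, 0.9) - base
d_fec   = growth_rate(0.4, 0.85, 0.9 * 1.1) - base
```

For long-lived birds the adult-survival perturbation dominates, which is the qualitative result the study reports for emperor geese.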
Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV
NASA Astrophysics Data System (ADS)
Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.
2011-04-01
When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach, as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, and thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions, were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study.
This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.
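The mechanics of parameter coupling (one shared rate fitted jointly across regions, with region-specific amplitudes) can be sketched with a deliberately simplified toy model. This is not the SRTM equations themselves, only an illustration of fitting a shared parameter across regions with `scipy.optimize.least_squares`; all values are made up.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
b_true = 0.5                         # the "coupled" rate, shared by all regions
a_true = [2.0, 1.0, 3.0]             # region-specific amplitudes
data = [a * np.exp(-b_true * t) + 0.05 * rng.standard_normal(t.size)
        for a in a_true]

def residuals(p):
    """Stack residuals from all regions; p = [shared b, a_1, a_2, a_3]."""
    b, amps = p[0], p[1:]
    return np.concatenate([y - a * np.exp(-b * t)
                           for y, a in zip(data, amps)])

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0, 1.0])
b_hat, a_hat = fit.x[0], fit.x[1:]
```

Because every region's data constrains the single shared rate, its variance is lower than when each region estimates its own copy, which is the effect the paper exploits for kr2.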
Table look-up estimation of signal and noise parameters from quantized observables
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1986-01-01
A table look-up algorithm for estimating underlying signal and noise parameters from quantized observables is examined. A general mathematical model is developed, and a look-up table designed specifically for estimating parameters from four-bit quantized data is described. Estimator performance is evaluated both analytically and by means of numerical simulation, and an example is provided to illustrate the use of the look-up table for estimating signal-to-noise ratios commonly encountered in Voyager-type data.
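The idea of a look-up table estimator can be shown with a much-reduced toy version: precompute, over a grid of candidate noise levels, the expected fraction of samples falling in the quantizer's inner bins, then invert an observed fraction by nearest-neighbor table search. The one-parameter, coarse-quantizer setting below is an assumption for illustration; the paper's table handles joint signal and noise parameters from four-bit data.

```python
import math
import numpy as np

# Table: for each candidate sigma, the expected fraction of zero-mean
# Gaussian samples with |x| < 1 (the inner bins of a quantizer with
# thresholds at +/-1): P(|x| < 1) = erf(1 / (sigma*sqrt(2))).
sigmas = np.linspace(0.2, 5.0, 500)
table = np.array([math.erf(1.0 / (s * math.sqrt(2.0))) for s in sigmas])

def lookup_sigma(samples):
    """Estimate sigma from the quantized observable (inner-bin fraction)."""
    frac = np.mean(np.abs(samples) < 1.0)
    return sigmas[np.argmin(np.abs(table - frac))]

rng = np.random.default_rng(1)
sigma_hat = lookup_sigma(rng.normal(0.0, 1.5, 100_000))
```

The appeal is the same as in the Voyager application: the expensive statistics are computed once offline, and each estimate at run time is a cheap table search.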
Comparing Three Estimation Methods for the Three-Parameter Logistic IRT Model
ERIC Educational Resources Information Center
Lamsal, Sunil
2015-01-01
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include the marginal maximum likelihood estimation, the fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and the Metropolis-Hastings Robbin-Monro estimation. With each…
NASA Astrophysics Data System (ADS)
Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael
2016-04-01
The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups since the observed satellite orbit dynamics are sensitive to the above mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single-satellite SLR solutions and (ii) multi-satellite SLR solutions. To this end, we jointly estimate station coordinates, EOP, Stokes coefficients and orbit parameters using different satellite constellations. A special focus in this investigation is put on the de-correlation of different geodetic parameter groups through the combination of SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites such as, e.g., the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy for obtaining reliable estimates of station coordinates, EOP, orbit parameters and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.
Okahara, Shigeyuki; Zu Soh; Takahashi, Shinya; Sueda, Taijiro; Tsuji, Toshio
2016-08-01
We proposed a blood viscosity estimation method based on the pressure-flow characteristics of oxygenators used during cardiopulmonary bypass (CPB) in a previous study, which showed the estimated viscosity to correlate well with the measured viscosity. However, determining the parameters of that method required the use of blood, leading to a high cost of calibration. Therefore, in this study we propose a new method to monitor blood viscosity, which approximates the pressure-flow characteristics of blood, a non-Newtonian fluid, with those of a Newtonian fluid, using parameters derived from glycerin solution, which is easy to acquire. Because the parameters used in the estimation method depend on the fluid type, bovine blood parameters were used to calculate the estimated viscosity (ηe), and glycerin parameters were used to calculate the deemed viscosity (ηdeem). Three samples of whole bovine blood with different hematocrit levels (21.8%, 31.0%, and 39.8%) were prepared and perfused through the oxygenator. As the temperature changed from 37 °C to 27 °C, the oxygenator mean inlet pressure and outlet pressure were recorded for flows of 2 L/min and 4 L/min, and the viscosity was estimated. The deemed viscosity calculated with the glycerin parameters was lower than the estimated viscosity calculated with the bovine blood parameters by 20-33% at 21.8% hematocrit, 12-27% at 31.0% hematocrit, and 10-15% at 39.8% hematocrit. Furthermore, the deemed viscosity was lower than the estimated viscosity by 10-30% at 2 L/min and 30-40% at 4 L/min. Nevertheless, the estimated and deemed viscosities varied with a similar slope. This shows that the deemed viscosity obtained with glycerin parameters may be capable of monitoring relative viscosity changes of blood in a perfusing oxygenator.
Nam, Kanghyun
2015-11-11
This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least-squares algorithm is adopted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, which is an important tire parameter dominating the vehicle's cornering responses, is estimated. For practical implementation, a cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, the proposed estimation algorithms were evaluated using experimental test data.
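The recursive least-squares step at the core of such identification can be sketched generically: a scalar unknown θ (standing in for something like a scaled cornering stiffness) is updated sample by sample from a regressor φ (e.g. a slip-angle term) and a measurement y. The vehicle-specific regressor construction from the tire force models is omitted; the data below are synthetic.

```python
import numpy as np

def rls(phi, y, lam=0.99, theta0=0.0, p0=100.0):
    """Scalar recursive least squares with forgetting factor lam.
    Returns the final parameter estimate."""
    theta, P = theta0, p0
    for ph, yk in zip(phi, y):
        K = P * ph / (lam + ph * P * ph)      # gain
        theta = theta + K * (yk - ph * theta) # correction by prediction error
        P = (P - K * ph * P) / lam            # covariance update
    return theta

rng = np.random.default_rng(2)
phi = rng.uniform(0.5, 2.0, 500)      # hypothetical regressor sequence
theta_true = 3.0
y = theta_true * phi + 0.1 * rng.standard_normal(500)
theta_hat = rls(phi, y)
```

The forgetting factor (here 0.99) trades tracking speed for noise rejection, which matters when the tire parameter drifts with road surface and load.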
NASA Astrophysics Data System (ADS)
O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.
2017-07-01
The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to expense in waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all ℓ ≤ 4 modes. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
1998-01-01
Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…
Reliability analysis of structural ceramic components using a three-parameter Weibull distribution
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Powers, Lynn M.; Starlinger, Alois
1992-01-01
Described here are nonlinear regression estimators for the three-parameter Weibull distribution. Issues relating to the bias and invariance associated with these estimators are examined numerically using Monte Carlo simulation methods. The estimators were used to extract parameters from sintered silicon nitride failure data. A reliability analysis was performed on a turbopump blade utilizing the three-parameter Weibull distribution and the estimates from the sintered silicon nitride data.
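A nonlinear regression estimator of this kind can be sketched by fitting the three-parameter Weibull CDF to an empirical CDF by least squares. The median-rank plotting positions and starting values below are common choices, assumed for illustration rather than taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(x, shape, scale, loc):
    """Three-parameter Weibull CDF; zero below the threshold loc."""
    z = np.clip((x - loc) / scale, 0.0, None)
    return 1.0 - np.exp(-z ** shape)

rng = np.random.default_rng(3)
# Synthetic "failure strengths": shape 2, scale 3, threshold 5.
x = 5.0 + 3.0 * rng.weibull(2.0, 2000)

xs = np.sort(x)
ecdf = (np.arange(1, xs.size + 1) - 0.5) / xs.size   # median-rank positions

params, _ = curve_fit(weibull_cdf, xs, ecdf, p0=[1.5, 2.0, 4.5])
shape_hat, scale_hat, loc_hat = params
```

The threshold (location) parameter is the delicate one: it must stay below the smallest observation, which is why its starting value is chosen conservatively here.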
Earth-viewing satellite perspectives on the Chelyabinsk meteor event
Miller, Steven D.; Straka, William C.; Bachmeier, A. Scott; Schmit, Timothy J.; Partain, Philip T.; Noh, Yoo-Jeong
2013-01-01
Large meteors (or superbolides [Ceplecha Z, et al. (1999) Meteoroids 1998:37–54]), although rare in recorded history, give sobering testimony to civilization’s inherent vulnerability. A not-so-subtle reminder came on the morning of February 15, 2013, when a large meteoroid hurtled into the Earth’s atmosphere, forming a superbolide near the city of Chelyabinsk, Russia, ∼1,500 km east of Moscow, Russia [Ivanova MA, et al. (2013) Abstracts of the 76th Annual Meeting of the Meteoritical Society, 5366]. The object exploded in the stratosphere, and the ensuing shock wave blasted the city of Chelyabinsk, damaging structures and injuring hundreds. Details of the trajectory are important for determining its specific source, the likelihood of future events, and potential mitigation measures. Earth-viewing environmental satellites can assist in these assessments. Here we examine satellite observations of the Chelyabinsk superbolide debris trail, collected within minutes of its entry. Estimates of trajectory are derived from differential views of the significantly parallax-displaced [e.g., Hasler AF (1981) Bull Am Meteor Soc 52:194–212] debris trail. The 282.7 ± 2.3° azimuth of trajectory, 18.5 ± 3.8° slope to the horizontal, and 17.7 ± 0.5 km/s velocity derived from these satellites agree well with parameters inferred from the wealth of surface-based photographs and amateur videos. More importantly, the results demonstrate the general ability of Earth-viewing satellites to provide valuable insight on trajectory reconstruction in the more likely scenario of sparse or nonexistent surface observations. PMID:24145398
AATSR Based Volcanic Ash Plume Top Height Estimation
NASA Astrophysics Data System (ADS)
Virtanen, Timo H.; Kolmonen, Pekka; Sogacheva, Larisa; Sundstrom, Anu-Maija; Rodriguez, Edith; de Leeuw, Gerrit
2015-11-01
The AATSR Correlation Method (ACM) height estimation algorithm is presented. The algorithm uses Advanced Along Track Scanning Radiometer (AATSR) satellite data to detect volcanic ash plumes and to estimate the plume top height. The height estimate is based on the stereo-viewing capability of the AATSR instrument, which makes it possible to determine the parallax between the satellite's nadir and 55° forward views, and thus the corresponding height. AATSR provides an advantage over other stereo-view satellite instruments: with AATSR it is possible to detect ash plumes using the brightness temperature difference between thermal infrared (TIR) channels centered at 11 and 12 μm. The automatic ash detection makes the algorithm efficient in processing large quantities of data: the height estimate is calculated only for the ash-flagged pixels. Besides ash plumes, the algorithm can be applied to any elevated feature with sufficient contrast to the background, such as smoke and dust plumes and clouds. The ACM algorithm can also be applied to the Sea and Land Surface Temperature Radiometer (SLSTR), scheduled for launch at the end of 2015.
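The core geometry is simple: a feature at height h is displaced along-track by h·(tan θf − tan θn) between views with zenith angles θf and θn, so the height follows from the measured parallax. A minimal sketch (the 14.3 km displacement below is an invented example, not an AATSR measurement):

```python
import math

def plume_height(parallax_km, fwd_zenith_deg=55.0, nadir_zenith_deg=0.0):
    """Feature height from the along-track parallax between two views.
    A feature at height h is displaced by h*(tan(fwd) - tan(nadir))."""
    return parallax_km / (math.tan(math.radians(fwd_zenith_deg))
                          - math.tan(math.radians(nadir_zenith_deg)))

h = plume_height(14.3)   # hypothetical 14.3 km apparent displacement
```

In practice the displacement itself is found by correlating image patches between the two views, which is where the "Correlation Method" name comes from.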
Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.
2017-09-01
This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
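The AIC/BIC model-selection step can be illustrated with a deliberately small stand-in: two candidate models of different complexity fitted by least squares, scored with a Gaussian maximum likelihood. The linear-vs-cubic example below is an assumption for illustration, not the paper's floorplan hypotheses.

```python
import numpy as np

def aic_bic(y, yhat, k):
    """AIC and BIC from a least-squares fit with k free parameters,
    using the Gaussian ML estimate of the noise variance."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    ll = -0.5 * n * (np.log(2.0 * np.pi * rss / n) + 1.0)
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 100)
y = 2.0 + 3.0 * x + 0.1 * rng.standard_normal(x.size)  # truly linear

# Candidates: linear (2 coefs + noise var) vs cubic (4 coefs + noise var).
lin = np.polyval(np.polyfit(x, y, 1), x)
cub = np.polyval(np.polyfit(x, y, 3), x)
aic_lin, bic_lin = aic_bic(y, lin, 3)
aic_cub, bic_cub = aic_bic(y, cub, 5)
```

The richer model always has the smaller residual, so the penalty term is what lets BIC (and usually AIC) pick the parsimonious model that actually generated the data.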
Determining wave direction using curvature parameters.
de Queiroz, Eduardo Vitarelli; de Carvalho, João Luiz Baptista
2016-01-01
The curvature of the sea surface was tested as a parameter for estimating wave direction, in search of better estimates in shallow waters, where waves of different sizes, frequencies and directions intersect and are difficult to characterize. We used numerical simulations of the sea surface to determine wave direction calculated from the curvature of the waves. Using 1000 numerical simulations, the statistical variability of the wave direction was determined. The results showed good performance by the curvature parameter for estimating wave direction. Accuracy was improved by including wave slope parameters in addition to curvature. The results indicate that curvature is a promising technique for estimating wave direction.
• In this study, the accuracy and precision of curvature parameters for measuring wave direction are analyzed using a model simulation that generates 1000 wave records with directional resolution.
• The model allows the simultaneous simulation of time-series wave properties such as sea surface elevation, slope and curvature, which were used to analyze the variability of estimated directions.
• The simultaneous acquisition of slope and curvature parameters can contribute to estimating wave direction, thus increasing the accuracy and precision of results.
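How a directional parameter is extracted from simulated surface derivatives can be sketched with the slope components of a synthetic plane wave; the curvature-based estimate is built analogously from second derivatives. This numpy example is an illustration of the principal-axis idea, not the paper's estimator.

```python
import numpy as np

# Synthetic long-crested wave travelling at 30 degrees, 50 m wavelength.
theta_true = np.radians(30.0)
k = 2.0 * np.pi / 50.0
kx, ky = k * np.cos(theta_true), k * np.sin(theta_true)

x = np.linspace(0.0, 200.0, 400)
y = np.linspace(0.0, 200.0, 400)
X, Y = np.meshgrid(x, y)
eta = np.cos(kx * X + ky * Y)          # surface elevation

# Direction from the slope covariances (180-degree ambiguous):
sx = np.gradient(eta, x, axis=1)       # d(eta)/dx
sy = np.gradient(eta, y, axis=0)       # d(eta)/dy
theta_hat = 0.5 * np.arctan2(2.0 * np.mean(sx * sy),
                             np.mean(sx ** 2) - np.mean(sy ** 2))
```

Repeating such a simulation many times with random phases and noise, as the study does with 1000 records, gives the spread (precision) of the direction estimate.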
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gama, D. R. G.; Lepine, J. R. D.; Mendoza, E.
We studied the environment of the dust bubble N10 in molecular emission. Infrared bubbles, first detected by the GLIMPSE survey at 8.0 μm, are ideal regions to investigate the effect of the expansion of the H II region on its surroundings and the eventual triggering of star formation at its borders. In this work, we present a multi-wavelength study of N10. This bubble is especially interesting because infrared studies of the young stellar content suggest a scenario of ongoing star formation, possibly triggered at the edge of the H II region. We carried out observations of ¹²CO(1-0) and ¹³CO(1-0) emission toward N10 with the PMO 13.7 m telescope. We also analyzed the IR and sub-millimeter emission in this region and compared these different tracers to obtain a detailed view of the interaction between the expanding H II region and the molecular gas. Bright CO emission was detected and two molecular clumps were identified, from which we derived physical parameters. We also estimated the parameters of the densest cold dust condensation and of the ionized gas inside the shell. The comparison between the dynamical age of this region and the fragmentation timescale favors the “Radiation-Driven Implosion” mechanism of star formation. N10 is a case of particular interest, with gas structures in a narrow frontier between the H II region and the surrounding molecular material, and with a range of ages of YSOs situated in the region, indicating triggered star formation.
Lord, Dominique
2006-07-01
There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size.
Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
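The simplest of the three estimators, the method of moments, can be sketched directly: for a Poisson-gamma (negative binomial) with Var = μ + αμ², solving the moment equations gives α̂ = (s² − x̄)/x̄². The parameter values below are illustrative.

```python
import numpy as np

def mom_dispersion(y):
    """Method-of-moments estimate of alpha in Var = mu + alpha*mu^2."""
    m, v = y.mean(), y.var(ddof=1)
    return (v - m) / m ** 2

rng = np.random.default_rng(5)
mu, alpha = 5.0, 0.5
# A Poisson-gamma mixture is a negative binomial with this mean/dispersion:
lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mu, size=50_000)
y = rng.poisson(lam)
alpha_hat = mom_dispersion(y)
```

With a large sample and a healthy mean this works well; the paper's point is that with a low mean and small n the numerator (s² − x̄) is noisy and frequently even negative, making α̂ unreliable regardless of the estimator chosen.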
Tourism values for Mexican free-tailed bat (Tadarida brasiliensis mexicana) viewing
Bagstad, Kenneth J.; Widerholdt, Ruscena
2013-01-01
Migratory species provide diverse ecosystem services to people, but these values have seldom been estimated rangewide for a single species. In this article, we summarize visitation and consumer surplus for recreational visitors to viewing sites for the Mexican free-tailed bat (Tadarida brasiliensis mexicana) throughout the Southwestern United States. Public bat viewing opportunities are available at 17 of 25 major roosts across six states; on an annual basis, we estimate that over 242,000 visitors view bats, gaining over $6.5 million in consumer surplus. A better understanding of spatial mismatches between the areas where bats provide value to people and areas most critical for maintaining migratory populations can better inform conservation planning, including economic incentive systems for conservation.
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures produced the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions.
The range and variance of rainfall time series that simulate streamflow better than a traditional calibration approach is itself a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as a data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
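The dimensionality-reduction idea can be sketched with a one-level Haar DWT implemented in plain numpy (the study uses deeper decompositions and other wavelets; the toy hyetograph below is invented). Keeping only the approximation coefficients halves the number of rainfall unknowns to be inferred while preserving the bulk rainfall volume.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Inverse of one Haar DWT level."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

rain = np.array([0, 0, 2, 5, 9, 4, 1, 0], float)   # toy hyetograph (mm)
a, d = haar_dwt(rain)
# Estimate only the 4 approximation coefficients (details set to zero):
smooth = haar_idwt(a, np.zeros_like(d))
```

In the inversion, the MCMC sampler then explores the low-dimensional coefficient space instead of every rainfall time step, which is what makes joint rainfall-parameter inference tractable.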
Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henricakson, Kristian C.; Xu, Maozeng; Wang, Yinhai
2016-01-01
This paper proposes a two-stage algorithm to simultaneously estimate the origin-destination (OD) matrix, link choice proportions, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows is used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers’ route choice behavior. PMID:26761209
NASA Astrophysics Data System (ADS)
Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata
2016-08-01
Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, called the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton CKF (TNF-CKF), a recent robust method which works in the filtering sense.
NASA Astrophysics Data System (ADS)
Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V.; Rooney, William D.; Garzotto, Mark G.; Springer, Charles S.
2016-08-01
Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (Ktrans) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive.
This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging biomarkers for cross-platform, multicenter applications. Data from our limited study cohort show that kio correlates with Gleason scores, suggesting that it may be a useful biomarker for prostate cancer disease progression monitoring.
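The proportional-scaling behavior described above can be illustrated with a minimal FXL Tofts simulation. This is a sketch under assumed values (a hypothetical biexponential AIF, Ktrans = 0.25 min⁻¹, ve = 0.4), not the paper's prostate data: fitting a tissue concentration curve against an AIF scaled by a factor s recovers Ktrans and ve divided by s, while kep = Ktrans/ve is unchanged.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 200)  # minutes

# Hypothetical biexponential population AIF (plasma CR concentration, arbitrary units)
def aif(t):
    return 5.0 * (np.exp(-0.5 * t) - np.exp(-4.0 * t))

def tofts(t, ktrans, kep, ca):
    # FXL Tofts model: Ct(t) = Ktrans * integral of Ca(tau) exp(-kep (t - tau)) dtau
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * dt * np.convolve(ca, kernel)[: len(t)]

ktrans_true, ve_true = 0.25, 0.4          # 1/min, dimensionless (illustrative)
kep_true = ktrans_true / ve_true
ct = tofts(t, ktrans_true, kep_true, aif(t))   # noiseless "measured" tissue curve

scale = 2.0  # suppose the assumed AIF is twice the true amplitude
def model(t, ktrans, kep):
    return tofts(t, ktrans, kep, scale * aif(t))

(ktrans_hat, kep_hat), _ = curve_fit(model, t, ct, p0=[0.1, 1.0])
ve_hat = ktrans_hat / kep_hat
print(ktrans_hat, ve_hat, kep_hat)  # Ktrans and ve are halved; kep is unchanged
```

Here the fitted Ktrans and ve absorb the AIF amplitude error, whereas kep, being their ratio, is insensitive to it, consistent with the abstract's conclusion.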
Galdón, Eduardo; Casas, Marta; Gayango, Manuel; Caraballo, Isidoro
2016-12-01
The deep understanding of products and processes has become a requirement for pharmaceutical industries to follow the Quality by Design principles promoted by the regulatory authorities. With this aim, the SeDeM expert system was developed as a useful preformulation tool to predict the suitability of drugs and excipients for processing by direct compression. The SeDeM system is a step forward in the rational development of a formulation, allowing the normalisation of the rheological parameters and the identification of the weaknesses and strengths of a powder or a powder blend. However, this method is based on the assumption of a linear behavior of disordered systems. As percolation theory has demonstrated, powder blends behave as non-linear systems that can suffer abrupt changes in their properties near to geometrical phase transitions of the components. The aim of this paper was to analyze for the first time the evolution of the SeDeM parameters in drug/excipient powder blends from the point of view of percolation theory and to compare the changes predicted by SeDeM with the predictions of percolation theory. For this purpose, powder blends of lactose and theophylline with varying concentrations of the model drug have been prepared and the SeDeM analysis has been applied to each blend in order to monitor the evolution of their properties. In addition, percolation thresholds have been estimated for these powder blends, and critical points have been found for important rheological parameters such as powder flow. Finally, the predictions of percolation theory and SeDeM have been compared, concluding that percolation theory can complement the SeDeM method for a more accurate estimation of the Design Space. Copyright © 2016 Elsevier B.V. All rights reserved.
Improving the precision of dynamic forest parameter estimates using Landsat
Evan B. Brooks; John W. Coulston; Randolph H. Wynne; Valerie A. Thomas
2016-01-01
The use of satellite-derived classification maps to improve post-stratified forest parameter estimates is well established. When reducing the variance of post-stratification estimates for forest change parameters such as forest growth, it is logical to use a change-related strata map. At the stand level, a time series of Landsat images is
NASA Astrophysics Data System (ADS)
Roozegar, Mehdi; Mahjoob, Mohammad J.; Ayati, Moosa
2017-05-01
This paper deals with adaptive estimation of the unknown parameters and states of a pendulum-driven spherical robot (PDSR), which is a nonlinear in parameters (NLP) chaotic system with parametric uncertainties. Firstly, the mathematical model of the robot is deduced by applying the Newton-Euler methodology for a system of rigid bodies. Then, based on the speed gradient (SG) algorithm, the states and unknown parameters of the robot are estimated online for different step length gains and initial conditions. The estimated parameters are updated adaptively according to the error between estimated and true state values. Since the errors of the estimated states and parameters as well as the convergence rates depend significantly on the value of the step length gain, this gain should be chosen optimally. Hence, a heuristic fuzzy logic controller is employed to adjust the gain adaptively. Simulation results indicate that the proposed approach is highly encouraging for identification of this NLP chaotic system even if the initial conditions change and the uncertainties increase; therefore, it can reliably be implemented on a real robot.
Impact of orbit, clock and EOP errors in GNSS Precise Point Positioning
NASA Astrophysics Data System (ADS)
Hackman, C.
2012-12-01
Precise point positioning (PPP; [1]) has gained ever-increasing usage in GNSS carrier-phase positioning, navigation and timing (PNT) since its inception in the late 1990s. In this technique, high-precision satellite clocks, satellite ephemerides and earth-orientation parameters (EOPs) are applied as fixed input by the user in order to estimate receiver/location-specific quantities such as antenna coordinates, troposphere delay and receiver-clock corrections. This is in contrast to "network" solutions, in which (typically) less-precise satellite clocks, satellite ephemerides and EOPs are used as input, and in which these parameters are estimated simultaneously with the receiver/location-specific parameters. The primary reason for increased PPP application is that it offers most of the benefits of a network solution with a smaller computing cost. In addition, the software required to do PPP positioning can be simpler than that required for network solutions. Finally, PPP permits high-precision positioning of single or sparsely spaced receivers that may have few or no GNSS satellites in common view. A drawback of PPP is that the accuracy of the results depends directly on the accuracy of the supplied orbits, clocks and EOPs, since these parameters are not adjusted during the processing. In this study, we will examine the impact of orbit, EOP and satellite clock estimates on PPP solutions. Our primary focus will be the impact of these errors on station coordinates; however, the study may be extended to error propagation into receiver-clock corrections and/or troposphere estimates if time permits. Study motivation: the United States Naval Observatory (USNO) began testing PPP processing using its own predicted orbits, clocks and EOPs in Summer 2012 [2]. The results of such processing could be useful for real- or near-real-time applications should they meet accuracy/precision requirements.
Understanding how errors in satellite clocks, satellite orbits and EOPs propagate into PPP positioning and timing results allows researchers to focus their improvement efforts in areas most in need of attention. The initial study will be conducted using the simulation capabilities of Bernese GPS Software and extended to using real data if time permits. [1] J.F. Zumberge, M.B. Heflin, D.C. Jefferson, M.M. Watkins and F.H. Webb, Precise point positioning for the efficient and robust analysis of GPS data from large networks, J. Geophys. Res., 102(B3), 5005-5017, doi:10.1029/96JB03860, 1997. [2] C. Hackman, S.M. Byram, V.J. Slabinski and J.C. Tracey, Near-real-time and other high-precision GNSS-based orbit/clock/earth-orientation/troposphere parameters available from USNO, Proc. 2012 ION Joint Navigation Conference, 15 pp., in press, 2012.
K-ε Turbulence Model Parameter Estimates Using an Approximate Self-similar Jet-in-Crossflow Solution
DeChant, Lawrence; Ray, Jaideep; Lefantzi, Sophia; ...
2017-06-09
The k-ε turbulence model has been described as perhaps “the most widely used complete turbulence model.” This family of heuristic Reynolds Averaged Navier-Stokes (RANS) turbulence closures is supported by a suite of model parameters that have been estimated by demanding the satisfaction of well-established canonical flows such as homogeneous shear flow, log-law behavior, etc. While this procedure does yield a set of so-called nominal parameters, it is abundantly clear that they do not provide a universally satisfactory turbulence model that is capable of simulating complex flows. Recent work on the Bayesian calibration of the k-ε model using jet-in-crossflow wind tunnel data has yielded parameter estimates that are far more predictive than nominal parameter values. In this paper, we develop a self-similar asymptotic solution for axisymmetric jet-in-crossflow interactions and derive analytical estimates of the parameters that were inferred using Bayesian calibration. The self-similar method utilizes a near field approach to estimate the turbulence model parameters while retaining the classical far-field scaling to model flow field quantities. Our parameter values are seen to be far more predictive than the nominal values, as checked using RANS simulations and experimental measurements. They are also closer to the Bayesian estimates than the nominal parameters. A traditional simplified jet trajectory model is explicitly related to the turbulence model parameters and is shown to yield good agreement with measurement when utilizing the analytically derived turbulence model coefficients. Finally, the close agreement between the turbulence model coefficients obtained via Bayesian calibration and the analytically estimated coefficients derived in this paper is consistent with the contention that the Bayesian calibration approach is firmly rooted in the underlying physical description.
NASA Astrophysics Data System (ADS)
Post, Hanna; Vrugt, Jasper A.; Fox, Andrew; Vereecken, Harry; Hendricks Franssen, Harrie-Jan
2017-03-01
The Community Land Model (CLM) contains many parameters whose values are uncertain and thus require careful estimation for model application at individual sites. Here we used Bayesian inference with the DiffeRential Evolution Adaptive Metropolis (DREAM(zs)) algorithm to estimate eight CLM v.4.5 ecosystem parameters using 1 year records of half-hourly net ecosystem CO2 exchange (NEE) observations of four central European sites with different plant functional types (PFTs). The posterior CLM parameter distributions of each site were estimated per individual season and on a yearly basis. These estimates were then evaluated using NEE data from an independent evaluation period and data from "nearby" FLUXNET sites at 600 km distance from the original sites. Latent variables (multipliers) were used to explicitly treat uncertainty in the initial carbon-nitrogen pools. The posterior parameter estimates were superior to their default values in their ability to track and explain the measured NEE data of each site. The seasonal parameter values reduced the bias in the simulated NEE values by more than 50% (averaged over all sites). The most consistent performance of CLM during the evaluation period was found for the posterior parameter values of the forest PFTs, and contrary to the C3-grass and C3-crop sites, the latent variables of the initial pools further enhanced the quality-of-fit. The carbon sink function of the forest PFTs significantly increased with the posterior parameter estimates. We thus conclude that land surface model predictions of carbon stocks and fluxes require careful consideration of uncertain ecological parameters and initial states.
NASA Astrophysics Data System (ADS)
Yashima, Kenta; Ito, Kana; Nakamura, Kazuyuki
2013-03-01
When an infectious disease prevails throughout a population, epidemic parameters such as the basic reproduction ratio and the initial point of infection are estimated from the time series of the infected population. However, it is unclear how the structure of the host population affects the accuracy of this estimation. In other words, for what kind of city is it difficult to estimate epidemic parameters? To answer this question, we simulate epidemic data by constructing commuting networks with different structures and running an infection process over each network. From the resulting time series for each network structure, we analyze the estimation accuracy of the epidemic parameters.
Polarimetric image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Valenzuela, John R.
In the field of imaging polarimetry Stokes parameters are sought and must be inferred from noisy and blurred intensity measurements. Using a penalized-likelihood estimation framework we investigate reconstruction quality when estimating intensity images and then transforming to Stokes parameters (traditional estimator), and when estimating Stokes parameters directly (Stokes estimator). We define our cost function for reconstruction by a weighted least squares data fit term and a regularization penalty. It is shown that under quadratic regularization, the traditional and Stokes estimators can be made equal by appropriate choice of regularization parameters. It is empirically shown that, when using edge preserving regularization, estimating the Stokes parameters directly leads to lower RMS error in reconstruction. Also, the addition of a cross channel regularization term further lowers the RMS error for both methods especially in the case of low SNR. The technique of phase diversity has been used in traditional incoherent imaging systems to jointly estimate an object and optical system aberrations. We extend the technique of phase diversity to polarimetric imaging systems. Specifically, we describe penalized-likelihood methods for jointly estimating Stokes images and optical system aberrations from measurements that contain phase diversity. Jointly estimating Stokes images and optical system aberrations involves a large parameter space. A closed-form expression for the estimate of the Stokes images in terms of the aberration parameters is derived and used in a formulation that reduces the dimensionality of the search space to the number of aberration parameters only. We compare the performance of the joint estimator under both quadratic and edge-preserving regularization. The joint estimator with edge-preserving regularization yields higher fidelity polarization estimates than with quadratic regularization. 
Under quadratic regularization, using the reduced-parameter search strategy, accurate aberration estimates can be obtained without recourse to regularization "tuning". Phase-diverse wavefront sensing is emerging as a viable candidate wavefront sensor for adaptive-optics systems. In a quadratically penalized weighted least squares estimation framework a closed form expression for the object being imaged in terms of the aberrations in the system is available. This expression offers a dramatic reduction of the dimensionality of the estimation problem and thus is of great interest for practical applications. We have derived an expression for an approximate joint covariance matrix for object and aberrations in the phase diversity context. Our expression for the approximate joint covariance is compared with the "known-object" Cramer-Rao lower bound that is typically used for system parameter optimization. Estimates of the optimal amount of defocus in a phase-diverse wavefront sensor derived from the joint-covariance matrix, the known-object Cramer-Rao bound, and Monte Carlo simulations are compared for an extended scene and a point object. It is found that our variance approximation, that incorporates the uncertainty of the object, leads to an improvement in predicting the optimal amount of defocus to use in a phase-diverse wavefront sensor.
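The claimed equivalence of the traditional and Stokes estimators under quadratic regularization can be checked in a toy linear model. The two-channel analyzer geometry, blur matrix, and penalty weight below are illustrative assumptions: if the intensity images x relate to the Stokes images s by x = M s, then estimating s directly with the transformed penalty MᵀRM reproduces the traditional estimate exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # pixels per channel in a toy 1-D "image"

# Intensity channels I0, I90 relate to Stokes images (S0, S1) by x = M s:
# I0 = (S0 + S1)/2,  I90 = (S0 - S1)/2
M = 0.5 * np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]), np.eye(n))
H = np.kron(np.eye(2), rng.normal(size=(n, n)))   # per-channel blur (illustrative)
x_true = rng.uniform(1.0, 2.0, 2 * n)             # true intensity images
y = H @ x_true + 0.01 * rng.normal(size=2 * n)    # noisy, blurred measurements

R = 0.1 * np.eye(2 * n)                           # quadratic penalty on intensities

# Traditional estimator: reconstruct intensities, then transform to Stokes
x_hat = np.linalg.solve(H.T @ H + R, H.T @ y)
s_trad = np.linalg.solve(M, x_hat)

# Stokes estimator: reconstruct Stokes directly with the matched penalty M^T R M
A = H @ M
s_stokes = np.linalg.solve(A.T @ A + M.T @ R @ M, A.T @ y)
print(np.allclose(s_trad, s_stokes))   # the two estimators coincide
```

The equivalence follows from substituting x = M s into the traditional normal equations; it breaks down for edge-preserving (non-quadratic) penalties, which is where the paper finds the direct Stokes estimator superior.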
Fuzzy multinomial logistic regression analysis: A multi-objective programming approach
NASA Astrophysics Data System (ADS)
Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan
2017-05-01
Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large well-balanced datasets, Maximum Likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate parameters of the multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the maximum likelihood (ML) approach. Results show that the new proposed model outperforms ML in cases of small datasets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, V.E.
1982-01-01
A class of goodness-of-fit estimators is found to provide a useful alternative in certain situations to the standard maximum likelihood method, which has some undesirable estimation characteristics for estimation from the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined and example data sets are analyzed.
Kinnamon, Daniel D; Lipsitz, Stuart R; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L
2010-04-01
The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
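A small simulation illustrates the contrast between the mean-of-ratios estimator and an instrumental variables estimator in the presence of additive technical errors. The instrument used here (an independent second FFM measurement) and the exaggerated error SD are illustrative assumptions, not the authors' design:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
hf_true = 0.73                      # true hydration fraction

ffm = rng.uniform(30, 70, n)        # true fat-free mass (kg)
tbw = hf_true * ffm                 # true total body water (kg)

sd = 8.0                            # additive technical error SD (exaggerated for illustration)
tbw_m = tbw + rng.normal(0, sd, n)  # measured TBW
ffm_m = ffm + rng.normal(0, sd, n)  # measured FFM
z = ffm + rng.normal(0, sd, n)      # instrument: independent second FFM measurement

h_ratio = np.mean(tbw_m / ffm_m)               # mean of ratios: biased upward here
h_iv = np.sum(z * tbw_m) / np.sum(z * ffm_m)   # instrumental variables: consistent
print(h_ratio, h_iv)
```

The instrument is correlated with true FFM but independent of both measurement errors, so the error terms average out of both sums, while the ratio estimator inherits a bias from the error in the denominator.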
Assessing Interval Estimation Methods for Hill Model ...
The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
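One of the bootstrap varieties mentioned, a residual bootstrap, can be sketched for a three-parameter Hill fit. The concentration design, noise level, and parameter values below are hypothetical, not ToxCast data:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, ec50, n):
    # Hill concentration-response: top = efficacy, ec50 = potency, n = slope
    return top * c**n / (ec50**n + c**n)

rng = np.random.default_rng(0)
conc = np.logspace(-2, 2, 9)                 # hypothetical 9-point design
true = (100.0, 1.0, 1.5)
y = hill(conc, *true) + rng.normal(0, 5, conc.size)

phat, _ = curve_fit(hill, conc, y, p0=(80, 0.5, 1.0), maxfev=10000)
resid = y - hill(conc, *phat)

# Residual bootstrap: resample residuals, refit, take percentile interval
boot = []
for _ in range(500):
    yb = hill(conc, *phat) + rng.choice(resid, conc.size, replace=True)
    try:
        pb, _ = curve_fit(hill, conc, yb, p0=phat, maxfev=10000)
        boot.append(pb)
    except RuntimeError:
        continue
boot = np.array(boot)
lo, hi = np.percentile(boot[:, 1], [2.5, 97.5])   # 95% interval for EC50
print(phat, (lo, hi))
```

As the abstract notes, intervals from different methods (asymptotic, bootstrap, Bayesian) need not agree; a simulation study comparing actual to nominal coverage is the way to adjudicate.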
Kalman filter data assimilation: targeting observations and parameter estimation.
Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex
2014-06-01
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
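The benefit of targeting can be reproduced in a miniature linear-model experiment: at each step, observe either the state component with the largest forecast variance or a random component, and compare the average analysis-error covariance. The cyclic-shift dynamics and noise levels are illustrative assumptions, far simpler than the chaotic model used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, steps = 8, 200
A = 0.95 * np.roll(np.eye(n), 1, axis=1)   # stable cyclic-shift dynamics (toy)
Q = 0.1 * np.eye(n)                        # model-error covariance
r = 0.05                                   # observation-error variance

def run(targeted):
    x = rng.normal(size=n)
    xa, Pa = np.zeros(n), np.eye(n)
    total = 0.0
    for _ in range(steps):
        x = A @ x + rng.multivariate_normal(np.zeros(n), Q)   # truth
        xf, Pf = A @ xa, A @ Pa @ A.T + Q                     # forecast
        # Target the largest forecast variance, or pick a component at random
        i = int(np.argmax(np.diag(Pf))) if targeted else int(rng.integers(n))
        H = np.zeros((1, n)); H[0, i] = 1.0
        y = H @ x + rng.normal(0, np.sqrt(r), 1)              # scalar observation
        K = Pf @ H.T / (Pf[i, i] + r)                         # Kalman gain
        xa = xf + K @ (y - H @ xf)                            # analysis mean
        Pa = (np.eye(n) - K @ H) @ Pf                         # analysis covariance
        total += np.trace(Pa)
    return total / steps

err_t, err_r = run(True), run(False)
print(err_t, err_r)   # targeted observations yield smaller average analysis variance
```

Observing the largest-variance component removes the most uncertainty per observation, which is the intuition behind the ensemble-variance targeting criterion used with the LETKF in the paper.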
NASA Astrophysics Data System (ADS)
Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.
2017-12-01
The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimates of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimates. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we argue that the minimum number of required samples depends on rock type and should correspond to some acceptable level of uncertainty in the estimates. Also, a comparison of the results from our analysis with actual rock strength data shows that the probability of obtaining reliable strength parameter estimates using small samples may be very low. We further discuss the impact of this on ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
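The effect of small sample size can be sketched by repeatedly fitting the H-B criterion σ1 = σ3 + σc·√(m·σ3/σc + 1) to synthetic five-specimen triaxial data. The rock parameters and the 5% multiplicative scatter below are assumed for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def hoek_brown(s3, sc, m):
    # Intact-rock H-B criterion: sigma1 = sigma3 + sigma_c * sqrt(m*sigma3/sigma_c + 1)
    return s3 + sc * np.sqrt(m * s3 / sc + 1.0)

rng = np.random.default_rng(3)
sc_true, m_true = 100.0, 10.0          # MPa, dimensionless (hypothetical "rock type")
s3 = np.linspace(0, 40, 5)             # 5 confining pressures: a typical small data set

m_hats = []
for _ in range(1000):
    # 5% multiplicative scatter on the measured peak strengths (illustrative)
    s1 = hoek_brown(s3, sc_true, m_true) * (1 + 0.05 * rng.normal(size=s3.size))
    try:
        (sc_hat, m_hat), _ = curve_fit(hoek_brown, s3, s1, p0=(80.0, 5.0),
                                       bounds=([1.0, 0.1], [500.0, 50.0]), maxfev=5000)
        m_hats.append(m_hat)
    except RuntimeError:
        continue
m_hats = np.array(m_hats)
print(np.percentile(m_hats, [5, 50, 95]))  # wide spread of m from only 5 specimens
```

The spread of the fitted m across replicates gives a direct picture of the estimation uncertainty attributable to sample size alone, before any natural variability between specimens is considered.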
Inference of reactive transport model parameters using a Bayesian multivariate approach
NASA Astrophysics Data System (ADS)
Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick
2014-08-01
Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)) where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation for high residual correlation levels whereas its influence on predictive uncertainty is negligible, (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method, and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.
NASA Technical Reports Server (NTRS)
Joiner, J.; Vasilkov, A.; Gupta, P.; Bhartia, P. K.; Veefkind, P.; Sneep, M.; de Haan, J.; Polonsky, I.; Spurr, R.
2012-01-01
The cloud Optical Centroid Pressure (OCP), also known as the effective cloud pressure, is a satellite-derived parameter that is commonly used in trace-gas retrievals to account for the effects of clouds on near-infrared through ultraviolet radiance measurements. Fast simulators are desirable to further expand the use of cloud OCP retrievals into the operational and climate communities for applications such as data assimilation and evaluation of cloud vertical structure in general circulation models. In this paper, we develop and validate fast simulators that provide estimates of the cloud OCP given a vertical profile of optical extinction. We use a pressure-weighting scheme where the weights depend upon optical parameters of clouds and/or aerosol. A cloud weighting function is easily extracted using this formulation. We then use fast simulators to compare two different satellite cloud OCP retrievals from the Ozone Monitoring Instrument (OMI) with estimates based on collocated cloud extinction profiles from a combination of CloudSat radar and MODIS visible radiance data. These comparisons are made over a wide range of conditions to provide a comprehensive validation of the OMI cloud OCP retrievals. We find generally good agreement between OMI cloud OCPs and those predicted by CloudSat. However, the OMI cloud OCPs from the two independent algorithms agree better with each other than either does with the estimates from CloudSat/MODIS. Differences between OMI cloud OCPs and those based on CloudSat/MODIS may result from undetected snow/ice at the surface, cloud 3-D effects, low altitude clouds missed by CloudSat, and the fact that CloudSat only observes a relatively small fraction of an OMI field-of-view.
NASA Astrophysics Data System (ADS)
Wang, Daosheng; Zhang, Jicai; He, Xianqiang; Chu, Dongdong; Lv, Xianqing; Wang, Ya Ping; Yang, Yang; Fan, Daidu; Gao, Shu
2018-01-01
Model parameters in the suspended cohesive sediment transport models are critical for the accurate simulation of suspended sediment concentrations (SSCs). Difficulties in estimating the model parameters still prevent numerical modeling of the sediment transport from achieving a high level of predictability. Based on a three-dimensional cohesive sediment transport model and its adjoint model, the satellite remote sensing data of SSCs during both spring tide and neap tide, retrieved from Geostationary Ocean Color Imager (GOCI), are assimilated to synchronously estimate four spatially and temporally varying parameters in the Hangzhou Bay in China, including settling velocity, resuspension rate, inflow open boundary conditions and initial conditions. After data assimilation, the model performance is significantly improved. Through several sensitivity experiments, the spatial and temporal variation tendencies of the estimated model parameters are verified to be robust and not affected by model settings. The pattern for the variations of the estimated parameters is analyzed and summarized. The temporal variations and spatial distributions of the estimated settling velocity are negatively correlated with current speed, which can be explained using the combination of flocculation process and Stokes' law. The temporal variations and spatial distributions of the estimated resuspension rate are also negatively correlated with current speed, which are related to the grain size of the seabed sediments under different current velocities. Besides, the estimated inflow open boundary conditions reach the local maximum values near the low water slack conditions and the estimated initial conditions are negatively correlated with water depth, which is consistent with the general understanding. The relationships between the estimated parameters and the hydrodynamic fields can be suggestive for improving the parameterization in cohesive sediment transport models.
On-line estimation of error covariance parameters for atmospheric data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1995-01-01
A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters.
These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
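The single-batch maximum-likelihood idea can be sketched for the simplest case of one tunable parameter: with innovations d = y − Hx_f distributed N(0, σf² + σo²) and the forecast-error variance σf² known, the observation-error variance is estimated by maximizing the batch likelihood. The variances and batch size below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
p = 2000                      # observations in one batch (>> number of parameters)
sig_f2 = 2.0                  # known forecast-error variance
sig_o2_true = 0.5             # observation-error variance to be estimated

# Innovations y - H x_f: forecast error plus observation error
d = rng.normal(0, np.sqrt(sig_f2 + sig_o2_true), p)

def neg_log_lik(sig_o2):
    s = sig_f2 + sig_o2       # innovation variance implied by the parameter
    return 0.5 * p * np.log(s) + 0.5 * np.sum(d**2) / s

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded")
print(res.x)                  # ML estimate of the observation-error variance
```

With thousands of observations and a single parameter, the batch likelihood pins the estimate down well, illustrating why the scheme needs observations to outnumber tunable parameters by orders of magnitude.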
Knopman, Debra S.; Voss, Clifford I.
1987-01-01
The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
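Principle (4) can be checked numerically for the instantaneous-pulse solution of the one-dimensional advection-dispersion equation, c(x,t) = exp(−(x−vt)²/(4Dt))/√(4πDt). Using parameter-scaled sensitivities v·∂c/∂v and D·∂c/∂D for comparability (the parameter values are illustrative), the velocity sensitivity dominates by roughly an order of magnitude:

```python
import numpy as np

def conc(x, t, v, d):
    # 1-D advection-dispersion, instantaneous unit pulse released at x=0, t=0
    return np.exp(-(x - v * t) ** 2 / (4 * d * t)) / np.sqrt(4 * np.pi * d * t)

x, v, d = 10.0, 1.0, 0.1            # observation point, velocity, dispersion coeff.
t = np.linspace(0.1, 30, 2000)      # sampling times at the observation point

# Parameter-scaled sensitivities via central finite differences
eps = 1e-6
sens_v = v * (conc(x, t, v + eps, d) - conc(x, t, v - eps, d)) / (2 * eps)
sens_d = d * (conc(x, t, v, d + eps) - conc(x, t, v, d - eps)) / (2 * eps)

ratio = np.max(np.abs(sens_v)) / np.max(np.abs(sens_d))
print(ratio)   # velocity sensitivity dominates during passage of the solute front
```

The velocity sensitivity peaks on the flanks of the breakthrough curve, while the dispersion sensitivity traces the S-shaped pattern noted in principle (3), which is why sampling must straddle the front to inform D.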
Fisher information of a single qubit interacting with a spin-qubit in the presence of a magnetic field
NASA Astrophysics Data System (ADS)
Metwally, N.
2018-06-01
In this contribution, quantum Fisher information is utilized to estimate the parameters of a central qubit interacting with a single spin-qubit. The effect of the longitudinal, transverse, and rotating strengths of the magnetic field on the estimation degree is discussed. It is shown that, in the resonance case, the number of peaks, and consequently the size of the estimation regions, increases as the rotating magnetic field strength increases. The precision of estimating the central qubit parameters depends on the initial state settings of the central and spin qubits, i.e., whether they encode classical or quantum information. The upper bounds of the estimation degree are large if the two qubits encode classical information. In the non-resonance case, the estimation degree depends on whether the longitudinal or the transverse strength is larger. The coupling constant between the central qubit and the spin-qubit affects the estimation of the weight and phase parameters differently: the possibility of estimating the weight parameter decreases as the coupling constant increases, while that of the phase parameter increases. For a large number of spin particles, i.e., a spin bath, the upper bounds of the Fisher information with respect to the weight parameter of the central qubit decrease as the number of spin particles increases. As the interaction time increases, the upper bounds appear at different initial values of the weight parameter.
NASA Technical Reports Server (NTRS)
Chin, M. M.; Goad, C. C.; Martin, T. V.
1972-01-01
A computer program for the estimation of orbit and geodetic parameters is presented. The areas in which the program is operational are defined. The specific uses of the program are given as: (1) determination of definitive orbits, (2) tracking instrument calibration, (3) satellite operational predictions, and (4) geodetic parameter estimation. The relationship between the various elements in the solution of the orbit and geodetic parameter estimation problem is analyzed. The solution of the problems corresponds to the orbit generation mode in the first case and to the data reduction mode in the second case.
A simulation of water pollution model parameter estimation
NASA Technical Reports Server (NTRS)
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
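The estimation loop described above (simulate concentrations, add Gaussian noise, fit by least squares) can be sketched as follows; a 1-D instantaneous-release advection-diffusion model is substituted for the paper's 2-D shear-diffusion model, and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def concentration(xt, v, D):
    # 1-D instantaneous-release advection-diffusion solution (unit mass)
    x, t = xt
    return np.exp(-(x - v * t) ** 2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

rng = np.random.default_rng(0)
v_true, D_true = 0.5, 0.1
x = np.linspace(0.5, 5.0, 40)
t = np.full_like(x, 4.0)

clean = concentration((x, t), v_true, D_true)
noisy = clean + rng.normal(0.0, 0.01, size=x.size)  # simulated remote-sensor noise

# Least-squares recovery of the model parameters from the noisy "sensor" data
(v_est, D_est), _ = curve_fit(concentration, (x, t), noisy, p0=(1.0, 0.5))
print(v_est, D_est)
```

The covariance matrix returned by `curve_fit` plays the role of the accuracy measures from which sensor-design questions (resolution, array size, sampling locations) could be explored.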
Liang, Hua; Miao, Hongyu; Wu, Hulin
2010-03-01
Modeling viral dynamics in HIV/AIDS studies has resulted in deep understanding of pathogenesis of HIV infection from which novel antiviral treatment guidance and strategies have been derived. Viral dynamics models based on nonlinear differential equations have been proposed and well developed over the past few decades. However, it is quite challenging to use experimental or clinical data to estimate the unknown parameters (both constant and time-varying parameters) in complex nonlinear differential equation models. Therefore, investigators usually fix some parameter values, from the literature or by experience, to obtain only parameter estimates of interest from clinical or experimental data. However, when such prior information is not available, it is desirable to determine all the parameter estimates from data. In this paper, we intend to combine the newly developed approaches, a multi-stage smoothing-based (MSSB) method and the spline-enhanced nonlinear least squares (SNLS) approach, to estimate all HIV viral dynamic parameters in a nonlinear differential equation model. In particular, to the best of our knowledge, this is the first attempt to propose a comparatively thorough procedure, accounting for both efficiency and accuracy, to rigorously estimate all key kinetic parameters in a nonlinear differential equation model of HIV dynamics from clinical data. These parameters include the proliferation rate and death rate of uninfected HIV-targeted cells, the average number of virions produced by an infected cell, and the infection rate which is related to the antiviral treatment effect and is time-varying. To validate the estimation methods, we verified the identifiability of the HIV viral dynamic model and performed simulation studies. We applied the proposed techniques to estimate the key HIV viral dynamic parameters for two individual AIDS patients treated with antiretroviral therapies. 
We demonstrate that HIV viral dynamics can be well characterized and quantified for individual patients. As a result, personalized treatment decision based on viral dynamic models is possible.
Hormuth, David A; Skinner, Jack T; Does, Mark D; Yankeelov, Thomas E
2014-05-01
Dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) can quantitatively and qualitatively assess physiological characteristics of tissue. Quantitative DCE-MRI requires an estimate of the time rate of change of the concentration of the contrast agent in the blood plasma, the vascular input function (VIF). Measuring the VIF in small animals is notoriously difficult, as it requires high temporal resolution images, limiting the achievable number of slices, field-of-view, spatial resolution, and signal-to-noise ratio. Alternatively, a population-averaged VIF could be used to mitigate the acquisition demands in studies aimed at investigating, for example, tumor vascular characteristics. Thus, the overall goal of this manuscript is to determine how the kinetic parameters estimated with a population-based VIF differ from those estimated with an individual VIF. Eight rats bearing gliomas were imaged before, during, and after an injection of Gd-DTPA. K^trans, v_e, and v_p were extracted from signal-time curves of tumor tissue using both individual and population-averaged VIFs. Extended model voxel estimates of K^trans and v_e in all animals had concordance correlation coefficients (CCC) ranging from 0.69 to 0.98 and Pearson correlation coefficients (PCC) ranging from 0.70 to 0.99. Additionally, standard model estimates resulted in CCCs ranging from 0.81 to 0.99 and PCCs ranging from 0.98 to 1.00, supporting the use of a population-based VIF when an individual VIF is not available. Copyright © 2014 Elsevier Inc. All rights reserved.
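Lin's concordance correlation coefficient used in this comparison can be computed directly; the sketch below is a standard formulation, not code from the study.

```python
import numpy as np

# Lin's concordance correlation coefficient (CCC): agreement between two
# series, penalizing both scale and location shifts, unlike Pearson's r.
def ccc(a, b):
    ma, mb = a.mean(), b.mean()
    cov = ((a - ma) * (b - mb)).mean()
    return 2.0 * cov / (a.var() + b.var() + (ma - mb) ** 2)

a = np.array([1.0, 2.0, 3.0, 4.0])
print(ccc(a, a))        # identical series: perfect concordance, 1.0
print(ccc(a, a + 1.0))  # same shape but shifted: concordance drops below 1
```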
Quantifying the effect of experimental design choices for in vitro scratch assays.
Johnston, Stuart T; Ross, Joshua V; Binder, Benjamin J; Sean McElwain, D L; Haridas, Parvathi; Simpson, Matthew J
2016-07-07
Scratch assays are often used to investigate potential drug treatments for chronic wounds and cancer. Interpreting these experiments with a mathematical model allows us to estimate the cell diffusivity, D, and the cell proliferation rate, λ. However, the influence of the experimental design on the estimates of D and λ is unclear. Here we apply an approximate Bayesian computation (ABC) parameter inference method, which produces a posterior distribution of D and λ, to new sets of synthetic data, generated from an idealised mathematical model, and experimental data for a non-adhesive mesenchymal population of fibroblast cells. The posterior distribution allows us to quantify the amount of information obtained about D and λ. We investigate two types of scratch assay, as well as varying the number and timing of the experimental observations captured. Our results show that a scrape assay, involving one cell front, provides more precise estimates of D and λ, and is more computationally efficient to interpret than a wound assay, with two opposingly directed cell fronts. We find that recording two observations, after making the initial observation, is sufficient to estimate D and λ, and that the final observation time should correspond to the time taken for the cell front to move across the field of view. These results provide guidance for estimating D and λ, while simultaneously minimising the time and cost associated with performing and interpreting the experiment. Copyright © 2016 Elsevier Ltd. All rights reserved.
Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob
2016-08-01
The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimate directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions, and to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework thus offers a novel non-parametric alternative for estimating directed interactions in multivariate neuronal recordings, with increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.
Elk viewing in Pennsylvania: an evolving eco-tourism system
Bruce E. Lord; Charles H. Strauss; Michael J. Powell
2002-01-01
In 1997, the Pennsylvania Game Commission established an Elk Viewing Area within Pennsylvania's elk range. The viewing area has become the focus for a developing eco-tourism system. During the four years of operation, a research team from Penn State has measured the number of visitors, their expenditure patterns, and other parameters of their visit. The trends...
Application of Novel Lateral Tire Force Sensors to Vehicle Parameter Estimation of Electric Vehicles
Nam, Kanghyun
2015-01-01
This article presents methods for estimating lateral vehicle velocity and tire cornering stiffness, which are key parameters in vehicle dynamics control, using lateral tire force measurements. Lateral tire forces acting on each tire are directly measured by load-sensing hub bearings that were invented and further developed by NSK Ltd. For estimating the lateral vehicle velocity, tire force models considering lateral load transfer effects are used, and a recursive least squares algorithm is adopted to identify the lateral vehicle velocity as an unknown parameter. Using the estimated lateral vehicle velocity, tire cornering stiffness, an important tire parameter dominating the vehicle’s cornering response, is estimated. For practical implementation, a cornering stiffness estimation algorithm based on a simple bicycle model is developed and discussed. Finally, the proposed estimation algorithms were evaluated using experimental test data. PMID:26569246
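A minimal sketch of recursive least squares identifying a single unknown parameter, in the spirit of the velocity estimator described above; the scalar regression model and noise levels are illustrative and are not the paper's tire-force model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Recursive least squares for y = phi * theta + noise, treating theta as the
# unknown parameter (analogous to lateral velocity in a linear tire model).
theta_true = 3.0
theta, P = 0.0, 1000.0  # initial estimate and (scalar) covariance
for _ in range(200):
    phi = rng.uniform(0.5, 1.5)                 # regressor (e.g., a force term)
    y = phi * theta_true + rng.normal(0.0, 0.1) # noisy measurement
    K = P * phi / (1.0 + phi * P * phi)         # gain
    theta += K * (y - phi * theta)              # innovation update
    P = (1.0 - K * phi) * P                     # covariance update
print(theta)
```

Each new measurement refines the estimate without reprocessing past data, which is what makes RLS attractive for on-vehicle identification.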
MARSnet: Mission-aware Autonomous Radar Sensor Network for Future Combat Systems
2007-05-03
Parameter estimation for the 3-parameter log-logistic distribution (LLD3) is discussed, along with applications including physical security, air traffic control, traffic monitoring, video surveillance, and industrial automation.
Optimal Linking Design for Response Model Parameters
ERIC Educational Resources Information Center
Barrett, Michelle D.; van der Linden, Wim J.
2017-01-01
Linking functions adjust for differences between identifiability restrictions used in different instances of the estimation of item response model parameters. These adjustments are necessary when results from those instances are to be compared. As linking functions are derived from estimated item response model parameters, parameter estimation…
Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A
2013-07-01
Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0 f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter-estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter-estimation regions, as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
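The variance-resolution trade-off among PSD estimators can be illustrated with SciPy; the sketch below compares a raw periodogram with Welch's averaged periodogram on a synthetic tone in noise (the multitaper method is omitted, and all signal parameters are made up).

```python
import numpy as np
from scipy.signal import periodogram, welch

rng = np.random.default_rng(3)
fs = 1000.0
t = np.arange(0.0, 4.0, 1.0 / fs)
x = np.sin(2 * np.pi * 100.0 * t) + rng.normal(0.0, 1.0, t.size)

# Raw periodogram vs. Welch's method: segment averaging trades frequency
# resolution for a lower-variance PSD estimate.
f_p, P_p = periodogram(x, fs)
f_w, P_w = welch(x, fs, nperseg=512)

peak_p = f_p[np.argmax(P_p)]
peak_w = f_w[np.argmax(P_w)]
print(peak_p, peak_w)
```

Both estimators locate the 100 Hz tone, but the Welch spectrum is far smoother off-peak; this lower variance is the same property the multitaper method exploits in small parameter-estimation regions.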
Evolution of brain-body allometry in Lake Tanganyika cichlids.
Tsuboi, Masahito; Kotrschal, Alexander; Hayward, Alexander; Buechel, Severine Denise; Zidar, Josefina; Løvlie, Hanne; Kolm, Niclas
2016-07-01
Brain size is strongly associated with body size in all vertebrates. This relationship has been hypothesized to be an important constraint on adaptive brain size evolution. The essential assumption behind this idea is that static (i.e., within species) brain-body allometry has low ability to evolve. However, recent studies have reported mixed support for this view. Here, we examine brain-body static allometry in Lake Tanganyika cichlids using a phylogenetic comparative framework. We found considerable variation in the static allometric intercept, which explained the majority of variation in absolute and relative brain size. In contrast, the slope of the brain-body static allometry had relatively low variation, which explained less variation in absolute and relative brain size compared to the intercept and body size. Further examination of the tempo and mode of evolution of static allometric parameters confirmed these observations. Moreover, the estimated evolutionary parameters indicate that the limited observed variation in the static allometric slope could be a result of strong stabilizing selection. Overall, our findings suggest that the brain-body static allometric slope may represent an evolutionary constraint in Lake Tanganyika cichlids. © 2016 The Author(s).
NASA Technical Reports Server (NTRS)
Macmillan, Daniel S.; Han, Daesoo
1989-01-01
The attitude of the Nimbus-7 spacecraft has varied significantly over its lifetime. A summary of the orbital and long-term behavior of the attitude angles and the effects of attitude variations on Scanning Multichannel Microwave Radiometer (SMMR) brightness temperatures is presented. One of the principal effects of these variations is to change the incident angle at which the SMMR views the Earth's surface. The brightness temperatures depend upon the incident angle sensitivities of both the ocean surface emissivity and the atmospheric path length. Ocean surface emissivity is quite sensitive to incident angle variation near the SMMR incident angle, which is about 50 degrees. This sensitivity was estimated theoretically for a smooth ocean surface and no atmosphere. A 1-degree increase in the angle of incidence produces a 2.9 C increase in the retrieved sea surface temperature and a 5.7 m/sec decrease in retrieved sea surface wind speed. An incident angle correction is applied to the SMMR radiances before using them in the geophysical parameter retrieval algorithms. The corrected retrieval data is compared with data obtained without applying the correction.
Demonstrating the conservation of angular momentum using spherical magnets
NASA Astrophysics Data System (ADS)
Lindén, Johan; Slotte, Joakim; Källman, Kjell-Mikael
2018-01-01
An experimental setup for demonstrating the conservation of angular momentum of rotating spherical magnets is described. Two spherical Nd-Fe-B magnets are placed on a double inclined plane and projected towards each other with pre-selected impact parameters ranging from zero to a few tens of millimeters. After impact, the two magnets either revolve vigorously around the common center of mass or stop immediately, depending on the value of the impact parameter. Using a pick-up coil connected to an oscilloscope, the angular frequency for the rotating magnets was measured, and an estimate for the angular momentum was obtained. A high-speed video camera captured the impact and was used for measuring linear and angular velocities of the magnets. A very good agreement between the initial angular momentum before the impact and the final angular momentum of the revolving dumbbell is observed. The two rotating magnets, and the rotating electromagnetic field emanating from them, can also be viewed as a toy model for the newly discovered gravitational waves, where two black holes collide after revolving around each other. (Enhanced online)
NASA Astrophysics Data System (ADS)
Arab, M.; Khodam-Mohammadi, A.
2018-03-01
As a deformed matter bounce scenario with a dark energy component, we propose one with a running vacuum model (RVM) in which the dark energy density ρ_Λ is written as a power series of H^2 and Ḣ, with a constant equation of state parameter equal to that of the cosmological constant, w = -1. Our analytical and numerical results show that in some cases, as in the ΛCDM bounce scenario, although the spectral index may achieve good consistency with observations, a positive value of the running of the spectral index (α_s) is obtained, which is incompatible with the inflationary paradigm, where a small negative value of α_s is predicted. However, by extending the power series up to H^4, ρ_Λ = n_0 + n_2 H^2 + n_4 H^4, and estimating a set of consistent parameters, we obtain the spectral index n_s, a small negative running α_s, and the tensor-to-scalar ratio r, revealing a degeneracy between the deformed matter bounce scenario with RVM dark energy and inflationary cosmology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jiali; Han, Yuefeng; Stein, Michael L.
2016-02-10
The Weather Research and Forecasting (WRF) model downscaling skill in extreme maximum daily temperature is evaluated by using the generalized extreme value (GEV) distribution. While the GEV distribution has been used extensively in climatology and meteorology for estimating probabilities of extreme events, accurately estimating GEV parameters based on data from a single pixel can be difficult, even with fairly long data records. This work proposes a simple method assuming that the shape parameter, the most difficult of the three parameters to estimate, does not vary over a relatively large region. This approach is applied to evaluate 31-year WRF-downscaled extreme maximum temperature through comparison with North American Regional Reanalysis (NARR) data. Uncertainty in GEV parameter estimates and the statistical significance of the differences in estimates between WRF and NARR are accounted for by conducting bootstrap resampling. Despite certain biases over parts of the United States, overall, WRF shows good agreement with NARR in the spatial pattern and magnitudes of GEV parameter estimates. Both WRF and NARR show a significant increase in extreme maximum temperature over the southern Great Plains and southeastern United States in January and over the western United States in July. The GEV model shows clear benefits from the regionally constant shape parameter assumption, for example, leading to estimates of the location and scale parameters of the model that show coherent spatial patterns.
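Fitting a GEV distribution with the shape parameter held fixed, as in the regionally constant shape assumption, can be sketched with scipy.stats.genextreme; the parameter values below are illustrative, not WRF or NARR estimates.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)

# Simulated block maxima (e.g., annual maximum temperatures at one "pixel").
# Note SciPy's shape convention c = -xi relative to the usual GEV xi.
shape_true, loc_true, scale_true = -0.1, 30.0, 2.0
sample = genextreme.rvs(shape_true, loc=loc_true, scale=scale_true,
                        size=500, random_state=rng)

# Fix the (hard-to-estimate) shape via f0 and fit only location and scale,
# mirroring the regionally constant shape parameter assumption.
c, loc, scale = genextreme.fit(sample, f0=shape_true)
print(c, loc, scale)
```

In the paper's setting, the shared shape would first be estimated from a large region, then held fixed in per-pixel fits of location and scale.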
Bayesian-MCMC-based parameter estimation of stealth aircraft RCS models
NASA Astrophysics Data System (ADS)
Xia, Wei; Dai, Xiao-Xia; Feng, Yuan
2015-12-01
When modeling a stealth aircraft with low RCS (Radar Cross Section), conventional parameter estimation methods may cause a deviation from the actual distribution, owing to the fact that the characteristic parameters are estimated by directly calculating the statistics of the RCS. The Bayesian-Markov Chain Monte Carlo (Bayesian-MCMC) method is introduced herein to estimate the parameters so as to improve the fitting accuracy of fluctuation models. The parameter estimations of the lognormal and the Legendre polynomial models are reformulated in the Bayesian framework. The MCMC algorithm is then adopted to calculate the parameter estimates. Numerical results show that the distribution curves obtained by the proposed method exhibit improved consistency with the actual ones, compared with those fitted by the conventional method. The fitting accuracy could be improved by no less than 25% for both fluctuation models, which implies that the Bayesian-MCMC method might be a good candidate among optimal parameter estimation methods for stealth aircraft RCS models. Project supported by the National Natural Science Foundation of China (Grant No. 61101173), the National Basic Research Program of China (Grant No. 613206), the National High Technology Research and Development Program of China (Grant No. 2012AA01A308), the State Scholarship Fund of the China Scholarship Council (CSC), and the Oversea Academic Training Funds of the University of Electronic Science and Technology of China (UESTC).
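A minimal Metropolis sampler for the scale parameter of a lognormal fluctuation model gives the flavor of the Bayesian-MCMC approach; the single-parameter model, flat prior, and proposal width are simplifying assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "RCS" samples from a lognormal fluctuation model with mu = 0.
data = rng.lognormal(mean=0.0, sigma=0.5, size=400)

def log_post(sigma):
    # Lognormal log-likelihood with mu fixed at 0 and a flat prior on sigma
    if sigma <= 0.0:
        return -np.inf
    return -data.size * np.log(sigma) - np.sum(np.log(data) ** 2) / (2.0 * sigma**2)

sigma, chain = 1.0, []
for _ in range(5000):
    prop = sigma + rng.normal(0.0, 0.05)  # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(sigma):
        sigma = prop                      # Metropolis accept
    chain.append(sigma)

sigma_hat = float(np.mean(chain[1000:]))  # posterior mean after burn-in
print(sigma_hat)
```

Unlike moment matching on the raw RCS statistics, the posterior here is driven by the full likelihood, which is the source of the improved fit the paper reports.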
Pillai, Nikhil; Craig, Morgan; Dokoumetzidis, Aristeidis; Schwartz, Sorell L; Bies, Robert; Freedman, Immanuel
2018-06-19
In mathematical pharmacology, models are constructed to confer a robust method for optimizing treatment. The predictive capability of pharmacological models depends heavily on the ability to track the system and to accurately determine parameters with reference to the sensitivity in projected outcomes. To closely track chaotic systems, one may choose to apply chaos synchronization. An advantageous byproduct of this methodology is the ability to quantify model parameters. In this paper, we illustrate the use of chaos synchronization combined with Nelder-Mead search to estimate parameters of the well-known Kirschner-Panetta model of IL-2 immunotherapy from noisy data. Chaos synchronization with Nelder-Mead search is shown to provide more accurate and reliable estimates than Nelder-Mead search based on an extended least squares (ELS) objective function. Our results underline the strength of this approach to parameter estimation and provide a broader framework of parameter identification for nonlinear models in pharmacology. Copyright © 2018 Elsevier Ltd. All rights reserved.
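Nelder-Mead search over a least-squares objective, the baseline against which chaos synchronization is compared above, can be sketched as follows; the exponential-decay model is purely illustrative and is not the Kirschner-Panetta system.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Noisy observations from an illustrative two-parameter model y = A exp(-k t)
t = np.linspace(0.0, 5.0, 25)
A_true, k_true = 10.0, 0.7
data = A_true * np.exp(-k_true * t) + rng.normal(0.0, 0.05, t.size)

# Derivative-free Nelder-Mead simplex search on a least-squares objective
obj = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)
res = minimize(obj, x0=[5.0, 1.0], method="Nelder-Mead")
A_est, k_est = res.x
print(A_est, k_est)
```

In the paper, the objective is instead built from a synchronization error between the model and the chaotic data, which regularizes the search landscape; the simplex machinery is the same.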
Heisenberg scaling with weak measurement: a quantum state discrimination point of view
2015-03-18
Viewed from a quantum state discrimination standpoint, the Heisenberg scaling of the photon number for the precision of the interaction parameter between coherent light and a spin one-half particle (or pseudo-spin) has a simple interpretation in terms of the interaction rotating the quantum state.
ASYMPTOTICS FOR CHANGE-POINT MODELS UNDER VARYING DEGREES OF MIS-SPECIFICATION
SONG, RUI; BANERJEE, MOULINATH; KOSOROK, MICHAEL R.
2015-01-01
Change-point models are widely used by statisticians to model drastic changes in the pattern of observed data. Least squares/maximum likelihood based estimation of change-points leads to curious asymptotic phenomena. When the change-point model is correctly specified, such estimates generally converge at a fast rate (n) and are asymptotically described by minimizers of a jump process. Under complete mis-specification by a smooth curve, i.e. when a change-point model is fitted to data described by a smooth curve, the rate of convergence slows down to n^(1/3) and the limit distribution changes to that of the minimizer of a continuous Gaussian process. In this paper we provide a bridge between these two extreme scenarios by studying the limit behavior of change-point estimates under varying degrees of model mis-specification by smooth curves, which can be viewed as local alternatives. We find that the limiting regime depends on how quickly the alternatives approach a change-point model. We unravel a family of ‘intermediate’ limits that can transition, at least qualitatively, to the limits in the two extreme scenarios. The theoretical results are illustrated via a set of carefully designed simulations. We also demonstrate how inference for the change-point parameter can be performed in absence of knowledge of the underlying scenario by resorting to subsampling techniques that involve estimation of the convergence rate. PMID:26681814
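Least-squares estimation of a change-point in a piecewise-constant mean can be sketched by scanning all candidate split points; the jump size, noise level, and sample size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)

# Piecewise-constant mean with a jump at tau_true, plus Gaussian noise
n, tau_true = 200, 120
y = np.concatenate([rng.normal(0.0, 1.0, tau_true),
                    rng.normal(1.5, 1.0, n - tau_true)])

def rss(tau):
    # Residual sum of squares when splitting the series at index tau
    left, right = y[:tau], y[tau:]
    return np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)

# Least-squares change-point estimate: the split minimizing the RSS
tau_hat = min(range(5, n - 5), key=rss)
print(tau_hat)
```

Under correct specification (a genuine jump, as here), this estimator is typically within a handful of samples of the true change-point, reflecting the fast rate discussed above.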
NASA Technical Reports Server (NTRS)
1974-01-01
Activities related to the National Geodetic Satellite Program are reported and include a discussion of Ohio State University's OSU275 set of tracking station coordinates and transformation parameters, determination of network distortions, and plans for data acquisition and processing. The problems encountered in the development of the LAGEOS satellite are reported in an account of activities related to the Earth and Ocean Physics Applications Program. The LAGEOS problem involves transmission and reception of the laser pulse designed to make accurate determinations of the earth's crustal and rotational motions. Pulse motion, ephemeris, arc range measurements, and accuracy estimates are discussed in view of the problem. Personnel involved in the two programs are also listed, along with travel activities and reports published to date.
Development of a positive corona from a long grounded wire in a growing thunderstorm field
NASA Astrophysics Data System (ADS)
Mokrov, M. S.; Raizer, Yu P.; Bazelyan, E. M.
2013-11-01
The properties of a non-stationary corona initiated from a long grounded wire suspended horizontally above the ground and coronating in a slowly varying thundercloud electric field are studied. A two-dimensional (2D) model of the corona is developed. On the basis of this model, characteristics of the corona produced by a lightning protection wire are calculated under thunderstorm conditions. The corona characteristics are also found by using approximate analytical and quasi-one-dimensional numerical models. The results of these models agree reasonably well with those obtained from the 2D simulation. This allows one to estimate the corona parameters without recourse to the cumbersome simulation. This work was performed with a view to studying the efficiency of lightning protection wires later on.
Robust design of a 2-DOF GMV controller: a direct self-tuning and fuzzy scheduling approach.
Silveira, Antonio S; Rodríguez, Jaime E N; Coelho, Antonio A R
2012-01-01
This paper presents a study on self-tuning control strategies with generalized minimum variance control in a fixed two-degree-of-freedom structure, or simply GMV2DOF, within two adaptive perspectives. One, from the process model point of view, uses a recursive least squares estimator algorithm for direct self-tuning design; the other uses a Mamdani fuzzy GMV2DOF parameter scheduling technique based on analytical and physical interpretations from a robustness analysis of the system. Both strategies are assessed in simulation and in real-plant experimental environments composed of a damped pendulum and a wind tunnel under development at the Department of Automation and Systems of the Federal University of Santa Catarina. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
Integrating Evolutionary Game Theory into Mechanistic Genotype-Phenotype Mapping.
Zhu, Xuli; Jiang, Libo; Ye, Meixia; Sun, Lidan; Gragnoli, Claudia; Wu, Rongling
2016-05-01
Natural selection has shaped the evolution of organisms toward optimizing their structural and functional design. However, how this universal principle can enhance genotype-phenotype mapping of quantitative traits has remained unexplored. Here we show that the integration of this principle and functional mapping through evolutionary game theory gains new insight into the genetic architecture of complex traits. By viewing phenotype formation as an evolutionary system, we formulate mathematical equations to model the ecological mechanisms that drive the interaction and coordination of its constituent components toward population dynamics and stability. Functional mapping provides a procedure for estimating the genetic parameters that specify the dynamic relationship of competition and cooperation and predicting how genes mediate the evolution of this relationship during trait formation. Copyright © 2016 Elsevier Ltd. All rights reserved.
SPACE FOR AUDIO-VISUAL LARGE GROUP INSTRUCTION.
ERIC Educational Resources Information Center
GAUSEWITZ, CARL H.
WITH AN INCREASING INTEREST IN AND UTILIZATION OF AUDIO-VISUAL MEDIA IN EDUCATION FACILITIES, IT IS IMPORTANT THAT STANDARDS ARE ESTABLISHED FOR ESTIMATING THE SPACE REQUIRED FOR VIEWING THESE VARIOUS MEDIA. THIS MONOGRAPH SUGGESTS SUCH STANDARDS FOR VIEWING AREAS, VIEWING ANGLES, SEATING PATTERNS, SCREEN CHARACTERISTICS AND EQUIPMENT PERFORMANCES…
Preliminary Evaluation of a Commercial 360 Multi-Camera Rig for Photogrammetric Purposes
NASA Astrophysics Data System (ADS)
Teppati Losè, L.; Chiabrando, F.; Spanò, A.
2018-05-01
The research presented in this paper is focused on a preliminary evaluation of a 360 multi-camera rig: the possibilities of using the images acquired by the system in a photogrammetric workflow and for the creation of spherical images are investigated, and different tests and analyses are reported. Particular attention is devoted to different operative approaches for estimating the interior orientation parameters of the cameras, from both an operative and a theoretical point of view. The consistency of the six cameras that compose the 360 system was analysed in depth by adopting a self-calibration approach in a commercial photogrammetric software solution. A 3D calibration field was designed and created, and several topographic measurements were performed in order to obtain a set of control points to enhance and control the photogrammetric process. The influence of the interior parameters of the six cameras was analysed both in the different phases of the photogrammetric workflow (reprojection errors on the single tie points, dense cloud generation, geometrical description of the surveyed object, etc.) and in the stitching of the different images into a single spherical panorama (some considerations on the influence of the camera parameters on the overall quality of the spherical image are also reported in this section).
Feature selection and classification of multiparametric medical images using bagging and SVM
NASA Astrophysics Data System (ADS)
Fan, Yong; Resnick, Susan M.; Davatzikos, Christos
2008-03-01
This paper presents a framework for brain classification based on multi-parametric medical images. This method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction by using a regional feature extraction method which takes into account joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained based on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of parameters involved in classification, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework for building an ensemble classifier and the classification parameters of these base classifiers are optimized by means of maximizing the area under the ROC (receiver operating characteristic) curve estimated from their prediction performance on left-out samples of bootstrap sampling. This classification system is tested on a sex classification problem, where it yields over 90% classification rates for unseen subjects. The proposed classification method is also compared with other commonly used classification algorithms, with favorable results. These results illustrate that the methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.
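The bagging construction described above can be sketched in a toy form: bootstrap resampling of the training set, one base classifier per resample, and majority voting at prediction time. The sketch below substitutes a trivial one-feature threshold stump for the SVM base classifiers of the paper; the out-of-bag samples it leaves aside are the "left-out samples" on which the paper's AUC-based parameter tuning would operate.

```python
# Toy bagging (bootstrap aggregating) sketch with majority voting.
# The threshold-stump base learner is a stand-in for the paper's SVMs.
import random

def fit_stump(xs, ys):
    """Pick the (threshold, sign) minimizing training error on 1-D data."""
    best = (xs[0], 1, float("inf"))
    for t in xs:
        for sign in (1, -1):
            err = sum(1 for x, y in zip(xs, ys)
                      if (1 if sign * (x - t) >= 0 else 0) != y)
            if err < best[2]:
                best = (t, sign, err)
    return best[:2]

def bag(xs, ys, n_models=25, seed=0):
    """Fit one stump per bootstrap resample of the training data."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]  # bootstrap
        models.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return models

def bagged_predict(models, x):
    """Majority vote over the ensemble."""
    votes = sum(1 if s * (x - t) >= 0 else 0 for t, s in models)
    return 1 if 2 * votes >= len(models) else 0
```

In the paper's setting each base classifier is additionally tuned by maximizing the ROC AUC estimated on its out-of-bag samples before the ensemble vote is taken.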
NASA Astrophysics Data System (ADS)
Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.
2011-11-01
The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effect of the pseudo-value for the allocation fraction κ is reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effect of the pseudo-value for the maturity maintenance rate coefficient is insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) requires data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far from the value that maximises reproduction.
We recognise this as the reason why two very different parameter sets must exist that fit most data sets reasonably well, and give arguments why, in most cases, the set with the large value of κ should be preferred. The continued development of a parameter database through the estimation procedures described here will provide a strong basis for understanding evolutionary patterns in metabolic organisation across the diversity of life.
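The WLS side of the comparison above can be sketched in a simplified form: each data set (including single-point zero-variate observations) contributes squared residuals scaled by a weight, and the parameters minimize the combined objective. The sketch below is not the DEB covariation code; it uses a hypothetical linear model y = θ·x, for which the WLS estimate has a closed form.

```python
# Simplified WLS illustration, not the DEB estimation software.
# Each data set is (xs, ys, weight); zero-variate data are length-1 sets.

def wls_objective(theta, datasets, model):
    """Sum of weighted squared residuals over heterogeneous data sets."""
    total = 0.0
    for xs, ys, w in datasets:
        total += w * sum((model(theta, x) - y) ** 2 for x, y in zip(xs, ys))
    return total

def fit_scale(datasets):
    """Closed-form WLS estimate for the linear model y = theta * x."""
    num = sum(w * x * y for xs, ys, w in datasets for x, y in zip(xs, ys))
    den = sum(w * x * x for xs, ys, w in datasets for x, y in zip(xs, ys))
    return num / den
```

In the covariation method the pseudo-data enter the same way: as additional low-weight "data sets" whose influence on the objective shrinks as the real data become more informative.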