NASA Astrophysics Data System (ADS)
Hendricks Franssen, H. J.; Post, H.; Vrugt, J. A.; Fox, A. M.; Baatz, R.; Kumbhar, P.; Vereecken, H.
2015-12-01
Estimation of net ecosystem exchange (NEE) by land surface models is strongly affected by uncertain ecosystem parameters and initial conditions. A possible approach is the estimation of plant functional type (PFT) specific parameters for sites with measurement data like NEE, and the application of those parameters at other sites with the same PFT and no measurements. This upscaling strategy was evaluated in this work for sites in Germany and France. Ecosystem parameters and initial conditions were estimated with NEE time series of one year in length, or of only one season. The DREAM(zs) algorithm was used for the estimation of parameters and initial conditions. DREAM(zs) is not limited to Gaussian distributions and can condition on large time series of measurement data simultaneously. DREAM(zs) was used in combination with the Community Land Model (CLM) v4.5. Parameter estimates were evaluated by model predictions at the same site for an independent verification period. In addition, the parameter estimates were evaluated at other, independent sites situated >500 km away with the same PFT. The main conclusions are: i) simulations with estimated parameters better reproduced the NEE measurement data in the verification periods, including the annual NEE sum (23% improvement), the annual NEE cycle and the average diurnal NEE course (error reduction by a factor of 1.6); ii) estimated parameters based on seasonal NEE data outperformed estimated parameters based on yearly data; iii) those seasonal parameters were often also significantly different from their yearly equivalents; iv) estimated parameters were significantly different if initial conditions were estimated together with the parameters. We conclude that estimated PFT-specific parameters significantly improve land surface model predictions at independent verification sites and for independent verification periods, demonstrating their potential for upscaling. However, the simulation results also indicate that the estimated parameters possibly mask other model errors, which would imply that their application at climatic time scales would not improve model predictions. A central question is whether the integration of many different data streams (e.g., biomass, remotely sensed LAI) could solve the problems indicated here.
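The core of DREAM(zs) is a differential-evolution proposal drawn from an archive of past chain states. The sketch below illustrates that idea under strong simplifications (single chain, no crossover, Gaussian likelihood); `forward_model` is a hypothetical stand-in for a CLM v4.5 run, not an actual interface.

```python
import numpy as np

def log_likelihood(params, nee_obs, forward_model, sigma=1.0):
    """Gaussian log-likelihood of an observed NEE series given parameters.
    `forward_model` is a hypothetical stand-in for a land surface model run."""
    nee_sim = forward_model(params)
    return -0.5 * np.sum(((nee_obs - nee_sim) / sigma) ** 2)

def dream_zs_step(x, archive, log_post, eps=1e-6):
    """One differential-evolution proposal from an archive Z of past states
    (an (n, d) array), the core idea of DREAM(zs), heavily simplified."""
    d = len(x)
    gamma = 2.38 / np.sqrt(2 * d)                     # standard DE jump rate
    a, b = archive[np.random.choice(len(archive), 2, replace=False)]
    proposal = x + gamma * (a - b) + eps * np.random.randn(d)
    if np.log(np.random.rand()) < log_post(proposal) - log_post(x):
        return proposal                               # Metropolis accept
    return x                                          # reject: keep current state
```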
Estimation of chaotic coupled map lattices using symbolic vector dynamics
NASA Astrophysics Data System (ADS)
Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya
2010-01-01
In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original symbolic vector dynamics based method was proposed for initial condition estimation in an additive white Gaussian noise environment. The precision of this estimation method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors using the backward vector and the values estimated from different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. We thereby provide novel analytical techniques for understanding turbulence in coupled map lattices.
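For orientation, the sketch below builds a small diffusively coupled logistic map lattice, observes one site in additive white Gaussian noise, and symbolizes the signal by thresholding. The lattice form, coupling strength, and threshold are illustrative assumptions; the backward-vector error correction itself is not reproduced here.

```python
import numpy as np

def logistic(x, a=3.9):
    return a * x * (1 - x)

def cml_step(x, eps=0.1):
    """One step of a diffusively coupled map lattice with periodic boundaries."""
    f = logistic(x)
    return (1 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

def symbolize(signal, threshold=0.5):
    """Map a received (possibly noisy) signal to a binary symbol sequence.
    Noise-induced symbol errors are what the paper's correction scheme repairs."""
    return (signal > threshold).astype(int)

# toy usage: evolve a 16-site lattice, observe site 0 in AWGN, symbolize
x = np.random.rand(16)                                       # unknown initial condition
series = [x := cml_step(x) for _ in range(50)]
obs = np.array(series)[:, 0] + 0.05 * np.random.randn(50)    # noisy site-0 signal
symbols = symbolize(obs)
```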
NASA Astrophysics Data System (ADS)
Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.
2017-12-01
The one-dimensional analytical runup theory, in combination with near-shore synthetic waveforms, is a promising tool for tsunami rapid early warning systems. Its application to realistic cases with complex bathymetry and an initial wave condition from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplistic bathymetry domains that resemble realistic near-shore features. We investigate the sensitivity of the analytical runup formulae to variation of fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model using a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.
A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models
NASA Astrophysics Data System (ADS)
Keller, J. D.; Bach, L.; Hense, A.
2012-12-01
The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades, three methods (and variations of these methods) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in the mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research, we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding method is a development of the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) given a specific norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of error (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
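A minimal sketch of the self-breeding cycle just described, assuming a generic `model_step` integrator; the QR re-orthogonalization stands in for the ensemble-transform step, and the norm is plain Euclidean rather than a targeted (e.g. convection-specific) norm.

```python
import numpy as np

def self_breed(model_step, x0, n_pert=4, amp=1e-3, n_cycles=20, steps=10):
    """Minimal self-breeding sketch: integrate perturbations forward, rescale
    to the initial amplitude, and re-orthogonalize so the ensemble spans
    several fast-growing directions instead of collapsing onto the leading
    one. `model_step` advances a state vector one time step (an assumption)."""
    d = len(x0)
    perts = amp * np.linalg.qr(np.random.randn(d, n_pert))[0].T
    for _ in range(n_cycles):
        ctrl = x0.copy()
        members = x0 + perts
        for _ in range(steps):                       # short forward integration
            ctrl = model_step(ctrl)
            members = np.array([model_step(m) for m in members])
        grown = members - ctrl                       # bred perturbations
        q, _ = np.linalg.qr(grown.T)                 # orthogonalize in ensemble subspace
        perts = amp * q[:, :n_pert].T                # rescale to the initial norm
        x0 = ctrl                                    # breed along the evolving state
    return perts                                     # estimated fast-growing modes
```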
Geostatistical applications in ground-water modeling in south-central Kansas
Ma, T.-S.; Sophocleous, M.; Yu, Y.-S.
1999-01-01
This paper emphasizes the supportive role of geostatistics in applying ground-water models. Field data of 1994 ground-water level, bedrock, and saltwater-freshwater interface elevations in south-central Kansas were collected and analyzed using the geostatistical approach. Ordinary kriging was adopted to estimate initial conditions for ground-water levels and topography of the Permian bedrock at the nodes of a finite difference grid used in a three-dimensional numerical model. Cokriging was used to estimate initial conditions for the saltwater-freshwater interface. An assessment of uncertainties in the estimated data is presented. The kriged and cokriged estimation variances were analyzed to evaluate the adequacy of data employed in the modeling. Although water levels and bedrock elevations are well described by spherical semivariogram models, additional data are required for better cokriging estimation of the interface data. The geostatistically analyzed data were employed in a numerical model of the Siefkes site in the project area. Results indicate that the computed chloride concentrations and ground-water drawdowns reproduced the observed data satisfactorily.
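For illustration, a compact ordinary-kriging routine with the spherical semivariogram model mentioned above; the sill, range, and nugget values are placeholders, not the fitted Kansas parameters, and cokriging of the interface is not shown.

```python
import numpy as np

def spherical(h, sill=1.0, rng=5000.0, nugget=0.0):
    """Spherical semivariogram model (the model fitted in the study)."""
    h = np.asarray(h, dtype=float)
    g = nugget + sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, nugget + sill)

def ordinary_krige(xy, z, xy0, **vario):
    """Ordinary kriging of one target point xy0 from observations (xy, z).
    Returns the estimate and the kriging variance; parameter values are
    illustrative only."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d, **vario)
    A[-1, -1] = 0.0                                   # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(xy - xy0, axis=1), **vario)
    w = np.linalg.solve(A, b)
    est = w[:n] @ z                                   # kriged estimate at xy0
    var = b @ w                                       # ordinary kriging variance
    return est, var
```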
New formulations for tsunami runup estimation
NASA Astrophysics Data System (ADS)
Kanoglu, U.; Aydin, B.; Ceylan, N.
2017-12-01
We evaluate shoreline motion and maximum runup in two ways. First, we use the linear shallow water-wave equations over a sloping beach and solve them as an initial-boundary value problem, similar to the nonlinear solution of Aydın and Kanoglu (2017, Pure Appl. Geophys., https://doi.org/10.1007/s00024-017-1508-z). The methodology we present here is simple; it involves eigenfunction expansion and, hence, avoids integral transform techniques. We then use several different types of initial wave profiles with and without initial velocity, estimate shoreline properties, and confirm the classical runup invariance between linear and nonlinear theories. Second, we use the nonlinear shallow water-wave solution of Kanoglu (2004, J. Fluid Mech. 513, 363-372) to estimate maximum runup. Kanoglu (2004) presented a simple integral solution for the nonlinear shallow water-wave equations using the classical Carrier and Greenspan transformation, and further reduced shoreline position and velocity to a simpler integral formulation. In addition, Tinti and Tonini (2005, J. Fluid Mech. 535, 33-64) defined an initial condition in a very convenient form for near-shore events. We use a Tinti and Tonini (2005) type initial condition in Kanoglu's (2004) shoreline integral solution, which leads to further simplified estimates for shoreline position and velocity, i.e., an algebraic relation. We then use this algebraic runup estimate to investigate the effect of earthquake source parameters on maximum runup and present results similar to those of Sepulveda and Liu (2016, Coast. Eng. 112, 57-68).
NASA Astrophysics Data System (ADS)
Wang, Daosheng; Zhang, Jicai; He, Xianqiang; Chu, Dongdong; Lv, Xianqing; Wang, Ya Ping; Yang, Yang; Fan, Daidu; Gao, Shu
2018-01-01
Model parameters in suspended cohesive sediment transport models are critical for the accurate simulation of suspended sediment concentrations (SSCs). Difficulties in estimating the model parameters still prevent numerical modeling of sediment transport from achieving a high level of predictability. Based on a three-dimensional cohesive sediment transport model and its adjoint model, the satellite remote sensing data of SSCs during both spring tide and neap tide, retrieved from the Geostationary Ocean Color Imager (GOCI), are assimilated to synchronously estimate four spatially and temporally varying parameters in the Hangzhou Bay in China, including settling velocity, resuspension rate, inflow open boundary conditions and initial conditions. After data assimilation, the model performance is significantly improved. Through several sensitivity experiments, the spatial and temporal variation tendencies of the estimated model parameters are verified to be robust and not affected by model settings. The pattern for the variations of the estimated parameters is analyzed and summarized. The temporal variations and spatial distributions of the estimated settling velocity are negatively correlated with current speed, which can be explained using the combination of the flocculation process and Stokes' law. The temporal variations and spatial distributions of the estimated resuspension rate are also negatively correlated with current speed, which is related to the grain size of the seabed sediments under different current velocities. In addition, the estimated inflow open boundary conditions reach their local maximum values near low water slack conditions, and the estimated initial conditions are negatively correlated with water depth, which is consistent with the general understanding. The relationships between the estimated parameters and the hydrodynamic fields can be suggestive for improving the parameterization in cohesive sediment transport models.
Wang, Hexiang; Barton, Justin E.; Schuster, Eugenio
2015-09-01
The accuracy of the internal states of a tokamak, which usually cannot be measured directly, is of crucial importance for feedback control of the plasma dynamics. A first-principles-driven plasma response model can provide an estimation of the internal states given the boundary conditions on the magnetic axis and at the plasma boundary. However, the estimation would depend highly on initial conditions, which may not always be known, on disturbances, and on non-modeled dynamics. In this work, a closed-loop state observer for the poloidal magnetic flux is proposed based on a very limited set of real-time measurements by following an Extended Kalman Filtering (EKF) approach. Comparisons between estimated and measured magnetic flux profiles are carried out for several discharges in the DIII-D tokamak. The experimental results illustrate the capability of the proposed observer in dealing with incorrect initial conditions and measurement noise.
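A generic EKF predict/update cycle of the kind underlying such an observer, as a hedged sketch: `f`, `h` and their Jacobians `F`, `H` stand in for the first-principles flux evolution model and the measurement map, which are not reproduced here.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an Extended Kalman Filter. f/h are the
    nonlinear model and measurement maps, F/H their Jacobians evaluated at
    the current estimate; Q/R are process and measurement noise covariances
    (all placeholders in this sketch)."""
    # predict
    x_pred = f(x)
    Fx = F(x)
    P_pred = Fx @ P @ Fx.T + Q
    # update with the limited set of real-time measurements z
    Hx = H(x_pred)
    S = Hx @ P_pred @ Hx.T + R                # innovation covariance
    K = P_pred @ Hx.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x_new, P_new
```

Because the update repeatedly pulls the state toward the measurements, an incorrect initial condition is forgotten after a few cycles, which is the behavior the discharge comparisons illustrate.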
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters even when the initial sets of parameter values deviated substantially from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about the design of sampling for parameter estimation in the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front, near a time chosen by adding the inverse of an hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to the passage of the solute front.
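Guidelines (2) and (3) translate directly into sampling times and locations once parameter values are hypothesized. A tiny worked example with invented values:

```python
import numpy as np

# Hypothetical values for illustration only
v = 0.5            # pore velocity [m/d]
x_obs = 100.0      # observation location [m]
lam_src = 0.02     # hypothesized source-decay parameter [1/d]
half_life = 30.0   # solute half-life [d]

# Guideline (2): sample near mean travel time plus 1/lambda_source
t_source = x_obs / v + 1.0 / lam_src          # 100/0.5 + 50 = 250 d
# Guideline (3): sample near the location with travel time sqrt(2) * half-life
x_decay = v * np.sqrt(2.0) * half_life        # ~21 m downstream

print(f"source-decay sampling time  ~ {t_source:.0f} d")
print(f"decay-parameter sampling location ~ {x_decay:.0f} m")
```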
Controlling Chaos Via Knowledge of Initial Condition for a Curved Structure
NASA Technical Reports Server (NTRS)
Maestrello, L.
2000-01-01
The nonlinear response of a flexible curved panel exhibiting bifurcation to fully developed chaos is demonstrated, along with its sensitivity to small perturbations of the initial conditions. The response is determined from the measured time series at two fixed points. The panel is forced by an external nonharmonic multifrequency and monofrequency sound field. A low-power, time-continuous feedback control, carefully tuned at each initial condition, produces large long-term effects on the dynamics toward taming chaos. Without knowledge of the initial conditions, control may be achieved by destructive interference; in this case, the control power is proportional to the loading power. Calculation of the correlation dimension and estimation of positive Lyapunov exponents serve, in practice, as the proof of chaotic response.
UXO Burial Prediction Fidelity
2017-07-01
been developed to predict the initial penetration depth of underwater mines. SERDP would like to know if and how these existing mine models could be ... designed for near-cylindrical mines; for munitions, however, projectile-specific drag, lift, and moment coefficients are needed for estimating ... as inputs. Other models have been built to estimate these initial conditions for mines dropped into water. Can these mine models be useful for ...
NASA Astrophysics Data System (ADS)
Ke, Weiyao; Moreland, J. Scott; Bernhard, Jonah E.; Bass, Steffen A.
2017-10-01
We study the initial three-dimensional spatial configuration of the quark-gluon plasma (QGP) produced in relativistic heavy-ion collisions using centrality and pseudorapidity-dependent measurements of the medium's charged particle density and two-particle correlations. A cumulant-generating function is first used to parametrize the rapidity dependence of local entropy deposition and extend arbitrary boost-invariant initial conditions to nonzero beam rapidities. The model is then compared to p+Pb and Pb+Pb charged-particle pseudorapidity densities and two-particle pseudorapidity correlations and systematically optimized using Bayesian parameter estimation to extract high-probability initial condition parameters. The optimized initial conditions are then compared to a number of experimental observables including the pseudorapidity-dependent anisotropic flows, event-plane decorrelations, and flow correlations. We find that the form of the initial local longitudinal entropy profile is well constrained by these experimental measurements.
Uncertainty Estimation in Tsunami Initial Condition From Rapid Bayesian Finite Fault Modeling
NASA Astrophysics Data System (ADS)
Benavente, R. F.; Dettmer, J.; Cummins, P. R.; Urrutia, A.; Cienfuegos, R.
2017-12-01
It is well known that kinematic rupture models for a given earthquake can present discrepancies even when similar datasets are employed in the inversion process. While quantifying this variability can be critical when making early estimates of the earthquake and triggered tsunami impact, "most likely models" are normally used for this purpose. In this work, we quantify the uncertainty of the tsunami initial condition for the great Illapel earthquake (Mw = 8.3, 2015, Chile). We focus on utilizing data and inversion methods that are suitable to rapid source characterization yet provide meaningful and robust results. Rupture models from teleseismic body and surface waves as well as W-phase are derived and accompanied by Bayesian uncertainty estimates from linearized inversion under positivity constraints. We show that robust and consistent features about the rupture kinematics appear when working within this probabilistic framework. Moreover, by using static dislocation theory, we translate the probabilistic slip distributions into seafloor deformation which we interpret as a tsunami initial condition. After considering uncertainty, our probabilistic seafloor deformation models obtained from different data types appear consistent with each other providing meaningful results. We also show that selecting just a single "representative" solution from the ensemble of initial conditions for tsunami propagation may lead to overestimating information content in the data. Our results suggest that rapid, probabilistic rupture models can play a significant role during emergency response by providing robust information about the extent of the disaster.
Ductile Crack Initiation Criterion with Mismatched Weld Joints Under Dynamic Loading Conditions.
An, Gyubaek; Jeong, Se-Min; Park, Jeongung
2018-03-01
Brittle failure of high toughness steel structures tends to occur after ductile crack initiation/propagation. Damage to steel structures was reported in the Hanshin Great Earthquake, and several brittle failures were observed in beam-to-column connection zones with geometrical discontinuity. It is widely known that triaxial stresses accelerate the ductile fracture of steels. This study examined the effects of geometrical heterogeneity, strength mismatches (both of which elevate plastic constraint due to heterogeneous plastic straining) and loading rate on the critical conditions initiating ductile fracture. This involved applying a two-parameter criterion (involving equivalent plastic strain and stress triaxiality) to estimate ductile cracking for strength-mismatched specimens under static and dynamic tensile loading conditions. Ductile crack initiation testing was conducted under static and dynamic loading conditions using circumferentially notched specimens (Charpy type) with/without strength mismatches. The results indicated that the two-parameter criterion is transferable, i.e., it can evaluate ductile crack initiation independent of the existence of strength mismatches and loading rates.
Inverse analysis of turbidites by machine learning
NASA Astrophysics Data System (ADS)
Naruse, H.; Nakao, K.
2017-12-01
This study proposes a method to estimate the paleo-hydraulic conditions of turbidity currents from ancient turbidites using a machine-learning technique. In this method, numerical simulation is repeated under various initial conditions, producing a data set of characteristic features of turbidites. This data set is then used for supervised training of a deep-learning neural network (NN). Quantities of characteristic features of turbidites in the training data set are given to the input nodes of the NN, and the output nodes are expected to provide estimates of the initial conditions of the turbidity current. The weight coefficients of the NN are then optimized to reduce the root-mean-square difference between the true conditions and the output values of the NN. The empirical relationship between the numerical results and the initial conditions is thus explored, and the discovered relationship is used for inversion of turbidity currents. This machine learning can potentially produce a NN that estimates paleo-hydraulic conditions from data of ancient turbidites. We produced a preliminary implementation of this methodology. A forward model based on 1D shallow-water equations with a correction for the density-stratification effect was employed. This model calculates the behavior of a surge-like turbidity current transporting mixed-size sediment, and outputs the spatial distribution of volume per unit area of each grain-size class on a uniform slope. The grain-size distribution was discretized into 3 classes. The numerical simulation was repeated 1000 times, and thus 1000 beds of turbidites were used as the training data for a NN that has 21000 input nodes and 5 output nodes with two hidden layers. After the machine learning finished, independent simulations were conducted 200 times to evaluate the performance of the NN. In this test, the initial conditions of the validation data were successfully reconstructed by the NN, and the estimated values show very small deviations from the true parameters. Compared to previous inverse modeling of turbidity currents, our methodology is superior especially in computational efficiency. Our methodology also has advantages in extensibility and applicability to various sediment transport processes such as pyroclastic flows or debris flows.
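A scaled-down sketch of this simulate-then-train inversion using scikit-learn; the `forward_model` below is a smooth synthetic stand-in for the 1D shallow-water simulation (210 features rather than the paper's 21000 input nodes), so only the workflow, not the physics, is represented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
BASIS = rng.standard_normal((5, 210))   # fixed projection, stands in for the physics

def forward_model(theta):
    """Stand-in for the turbidity-current simulation: a smooth deterministic
    map from 5 initial-condition parameters to a 210-feature 'deposit'
    (grain-size volumes along the slope). Purely an assumption."""
    return np.tanh(theta @ BASIS)

thetas = rng.uniform(0.0, 1.0, size=(1000, 5))       # 1000 simulated initial conditions
beds = np.array([forward_model(t) for t in thetas])  # corresponding deposits

nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
nn.fit(beds, thetas)                                  # learn deposit -> initial condition

theta_true = rng.uniform(0.0, 1.0, size=5)
theta_hat = nn.predict(forward_model(theta_true).reshape(1, -1))[0]
```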
Estimating Soil Hydraulic Parameters using Gradient Based Approach
NASA Astrophysics Data System (ADS)
Rai, P. K.; Tripathi, S.
2017-12-01
The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates, where the estimates are produced from a forward solution (numerical or analytical) of the differential equation under an assumed set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations when the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can be straightforwardly extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require explicitly setting up initial and boundary conditions, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. The results show that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
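The essence of gradient matching can be shown on a toy ODE: smooth the data, differentiate the smoother, and pick the parameter whose right-hand side best matches the estimated gradient, with no forward solve and no explicit initial or boundary conditions. The sketch below does this for dy/dt = -k*y; extending it to the Richards PDE (the paper's contribution) adds spatial smoothing, but the principle is the same. All values are illustrative.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize_scalar

# Toy data from dy/dt = -k*y with noise (k_true is unknown to the estimator)
k_true = 0.7
t = np.linspace(0.0, 5.0, 60)
y_obs = np.exp(-k_true * t) + 0.01 * np.random.default_rng(1).standard_normal(60)

spline = UnivariateSpline(t, y_obs, s=0.01)       # smooth interpolant of the data
y_s = spline(t)                                   # smoothed state estimate
dy_s = spline.derivative()(t)                     # gradient estimate from the smoother

# Gradient-matching objective: smoothed derivative vs. ODE right-hand side
loss = lambda k: np.sum((dy_s - (-k * y_s)) ** 2)
k_hat = minimize_scalar(loss, bounds=(0.0, 5.0), method="bounded").x
```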
Disruption of State Estimation in the Human Lateral Cerebellum
Miall, R. Chris; Christensen, Lars O. D; Cain, Owen; Stanley, James
2007-01-01
The cerebellum has been proposed to be a crucial component in the state estimation process that combines information from motor efferent and sensory afferent signals to produce a representation of the current state of the motor system. Such a state estimate of the moving human arm would be expected to be used when the arm is rapidly and skillfully reaching to a target. We now report the effects of transcranial magnetic stimulation (TMS) over the ipsilateral cerebellum as healthy humans were made to interrupt a slow voluntary movement to rapidly reach towards a visually defined target. Errors in the initial direction and in the final finger position of this reach-to-target movement were significantly higher for cerebellar stimulation than they were in control conditions. The average directional errors in the cerebellar TMS condition were consistent with the reaching movements being planned and initiated from an estimated hand position that was 138 ms out of date. We suggest that these results demonstrate that the cerebellum is responsible for estimating the hand position over this time interval and that TMS disrupts this state estimate. PMID:18044990
NASA Astrophysics Data System (ADS)
Roozegar, Mehdi; Mahjoob, Mohammad J.; Ayati, Moosa
2017-05-01
This paper deals with adaptive estimation of the unknown parameters and states of a pendulum-driven spherical robot (PDSR), which is a nonlinear in parameters (NLP) chaotic system with parametric uncertainties. Firstly, the mathematical model of the robot is deduced by applying the Newton-Euler methodology for a system of rigid bodies. Then, based on the speed gradient (SG) algorithm, the states and unknown parameters of the robot are estimated online for different step length gains and initial conditions. The estimated parameters are updated adaptively according to the error between estimated and true state values. Since the errors of the estimated states and parameters as well as the convergence rates depend significantly on the value of step length gain, this gain should be chosen optimally. Hence, a heuristic fuzzy logic controller is employed to adjust the gain adaptively. Simulation results indicate that the proposed approach is highly encouraging for identification of this NLP chaotic system even if the initial conditions change and the uncertainties increase; therefore, it is reliable to be implemented on a real robot.
NASA Astrophysics Data System (ADS)
Gaztanaga, Enrique; Fosalba, Pablo
1998-12-01
In Paper I of this series, we introduced the spherical collapse (SC) approximation in Lagrangian space as a way of estimating the cumulants ξ_J of density fluctuations in cosmological perturbation theory (PT). Within this approximation, the dynamics is decoupled from the statistics of the initial conditions, so we are able to present here the cumulants for generic non-Gaussian initial conditions, which can be estimated to arbitrary order including the smoothing effects. The SC model turns out to recover the exact leading-order non-linear contributions up to terms involving non-local integrals of the J-point functions. We argue that for the hierarchical ratios S_J, these non-local terms are subdominant and tend to compensate each other. The resulting predictions show a non-trivial time evolution that can be used to discriminate between models of structure formation. We compare these analytic results with non-Gaussian N-body simulations, which turn out to be in very good agreement up to scales where σ ≲ 1.
Linsell, L; Dawson, J; Zondervan, K; Rose, P; Randall, T; Fitzpatrick, R; Carr, A
2006-02-01
To estimate the national prevalence and incidence of adults consulting for a shoulder condition and to investigate patterns of diagnosis, treatment, consultation and referral 3 yr after initial presentation. Prevalence and incidence rates were estimated for 658,469 patients aged 18 and over in the year 2000 using a primary care database, the IMS Disease Analyzer-Mediplus UK. A cohort of 9215 incident cases was followed-up prospectively for 3 yr beyond the initial consultation. The annual prevalence and incidence of people consulting for a shoulder condition was 2.36% [95% confidence interval (CI) 2.32-2.40%] and 1.47% (95% CI 1.44-1.50%), respectively. Prevalence increased linearly with age whilst incidence peaked at around 50 yr then remained static at around 2%. Around half of the incident cases consulted once only, while 13.6% were still consulting with a shoulder problem during the third year of follow-up. During the 3 yr following initial presentation, 22.4% of patients were referred to secondary care, 30.8% were prescribed non-steroidal anti-inflammatory drugs and 10.6% were given an injection by their general practitioner (GP). GPs tended to use a limited number of generalized codes when recording a diagnosis; just five of 426 possible Read codes relating to shoulder conditions accounted for 74.6% of the diagnoses of new cases recorded by GPs. The prevalence of people consulting for shoulder problems in primary care is substantially lower than community-based estimates of shoulder pain. Most referrals occur within 3 months of initial presentation, but only a minority of patients are referred to orthopaedic specialists or rheumatologists. GPs may lack confidence in applying precise diagnoses to shoulder conditions.
Estimating impervious cover and riparian zone condition in New England watersheds
Under EPA’s Green Infrastructure Initiative, research activities are underway to evaluate the effectiveness of green infrastructure in mitigating the effects of urbanization and stormwater impacts on stream biota and habitat. Preliminary analyses, using impervious cover estimate...
Analytic model to estimate thermonuclear neutron yield in z-pinches using the magnetic Noh problem
NASA Astrophysics Data System (ADS)
Allen, Robert C.
The objective was to build a model that could be used to estimate neutron yield in pulsed z-pinch experiments, benchmark future z-pinch simulation tools, and assist scaling toward breakeven systems. To accomplish this, a recent solution to the magnetic Noh problem was utilized, which incorporates a self-similar solution with cylindrical symmetry and an azimuthal magnetic field (Velikovich, 2012). The self-similar solution provides the conditions needed to calculate the time-dependent implosion dynamics, from which batch burn is assumed and used to calculate neutron yield. The solution to the model is presented. The ion densities and time scales fix the initial mass and implosion velocity, providing estimates of the experimental results given specific initial conditions. Agreement is shown with experimental data (Coverdale, 2007). A parameter sweep was done to find the neutron yield, implosion velocity and gain for a range of densities and time scales for DD reactions, and a curve fit was done to predict the scaling as a function of preshock conditions.
Sensitivity of Forecast Skill to Different Objective Analysis Schemes
NASA Technical Reports Server (NTRS)
Baker, W. E.
1979-01-01
Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.
Stormwater quality modelling in combined sewers: calibration and uncertainty analysis.
Kanso, A; Chebbo, G; Tassin, B
2005-01-01
Estimating the level of uncertainty in urban stormwater quality models is vital for their utilization. This paper presents the results of applying a Markov Chain Monte Carlo method based on Bayesian theory for the calibration and uncertainty analysis of a stormwater quality model commonly used in available software. The tested model uses a hydrologic/hydrodynamic scheme to estimate the accumulation, erosion and transport of pollutants on surfaces and in sewers. It was calibrated for four different initial conditions of in-sewer deposits. Calibration results showed large variability in the model's responses as a function of the initial conditions, and demonstrated that the model's predictive capacity is very low.
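A minimal random-walk Metropolis sampler of the kind at the base of such MCMC calibrations; the posterior function, step size, and chain length are placeholders, and the paper's specific sampler may differ. Running one chain per initial in-sewer deposit condition would reproduce the comparison described above.

```python
import numpy as np

def metropolis_calibrate(log_post, theta0, n_iter=20000, step=0.1, seed=0):
    """Random-walk Metropolis sampler, the simplest member of the MCMC family.
    `log_post` combines the likelihood of observed pollutographs with priors;
    its exact form here is an assumption of this sketch."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_iter, len(theta)))
    for i in range(n_iter):
        cand = theta + step * rng.standard_normal(len(theta))
        lp_cand = log_post(cand)
        if np.log(rng.uniform()) < lp_cand - lp:      # accept/reject
            theta, lp = cand, lp_cand
        chain[i] = theta
    return chain   # posterior samples quantify parameter uncertainty
```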
Almalik, Osama; Nijhuis, Michiel B; van den Heuvel, Edwin R
2014-01-01
Shelf-life estimation usually requires that at least three registration batches are tested for stability at multiple storage conditions. The shelf-life estimates are often obtained by linear regression analysis per storage condition, an approach implicitly suggested by ICH guideline Q1E. A linear regression analysis combining all data from multiple storage conditions was recently proposed in the literature when variances are homogeneous across storage conditions. The combined analysis is expected to perform better than the separate analysis per storage condition, since pooling data would lead to an improved estimate of the variation and higher numbers of degrees of freedom, but this is not evident for shelf-life estimation. Indeed, the two approaches treat the observed initial batch results, the intercepts in the model, and poolability of batches differently, which may eliminate or reduce the expected advantage of the combined approach with respect to the separate approach. Therefore, a simulation study was performed to compare the distribution of simulated shelf-life estimates on several characteristics between the two approaches and to quantify the difference in shelf-life estimates. In general, the combined statistical analysis does estimate the true shelf life more consistently and precisely than the analysis per storage condition, but it did not outperform the separate analysis in all circumstances.
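A hedged sketch of the per-condition shelf-life computation being compared: shelf life taken as the first time the one-sided lower confidence bound on the regression mean crosses the specification limit, in the spirit of ICH Q1E. Pooling then amounts to fitting one model to data from all storage conditions; the specification limit and confidence level below are illustrative.

```python
import numpy as np
from scipy import stats

def shelf_life(t, y, spec=95.0, conf=0.95):
    """Shelf life from a degrading (negative-slope) stability data set:
    the time where the one-sided lower confidence bound of the regression
    mean first falls below the specification limit."""
    n = len(t)
    b, a = np.polyfit(t, y, 1)                        # slope, intercept
    resid = y - (a + b * t)
    s = np.sqrt(resid @ resid / (n - 2))              # residual std. dev.
    tcrit = stats.t.ppf(conf, n - 2)
    grid = np.linspace(0.0, 10 * t.max(), 2000)
    se = s * np.sqrt(1.0 / n + (grid - t.mean()) ** 2 / np.sum((t - t.mean()) ** 2))
    lower = a + b * grid - tcrit * se                 # lower bound on the mean
    below = np.nonzero(lower < spec)[0]
    return grid[below[0]] if below.size else grid[-1]
```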
A priori Estimates for 3D Incompressible Current-Vortex Sheets
NASA Astrophysics Data System (ADS)
Coulombel, J.-F.; Morando, A.; Secchi, P.; Trebeschi, P.
2012-04-01
We consider the free boundary problem for current-vortex sheets in ideal incompressible magneto-hydrodynamics. It is known that current-vortex sheets may be at most weakly (neutrally) stable due to the existence of surface waves solutions to the linearized equations. The existence of such waves may yield a loss of derivatives in the energy estimate of the solution with respect to the source terms. However, under a suitable stability condition satisfied at each point of the initial discontinuity and a flatness condition on the initial front, we prove an a priori estimate in Sobolev spaces for smooth solutions with no loss of derivatives. The result of this paper gives some hope for proving the local existence of smooth current-vortex sheets without resorting to a Nash-Moser iteration. Such result would be a rigorous confirmation of the stabilizing effect of the magnetic field on Kelvin-Helmholtz instabilities, which is well known in astrophysics.
NASA Astrophysics Data System (ADS)
Zheng, Qin; Yang, Zubin; Sha, Jianxin; Yan, Jun
2017-02-01
In predictability problem research, the conditional nonlinear optimal perturbation (CNOP) describes the initial perturbation that satisfies a certain constraint condition and causes the largest prediction error at the prediction time. The CNOP has been successfully applied in estimation of the lower bound of maximum predictable time (LBMPT). Generally, CNOPs are calculated by a gradient descent algorithm based on the adjoint model, which is called ADJ-CNOP. This study, through the two-dimensional Ikeda model, investigates the impacts of nonlinearity on ADJ-CNOP and the corresponding precision problems when using ADJ-CNOP to estimate the LBMPT. Our conclusions are that (1) when the initial perturbation is large or the prediction time is long, the strong nonlinearity of the dynamical model in the prediction variable will lead to failure of the ADJ-CNOP method, and (2) when the objective function has multiple extreme values, ADJ-CNOP has a large probability of producing local CNOPs, hence making a false estimation of the LBMPT. Furthermore, the particle swarm optimization (PSO) algorithm, a kind of intelligent algorithm, is introduced to solve this problem. The method using PSO to compute the CNOP is called PSO-CNOP. The results of numerical experiments show that even with a large initial perturbation and long prediction time, or when the objective function has multiple extreme values, PSO-CNOP can always obtain the global CNOP. Since the PSO algorithm is a heuristic search algorithm based on the population, it can overcome the impact of nonlinearity and the disturbance from multiple extremes of the objective function. In addition, to check the estimation accuracy of the LBMPT presented by PSO-CNOP and ADJ-CNOP, we partition the constraint domain of initial perturbations into sufficiently fine grid meshes and take the LBMPT obtained by the filtering method as a benchmark. The result shows that the estimation presented by PSO-CNOP is closer to the true value than the one by ADJ-CNOP as the forecast time increases.
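The sketch below implements the PSO-CNOP idea on the two-dimensional Ikeda map used in the study: particles search the constraint ball for the initial perturbation maximizing the forecast-time error. The map parameter and PSO hyperparameters are conventional textbook choices, not necessarily the paper's.

```python
import numpy as np

def ikeda(p, u=0.9, steps=10):
    """Iterate the 2D Ikeda map `steps` times (the prediction model)."""
    x, y = p
    for _ in range(steps):
        t = 0.4 - 6.0 / (1.0 + x * x + y * y)
        x, y = (1.0 + u * (x * np.cos(t) - y * np.sin(t)),
                u * (x * np.sin(t) + y * np.cos(t)))
    return np.array([x, y])

def pso_cnop(x0, delta=0.1, n_particles=30, n_iter=200, seed=0):
    """PSO search for the CNOP: the perturbation with norm <= delta that
    maximizes the prediction error at the forecast time."""
    rng = np.random.default_rng(seed)
    base = ikeda(x0)
    cost = lambda d: -np.linalg.norm(ikeda(x0 + d) - base)   # maximize error
    pos = delta * (2 * rng.random((n_particles, 2)) - 1) / np.sqrt(2)
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        norms = np.linalg.norm(pos, axis=1, keepdims=True)
        pos = np.where(norms > delta, pos * delta / norms, pos)  # constraint ball
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest
```

Because the swarm explores the whole constraint domain rather than following a single gradient path, it does not stall in the local extrema that trip up the adjoint approach.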
Battery state-of-charge estimation using approximate least squares
NASA Astrophysics Data System (ADS)
Unterrieder, C.; Zhang, C.; Lunglmayr, M.; Priewasser, R.; Marsili, S.; Huemer, M.
2015-03-01
In recent years, much effort has been spent to extend the runtime of battery-powered electronic applications. In order to improve the utilization of the available cell capacity, high precision estimation approaches for battery-specific parameters are needed. In this work, an approximate least squares estimation scheme is proposed for the estimation of the battery state-of-charge (SoC). The SoC is determined based on the prediction of the battery's electromotive force. The proposed approach allows for an improved re-initialization of the Coulomb counting (CC) based SoC estimation method. Experimental results for an implementation of the estimation scheme on a fuel gauge system on chip are illustrated. Implementation details and design guidelines are presented. The performance of the presented concept is evaluated for realistic operating conditions (temperature effects, aging, standby current, etc.). For the considered test case of a GSM/UMTS load current pattern of a mobile phone, the proposed method is able to re-initialize the CC-method with a high accuracy, while state-of-the-art methods fail to perform a re-initialization.
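A hedged sketch of Coulomb counting with periodic re-initialization, the mechanism the proposed estimator feeds: `emf_to_soc` is a hypothetical inverse-EMF lookup standing in for the approximate-least-squares EMF predictor, and the fixed re-initialization schedule is an assumption.

```python
import numpy as np

def soc_estimate(i_batt, dt, capacity_ah, soc0, v_emf=None, emf_to_soc=None,
                 reinit_every=3600):
    """Coulomb counting (CC) of the state-of-charge with periodic
    re-initialization from an EMF-based SoC estimate. `i_batt` is the
    discharge current [A] sampled every `dt` seconds."""
    soc = np.empty(len(i_batt))
    soc[0] = soc0
    for k in range(1, len(i_batt)):
        # CC integration: discharge current reduces stored charge [Ah]
        soc[k] = soc[k - 1] - i_batt[k] * dt / 3600.0 / capacity_ah
        if emf_to_soc is not None and k % reinit_every == 0:
            soc[k] = emf_to_soc(v_emf[k])   # re-initialize CC from the EMF estimate
    return np.clip(soc, 0.0, 1.0)
```

The periodic correction bounds the drift that pure CC accumulates from current-sensor offset and capacity error, which is the failure mode the paper's re-initialization addresses.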
Data assimilation method based on the constraints of confidence region
NASA Astrophysics Data System (ADS)
Li, Yong; Li, Siming; Sheng, Yao; Wang, Luheng
2018-03-01
The ensemble Kalman filter (EnKF) is a distinguished data assimilation method that is widely used and studied in various fields including meteorology and oceanography. However, due to the limited sample size or an imprecise dynamics model, the forecast error variance is usually easily underestimated, which further leads to the phenomenon of filter divergence. Additionally, the assimilation results of the initial stage are poor if the initial condition settings differ greatly from the true initial state. To address these problems, a variance inflation procedure is usually adopted. In this paper, we propose a new method based on the constraints of a confidence region constructed from the observations, called EnCR, to estimate the inflation parameter of the forecast error variance in the EnKF method. In the new method, the state estimate is more robust to both inaccurate forecast models and initial condition settings. The new method is compared with other adaptive data assimilation methods in the Lorenz-63 and Lorenz-96 models under various model parameter settings. The simulation results show that the new method performs better than the competing methods.
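For orientation, a stochastic EnKF analysis step on Lorenz-63 with multiplicative inflation of the forecast ensemble; here the inflation factor is a constant placeholder, whereas EnCR would estimate it adaptively from a confidence region constructed from the observations.

```python
import numpy as np

def lorenz63(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 model (coarse but sufficient here)."""
    dx = np.array([s * (x[1] - x[0]),
                   x[0] * (r - x[2]) - x[1],
                   x[0] * x[1] - b * x[2]])
    return x + dt * dx

def enkf_step(E, z, H, R, infl=1.1, rng=None):
    """Stochastic EnKF forecast + analysis with multiplicative inflation.
    E: (m, 3) ensemble; z: observation; H: observation operator; R: obs cov.
    e.g. observing only x: H = np.array([[1.0, 0, 0]]), R = np.array([[1.0]])."""
    rng = np.random.default_rng(0) if rng is None else rng
    E = np.array([lorenz63(e) for e in E])            # forecast each member
    mean = E.mean(axis=0)
    E = mean + infl * (E - mean)                      # inflate forecast spread
    A = E - mean
    Pf = A.T @ A / (len(E) - 1)                       # forecast error covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
    for i in range(len(E)):                           # perturbed-observation update
        zi = z + rng.multivariate_normal(np.zeros(len(z)), R)
        E[i] = E[i] + K @ (zi - H @ E[i])
    return E
```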
NASA Astrophysics Data System (ADS)
Gar Alalm, Mohamed; Tawfik, Ahmed; Ookawara, Shinichi
2017-03-01
In this study, a solar photo-Fenton reaction using a compound parabolic collector reactor was assessed for removal of phenol from aqueous solution. The effects of irradiation time, initial concentration, initial pH, and dosage of Fenton reagent were investigated. H2O2 and aromatic intermediates (catechol, benzoquinone, and hydroquinone) were quantified during the reaction to study the pathways of the oxidation process. Complete degradation of phenol was achieved after 45 min of irradiation when the initial concentration was 100 mg/L. However, increasing the initial concentration up to 500 mg/L inhibited the degradation efficiency. The dosages of H2O2 and Fe2+ significantly affected the degradation efficiency of phenol. The observed optimum pH for the reaction was 3.1. Phenol degradation at different concentrations was fitted to pseudo-first-order kinetics according to the Langmuir-Hinshelwood model. A cost estimation for a large-scale reactor was performed. The total cost of the most economic condition achieving maximum degradation of phenol is 2.54 €/m³.
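The pseudo-first-order fit reduces to a one-parameter linear regression of ln(C0/C) against time. A small worked example with invented concentrations (not the paper's data):

```python
import numpy as np

# Pseudo-first-order (Langmuir-Hinshelwood limit at low concentration):
# ln(C0/C) = k_app * t, so k_app is the slope of ln(C0/C) versus time.
t = np.array([0.0, 10.0, 20.0, 30.0, 45.0])       # irradiation time [min]
C = np.array([100.0, 55.0, 30.0, 16.0, 5.0])      # phenol [mg/L], illustrative
k_app = np.polyfit(t, np.log(C[0] / C), 1)[0]     # apparent rate constant [1/min]
half_life = np.log(2.0) / k_app
print(f"k_app = {k_app:.3f} 1/min, t_1/2 = {half_life:.1f} min")
```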
EA 18G Growler Aircraft (EA 18G)
2015-12-01
[Cost-summary table garbled in extraction; recoverable: confidence level of cost estimate for current APB: 50%.] The current estimate recommendation aims to provide sufficient resources to execute the program under normal conditions, encountering average levels of technical ... [Initial PAUC development estimate and production estimate change figures (TY $M) not reliably recoverable.]
Darrington, Richard T; Jiao, Jim
2004-04-01
Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
Prognosis and Conditional Disease-Free Survival Among Patients With Ovarian Cancer
Kurta, Michelle L.; Edwards, Robert P.; Moysich, Kirsten B.; McDonough, Kathleen; Bertolet, Marnie; Weissfeld, Joel L.; Catov, Janet M.; Modugno, Francesmary; Bunker, Clareann H.; Ness, Roberta B.; Diergaarde, Brenda
2014-01-01
Purpose Traditional disease-free survival (DFS) does not reflect changes in prognosis over time. Conditional DFS accounts for elapsed time since achieving remission and may provide more relevant prognostic information for patients and clinicians. This study aimed to estimate conditional DFS among patients with ovarian cancer and to evaluate the impact of patient characteristics. Patients and Methods Patients were recruited as part of the Hormones and Ovarian Cancer Prediction case-control study and were included in the current study if they had achieved remission after a diagnosis of cancer of the ovary, fallopian tube, or peritoneum (N = 404). Demographic and lifestyle information was collected at enrollment; disease, treatment, and outcome information was abstracted from medical records. DFS was calculated using the Kaplan-Meier method. Conditional DFS estimates were computed using cumulative DFS estimates. Results Median DFS was 2.54 years (range, 0.03-9.96 years) and 3-year DFS was 48.2%. The probability of surviving an additional 3 years without recurrence, conditioned on having already survived 1, 2, 3, 4, and 5 years after remission, was 63.8%, 80.5%, 90.4%, 97.0%, and 97.7%, respectively. Initial differences in 3-year DFS at time of remission between age, stage, histology, and grade groups decreased over time. Conclusion DFS estimates for patients with ovarian cancer improved dramatically over time, in particular among those with poorer initial prognoses. Conditional DFS is a more relevant measure of prognosis for patients with ovarian cancer who have already achieved a period of remission, and time elapsed since remission should be taken into account when making follow-up care decisions. PMID:25403208
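Conditional DFS follows from the Kaplan-Meier curve by a single ratio, S(t+3)/S(t). A short sketch with an invented curve shaped like the one described (not the study's data):

```python
import numpy as np

def conditional_dfs(times, surv, t_elapsed, t_ahead=3.0):
    """Conditional disease-free survival from a Kaplan-Meier curve:
    P(DFS > t_elapsed + t_ahead | DFS > t_elapsed) = S(t_elapsed + t_ahead) / S(t_elapsed).
    `times`/`surv` are the KM time grid and survival estimates."""
    S = lambda t: np.interp(t, times, surv)
    return S(t_elapsed + t_ahead) / S(t_elapsed)

# illustrative curve: conditional 3-year DFS rises with time already survived
times = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
surv = np.array([1.0, 0.70, 0.55, 0.48, 0.44, 0.42, 0.41, 0.40, 0.40])
print(conditional_dfs(times, surv, t_elapsed=2.0))   # S(5)/S(2)
```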
NASA Technical Reports Server (NTRS)
Finley, Tom D.; Wong, Douglas T.; Tripp, John S.
1993-01-01
A newly developed technique for enhanced data reduction provides an improved procedure that makes least-squares minimization possible between data sets with unequal numbers of data points. This technique was applied in the Crew and Equipment Translation Aid (CETA) experiment on the STS-37 Shuttle flight in April 1991 to obtain the velocity profile from the acceleration data. The new technique uses a least-squares method to estimate the initial conditions and calibration constants: the initial conditions are estimated by least-squares fitting the displacements indicated by the Hall-effect sensor data to the corresponding displacements obtained from integrating the acceleration data. The velocity and displacement profiles can then be recalculated from the corresponding acceleration data using the estimated parameters. This technique, which enables instantaneous velocities to be obtained from the test data instead of only average velocities at varying discrete times, offers more detailed velocity information, particularly during periods of large acceleration or deceleration.
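A sketch of the estimation idea under simplifying assumptions (constant accelerometer bias, trapezoidal integration): the twice-integrated acceleration is fitted to the sparse Hall-effect displacements by linear least squares, which is well posed even though the two records have unequal numbers of points.

```python
import numpy as np

def estimate_initials(t_acc, acc, t_hall, x_hall):
    """Least-squares estimate of initial position x0, initial velocity v0 and
    a constant accelerometer bias, by fitting twice-integrated acceleration
    to the sparse Hall-sensor displacement marks."""
    # trapezoidal integration of acceleration -> velocity -> displacement
    v = np.concatenate(([0.0], np.cumsum(0.5 * (acc[1:] + acc[:-1]) * np.diff(t_acc))))
    x = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * np.diff(t_acc))))
    xi = np.interp(t_hall, t_acc, x)                  # integrated track at Hall instants
    # model: x_hall = xi + x0 + v0*t + 0.5*bias*t^2
    A = np.column_stack([np.ones_like(t_hall), t_hall, 0.5 * t_hall ** 2])
    x0, v0, bias = np.linalg.lstsq(A, x_hall - xi, rcond=None)[0]
    return x0, v0, bias
```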
Stable boundary conditions and difference schemes for Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Dutt, P.
1985-01-01
The Navier-Stokes equations can be viewed as an incompletely elliptic perturbation of the Euler equations. By using the entropy function for the Euler equations as a measure of energy for the Navier-Stokes equations, it was possible to obtain nonlinear energy estimates for the mixed initial boundary value problem. These estimates are used to derive boundary conditions which guarantee L2 boundedness even when the Reynolds number tends to infinity. Finally, a new difference scheme for modelling the Navier-Stokes equations in multiple dimensions is proposed, for which it is possible to obtain discrete energy estimates exactly analogous to those obtained for the differential equation.
NASA Astrophysics Data System (ADS)
Khade, Vikram; Kurian, Jaison; Chang, Ping; Szunyogh, Istvan; Thyng, Kristen; Montuoro, Raffaele
2017-05-01
This paper demonstrates the potential of ocean ensemble forecasting in the Gulf of Mexico (GoM). The Bred Vector (BV) technique with a one-week rescaling frequency is implemented in a 9 km resolution version of the Regional Ocean Modelling System (ROMS). Numerical experiments are carried out using the HYCOM analysis products to define the initial conditions and the lateral boundary conditions. The growth rates of the forecast uncertainty are estimated to be about 10% of the initial amplitude per week. By carrying out ensemble forecast experiments with and without perturbed surface forcing, it is demonstrated that in the coastal regions accounting for uncertainties in the atmospheric forcing is more important than accounting for uncertainties in the ocean initial conditions. In the Loop Current region, the initial condition uncertainties are the dominant source of the forecast uncertainty. The root-mean-square error of the Lagrangian track forecasts at the 15-day forecast lead time can be reduced by about 10-50 km by using the ensemble mean Eulerian forecast of the oceanic flow for the computation of the tracks, instead of the single-initial-condition Eulerian forecast.
NASA Astrophysics Data System (ADS)
Sun, Li-Sha; Kang, Xiao-Yun; Zhang, Qiong; Lin, Lan-Xin
2011-12-01
Based on symbolic dynamics, a novel computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not all inverse chaotic mapping functions satisfy the contraction mapping condition. It is found that the values in phase space do not always converge on their initial values under sufficient backward iteration of the symbolic vectors, in terms of global convergence or divergence (CD). Both the CD property and the coupling strength are directly related to the mapping function of the existing CML. Furthermore, the CD properties of the Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Various simulation results and the performances of the initial vector estimation with different signal-to-noise ratios (SNRs) are also provided to confirm the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions for estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental results for better understanding and characterizing the behaviours of spatiotemporal chaotic systems.
On the decay of solutions to the 2D Neumann exterior problem for the wave equation
NASA Astrophysics Data System (ADS)
Secchi, Paolo; Shibata, Yoshihiro
We consider the exterior problem in the plane for the wave equation with a Neumann boundary condition and study the asymptotic behavior of the solution for large times. For possible applications, we are interested in a decay estimate which does not involve weighted norms of the initial data. In this paper we prove such an estimate by combining the local energy decay estimate with decay estimates for the free-space solution.
2012-10-12
structure on the evolving storm behaviour. ... Large-scale influences on Rapid Intensification and Extratropical Transition: RI and ET ... assimilation techniques to better initialize and validate TC structures (including the intense inner core and storm asymmetries) consistent with the large ... Without vortex specification, initial conditions usually contain a weak and misplaced circulation. Based on estimates of central pressure and storm size
Parameter identification of thermophilic anaerobic degradation of valerate.
Flotats, Xavier; Ahring, Birgitte K; Angelidaki, Irini
2003-01-01
The considered mathematical model of the decomposition of valerate presents three unknown kinetic parameters, two unknown stoichiometric coefficients, and three unknown initial concentrations for biomass. From a structural identifiability study, we concluded that it is necessary to perform simultaneous batch experiments with different initial conditions to estimate these parameters. Four simultaneous batch experiments were conducted at 55 degrees C, characterized by four different initial acetate concentrations. Product inhibition of valerate degradation by acetate was considered. Practical identification was done by optimizing the sum of the multiple determination coefficients for all measured state variables and all experiments simultaneously. The estimated values of the kinetic parameters and stoichiometric coefficients were characterized by the parameter correlation matrix, the confidence interval, and Student's t-test at the 5% significance level, with positive results except for the saturation constant, for which more experiments should be conducted to improve its identifiability. In this article, we discuss kinetic parameter estimation methods.
NASA Astrophysics Data System (ADS)
Kashiwabara, Takahito
Strong solutions of the non-stationary Navier-Stokes equations under non-linearized slip or leak boundary conditions are investigated. We show that the problems are formulated by a variational inequality of parabolic type, to which uniqueness is established. Using Galerkin's method and deriving a priori estimates, we prove global and local existence for 2D and 3D slip problems respectively. For leak problems, under no-leak assumption at t=0 we prove local existence in 2D and 3D cases. Compatibility conditions for initial states play a significant role in the estimates.
Monitoring vegetation conditions from LANDSAT for use in range management
NASA Technical Reports Server (NTRS)
Haas, R. H.; Deering, D. W.; Rouse, J. W., Jr.; Schell, J. A.
1975-01-01
A summary of the LANDSAT Great Plains Corridor projects and the principal results are presented. Emphasis is given to the use of satellite acquired phenological data for range management and agri-business activities. A convenient method of reducing LANDSAT MSS data to provide quantitative estimates of green biomass on rangelands in the Great Plains is explained. Suggestions for the use of this approach for evaluating range feed conditions are presented. A LANDSAT Follow-on project has been initiated which will employ the green biomass estimation method in a quasi-operational monitoring of range readiness and range feed conditions on a regional scale.
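The data reduction alluded to is a band-ratio vegetation index of the kind these authors helped establish (later known as NDVI); a minimal sketch, assuming calibrated red and near-infrared MSS reflectances rather than raw digital numbers:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red and near-IR bands;
    higher values track greater green biomass on rangelands."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red)

# illustrative pixel reflectances (not LANDSAT MSS digital numbers)
print(ndvi(red=[0.08, 0.12], nir=[0.40, 0.20]))   # greener pixel scores higher
```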
Ray, Joel G.; Bartsch, Emily; Park, Alison L.; Shah, Prakesh S.; Dzakpasu, Susie
2017-01-01
Background: Hypertensive disorders, especially preeclampsia, are the leading reason for provider-initiated preterm birth. We estimated how universal acetylsalicylic acid (ASA) prophylaxis might reduce rates of provider-initiated preterm birth associated with preeclampsia and intrauterine growth restriction, which are related conditions. Methods: We performed a cohort study of singleton hospital births in 2013 in Canada, excluding Quebec. We estimated the proportion of term births and provider-initiated preterm births affected by preeclampsia and/or intrauterine growth restriction, and the corresponding mean maternal and newborn hospital length of stay. We projected the potential number of cases reduced and corresponding hospital length of stay if ASA prophylaxis lowered cases of preeclampsia and intrauterine growth restriction by a relative risk reduction (RRR) of 10% (lowest) or 53% (highest), as suggested by randomized clinical trials. Results: Of the 269 303 singleton live births and stillbirths in our cohort, 4495 (1.7%) were provider-initiated preterm births. Of the 4495, 1512 (33.6%) had a diagnosis of preeclampsia and/or intrauterine growth restriction. The mean maternal length of stay was 2.0 (95% confidence interval [CI] 2.0-2.0) days among term births unaffected by either condition and 7.3 (95% CI 6.1-8.6) days among provider-initiated preterm births with both conditions. The corresponding values for mean newborn length of stay were 1.9 (95% CI 1.8-1.9) days and 21.8 (95% CI 17.4-26.2) days. If ASA conferred a 53% RRR against preeclampsia and/or intrauterine growth restriction, 3365 maternal and 11 591 newborn days in hospital would be averted. If ASA conferred a 10% RRR, 635 maternal and 2187 newborn days in hospital would be averted. Interpretation: A universal ASA prophylaxis strategy could substantially reduce the burden of long maternal and newborn hospital stays associated with provider-initiated preterm birth. However, until there is compelling evidence that administration of ASA to all, or most, pregnant women reduces the risk of preeclampsia and/or intrauterine growth restriction, clinicians should continue to follow current clinical practice guidelines. PMID:28646095
NASA Astrophysics Data System (ADS)
Tkachenko, Ekaterina
2017-11-01
This work presents a hypothesis about the mechanism of bromine activation during polar boundary layer ozone depletion events (ODEs) as well as the mechanism of aerosol formation from frost flowers. The author suggests that ODEs may be initiated by the electric-field gradients created at the sharp tips of ice formations as a result of the combined effect of various environmental conditions. According to the author's estimates, these electric-field gradients may be sufficient for the onset of point or corona discharges followed by generation of high local concentrations of reactive oxygen species and initiation of free-radical and redox reactions. This process may be responsible for the formation of seed bromine, which then undergoes further amplification by the HOBr-driven bromine explosion. The proposed hypothesis may explain the variety of environmental conditions and substrates as well as the poor reproducibility of ODE initiation observed by researchers in the field. According to the author's estimates, high wind can generate conditions sufficient for overcoming the Rayleigh limit and thus can initiate "spraying" of charged aerosol nanoparticles. These charged aerosol nanoparticles can provoke formation of free radicals, turning the ODE on. One can also envision a possible emission of halogen ions as a result of an "electrospray" process analogous to that of electrospray ionization mass spectrometry.
Quick estimate of oil discovery from gas-condensate reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarem, A.M.
1966-10-24
A quick method of estimating the depletion performance of gas-condensate reservoirs is presented through graphical representations. The method is based on correlations reported in the literature and expresses recoverable liquid as a function of gas reserves, producing gas-oil ratio, and initial and final reservoir pressures. The amount of recoverable liquid reserves (RLR) under depletion conditions is estimated from an equation which is given. Where the liquid reserves are in stock-tank barrels and the gas reserves are in Mcf, the arbitrary constant N is calculated from one graphical representation by dividing fractional oil recovery by the initial gas-oil ratio and multiplying by 10^6 for convenience. An equation is given for estimating the coefficient C. These factors (N and C) can be determined from the graphical representations. An example calculation is included.
Automated startup of the MIT research reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwok, K.S.
1992-01-01
This summary describes the development, implementation, and testing of a generic method for performing automated startups of nuclear reactors described by space-independent kinetics under conditions of closed-loop digital control. The technique entails first obtaining a reliable estimate of the reactor's initial degree of subcriticality and then substituting that estimate into a model-based control law so as to permit a power increase from subcritical on a demanded trajectory. The estimation of subcriticality is accomplished by application of the perturbed reactivity method. The shutdown reactor is perturbed by the insertion of reactivity at a known rate. Observation of the resulting period permits determination of the initial degree of subcriticality. A major advantage of this method is that repeated estimates are obtained of the same quantity. Hence, statistical methods can be applied to improve the quality of the calculation.
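The period-to-reactivity step in such a procedure is classically performed with the inhour equation. A hedged sketch follows; the six-group delayed-neutron constants are typical U-235 textbook values and the generation time is illustrative, not MIT research reactor data.

```python
# Hedged sketch: convert an observed stable reactor period T into reactivity
# via the inhour equation, rho = Lambda/T + sum_i beta_i / (1 + lambda_i * T).
import numpy as np

BETA = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
LAMB = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])  # decay consts, 1/s
GEN_TIME = 1.0e-4  # prompt neutron generation time Lambda (s), illustrative

def reactivity_from_period(T):
    """Reactivity (dimensionless) implied by a stable period T in seconds."""
    return GEN_TIME / T + np.sum(BETA / (1.0 + LAMB * T))

print(reactivity_from_period(100.0))  # small positive rho for a long period
```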
Interpreting Repeated Temperature-Depth Profiles for Groundwater Flow
NASA Astrophysics Data System (ADS)
Bense, Victor F.; Kurylyk, Barret L.; van Daal, Jonathan; van der Ploeg, Martine J.; Carey, Sean K.
2017-10-01
Temperature can be used to trace groundwater flows due to thermal disturbances of subsurface advection. Prior hydrogeological studies that have used temperature-depth profiles to estimate vertical groundwater fluxes have either ignored the influence of climate change by employing steady-state analytical solutions or applied transient techniques to study temperature-depth profiles recorded at only a single point in time. Transient analyses of a single profile are predicated on the accurate determination of an unknown profile at some time in the past to form the initial condition. In this study, we use both analytical solutions and a numerical model to demonstrate that boreholes with temperature-depth profiles recorded at multiple times can be analyzed to either overcome the uncertainty associated with estimating unknown initial conditions or to form an additional check for the profile fitting. We further illustrate that the common approach of assuming a linear initial temperature-depth profile can result in significant errors for groundwater flux estimates. Profiles obtained from a borehole in the Veluwe area, Netherlands in both 1978 and 2016 are analyzed for an illustrative example. Since many temperature-depth profiles were collected in the late 1970s and 1980s, these previously profiled boreholes represent a significant and underexploited opportunity to obtain repeat measurements that can be used for similar analyses at other sites around the world.
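As a point of contrast with the repeated-profile approach above, the classical steady-state analysis of a single profile (Bredehoeft and Papadopulos, 1965) fits a thermal Peclet number to the curvature of one temperature-depth profile. A minimal sketch under illustrative values and thermal properties follows; this is the steady-state method the authors critique, not their transient technique.

```python
# Steady-state temperature-depth profile between depths 0 and L with vertical
# groundwater flux q_z; beta = rho_w * c_w * q_z * L / k is the thermal
# Peclet number whose sign and size encode the direction and magnitude of q_z.
import numpy as np
from scipy.optimize import curve_fit

def bp_profile(z, beta, T0, TL, L=100.0):
    return T0 + (TL - T0) * (np.exp(beta * z / L) - 1.0) / (np.exp(beta) - 1.0)

# synthetic stand-in for a measured borehole profile
z_obs = np.linspace(5, 95, 10)
T_obs = bp_profile(z_obs, 1.5, 10.0, 14.0) + 0.02 * np.random.randn(10)

(beta, T0, TL), _ = curve_fit(bp_profile, z_obs, T_obs, p0=[0.5, 9.0, 15.0])
q_z = beta * 2.0 / (4.18e6 * 100.0)  # k=2.0 W/m/K, rho*c=4.18e6 J/m3/K (illustrative)
print("fitted Peclet number:", beta, "; vertical flux (m/s):", q_z)
```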
NASA Astrophysics Data System (ADS)
Durán-Barroso, Pablo; González, Javier; Valdés, Juan B.
2016-04-01
Rainfall-runoff quantification is one of the most important tasks in both engineering and watershed management, as it allows one to identify, forecast, and explain watershed response. For that purpose, the Natural Resources Conservation Service Curve Number method (NRCS CN) is the most widely recognized conceptual lumped model in the field of rainfall-runoff estimation. There is still an ongoing discussion about the procedure to determine the portion of rainfall retained in the watershed before runoff is generated, known as the initial abstraction. This quantity is computed as a ratio (λ) of the soil potential maximum retention S of the watershed. Initially, this ratio was assumed to be 0.2, but it has later been proposed to lower it to 0.05. However, existing procedures for converting NRCS CN model parameters obtained under a different hypothesis about λ do not incorporate any adaptation to the climatic conditions of each watershed. For this reason, we propose a new simple method for computing model parameters that is adapted to local conditions, taking into account regional patterns of climate. After checking the goodness of this procedure against the existing ones in 34 different watersheds located in Ohio and Texas (United States), we concluded that this novel methodology is the most accurate and efficient alternative for refitting the initial abstraction ratio.
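A hedged illustration of the CN runoff equation and the role of λ follows. The λ-conversion shown is the commonly cited Hawkins et al. (2002) relation, included only for context; it is precisely the kind of fixed conversion the paper seeks to replace with a climate-adapted one.

```python
# NRCS CN direct runoff with an explicit initial abstraction ratio lambda.
def runoff(P, CN, lam=0.2):
    """Direct runoff Q (mm) from rainfall P (mm): Q = (P - Ia)^2 / (P - Ia + S)."""
    S = 25400.0 / CN - 254.0        # potential maximum retention (mm)
    Ia = lam * S                    # initial abstraction
    return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

def S_005_from_S_02(S_02):
    """Approximate refit of S from lambda=0.2 to lambda=0.05 (Hawkins et al.
    2002, S in mm): S_0.05 = 1.33 * S_0.2 ** 1.15. Not the paper's method."""
    return 1.33 * S_02 ** 1.15

print(runoff(60.0, 75, lam=0.2), runoff(60.0, 75, lam=0.05))
```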
Hydrologic Evaluation of Landfill Performance (HELP) Model
The program models rainfall, runoff, infiltration, and other water pathways to estimate how much water builds up above each landfill liner. It can incorporate data on vegetation, soil types, geosynthetic materials, initial moisture conditions, slopes, etc.
NASA Astrophysics Data System (ADS)
van Dijk, Albert I. J. M.; Peña-Arancibia, Jorge L.; Wood, Eric F.; Sheffield, Justin; Beck, Hylke E.
2013-05-01
Ideally, a seasonal streamflow forecasting system would ingest skilful climate forecasts and propagate these through calibrated hydrological models initialized with observed catchment conditions. At global scale, practical problems exist in each of these aspects. For the first time, we analyzed theoretical and actual skill in bimonthly streamflow forecasts from a global ensemble streamflow prediction (ESP) system. Forecasts were generated six times per year for 1979-2008 by an initialized hydrological model and an ensemble of 1° resolution daily climate estimates for the preceding 30 years. A post-ESP conditional sampling method was applied to 2.6% of forecasts, based on predictive relationships between precipitation and 1 of 21 climate indices prior to the forecast date. Theoretical skill was assessed against a reference run with historic forcing. Actual skill was assessed against streamflow records for 6192 small (<10,000 km2) catchments worldwide. The results show that initial catchment conditions provide the main source of skill. Post-ESP sampling enhanced skill in equatorial South America and Southeast Asia, particularly in terms of tercile probability skill, due to the persistence and influence of the El Niño Southern Oscillation. Actual skill was on average 54% of theoretical skill but considerably more for selected regions and times of year. The realized fraction of the theoretical skill probably depended primarily on the quality of precipitation estimates. Forecast skill could be predicted as the product of theoretical skill and historic model performance. Increases in seasonal forecast skill are likely to require improvement in the observation of precipitation and initial hydrological conditions.
Yadav, Shankar; Weng, Hsin-Yi
2017-04-04
The study aim was to quantify the impact of movement restriction on the well-being of pigs and the associated mitigation responses during a classical swine fever (CSF) outbreak. We developed a stochastic risk assessment model and incorporated Indiana swine industry statistics to estimate the timing and number of swine premises that would encounter overcrowding or feed interruption resulting from movement restriction. Our model also quantified the amount of on-farm euthanasia and movement of pigs to slaughter plants required to alleviate those conditions. We simulated various single-site (i.e., an outbreak initiated from one location) and multiple-site (i.e., an outbreak initiated from more than one location) outbreak scenarios in Indiana to estimate outputs. The study estimated that 14% of the swine premises in Indiana would encounter overcrowding or feed interruption due to movement restriction implemented during a CSF outbreak. The number of premises that would experience adverse animal welfare conditions was about 2.5 times the number of infected premises. On-farm euthanasia needed to be performed on 33% of those swine premises to alleviate adverse animal welfare conditions, and more than 90% of on-farm euthanasia had to be carried out within 2 weeks after the implementation of movement restriction. Conversely, movement of pigs to slaughter plants could alleviate 67% of adverse animal welfare conditions due to movement restriction, and less than 1% of movements of pigs to slaughter plants had to be initiated in the first 2 weeks of movement restrictions. The risk of secondary outbreaks due to movement of pigs from movement restriction areas to slaughter plants was low, and only seven pigs from each shipment needed to be tested for CSF infection to prevent a secondary outbreak. We found that the scale of adverse animal welfare consequences of movement restriction during a CSF outbreak in Indiana was substantial, and controlled movement of pigs to slaughter plants was an efficient and low-risk alternative mitigation response to on-farm euthanasia. The output estimates generated from this study provide empirical evidence for decision makers to properly incorporate required resources for mitigating adverse animal welfare conditions into CSF outbreak management strategic planning.
Exploration of warm-up period in conceptual hydrological modelling
NASA Astrophysics Data System (ADS)
Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei
2018-01-01
One of the important issues in hydrological modelling is to specify the initial conditions of the catchment since it has a major impact on the response of the model. Although this issue should be a high priority among modelers, it has remained unaddressed by the community. The typical suggested warm-up period for the hydrological models has ranged from one to several years, which may lead to an underuse of data. The model warm-up is an adjustment process for the model to reach an 'optimal' state, where internal stores (e.g., soil moisture) move from the estimated initial condition to an 'optimal' state. This study explores the warm-up period of two conceptual hydrological models, HYMOD and IHACRES, in a southwestern England catchment. A series of hydrologic simulations were performed for different initial soil moisture conditions and different rainfall amounts to evaluate the sensitivity of the warm-up period. Evaluation of the results indicates that both initial wetness and rainfall amount affect the time required for model warm up, although it depends on the structure of the hydrological model. Approximately one and a half months are required for the model to warm up in HYMOD for our study catchment and climatic conditions. In addition, it requires less time to warm up under wetter initial conditions (i.e., saturated initial conditions). On the other hand, approximately six months is required for warm-up in IHACRES, and the wet or dry initial conditions have little effect on the warm-up period. Instead, the initial values that are close to the optimal value result in less warm-up time. These findings have implications for hydrologic model development, specifically in determining soil moisture initial conditions and warm-up periods to make full use of the available data, which is very important for catchments with short hydrological records.
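A minimal sketch of the warm-up experiment design: run the same store model from contrasting initial states under identical forcing and record when the trajectories converge. The bucket model and parameters below are stand-ins, not HYMOD or IHACRES.

```python
# Convergence-based warm-up estimation with a toy soil-moisture bucket.
import numpy as np

def bucket(S0, rain, pet, k=0.05, Smax=150.0, steps=5000):
    S, traj = S0, []
    for t in range(steps):
        S = min(Smax, S + rain[t])          # add rainfall, overflow spills
        S -= min(S, pet[t] * S / Smax)      # moisture-limited evaporation
        S -= k * S                          # linear drainage
        traj.append(S)
    return np.array(traj)

rng = np.random.default_rng(1)
rain = rng.exponential(2.0, 5000) * (rng.random(5000) < 0.3)  # wet ~30% of steps
pet = np.full(5000, 1.5)

dry, wet = bucket(0.0, rain, pet), bucket(150.0, rain, pet)
warmup = np.argmax(np.abs(dry - wet) < 0.01)  # first step within tolerance
print("warm-up length (steps):", warmup)
```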
NASA Astrophysics Data System (ADS)
Peres, David Johnny; Cancelliere, Antonino
2016-04-01
Assessment of shallow landslide hazard is important for appropriate planning of mitigation measures. Generally, the return period of slope instability is assumed as a quantitative metric to map landslide triggering hazard on a catchment. The most commonly applied approach to estimate such a return period consists in coupling a physically based landslide triggering model (hydrological and slope stability) with rainfall intensity-duration-frequency (IDF) curves. Among the drawbacks of such an approach, the following assumptions may be mentioned: (1) prefixed initial conditions, with no regard to their probability of occurrence, and (2) constant-intensity hyetographs. In our work we propose a Monte Carlo simulation approach to investigate the effects of the two above-mentioned assumptions. The approach is based on coupling a physically based hydrological and slope stability model with a stochastic rainfall time series generator. By this methodology a long series of synthetic rainfall data can be generated and given as input to a physically based landslide triggering model, in order to compute the return period of landslide triggering as the mean inter-arrival time of a factor of safety less than one. In particular, we couple the Neyman-Scott rectangular pulses model for hourly rainfall generation with the TRIGRS v.2 unsaturated model for the computation of the transient response to individual rainfall events. Initial conditions are computed by a water table recession model that links the initial conditions of a given event to the final response of the preceding event, thus taking into account the variable inter-arrival time between storms. One thousand years of synthetic hourly rainfall are generated to estimate return periods up to 100 years. Applications are first carried out to map landslide triggering hazard in the Loco catchment, located in a highly landslide-prone area of the Peloritani Mountains, Sicily, Italy. Then a set of additional simulations is performed to compare the results obtained by the traditional IDF-based method with the Monte Carlo ones. Results indicate that both the variability of initial conditions and of intra-event rainfall intensity significantly affect return period estimation. In particular, the common assumption of an initial water table depth at the base of the pervious strata may lead in practice to an overestimation of the return period by up to one order of magnitude, while the assumption of constant-intensity hyetographs may yield an overestimation by a factor of two or three. Hence, it may be concluded that the analysed simplifications involved in the traditional IDF-based approach generally imply a non-conservative assessment of landslide triggering hazard.
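A hedged sketch of the Monte Carlo logic: draw a long synthetic storm series, compute an infinite-slope factor of safety per event with event-dependent wetness, and estimate the return period as the mean inter-arrival time of FS < 1. The rainfall and wetness generators below are simple placeholders for the Neyman-Scott model and TRIGRS; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
years = 1000
n_storms = rng.poisson(20 * years)                  # ~20 storms per year
t_storm = np.sort(rng.uniform(0, years, n_storms))  # storm times (years)
m = np.clip(rng.beta(2, 5, n_storms) * 1.5, 0, 1)   # wetness ratio per storm

# infinite-slope stability: FS = [c + (gamma - m*gamma_w) z cos^2(theta)
#                                 tan(phi)] / (gamma z sin(theta) cos(theta))
c, phi, theta = 5e3, np.deg2rad(30), np.deg2rad(35)   # Pa, rad, rad
gamma, gamma_w, z = 18e3, 9.81e3, 2.0                 # N/m3, N/m3, m
FS = (c + (gamma - m * gamma_w) * z * np.cos(theta) ** 2 * np.tan(phi)) \
     / (gamma * z * np.sin(theta) * np.cos(theta))

trigger_times = t_storm[FS < 1.0]
Tr = np.mean(np.diff(trigger_times)) if trigger_times.size > 1 else np.inf
print("estimated return period (years):", Tr)
```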
NASA Astrophysics Data System (ADS)
Jakacki, Jaromir; Golenko, Mariya
2014-05-01
Two hydrodynamical models (the Princeton Ocean Model (POM) and the Parallel Ocean Program (POP)) have been implemented for the Baltic Sea area containing the locations of chemical munitions dumped during World War II. The models were configured from similar data sources: bathymetry, initial conditions, and external forcings were based on identical data. The horizontal resolutions of the models are also very similar. Several simulations with different initial conditions have been performed. The bottom currents from both models were compared and analysed, and on this basis the dangerous area and critical time were estimated. Lagrangian particle tracking and a passive tracer were also implemented, and from these results the probability of dangerous doses appearing, together with its time evolution, is presented. This work was performed in the frame of the MODUM project, financially supported by NATO.
On the generation of climate model ensembles
NASA Astrophysics Data System (ADS)
Haughton, Ned; Abramowitz, Gab; Pitman, Andy; Phipps, Steven J.
2014-10-01
Climate model ensembles are used to estimate uncertainty in future projections, typically by interpreting the ensemble distribution for a particular variable probabilistically. There are, however, different ways to produce climate model ensembles that yield different results, and therefore different probabilities for a future change in a variable. Perhaps equally importantly, there are different approaches to interpreting the ensemble distribution that lead to different conclusions. Here we use a reduced-resolution climate system model to compare three common ways to generate ensembles: initial conditions perturbation, physical parameter perturbation, and structural changes. Despite these three approaches conceptually representing very different categories of uncertainty within a modelling system, when comparing simulations to observations of surface air temperature they can be very difficult to separate. Using the twentieth century CMIP5 ensemble for comparison, we show that initial conditions ensembles, in theory representing internal variability, significantly underestimate observed variance. Structural ensembles, perhaps less surprisingly, exhibit over-dispersion in simulated variance. We argue that future climate model ensembles may need to include parameter or structural perturbation members in addition to perturbed initial conditions members to ensure that they sample uncertainty due to internal variability more completely. We note that where ensembles are over- or under-dispersive, such as for the CMIP5 ensemble, estimates of uncertainty need to be treated with care.
Adaptive Estimation and Heuristic Optimization of Nonlinear Spacecraft Attitude Dynamics
2016-09-15
[List-of-abbreviations residue: GPS - Global Positioning System; HOUF - Higher Order Unscented Filter; IC - initial conditions; IMM - Interacting Multiple Model; IMU - Inertial Measurement Unit.] ...sources ranging from inertial measurement units to star sensors are used to construct observations for attitude estimation algorithms. The sensor... parameters. A single vector measurement will provide two independent parameters, as a unit vector constraint removes a DOF, making the problem underdetermined
Bianca N. I. Eskelson; Hailemariam Temesgen; Tara M. Barrett
2008-01-01
Many growth and yield simulators require a stand table or tree-list to set the initial condition for projections in time. Most similar neighbour (MSN) approaches can be used for estimating stand tables from information commonly available on forest cover maps (e.g. height, volume, canopy cover, and species composition). Simulations were used to compare MSN (using an...
Gao, Wei; Liu, Yalong; Xu, Bo
2014-12-19
A new algorithm called Huber-based iterated divided difference filtering (HIDDF) is derived and applied to cooperative localization of autonomous underwater vehicles (AUVs) supported by a single surface leader. The position states are estimated using acoustic range measurements relative to the leader, in which disadvantages such as weak observability, large initial error, and measurements contaminated with outliers are inherent. By integrating the merits of both iterated divided difference filtering (IDDF) and Huber's M-estimation methodology, the new filtering method not only achieves more accurate estimation and faster convergence than standard divided difference filtering (DDF) under conditions of weak observability and large initial error, but also exhibits robustness with respect to outlier measurements, for which the standard IDDF would exhibit severe degradation in estimation accuracy. The correctness as well as validity of the algorithm is demonstrated through experiment results.
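A small sketch of the Huber M-estimation ingredient: residuals inside a threshold keep full weight while outliers are down-weighted, which is what confers robustness to contaminated range measurements. This is the generic Huber weighting, not the full iterated divided difference filter.

```python
import numpy as np

def huber_weights(residuals, sigma, k=1.345):
    """Weight per normalized residual: 1 inside [-k, k], k/|r| outside."""
    r = np.abs(residuals) / sigma
    w = np.ones_like(r)
    out = r > k
    w[out] = k / r[out]
    return w

res = np.array([0.1, -0.3, 0.2, 5.0])   # last range residual is an outlier
print(huber_weights(res, sigma=0.25))   # outlier gets strongly down-weighted
```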
Evaluation of Planetary Boundary Layer Scheme Sensitivities for the Purpose of Parameter Estimation
Meteorological model errors caused by imperfect parameterizations generally cannot be overcome simply by optimizing initial and boundary conditions. However, advanced data assimilation methods are capable of extracting significant information about parameterization behavior from ...
NASA Astrophysics Data System (ADS)
Xu, Peiliang
2018-06-01
The numerical integration method has been routinely used by major institutions worldwide, for example, NASA Goddard Space Flight Center and German Research Center for Geosciences (GFZ), to produce global gravitational models from satellite tracking measurements of CHAMP and/or GRACE types. Such Earth's gravitational products have found widest possible multidisciplinary applications in Earth Sciences. The method is essentially implemented by solving the differential equations of the partial derivatives of the orbit of a satellite with respect to the unknown harmonic coefficients under the conditions of zero initial values. From the mathematical and statistical point of view, satellite gravimetry from satellite tracking is essentially the problem of estimating unknown parameters in the Newton's nonlinear differential equations from satellite tracking measurements. We prove that zero initial values for the partial derivatives are incorrect mathematically and not permitted physically. The numerical integration method, as currently implemented and used in mathematics and statistics, chemistry and physics, and satellite gravimetry, is groundless, mathematically and physically. Given the Newton's nonlinear governing differential equations of satellite motion with unknown equation parameters and unknown initial conditions, we develop three methods to derive new local solutions around a nominal reference orbit, which are linked to measurements to estimate the unknown corrections to approximate values of the unknown parameters and the unknown initial conditions. Bearing in mind that satellite orbits can now be tracked almost continuously at unprecedented accuracy, we propose the measurement-based perturbation theory and derive global uniformly convergent solutions to the Newton's nonlinear governing differential equations of satellite motion for the next generation of global gravitational models. Since the solutions are global uniformly convergent, theoretically speaking, they are able to extract smallest possible gravitational signals from modern and future satellite tracking measurements, leading to the production of global high-precision, high-resolution gravitational models. By directly turning the nonlinear differential equations of satellite motion into the nonlinear integral equations, and recognizing the fact that satellite orbits are measured with random errors, we further reformulate the links between satellite tracking measurements and the global uniformly convergent solutions to the Newton's governing differential equations as a condition adjustment model with unknown parameters, or equivalently, the weighted least squares estimation of unknown differential equation parameters with equality constraints, for the reconstruction of global high-precision, high-resolution gravitational models from modern (and future) satellite tracking measurements.
NASA Astrophysics Data System (ADS)
Gajek, Andrzej
2016-09-01
The article presents a diagnostic monitor for assessing brake efficiency under various road conditions in cars equipped with a brake pressure sensor in the ESP system. At present, vehicle brake efficiency is assessed periodically on a test stand, on the basis of brake force measurements, or on the road, on the basis of braking deceleration. The presented method allows the periodic stand tests of the brakes to be complemented by the on-board diagnostics (OBD) system. The first part of the article presents theoretical relations between vehicle deceleration and brake pressure. The influence on the deceleration of vehicle mass, initial braking speed, brake temperature, aerodynamic drag, rolling resistance, engine resistance, state of the road surface, and road slope angle has been analysed, along with the manner of determining these parameters. The results of the initial investigation are presented. The article closes with a strategy for estimating and signalling irregular deceleration values.
Evaluating the effect of smoking cessation treatment on a complex dynamical system.
Bekiroglu, Korkut; Russell, Michael A; Lagoa, Constantino M; Lanza, Stephanie T; Piper, Megan E
2017-11-01
To understand the dynamic relations among tobacco withdrawal symptoms to inform the development of effective smoking cessation treatments. Dynamical system models from control engineering are introduced and utilized to evaluate complex treatment effects. We demonstrate how dynamical models can be used to examine how distinct withdrawal-related processes are related over time and how treatment influences these relations. Intensive longitudinal data from a randomized placebo-controlled smoking cessation trial (N=1504) are used to estimate a dynamical model of withdrawal-related processes including momentary craving, negative affect, quitting self-efficacy, and cessation fatigue for each of six treatment conditions (nicotine patch, nicotine lozenge, bupropion, patch + lozenge, bupropion + lozenge, and placebo). Estimation and simulation results show that (1) withdrawal measurements are interrelated over time, (2) nicotine patch + nicotine lozenge showed reduced cessation fatigue and enhanced self-efficacy in the long-term while bupropion + nicotine lozenge was more effective at reducing negative affect and craving, and (3) although nicotine patch + nicotine lozenge had a better initial effect on cessation fatigue and self-efficacy, nicotine lozenge had a stronger effect on negative affect and nicotine patch had a stronger impact on craving. This approach can be used to provide new evidence illustrating (a) the total impact of treatment conditions (via steady state values) and (b) the total initial impact (via rate of initial change values) on smoking-related outcomes for separate treatment conditions, noting that the conditions that produce the largest change may be different than the conditions that produce the fastest change. Copyright © 2017 Elsevier B.V. All rights reserved.
FracFit: A Robust Parameter Estimation Tool for Anomalous Transport Problems
NASA Astrophysics Data System (ADS)
Kelly, J. F.; Bolster, D.; Meerschaert, M. M.; Drummond, J. D.; Packman, A. I.
2016-12-01
Anomalous transport cannot be adequately described with classical Fickian advection-dispersion equations (ADE). Rather, fractional calculus models may be used, which capture non-Fickian behavior (e.g., skewness and power-law tails). FracFit is a robust parameter estimation tool based on space- and time-fractional models used to model anomalous transport. Currently, four fractional models are supported: 1) the space-fractional advection-dispersion equation (sFADE), 2) the time-fractional dispersion equation with drift (TFDE), 3) the fractional mobile-immobile equation (FMIE), and 4) the tempered fractional mobile-immobile equation (TFMIE); additional models may be added in the future. Model solutions using pulse initial conditions and continuous injections are evaluated using stable distribution PDFs and CDFs or subordination integrals. Parameter estimates are extracted from measured breakthrough curves (BTCs) using a weighted nonlinear least squares (WNLS) algorithm. Optimal weights for BTCs for pulse initial conditions and continuous injections are presented, facilitating the estimation of power-law tails. Two sample applications are analyzed: 1) continuous injection laboratory experiments using natural organic matter and 2) pulse injection BTCs in the Selke River. Model parameters are compared across models and goodness-of-fit metrics are presented, assisting model evaluation. The sFADE and time-fractional models are compared using space-time duality (Baeumer et al., 2009), which links the two paradigms.
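A hedged sketch of the parameter-extraction step: an sFADE-type pulse breakthrough curve has an alpha-stable shape, so its parameters can be fit to a measured BTC by weighted nonlinear least squares. The 1/concentration weighting below is one plausible choice for emphasizing power-law tails; FracFit's exact weights may differ.

```python
import numpy as np
from scipy.stats import levy_stable
from scipy.optimize import least_squares

t_obs = np.linspace(1, 200, 80)
c_obs = levy_stable.pdf(t_obs, 1.6, 1.0, loc=40.0, scale=8.0)  # synthetic BTC

def residuals(theta):
    alpha, beta, loc, scale = theta
    c_mod = levy_stable.pdf(t_obs, alpha, beta, loc=loc, scale=scale)
    w = 1.0 / np.maximum(c_obs, 1e-8)      # weight up low-concentration tails
    return w * (c_mod - c_obs)

fit = least_squares(residuals, x0=[1.8, 0.5, 30.0, 10.0],
                    bounds=([1.01, -1, 0, 1e-3], [2.0, 1, 200, 50]))
print("alpha, beta, loc, scale:", fit.x)
```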
Power Management and Distribution (PMAD) Model Development: Final Report
NASA Technical Reports Server (NTRS)
Metcalf, Kenneth J.
2011-01-01
Power management and distribution (PMAD) models were developed in the early 1990s to model candidate architectures for various Space Exploration Initiative (SEI) missions. They were used to generate "ballpark" component mass estimates to support conceptual PMAD system design studies. The initial set of models was provided to NASA Lewis Research Center (since renamed Glenn Research Center) in 1992. They were developed to estimate the characteristics of power conditioning components predicted to be available in the 2005 timeframe. Early-1990s component and device designs and material technologies were projected forward to the 2005 timeframe, and algorithms reflecting those design and material improvements were incorporated into the models to generate mass, volume, and efficiency estimates for circa 2005 components. The models are about ten years old now and NASA GRC requested a review of them to determine if they should be updated to bring them into agreement with current performance projections or to incorporate unforeseen design or technology advances. This report documents the results of this review and the updated power conditioning models and new transmission line models generated to estimate post-2005 PMAD system masses and sizes. This effort continues the expansion and enhancement of a library of PMAD models developed to allow system designers to assess future power system architectures and distribution techniques quickly and consistently.
Estimating impervious cover and riparian zone condition in New England watersheds.
Under EPA’s Green Infrastructure Initiative, research activities are underway to evaluate the effectiveness of green infrastructure in mitigating the effects of urbanization and stormwater impacts on stream biota and habitat. Preliminary analyses, using impervious cover es...
Effect of Initial Conditions on Reproducibility of Scientific Research
Djulbegovic, Benjamin; Hozo, Iztok
2014-01-01
Background: It is estimated that about half of currently published research cannot be reproduced. Many reasons have been offered as explanations for failure to reproduce scientific research findings - from fraud to issues related to the design, conduct, analysis, or publishing of scientific research. We also postulate a sensitive dependency on initial conditions by which small changes can result in large differences in the research findings when attempted to be reproduced at later times. Methods: We employed a simple logistic regression equation to model the effect of covariates on the initial study findings. We then fed the input from the logistic equation into a logistic map function to model the stability of the results in repeated experiments over time. We illustrate the approach by modeling effects of different factors on the choice of correct treatment. Results: We found that reproducibility of the study findings depended both on the initial values of all independent variables and on the rate of change in the baseline conditions, the latter being more important. When the changes in the baseline conditions vary by about 3.5 to about 4 between experiments, no research findings could be reproduced. However, when the rate of change between the experiments is ≤2.5 the results become highly predictable between the experiments. Conclusions: Many results cannot be reproduced because of changes in the initial conditions between the experiments. Better control of the baseline conditions in between experiments may help improve the reproducibility of scientific findings. PMID:25132705
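A minimal sketch of the paper's two-stage construction, assuming a standard logistic regression feeding the classical logistic map; covariates and coefficients are illustrative. The map's rate parameter r plays the role of the "change in baseline conditions": for r ≤ 2.5 it settles to a stable fixed point (reproducible findings), while for r in roughly [3.57, 4] it is chaotic.

```python
import numpy as np

def initial_p(x, b):
    """Logistic regression: probability of the initial study finding."""
    return 1.0 / (1.0 + np.exp(-(b[0] + x @ b[1:])))

def logistic_map(p0, r, n=100):
    p = p0
    for _ in range(n):
        p = r * p * (1.0 - p)
    return p

x = np.array([0.2, 1.0])          # illustrative covariates
b = np.array([-0.5, 1.2, 0.4])    # illustrative coefficients
p0 = initial_p(x, b)
print(logistic_map(p0, r=2.2))    # stable: converges to (r - 1) / r
print(logistic_map(p0, r=3.9))    # chaotic: sensitive to p0
```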
Assessment of PDF Micromixing Models Using DNS Data for a Two-Step Reaction
NASA Astrophysics Data System (ADS)
Tsai, Kuochen; Chakrabarti, Mitali; Fox, Rodney O.; Hill, James C.
1996-11-01
Although the probability density function (PDF) method is known to treat the chemical reaction terms exactly, its application to turbulent reacting flows has been hampered by the difficulty of modeling the molecular mixing terms satisfactorily. In this study, two PDF molecular mixing models, the linear-mean-square-estimation (LMSE or IEM) model and the generalized interaction-by-exchange-with-the-mean (GIEM) model, are compared with DNS data in decaying turbulence with a two-step parallel-consecutive reaction and two segregated initial conditions: "slabs" and "blobs". Since the molecular mixing model is expected to have a strong effect on the mean values of the chemical species under such initial conditions, the model evaluation is intended to answer the following questions: (1) Can the PDF models predict the mean values of chemical species correctly with completely segregated initial conditions? (2) Is a single molecular mixing timescale sufficient for the PDF models to predict the mean values with different initial conditions? (3) Will the chemical reactions change the molecular mixing timescales of the reacting species enough to affect the accuracy of the models' predictions of the mean values of chemical species?
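A hedged particle-based sketch of the LMSE/IEM model named above: each notional particle's scalar relaxes linearly toward the ensemble mean at a rate set by the mixing timescale. The two-step reaction and DNS comparison are omitted; C_phi = 2 is the customary value.

```python
# IEM/LMSE mixing: d(phi_i)/dt = -(C_phi / (2 * tau_mix)) * (phi_i - <phi>).
import numpy as np

def iem_step(phi, dt, tau_mix, c_phi=2.0):
    return phi - dt * (c_phi / (2.0 * tau_mix)) * (phi - phi.mean())

rng = np.random.default_rng(0)
phi = (rng.random(10000) < 0.5).astype(float)  # segregated "blobs": 0 or 1
for _ in range(200):
    phi = iem_step(phi, dt=0.01, tau_mix=0.5)

# Mean is conserved; variance decays as exp(-c_phi * t / tau_mix). A known
# IEM limitation is that the PDF shape does not relax toward a Gaussian.
print(phi.mean(), phi.var())
```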
The Cauchy Problem in Local Spaces for the Complex Ginzburg-Landau EquationII. Contraction Methods
NASA Astrophysics Data System (ADS)
Ginibre, J.; Velo, G.
We continue the study of the initial value problem for the complex Ginzburg-Landau equation
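The equation display appears to have been lost in extraction. For reference, and as a reconstruction based on the standard form studied in this series of papers rather than a quotation, the complex Ginzburg-Landau equation reads:

```latex
% Standard complex Ginzburg-Landau equation (reconstruction, not a quotation):
\[
  \partial_t u \;=\; \gamma u \;+\; (a + i\alpha)\,\Delta u \;-\; (b + i\beta)\,|u|^{2\sigma} u ,
\]
% with \gamma \ge 0, a > 0, b > 0, and initial data prescribed in local spaces.
```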
Re-evaluating estimates of impervious cover and riparian zone condition in New England watersheds
Under EPA’s Green Infrastructure Initiative, research activities are underway to evaluate the effectiveness of green infrastructure in mitigating the effects of urbanization and stormwater impacts on stream biota and habitat. Preliminary analyses, using impervious cover es...
C-5 Reliability Enhancement and Re-engining Program (C-5 RERP)
2015-12-01
[Extraction residue from acquisition report tables: performance parameters (Time To Climb/Initial Level Off at 837,000 lbs), abbreviations (RCR - Runway Condition Reading; SDD - System Design and Development; SL - Sea Level), and cost-summary figures (Total 7146.6 / 7135.7 / N/A / 6698.0 / 7694.1 / 7510.7 / 7066.6; confidence level of cost estimate).]
Dynamical interpretation of conditional patterns
NASA Technical Reports Server (NTRS)
Adrian, R. J.; Moser, R. D.; Moin, P.
1988-01-01
While great progress is being made in characterizing the 3-D structure of organized turbulent motions using conditional averaging analysis, there is a lack of theoretical guidance regarding the interpretation and utilization of such information. Questions concerning the significance of the structures, their contributions to various transport properties, and their dynamics cannot be answered without recourse to appropriate dynamical governing equations. One approach which addresses some of these questions uses the conditional fields as initial conditions and calculates their evolution from the Navier-Stokes equations, yielding valuable information about stability, growth, and longevity of the mean structure. To interpret statistical aspects of the structures, a different type of theory which deals with the structures in the context of their contributions to the statistics of the flow is needed. As a first step toward this end, an effort was made to integrate the structural information from the study of organized structures with a suitable statistical theory. This is done by stochastically estimating the two-point conditional averages that appear in the equation for the one-point probability density function, and relating the structures to the conditional stresses. Salient features of the estimates are identified, and the structure of the one-point estimates in channel flow is defined.
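A hedged sketch of the stochastic-estimation step described above: the conditional average <u'|E> is approximated linearly from unconditional two-point correlations, A = <E E^T>^{-1} <E u'>, which amounts to ordinary least squares. Data here are synthetic placeholders for event and field measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n, ne = 20000, 3
E = rng.standard_normal((n, ne))               # event data (e.g., velocity at x)
true_A = np.array([0.8, -0.2, 0.05])
u = E @ true_A + 0.3 * rng.standard_normal(n)  # field at x' to be estimated

# linear stochastic estimate of the conditional average: <E E^T> A = <E u'>
A = np.linalg.solve(E.T @ E / n, E.T @ u / n)
u_cond = E @ A                                 # estimate of <u'(x') | E>
print(A)
```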
Rapid and accurate estimation of release conditions in the javelin throw.
Hubbard, M; Alaways, L W
1989-01-01
We have developed a system to measure initial conditions in the javelin throw rapidly enough to be used by the thrower for feedback in performance improvement. The system consists of three subsystems whose main tasks are: (A) acquisition of automatically digitized high-speed (200 Hz) video x, y position data for the first 0.1-0.2 s of the javelin flight after release; (B) estimation of five javelin release conditions from the x, y position data; and (C) graphical presentation to the thrower of these release conditions and a simulation of the subsequent flight, together with optimal conditions and flight for the same release velocity. The estimation scheme relies on a simulation model and is at least an order of magnitude more accurate than previously reported measurements of javelin release conditions. The system provides, for the first time ever in any throwing event, the ability to critique nearly instantly, in a precise and quantitative manner, the crucial factors in the throw which determine the range. This should be expected to lead to much greater control and consistency of throwing variables by athletes who use the system, and could even lead to an evolution of new throwing techniques.
Leachate concentrations from water leach and column leach tests on fly ash-stabilized soils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bin-Shafique, S.; Benson, C.H.; Edil, T.B.
2006-01-15
Batch water leaching tests (WLTs) and column leaching tests (CLTs) were conducted on coal-combustion fly ashes, soil, and soil-fly ash mixtures to characterize leaching of Cd, Cr, Se, and Ag. The concentrations of these metals were also measured in the field at two sites where soft fine-grained soils were mechanically stabilized with fly ash. Concentrations in leachate from the WLTs on soil-fly ash mixtures are different from those on fly ash alone and cannot be accurately estimated based on linear dilution calculations using concentrations from WLTs on fly ash alone. The concentration varies nonlinearly with fly ash content due to the variation in pH with fly ash content. Leachate concentrations are low when the pH of the leachate or the cation exchange capacity (CEC) of the soil is high. Initial concentrations from CLTs are higher than concentrations from WLTs due to differences in solid-liquid ratio, pH, and solid-liquid contact. However, both exhibit similar trends with fly ash content, leachate pH, and soil properties. Scaling factors can be applied to WLT concentrations (50 for Ag and Cd, 10 for Cr and Se) to estimate initial concentrations for CLTs. Concentrations in leachate collected from the field sites were generally similar to or slightly lower than concentrations measured in CLTs on the same materials. Thus, CLTs appear to provide a good indication of conditions that occur in the field provided that the test conditions mimic the field conditions. In addition, initial concentrations in the field can be conservatively estimated from WLT concentrations using the aforementioned scaling factors provided that the pH of the infiltrating water is near neutral.
Estimation and Control of Nonlinear and Hybrid Systems with Applications to Air-to-Air Guidance
1989-03-31
...systems, flexible structures [13], and, last but not least, macroeconomic models [5]. Hence, some conditions for determining the stability of hybrid... equation being defined almost everywhere, the propagation of the characteristic function is introduced. Denote the characteristic function of p_a as phi_a(v,w; ...). The initial condition is phi_a(v,w; x_0, z_0; 0) = e^{j v x_0} e^{j w z_0}, and the auxiliary conditions correspond to (13), i.e., phi_a(0,0; t | x_0, z_0; 0) = 1. Similar to the case for the
NASA Astrophysics Data System (ADS)
Beilina, L.; Cristofol, M.; Li, S.; Yamamoto, M.
2018-01-01
We consider an inverse problem of reconstructing two spatially varying coefficients in an acoustic equation of hyperbolic type using interior data of solutions with suitable choices of initial condition. Using a Carleman estimate, we prove Lipschitz stability estimates which ensure unique reconstruction of both coefficients. Our theoretical results are justified by numerical studies on the reconstruction of two unknown coefficients using noisy backscattered data.
NASA Technical Reports Server (NTRS)
Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C. S.
1985-01-01
The cluster correlation function ξ_c(r) is compared with the particle correlation function ξ(r) in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, ξ_c and ξ are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of ξ_c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), ξ_c is steeper than ξ, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of ξ_c found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales.
NASA Astrophysics Data System (ADS)
Pan, Chaofeng; Dai, Wei; Chen, Liao; Chen, Long; Wang, Limei
2017-10-01
With serious environmental pollution in our cities combined with the ongoing depletion of oil resources, electric vehicles are becoming highly favored as a means of transport, not only for their low noise but also for their high energy efficiency and zero emissions. The power battery is used as the energy source of electric vehicles. However, it still has a few shortcomings, notably low energy density, high cost, and short cycle life, which result in limited mileage compared with conventional passenger vehicles. Vehicle energy consumption rates differ greatly under different environments and driving conditions, and the estimation error of current driving-range methods is relatively large because the effects of environmental temperature and driving conditions are not considered. The development of a driving range estimation method would therefore have a great impact on electric vehicles. A new driving range estimation model based on the combination of driving cycle identification and prediction is proposed and investigated. This model can effectively reduce mileage errors and has good convergence with added robustness. First, driving cycles are identified using kernel principal component feature parameters and the fuzzy C-means clustering algorithm. Second, a fuzzy rule between the characteristic parameters and energy consumption is established in the MATLAB/Simulink environment. Furthermore, a Markov algorithm and a BP (back propagation) neural network are utilized to predict future driving conditions to improve the accuracy of the remaining range estimation. Finally, the driving range estimation method is evaluated under the ECE 15 cycle on a rotary drum test bench, and the experimental results are compared with the estimation results. The results show that the proposed method can not only estimate the remaining mileage but also eliminate fluctuations of the residual range under different driving conditions.
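A compact fuzzy C-means sketch for the driving-cycle identification stage: feature vectors (e.g., kernel-PCA components of speed segments) are softly assigned to cycle classes. This is the generic FCM algorithm, not the authors' exact implementation; the data are synthetic placeholders.

```python
import numpy as np

def fcm(X, n_clusters=3, m=2.0, iters=100, seed=0):
    """Fuzzy C-means: returns membership matrix U and cluster centers C."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)             # normalized fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return U, C

X = np.vstack([np.random.randn(50, 2) + mu for mu in ([0, 0], [4, 0], [0, 4])])
U, C = fcm(X)
labels = U.argmax(axis=1)   # hard labels for the identified driving cycles
```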
Drouillard, Antoine; Bouvier, Anne-Marie; Rollot, Fabien; Faivre, Jean; Jooste, Valérie; Lepage, Côme
2015-07-01
Traditionally, survival estimates have been reported as survival from the time of diagnosis. A patient's probability of survival changes according to time elapsed since the diagnosis and this is known as conditional survival. The aim was to estimate 5-year net conditional survival in patients with colorectal cancer in a well-defined French population at yearly intervals up to 5 years. Our study included 18,300 colorectal cancers diagnosed between 1976 and 2008 and registered in the population-based digestive cancer registry of Burgundy (France). We calculated conditional 5-year net survival, using the Pohar Perme estimator, for every additional year survived after diagnosis from 1 to 5 years. The initial 5-year net survival estimates varied between 89% for stage I and 9% for advanced stage cancer. The corresponding 5-year net survival for patients alive after 5 years was 95% and 75%. Stage II and III patients who survived 5 years had a similar probability of surviving 5 more years, respectively 87% and 84%. For survivors after the first year following diagnosis, five-year conditional net survival was similar regardless of age class and period of diagnosis. For colorectal cancer survivors, conditional net survival provides relevant and complementary prognostic information for patients and clinicians. Copyright © 2015 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
i-SVOC -- A simulation program for indoor SVOCs (Version 1.0)
Program i-SVOC estimates the emissions, transport, and sorption of semivolatile organic compounds (SVOCs) in the indoor environment as functions of time when a series of initial conditions is given. This program implements a framework for dynamic modeling of indoor SVOCs develope...
Simulation Program i-SVOC User’s Guide
This document is the User’s Guide for computer program i-SVOC, which estimates the emissions, transport, and sorption of semivolatile organic compounds (SVOCs) in the indoor environment as a function of time when a series of initial conditions is given. This program implements a ...
Hydrologic Engineering in Planning,
1981-04-01
through abstraction of losses; 3) Transform precipitation excess to streamflow; 4) Estimate other contributions in order to obtain the total runoff... similar to those of surface entry, transmission ability and storage capacity and are illustrated in Figure 4.3. The initial losses are the losses that... [Figure 4.3 residue: plots of uniform losses and soil transmission rate versus time under average and antecedent wet/dry conditions, with a soil characteristics legend.]
2014-01-01
Background Of the estimated 800,000 adults living with HIV in Zambia in 2011, roughly half were receiving antiretroviral therapy (ART). As treatment scale up continues, information on the care provided to patients after initiating ART can help guide decision-making. We estimated retention in care, the quantity of resources utilized, and costs for a retrospective cohort of adults initiating ART under routine clinical conditions in Zambia. Methods Data on resource utilization (antiretroviral [ARV] and non-ARV drugs, laboratory tests, outpatient clinic visits, and fixed resources) and retention in care were extracted from medical records for 846 patients who initiated ART at ≥15 years of age at six treatment sites between July 2007 and October 2008. Unit costs were estimated from the provider’s perspective using site- and country-level data and are reported in 2011 USD. Results Patients initiated ART at a median CD4 cell count of 145 cells/μL. Fifty-nine percent of patients initiated on a tenofovir-containing regimen, ranging from 15% to 86% depending on site. One year after ART initiation, 75% of patients were retained in care. The average cost per patient retained in care one year after ART initiation was $243 (95% CI, $194-$293), ranging from $184 (95% CI, $172-$195) to $304 (95% CI, $290-$319) depending on site. Patients retained in care one year after ART initiation received, on average, 11.4 months’ worth of ARV drugs, 1.5 CD4 tests, 1.3 blood chemistry tests, 1.4 full blood count tests, and 6.5 clinic visits with a doctor or clinical officer. At all sites, ARV drugs were the largest cost component, ranging from 38% to 84% of total costs, depending on site. Conclusions Patients initiate ART late in the course of disease progression and a large proportion drop out of care after initiation. The quantity of resources utilized and costs vary widely by site, and patients utilize a different mix of resources under routine clinical conditions than if they were receiving fully guideline-concordant care. Improving retention in care and guideline concordance, including increasing the use of tenofovir in first-line ART regimens, may lead to increases in overall treatment costs. PMID:24684772
NASA Astrophysics Data System (ADS)
Witzany, V.; Jefremov, P.
2018-06-01
Context. When a black hole is accreting well below the Eddington rate, a geometrically thick, radiatively inefficient state of the accretion disk is established. There is a limited number of closed-form physical solutions for geometrically thick (nonselfgravitating) toroidal equilibria of perfect fluids orbiting a spinning black hole, and these are predominantly used as initial conditions for simulations of accretion in the aforementioned mode. However, different initial configurations might lead to different results and thus observational predictions drawn from such simulations. Aims: We aim to expand the known equilibria by a number of closed multiparametric solutions with various possibilities of rotation curves and geometric shapes. Then, we ask whether choosing these as initial conditions influences the onset of accretion and the asymptotic state of the disk. Methods: We have investigated a set of examples from the derived solutions in detail; we analytically estimate the growth of the magneto-rotational instability (MRI) from their rotation curves and evolve the analytically obtained tori using the 2D magneto-hydrodynamical code HARM. Properties of the evolutions are then studied through the mass, energy, and angular-momentum accretion rates. Results: The rotation curve has a decisive role in the numerical onset of accretion in accordance with our analytical MRI estimates: in the first few orbital periods, the average accretion rate is linearly proportional to the initial MRI rate in the toroids. The final state obtained from any initial condition within the studied class after an evolution of ten or more orbital periods is mostly qualitatively identical and the quantitative properties vary within a single order of magnitude. The average values of the energy of the accreted fluid have an irregular dependency on initial data, and in some cases fluid with energies many times its rest mass is systematically accreted.
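A hedged sketch of the analytical MRI estimate mentioned above: for a hydrodynamic rotation profile Omega(R), the fastest MRI growth rate is commonly taken as gamma_max = (1/2)|dOmega/d ln R|. The power-law rotation curve below is illustrative, not one of the paper's tori.

```python
import numpy as np

R = np.linspace(6.0, 30.0, 200)          # radius in gravitational radii
q = 1.7                                  # Omega ~ R^-q (q = 1.5 is Keplerian)
Omega = R ** (-q)

dOmega_dlnR = np.gradient(Omega, np.log(R))
gamma_mri = 0.5 * np.abs(dOmega_dlnR)    # local fastest growth rate
print("growth rate / local Omega:", (gamma_mri / Omega)[0])  # equals q/2
```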
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benkovitz, C.M.
Sulfur emissions from volcanoes originate in areas of volcanic activity, are extremely variable in time, and can be released anywhere from ground level to the stratosphere. Previous estimates of global sulfur emissions from all sources by various authors have included estimates for emissions from volcanic activity. In general, these global estimates of sulfur emissions from volcanoes are given as global totals for an "average" year. A project has been initiated at Brookhaven National Laboratory to compile inventories of sulfur emissions from volcanoes. In order to complement the GEIA inventories of anthropogenic sulfur emissions, which represent conditions circa specific years, sulfur emissions from volcanoes are being estimated for the years 1985 and 1990.
Space-time asymptotics of the two dimensional Navier-Stokes flow in the whole plane
NASA Astrophysics Data System (ADS)
Okabe, Takahiro
2018-01-01
We consider the space-time behavior of the two-dimensional Navier-Stokes flow. Introducing some qualitative structure of the initial data, we derive the first-order asymptotic expansion of the Navier-Stokes flow without a moment condition on the initial data in L^1(R^2) ∩ L^2_σ(R^2). Moreover, we characterize the necessary and sufficient condition for the rapid energy decay ‖u(t)‖_2 = o(t^{-1}) as t → ∞, motivated by Miyakawa-Schonbek [21]. By weighted estimates in Hardy spaces, we discuss the possibility of the second-order asymptotic expansion of the Navier-Stokes flow assuming the first-order moment condition on the initial data. Moreover, observing that the Navier-Stokes flow u(t) lies in the Hardy space H^1(R^2) for t > 0, we consider the asymptotic expansions in terms of the Hardy norm. Finally we consider the rapid time decay ‖u(t)‖_2 = o(t^{-3/2}) as t → ∞ with the cyclic symmetry introduced by Brandolese [2].
Ecology and thermal inactivation of microbes in and on interplanetary space vehicle components
NASA Technical Reports Server (NTRS)
Reyes, A. L.; Campbell, J. E.
1976-01-01
The heat resistance of Bacillus subtilis var. niger was measured from 85 to 125 C using moisture levels of %RH ≤ 0.001 to 100. Curves are presented which characterize thermal destruction using thermal death times defined as F values at a given combination of three moisture and temperature conditions. The times required at 100 C for reductions of 99.99% of the initial population were estimated for the three moisture conditions. The linear model (from which estimates of D are obtained) was satisfactory for estimating thermal death times (%RH ≤ 0.07) in the plate count range. Estimates based on observed thermal death times and D values for %RH = 100 diverged, so that D values generally gave a more conservative estimate over the temperature range 90 to 125 C. Estimates of Z_F and Z_L ranged from 32.1 to 58.3 C for %RH of ≤ 0.07 and 100. A Z_D = 30.0 was obtained for data observed at %RH ≤ 0.07.
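The relations behind these quantities are standard first-order inactivation kinetics; a small worked sketch (illustrative numbers only, not the paper's data) showing how a 99.99% reduction corresponds to 4 D-values and how a z-value follows from D-values at two temperatures:

```python
import numpy as np

# First-order thermal inactivation: log10(N0/N) decades of kill require
# that many D-values; a 99.99% (4-log) reduction therefore needs 4*D.
D_100C = 12.0                      # illustrative D-value at 100 C, minutes
F_9999 = 4 * D_100C                # time for a 99.99% reduction at 100 C
print(F_9999)                      # 48.0 minutes

# z-value: temperature rise that reduces D tenfold,
#   z = (T2 - T1) / log10(D1 / D2)
D_90C, D_125C = 60.0, 1.1          # illustrative D-values, minutes
z = (125.0 - 90.0) / np.log10(D_90C / D_125C)
print(round(z, 1))                 # ~20.2 C for these made-up values
```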
Price, Stephen F.; Payne, Antony J.; Howat, Ian M.; Smith, Benjamin E.
2011-01-01
We use a three-dimensional, higher-order ice flow model and a realistic initial condition to simulate dynamic perturbations to the Greenland ice sheet during the last decade and to assess their contribution to sea level by 2100. Starting from our initial condition, we apply a time series of observationally constrained dynamic perturbations at the marine termini of Greenland’s three largest outlet glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier. The initial and long-term diffusive thinning within each glacier catchment is then integrated spatially and temporally to calculate a minimum sea-level contribution of approximately 1 ± 0.4 mm from these three glaciers by 2100. Based on scaling arguments, we extend our modeling to all of Greenland and estimate a minimum dynamic sea-level contribution of approximately 6 ± 2 mm by 2100. This estimate of committed sea-level rise is a minimum because it ignores mass loss due to future changes in ice sheet dynamics or surface mass balance. Importantly, > 75% of this value is from the long-term, diffusive response of the ice sheet, suggesting that the majority of sea-level rise from Greenland dynamics during the past decade is yet to come. Assuming similar and recurring forcing in future decades and a self-similar ice dynamical response, we estimate an upper bound of 45 mm of sea-level rise from Greenland dynamics by 2100. These estimates are constrained by recent observations of dynamic mass loss in Greenland and by realistic model behavior that accounts for both the long-term cumulative mass loss and its decay following episodic boundary forcing. PMID:21576500
Generalized Forchheimer Flows of Isentropic Gases
NASA Astrophysics Data System (ADS)
Celik, Emine; Hoang, Luan; Kieu, Thinh
2018-03-01
We consider generalized Forchheimer flows of either isentropic gases or slightly compressible fluids in porous media. By using Muskat's and Ward's general form of the Forchheimer equations, we describe the fluid dynamics by a doubly nonlinear parabolic equation for the appropriately defined pseudo-pressure. The volumetric flux boundary condition is converted to a time-dependent Robin-type boundary condition for this pseudo-pressure. We study the corresponding initial boundary value problem, and estimate the L^∞ and W^{1,2-a} (with 0 < a < 1) norms of the solution.
QCD matter thermalization at the RHIC and the LHC
NASA Astrophysics Data System (ADS)
Xu, Zhe; Cheng, Luan; El, Andrej; Gallmeister, Kai; Greiner, Carsten
2009-06-01
Employing a perturbative-QCD-inspired parton cascade, we investigate kinetic and chemical equilibration of the partonic matter created in central heavy ion collisions at RHIC and LHC energies. Two types of initial conditions are chosen. One is generated by the wounded-nucleon model using the PYTHIA event generator and Glauber geometry. The other is taken to be a color glass condensate. We show that kinetic equilibration is almost independent of the chosen initial conditions, whereas there is a sensitive dependence for chemical equilibration. The time scale of thermalization lies between 1 and 1.5 fm/c. The final parton transverse energy obtained from BAMPS calculations is compared with the RHIC data and is estimated for the LHC energy.
Production of Methane and Water from Crew Plastic Waste
NASA Technical Reports Server (NTRS)
Captain, Janine; Santiago, Eddie; Parrish, Clyde; Strayer, Richard F.; Garland, Jay L.
2008-01-01
Recycling is a technology that will be key to creating a self-sustaining lunar outpost. The plastics used for food packaging provide a source of material that could be recycled to produce water and methane. The recycling of these plastics will require some additional resources, mainly oxygen, energy, and mass, that will affect the initial estimate of starting materials that will have to be transported from Earth. These requirements will vary depending on the recycling conditions, and the degradation products of these plastics will vary under different atmospheric conditions. An estimate of the production rate of methane and water using typical ISRU processes along with the plastic recycling will be presented.
On degenerate coupled transport processes in porous media with memory phenomena
NASA Astrophysics Data System (ADS)
Beneš, Michal; Pažanin, Igor
2018-06-01
In this paper we prove the existence of weak solutions to degenerate parabolic systems arising from fully coupled moisture movement, solute transport of dissolved species, and heat transfer through porous materials. Physically relevant mixed Dirichlet-Neumann boundary conditions and initial conditions are considered. Existence of a global weak solution of the problem is proved by means of semidiscretization in time, proving the necessary uniform estimates, and passing to the limit from the discrete approximations. Degeneration occurs in the nonlinear transport coefficients, which are not assumed to be bounded below and above by positive constants. The degeneracies in the transport coefficients are overcome by proving suitable a priori $L^{\infty}$-estimates based on the De Giorgi and Moser iteration techniques.
On the effects of subvirial initial conditions and the birth temperature of R136
NASA Astrophysics Data System (ADS)
Caputo, Daniel P.; de Vries, Nathan; Portegies Zwart, Simon
2014-11-01
We investigate the effect of different initial virial temperatures, Q, on the dynamics of star clusters. We find that the virial temperature has a strong effect on many aspects of the resulting system, including, among others, the fraction of bodies escaping from the system, the depth of the collapse of the system, and the strength of the mass segregation. These differences mean that the practice of using `cold' initial conditions is no longer a simple choice of convenience. The choice of initial virial temperature must be carefully considered, as its impact on the remainder of the simulation can be profound. We discuss the pitfalls and aim to describe the general behaviour of the collapse and the resultant system as a function of the virial temperature, so that a well-reasoned choice of initial virial temperature can be made. We make a correction to the previous theoretical estimate for the minimum radius, R_min, of the cluster at the deepest moment of collapse to include a Q dependency, R_min ≈ Q + N^{-1/3}, where N is the number of particles. We use our numerical results to infer more about the initial conditions of the young cluster R136. Based on our analysis, we find that R136 was likely formed with a rather cool, but not cold, initial virial temperature (Q ≈ 0.13). Using the same analysis method, we examined 15 other young clusters and found the most common initial virial temperature to be between 0.18 and 0.25.
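As a quick illustration of the corrected estimate, using the relation quoted above with made-up inputs:

```python
# R_min ≈ Q + N**(-1/3): depth of collapse (in units of the initial radius)
# for a cluster with initial virial temperature Q and N particles.
def r_min(Q, N):
    return Q + N**(-1.0 / 3.0)

print(r_min(0.13, 10_000))   # ~0.176: a cool (Q ≈ 0.13), R136-like case
print(r_min(0.0, 10_000))    # ~0.046: fully cold collapse is much deeper
```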
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anikovsky, V.V.; Karzov, G.P.; Timofeev, B.T.
The paper demonstrates the insufficiency of some requirements of the native (RF) Norms when comparing them with foreign requirements for the consideration of calculating situations: (1) leak before break (LBB); (2) short cracks; (3) preliminary loading (warm prestressing). In particular, the paper presents: (1) comparison of native and foreign normative requirements (PNAE G-7-002-86, ASME Code, BS 1515, KTA) on permissible stress levels and specifically on the estimation of crack initiation and propagation; (2) comparison of RF and USA norms for pressure vessel material acceptance, and also data from pressure vessel hydrotests; (3) comparison of norms on the presence of defects (RF and USA) in NPP vessels and development of defect schematization rules, with the foundation of a calculated defect (semi-axis correlation a/b) for pressure vessel and piping components; (4) sequence of defect estimation (growth of initial defects and critical crack sizes) proceeding from the LBB concept; (5) analysis of crack initiation and propagation conditions according to the acting norms (including crack jumps); (6) the necessity to correct estimation methods of ultimate states of brittle and ductile fracture and the elastic-plastic region as applied to the calculating situations (a) LBB and (b) short cracks; (7) the necessity to correct estimation methods of ultimate states with consideration of static and cyclic loading (warm prestressing effect) of the pressure vessel, and estimation of the stability of the effect; (8) proposals on corrections to the PNAE G-7-002-86 Norms.
Training to estimate blood glucose and to form associations with initial hunger
Ciampolini, Mario; Bianchi, Riccardo
2006-01-01
Background The will to eat is a decision associated with conditioned responses and with unconditioned body sensations that reflect changes in metabolic biomarkers. Here, we investigate whether this decision can be delayed until blood glucose is allowed to fall to low levels, when presumably feeding behavior is mostly unconditioned. Following such an eating pattern might avoid some of the metabolic risk factors that are associated with high glycemia. Results In this 7-week study, patients were trained to estimate their blood glucose at meal times by associating feelings of hunger with glycemic levels determined by standard blood glucose monitors and to eat only when glycemia was < 85 mg/dL. At the end of the 7-week training period, estimated and measured glycemic values were found to be linearly correlated in the trained group (r = 0.82; p = 0.0001) but not in the control (untrained) group (r = 0.10; p = 0.40). Fewer subjects in the trained group were hungry than those in the control group (p = 0.001). The 18 hungry subjects of the trained group had significantly lower glucose levels (80.1 ± 6.3 mg/dL) than the 42 hungry control subjects (89.2 ± 10.2 mg/dL; p = 0.01). Moreover, the trained hungry subjects estimated their glycemia (78.1 ± 6.7 mg/dL; estimation error: 3.2 ± 2.4% of the measured glycemia) more accurately than the control hungry subjects (75.9 ± 9.8 mg/dL; estimation error: 16.7 ± 11.0%; p = 0.0001). Also the estimation error of the entire trained group (4.7 ± 3.6%) was significantly lower than that of the control group (17.1 ± 11.5%; p = 0.0001). A value of glycemia at initial feelings of hunger was provisionally identified as 87 mg/dL. Below this level, estimation showed lower error in both trained (p = 0.04) and control subjects (p = 0.001). Conclusion Subjects could be trained to accurately estimate their blood glucose and to recognize their sensations of initial hunger at low glucose concentrations. These results suggest that it is possible to make a behavioral distinction between unconditioned and conditioned hunger, and to achieve a cognitive will to eat by training. PMID:17156448
The isentropic quantum drift-diffusion model in two or three space dimensions
NASA Astrophysics Data System (ADS)
Chen, Xiuqing
2009-05-01
We investigate the isentropic quantum drift-diffusion model, a fourth order parabolic system, in space dimensions d = 2, 3. First, we establish the existence of global weak solutions for large initial data and periodic boundary conditions. Then we show the semiclassical limit by delicate interpolation estimates and a compactness argument.
Tree mortality risk of oak due to gypsy moth
K.W. Gottschalk; J.J. Colbert; D.L. Feicht
1998-01-01
We present prediction models for estimating tree mortality resulting from gypsy moth, Lymantria dispar, defoliation in mixed oak, Quercus sp., forests. These models differ from previous work by including defoliation as a factor in the analysis. Defoliation intensity, initial tree crown condition (crown vigour), crown position, and...
Under EPA’s Green Infrastructure Initiative, a variety of research activities are underway to evaluate the effectiveness of green infrastructure in mitigating the effects of urbanization and stormwater impacts on stream biota and habitat. Effectiveness of both site-scale st...
NASA Astrophysics Data System (ADS)
Ostrikov, V. N.; Plakhotnikov, O. V.
2014-12-01
Using an extensive body of experimental data, we examine whether the initial data of a hyperspectral airborne survey can be converted into spectral radiance factors (SRF). The errors of external calibration are estimated for various observation conditions and different data-receiving instruments.
1983-12-01
[OCR residue of report front matter; List of Figures, Figure 1: System Identification of the Aerothermodynamic Environment of...] The Space Transportation System (STS) has offered the engineering community a unique opportunity to flight test a reentry, hypersonic vehicle. The key to the Shuttle's ... of the system (Refs. 7, 8, 9, 10). Although the initial test flights have now been completed, data analysis and expansion of the existing data base
NASA Astrophysics Data System (ADS)
Raju, P. V. S.; Potty, Jayaraman; Mohanty, U. C.
2011-09-01
Comprehensive sensitivity analyses of the physical parameterization schemes of the Weather Research and Forecasting (WRF-ARW core) model have been carried out for the prediction of the track and intensity of tropical cyclones, taking the example of cyclone Nargis, which formed over the Bay of Bengal and hit Myanmar on 02 May 2008, causing widespread damage in terms of human and economic losses. The model performances are also evaluated with different initial conditions at 12 h intervals, starting from cyclogenesis to near the landfall time. The initial and boundary conditions for all the model simulations are drawn from the global operational analysis and forecast products of the National Centers for Environmental Prediction (NCEP-GFS), available to the public at 1° lon/lat resolution. The results of the sensitivity analyses indicate that a combination of the non-local parabolic-type exchange coefficient PBL scheme of Yonsei University (YSU), the deep and shallow convection scheme with mass flux approach for cumulus parameterization (Kain-Fritsch), and the NCEP operational cloud microphysics scheme with diagnostic mixed phase processes (Ferrier) predicts better track and intensity compared with the Joint Typhoon Warning Center (JTWC) estimates. Further, the final choice of physical parameterization schemes selected from the above sensitivity experiments is used for model integration with different initial conditions. The results reveal that the cyclone track, intensity, and time of landfall are well simulated by the model, with an average intensity error of about 8 hPa, maximum wind error of 12 m s^-1, and track error of 77 km. The simulations also show that the landfall time error and intensity error decrease with delayed initial conditions, suggesting that the model forecast is more dependable when the cyclone approaches the coast. The distribution and intensity of rainfall are also well simulated by the model and comparable with the TRMM estimates.
Anelone, Anet J N; Spurgeon, Sarah K
2017-02-01
It is demonstrated that the reachability paradigm from variable structure control theory is a suitable framework to monitor and predict the progression of human immunodeficiency virus (HIV) infection following initiation of antiretroviral therapy (ART). A manifold is selected which characterises the infection-free steady state. A model of HIV infection together with an associated reachability analysis is used to formulate a dynamical condition for the containment of HIV infection on the manifold. This condition is tested using data from two different HIV clinical trials which contain measurements of the CD4+ T cell count and HIV load in the peripheral blood collected from HIV infected individuals for the six month period following initiation of ART. The biological rates of the model are estimated using the multi-point identification method and data points collected in the initial period of the trial. Using the parameter estimates and the numerical solutions of the model, the predictions of the reachability analysis are shown to be consistent with the clinical diagnosis at the conclusion of the trial. The methodology captures the dynamical characteristics of eventual successful, failed, and marginal outcomes. The findings show that the reachability analysis is an appropriate tool to monitor and develop personalised antiretroviral treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coble, Jamie; Orton, Christopher; Schwantes, Jon
The Multi-Isotope Process (MIP) Monitor provides an efficient approach to monitoring the process conditions in used nuclear fuel reprocessing facilities to support process verification and validation. The MIP Monitor applies multivariate analysis to gamma spectroscopy of reprocessing streams in order to detect small changes in the gamma spectrum, which may indicate changes in process conditions. This research extends the MIP Monitor by characterizing a used fuel sample after initial dissolution according to the type of reactor of origin (pressurized or boiling water reactor), initial enrichment, burn up, and cooling time. Simulated gamma spectra were used to develop and test three fuel characterization algorithms. The classification and estimation models employed are based on the partial least squares regression (PLS) algorithm. A PLS discriminant analysis model was developed which perfectly classified reactor type. Locally weighted PLS models were fitted on-the-fly to estimate continuous fuel characteristics. Burn up was predicted within 0.1% root mean squared percent error (RMSPE) and both cooling time and initial enrichment within approximately 2% RMSPE. This automated fuel characterization can be used to independently verify operator declarations of used fuel characteristics and inform the MIP Monitor anomaly detection routines at later stages of the fuel reprocessing stream to improve sensitivity to changes in operational parameters and material diversions.
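A minimal sketch of the two model types named above, with synthetic stand-ins for the simulated gamma spectra (scikit-learn's PLSRegression is assumed; the locally weighted, on-the-fly refit is reduced to a single global fit for brevity):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                 # synthetic spectra: 200 samples, 50 channels
reactor = (X[:, 1] > 0).astype(int)            # hypothetical labels tied to one feature
burnup = 50 + 10 * X[:, 0] + rng.normal(scale=0.5, size=200)  # synthetic GWd/tU

# PLS discriminant analysis: regress a class indicator on the spectra,
# then threshold the continuous prediction.
plsda = PLSRegression(n_components=5).fit(X, reactor.astype(float))
pred_class = (plsda.predict(X).ravel() > 0.5).astype(int)
print("classification accuracy:", np.mean(pred_class == reactor))

# Continuous characteristic (burn up) via ordinary PLS regression;
# root mean squared percent error (RMSPE) as in the text.
pls = PLSRegression(n_components=5).fit(X, burnup)
rmspe = np.sqrt(np.mean(((pls.predict(X).ravel() - burnup) / burnup) ** 2)) * 100
print("burn up RMSPE (%):", round(rmspe, 2))
```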
Clement, Matthew; O'Keefe, Joy M; Walters, Brianne
2015-01-01
While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for estimating abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive an estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that our estimator is asymptotically unbiased with low bias, narrow confidence intervals, and good coverage, given a modest survey effort. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species that have a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.
Improving the quality of parameter estimates obtained from slug tests
Butler, J.J.; McElwee, C.D.; Liu, W.
1996-01-01
The slug test is one of the most commonly used field methods for obtaining in situ estimates of hydraulic conductivity. Despite its prevalence, this method has received criticism from many quarters in the ground-water community. This criticism emphasizes the poor quality of the estimated parameters, a condition that is primarily a product of the somewhat casual approach that is often employed in slug tests. Recently, the Kansas Geological Survey (KGS) has pursued research directed at improving methods for the performance and analysis of slug tests. Based on extensive theoretical and field research, a series of guidelines have been proposed that should enable the quality of parameter estimates to be improved. The most significant of these guidelines are: (1) three or more slug tests should be performed at each well during a given test period; (2) two or more different initial displacements (H0) should be used at each well during a test period; (3) the method used to initiate a test should enable the slug to be introduced in a near-instantaneous manner and should allow a good estimate of H0 to be obtained; (4) data-acquisition equipment that enables a large quantity of high quality data to be collected should be employed; (5) if an estimate of the storage parameter is needed, an observation well other than the test well should be employed; (6) the method chosen for analysis of the slug-test data should be appropriate for site conditions; (7) use of pre- and post-analysis plots should be an integral component of the analysis procedure; and (8) appropriate well construction parameters should be employed. Data from slug tests performed at a number of KGS field sites demonstrate the importance of these guidelines.
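As an illustration of what the analysis step involves, here is a sketch of one common slug-test analysis, the Hvorslev method; the formula form and all numbers are illustrative assumptions, not the KGS guidelines themselves:

```python
import numpy as np

# Hvorslev-type estimate:
#   K = r_c**2 * ln(L_e / R_w) / (2 * L_e * t_37)
# where t_37 is the time for the normalized head H(t)/H0 to fall to 0.37.
r_c, R_w, L_e = 0.05, 0.05, 3.0            # casing radius, well radius, screen length (m)

t = np.array([0.0, 10.0, 20.0, 40.0, 80.0, 160.0])    # s, illustrative record
H = np.array([1.0, 0.78, 0.61, 0.37, 0.14, 0.02])     # normalized head H(t)/H0

t37 = np.interp(0.37, H[::-1], t[::-1])    # time at which H/H0 = 0.37 (H is decreasing)
K = r_c**2 * np.log(L_e / R_w) / (2 * L_e * t37)
print(f"t37 = {t37:.0f} s, K = {K:.2e} m/s")
```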
Stable forming conditions and geometrical expansion of L-shape rings in ring rolling process
NASA Astrophysics Data System (ADS)
Quagliato, Luca; Berti, Guido A.; Kim, Dongwook; Kim, Naksoo
2018-05-01
Based on previous research results concerning the radial-axial ring rolling of flat rings, this paper details an innovative approach for determining the stable forming conditions needed to successfully simulate the radial ring rolling of L-shape profiled rings. In addition, an analytical model for estimating the geometrical expansion of L-shape rings from their initial flat ring preform is proposed and validated by comparing its results with those of numerical simulations. By utilizing the proposed approach, steady forming conditions could be achieved, ensuring a uniform expansion of the ring throughout the process for all six tested cases of rings, with final outer flange diameters ranging from 545 mm to 1440 mm. The validation of the proposed approach allows us to conclude that the geometrical expansion of the ring, as estimated by the analytical model, is in good agreement with the results of the numerical simulation, with maximum errors of 2.18% in the estimation of the ring wall diameter, 1.42% for the ring flange diameter, and 1.87% for the inner diameter of the ring, respectively.
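The core of any such geometrical expansion model is volume constancy. A toy sketch for a flat rectangular ring at constant height (a simplification; the paper's analytical model for L-shape profiles is more elaborate, and these dimensions are made up):

```python
# Volume constancy: pi/4 * (Do^2 - Di^2) * h is preserved while the wall is
# thinned, so the mean diameter must grow. Solve for the outer diameter given
# a target wall thickness t = (Do - Di) / 2 and constant ring height h.
def expanded_ring(Do0, Di0, t_final):
    A0 = Do0**2 - Di0**2                 # cross-section area * 4/pi, height constant
    # With Di = Do - 2*t:  Do^2 - (Do - 2t)^2 = 4*t*Do - 4*t^2 = A0
    Do = (A0 + 4 * t_final**2) / (4 * t_final)
    return Do, Do - 2 * t_final

Do, Di = expanded_ring(Do0=0.40, Di0=0.30, t_final=0.03)   # metres, illustrative
print(f"outer {Do*1000:.0f} mm, inner {Di*1000:.0f} mm")    # wall 50 -> 30 mm grows the ring
```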
Using Empirical Data to Estimate Potential Functions in Commodity Markets: Some Initial Results
NASA Astrophysics Data System (ADS)
Shen, C.; Haven, E.
2017-12-01
This paper focuses on estimating real and quantum potentials from financial commodities. The log returns of six common commodities are considered. We find that some phenomena, such as vertical potential walls and the time-scale dependence of the variation of returns, also exist in commodity markets. By comparing the quantum and classical potentials, we attempt to demonstrate that the information within these two types of potentials is different. We believe this empirical result is consistent with the theoretical assumption that quantum potentials (when embedded into social science contexts) may contain some social cognitive or market psychological information, while classical potentials mainly reflect `hard' market conditions. We also compare the two potential forces and explore their relationship by simply estimating the Pearson correlation between them. The medium or weak interaction effect may indicate that the cognitive system among traders may be affected by those `hard' market conditions.
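For concreteness, a Bohm-style quantum potential can be estimated from an empirical return density; a sketch assuming units ħ = m = 1 and synthetic fat-tailed returns in place of the commodity series:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Bohm quantum potential from an empirical density rho(x):
#   Q(x) = -(1/2) * (d^2 sqrt(rho) / dx^2) / sqrt(rho)      (hbar = m = 1)
# rho is estimated from log returns with a Gaussian KDE; the data are
# synthetic stand-ins, not the paper's commodity returns.
rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=5000) * 0.01   # fat-tailed fake log returns

x = np.linspace(-0.06, 0.06, 601)
rho = gaussian_kde(returns)(x)
amp = np.sqrt(rho)
d2amp = np.gradient(np.gradient(amp, x), x)
Q = -0.5 * d2amp / amp

# Classical potential, up to an additive constant, from the stationary density:
V = -np.log(rho)
print(Q[300], V[300])    # values at x = 0
```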
Continuous-variable quantum probes for structured environments
NASA Astrophysics Data System (ADS)
Bina, Matteo; Grasselli, Federico; Paris, Matteo G. A.
2018-01-01
We address parameter estimation for structured environments and suggest an effective estimation scheme based on continuous-variable quantum probes. In particular, we investigate the use of a single bosonic mode as a probe for Ohmic reservoirs, and obtain the ultimate quantum limits to the precise estimation of their cutoff frequency. We assume the probe prepared in a Gaussian state and determine the optimal working regime, i.e., the conditions for the maximization of the quantum Fisher information in terms of the initial preparation, the reservoir temperature, and the interaction time. Upon investigating the Fisher information of feasible measurements, we arrive at a remarkably simple result: homodyne detection of canonical variables allows one to achieve the ultimate quantum limit to precision under suitable, mild conditions. Finally, upon exploiting a perturbative approach, we find the invariant sweet spots of the (tunable) characteristic frequency of the probe, able to drive the probe towards the optimal working regime.
Mapping of quantitative trait loci controlling adaptive traits in coastal Douglas-fir. III
Kathleen D. Jermstad; Daniel L. Bassoni; Keith S. Jech; Gary A. Ritchie; Nicholas C. Wheeler; David B. Neale
2003-01-01
Quantitative trait loci (QTL) were mapped in the woody perennial Douglas fir (Pseudotsuga menziesii var. menziesii [Mirb.] Franco) for complex traits controlling the timing of growth initiation and growth cessation. QTL were estimated under controlled environmental conditions to identify QTL interactions with photoperiod, moisture stress, winter chilling, and spring...
Under EPA’s Green Infrastructure Initiative, a variety of research activities are underway to evaluate the effectiveness of green infrastructure in mitigating the effects of urbanization and stormwater impacts on stream biota and habitat. One aspect of this is evaluating th...
NASA Technical Reports Server (NTRS)
Vukovich, F. M. (Principal Investigator)
1982-01-01
Infrared and visible HCMM data were used to examine the potential application of these data to define initial and boundary conditions for mesoscale numerical models. Various boundary layer models were used to calculate the distribution of the surface heat flux, specific humidity depression (the difference between the specific humidity in the air at approximately the 10 m level and the specific humidity at the ground), and the eddy viscosity in a 72 km by 72 km area centered about St. Louis, Missouri. Various aspects of the implications of the results on the meteorology of St. Louis are discussed. Overall, the results indicated that a reasonable estimate of the surface heat flux, urban albedo, ground temperature, and specific humidity depression can be obtained using HCMM satellite data. Values of the ground-specific humidity can be obtained if the distribution of the air-specific humidity is available. More research is required in estimating the absolute magnitude of the specific humidity depression because the calculations may be sensitive to model parameters.
NASA Astrophysics Data System (ADS)
Ma, Hongliang; Xu, Shijie
2014-09-01
This paper presents an improved real-time sequential filter (IRTSF) for magnetometer-only attitude and angular velocity estimation of a spacecraft while its attitude is changing (including fast and large angular attitude maneuvers, rapid spinning, or uncontrolled tumbling). In this new magnetometer-only attitude determination technique, both the attitude dynamics equation and the first time derivative of the measured magnetic field vector are incorporated directly into the filtering equations, building on the traditional gyroless single-vector attitude determination method and the real-time sequential filter (RTSF) for magnetometer-only attitude estimation. The process noise model of the IRTSF includes the attitude kinematics and dynamics equations, and its measurement model consists of the magnetic field vector and its first time derivative. The observability of the IRTSF for spacecraft with small or large angular velocity changes is evaluated by an improved Lie differentiation, and the degrees of observability of the IRTSF for different initial estimation errors are analyzed using the condition number and the solved covariance matrix. Numerical simulation results indicate that: (1) the attitude and angular velocity of the spacecraft can be estimated with sufficient accuracy using the IRTSF from magnetometer-only data; (2) compared with the RTSF, the estimation accuracies and observability degrees of attitude and angular velocity using the IRTSF from magnetometer-only data are both improved; and (3) universality: the IRTSF for magnetometer-only attitude and angular velocity estimation is observable for any initial state estimation error vector.
Pre- and postprocessing techniques for determining goodness of computational meshes
NASA Technical Reports Server (NTRS)
Oden, J. Tinsley; Westermann, T.; Bass, J. M.
1993-01-01
Research in error estimation, mesh conditioning, and solution enhancement for finite element, finite difference, and finite volume methods has been incorporated into AUDITOR, a modern, user-friendly code, which operates on 2D and 3D unstructured neutral files to improve the accuracy and reliability of computational results. Residual error estimation capabilities provide local and global estimates of solution error in the energy norm. Higher order results for derived quantities may be extracted from initial solutions. Within the X-MOTIF graphical user interface, extensive visualization capabilities support critical evaluation of results in linear elasticity, steady state heat transfer, and both compressible and incompressible fluid dynamics.
Localized strain measurements of the intervertebral disc annulus during biaxial tensile testing.
Karakolis, Thomas; Callaghan, Jack P
2015-01-01
Both inter-lamellar and intra-lamellar failures of the annulus have been described as potential modes of disc herniation. Attempts to characterize initial lamellar failure of the annulus have involved tensile testing of small tissue samples. The purpose of this study was to evaluate a method of measuring local surface strains through image analysis of a tensile test conducted on an isolated sample of annular tissue, in order to enhance future studies of intervertebral disc failure. An annulus tissue sample was biaxially strained to 10%. High-resolution images captured the tissue surface throughout testing. Three test conditions were evaluated: submerged, non-submerged, and marker. Surface strains were calculated for the two non-marker conditions based on the motion of virtual tracking points. Tracking algorithm parameters (grid resolution and template size) were varied to determine their effect on the estimated strains. The accuracy of point tracking was assessed through a comparison of the non-marker conditions to a condition involving markers placed on the tissue surface. Grid resolution had a larger effect on local strain than template size. Average local strain error ranged from 3% to 9.25% and from 0.1% to 2.0% for the non-submerged and submerged conditions, respectively. Local strain estimation has a relatively high potential for error; submerging the tissue provided superior strain estimates.
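A sketch of the strain computation such point tracking enables: displacement gradients of a tracked grid give the 2D deformation gradient and hence Green-Lagrange surface strains (the grid, stretch, and noise level are illustrative assumptions, not the study's values):

```python
import numpy as np

# Local surface strain from a grid of tracked points: the displacement
# gradients give the 2D deformation gradient F, and the Green-Lagrange
# strain is E = (F^T F - I) / 2.
nx, ny = 10, 10
X, Y = np.meshgrid(np.linspace(0, 9.0, nx), np.linspace(0, 9.0, ny))

# Fake tracked positions after a 10% biaxial stretch plus "tracking error" noise
rng = np.random.default_rng(2)
x = 1.10 * X + rng.normal(scale=0.02, size=X.shape)
y = 1.10 * Y + rng.normal(scale=0.02, size=Y.shape)

dxdX = np.gradient(x, X[0], axis=1); dxdY = np.gradient(x, Y[:, 0], axis=0)
dydX = np.gradient(y, X[0], axis=1); dydY = np.gradient(y, Y[:, 0], axis=0)

Exx = 0.5 * (dxdX**2 + dydX**2 - 1)      # Green-Lagrange normal components
Eyy = 0.5 * (dxdY**2 + dydY**2 - 1)
print(Exx.mean(), Eyy.mean())             # ~0.105 for a true 10% stretch
```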
NASA Astrophysics Data System (ADS)
Maples, S.; Fogg, G. E.; Harter, T.
2015-12-01
Accurate estimation of groundwater (GW) budgets and effective management of agricultural GW pumping remains a challenge in much of California's Central Valley (CV) due to a lack of irrigation well metering. CVHM and C2VSim are two regional-scale integrated hydrologic models that provide estimates of historical and current CV distributed pumping rates. However, both models estimate GW pumping using conceptually different agricultural water models with uncertainties that have not been adequately investigated. Here, we evaluate differences in distributed agricultural GW pumping and recharge estimates related to important differences in the conceptual framework and model assumptions used to simulate surface water (SW) and GW interaction across the root zone. Differences in the magnitude and timing of GW pumping and recharge were evaluated for a subregion (~1000 mi²) coincident with Yolo County, CA, to provide similar initial and boundary conditions for both models. Synthetic, multi-year datasets of land-use, precipitation, evapotranspiration (ET), and SW deliveries were prescribed for each model to provide realistic end-member scenarios for GW-pumping demand and recharge. Results show differences in the magnitude and timing of GW-pumping demand, deep percolation, and recharge. Discrepancies are related, in large part, to model differences in the estimation of ET requirements and representation of soil-moisture conditions. CVHM partitions ET demand, while C2VSim uses a bulk ET rate, resulting in differences in both crop-water and GW-pumping demand. Additionally, CVHM assumes steady-state soil-moisture conditions, and simulates deep percolation as a function of irrigation inefficiencies, while C2VSim simulates deep percolation as a function of transient soil-moisture storage conditions. These findings show that estimates of GW-pumping demand are sensitive to these important conceptual differences, which can impact conjunctive-use water management decisions in the CV.
Deuterium fractionation and H2D+ evolution in turbulent and magnetized cloud cores
NASA Astrophysics Data System (ADS)
Körtgen, Bastian; Bovino, Stefano; Schleicher, Dominik R. G.; Giannetti, Andrea; Banerjee, Robi
2017-08-01
High-mass stars are expected to form from dense prestellar cores. Their precise formation conditions are widely discussed, including their virial condition, which results in slow collapse for supervirial cores with strong support by turbulence or magnetic fields, or fast collapse for subvirial sources. To disentangle their formation processes, measurements of the deuterium fractions are frequently employed to approximately estimate the ages of these cores and to obtain constraints on their dynamical evolution. We here present 3D magnetohydrodynamical simulations including for the first time an accurate non-equilibrium chemical network with 21 gas-phase species plus dust grains and 213 reactions. With this network we model the deuteration process in fully depleted prestellar cores in great detail and determine its response to variations in the initial conditions. We explore the dependence on the initial gas column density, the turbulent Mach number, the mass-to-magnetic flux ratio and the distribution of the magnetic field, as well as the initial ortho-to-para ratio (OPR) of H2. We find qualitatively good agreement with recent observations of deuterium fractions in quiescent sources. Our results show that deuteration is rather efficient, even when assuming a conservative OPR of 3 and highly subvirial initial conditions, leading to large deuterium fractions already within roughly a free-fall time. We discuss the implications of our results and give an outlook to relevant future investigations.
Decomposition Technique for Remaining Useful Life Prediction
NASA Technical Reports Server (NTRS)
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)
2014-01-01
The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both current damage state as well as future damage accumulation. Remaining life is computed by subtracting the instance when the extrapolated damage reaches the failure threshold from the instance when the prediction is made.
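A minimal sketch of the decomposition described above, using scikit-learn regressors as hypothetical stand-ins for the two maps (all data, feature names, and the failure threshold are synthetic assumptions):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

# ---- Off-line mode: fit the two maps from ground-truth histories ----
features = rng.normal(size=(500, 4))               # sensor-derived features
damage = np.clip(features @ np.array([0.3, 0.2, 0.1, 0.05]) + 0.5, 0, 1)
feat_to_damage = GradientBoostingRegressor().fit(features, damage)

conditions = rng.uniform(0, 1, size=(500, 2))      # e.g. load, temperature
damage_rate = 0.001 + 0.004 * conditions[:, 0] * conditions[:, 1]
cond_to_rate = GradientBoostingRegressor().fit(conditions, damage_rate)

# ---- On-line mode: current damage from run-time features, then extrapolate ----
d_now = feat_to_damage.predict(rng.normal(size=(1, 4)))[0]
rate = cond_to_rate.predict([[0.7, 0.5]])[0]       # expected future conditions
threshold = 1.0                                    # failure threshold (assumed)
rul = max(threshold - d_now, 0.0) / rate           # time units to failure
print(f"damage {d_now:.2f}, rate {rate:.4f}, RUL ≈ {rul:.0f} time units")
```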
Fatigue Life Estimation under Cumulative Cyclic Loading Conditions
NASA Technical Reports Server (NTRS)
Kalluri, Sreeramesh; McGaw, Michael A; Halford, Gary R.
1999-01-01
The cumulative fatigue behavior of a cobalt-base superalloy, Haynes 188, was investigated at 760 C in air. Initially, strain-controlled tests were conducted on solid cylindrical gauge-section specimens of Haynes 188 under fully reversed as well as tensile and compressive mean strain conditions. Fatigue data from these tests were used to establish the baseline fatigue behavior of the alloy with 1) a total strain range type fatigue life relation and 2) the Smith-Watson-Topper (SWT) parameter. Subsequently, two load-level multi-block fatigue tests were conducted on similar specimens of Haynes 188 at the same temperature. Fatigue lives of the multi-block tests were estimated with 1) the Linear Damage Rule (LDR) and 2) the nonlinear Damage Curve Approach (DCA), both with and without the consideration of mean stresses generated during the cumulative fatigue tests. Fatigue life predictions by the nonlinear DCA were much closer to the experimentally observed lives than those obtained by the LDR. In the presence of mean stresses, the SWT parameter estimated the fatigue lives more accurately under tensile conditions than under compressive conditions.
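The two rules differ as follows: Miner's LDR assumes damage fractions add linearly, while the DCA makes damage accumulation nonlinear in the life ratio. A worked two-block sketch, assuming the Manson-Halford form of the DCA and illustrative lives (not the Haynes 188 data):

```python
# Two-level block loading: n1 cycles at a level with life N1, then ask how
# many cycles remain at a second level with life N2.
N1, N2 = 1_000, 10_000       # illustrative lives at the two strain levels
n1 = 400                     # cycles applied at level 1 (the higher load)

# Linear Damage Rule (Miner): damage adds linearly, n1/N1 + n2/N2 = 1.
n2_ldr = (1 - n1 / N1) * N2

# Damage Curve Approach (Manson-Halford form, assumed here):
#   n2/N2 = 1 - (n1/N1) ** ((N1/N2) ** 0.4)
n2_dca = (1 - (n1 / N1) ** ((N1 / N2) ** 0.4)) * N2

print(f"LDR: {n2_ldr:.0f} cycles, DCA: {n2_dca:.0f} cycles")
# For high-to-low sequences the DCA predicts a shorter remaining life than Miner.
```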
Nexo, M A; Cleal, B; Hagelund, Lise; Willaing, I; Olesen, K
2017-12-15
The increasing number of people with chronic diseases challenges workforce capacity. Type 2 diabetes (T2D) can have work-related consequences, such as early retirement. The laws of most high-income countries require workplaces to provide accommodations that enable people with chronic disabilities to manage their condition at work. A barrier to successful implementation of such accommodations can be a lack of co-workers' willingness to support people with T2D. This study aimed to examine the willingness to pay (WTP) of people with and without T2D for five workplace initiatives that help individuals with type 2 diabetes manage their diabetes at work. Three samples of employed Danish participants were drawn from existing online panels: a general population sample (n = 600), a T2D sample (n = 693), and a matched sample of people without diabetes (n = 539). Participants completed discrete choice experiments eliciting their WTP (reduction in monthly salary, €/month) for five hypothetical workplace initiatives: a part-time job, customized work, extra breaks with pay, and time off for medical consultations with and without pay. WTP was estimated by conditional logit models. Bootstrapping was used to estimate confidence intervals for WTP. There was an overall WTP for all initiatives. Average WTP for all attributes was 34 €/month (95% confidence interval [CI]: 27-43) in the general population sample, 32 €/month (95% CI: 26-38) in the T2D sample, and 55 €/month (95% CI: 43-71) in the matched sample. WTP for additional breaks with pay was considerably lower than for the other initiatives in all samples. People with T2D had significantly lower WTP than people without diabetes for part-time work, customized work, and time off without pay, but not for extra breaks or time off with pay. For people with and without T2D, WTP was present for initiatives that could improve the management of diabetes at the workplace. WTP was lowest among people with T2D. Implementation of these initiatives seems feasible and may help avoid unnecessary exclusion of people with T2D from work.
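In a conditional logit, WTP for an attribute is the ratio of its utility coefficient to the salary-reduction (cost) coefficient, and a bootstrap over refitted models yields the CI. A sketch with made-up coefficients standing in for the fitted model:

```python
import numpy as np

# Utility U = b_attr * attribute - b_cost * salary_reduction implies
# WTP = b_attr / b_cost (EUR per month). The draws below imitate bootstrap
# re-estimates of the two coefficients; all numbers are hypothetical.
rng = np.random.default_rng(4)
b_attr = rng.normal(0.85, 0.08, size=2000)    # bootstrap draws, e.g. part-time job
b_cost = rng.normal(0.025, 0.002, size=2000)  # per EUR/month of salary loss

wtp = b_attr / b_cost                         # EUR/month
lo, hi = np.percentile(wtp, [2.5, 97.5])
print(f"WTP ≈ {wtp.mean():.0f} EUR/month (95% CI {lo:.0f}-{hi:.0f})")
```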
Variable Selection for Support Vector Machines in Moderately High Dimensions
Zhang, Xiang; Wu, Yichao; Wang, Lan; Li, Runze
2015-01-01
Summary The support vector machine (SVM) is a powerful binary classification tool with high accuracy and great flexibility. It has achieved great success, but its performance can be seriously impaired if many redundant covariates are included. Some efforts have been devoted to studying variable selection for SVMs, but asymptotic properties, such as variable selection consistency, are largely unknown when the number of predictors diverges to infinity. In this work, we establish a unified theory for a general class of nonconvex penalized SVMs. We first prove that in ultra-high dimensions, there exists one local minimizer to the objective function of nonconvex penalized SVMs possessing the desired oracle property. We further address the problem of nonunique local minimizers by showing that the local linear approximation algorithm is guaranteed to converge to the oracle estimator even in the ultra-high dimensional setting if an appropriate initial estimator is available. This condition on initial estimator is verified to be automatically valid as long as the dimensions are moderately high. Numerical examples provide supportive evidence. PMID:26778916
Significance of the model considering mixed grain-size for inverse analysis of turbidites
NASA Astrophysics Data System (ADS)
Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.
2016-12-01
A method for the inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations has long been important in sedimentological research. For instance, various inverse analyses have been used to estimate hydraulic conditions from topography observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007), and ancient turbidites (Falcini et al., 2009). These inverse analyses need forward models, and most turbidity current models employ particles of uniform grain size. Turbidity currents, however, are best characterized by the variation of their grain-size distribution. Although numerical models with mixed grain-size particles exist, their computational cost makes application to natural examples difficult (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equations to mixed grain-size particles at low computational cost, and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method that optimizes the initial conditions (thickness, depth-averaged velocity, and depth-averaged volumetric concentration of a turbidity current) with a multi-point start, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that the inverse analysis using the mixed grain-size model found the known initial condition of the reference data even when the starting condition of the optimization deviated from the true solution, whereas the inverse analysis using the uniform grain-size model requires starting parameters within a quite narrow range near the solution. The uniform grain-size model often converges to a local optimum that is significantly different from the true solution. In conclusion, we propose an optimization method based on a model considering mixed grain-size particles, and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
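The inversion step can be illustrated with SciPy's Nelder-Mead simplex and a toy forward model standing in for the 1D shallow-water turbidity current code (the profile function, parameter scales, and start points are all hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

# Toy forward model: maps initial (h, U, C) to a synthetic deposit profile.
# The real forward model is the non-steady 1D shallow-water turbidity current;
# this placeholder only illustrates the multi-point-start Simplex inversion.
def forward(p):
    h, U, C = p
    x = np.linspace(0.0, 1.0, 20)
    thickness = C * h * np.exp(-x / (0.2 * U))
    return np.append(thickness, h)      # pretend flow depth is also observed

obs = forward([2.0, 5.0, 0.01])         # reference data; C in % as in the text

def misfit(p):
    return np.sum((forward(p) - obs) ** 2)

best = None
for start in [(1.0, 2.0, 0.05), (4.0, 8.0, 0.005), (0.5, 10.0, 0.1)]:
    res = minimize(misfit, start, method="Nelder-Mead",
                   options={"xatol": 1e-12, "fatol": 1e-18, "maxiter": 50000})
    if best is None or res.fun < best.fun:
        best = res
print(best.x)                           # should recover h = 2.0, U = 5.0, C = 0.01
```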
Lü, Chun-guang; Wang, Wei-he; Yang, Wen-bo; Tian, Qing-iju; Lu, Shan; Chen, Yun
2015-11-01
New hyperspectral sensors for total ozone detection are expected to be carried on geostationary platforms in the future, because local tropospheric ozone pollution and the diurnal variation of ozone are receiving increasing attention. Sensors on geostationary satellites frequently acquire images at large observation angles, which places higher demands on total ozone retrieval for these observation geometries. The TOMS V8 algorithm is well developed and widely used for low-orbit ozone sensors, but it still lacks accuracy at large observation geometries; improving the accuracy of total ozone retrieval therefore remains an urgent problem. Using the moderate-resolution atmospheric transmission code MODTRAN, synthetic UV backscatter radiances in the spectral region from 305 to 360 nm were simulated for clear sky, multiple angles (12 solar zenith angles and view zenith angles), and 26 standard profiles, and the correlations and trends between atmospheric total ozone and backscattered UV radiance were analyzed on the resulting data set. From these results, a modified initial total ozone estimation model for the TOMS V8 algorithm is constructed in order to improve the accuracy of the initial total ozone estimate at large observation geometries. The analysis of total ozone and simulated UV backscatter radiance shows that the radiance at 317.5 nm (R₃₁₇.₅) decreases as total ozone rises. At small solar zenith angles (SZA) and fixed total ozone, R₃₁₇.₅ decreases with increasing view zenith angle (VZA), but increases at large SZA. Comparison of the two fitting models shows that, except when both SZA and VZA are large (> 80°), the exponential and logarithmic fitting models both show high fitting precision (R² > 0.90), and the precision of both decreases as SZA and VZA rise. In most cases, the precision of the logarithmic fitting model is about 0.9% higher than that of the exponential fitting model. With increasing VZA or SZA, the fitting precision gradually drops, and the decline is larger at large VZA or SZA; in addition, the precision of the fitting models exhibits a plateau at small SZA. The modified initial total ozone estimation model (ln(I) vs. Ω) is built on the logarithmic fitting model and compared with the traditional estimation model (I vs. ln(Ω)). The RMSE of both models trends downward as total ozone rises. In the low total ozone region (175-275 DU), the RMSE is clearly higher than in the high region (425-525 DU); moreover, an RMSE peak and trough occur at 225 and 475 DU, respectively. With increasing VZA and SZA, the RMSE of both initial estimation models rises overall, the increase being more pronounced for ln(I) vs. Ω. The modified model outperforms the traditional model over the whole total ozone range (RMSE 0.087%-0.537% lower), especially in the low total ozone region and at large observation geometries. The traditional estimation model relies on the precision of the exponential fitting model, whereas the modified estimation model relies on the precision of the logarithmic fitting model. The improved estimation accuracy of the modified initial total ozone estimation model extends the application range of the TOMS V8 algorithm. For sensors carried on geostationary platforms, the modified estimation model can help improve inversion accuracy over a wide spatial and temporal range. This modified model could support and inform future updates of the TOMS algorithm.
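The two regression forms can be compared on synthetic data standing in for the MODTRAN simulations; the sketch below fits both the modified (ln(I) vs. Ω) and traditional (I vs. ln(Ω)) models and reports the RMS percent error of the inverted ozone estimate (all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(5)
omega = rng.uniform(175, 525, 300)        # total ozone, DU
I = 80 * np.exp(-0.004 * omega) * rng.lognormal(0.0, 0.01, 300)  # fake R317.5

# Modified model: ln(I) linear in Omega; invert for Omega given I.
b1, a1 = np.polyfit(omega, np.log(I), 1)
omega_mod = (np.log(I) - a1) / b1

# Traditional model: I linear in ln(Omega); invert for Omega given I.
b2, a2 = np.polyfit(np.log(omega), I, 1)
omega_trad = np.exp((I - a2) / b2)

def rmspe(est):
    return np.sqrt(np.mean(((est - omega) / omega) ** 2)) * 100

print(f"modified: {rmspe(omega_mod):.2f}%  traditional: {rmspe(omega_trad):.2f}%")
```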
Parametrisation of initial conditions for seasonal stream flow forecasting in the Swiss Rhine basin
NASA Astrophysics Data System (ADS)
Schick, Simon; Rössler, Ole; Weingartner, Rolf
2016-04-01
Current climate forecast models show - to the best of our knowledge - low skill in forecasting climate variability in Central Europe at seasonal lead times. When it comes to seasonal stream flow forecasting, initial conditions thus play an important role. Here, initial conditions refer to the catchment's moisture state at the date of forecast, i.e. snow depth, stream flow and lake level, soil moisture content, and groundwater level. The parametrisation of these initial conditions can take place at various spatial and temporal scales. Examples are the grid size of a distributed model or the time aggregation of predictors in statistical models. Therefore, the present study aims to investigate the extent to which the parametrisation of initial conditions at different spatial scales leads to differences in forecast errors. To do so, we conduct a forecast experiment for the Swiss Rhine at Basel, which covers parts of Germany, Austria, and Switzerland and is southerly bounded by the Alps. Seasonal mean stream flow is defined for time aggregations of 30, 60, and 90 days and forecasted at 24 dates within the calendar year, i.e. at the 1st and 16th day of each month. A regression model is employed due to the various anthropogenic effects on the basin's hydrology, which often are not quantifiable but might be grasped by a simple black box model. Furthermore, the pool of candidate predictors consists of antecedent temperature, precipitation, and stream flow only. This pragmatic approach follows the fact that observations of variables relevant for hydrological storages are either scarce in space or time (soil moisture, groundwater level), restricted to certain seasons (snow depth), or regions (lake levels, snow depth). For a systematic evaluation, we therefore focus on the comprehensive archives of meteorological observations and reanalyses to estimate the initial conditions via climate variability prior to the date of forecast. The experiment itself is based on four different approaches, whose differences in model skill were estimated within a rigorous cross-validation framework for the period 1982-2013:
1. The predictands are regressed on antecedent temperature, precipitation, and stream flow. Here, temperature and precipitation constitute basin averages from the E-OBS gridded data set.
2. As in 1., but temperature and precipitation are used at the E-OBS grid scale (0.25 degree in longitude and latitude) without spatial averaging.
3. As in 1., but the regression model is applied to 66 gauged subcatchments of the Rhine basin. Forecasts for these subcatchments are then simply summed and upscaled to the area of the Rhine basin.
4. As in 3., but the forecasts at the subcatchment scale are additionally weighted in terms of the hydrological representativeness of the corresponding subcatchment.
Porru, Marcella; Özkan, Leyla
2017-08-30
This work investigates the design of alternative monitoring tools based on state estimators for industrial crystallization systems with nucleation, growth, and agglomeration kinetics. The estimation problem is regarded as a structure design problem where the estimation model and the set of innovated states have to be chosen; the estimator is driven by the available measurements of secondary variables. On the basis of Robust Exponential estimability arguments, it is found that the concentration is distinguishable with temperature and solid fraction measurements while the crystal size distribution (CSD) is not. Accordingly, a state estimator structure is selected such that (i) the concentration (and other distinguishable states) are innovated by means of the secondary measurements processed with the geometric estimator (GE), and (ii) the CSD is estimated by means of a rigorous model in open loop mode. The proposed estimator has been tested through simulations showing good performance in the case of mismatch in the initial conditions, parametric plant-model mismatch, and noisy measurements.
Investigation of practical initial attenuation image estimates in TOF-MLAA reconstruction for PET/MR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Ju-Chieh, E-mail: chengjuchieh@gmail.com; Y
Purpose: Time-of-flight joint attenuation and activity positron emission tomography reconstruction requires additional calibration (scale factors) or constraints during or after reconstruction to produce a quantitative μ-map. In this work, the impact of various initializations of the joint reconstruction was investigated, and the initial average mu-value (IAM) method was introduced, such that the forward-projection of the initial μ-map is already very close to that of the reference μ-map, thus reducing/minimizing the offset (scale factor) during the early iterations of the joint reconstruction. Consequently, the accuracy and efficiency of unconstrained joint reconstruction, such as time-of-flight maximum likelihood estimation of attenuation and activity (TOF-MLAA), can be improved by the proposed IAM method. Methods: 2D simulations of the brain and chest were used to evaluate TOF-MLAA with various initial estimates, which include the object filled with water uniformly (conventional initial estimate), bone uniformly, the average μ-value uniformly (IAM magnitude initialization method), and the perfect spatial μ-distribution but with a wrong magnitude (initialization in terms of distribution). A 3D GATE simulation was also performed for the chest phantom under a typical clinical scanning condition, and the simulated data were reconstructed with a fully corrected list-mode TOF-MLAA algorithm with various initial estimates. The accuracy of the average μ-values within the brain, chest, and abdomen regions obtained from the MR-derived μ-maps was also evaluated using computed tomography μ-maps as the gold standard. Results: The estimated μ-map with the initialization in terms of magnitude (i.e., average μ-value) was observed to reach the reference more quickly and naturally as compared to all other cases. Both the 2D and 3D GATE simulations produced similar results, and it was observed that the proposed IAM approach can produce a quantitative μ-map/emission when corrections for physical effects such as scatter and randoms were included. The average μ-value obtained from the MR-derived μ-map was accurate within 5% with corrections for bone, fat, and uniform lungs. Conclusions: The proposed IAM-TOF-MLAA can produce a quantitative μ-map without any calibration, provided that there are sufficient counts in the measured data. For low-count data, noise reduction and additional regularization/rescaling techniques need to be applied and investigated. The average μ-value within the object is prior information which can be extracted from MR and patient databases, and it is feasible to obtain an accurate average μ-value using an MR-derived μ-map with corrections, as demonstrated in this work.
Numerical scheme approximating solution and parameters in a beam equation
NASA Astrophysics Data System (ADS)
Ferdinand, Robert R.
2003-12-01
We present a mathematical model describing vibration in a metallic beam about its equilibrium position. The model takes the form of a nonlinear second-order (in time) and fourth-order (in space) partial differential equation with boundary and initial conditions. A finite-element Galerkin approximation scheme is used to estimate the model solution. Infinite-dimensional model parameters are then estimated numerically using an inverse-method procedure that involves the minimization of a least-squares cost functional. Numerical results are presented and future work is discussed.
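The inverse-method step can be sketched generically. The beam model itself is too long for a snippet, so this hedged Python example substitutes a simple second-order oscillator for the Galerkin-approximated dynamics and estimates its parameters by minimizing the same kind of least-squares cost functional; all names and values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def simulate(params, t_eval):
    """Stand-in forward model: damped oscillator in place of the beam PDE."""
    k, c = params                              # stiffness- and damping-like parameters
    rhs = lambda t, y: [y[1], -k * y[0] - c * y[1]]
    sol = solve_ivp(rhs, (0, t_eval[-1]), [1.0, 0.0], t_eval=t_eval, rtol=1e-8)
    return sol.y[0]

t = np.linspace(0, 10, 200)
rng = np.random.default_rng(1)
data = simulate([4.0, 0.3], t) + 0.01 * rng.standard_normal(t.size)  # synthetic data

# Minimize the least-squares cost J(q) = sum_i (u(t_i; q) - data_i)^2
fit = least_squares(lambda q: simulate(q, t) - data, x0=[1.0, 1.0])
print("estimated (k, c):", fit.x)
```

The residual-vector formulation lets the optimizer exploit the least-squares structure, which is the standard approach for this class of inverse problems.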
AGM-88E Advanced Anti-Radiation Guided Missile (AGM-88E AARGM)
2015-12-01
[Selected Acquisition Report cost-summary extract: acquisition and O&M funding lines with program totals (TY $M: 1528.5, 1661.1, 2107.4, 1861.4, 2026.2, 2663.7), an APB breach confidence-level statement assuming normal conditions and average technical, schedule, and programmatic risk, and SAR baseline-to-current PAUC change categories (Econ, Qty, Sch, Eng, Est, Oth, Spt); table layout not recoverable.]
Banks, H Thomas; Robbins, Danielle; Sutton, Karyn L
2013-01-01
In this paper we present new results on the differentiability of delay systems with respect to initial conditions and delays. After motivating our results with a wide range of delay examples arising in biology applications, we note the need for sensitivity functions (both traditional and generalized), especially in control and estimation problems. We summarize general existence and uniqueness results before turning to our main results on differentiation with respect to delays. Finally, we discuss the use of our results in the context of estimation problems.
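For a concrete handle on delay sensitivity, here is a small hedged Python sketch (not the authors' machinery): it integrates a delayed logistic equation with a simple Euler scheme and approximates the traditional sensitivity of the state with respect to the delay by a central finite difference.

```python
import numpy as np

def delayed_logistic(tau, r=1.8, T=20.0, dt=1e-3, hist=0.5):
    """Euler integration of x'(t) = r x(t) (1 - x(t - tau)), constant history."""
    n, lag = int(T / dt), int(round(tau / dt))
    x = np.full(n + 1, hist)
    for i in range(n):
        x_lag = x[i - lag] if i >= lag else hist   # constant pre-history
        x[i + 1] = x[i] + dt * r * x[i] * (1.0 - x_lag)
    return x

tau, h = 1.0, 1e-3
# Central finite-difference sensitivity of the trajectory w.r.t. the delay
sens = (delayed_logistic(tau + h) - delayed_logistic(tau - h)) / (2 * h)
print("max |dx/dtau| over [0, 20]:", np.abs(sens).max())
```

Finite differences are the crudest route to such sensitivities; the paper's differentiability results are what justify treating the limit as well defined in the first place.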
Iterative initial condition reconstruction
NASA Astrophysics Data System (ADS)
Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias
2017-07-01
Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z = 0, we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤ 0.35 h Mpc^-1. The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤ 0.2 h Mpc^-1, and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z = 0 and by a factor of 2.5 at z = 0.6, improving standard BAO reconstruction by 70% at z = 0 and 30% at z = 0.6, and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.
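A one-dimensional toy version of such an iterative scheme fits in a few lines. The Python sketch below illustrates the idea only (it is not the authors' code; the box size, smoothing schedule, and displacement amplitude are invented): particles are moved back along FFT-estimated potential gradients with a shrinking smoothing scale, and the linear density is read off as minus the divergence of the cumulative displacement.

```python
import numpy as np

L, ng, npart = 1.0, 128, 131072
q = (np.arange(npart) + 0.5) * L / npart            # Lagrangian positions
psi_true = 0.02 * np.sin(2 * np.pi * q / L)         # "true" displacement field
x = (q + psi_true) % L                              # evolved particle positions

grid = np.arange(ng) * L / ng
k = 2 * np.pi * np.fft.rfftfreq(ng, d=L / ng)

def delta_of(pos):
    counts, _ = np.histogram(pos, bins=ng, range=(0, L))
    return counts / counts.mean() - 1.0

psi_cum = np.zeros(npart)
for smooth in (0.10, 0.05, 0.02):                   # progressively reduced smoothing
    dk = np.fft.rfft(delta_of(x)) * np.exp(-0.5 * (k * smooth) ** 2)
    psi_k = np.zeros_like(dk)
    psi_k[1:] = 1j * dk[1:] / k[1:]                 # psi = -dphi/dx with lap(phi) = delta
    psi_grid = np.fft.irfft(psi_k, n=ng)
    step = np.interp(x, grid, psi_grid, period=L)   # displacement at each particle
    x = (x - step) % L                              # move particles back along it
    psi_cum += step

# Linear initial density: (minus) the divergence of the cumulative displacement
delta_lin = -np.gradient(np.interp(grid, q, psi_cum, period=L), L / ng)
delta_true = -0.02 * (2 * np.pi / L) * np.cos(2 * np.pi * grid / L)
print(f"correlation with true linear density: {np.corrcoef(delta_lin, delta_true)[0, 1]:.3f}")
```

In 3D the same loop runs over vector displacements and a CIC density assignment, but the structure (smooth, estimate gradient, move back, accumulate) is identical.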
2012-09-30
... in order to understand its role in transporting moisture into the upper troposphere and its effect on the initiation and propagation phases of the Madden... estimates of cloud base from ceilometer. The gray lines are composited insolation measurements indicating day vs. night conditions.
Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.
Frick, Eric; Rahmatalla, Salam
2018-04-04
The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily because measurements are contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames' joint center estimates via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimate. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article is intended as a proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial, as existing methods often do, the proposed method produced time-varying solutions that were highly correlated (r > 0.82) with the true, time-varying joint center solution.
A science-based, watershed strategy to support effective remediation of abandoned mine lands
Buxton, Herbert T.; Nimick, David A.; Von Guerard, Paul; Church, Stan E.; Frazier, Ann G.; Gray, John R.; Lipin, Bruce R.; Marsh, Sherman P.; Woodward, Daniel F.; Kimball, Briant A.; Finger, Susan E.; Ischinger, Lee S.; Fordham, John C.; Power, Martha S.; Bunch, Christine M.; Jones, John W.
1997-01-01
A U.S. Geological Survey Abandoned Mine Lands Initiative will develop a strategy for gathering and communicating the scientific information needed to formulate effective and cost-efficient remediation of abandoned mine lands. A watershed approach will identify, characterize, and remediate contaminated sites that have the most profound effect on water and ecosystem quality within a watershed. The Initiative will be conducted from 1997 through 2001 in two pilot watersheds, the Upper Animas River watershed in Colorado and the Boulder River watershed in Montana. Initiative efforts are being coordinated with the U.S. Forest Service, Bureau of Land Management, National Park Service, and other stakeholders, which are using the resulting scientific information to design and implement remediation activities. The Initiative has the following eight objective-oriented components: estimate background (pre-mining) conditions; define baseline (current) conditions; identify target sites (major contaminant sources); characterize target sites and processes affecting contaminant dispersal; characterize ecosystem health and controlling processes at target sites; develop remediation goals and a monitoring network; provide an integrated, quality-assured, and accessible data network; and document lessons learned for future applications of the watershed approach.
Model Error Estimation for the CPTEC Eta Model
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; daSilva, Arlindo
1999-01-01
Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, under a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial-condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
Survival of a proto-atmosphere through the stage of giant impacts: the mechanical aspects
NASA Astrophysics Data System (ADS)
Genda, Hidenori; Abe, Yutaka
2003-07-01
When a giant impact occurs, atmosphere loss may occur due to global ground motion excited by a strong shock wave traveling through the planetary interior. Here, the relations between the ground motion and the amount of atmosphere lost are systematically investigated through calculations of spherically one-dimensional atmospheric motion for various initial atmospheric conditions. The fraction of the atmosphere lost relative to its total mass is found to be controlled only by the ground velocity and to be insensitive to the initial atmospheric conditions. Unlike in previous studies (Ahrens, 1990, Origin of the Earth, H.E. Newson, J.H. Jones (Eds.), pp. 211-227; Ahrens, 1993, Annu. Rev. Earth Planet. Sci. 21, 525-555; Chen and Ahrens, 1997, Phys. Earth Planet. Inter. 100, 21-26), the estimated loss fraction for the giant impact is only 20%. Significant escape occurs only when the ground velocity is close to the escape velocity. Thus, most of the atmosphere should survive a giant impact. The cause of the difference from previous estimates is discussed from energetic and dynamic points of view. Moreover, if our estimates are applied to the atmosphere of the impactor planet, a significant fraction of it is carried to the target planet. Survival of the proto-atmosphere has very important implications for the origin and evolution of the terrestrial planets' volatile budget.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xuejun; Tang, Qiuhong; Liu, Xingcai
Real-time monitoring and predicting drought development several months in advance is of critical importance for drought risk adaptation and mitigation. In this paper, we present a drought monitoring and seasonal forecasting framework based on the Variable Infiltration Capacity (VIC) hydrologic model over Southwest China (SW). Satellite precipitation data are used to force the VIC model for near-real-time estimates of land surface hydrologic conditions. Initialized with this satellite-aided monitoring, a climate model-based forecast (CFSv2_VIC) and an ensemble streamflow prediction (ESP)-based forecast (ESP_VIC) are both performed and evaluated through their ability to reproduce the evolution of the 2009/2010 severe drought over SW. The results show that the satellite-aided monitoring is able to provide reasonable estimates of forecast initial conditions (ICs) in a real-time manner. Both CFSv2_VIC and ESP_VIC exhibit comparable performance against the observation-based estimates for the first month, whereas the predictive skill largely drops beyond one month. Compared to ESP_VIC, CFSv2_VIC shows better performance, as indicated by the smaller ensemble range. This study highlights the value of this operational framework in generating near-real-time ICs and giving a reliable prediction one month ahead, which has great implications for drought risk assessment, preparation, and relief.
Weak Value Amplification is Suboptimal for Estimation and Detection
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-01-01
We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, that arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter is the eigenstate associated with the maximal eigenvalue of the square of the system observable. Finally, we give precise quantitative conditions for when weak measurement (measurements without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.
Establishing endangered species recovery criteria using predictive simulation modeling
McGowan, Conor P.; Catlin, Daniel H.; Shaffer, Terry L.; Gratto-Trevor, Cheri L.; Aron, Carol
2014-01-01
Listing a species under the Endangered Species Act (ESA) and developing a recovery plan requires U.S. Fish and Wildlife Service to establish specific and measurable criteria for delisting. Generally, species are listed because they face (or are perceived to face) elevated risk of extinction due to issues such as habitat loss, invasive species, or other factors. Recovery plans identify recovery criteria that reduce extinction risk to an acceptable level. It logically follows that the recovery criteria, the defined conditions for removing a species from ESA protections, need to be closely related to extinction risk. Extinction probability is a population parameter estimated with a model that uses current demographic information to project the population into the future over a number of replicates, calculating the proportion of replicated populations that go extinct. We simulated extinction probabilities of piping plovers in the Great Plains and estimated the relationship between extinction probability and various demographic parameters. We tested the fit of regression models linking initial abundance, productivity, or population growth rate to extinction risk, and then, using the regression parameter estimates, determined the conditions required to reduce extinction probability to some pre-defined acceptable threshold. Binomial regression models with mean population growth rate and the natural log of initial abundance were the best predictors of extinction probability 50 years into the future. For example, based on our regression models, an initial abundance of approximately 2400 females with an expected mean population growth rate of 1.0 will limit extinction risk for piping plovers in the Great Plains to less than 0.048. Our method provides a straightforward way of developing specific and measurable recovery criteria linked directly to the core issue of extinction risk. Published by Elsevier Ltd.
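The core regression step can be illustrated with a hedged Python sketch (synthetic trajectories and invented demographic parameters; the real analysis used piping plover PVA output): simulate stochastic population projections, record 50-year extinction, and fit a binomial (logistic) regression on log initial abundance and mean growth rate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def extinct(n0, lam, years=50, sd=0.15, floor=1):
    """Project one population; extinction = dropping below `floor`."""
    n = float(n0)
    for _ in range(years):
        n *= lam * np.exp(sd * rng.standard_normal())   # stochastic growth
        if n < floor:
            return 1
    return 0

# Simulated "PVA" runs over a grid of initial abundances and growth rates
n0s = rng.uniform(50, 5000, 4000)
lams = rng.uniform(0.93, 1.05, 4000)
y = np.array([extinct(n, l) for n, l in zip(n0s, lams)])

X = np.column_stack([np.log(n0s), lams])
model = LogisticRegression(max_iter=1000).fit(X, y)

# Conditions required to hold extinction risk below a pre-defined threshold
p = model.predict_proba(np.column_stack([np.log([2400.0]), [1.0]]))[0, 1]
print(f"estimated 50-yr extinction probability at N0=2400, lambda=1.0: {p:.3f}")
```

Inverting the fitted regression (solving for the abundance at which predicted risk equals the acceptable threshold) is what turns the simulation output into a measurable recovery criterion.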
Jones, R. E.; Ward, D. K.
2016-07-18
Here, given the unique optical properties of LiF, it is often used as an observation window in high-temperature and high-pressure experiments; hence, estimates of its transmission properties are necessary to interpret observations. Since direct measurements of the thermal conductivity of LiF at the appropriate conditions are difficult, we resort to molecular simulation methods. Using an empirical potential validated against ab initio phonon densities of states, we estimate the thermal conductivity of LiF at high temperatures (1000-4000 K) and pressures (100-400 GPa) with the Green-Kubo method. We also compare these estimates to those derived directly from ab initio data. To ascertain the correct phase of LiF at these extreme conditions, we calculate the (relative) phase stability of the B1 and B2 structures using a quasiharmonic ab initio model of the free energy. We also estimate the thermal conductivity of LiF in a uniaxial loading state that emulates the initial stages of compression in high-stress ramp loading experiments and show the degree of anisotropy induced in the conductivity due to deformation.
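The Green-Kubo step can be sketched generically: the thermal conductivity is κ = V/(k_B T²) ∫₀^∞ ⟨J(0) J(t)⟩ dt, the time integral of the heat-flux autocorrelation. The Python example below is a hedged stand-in that applies the formula to a surrogate flux series (an Ornstein-Uhlenbeck process), since the actual flux would come from the MD run; all magnitudes are arbitrary.

```python
import numpy as np

kB = 1.380649e-23        # J/K
T, V = 2000.0, 1e-26     # temperature (K) and cell volume (m^3), illustrative
dt = 1e-15               # timestep (s)

# Stand-in heat-flux time series; in practice J(t) comes from the MD run
rng = np.random.default_rng(3)
n, tau_c = 200000, 50    # length and correlation time (steps)
J = np.empty(n)
J[0] = 0.0
a = np.exp(-1.0 / tau_c)
for i in range(1, n):    # Ornstein-Uhlenbeck surrogate with known decay time
    J[i] = a * J[i - 1] + np.sqrt(1 - a * a) * rng.standard_normal()
J *= 1e9                 # arbitrary flux magnitude

def acf(x, m):
    """Heat-flux autocorrelation up to lag m."""
    x = x - x.mean()
    return np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(m)])

C = acf(J, 2000)
kappa = V / (kB * T * T) * np.trapz(C, dx=dt)   # Green-Kubo integral
print(f"kappa = {kappa:.3e} W/(m K)  (surrogate data)")
```

In production runs the main practical choices are the truncation lag of the integral and averaging over independent trajectories, both of which control the statistical error of the estimate.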
Assessing concentration uncertainty estimates from passive microwave sea ice products
NASA Astrophysics Data System (ADS)
Meier, W.; Brucker, L.; Miller, J. A.
2017-12-01
Sea ice concentration is an essential climate variable and passive microwave derived estimates of concentration are one of the longest satellite-derived climate records. However, until recently uncertainty estimates were not provided. Numerous validation studies provided insight into general error characteristics, but the studies have found that concentration error varied greatly depending on sea ice conditions. Thus, an uncertainty estimate from each observation is desired, particularly for initialization, assimilation, and validation of models. Here we investigate three sea ice products that include an uncertainty for each concentration estimate: the NASA Team 2 algorithm product, the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI-SAF) product, and the NOAA/NSIDC Climate Data Record (CDR) product. Each product estimates uncertainty with a completely different approach. The NASA Team 2 product derives uncertainty internally from the algorithm method itself. The OSI-SAF uses atmospheric reanalysis fields and a radiative transfer model. The CDR uses spatial variability from two algorithms. Each approach has merits and limitations. Here we evaluate the uncertainty estimates by comparing the passive microwave concentration products with fields derived from the NOAA VIIRS sensor. The results show that the relationship between the product uncertainty estimates and the concentration error (relative to VIIRS) is complex. This may be due to the sea ice conditions, the uncertainty methods, as well as the spatial and temporal variability of the passive microwave and VIIRS products.
Solute partitioning under continuous cooling conditions as a cooling rate indicator. [in lunar rocks
NASA Technical Reports Server (NTRS)
Onorato, P. I. K.; Hopper, R. W.; Yinnon, H.; Uhlmann, D. R.; Taylor, L. A.; Garrison, J. R.; Hunter, R.
1981-01-01
A model of solute partitioning in a finite body under conditions of continuous cooling is developed for the determination of cooling rates from concentration profile data, and applied to the partitioning of zirconium between ilmenite and ulvospinel in the Apollo 15 Elbow Crater rocks. Partitioning in a layered composite solid is described numerically in terms of concentration profiles and diffusion coefficients which are functions of time and temperature, respectively; a program based on the model can be used to calculate concentration profiles for various assumed cooling rates given the diffusion coefficients in the two phases and the equilibrium partitioning ratio over a range of temperatures. In the case of the Elbow Rock gabbros, the cooling rates are calculated from measured concentration ratios 10 microns from the interphase boundaries under the assumptions of uniform and equilibrium initial conditions at various starting temperatures. It is shown that the specimens could not have had uniform concentrations profiles at the previously suggested initial temperature of 1350 K. It is concluded that even under conditions where the initial temperature, grain sizes and solute diffusion coefficients are not well characterized, the model can be used to estimate the cooling rate of a grain assemblage to within an order of magnitude.
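To make the cooling-rate dependence concrete, here is a hedged Python sketch (not the authors' two-phase code): a one-dimensional explicit diffusion solve in which the diffusivity follows an invented Arrhenius law, the interface concentration tracks an invented temperature-dependent partition value, and the concentration 10 microns from the boundary is read off for several linear cooling rates.

```python
import numpy as np

# Illustrative Arrhenius diffusivity and equilibrium interface concentration
D0, Q, R = 1e-6, 250e3, 8.314                # m^2/s, J/mol, J/(mol K)
D = lambda T: D0 * np.exp(-Q / (R * T))
c_eq = lambda T: 1.0 + 5e-4 * (1350.0 - T)   # hypothetical partition-driven value

def profile(cool_rate, T0=1350.0, T1=900.0, nx=200, half_width=200e-6):
    dx = half_width / nx
    c = np.ones(nx)                          # uniform initial concentration
    T, t_total = T0, (T0 - T1) / cool_rate
    dt = min(0.2 * dx * dx / D(T0), t_total / 2000)   # explicit stability limit
    for _ in range(int(t_total / dt)):
        c[0] = c_eq(T)                       # interface follows equilibrium partitioning
        c[1:-1] += D(T) * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
        c[-1] = c[-2]                        # zero-flux symmetry at grain centre
        T -= cool_rate * dt
    return c

for rate in (0.1, 0.01, 0.001):              # K/s
    c = profile(rate)
    print(f"cooling {rate:g} K/s -> concentration 10 um from interface: {c[10]:.4f}")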
Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...
2014-12-31
Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and the Area of Review (AoR), and for the design of CO2 injection operations and the monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and to quantify their impacts. We introduce an approach that determines the local sensitivity coefficient (LSC), defined as the response of the output in percent, to rank the importance of model inputs for the outputs. The uncertainty of an input with higher sensitivity has a larger impact on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We applied this local sensitivity coefficient method to the FutureGen 2.0 site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and of 7 inputs serving as initial conditions is investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs, and three-quarters of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbouring layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and for guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
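The LSC recipe itself is simple enough to sketch. In the hedged Python example below, a stand-in analytic "simulator" replaces the reservoir model (its functional form and the input names are invented): each input is perturbed by 1%, the percent response of each output is recorded as the LSC, and a composite sensitivity is formed by summing |LSC| values over a subset of inputs.

```python
import numpy as np

def model(inputs):
    """Stand-in simulator: returns (injectivity, plume_area) from named inputs."""
    k_h, p0, s_res = inputs["perm"], inputs["init_pressure"], inputs["res_sat"]
    injectivity = k_h ** 0.9 / (p0 * (1 + s_res))
    plume_area = p0 ** 0.5 * k_h ** 0.2 / (1 - s_res)
    return np.array([injectivity, plume_area])

base = {"perm": 1e-13, "init_pressure": 20e6, "res_sat": 0.3}
y0 = model(base)

# Local sensitivity coefficient: % change in output per 1% perturbation of one input
lsc = {}
for name in base:
    pert = dict(base)
    pert[name] = base[name] * 1.01
    lsc[name] = (model(pert) - y0) / y0 * 100.0

for name, s in lsc.items():
    print(f"{name:14s} LSC: injectivity {s[0]:+.2f}%, plume area {s[1]:+.2f}%")

# Composite sensitivity of an output to a subset of inputs: sum of |LSC| values
subset = ("perm", "init_pressure")
print("composite (injectivity):", sum(abs(lsc[n][0]) for n in subset))
```

Because each LSC needs only one extra model run per input, a full ranking of the 348 inputs costs 348 additional simulations, which is what makes the approach attractive next to global sensitivity methods.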
Specification and Prediction of the Radiation Environment Using Data Assimilative VERB code
NASA Astrophysics Data System (ADS)
Shprits, Yuri; Kellerman, Adam
2016-07-01
We discuss how data assimilation can be used for the reconstruction of long-term evolution, benchmarking of physics-based codes, and improvement of nowcasting and forecasting of the radiation belts and ring current. We also discuss advanced data assimilation methods such as parameter estimation and smoothing. We present a number of data assimilation applications using the VERB 3D code. The 3D data-assimilative VERB allows us to blend together data from GOES, RBSP A, and RBSP B. (1) The model with data assimilation allows us to propagate data to different pitch angles, energies, and L-shells and blends them together with the physics-based VERB code in an optimal way. We illustrate how to use this capability for the analysis of previous events and for obtaining a global and statistical view of the system. (2) The model predictions strongly depend on the initial conditions that are set up for the model; therefore, the model is only as good as the initial conditions it uses. To produce the best possible initial conditions, data from different sources (GOES, RBSP A and B, and our empirical model predictions based on ACE) are all blended together in an optimal way by means of data assimilation, as described above. The resulting initial conditions have no gaps, which allows us to make more accurate predictions. A real-time prediction framework operating on our website, based on GOES, RBSP A and B, and ACE data and 3D VERB, is presented and discussed.
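The blending step is, at heart, a Kalman analysis update. The Python sketch below is a generic illustration (toy one-dimensional state, invented covariances and "spacecraft" sampling locations, not the VERB code): sparse observations are merged with a model forecast via x_a = x_f + K(y - Hx_f), and the correlated background covariance spreads the information into the data gaps.

```python
import numpy as np

# Toy radiation-belt state on an L-shell grid: blend a model forecast with
# sparse observations from several "spacecraft" into gap-free initial conditions.
ngrid = 40
x_truth = 2.0 + np.sin(np.linspace(0, np.pi, ngrid))          # synthetic truth
x_fcst = x_truth + 0.4 * np.random.default_rng(7).standard_normal(ngrid)

obs_idx = np.array([3, 9, 15, 22, 30, 36])                    # sparse orbit samples
H = np.zeros((obs_idx.size, ngrid))
H[np.arange(obs_idx.size), obs_idx] = 1.0
y = x_truth[obs_idx] + 0.1 * np.random.default_rng(8).standard_normal(obs_idx.size)

# Correlated background covariance spreads observation information to neighbours
P = 0.16 * np.exp(-np.subtract.outer(range(ngrid), range(ngrid)) ** 2 / 9.0)
R = 0.01 * np.eye(obs_idx.size)

# Kalman analysis step: x_a = x_f + K (y - H x_f), K = P H^T (H P H^T + R)^-1
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
x_anal = x_fcst + K @ (y - H @ x_fcst)

print("forecast RMSE:", np.sqrt(np.mean((x_fcst - x_truth) ** 2)))
print("analysis RMSE:", np.sqrt(np.mean((x_anal - x_truth) ** 2)))
```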
NASA Astrophysics Data System (ADS)
Pioldi, Fabio; Rizzi, Egidio
2016-08-01
This paper proposes a new output-only element-level system identification and input estimation technique for the simultaneous identification of modal parameters, input excitation time history, and structural features at the element level from earthquake-induced structural response signals. The method, named the Full Dynamic Compound Inverse Method (FDCIM), relaxes the strong assumptions of earlier element-level techniques by working with a two-stage iterative algorithm. Jointly, a statistical average technique, a modification process, and a parameter projection strategy are adopted at each stage to achieve stronger convergence of the identified estimates. The proposed method works in a deterministic way and is completely developed in state-space form. Further, it does not require continuous- to discrete-time transformations and does not depend on initialization conditions. Synthetic earthquake-induced response signals from different shear-type buildings are generated to validate the implemented procedure, including noise-corrupted cases. The achieved results provide a necessary condition to demonstrate the effectiveness of the proposed identification method.
Application of a statistical emulator to fire emission modeling
Marwan Katurji; Jovanka Nikolic; Shiyuan Zhong; Scott Pratt; Lejiang Yu; Warren E. Heilman
2015-01-01
We have demonstrated the use of an advanced Gaussian-Process (GP) emulator to estimate wildland fire emissions over a wide range of fuel and atmospheric conditions. The Fire Emission Production Simulator, or FEPS, is used to produce an initial set of emissions data that correspond to some selected values in the domain of the input fuel and atmospheric parameters for...
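A minimal version of such a GP emulator can be put together with scikit-learn. In the hedged sketch below, a stand-in analytic function plays the role of FEPS (its form and the input ranges are invented); the emulator is trained on a modest design of simulator runs and then predicts emissions, with uncertainty, at new input settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def feps_like(x):
    """Stand-in for the FEPS simulator: emissions from (fuel load, moisture, wind)."""
    fuel, moist, wind = x[:, 0], x[:, 1], x[:, 2]
    return fuel * np.exp(-2.0 * moist) * (1.0 + 0.1 * wind)

rng = np.random.default_rng(5)
X_train = rng.uniform([1, 0.05, 0], [30, 0.4, 10], size=(60, 3))   # design points
y_train = feps_like(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([10, 0.1, 3]),
                              normalize_y=True).fit(X_train, y_train)

X_new = rng.uniform([1, 0.05, 0], [30, 0.4, 10], size=(5, 3))
mean, sd = gp.predict(X_new, return_std=True)
for m, s, truth in zip(mean, sd, feps_like(X_new)):
    print(f"emulated {m:7.2f} +/- {s:5.2f}   simulator {truth:7.2f}")
```

Once trained, the emulator replaces the simulator at a tiny fraction of the cost, which is what makes sweeps over wide ranges of fuel and atmospheric conditions tractable.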
NASA Astrophysics Data System (ADS)
Singh, K.; Sandu, A.; Bowman, K. W.; Parrington, M.; Jones, D. B. A.; Lee, M.
2011-08-01
Chemistry transport models determine the evolving chemical state of the atmosphere by solving the fundamental equations that govern physical and chemical transformations subject to initial conditions of the atmospheric state and surface boundary conditions, e.g., surface emissions. Data assimilation techniques synthesize model predictions and measurements in a rigorous mathematical framework that provides observational constraints on these conditions. Two families of data assimilation methods are currently widely used: variational and Kalman filter (KF). The variational approach is based on control theory and formulates data assimilation as the minimization of a cost functional that measures the model-observation mismatch. The Kalman filter approach is rooted in statistical estimation theory and provides the analysis covariance together with the best state estimate. Suboptimal Kalman filters employ different approximations of the covariances in order to make the computations feasible with large models. Each family of methods has both merits and drawbacks. This paper compares several data assimilation methods used for global chemical data assimilation. Specifically, we evaluate data assimilation approaches for improving estimates of the summertime global tropospheric ozone distribution in August 2006 based on ozone observations from the NASA Tropospheric Emission Spectrometer and the GEOS-Chem chemistry transport model. The resulting analyses are compared against independent ozonesonde measurements to assess the effectiveness of each assimilation method. All assimilation methods provide notable improvements over the free model simulations, which differ from the ozonesonde measurements by about 20% (below 200 hPa). Four-dimensional variational data assimilation with window lengths between five days and two weeks is the most accurate method, with mean differences between analysis profiles and ozonesonde measurements of 1-5%. Two sequential assimilation approaches (three-dimensional variational and suboptimal KF), although derived from different theoretical considerations, provide similar ozone estimates, with relative differences of 5-10% between the analyses and ozonesonde measurements. Adjoint sensitivity analysis techniques are used to explore the role of uncertainties in ozone precursors and their emissions in the distribution of tropospheric ozone. A novel technique is introduced that projects 3D-Var increments back to an equivalent initial condition, which facilitates comparison with 4D-Var techniques.
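For reference, the variational cost functional alluded to above has the standard strong-constraint 4D-Var form; this is textbook notation, not an equation taken from the paper:

```latex
J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf{T}} B^{-1} (x_0 - x_b)
       + \tfrac{1}{2} \sum_{i=0}^{N} \big(H_i(x_i) - y_i\big)^{\mathsf{T}} R_i^{-1} \big(H_i(x_i) - y_i\big),
\qquad x_i = M_{0 \to i}(x_0)
```

Here x_b is the background (prior) state, B and R_i the background and observation error covariances, H_i the observation operators, y_i the observations, and M_{0→i} the model propagator; 3D-Var is recovered when a single observation time is used and no model propagation enters the cost.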
NASA Astrophysics Data System (ADS)
Landry, B. J.; Wu, H.; Wenzel, S. P.; Gates, S. J.; Fytanidis, D. K.; Garcia, M. H.
2017-12-01
Unexploded ordnances (UXOs) can be found at the bottom of coastal areas as residue of military wartime activities, training, or accidents. These underwater objects are hazards for humans and the coastal environment, increasing the need to address the knowledge gaps regarding the initiation of motion, fate, and transport of UXOs under current and wave conditions. Extensive experimental analysis was conducted on the initiation of motion of UXOs under various rigid-bed roughness conditions (smooth PVC, pitted steel, marbles, gravels, and a bed of spherical particles) for both unidirectional and oscillatory flows. Particle image velocimetry measurements were conducted under both flow conditions to resolve the flow structure and estimate the critical flow conditions for initiation of motion of UXOs. Analysis of the experimental observations shows that the geometrical characteristics of the UXOs, their properties (i.e., volume, mass), and their orientation with respect to the mean flow play an important role in the reorientation and mobility of the examined objects. A novel unified initiation-of-motion diagram is proposed using an effective/unified hydrodynamic roughness and a new length scale which includes the effect of the projected area and the bed-UXO contact area. Both unidirectional and oscillatory critical flow conditions collapsed onto a single dimensionless diagram, highlighting the importance and practical applicability of the proposed work. In addition to the rigid-bed experiments, the burial dynamics of proud UXOs on a mobile sand bed were also examined. The complex flow-bedform-UXO interactions were evaluated, highlighting the effect of munition density on burial rate and final burial depth. Burial dynamics and mechanisms for motion were examined for various UXO types, and the results show that, for low-density UXOs under energetic conditions, lateral transport coexists with burial. Prior to burial, UXO reorientation was also observed, depending on the geometric characteristics of the objects.
Power conditioning equipment for a thermoelectric outer planet spacecraft, volume 1, book 1
NASA Technical Reports Server (NTRS)
Andrews, R. E. (Editor)
1972-01-01
Equipment was designed to receive power from a radioisotope thermoelectric generator source and to condition, distribute, and control this power for the spacecraft loads. The TOPS mission, aimed at a representative tour of the outer planets, would operate for an estimated 12-year period. The unique design characteristics required of the power conditioning equipment result from the long mission time and the need for autonomous on-board operation due to large communication distances and the associated time delays of ground-initiated actions. The salient features of the selected power subsystem configuration are: (1) the PCE regulates the power from the radioisotope thermoelectric generator power source at 30 Vdc by means of a quad-redundant shunt regulator; (2) 30 Vdc power is used by certain loads, but is more generally inverted and distributed as square-wave ac power; (3) a protected bus is used to assure that power is always available to the control computer subsystem, permitting corrective action to be initiated in response to fault conditions; and (4) various levels of redundancy are employed to provide high subsystem reliability.
Fast auto-focus scheme based on optical defocus fitting model
NASA Astrophysics Data System (ADS)
Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min
2018-04-01
An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Starting from basic optical defocus principles, the optical defocus fitting model is derived to approximate the potential in-focus position. With this accurate modelling, the proposed auto-focus scheme can make the stepping motor approach the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential in-focus position based on the proposed ODFM method. Around the estimated position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential in-focus image to confirm the in-focus position using a contrast-based method. Experimental results show that the proposed scheme can complete auto-focus within only 5 to 7 steps, with good performance even under low-light conditions.
Cycle-expansion method for the Lyapunov exponent, susceptibility, and higher moments.
Charbonneau, Patrick; Li, Yue Cathy; Pfister, Henry D; Yaida, Sho
2017-09-01
Lyapunov exponents characterize the chaotic nature of dynamical systems by quantifying the growth rate of uncertainty associated with the imperfect measurement of initial conditions. Finite-time estimates of the exponent, however, experience fluctuations due to both the initial condition and the stochastic nature of the dynamical path. The scale of these fluctuations is governed by the Lyapunov susceptibility, the finiteness of which typically provides a sufficient condition for the law of large numbers to apply. Here, we obtain a formally exact expression for this susceptibility in terms of the Ruelle dynamical ζ function for one-dimensional systems. We further show that, for systems governed by sequences of random matrices, the cycle expansion of the ζ function enables systematic computations of the Lyapunov susceptibility and of its higher-moment generalizations. The method is here applied to a class of dynamical models that maps to static disordered spin chains with interactions stretching over a varying distance and is tested against Monte Carlo simulations.
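As a concrete, hedged illustration of finite-time fluctuations, the Python sketch below estimates the largest Lyapunov exponent of a product of random 2x2 matrices (the setting the abstract maps to disordered spin chains) and examines the spread of finite-time estimates; the matrix ensemble is invented for illustration, and the cycle-expansion machinery itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)

def finite_time_lyapunov(n_steps):
    """Largest Lyapunov exponent of a product of random 2x2 matrices."""
    v = np.array([1.0, 0.0])
    log_growth = 0.0
    for _ in range(n_steps):
        M = np.array([[1.0, rng.normal(0, 0.8)],   # random couplings, loosely
                      [rng.normal(0, 0.8), 1.0]])  # mimicking a disordered chain
        v = M @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm                                  # renormalize to avoid overflow
    return log_growth / n_steps

# Fluctuations of finite-time estimates; their variance scales like
# (susceptibility / t), so t * var should stabilize as t grows.
samples = np.array([finite_time_lyapunov(500) for _ in range(200)])
print(f"lambda ~ {samples.mean():.4f}, t * var(fluct) ~ {500 * samples.var():.4f}")
```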
Estimation of effective connectivity using multi-layer perceptron artificial neural network.
Talebi, Nasibeh; Nasrabadi, Ali Motie; Mohammad-Rezazadeh, Iman
2018-02-01
Studies on interactions between brain regions estimate effective connectivity, usually based on causality inferences made on the basis of temporal precedence. In this study, the causal relationship is modeled by a multi-layer perceptron feed-forward artificial neural network, because of the ANN's ability to generate appropriate input-output mappings and to learn from training examples without detailed knowledge of the underlying system. At any time instant, past samples of the data are placed at the network input, and the subsequent values are predicted at its output. To estimate the strength of interactions, a "causality coefficient" measure is defined based on the network structure, the connecting weights, and the parameters of the hidden-layer activation function. Simulation analysis demonstrates that the method, called CREANN (Causal Relationship Estimation by Artificial Neural Network), can estimate time-invariant and time-varying effective connectivity in terms of MVAR coefficients. The method is robust with respect to the noise level of the data. Furthermore, the estimates are not significantly influenced by the model order (the considered time lag) or by different initial conditions (initial random weights and parameters of the network). CREANN is also applied to EEG data collected during a memory recognition task. The results indicate that it can reveal changes in the information flow between brain regions involved in the episodic memory retrieval process. These convincing results emphasize that CREANN can be used as an appropriate method to estimate causal relationships among brain signals.
Non-Rigid Structure Estimation in Trajectory Space from Monocular Vision
Wang, Yaming; Tong, Lingling; Jiang, Mingfeng; Zheng, Junbao
2015-01-01
In this paper, the problem of non-rigid structure estimation in trajectory space from monocular vision is investigated. Similar to the Point Trajectory Approach (PTA), the structure matrix is calculated with a factorization method, based on characteristic points' trajectories described by a predefined Discrete Cosine Transform (DCT) basis. To further optimize non-rigid structure estimation from monocular vision, a rank minimization problem on the structure matrix is formulated by introducing a basic low-rank condition. Moreover, the Accelerated Proximal Gradient (APG) algorithm is used to solve the rank minimization problem, optimizing the initial structure matrix calculated by the PTA method. The APG algorithm converges to efficient solutions quickly and noticeably reduces the reconstruction error. Reconstruction results on real image sequences indicate that the proposed approach runs reliably and effectively improves the accuracy of non-rigid structure estimation from monocular vision. PMID:26473863
Energy and maximum norm estimates for nonlinear conservation laws
NASA Technical Reports Server (NTRS)
Olsson, Pelle; Oliger, Joseph
1994-01-01
We have devised a technique that makes it possible to obtain energy estimates for initial-boundary value problems for nonlinear conservation laws. The two major tools used to achieve the energy estimates are a certain splitting of the flux vector derivative f(u)_x, and a structural hypothesis, referred to as a cone condition, on the flux vector f(u). These hypotheses are fulfilled for many equations that occur in practice, such as the Euler equations of gas dynamics. It should be noted that the energy estimates are obtained without any assumptions on the gradient of the solution u. The results extend to weak solutions that are obtained as pointwise limits of vanishing-viscosity solutions. As a byproduct, we obtain explicit expressions for the entropy function and the entropy flux of symmetrizable systems of conservation laws. Under certain circumstances the proposed technique can be applied repeatedly so as to yield estimates in the maximum norm.
NASA Technical Reports Server (NTRS)
Lapenta, William M.; Crosson, William; Dembek, Scott; Lakhtakia, Mercedes
1998-01-01
It is well known that soil moisture is a characteristic of the land surface that strongly affects the partitioning of outgoing radiation into sensible and latent heat, which significantly impacts both weather and climate. Detailed land surface schemes are now being coupled to mesoscale atmospheric models in order to represent the effect of soil moisture on atmospheric simulations. However, there is little direct soil moisture data available to initialize these models on regional to continental scales. As a result, a Soil Hydrology Model (SHM) has been used to generate an indirect estimate of soil moisture conditions over the continental United States at a grid resolution of 36 km on a daily basis since 8 May 1995. The SHM is forced by analyses of atmospheric observations, including precipitation, and contains detailed information on slope, soil, and land cover characteristics. The purpose of this paper is to evaluate the utility of initializing a detailed coupled model with the soil moisture data produced by SHM.
Patellofemoral joint stress during running with alterations in foot strike pattern.
Vannatta, Charles Nathan; Kernozek, Thomas W
2015-05-01
This study aimed to quantify differences in patellofemoral joint stress that may occur when healthy runners alter their foot strike pattern from their habitual rearfoot strike to a forefoot strike to gain insight on the potential etiology and treatment methods of patellofemoral pain. Sixteen healthy female runners completed 20 running trials in a controlled laboratory setting under rearfoot strike and forefoot strike conditions. Kinetic and kinematic data were used to drive a static optimization technique to estimate individual muscle forces to input into a model of the patellofemoral joint to estimate joint stress during running. Peak patellofemoral joint stress and the stress-time integral over stance phase decreased by 27% and 12%, respectively, in the forefoot strike condition (P < 0.001). Peak vertical ground reaction force increased slightly in the forefoot strike condition (P < 0.001). Peak quadriceps force and average hamstring force decreased, whereas gastrocnemius and soleus muscle forces increased when running with a forefoot strike (P < 0.05). Knee flexion angle at initial contact increased (P < 0.001), total knee excursion decreased (P < 0.001), and no change occurred in peak knee flexion angle (P = 0.238). Step length did not change between conditions (P = 0.375), but the leading leg landed with the foot positioned with a horizontal distance closer to the hip at initial contact in the forefoot strike condition (P < 0.001). Altering one's strike pattern to a forefoot strike results in consistent reductions in patellofemoral joint stress independent of changes in step length. Thus, implementation of forefoot strike training programs may be warranted in the treatment of runners with patellofemoral pain. However, it is suggested that the transition to a forefoot strike pattern should be completed in a graduated manner.
Hybrid Weighted Minimum Norm Method A new method based LORETA to solve EEG inverse problem.
Song, C; Zhuang, T; Wu, Q
2005-01-01
This paper presents a new method for solving the EEG inverse problem. It builds on the following physiological characteristics of neural electrical activity sources: first, neighboring neurons tend to activate synchronously; second, the distribution of the source space is sparse; third, the activity of the sources is highly concentrated. We take this prior knowledge as the only prerequisite for developing the EEG inverse solution, assuming no other characteristics of the solution, in order to realize the most general 3D EEG reconstruction map. The proposed algorithm combines the advantages of LORETA, a low-resolution method that emphasizes localization, and FOCUSS, a high-resolution method that emphasizes separability. The method remains within the framework of weighted minimum norm estimation. The key step is to construct a weighting matrix, drawing on existing smoothness operators, competition mechanisms, and learning algorithms. The basic procedure is to obtain an initial estimate of the solution, construct a new estimate using information from the previous one, and repeat this process until the last two estimates remain essentially unchanged.
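The weighted-minimum-norm iteration described above can be sketched generically. The Python example below is a FOCUSS-style reweighting loop under invented dimensions and a random toy lead field; it illustrates the recursion x = W Wᵀ Aᵀ (A W Wᵀ Aᵀ + λI)⁻¹ b with weights built from the previous solution, not the authors' exact weighting matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_sources = 32, 200
A = rng.standard_normal((n_sensors, n_sources))      # toy lead-field matrix

x_true = np.zeros(n_sources)
x_true[[50, 51, 52, 140]] = [1.0, 0.8, 0.9, -1.2]    # sparse, locally smooth sources
b = A @ x_true + 0.01 * rng.standard_normal(n_sensors)

lam = 1e-2
x = np.ones(n_sources)                               # initial (unweighted) estimate
for _ in range(10):                                  # reweighting loop
    W = np.diag(np.abs(x) + 1e-6)                    # weights from previous solution,
    AW = A @ W                                       # sharpening the estimate each pass
    x = W @ AW.T @ np.linalg.solve(AW @ AW.T + lam * np.eye(n_sensors), b)

print("recovered support:", np.nonzero(np.abs(x) > 0.1 * np.abs(x).max())[0])
```

The first pass is an ordinary (Tikhonov-regularized) minimum norm solution; successive passes concentrate energy onto the active sources, which is the "localization then separation" behavior the method aims for.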
The economics of treatment for infants with respiratory distress syndrome.
Neil, N; Sullivan, S D; Lessler, D S
1998-01-01
To define clinical outcomes and prevailing patterns of care for the initial hospitalization of infants at greatest risk for respiratory distress syndrome (RDS); to estimate direct medical care costs associated with the initial hospitalization; and to introduce and demonstrate a simulation technique for the economic evaluation of health care technologies. Clinical outcomes and usual-care algorithms were determined for infants with RDS in three birthweight categories (500-1,000g; >1,000-1,500g; and >1,500g) using literature- and expert-panel-based data. The experts were practitioners from major U.S. hospitals who were directly involved in the clinical care of such infants. Using the framework derived from the usual care patterns and outcomes, the authors developed an itemized "micro-costing" economic model to simulate the costs associated with the initial hospitalization of a hypothetical RDS patient. The model is computerized and dynamic; unit costs, frequencies, number of days, probabilities and population multipliers are all variable and can be modified on the basis of new information or local conditions. Aggregated unit costs are used to estimate the expected medical costs of treatment per patient. Expected costs of initial hospitalization per uncomplicated surviving infant with RDS were estimated to be $101,867 for 500-1,000g infants; $64,524 for >1,000-1,500g infants; and $27,224 for >1,500g infants. Incremental costs of complications among survivors were estimated to be $22,155 (500-1,000g); $11,041 (>1,000-1,500g); and $2,448 (>1,500 g). Expected costs of initial hospitalization per case (including non-survivors) were $100,603; $72,353; and $28,756, respectively. An itemized model such as the one developed here serves as a benchmark for the economic assessment of treatment costs and utilization. Moreover, it offers a powerful tool for the prospective evaluation of new technologies or procedures designed to reduce the incidence of, severity of, and/or total hospital resource use ascribed to RDS.
NASA Astrophysics Data System (ADS)
Sarna, Neeraj; Torrilhon, Manuel
2018-01-01
Using the characteristic decomposition of the boundary conditions and energy estimates, we define certain criteria that a set of stable boundary conditions for a linear initial-boundary value problem involving a symmetric hyperbolic system must satisfy. We first use these stability criteria to show the instability of the Maxwell boundary conditions proposed by Grad (Commun Pure Appl Math 2(4):331-407, 1949). We then recognise a special block structure of the moment equations which arises from the recursion relations and the orthogonality of the Hermite polynomials; this block structure helps us formulate stable boundary conditions for an arbitrary-order Hermite discretization of the Boltzmann equation. The formulation of stable boundary conditions relies upon an Onsager matrix, which is constructed such that the newly proposed boundary conditions stay close to the Maxwell boundary conditions, at least in the lower-order moments.
Seelig, Amber D; Bensley, Kara M; Williams, Emily C; Armenta, Richard F; Rivera, Anna C; Peterson, Arthur V; Jacobson, Isabel G; Littman, Alyson J; Maynard, Charles; Bricker, Jonathan B; Rull, Rudolph P; Boyko, Edward J
2018-06-06
The aim of this study was to determine whether specific individual posttraumatic stress disorder (PTSD) symptoms or symptom clusters predict cigarette smoking initiation. Longitudinal data from the Millennium Cohort Study were used to estimate the relative risk for smoking initiation associated with PTSD symptoms among 2 groups: (1) all individuals who initially indicated they were nonsmokers (n = 44,968, main sample) and (2) a subset of the main sample who screened positive for PTSD (n = 1622). Participants were military service members who completed triennial comprehensive surveys that included assessments of smoking and PTSD symptoms. Complementary log-log models were fit to estimate the relative risk for subsequent smoking initiation associated with each of the 17 symptoms that comprise the PTSD Checklist and 5 symptom clusters. Models were adjusted for demographics, military factors, comorbid conditions, and other PTSD symptoms or clusters. In the main sample, no individual symptoms or clusters predicted smoking initiation. However, in the subset with PTSD, the symptoms "feeling irritable or having angry outbursts" (relative risk [RR] 1.41, 95% confidence interval [CI] 1.13-1.76) and "feeling as though your future will somehow be cut short" (RR 1.19, 95% CI 1.02-1.40) were associated with increased risk for subsequent smoking initiation. Certain PTSD symptoms were associated with higher risk for smoking initiation among current and former service members with PTSD. These results may help identify individuals who might benefit from more intensive smoking prevention efforts included with PTSD treatment.
Spectral estimates of intercepted solar radiation by corn and soybean canopies
NASA Technical Reports Server (NTRS)
Gallo, K. P.; Brooks, C. C.; Daughtry, C. S. T.; Bauer, M. E.; Vanderbilt, V. C.
1982-01-01
Attention is given to the development of methods for combining spectral and meteorological data in crop yield models which are capable of providing accurate estimates of crop condition and yields throughout the growing season. The present investigation is concerned with initial tests of these concepts using spectral and agronomic data acquired in controlled experiments. The data were acquired at the Purdue University Agronomy Farm, 10 km northwest of West Lafayette, Indiana. Data were obtained throughout several growing seasons for corn and soybeans. Five methods or models for predicting yields were examined. On the basis of the obtained results, it is concluded that estimating intercepted solar radiation using spectral data is a viable approach for merging spectral and meteorological data in crop yield models.
Diffusion phenomenon for linear dissipative wave equations in an exterior domain
NASA Astrophysics Data System (ADS)
Ikehata, Ryo
Under general conditions on the initial data, we derive crucial estimates which imply the diffusion phenomenon for dissipative linear wave equations in an exterior domain. In deriving the diffusion phenomenon for dissipative wave equations, the time integral method developed by Ikehata and Matsuyama (Sci. Math. Japon. 55 (2002) 33) plays an effective role.
Single bubble of an electronegative gas in transformer oil in the presence of an electric field
NASA Astrophysics Data System (ADS)
Gadzhiev, M. Kh.; Tyuftyaev, A. S.; Il'ichev, M. V.
2017-10-01
The influence of the electric field on a single air bubble in transformer oil has been studied. It has been shown that, depending on its size, the bubble may initiate breakdown. The sizes of air and sulfur hexafluoride bubbles at which breakdown will not be observed have been estimated based on the condition for the avalanche-to-streamer transition.
Assessment of initial soil moisture conditions for event-based rainfall-runoff modelling
NASA Astrophysics Data System (ADS)
Tramblay, Yves; Bouvier, Christophe; Martin, Claude; Didon-Lescot, Jean-François; Todorovik, Dragana; Domergue, Jean-Marc
2010-06-01
Flash floods are the most destructive natural hazards that occur in the Mediterranean region. Rainfall-runoff models can be very useful for flash flood forecasting and prediction. Event-based models are very popular for operational purposes, but there is a need to reduce the uncertainties related to the initial moisture conditions estimation prior to a flood event. This paper aims to compare several soil moisture indicators: local Time Domain Reflectometry (TDR) measurements of soil moisture, modelled soil moisture through the Interaction-Sol-Biosphère-Atmosphère (ISBA) component of the SIM model (Météo-France), antecedent precipitation and base flow. A modelling approach based on the Soil Conservation Service-Curve Number method (SCS-CN) is used to simulate the flood events in a small headwater catchment in the Cevennes region (France). The model involves two parameters: one for the runoff production, S, and one for the routing component, K. The S parameter can be interpreted as the maximal water retention capacity, and acts as the initial condition of the model, depending on the antecedent moisture conditions. The model was calibrated from a 20-flood sample, and led to a median Nash value of 0.9. The local TDR measurements in the deepest layers of soil (80-140 cm) were found to be the best predictors for the S parameter. TDR measurements averaged over the whole soil profile, outputs of the SIM model, and the logarithm of base flow also proved to be good predictors, whereas antecedent precipitations were found to be less efficient. The good correlations observed between the TDR predictors and the S calibrated values indicate that monitoring soil moisture could help setting the initial conditions for simplified event-based models in small basins.
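The production component used here is the classical SCS-CN relation Q = (P - I_a)² / (P - I_a + S) with initial abstraction I_a = 0.2 S, so the whole role of the antecedent-moisture predictors is to set S before an event. A tiny Python illustration (storm depth and S values invented):

```python
def scs_cn_runoff(P, S, lam=0.2):
    """SCS-CN event runoff (same units as P); S is the maximum retention."""
    Ia = lam * S                       # initial abstraction
    return (P - Ia) ** 2 / (P - Ia + S) if P > Ia else 0.0

# Wetter initial conditions (smaller S) produce more runoff for the same storm
for S in (20.0, 60.0, 120.0):          # mm, e.g. conditioned on antecedent moisture
    print(f"S = {S:5.1f} mm -> runoff from an 80 mm storm: "
          f"{scs_cn_runoff(80.0, S):5.1f} mm")
```

This makes explicit why a good soil-moisture predictor for S (here, deep TDR measurements) translates directly into better event simulations.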
Thermo-mechanical models of obduction applied to the Oman ophiolite
NASA Astrophysics Data System (ADS)
Thibault, Duretz; Philippe, Agard; Philippe, Yamato; Céline, Ducassou; Taras, Gerya; Evguenii, Burov
2015-04-01
During obduction, regional-scale fragments of oceanic lithosphere (ophiolites) are emplaced somewhat enigmatically on top of lighter continental lithosphere. We herein use two-dimensional thermo-mechanical models to investigate the feasibility and controlling parameters of obduction. The models are designed using available geological data from the Oman (Semail) ophiolite. Initial and boundary conditions are constrained by plate kinematic and geochronological data, and modeling results are validated against petrological and structural observations. The reference model consists of three distinct stages: (1) initiation of oceanic subduction away from the Arabian margin, (2) emplacement of the Oman ophiolite atop the Arabian margin, and (3) dome-like exhumation of the subducted Arabian margin beneath the overlying ophiolite. A parametric study suggests that 350-400 km of shortening best fits both the peak P-T conditions of the subducted margin (1.5-2.5 GPa / 450-600°C) and the dimensions of the ophiolite (~170 km width), in agreement with previous estimates. Our results further confirm that the locus of obduction initiation is close to the eastern edge of the Arabian margin (~100 km) and indicate that obduction is facilitated by a strong continental basement rheology.
Conditional flood frequency and catchment state: a simulation approach
NASA Astrophysics Data System (ADS)
Brettschneider, Marco; Bourgin, François; Merz, Bruno; Andreassian, Vazken; Blaquiere, Simon
2017-04-01
Catchments have memory, and the conditional flood frequency distribution for a time period ahead can be seen as non-stationary: it varies with the catchment state and climatic factors. From a risk management perspective, understanding the link between conditional flood frequency and catchment state is key to anticipating potential periods of higher flood risk. Here, we adopt a simulation approach to explore the link between flood frequency obtained by continuous rainfall-runoff simulation and the initial state of the catchment. The simulation chain is based on (i) a three-state rainfall generator applied at the catchment scale, whose parameters are estimated for each month, and (ii) the GR4J lumped rainfall-runoff model, whose parameters are calibrated with all available data. For each month, a large number of stochastic realizations of the continuous rainfall generator for the next 12 months are used as inputs for the GR4J model in order to obtain a large number of stochastic realizations for the next 12 months. This process is then repeated for 50 different initial states of the soil moisture reservoir of the GR4J model and for all the catchments. Thus, 50 different conditional flood frequency curves are obtained for the 50 different initial catchment states. We will present an analysis of the link between the catchment states, the period of the year, and the strength of the conditioning of the flood frequency compared to the unconditional flood frequency. A large sample of diverse catchments in France will be used.
NASA Astrophysics Data System (ADS)
Hutchinson, G. L.; Livingston, G. P.; Healy, R. W.; Striegl, R. G.
2000-04-01
We employed a three-dimensional finite difference gas diffusion model to simulate the performance of chambers used to measure surface-atmosphere trace gas exchange. We found that systematic errors often result from conventional chamber design and deployment protocols, as well as key assumptions behind the estimation of trace gas exchange rates from observed concentration data. Specifically, our simulations showed that (1) when a chamber significantly alters atmospheric mixing processes operating near the soil surface, it also nearly instantaneously enhances or suppresses the postdeployment gas exchange rate, (2) any change resulting in greater soil gas diffusivity, or greater partitioning of the diffusing gas to solid or liquid soil fractions, increases the potential for chamber-induced measurement error, and (3) all such errors are independent of the magnitude, kinetics, and/or distribution of trace gas sources, but greater for trace gas sinks with the same initial absolute flux. Finally, and most importantly, we found that our results apply to steady state as well as non-steady-state chambers, because the slow rate of gas diffusion in soil inhibits recovery of the former from their initial non-steady-state condition. Over a range of representative conditions, the error in steady state chamber estimates of the trace gas flux varied from -30 to +32%, while estimates computed by linear regression from non-steady-state chamber concentrations were 2 to 31% too small. Although such errors are relatively small in comparison to the temporal and spatial variability characteristic of trace gas exchange, they bias the summary statistics for each experiment as well as larger scale trace gas flux estimates based on them.
Effect of insurance parity on substance abuse treatment.
Azzone, Vanessa; Frank, Richard G; Normand, Sharon-Lise T; Burnam, M Audrey
2011-02-01
This study examined the impact of insurance parity on the use, cost, and quality of substance abuse treatment. The authors compared substance abuse treatment spending and utilization from 1999 to 2002 for continuously enrolled beneficiaries covered by Federal Employees Health Benefit (FEHB) plans, which require parity coverage of mental health and substance use disorders, with spending and utilization among beneficiaries in a matched set of health plans without parity coverage. Logistic regression models estimated the probability of any substance abuse service use. Conditional on use, linear models estimated total and out-of-pocket spending. Logistic regression models for three quality indicators for substance abuse treatment were also estimated: identification of adult enrollees with a new substance abuse diagnosis, treatment initiation, and treatment engagement. Difference-in-differences estimates were computed as the (post-parity minus pre-parity) change in outcomes in plans without parity subtracted from the corresponding change in FEHB plans. There were no significant differences between FEHB and non-FEHB plans in rates of change in average utilization of substance abuse services. Conditional on service utilization, out-of-pocket spending for substance abuse treatment declined significantly in the FEHB plans compared with the non-FEHB plans (mean difference=-$101.09, 95% confidence interval [CI]=-$198.06 to -$4.12), whereas changes in total plan spending per user did not differ significantly. With parity, more patients had new diagnoses of a substance use disorder (difference-in-differences risk=0.10%, CI=0.02% to 0.19%). No statistically significant differences were found for rates of initiation and engagement in substance abuse treatment. Findings suggest that for continuously enrolled populations, providing parity of substance abuse treatment coverage improved insurance protection but had little impact on utilization, costs for plans, or quality of care.
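The difference-in-differences contrast described above reduces to simple arithmetic on group means; the sketch below uses invented pre/post means (not the study's data) purely to show the computation.

```python
# Hypothetical mean outcomes (e.g., out-of-pocket spending per user, $).
pre_parity, post_parity = 300.0, 270.0     # FEHB (parity) plans
pre_control, post_control = 295.0, 320.0   # matched non-parity plans

# Change in parity plans minus the same change in control plans
did = (post_parity - pre_parity) - (post_control - pre_control)
print(f"difference-in-differences estimate: {did:+.1f}")   # -55.0
```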
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordström, Jan, E-mail: jan.nordstrom@liu.se; Wahlsten, Markus, E-mail: markus.wahlsten@liu.se
We consider a hyperbolic system with uncertainty in the boundary and initial data. Our aim is to show that different boundary conditions give different convergence rates of the variance of the solution. This means that we can, with the same knowledge of data, get a more or less accurate description of the uncertainty in the solution. A variety of boundary conditions are compared, and both analytical and numerical estimates of the variance of the solution are presented. As an application, we study the effect of this technique on Maxwell's equations as well as on a subsonic outflow boundary for the Euler equations.
Data assimilation for real-time prediction and reanalysis
NASA Astrophysics Data System (ADS)
Shprits, Y.; Kellerman, A. C.; Podladchikova, T.; Kondrashov, D. A.; Ghil, M.
2015-12-01
We discuss how data assimilation can be used for the analysis of individual satellite anomalies, for reconstructing the long-term evolution of the radiation belts for use in specification models, and for improving nowcasting and forecasting of the radiation belts. We also discuss advanced data assimilation methods such as parameter estimation and smoothing. The 3D data-assimilative VERB code allows us to blend together data from GOES, RBSP A, and RBSP B. A real-time prediction framework operating on our web site, based on GOES, RBSP A and B, and ACE data together with 3D VERB, is presented and discussed. In this paper we present a number of applications of data assimilation with the VERB 3D code. 1) The model with data assimilation allows us to propagate data to different pitch angles, energies, and L-shells and blends them together with the physics-based VERB code in an optimal way. We illustrate how we use this capability for the analysis of previous events and for obtaining a global and statistical view of the system. 2) Model predictions strongly depend on the initial conditions supplied to the model; the model is only as good as the initial conditions it uses. To produce the best possible initial condition, data from different sources (GOES, RBSP A and B, and our empirical model predictions based on ACE) are blended together in an optimal way by means of data assimilation as described above. The resulting initial condition has no gaps, which allows us to make more accurate predictions.
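As a minimal illustration of the blending step (a generic Kalman-style measurement update, not the VERB framework's actual implementation), the sketch below weights a model forecast against sparse observations according to their covariances:

```python
import numpy as np

def kalman_update(x_prior, P_prior, y, H, R):
    """One measurement update: blend a model forecast with observations."""
    S = H @ P_prior @ H.T + R                   # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_post = x_prior + K @ (y - H @ x_prior)    # corrected, gap-free state
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post

# Toy example: a 3-bin state (e.g., log flux at three L-shells),
# sampled at two locations by two satellites.
x_prior = np.array([1.0, 2.0, 3.0])
P_prior = np.diag([0.5, 0.5, 0.5])
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
y = np.array([1.4, 2.5])
R = np.diag([0.1, 0.1])
x_post, _ = kalman_update(x_prior, P_prior, y, H, R)
print(x_post)   # observed bins pulled toward the data; unobserved bin unchanged
```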
Overview of entry risk predictions
NASA Astrophysics Data System (ADS)
Mrozinski, R.; Mendeck, G.; Cutri-Kohart, R.
Risk to people on the ground from uncontrolled entries of spacecraft is a primary concern when analyzing end-of-life disposal options for satellites. Countries must balance this risk with the need to mitigate an exponentially growing space debris population. Currently the United States does this via guidelines that call for a satellite to be disposed of in a controlled manner if an uncontrolled entry would be too risky to people on the ground. This risk is measured by a quantity called "casualty expectation", or Ec, defined as the expected number of people suffering death or injury due to a spacecraft entry event. If Ec exceeds 1 in 10,000, U.S. guidelines state that the entry should be controlled rather than uncontrolled. Since this guideline can have serious impacts on the cost, lifetime, and even the mission and functionality of a satellite, it is critical that this quantity be estimated well, and that decision makers understand all assumptions and limitations inherent in the resulting value. This paper discusses several issues regarding estimates of casualty expectation, beginning with an overview of relevant United States policies and guidelines. The equation the space industry typically uses to estimate casualty expectation is presented, along with a look at the sensitivity of the results to the typical assumptions, models, and initial condition uncertainties. Differences in these modeling issues with respect to launch failure Ec estimates are included in the discussion. An alternate quantity to assess risks due to spacecraft entries is introduced. "Probability of casualty", or Pc, is defined as the probability of one or more instances of people suffering death or injury due to a spacecraft entry event. The equation to estimate Pc is derived, where the same assumptions, modeling, and initial condition issues for Ec apply. Several examples are then given of both Ec and Pc estimate calculations. Due to the difficult issues in estimating both Ec and Pc, it is argued that "true" absolute quantities can never be computed. However, Ec and Pc are ideal for relative analyses against a standard tool that eliminates these issues. Such a tool is recommended for assessing compliance with requirements of regulating institutions.
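For reference, the commonly used forms of the two quantities can be summarized as follows; the notation (fragment casualty areas A_{c,i}, local population densities ρ_i, and an independence assumption linking P_c to E_c) is our gloss rather than the paper's exact derivation:

```latex
E_c = \sum_i \rho_i \, A_{c,i}, \qquad
P_c = 1 - \prod_i \bigl(1 - \rho_i A_{c,i}\bigr) \approx 1 - e^{-E_c},
```

so that for E_c << 1 the two measures nearly coincide, P_c ≈ E_c.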
NASA Astrophysics Data System (ADS)
Jiao, J.; Trautz, A.; Zhang, Y.; Illangasekera, T.
2017-12-01
Subsurface flow and transport characterization under data-sparse conditions is addressed by a new and computationally efficient inverse theory that simultaneously estimates parameters, state variables, and boundary conditions. Uncertainty in static data can be accounted for, while the parameter structure can be complex due to process uncertainty. The approach has been successfully extended to inverting transient and unsaturated flows as well as contaminant source identification under unknown initial and boundary conditions. In one example, by sampling numerical experiments simulating two-dimensional steady-state flow in which a tracer migrates, a sequential inversion scheme first estimates the flow field and permeability structure before the evolution of the tracer plume and dispersivities are jointly estimated. Compared to traditional inversion techniques, the theory does not use forward simulations to assess model-data misfits, so knowledge of the difficult-to-determine site boundary condition is not required. To test the general applicability of the theory, data generated during high-precision intermediate-scale experiments (i.e., at a scale intermediate between the field and column scales) in large synthetic aquifers can be used. The design of such experiments is not trivial, as laboratory conditions have to be selected to mimic natural systems in order to provide useful data, requiring a variety of sensors and data collection strategies. This paper presents the design of such an experiment in a synthetic, multi-layered aquifer with dimensions of 242.7 × 119.3 × 7.7 cm. Different experimental scenarios that will generate data to validate the theory are presented.
NASA Astrophysics Data System (ADS)
Hardwick, Robert J.; Vennin, Vincent; Byrnes, Christian T.; Torrado, Jesús; Wands, David
2017-10-01
We study the stochastic distribution of spectator fields predicted in different slow-roll inflation backgrounds. Spectator fields have a negligible energy density during inflation but may play an important dynamical role later, even giving rise to primordial density perturbations within our observational horizon today. During de Sitter expansion there is an equilibrium solution for the spectator field which is often used to estimate the stochastic distribution during slow-roll inflation. However, slow roll only requires that the Hubble rate varies slowly compared to the Hubble time, while the time taken for the stochastic distribution to evolve to the de Sitter equilibrium solution can be much longer than a Hubble time. We study both chaotic (monomial) and plateau inflaton potentials, with quadratic, quartic and axionic spectator fields. We give an adiabaticity condition for the spectator field distribution to relax to the de Sitter equilibrium, and find that the de Sitter approximation is never a reliable estimate for the typical distribution at the end of inflation for a quadratic spectator during monomial inflation. The existence of an adiabatic regime at early times can erase the dependence on initial conditions of the final distribution of field values. In these cases, spectator fields acquire sub-Planckian expectation values. Otherwise spectator fields may acquire much larger field displacements than suggested by the de Sitter equilibrium solution. We quantify the information about initial conditions that can be obtained from the final field distribution. Our results may have important consequences for the viability of spectator models for the origin of structure, such as the simplest curvaton models.
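The de Sitter equilibrium referred to here is the standard stochastic-inflation result: for a spectator field χ with potential V(χ) and Hubble rate H, the distribution relaxes toward

```latex
P_{\mathrm{eq}}(\chi) \propto \exp\!\left(-\frac{8\pi^2 V(\chi)}{3H^4}\right),
\qquad
\langle \chi^2 \rangle_{\mathrm{eq}} = \frac{3H^4}{8\pi^2 m^2}
\quad \text{for } V(\chi) = \tfrac{1}{2} m^2 \chi^2 ,
```

which is the benchmark the paper's adiabaticity condition tests against.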
Lightning can strike twice: an unlucky patient of neurological interest.
Gilbee, Ebony S
2013-06-24
Poliomyelitis, once a worldwide epidemic, is becoming increasingly rare owing to the introduction of the polio vaccine in the 1950s. It is estimated that the number of cases of polio has reduced by 99% since the Global Polio Eradication Initiative (GPEI) started in 1988. Amyotrophic lateral sclerosis (ALS) is another relatively uncommon condition which also affects anterior horn cells with debilitating neurological, and deadly, consequences. An unusual case of an aggressive form of ALS developing in a 72-year-old patient with paralytic poliomyelitis in childhood is presented. Her initial presentation was puzzling, and our approach to the diagnostic dilemma is discussed.
Homogeneous buoyancy-generated turbulence
NASA Technical Reports Server (NTRS)
Batchelor, G. K.; Canuto, V. M.; Chasnov, J. R.
1992-01-01
Using a theoretical analysis of the fundamental equations and a numerical simulation of the flow field, the statistically homogeneous motion that is generated by buoyancy forces after the creation of homogeneous random fluctuations in the density of an infinite fluid at an initial instant is examined. It is shown that the analytical results together with the numerical results provide a comprehensive description of the 'birth, life, and death' of buoyancy-generated turbulence. The numerical simulations yielded the mean-square density and mean-square velocity fluctuations and the associated spectra as functions of time for various initial conditions, and the time required for the mean-square density fluctuation to fall to a specified small value was estimated.
Zhou, Jin; Tracy, Timothy S; Remmel, Rory P
2010-11-01
Bilirubin, an end product of heme catabolism, is primarily eliminated via glucuronic acid conjugation by UGT1A1. Impaired bilirubin conjugation, caused by inhibition of UGT1A1, can result in clinical consequences, including jaundice and kernicterus. Thus, evaluation of the ability of new drug candidates to inhibit UGT1A1-catalyzed bilirubin glucuronidation in vitro has become common practice. However, the instability of bilirubin and its glucuronides presents substantial technical challenges to conduct in vitro bilirubin glucuronidation assays. Furthermore, because bilirubin can be diglucuronidated through a sequential reaction, establishment of initial rate conditions can be problematic. To address these issues, a robust high-performance liquid chromatography assay to measure both bilirubin mono- and diglucuronide conjugates was developed, and the incubation conditions for bilirubin glucuronidation by human embryonic kidney 293-expressed UGT1A1 were carefully characterized. Our results indicated that bilirubin glucuronidation should be assessed at very low protein concentrations (0.05 mg/ml protein) and over a short incubation time (5 min) to assure initial rate conditions. Under these conditions, bilirubin total glucuronide formation exhibited a hyperbolic (Michaelis-Menten) kinetic profile with a Km of ∼0.2 μM. In addition, under these initial rate conditions, the relative proportions between the total monoglucuronide and the diglucuronide product were constant across the range of bilirubin concentration evaluated (0.05-2 μM), with the monoglucuronide being the predominant species (∼70%). In conclusion, establishment of appropriate incubation conditions (i.e., very low protein concentrations and short incubation times) is necessary to properly characterize the kinetics of bilirubin glucuronidation in a recombinant UGT1A1 system.
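A hyperbolic profile of this kind is routinely characterized by fitting the Michaelis-Menten equation to initial-rate data. The sketch below uses invented rates consistent with a Km near 0.2 μM (not the paper's measurements) to show the fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """v = Vmax * [S] / (Km + [S])"""
    return vmax * s / (km + s)

# Illustrative bilirubin concentrations (uM) and normalized initial rates.
s = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
v = np.array([0.21, 0.33, 0.50, 0.71, 0.84, 0.90])

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(1.0, 0.2))
print(f"Vmax ~ {vmax:.2f} (normalized), Km ~ {km:.2f} uM")
```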
A meta-analytic review of moral licensing.
Blanken, Irene; van de Ven, Niels; Zeelenberg, Marcel
2015-04-01
Moral licensing refers to the effect that when people initially behave in a moral way, they are later more likely to display behaviors that are immoral, unethical, or otherwise problematic. We provide a state-of-the-art overview of moral licensing by conducting a meta-analysis of 91 studies (7,397 participants) that compare a licensing condition with a control condition. Based on this analysis, the magnitude of the moral licensing effect is estimated to be a Cohen's d of 0.31. We tested potential moderators and found that published studies tend to have larger moral licensing effects than unpublished studies. We found no empirical evidence for other moderators that were theorized to be of importance. The effect size estimate implies that studies require many more participants to draw solid conclusions about moral licensing and its possible moderators.
He, Ning; Sun, Hechun; Dai, Miaomiao
2014-05-01
To evaluate the influence of temperature and humidity on drug stability by the initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, extent of drug degradation, number of humidity and temperature settings, humidity and temperature range, and average humidity and temperature on the accuracy and precision of the kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from the proposed method were comparable to those from a classical isothermal experiment at constant humidity. The estimates were more accurate and precise when the extent of drug degradation was controlled, the humidity and temperature range was changed, or the average temperature was set closer to room temperature. Compared with isothermal experiments at constant humidity, the proposed method saves time, labor, and materials.
NASA Astrophysics Data System (ADS)
Boley, Aaron C.; Hayfield, Tristen; Mayer, Lucio; Durisen, Richard H.
2010-06-01
We explore the initial conditions for fragments in the extended regions (r ≳ 50 AU) of gravitationally unstable disks. We combine analytic estimates for the fragmentation of spiral arms with 3D SPH simulations to show that initial fragment masses are in the gas giant regime. These initial fragments will have substantial angular momentum, and should form disks with radii of a few AU. We show that clumps will survive for multiple orbits before they undergo a second, rapid collapse due to H2 dissociation and that it is possible to destroy bound clumps by transporting them into the inner disk. The consequences of disrupted clumps for planet formation, dust processing, and disk evolution are discussed. We argue that it is possible to produce Earth-mass cores in the outer disk during the earliest phases of disk evolution.
Changyou Sun; Daowei Zhang
2010-01-01
In this article, the results of an initial attempt to estimate the effects of state attributes on plant location and investment expenditure were presented for the forest products industry in the southern United States. A conditional logit model was used to analyze new plant births, and a time-series cross-section model to assess the total capital expenditure....
CONSTRAINTS ON THE PHYSICAL PROPERTIES OF MAIN BELT COMET P/2013 R3 FROM ITS BREAKUP EVENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirabayashi, Masatoshi; Sánchez, Diego Paul; Gabriel, Travis
2014-07-01
Jewitt et al. recently reported that main belt comet P/2013 R3 experienced a breakup, probably due to rotational disruption, with its components separating on mutually hyperbolic orbits. We propose a technique for constraining physical properties of the proto-body, especially the initial spin period and cohesive strength, as a function of the body's estimated size and density. The breakup conditions are developed by combining the mutual orbit dynamics of the smaller components and the failure condition of the proto-body. Given a proto-body with a bulk density ranging from 1000 kg m⁻³ to 1500 kg m⁻³ (a typical range for the bulk density of C-type asteroids), we obtain possible values of the cohesive strength (40-210 Pa) and the initial spin period (0.48-1.9 hr). From this result, we conclude that although the proto-body could have been a rubble pile, it was likely spinning beyond its gravitational binding limit and would have needed cohesive strength to hold itself together. Additional observations of P/2013 R3 will enable stronger constraints on this event, and the present technique will be able to give more precise estimates of its internal structure.
NASA Astrophysics Data System (ADS)
Jun, Li; Huicheng, Yin
2018-05-01
The paper is devoted to investigating long time behavior of smooth small data solutions to 3-D quasilinear wave equations outside of compact convex obstacles with Neumann boundary conditions. Concretely speaking, when the surface of a 3-D compact convex obstacle is smooth and the quasilinear wave equation fulfills the null condition, we prove that the smooth small data solution exists globally provided that the Neumann boundary condition on the exterior domain is given. One of the main ingredients in the current paper is the establishment of local energy decay estimates of the solution itself. As an application of the main result, the global stability to 3-D static compressible Chaplygin gases in exterior domain is shown under the initial irrotational perturbation with small amplitude.
Trends and uncertainties in budburst projections of Norway spruce in Northern Europe.
Olsson, Cecilia; Olin, Stefan; Lindström, Johan; Jönsson, Anna Maria
2017-12-01
Budburst is regulated by temperature conditions, and a warming climate is associated with earlier budburst. A range of phenology models has been developed to assess climate change effects, and they tend to produce different results. This is mainly caused by different model representations of tree physiology processes, selection of observational data for model parameterization, and selection of climate model data to generate future projections. In this study, we applied (i) Bayesian inference to estimate model parameter values to address uncertainties associated with selection of observational data, (ii) selection of climate model data representative of a larger dataset, and (iii) ensemble modeling over multiple initial conditions, model classes, model parameterizations, and boundary conditions to generate future projections and uncertainty estimates. The ensemble projection indicated that budburst of Norway spruce in northern Europe will on average take place 10.2 ± 3.7 days earlier in 2051-2080 than in 1971-2000, given climate conditions corresponding to RCP 8.5. Three provenances were assessed separately (one early and two late), and the projections indicated that the relationship among provenances will hold also in a warmer climate. Structurally complex models were more likely than simple models to fail at predicting budburst for some combinations of site and year. However, they contributed to the overall picture of current understanding of climate impacts on tree phenology by capturing additional aspects of temperature response, for example, chilling. Model parameterizations based on single sites were more likely to result in model failure than parameterizations based on multiple sites, highlighting that the model parameterization is sensitive to initial conditions and may not perform well under other climate conditions, whether the change is due to a shift in space or over time. By addressing a range of uncertainties, this study showed that ensemble modeling provides a more robust impact assessment than would a single phenology model run.
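Among the model classes compared in such ensembles, the simplest are thermal-time (degree-day) models. The sketch below, with assumed base temperature, forcing requirement, and start day rather than the study's calibrated values, shows how warming shifts predicted budburst earlier:

```python
import numpy as np

def budburst_day(tmean, t_base=5.0, gdd_req=120.0, start_doy=60):
    """Thermal-time model: budburst once degree-days above t_base,
    accumulated from start_doy, reach gdd_req."""
    gdd = np.cumsum(np.maximum(tmean[start_doy:] - t_base, 0.0))
    if gdd[-1] < gdd_req:
        return None                       # requirement never met
    return start_doy + int(np.argmax(gdd >= gdd_req))

# Toy annual temperature cycle (degrees C) plus uniform warming offsets
doy = np.arange(365)
tmean = 8.0 - 12.0 * np.cos(2.0 * np.pi * (doy + 10) / 365.0)
for warming in (0.0, 2.0, 4.0):
    print(f"+{warming:.0f} C warming -> budburst day "
          f"{budburst_day(tmean + warming)}")
```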
Lebel, Karina; Hamel, Mathieu; Duval, Christian; Nguyen, Hung; Boissy, Patrick
2018-01-01
Joint kinematics can be assessed using orientation estimates from Attitude and Heading Reference Systems (AHRS). However, magnetically-perturbed environments affect the accuracy of the estimated orientations. This study investigates, both in controlled and human mobility conditions, a trial calibration technique based on a 2D photograph with a pose estimation algorithm to correct the initial difference in AHRS inertial reference frames and improve joint angle accuracy. In controlled conditions, two AHRS were solidly affixed onto a wooden stick and a series of static and dynamic trials were performed in varying environments. Mean accuracy of relative orientation between the two AHRS improved from 24.4° to 2.9° using the proposed correction method. In human mobility conditions, AHRS were placed on the shank and the foot of a participant who performed repeated trials of straight walking and walking while turning, varying the level of magnetic perturbation in the starting environment and the walking speed. Mean joint orientation accuracy went from 6.7° to 2.8° using the correction algorithm. The impact of the starting environment was also greatly reduced, to the point where it could be considered non-significant from a clinical point of view (the maximum mean difference went from 8° to 0.6°). The results obtained demonstrate that the proposed method significantly improves the mean accuracy of AHRS joint orientation estimates in magnetically-perturbed environments and can be implemented in post-processing of AHRS data collected during biomechanical evaluation of motion.
NASA Technical Reports Server (NTRS)
Lane, John E.; Kasparis, Takis; Jones, W. Linwood; Metzger, Philip T.
2009-01-01
Methodologies to improve disdrometer processing, loosely based on mathematical techniques common to the field of particle flow and fluid mechanics, are examined and tested. The inclusion of advection and vertical wind field estimates appear to produce significantly improved results in a Lagrangian hydrometeor trajectory model, in spite of very strict assumptions of noninteracting hydrometeors, constant vertical air velocity, and time independent advection during the scan time interval. Wind field data can be extracted from each radar elevation scan by plotting and analyzing reflectivity contours over the disdrometer site and by collecting the radar radial velocity data to obtain estimates of advection. Specific regions of disdrometer spectra (drop size versus time) often exhibit strong gravitational sorting signatures, from which estimates of vertical velocity can be extracted. These independent wind field estimates become inputs and initial conditions to the Lagrangian trajectory simulation of falling hydrometeors.
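Under the stated assumptions (constant vertical air velocity, time-independent advection), the Lagrangian trajectory of a single drop reduces to a simple integration; the sketch below is schematic, with assumed values rather than the authors' model:

```python
def fall_trajectory(z0, u_adv, w_air, v_term, dt=1.0):
    """Integrate a drop's fall under constant horizontal advection (u_adv),
    constant updraft (w_air) and terminal fall speed (v_term), all in m/s."""
    assert v_term > w_air, "drop must fall for the loop to terminate"
    x, z, t = 0.0, z0, 0.0
    while z > 0.0:
        z -= (v_term - w_air) * dt   # net vertical motion
        x += u_adv * dt              # horizontal drift during the fall
        t += dt
    return x, t

# A ~2 mm drop (terminal speed ~6.5 m/s) released at 3 km altitude
x_land, t_fall = fall_trajectory(z0=3000.0, u_adv=10.0, w_air=0.5, v_term=6.5)
print(f"lands {x_land/1000.0:.1f} km downstream after {t_fall/60.0:.1f} min")
```

Running such trajectories backward from the disdrometer links each observed drop size to the radar volume it fell from, which is how the wind-field estimates enter the processing.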
NASA Technical Reports Server (NTRS)
Murphy, K. A.
1988-01-01
A parameter estimation algorithm is developed which can be used to estimate unknown time- or state-dependent delays and other parameters (e.g., initial condition) appearing within a nonlinear nonautonomous functional differential equation. The original infinite dimensional differential equation is approximated using linear splines, which are allowed to move with the variable delay. The variable delays are approximated using linear splines as well. The approximation scheme produces a system of ordinary differential equations with nice computational properties. The unknown parameters are estimated within the approximating systems by minimizing a least-squares fit-to-data criterion. Convergence theorems are proved for time-dependent delays and state-dependent delays within two classes, which say essentially that fitting the data by using approximations will, in the limit, provide a fit to the data using the original system. Numerical test examples are presented which illustrate the method for all types of delay.
High Temperature Chemistry in the Columbia Accident Investigation
NASA Technical Reports Server (NTRS)
Jacobson, Nathan; Opila, Elizabeth; Tallant, David; Simpson, Regina
2004-01-01
Initial estimates of the temperature and conditions of the breach in Columbia's wing focused on analyses of the slag deposits. These deposits are complex mixtures of the reinforced carbon/carbon (RCC) constituents, insulation material, and wing structural materials. However, it was possible to clearly discern melted/solidified Cerachrome(R) insulation, indicating the temperatures had exceeded 1760 °C. Current research focuses on the carbon/carbon in the path from the breach. Carbon morphology indicates heavy oxidation and erosion. Raman spectroscopy yielded further temperature estimates. A technique developed at Sandia National Laboratories is based on crystallite size in carbon chars: lower temperatures yield nanocrystalline graphite, whereas higher temperatures yield larger graphite crystals. By comparison to standards, the temperatures on the recovered RCC fragments were estimated to have been greater than 2700 °C.
NASA Astrophysics Data System (ADS)
Singh, Shailesh Kumar; Zammit, Christian; Hreinsson, Einar; Woods, Ross; Clark, Martyn; Hamlet, Alan
2013-04-01
Increased access to water is a key pillar of the New Zealand government's plan for economic growth. Variable climatic conditions, coupled with market drivers and increased demand on water resources, mean that critical decisions by water managers rest on climate and streamflow forecasts. Because many of these decisions have serious economic implications (e.g., for irrigated agriculture and electricity generation), accurate forecasts of climate and streamflow are of paramount importance. New Zealand currently does not have a centralized, comprehensive, and state-of-the-art system in place for providing operational seasonal to interannual streamflow forecasts to guide water resources management decisions. As a pilot effort, we implement and evaluate an experimental ensemble streamflow forecasting system for the Waitaki and Rangitata River basins on New Zealand's South Island using a hydrologic simulation model (TopNet) and the familiar ensemble streamflow prediction (ESP) paradigm for estimating forecast uncertainty. To provide a comprehensive database for evaluation of the forecasting system, a set of retrospective model states simulated by the hydrologic model on the first day of each month was first archived for 1972-2009. Then, using the hydrologic simulation model, each of these historical model states was paired with the retrospective temperature and precipitation time series from each historical water year to create a database of retrospective hindcasts. Using the resulting database, the relative importance of initial state variables (such as soil moisture and snowpack) as fundamental drivers of forecast uncertainty was evaluated for different seasons and lead times. The analysis indicates that the sensitivity of flow forecasts to initial condition uncertainty depends on the hydrological regime and the season of the forecast. However, initial conditions do not have a large impact on seasonal flow uncertainties for snow-dominated catchments. Further analysis indicates that this result remains valid when the hindcast database is conditioned on ENSO classification. As a result, hydrological forecasts based on the ESP technique, in which present initial conditions are paired with historical forcing data, appear plausible for New Zealand catchments.
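The ESP paradigm itself is compact: the current model state is run forward under each historical year's forcing to form an ensemble. The skeleton below uses a toy water-balance model; all names and coefficients are placeholders, not TopNet:

```python
import numpy as np

def esp_forecast(model, state_now, historical_forcings):
    """Pair the current model state with each historical year's forcing
    to build a forecast ensemble (the ESP paradigm)."""
    return np.array([model(state_now, f) for f in historical_forcings])

def toy_model(state, forcing):
    """Placeholder seasonal water balance: melt plus rain runoff."""
    snow_mm, soil_mm = state
    return 0.8 * snow_mm + 0.3 * forcing.sum() + 0.1 * soil_mm

rng = np.random.default_rng(3)
# One 90-day precipitation trace per historical water year, 1972-2009
forcings = [rng.gamma(2.0, 3.0, size=90) for _ in range(1972, 2010)]
ens = esp_forecast(toy_model, state_now=(120.0, 40.0),
                   historical_forcings=forcings)
print("seasonal flow quantiles:", np.percentile(ens, [10, 50, 90]).round(1))
```

The spread of the ensemble relative to the spread induced by perturbing `state_now` is what separates forcing-dominated from initial-condition-dominated forecast uncertainty.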
Monte Carlo based NMR simulations of open fractures in porous media
NASA Astrophysics Data System (ADS)
Lukács, Tamás; Balázs, László
2014-05-01
According to the basic principles of nuclear magnetic resonance (NMR), a measurement's free induction decay curve has an exponential characteristic whose parameter is the transverse relaxation time, T2, given by the Bloch equations in the rotating frame. In our simulations we consider the particular case in which the bulk volume is negligible relative to the whole system and vertical movement is essentially zero, so the diffusion term of the T2 relation can be omitted. Such small-aperture situations are common in sedimentary layers, and the smallness of the observed volume allows us to work with just the bulk relaxation and the surface relaxation. The simulation uses the Monte Carlo method: it is based on a random-walk generator that produces the Brownian motion of the particles from uniformly distributed pseudorandom numbers. An attached differential equation accounts for the bulk relaxation, while the initial and iterated conditions guarantee the simulation's replicability and enable consistent estimates. We generate an initial geometry of a planar segment with known height and a given number of particles; the spatial distribution is set equal in each simulation, and the surface-to-volume ratio remains constant. It follows that, for a given thickness of the open fracture, the surface relaxivity can be determined from the fitted curve's parameter. The calculated T2 distribution curves also indicate the variability among the observed fracture situations. Varying the height of the lamina at a constant diffusion coefficient likewise produces a characteristic anomaly; for comparison, we ran the simulation with the same initial volume, number of particles, and conditions in spherical bulks, whose profiles are clear and easy to interpret. The surface relaxation enables us to estimate the interaction between the boundary materials for these two geometrically well-defined bulk shapes; the resulting distribution therefore serves as a basis for estimating porosity and can be used to identify fine-grained porous media.
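A minimal version of such a simulation, with a per-contact wall-relaxation probability standing in for the physical surface relaxivity (the mapping between the two is a calibration choice), might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

def t2_signal(aperture=20e-6, n=5000, n_steps=2000, dt=1e-4,
              D=2.3e-9, t2_bulk=2.0, p_wall=0.02):
    """Random-walk magnetization decay in a planar fracture of given
    aperture (m): walkers relax in the bulk (t2_bulk, s) and, with
    probability p_wall, on each wall contact."""
    step = np.sqrt(2.0 * D * dt)                 # rms 1-D Brownian step (m)
    z = rng.uniform(0.0, aperture, n)
    alive = np.ones(n, dtype=bool)
    signal = np.empty(n_steps)
    for i in range(n_steps):
        z += np.where(alive, rng.choice((-step, step), n), 0.0)
        hit = alive & ((z <= 0.0) | (z >= aperture))
        alive &= ~(hit & (rng.random(n) < p_wall))   # surface relaxation
        z = np.clip(z, 0.0, aperture)
        signal[i] = alive.mean()
    t = np.arange(1, n_steps + 1) * dt
    return t, signal * np.exp(-t / t2_bulk)          # add bulk relaxation

t, s = t2_signal()
# Apparent T2 from a log-linear fit; smaller apertures decay faster.
t2_app = -1.0 / np.polyfit(t, np.log(s + 1e-12), 1)[0]
print(f"apparent T2 ~ {t2_app:.3f} s")
```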
Katriel, G.; Yaari, R.; Huppert, A.; Roll, U.; Stone, L.
2011-01-01
This paper presents new computational and modelling tools for studying the dynamics of an epidemic in its initial stages that use both available incidence time series and data describing the population's infection network structure. The work is motivated by data collected at the beginning of the H1N1 pandemic outbreak in Israel in the summer of 2009. We formulated a new discrete-time stochastic epidemic SIR (susceptible-infected-recovered) model that explicitly takes into account the disease's specific generation-time distribution and the intrinsic demographic stochasticity inherent to the infection process. Moreover, in contrast with many other modelling approaches, the model allows direct analytical derivation of estimates for the effective reproductive number (Re) and of their credible intervals, by maximum likelihood and Bayesian methods. The basic model can be extended to include age–class structure, and a maximum likelihood methodology allows us to estimate the model's next-generation matrix by combining two types of data: (i) the incidence series of each age group, and (ii) infection network data that provide partial information of ‘who-infected-who’. Unlike other approaches for estimating the next-generation matrix, the method developed here does not require making a priori assumptions about the structure of the next-generation matrix. We show, using a simulation study, that even a relatively small amount of information about the infection network greatly improves the accuracy of estimation of the next-generation matrix. The method is applied in practice to estimate the next-generation matrix from the Israeli H1N1 pandemic data. The tools developed here should be of practical importance for future investigations of epidemics during their initial stages. However, they require the availability of data which represent a random sample of the real epidemic process. We discuss the conditions under which reporting rates may or may not influence our estimated quantities and the effects of bias. PMID:21247949
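In its simplest form, the likelihood referred to here has a Poisson renewal structure. The sketch below (toy case counts and an assumed three-day generation-time distribution, with none of the paper's age structure) illustrates maximum-likelihood estimation of Re:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_loglik(re, incidence, gen_dist):
    """Poisson renewal model: I_t ~ Poisson(Re * sum_s g(s) I_{t-s})."""
    ll = 0.0
    for t in range(len(gen_dist), len(incidence)):
        lam = re * sum(gen_dist[s] * incidence[t - 1 - s]
                       for s in range(len(gen_dist)))
        ll += incidence[t] * np.log(lam) - lam
    return -ll

# Toy early-growth case counts and generation-time distribution g(s)
cases = np.array([2, 3, 4, 6, 8, 11, 15, 21, 29, 40])
g = np.array([0.3, 0.5, 0.2])
res = minimize_scalar(neg_loglik, bounds=(0.5, 5.0),
                      args=(cases, g), method='bounded')
print(f"Re estimate: {res.x:.2f}")
```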
Double torsion fracture mechanics testing of shales under chemically reactive conditions
NASA Astrophysics Data System (ADS)
Chen, X.; Callahan, O. A.; Holder, J. T.; Olson, J. E.; Eichhubl, P.
2015-12-01
Fracture properties of shales are vital for applications such as shale and tight gas development and seal performance of carbon storage reservoirs. We analyze the fracture behavior of samples of Marcellus, Woodford, and Mancos shales using double-torsion (DT) load relaxation fracture tests. The DT test allows the determination of mode-I fracture toughness (KIC), the subcritical crack growth index (SCI), and the stress-intensity factor vs. crack velocity (K-V) curves. Samples are tested in ambient air and under aqueous conditions with variable ionic concentrations of NaCl and CaCl2, and at temperatures up to 70 °C, to determine the effects of chemical/environmental conditions on fracture. Under ambient air conditions, KIC determined from DT tests is 1.51±0.32, 0.85±0.25, and 1.08±0.17 MPa·m^1/2 for Marcellus, Woodford, and Mancos shales, respectively. Tests under water showed considerable changes of KIC compared to ambient conditions, with a 10.6% increase for Marcellus, a 36.5% decrease for Woodford, and a 6.7% decrease for Mancos shales. The SCI under ambient air conditions is between 56 and 80 for the shales tested. The presence of water results in a significant reduction of the SCI, by 70% to 85%, compared to the air condition. Tests under chemically reactive solutions are currently being performed with temperature control. K-V curves under ambient air conditions are linear, with a stable SCI throughout the load-relaxation period. However, tests conducted under water show an initial cracking period with SCI values comparable to ambient air tests, which then gradually transitions into stable but significantly lower SCI values of 10-20. The nonlinear K-V curves reveal that crack propagation in shales is initially limited by the transport of chemical agents due to the rocks' low permeability; only after the initial cracking do interactions at the crack tip lead to cracking controlled by faster stress corrosion reactions. The decrease of SCI in water indicates higher crack propagation velocity due to a faster stress corrosion rate in water than in ambient air. The experimental results are applicable to the prediction of fracture initiation based on KIC, the modeling of fracture patterns based on SCI, and the estimation of dynamic fracture propagation such as crack growth velocity and crack re-initiation.
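The SCI is the exponent of the power-law (Charles-type) relation conventionally fitted to K-V data; in the notation used above, a common form is

```latex
V = V_0 \left( \frac{K_I}{K_{IC}} \right)^{n},
```

where V is crack velocity, K_I the mode-I stress-intensity factor, and n the subcritical crack growth index, obtained as the slope of the K-V curve in log-log space.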
Rambo, Philip L; Callahan, Jennifer L; Hogan, Lindsey R; Hullmann, Stephanie; Wrape, Elizabeth
2015-01-01
Recent efforts have contributed to significant advances in the detection of malingered performances by adults during cognitive assessment. However, children's ability to purposefully underperform has received relatively little attention. The purpose of the present investigation was to examine children's performances on common intellectual measures, as well as two symptom validity measures: the Test of Memory Malingering and the Dot-Counting Test. This was accomplished through the administration of measures to children ages 6 to 12 years old in randomly assigned full-effort (control) and poor-effort (treatment) conditions. Prior to randomization, children's general intellectual functioning (i.e., IQ) was estimated via administration of the Kaufman Brief Intelligence Test, Second Edition (KBIT-2). Multivariate analyses revealed that the conditions significantly differed on some but not all administered measures. Specifically, children's estimated IQ in the treatment condition significantly differed from the full-effort IQ initially obtained from the same children on the KBIT-2, as well as from the IQs obtained in the full-effort control condition. These findings suggest that children are fully capable of willfully underperforming during cognitive testing; however, consistent with prior investigations, some measures evidence greater sensitivity than others in evaluating effort.
Shared sensory estimates for human motion perception and pursuit eye movements.
Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio; Osborne, Leslie C
2015-06-03
Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic.
Williams, Christopher; Dugger, Bruce D.; Brasher, Michael G.; Coluccy, John M.; Cramer, Dane M.; Eadie, John M.; Gray, Matthew J.; Hagy, Heath M.; Livolsi, Mark; McWilliams, Scott R.; Petrie, Matthew; Soulliere, Gregory J.; Tirpak, John M.; Webb, Elisabeth B.
2014-01-01
Population-based habitat conservation planning for migrating and wintering waterfowl in North America is carried out by habitat Joint Venture (JV) initiatives and is based on the premise that food can limit demography (i.e. food limitation hypothesis). Consequently, planners use bioenergetic models to estimate food (energy) availability and population-level energy demands at appropriate spatial and temporal scales, and translate these values into regional habitat objectives. While simple in principle, there are both empirical and theoretical challenges associated with calculating energy supply and demand including: 1) estimating food availability, 2) estimating the energy content of specific foods, 3) extrapolating site-specific estimates of food availability to landscapes for focal species, 4) applicability of estimates from a single species to other species, 5) estimating resting metabolic rate, 6) estimating cost of daily behaviours, and 7) estimating costs of thermoregulation or tissue synthesis. Most models being used are daily ration models (DRMs) whose set of simplifying assumptions are well established and whose use is widely accepted and feasible given the empirical data available to populate such models. However, DRMs do not link habitat objectives to metrics of ultimate ecological importance such as individual body condition or survival, and largely only consider food-producing habitats. Agent-based models (ABMs) provide a possible alternative for creating more biologically realistic models under some conditions; however, ABMs require different types of empirical inputs, many of which have yet to be estimated for key North American waterfowl. Decisions about how JVs can best proceed with habitat conservation would benefit from the use of sensitivity analyses that could identify the empirical and theoretical uncertainties that have the greatest influence on efforts to estimate habitat carrying capacity. Development of ABMs at restricted, yet biologically relevant spatial scales, followed by comparisons of their outputs to those generated from more simplistic, deterministic models can provide a means of assessing degrees of dissimilarity in how alternative models describe desired landscape conditions for migrating and wintering waterfowl.
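At its core, a DRM converts an energy balance into a habitat objective; the sketch below shows that arithmetic with placeholder values of the kind a JV would estimate regionally (all numbers are assumptions):

```python
def habitat_objective_ha(duck_use_days, daily_energy_kj, food_kg_per_ha,
                         energy_kj_per_kg, metabolizable_fraction=0.8):
    """Hectares needed for food-energy supply to meet population demand
    (a daily-ration-model calculation; inputs are regional estimates)."""
    demand_kj = duck_use_days * daily_energy_kj
    supply_kj_per_ha = (food_kg_per_ha * energy_kj_per_kg
                        * metabolizable_fraction)
    return demand_kj / supply_kj_per_ha

# e.g., 10 million duck-use-days at 1,200 kJ/day against seed foods
# at 500 kg/ha and 10,000 kJ/kg (illustrative values only)
print(f"{habitat_objective_ha(10e6, 1200.0, 500.0, 10000.0):,.0f} ha")
```

Each of the seven empirical challenges listed above enters this calculation as uncertainty in one of these inputs, which is why the sensitivity analyses the authors call for matter.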
Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System
Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei
2018-01-01
The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, then estimate the accelerometer bias separately, which is difficult to be distinguished under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751
A theory of stationarity and asymptotic approach in dissipative systems
NASA Astrophysics Data System (ADS)
Rubel, Michael Thomas
2007-05-01
The approximate dynamics of many physical phenomena, including turbulence, can be represented by dissipative systems of ordinary differential equations. One often turns to numerical integration to solve them. There is an incompatibility, however, between the answers it can produce (i.e., specific solution trajectories) and the questions one might wish to ask (e.g., what behavior would be typical in the laboratory?) To determine its outcome, numerical integration requires more detailed initial conditions than a laboratory could normally provide. In place of initial conditions, experiments stipulate how tests should be carried out: only under statistically stationary conditions, for example, or only during asymptotic approach to a final state. Stipulations such as these, rather than initial conditions, are what determine outcomes in the laboratory.This theoretical study examines whether the points of view can be reconciled: What is the relationship between one's statistical stipulations for how an experiment should be carried out--stationarity or asymptotic approach--and the expected results? How might those results be determined without invoking initial conditions explicitly?To answer these questions, stationarity and asymptotic approach conditions are analyzed in detail. Each condition is treated as a statistical constraint on the system--a restriction on the probability density of states that might be occupied when measurements take place. For stationarity, this reasoning leads to a singular, invariant probability density which is already familiar from dynamical systems theory. For asymptotic approach, it leads to a new, more regular probability density field. A conjecture regarding what appears to be a limit relationship between the two densities is presented.By making use of the new probability densities, one can derive output statistics directly, avoiding the need to create or manipulate initial data, and thereby avoiding the conceptual incompatibility mentioned above. This approach also provides a clean way to derive reduced-order models, complete with local and global error estimates, as well as a way to compare existing reduced-order models objectively.The new approach is explored in the context of five separate test problems: a trivial one-dimensional linear system, a damped unforced linear oscillator in two dimensions, the isothermal Rayleigh-Plesset equation, Lorenz's equations, and the Stokes limit of Burgers' equation in one space dimension. In each case, various output statistics are deduced without recourse to initial conditions. Further, reduced-order models are constructed for asymptotic approach of the damped unforced linear oscillator, the isothermal Rayleigh-Plesset system, and Lorenz's equations, and for stationarity of Lorenz's equations.
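For an autonomous system dx/dt = f(x), the invariant density alluded to is the stationary solution of the Liouville (continuity) equation for the probability density ρ(x); this is the standard dynamical-systems statement, though the dissertation's precise formulation may differ:

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho f) = 0
\quad\Longrightarrow\quad
\nabla \cdot (\rho f) = 0 \ \ \text{(stationarity)} .
```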
Predicting future protection of respirator users: Statistical approaches and practical implications.
Hu, Chengcheng; Harber, Philip; Su, Jing
2016-01-01
The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test to provide reasonable likelihood that a worker will be adequately protected in the future; and to optimizing the repeat fit factor test intervals individually for each user for cost-effective testing.
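The prediction step rests on the standard conditional-normal identity; if initial and future (log) fit factors are jointly Gaussian, as a linear mixed effect model implies, then (in our notation, not the paper's):

```latex
X_{\mathrm{future}} \mid X_{\mathrm{init}} = x \;\sim\;
\mathcal{N}\!\left(
\mu_f + \Sigma_{fi}\Sigma_{ii}^{-1}(x - \mu_i),\;
\Sigma_{ff} - \Sigma_{fi}\Sigma_{ii}^{-1}\Sigma_{if}
\right),
```

so a pass/fail criterion on the initial test can be set so that the conditional probability of adequate future protection is acceptably high.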
Multivariate analysis of gamma spectra to characterize used nuclear fuel
Coble, Jamie; Orton, Christopher; Schwantes, Jon
2017-01-17
The Multi-Isotope Process (MIP) Monitor provides an efficient means to monitor the process conditions in used nuclear fuel reprocessing facilities to support process verification and validation. The MIP Monitor applies multivariate analysis to gamma spectroscopy of key stages in the reprocessing stream in order to detect small changes in the gamma spectrum, which may indicate changes in process conditions. This research extends the MIP Monitor by characterizing a used fuel sample after initial dissolution according to the type of reactor of origin (pressurized or boiling water reactor; PWR and BWR, respectively), initial enrichment, burn up, and cooling time. Simulated gamma spectra were used in this paper to develop and test three fuel characterization algorithms. The classification and estimation models employed are based on the partial least squares regression (PLS) algorithm. A PLS discriminant analysis model was developed which perfectly classified reactor type for the three PWR and three BWR reactor designs studied. Locally weighted PLS models were fitted on-the-fly to estimate the remaining fuel characteristics. For the simulated gamma spectra considered, burn up was predicted with 0.1% root mean squared percent error (RMSPE) and both cooling time and initial enrichment with approximately 2% RMSPE. Finally, this approach to automated fuel characterization can be used to independently verify operator declarations of used fuel characteristics and to inform the MIP Monitor anomaly detection routines at later stages of the fuel reprocessing stream to improve sensitivity to changes in operational parameters that may indicate issues with operational control or malicious activities.
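A minimal sketch of the regression step (synthetic spectra and a synthetic target standing in for the simulated gamma spectra and burnup; scikit-learn's PLSRegression as the solver):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
# Synthetic stand-ins: 200 "spectra" of 1024 channels; the target
# (e.g., burnup) depends on a band of channels plus noise.
X = rng.normal(size=(200, 1024))
w = np.zeros(1024)
w[50:60] = 1.0
y = X @ w + rng.normal(scale=0.1, size=200)

pls = PLSRegression(n_components=5).fit(X[:150], y[:150])
y_hat = pls.predict(X[150:]).ravel()
rmse = np.sqrt(np.mean((y_hat - y[150:]) ** 2))
print(f"held-out RMSE: {rmse:.3f}")   # small, since the signal is linear
```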
NASA Astrophysics Data System (ADS)
Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.
2017-08-01
The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step-ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one-step-ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a non-ensemble covariance compression scheme that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step-ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one-step-ahead smoothing and non-ensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
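To make the one-step-ahead smoothing idea concrete, here is a minimal linear-Gaussian sketch, not the sCSKF itself (which adds covariance compression and a nonlinear forward model): the observation from step k corrects the state at step k-1, and the corrected state is then re-propagated through the model so the result stays dynamically consistent.

```python
# Minimal sketch of one-step-ahead (OSA) smoothing for a linear-Gaussian
# model; all matrices here are illustrative placeholders.
import numpy as np

def osa_step(m, P, y, A, H, Q, R):
    """Correct the state at step k-1 with the observation from step k,
    then re-propagate so the updated state stays model-consistent."""
    m_pred = A @ m                      # forecast mean at step k
    P_pred = A @ P @ A.T + Q            # forecast covariance at step k
    S = H @ P_pred @ H.T + R            # innovation covariance
    C = P @ A.T @ H.T                   # cross-cov of x_{k-1} and y_k
    G = C @ np.linalg.inv(S)            # smoothing gain
    m_s = m + G @ (y - H @ m_pred)      # smoothed state at k-1
    P_s = P - G @ S @ G.T               # smoothed covariance at k-1
    return A @ m_s, A @ P_s @ A.T + Q   # re-propagated estimate at k
```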
Baum, Rex L.; Godt, Jonathan W.; Savage, William Z.
2010-01-01
Shallow rainfall-induced landslides commonly occur under conditions of transient infiltration into initially unsaturated soils. In an effort to predict the timing and location of such landslides, we developed a model of the infiltration process using a two-layer system that consists of an unsaturated zone above a saturated zone and implemented this model in a geographic information system (GIS) framework. The model links analytical solutions for transient, unsaturated, vertical infiltration above the water table to pressure-diffusion solutions for pressure changes below the water table. The solutions are coupled through a transient water table that rises as water accumulates at the base of the unsaturated zone. This scheme, though limited to simplified soil-water characteristics and moist initial conditions, greatly improves computational efficiency over numerical models in spatially distributed modeling applications. Pore pressures computed by these coupled models are subsequently used in one-dimensional slope-stability computations to estimate the timing and locations of slope failures. Applied over a digital landscape near Seattle, Washington, for an hourly rainfall history known to trigger shallow landslides, the model computes a factor of safety for each grid cell at any time during a rainstorm. The unsaturated layer attenuates and delays the rainfall-induced pore-pressure response of the model at depth, consistent with observations at an instrumented hillside near Edmonds, Washington. This attenuation results in realistic estimates of timing for the onset of slope instability (7 h earlier than observed landslides, on average). By considering the spatial distribution of physical properties, the model predicts the primary source areas of landslides.
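The one-dimensional slope-stability computation referred to above is commonly an infinite-slope factor of safety with a transient pressure-head term, as in TRIGRS-type models; the sketch below uses that standard form with assumed soil parameters (cohesion, friction angle, unit weights), not values from the Seattle application.

```python
# Infinite-slope factor of safety with transient pressure head, of the
# form used in TRIGRS-type models (a sketch; parameter values assumed).
import math

def factor_of_safety(psi, z, slope_deg, c=4e3, phi_deg=34.0,
                     gamma_s=2.0e4, gamma_w=9.81e3):
    """psi: pressure head (m) at depth z (m); c: cohesion (Pa);
    phi: friction angle; gamma_s, gamma_w: unit weights (N/m^3)."""
    a = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    frictional = math.tan(phi) / math.tan(a)
    cohesive = (c - psi * gamma_w * math.tan(phi)) / (
        gamma_s * z * math.sin(a) * math.cos(a))
    return frictional + cohesive

# Rising pore pressure lowers FS; failure is flagged where FS < 1
print(factor_of_safety(psi=0.0, z=2.0, slope_deg=35))
print(factor_of_safety(psi=1.5, z=2.0, slope_deg=35))
```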
Too Much of a Good Thing? Exploring the Impact of Wealth on Weight.
Au, Nicole; Johnston, David W
2015-11-01
Obesity, like many health conditions, is more prevalent among the socioeconomically disadvantaged. In our data, very poor women are three times more likely to be obese and five times more likely to be severely obese than rich women. Despite this strong correlation, it remains unclear whether higher wealth causes lower obesity. In this paper, we use nationally representative panel data and exogenous wealth shocks (primarily inheritances and lottery wins) to shed light on this issue. Our estimates show that wealth improvements increase weight for women, but not men. This effect differs by initial wealth and weight: an average-sized wealth shock received by initially poor and obese women is estimated to increase weight by almost 10 lb. Importantly, for some females, the effects appear permanent. We also find that a change in diet is the most likely explanation for the weight gain. Overall, the results suggest that additional wealth may exacerbate rather than alleviate weight problems.
Sewage outfall plume dispersion observations with an autonomous underwater vehicle.
Ramos, P; Cunha, S R; Neves, M V; Pereira, F L; Quintaneiro, I
2005-01-01
This work represents one of the first successful applications of Autonomous Underwater Vehicles (AUVs) for interdisciplinary coastal research. A monitoring mission to study the shape and estimate the initial dilution of the S. Jacinto sewage outfall plume using an AUV was performed in July 2002. An efficient sampling strategy, which greatly improved the spatial and temporal range of detection, demonstrated that the sewage effluent plume can be clearly traced using naturally occurring tracers in the wastewater. The outfall plume was found at the surface, strongly influenced by the weak stratification and low currents. Dilution was estimated as a function of distance downstream, from the plume rise over the outfall diffuser to a nearly constant value of 130:1 at 60 m from the diffuser, marking the end of the near field. Our results demonstrate that AUVs can provide high-quality measurements of the physical properties of effluent plumes in a very effective manner, and that the initial mixing processes under real oceanic conditions can be further investigated with such data.
Lifetime Estimation of the Upper Stage of GSAT-14 in Geostationary Transfer Orbit.
Jeyakodi David, Jim Fletcher; Sharma, Ram Krishan
2014-01-01
The combination of atmospheric drag and lunar and solar perturbations in addition to Earth's oblateness influences the orbital lifetime of an upper stage in geostationary transfer orbit (GTO). These highly eccentric orbits undergo fluctuations in both perturbations and velocity and are very sensitive to the initial conditions. The main objective of this paper is to predict the reentry time of the upper stage of the Indian geosynchronous satellite launch vehicle, GSLV-D5, which inserted the satellite GSAT-14 into a GTO on January 05, 2014, with mean perigee and apogee altitudes of 170 km and 35975 km. Four observed intervals of near-linear variation of the mean apogee altitude were used in predicting the orbital lifetime. For these four intervals, optimal values of the initial osculating eccentricity and ballistic coefficient for matching the mean apogee altitudes were estimated with the response surface methodology using a genetic algorithm. It was found that the orbital lifetime from these four time spans was between 144 and 148 days.
Lunar PMAD technology assessment
NASA Technical Reports Server (NTRS)
Metcalf, Kenneth J.
1992-01-01
This report documents an initial set of power conditioning models created to generate 'ballpark' power management and distribution (PMAD) component mass and size estimates. It contains converter, rectifier, inverter, transformer, remote bus isolator (RBI), and remote power controller (RPC) models. These models allow certain studies to be performed; however, additional models are required to assess a full range of PMAD alternatives. The intent is to eventually form a library of PMAD models that will allow system designers to evaluate various power system architectures and distribution techniques quickly and consistently. The models in this report are designed primarily for space exploration initiative (SEI) missions requiring continuous power and supporting manned operations. The mass estimates were developed by identifying the stages in a component and obtaining mass breakdowns for these stages from near term electronic hardware elements. Technology advances were then incorporated to generate hardware masses consistent with the 2000 to 2010 time period. The mass of a complete component is computed by algorithms that calculate the masses of the component stages, control and monitoring, enclosure, and thermal management subsystem.
Batstone, D J; Torrijos, M; Ruiz, C; Schmidt, J E
2004-01-01
The model structure in anaerobic digestion has been clarified following publication of the IWA Anaerobic Digestion Model No. 1 (ADM1). However, parameter values are not well known, and the uncertainty and variability in published parameter values are largely uncharacterized. Additionally, the platforms used for parameter identification, namely continuous-flow laboratory digesters and batch tests, suffer from disadvantages such as long run times and difficulty in defining initial conditions, respectively. Anaerobic sequencing batch reactors (ASBRs) cycle through fill-react-settle-decant phases and offer promising possibilities for parameter estimation, as they are dynamic in behaviour by nature, and their repeatable behaviour can be used to establish initial conditions and evaluate parameters. In this study, we estimated parameters describing winery wastewater (most COD as ethanol) degradation using data from sequencing operation, and validated these parameters using unsequenced pulses of ethanol and acetate. The model used was the ADM1, with an extension for ethanol degradation. Parameter confidence regions were found by non-linear, correlated analysis of the two main Monod parameters: the maximum uptake rate (km) and the half-saturation concentration (KS). These parameters could be estimated together using only the measured acetate concentration (20 points per cycle). From interpolating the single-cycle acetate data to multiple cycles, we estimate that a practical "optimal" identifiability could be achieved after two cycles for the acetate parameters and three cycles for the ethanol parameters. The parameters found performed well in the short term and represented the pulses of acetate and ethanol (within 4 days of the winery-fed cycles) very well. The main discrepancy was poor prediction of pH dynamics, which could be due to an unidentified buffer with an overall influence the same as a weak base (possibly CaCO3). Based on this work, ASBR systems are effective for parameter estimation, especially for comparative wastewater characterisation. The main disadvantages are heavy computational requirements for multiple cycles, and difficulty in establishing the correct biomass concentration in the reactor, though the latter is also a disadvantage for continuous fixed-film reactors and, especially, batch tests.
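For readers unfamiliar with the two Monod parameters being estimated, the sketch below shows how km and KS enter the substrate uptake rate in a minimal batch-phase ODE; the numerical values are assumed placeholders, not ADM1 defaults or the study's estimates.

```python
# Monod uptake kinetics as used for the k_m / K_S estimation in ADM1-type
# models (a sketch with assumed values, not the full ADM1).
from scipy.integrate import solve_ivp

k_m, K_S, X = 8.0, 0.15, 1.2   # max uptake rate, half-saturation, biomass (assumed)

def substrate_ode(t, S):
    rho = k_m * S / (K_S + S) * X   # Monod uptake rate
    return -rho

sol = solve_ivp(substrate_ode, (0.0, 2.0), [2.5])
print(sol.y[0][-1])   # residual substrate after the batch phase
```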
Cost effectiveness of the Oregon quitline "free patch initiative".
Fellows, Jeffrey L; Bush, Terry; McAfee, Tim; Dickerson, John
2007-12-01
We estimated the cost effectiveness of the Oregon tobacco quitline's "free patch initiative" compared to the pre-initiative programme. Using quitline utilisation and cost data from the state, intervention providers and patients, we estimated annual programme use and costs for media promotions and intervention services. We also estimated annual quitline registration calls and the number of quitters and life years saved for the pre-initiative and free patch initiative programmes. Service utilisation and 30-day abstinence at six months were obtained from 959 quitline callers. We compared the cost effectiveness of the free patch initiative (media and intervention costs) to the pre-initiative service offered to insured and uninsured callers. We conducted sensitivity analyses on key programme costs and outcomes by estimating a best-case and worst-case scenario for each intervention strategy. Compared to the pre-initiative programme, the free patch initiative doubled registered calls, increased quitting fourfold and reduced total costs per quit by $2688. We estimated that annual paid media costs were $215 per registered tobacco user for the pre-initiative programme and less than $4 per caller during the free patch initiative. Compared to the pre-initiative programme, incremental quitline promotion and intervention costs for the free patch initiative were $86 (range $22-$353) per life year saved. Compared to the pre-initiative programme, the free patch initiative was a highly cost-effective strategy for increasing quitting in the population.
Developing Novel Frameworks for Many-Body Ensembles
2016-03-17
[Garbled extraction from a report form; recoverable fragments only: Figure 2 illustrates the dendrogram representation; starting from random initial conditions, an ensemble of particle pairs was simulated to establish the long-time ...]
NASA Technical Reports Server (NTRS)
Jacobson, Nathan S.; Opila, Elizabeth J.; Tallant, David
2005-01-01
Initial estimates on the temperature and conditions of the breach in the Space Shuttle Columbia's wing focused on analyses of the slag deposits. These deposits are complex mixtures of the reinforced carbon/carbon (RCC) constituents, insulation material, and wing structural materials. Identification of melted/solidified Cerachrome insulation (Thermal Ceramics, Inc., Augusta, GA) indicated that the temperatures at the breach had exceeded 1760 C.
Characterization of Ice Roughness From Simulated Icing Encounters
NASA Technical Reports Server (NTRS)
Anderson, David N.; Shin, Jaiwon
1997-01-01
Detailed measurements of the size of roughness elements on ice accreted on models in the NASA Lewis Icing Research Tunnel (IRT) were made in a previous study. Only limited data from that study have been published, but included were the roughness element height, diameter and spacing. In the present study, the height and spacing data were found to correlate with the element diameter, and the diameter was found to be a function primarily of the non-dimensional parameters freezing fraction and accumulation parameter. The width of the smooth zone which forms at the leading edge of the model was found to decrease with increasing accumulation parameter. Although preliminary, the success of these correlations suggests that it may be possible to develop simple relationships between ice roughness and icing conditions for use in ice-accretion-prediction codes. These codes now require an ice-roughness estimate to determine convective heat transfer. Studies using a 7.6-cm-diameter cylinder and a 53.3-cm-chord NACA 0012 airfoil were also performed in which a 1/2-min icing spray at an initial set of conditions was followed by a 9-1/2-min spray at a second set of conditions. The resulting ice shape was compared with that from a full 10-min spray at the second set of conditions. The initial ice accumulation appeared to have no effect on the final ice shape. From this result, it would appear the accreting ice is affected very little by the initial roughness or shape features.
High temperature measurement of water vapor absorption
NASA Technical Reports Server (NTRS)
Keefer, Dennis; Lewis, J. W. L.; Eskridge, Richard
1985-01-01
An investigation was undertaken to measure the absorption coefficient, at a wavelength of 10.6 microns, for mixtures of water vapor and a diluent gas at high temperature and pressure. The experimental concept was to create the desired conditions of temperature and pressure in a laser absorption wave, similar to that which would be created in a laser propulsion system. A simplified numerical model was developed to predict the characteristics of the absorption wave and to estimate the laser intensity threshold for initiation. A non-intrusive method for temperature measurement utilizing optical laser-beam deflection (OLD) and optical spark breakdown produced by an excimer laser, was thoroughly investigated and found suitable for the non-equilibrium conditions expected in the wave. Experiments were performed to verify the temperature measurement technique, to screen possible materials for surface initiation of the laser absorption wave and to attempt to initiate an absorption wave using the 1.5 kW carbon dioxide laser. The OLD technique was proven for air and for argon, but spark breakdown could not be produced in helium. It was not possible to initiate a laser absorption wave in mixtures of water and helium or water and argon using the 1.5 kW laser, a result which was consistent with the model prediction.
Rupp, Kalman
2012-01-01
Various factors outside the control of decision makers may affect the rate at which disability applications are allowed or denied during the initial step of eligibility determination in the Social Security Disability Insurance (DI) and Supplemental Security Income (SSI) programs. In this article, using individual-level data on applications, I estimate the role of three important factors--the demographic characteristics of applicants, the diagnostic mix of applicants, and the local unemployment rate--in affecting the probability of an initial allowance and state allowance rates. I use a random sample of initial determinations from 1993 through 2008 and a fixed-effects multiple regression framework. The empirical results show that the demographic and diagnostic characteristics of applicants and the local unemployment rate substantially affect the initial allowance rate. An increase in the local unemployment rate tends to be associated with a decrease in the initial allowance rate. This negative relationship holds for adult DI and SSI applicants and for SSI childhood applicants.
Influence of Composition and Deformation Conditions on the Strength and Brittleness of Shale Rock
NASA Astrophysics Data System (ADS)
Rybacki, E.; Reinicke, A.; Meier, T.; Makasi, M.; Dresen, G. H.
2015-12-01
Stimulation of shale gas reservoirs by hydraulic fracturing operations aims to increase the production rate by increasing the rock surface connected to the borehole. Prospective shales are often expected to display high strength and brittleness, which decrease the breakdown pressure required to (re-)initiate a fracture, as well as slow healing of natural and hydraulically induced fractures, which increases the lifetime of the fracture network. Laboratory deformation tests were performed on several, mainly European, black shales with different mineralogical composition, porosity and maturity at ambient and elevated pressures and temperatures. Mechanical properties such as compressive strength and elastic moduli strongly depend on shale composition, porosity, water content, structural anisotropy, and on pressure (P) and temperature (T) conditions, but less on strain rate. We observed a transition from brittle to semibrittle deformation at high P-T conditions, in particular for high-porosity shales. At given P-T conditions, the variation of compressive strength and Young's modulus with composition can be roughly estimated from the volumetric proportions of all components, including organic matter and pores. We also determined brittleness index values based on pre-failure deformation behavior, Young's modulus and bulk composition. At low P-T conditions, where samples showed pronounced post-failure weakening, brittleness may be empirically estimated from bulk composition or Young's modulus. Similar to strength, at given P-T conditions, brittleness depends on the fractions of all components and not on the amount of a specific component, e.g. clays, alone. Besides strength and brittleness, knowledge of the long-term creep properties of shales is required to estimate in-situ stress anisotropy and the healing of (propped) hydraulic fractures.
Knights, Jonathan; Rohatagi, Shashank
2015-12-01
Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient-reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored, and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV% ranging from ~20 to 60%, parameter estimation inaccuracy derived from error in reported dosing times was largely contained around 10% on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
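A minimal sketch of the simulation idea, under stated assumptions: superpose monoexponential dose contributions in a one-compartment model, then perturb the reported dose times. The clearance, volume, dose, and error SD below are placeholders chosen so that the dosing interval is shorter than the terminal half-life, as in the study's scenario.

```python
# Sketch: superposition of monoexponential doses with perturbed dose times,
# mimicking error in patient-reported dosing records (values assumed).
import numpy as np

rng = np.random.default_rng(1)
CL, V, dose = 2.0, 100.0, 100.0          # clearance (L/h), volume (L), mg
k = CL / V                               # elimination rate; t1/2 ~ 35 h > 12 h
true_times = np.arange(0.0, 96.0, 12.0)  # q12h dosing
reported = true_times + rng.normal(0, 1.0, true_times.size)  # 1 h SD error

def conc(t, dose_times):
    """One-compartment concentration at time t by superposition."""
    elapsed = t - dose_times[dose_times <= t]
    return (dose / V) * np.exp(-k * elapsed).sum()

t_obs = 95.0
print(conc(t_obs, true_times), conc(t_obs, reported))
```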
Parameter estimation in plasmonic QED
NASA Astrophysics Data System (ADS)
Jahromi, H. Rangani
2018-03-01
We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via the localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter that we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, measuring the fraction of emitted energy captured by waveguide surface plasmons. The best strategy to obtain the most accurate estimation of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation, it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product one. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease of the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), which measures the precision of the estimation, so that its vanishing is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit estimation has also been analysed in detail. In particular, we show that using a two-qubit probe, at any arbitrary time, considerably enhances the precision of estimation in comparison with one-qubit estimation.
NASA Astrophysics Data System (ADS)
Redemann, J.; Livingston, J. M.; Shinozuka, Y.; Kacenelenbogen, M. S.; Russell, P. B.; LeBlanc, S. E.; Vaughan, M.; Ferrare, R. A.; Hostetler, C. A.; Rogers, R. R.; Burton, S. P.; Torres, O.; Remer, L. A.; Stier, P.; Schutgens, N.
2014-12-01
We describe a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) retrievals for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the recently released MODIS Collection 6 data for aerosol optical depths derived with the dark target and deep blue algorithms has extended the coverage of the multi-sensor estimates towards higher latitudes. Initial calculations of seasonal clear-sky aerosol radiative forcing based on our multi-sensor aerosol retrievals compare well with over-ocean and top of the atmosphere IPCC-2007 model-based results, and with more recent assessments in the "Climate Change Science Program Report: Atmospheric Aerosol Properties and Climate Impacts" (2009). For the first time, we present comparisons of our multi-sensor aerosol direct radiative forcing estimates to values derived from a subset of models that participated in the latest AeroCom initiative. We discuss the major challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed.
NASA Astrophysics Data System (ADS)
Lim, Sungsoo; Lee, Seohyung; Kim, Jun-geon; Lee, Daeho
2018-01-01
The around-view monitoring (AVM) system is one of the major applications of advanced driver assistance systems and intelligent transportation systems. We propose an on-line calibration method that can compensate for misalignments in AVM systems. Most AVM systems use fisheye undistortion, inverse perspective transformation, and geometrical registration methods. To perform these procedures, the parameters for each process must be known; the procedure by which the parameters are estimated is referred to as the initial calibration. However, using only the initial calibration data, we cannot compensate for misalignments caused by changes in a car's equilibrium. Even small changes such as tire pressure levels, passenger weight, or road conditions can affect a car's equilibrium. Therefore, to compensate for this misalignment, an additional technique is necessary, specifically an on-line calibration method. On-line calibration can recalculate homographies, which can correct any degree of misalignment, using the unique features of ordinary parking lanes. To extract features from the parking lanes, this method uses corner detection and a pattern matching algorithm. From the extracted features, homographies are estimated using random sample consensus and parameter estimation. Finally, the misaligned epipolar geometries are compensated via the estimated homographies. Thus, the proposed method can render image planes parallel to the ground. This method does not require any designated patterns and can be used whenever cars are placed in a parking lot. The experimental results show the robustness and efficiency of the method.
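The homography re-estimation step could look like the following OpenCV sketch; the matched lane-feature points are placeholders, and the paper's corner detection and pattern matching are assumed to have produced them already.

```python
# Sketch of the homography re-estimation step using RANSAC (OpenCV);
# the lane-feature correspondences below are placeholder values.
import numpy as np
import cv2

camera_image = np.zeros((480, 640, 3), np.uint8)   # placeholder frame

# Matched points: detected lane features (image) vs. their expected
# ground-plane positions.
src_pts = np.float32([[102, 340], [518, 338], [130, 460], [490, 462]])
dst_pts = np.float32([[100, 300], [520, 300], [100, 480], [520, 480]])

H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
birdseye = cv2.warpPerspective(camera_image, H, (640, 480))  # top-down view
```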
How Do Vision and Hearing Impact Pedestrian Time-to-Arrival Judgments?
Roper, JulieAnne M.; Hassan, Shirin E.
2014-01-01
Purpose: To determine how accurate normally-sighted male and female pedestrians were at making time-to-arrival (TTA) judgments of approaching vehicles when using just their hearing or both their hearing and vision. Methods: Ten male and 14 female subjects with confirmed normal vision and hearing estimated the TTA of approaching vehicles along an unsignalized street under two sensory conditions: (i) using both habitual vision and hearing; and (ii) using habitual hearing only. All subjects estimated how long the approaching vehicle would take to reach them (i.e., the TTA). The actual TTA of vehicles was also measured using custom-made sensors. The error in TTA judgments for each subject under each sensory condition was calculated as the difference between the actual and estimated TTA. A secondary timing experiment was also conducted to adjust each subject's TTA judgments for their "internal metronome". Results: Error in TTA judgments changed significantly as a function of both the actual TTA (p<0.0001) and sensory condition (p<0.0001). While no main effect for gender was found (p=0.19), the way the TTA judgments varied within each sensory condition for each gender was different (p<0.0001). Females tended to be equally accurate under either condition (p≥0.01), with the exception of TTA judgments made when the actual TTA was two seconds or less or eight seconds or longer, for which the vision-and-hearing condition was more accurate (p≤0.002). Males made more accurate TTA judgments under the hearing-only condition for actual TTA values of five seconds or less (p<0.0001), after which there were no significant differences between the two conditions (p≥0.01). Conclusions: Our data suggest that males and females use visual and auditory information differently when making TTA judgments. While the sensory condition did not affect the females' accuracy in judgments, males initially tended to be more accurate when using their hearing only.
Crack initiation modeling of a directionally-solidified nickel-base superalloy
NASA Astrophysics Data System (ADS)
Gordon, Ali Page
Combustion gas turbine components designed for application in electric power generation equipment are subject to periodic replacement as a result of cracking, damage, and mechanical property degeneration that render them unsafe for continued operation. In view of the significant costs associated with inspecting, servicing, and replacing damaged components, there has been much interest in developing models that not only predict service life, but also estimate the evolved microstructural state of the material. This thesis explains manifestations of the microstructural damage mechanisms that facilitate fatigue crack nucleation in newly-developed directionally-solidified (DS) Ni-base superalloy components exposed to elevated temperatures and high stresses. In this study, models were developed and validated for damage and life prediction using DS GTD-111 as the subject material. This material, proprietary to General Electric Energy, has a chemical composition and grain structure designed to withstand the creep damage occurring in the first- and second-stage blades of gas-powered turbines. The service conditions in these components, which generally exceed 600°C, facilitate the onset of one or more damage mechanisms related to fatigue, creep, or environment. The study was divided into an empirical phase, which consisted of experimentally simulating service conditions in fatigue specimens, and a modeling phase, which entailed numerically simulating the stress-strain response of the material. Experiments were carried out to simulate a variety of thermal, mechanical, and environmental operating conditions endured by longitudinally (L) and transversely (T) oriented DS GTD-111. Both in-phase and out-of-phase thermo-mechanical fatigue tests were conducted. In some cases, tests in extreme environments/temperatures were needed to isolate one or at most two of the mechanisms causing damage. Microstructural examinations were carried out via SEM and optical microscopy. A continuum crystal plasticity model was used to simulate the material behavior in the L and T orientations. The constitutive model was implemented in ABAQUS and a parameter estimation scheme was developed to obtain the material constants. A physically-based model was developed for correlating crack initiation life based on the experimental life data, and predictions are made using the crack initiation model. Assuming a unique relationship between the damage fraction and cycle fraction with respect to cycles to crack initiation for each damage mode, the total crack initiation life has been represented in terms of the individual damage components (fatigue, creep-fatigue, creep, and oxidation-fatigue) observed at the end state of crack initiation.
Measuring and modeling maize evapotranspiration under plastic film-mulching condition
NASA Astrophysics Data System (ADS)
Li, Sien; Kang, Shaozhong; Zhang, Lu; Ortega-Farias, Samuel; Li, Fusheng; Du, Taisheng; Tong, Ling; Wang, Sufen; Ingman, Mark; Guo, Weihua
2013-10-01
Plastic film-mulching techniques have been widely used over a variety of agricultural crops for saving water and improving yield. Accurate estimation of crop evapotranspiration (ET) under the film-mulching condition is critical for optimizing crop water management. After taking the mulching effect on soil evaporation (Es) into account, our study modified the original Shuttleworth-Wallace model (yielding the MSW model) for estimating maize ET and Es under the film-mulching condition. Maize ET and Es, measured by the eddy covariance and micro-lysimeter methods, respectively, during 2007 and 2008, were used to validate the performance of the Penman-Monteith (PM), the original Shuttleworth-Wallace (SW) and the MSW models in arid northwest China. Results indicate that all three models significantly overestimated ET during the initial crop stage in both years, which may be due to the underestimation of canopy resistance by the Jarvis model under the drought stress of that stage. For the entire experimental period, the SW model overestimated half-hourly maize ET by 17% compared with the eddy covariance method (ETEC) and overestimated daily Es by 241% compared with the micro-lysimeter measurements (EL), while the PM model only underestimated daily maize ET by 6%, and the MSW model only underestimated half-hourly maize ET by 2% and Es by 7% during the whole period. Thus the PM and MSW models significantly improved the accuracy over the original SW model and can be used to estimate ET and Es under the film-mulching condition.
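As background on the PM approach, the widely used FAO-56 form of the Penman-Monteith equation is sketched below; the study's own parameterization (e.g., Jarvis-type canopy resistance) is more involved, and the input values here are illustrative only.

```python
# FAO-56 form of the Penman-Monteith equation, a minimal sketch of the
# PM approach (not the study's specific parameterization).
def penman_monteith_fao56(delta, Rn, G, gamma, T, u2, es, ea):
    """Reference evapotranspiration ET0 (mm/day).
    delta: slope of the vapour-pressure curve (kPa/degC); Rn, G: net
    radiation and soil heat flux (MJ/m2/day); gamma: psychrometric
    constant (kPa/degC); T: mean air temperature (degC); u2: wind speed
    at 2 m (m/s); es, ea: saturation and actual vapour pressure (kPa)."""
    num = 0.408 * delta * (Rn - G) + gamma * (900.0 / (T + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# Illustrative mid-season values, not measurements from the study
print(penman_monteith_fao56(0.14, 15.0, 1.0, 0.066, 20.0, 2.0, 2.34, 1.50))
```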
Two approaches to the rapid screening of crystallization conditions
NASA Technical Reports Server (NTRS)
Mcpherson, Alexander
1992-01-01
A screening procedure is described for estimating conditions under which crystallization will proceed, thus providing a starting point for more careful experiments. The initial procedure uses the experimental setup of McPherson (1982) which supports 24 individual hanging drop experiments for screening variables such as the precipitant type, the pH, the temperature, and the effects of certain additives and which uses about 1 mg of protein. A second approach is proposed (which is rather hypothetical at this stage and needs a larger sample), based on the isoelectric focusing of protein samples on concentration gradients of common precipitating agents. Using this approach, crystals of concanavalin B and canavalin were obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
2011-09-01
This report covers an assessment of 182 different heating, ventilation, and air-conditioning (HVAC) technologies for U.S. commercial buildings to identify and provide analysis on 17 priority technology options in various stages of development. The analyses include an estimation of technical energy-savings potential, a description of technical maturity, a description of non-energy benefits, a description of current barriers to market adoption, and a description of each technology's applicability to different building or HVAC equipment types. From these technology descriptions, suggestions are made for potential research, development and demonstration (RD&D) initiatives that would support further development of the priority technology options.
Microfluidics for simultaneous quantification of platelet adhesion and blood viscosity
Yeom, Eunseop; Park, Jun Hong; Kang, Yang Jun; Lee, Sang Joon
2016-01-01
Platelet functions, including adhesion, activation, and aggregation, have an influence on thrombosis and the progression of atherosclerosis. In the present study, a new microfluidic-based method is proposed to estimate platelet adhesion and blood viscosity simultaneously. The blood sample flows into an H-shaped microfluidic device with a peristaltic pump. Since platelet aggregation may be initiated by the compression of rotors inside the peristaltic pump, platelet aggregates may adhere to the H-shaped channel. Through correlation mapping, which visualizes decorrelation of the streaming blood flow, the area of adhered platelets (APlatelet) can be estimated without labeling platelets. The platelet function is estimated by determining the representative index IA·T based on APlatelet and contact time. Blood viscosity is measured by monitoring the flow conditions in one side channel of the H-shaped device. Based on the relation between the interfacial width (W) and the pressure ratio of the sample flow to the reference flow, blood sample viscosity (μ) can be estimated by measuring W. Biophysical parameters (IA·T, μ) are compared for normal and diabetic rats using an ex vivo extracorporeal model. This microfluidic-based method can be used for evaluating variations in the platelet adhesion and blood viscosity of animal models with cardiovascular diseases under ex vivo conditions.
Pattabi, Kamaraj; Vadivoo, Selvaraj; Bhome, Arvind; Brashier, Bill; Bhattacharya, Prashanta; Mehendale, Sanjay M
2017-01-01
Background: Chronic obstructive pulmonary disease (COPD) is a common preventable and treatable chronic respiratory disease, which affects 210 million people globally. Global and national guidelines exist for the management of COPD. Although evidence-based, they are inadequate to address the phenotypic and genotypic heterogeneity in India. Co-existence of other chronic respiratory diseases can adversely influence the prognosis of COPD. India has a huge burden of COPD with various risk factors and comorbid conditions. However, valid prevalence estimates employing spirometry as the diagnostic tool and data on important comorbid conditions are not available. This study protocol is designed to address this knowledge gap and eventually to build a database to undertake long-term cohort studies to describe the phenotypic and genotypic heterogeneity among COPD patients in India. Objectives: The primary objective is to estimate the prevalence of COPD among adults aged ≥25 years for each gender in India. The secondary objective is to identify the risk factors for COPD and important comorbid conditions such as asthma and post-tuberculosis sequelae. It is also proposed to validate the currently available definitions for COPD diagnosis in India. Methods and analysis: A cross-sectional study will be undertaken among the populations of sub-urban areas of Chennai and Shillong cities, which represent the Southern and Northeastern regions of India. We will collect data on sociodemographic variables, economic characteristics, risk factors of COPD and comorbidities. The Global Initiative for Obstructive Lung Disease (GOLD) and Global Initiative for Asthma (GINA) definitions will be used for the diagnosis of COPD and asthma. Data will be analysed for estimation of the prevalence of COPD, asthma and associated factors. Ethics and dissemination: This study proposal was approved by the respective institutional ethics committees of participating institutions. The results will be disseminated through publications in peer-reviewed journals and a report will be submitted to the concerned public health authorities in India for developing appropriate research and management policies.
NASA Astrophysics Data System (ADS)
Matthews, Thomas P.; Anastasio, Mark A.
2017-12-01
The initial pressure and speed of sound (SOS) distributions cannot both be stably recovered from photoacoustic computed tomography (PACT) measurements alone. Adjunct ultrasound computed tomography (USCT) measurements can be employed to estimate the SOS distribution. Under the conventional image reconstruction approach for combined PACT/USCT systems, the SOS is estimated from the USCT measurements alone and the initial pressure is estimated from the PACT measurements by use of the previously estimated SOS. This approach ignores the acoustic information in the PACT measurements and may require many USCT measurements to accurately reconstruct the SOS. In this work, a joint reconstruction method where the SOS and initial pressure distributions are simultaneously estimated from combined PACT/USCT measurements is proposed. This approach allows accurate estimation of both the initial pressure distribution and the SOS distribution while requiring few USCT measurements.
Design Mining Interacting Wind Turbines.
Preen, Richard J; Bull, Larry
2016-01-01
An initial study has recently been presented of surrogate-assisted evolutionary algorithms used to design vertical-axis wind turbines wherein candidate prototypes are evaluated under fan-generated wind conditions after being physically instantiated by a 3D printer. Unlike other approaches, such as computational fluid dynamics simulations, no mathematical formulations were used and no model assumptions were made. This paper extends that work by exploring alternative surrogate modelling and evolutionary techniques. The accuracy of various modelling algorithms used to estimate the fitness of evaluated individuals from the initial experiments is compared. The effect of temporally windowing surrogate model training samples is explored. A surrogate-assisted approach based on an enhanced local search is introduced; and alternative coevolution collaboration schemes are examined.
Process equipped with a sloped UV lamp for the fabrication of gradient-refractive-index lenses.
Liu, Jui-Hsiang; Chiu, Yi-Hong
2009-05-01
In this investigation, a method for the preparation of gradient-refractive-index (GRIN) lenses by UV-energy-controlled polymerization has been developed. A glass reaction tube equipped with a sloped UV lamp was designed. Methyl methacrylate and diphenyl sulfide were used as the reactive monomer and nonreactive dopant, respectively. Ciba IRGACURE 184 (1-hydroxy-cyclohexyl-phenyl-ketone) was used as the initiator. The effects of initiator concentration, the addition of acrylic polymers, and the preparation conditions on the optical characteristics of the GRIN lenses produced by this method were also investigated. Refractive index distributions and image transmission properties were estimated for all GRIN lenses prepared.
Gonçalves, Marcio A D; Tokach, Mike D; Dritz, Steve S; Bello, Nora M; Touchette, Kevin J; Goodband, Robert D; DeRouchey, Joel M; Woodworth, Jason C
2018-03-06
Two experiments were conducted to estimate the standardized ileal digestible valine:lysine (SID Val:Lys) dose-response effects in 25- to 45-kg pigs under commercial conditions. In experiment 1, a total of 1,134 gilts (PIC 337 × 1050), initially 31.2 ± 2.0 kg body weight (BW; mean ± SD), were used in a 19-d growth trial with 27 pigs per pen and seven pens per treatment. In experiment 2, a total of 2,100 gilts (PIC 327 × 1050), initially 25.4 ± 1.9 kg BW, were used in a 22-d growth trial with 25 pigs per pen and 12 pens per treatment. Treatments were blocked by initial BW in a randomized complete block design. In experiment 1, there were a total of six dietary treatments with SID Val at 59.0, 62.5, 65.9, 69.6, 73.0, and 75.5% of Lys, and in experiment 2 there were a total of seven dietary treatments with SID Val at 57.0, 60.6, 63.9, 67.5, 71.1, 74.4, and 78.0% of Lys. Experimental diets were formulated to ensure that Lys was the second limiting amino acid throughout the experiments. Initially, linear mixed models were fitted to data from each experiment. Then, data from the two experiments were combined to estimate dose-responses using a broken-line linear ascending (BLL) model, a broken-line quadratic ascending (BLQ) model, or a quadratic polynomial (QP). Model fit was compared using the Bayesian information criterion (BIC). In experiment 1, ADG increased linearly (P = 0.009) with increasing SID Val:Lys, with no apparent significant impact on G:F. In experiment 2, ADG and ADFI increased in a quadratic manner (P < 0.002) with increasing SID Val:Lys, whereas G:F increased linearly (P < 0.001). Overall, the best-fitting model for ADG was a QP, whereby the maximum mean ADG was estimated at 73.0% (95% CI: [69.5, >78.0%]) SID Val:Lys. For G:F, the overall best-fitting model was a QP with maximum estimated mean G:F at 69.0% (95% CI: [64.0, >78.0]) SID Val:Lys. However, 99% of the maximum mean performance for ADG and G:F was achieved at 68% and 63% SID Val:Lys, respectively. Therefore, the SID Val:Lys requirement ranged from 73.0% for maximum ADG down to 63.2% SID Val:Lys to achieve 99% of maximum G:F in 25- to 45-kg BW pigs.
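A sketch of how the QP and BLL dose-response fits can be reproduced with scipy; the ratio and ADG values below are synthetic placeholders, not the study data.

```python
# Sketch of the quadratic polynomial (QP) and broken-line linear (BLL)
# dose-response fits; data values are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

ratio = np.array([57.0, 60.6, 63.9, 67.5, 71.1, 74.4, 78.0])  # SID Val:Lys, %
adg = np.array([640, 655, 668, 676, 681, 682, 680])           # g/d, synthetic

def qp(x, a, b, c):
    return a + b * x + c * x ** 2

def bll(x, plateau, slope, breakpoint):
    # rises linearly below the breakpoint, flat plateau above it
    return plateau + slope * np.minimum(x - breakpoint, 0.0)

qp_par, _ = curve_fit(qp, ratio, adg)
bll_par, _ = curve_fit(bll, ratio, adg, p0=[680, 3.0, 70.0])

print("QP optimum at %.1f%% Val:Lys" % (-qp_par[1] / (2 * qp_par[2])))
print("BLL breakpoint at %.1f%% Val:Lys" % bll_par[2])
```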
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardwick, Robert J.; Vennin, Vincent; Wands, David
We study the stochastic distribution of spectator fields predicted in different slow-roll inflation backgrounds. Spectator fields have a negligible energy density during inflation but may play an important dynamical role later, even giving rise to primordial density perturbations within our observational horizon today. During de-Sitter expansion there is an equilibrium solution for the spectator field which is often used to estimate the stochastic distribution during slow-roll inflation. However, slow roll only requires that the Hubble rate varies slowly compared to the Hubble time, while the time taken for the stochastic distribution to evolve to the de-Sitter equilibrium solution can be much longer than a Hubble time. We study both chaotic (monomial) and plateau inflaton potentials, with quadratic, quartic and axionic spectator fields. We give an adiabaticity condition for the spectator field distribution to relax to the de-Sitter equilibrium, and find that the de-Sitter approximation is never a reliable estimate for the typical distribution at the end of inflation for a quadratic spectator during monomial inflation. The existence of an adiabatic regime at early times can erase the dependence on initial conditions of the final distribution of field values. In these cases, spectator fields acquire sub-Planckian expectation values. Otherwise spectator fields may acquire much larger field displacements than suggested by the de-Sitter equilibrium solution. We quantify the information about initial conditions that can be obtained from the final field distribution. Our results may have important consequences for the viability of spectator models for the origin of structure, such as the simplest curvaton models.
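The stochastic dynamics in question follow the standard Langevin equation of stochastic inflation; the sketch below evolves a quadratic spectator in a fixed de-Sitter background and compares the late-time variance with the Starobinsky-Yokoyama equilibrium value 3H^4/(8 pi^2 m^2). The H and m values are arbitrary illustrative choices.

```python
# Sketch: Langevin evolution of a quadratic spectator field in de Sitter,
# checked against the equilibrium P_eq ~ exp(-8 pi^2 V / 3 H^4).
# Units: reduced Planck mass = 1; parameter values assumed.
import numpy as np

rng = np.random.default_rng(2)
H, m = 1e-5, 5e-6                    # Hubble rate, spectator mass (assumed)
dN, N_total, n_real = 0.05, 500, 2000

phi = np.zeros(n_real)               # all realizations start at phi = 0
for _ in range(int(N_total / dN)):
    drift = -(m**2 * phi) / (3 * H**2)   # -V'/(3H^2) with V = m^2 phi^2 / 2
    noise = (H / (2 * np.pi)) * np.sqrt(dN) * rng.standard_normal(n_real)
    phi += drift * dN + noise

# Equilibrium prediction: <phi^2> = 3 H^4 / (8 pi^2 m^2)
print(phi.var(), 3 * H**4 / (8 * np.pi**2 * m**2))
```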
NASA Astrophysics Data System (ADS)
Rashid, Ahmar; Khambampati, Anil Kumar; Kim, Bong Seok; Liu, Dong; Kim, Sin; Kim, Kyung Youn
EIT image reconstruction is an ill-posed problem: the spatial resolution of the estimated conductivity distribution is usually poor, and the external voltage measurements are subject to variable noise. Therefore, raw EIT conductivity estimates cannot be used to correctly determine the shape and size of complex-shaped regional anomalies, and an efficient algorithm employing a shape-based estimation scheme is needed. The performance of traditional inverse algorithms, such as the Newton-Raphson method, used for this purpose is below par and depends upon the initial guess and the gradient of the cost functional. This paper presents the application of the differential evolution (DE) algorithm to estimate complex-shaped region boundaries, expressed as coefficients of a truncated Fourier series, using EIT. DE is a simple yet powerful population-based heuristic algorithm with the desired features for solving global optimization problems under realistic conditions. The performance of the algorithm has been tested through numerical simulations, comparing its results with those of the traditional modified Newton-Raphson (mNR) method.
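As an illustration of the approach, scipy's differential evolution can stand in for the paper's DE implementation; since the EIT forward solver is not reproduced here, a toy least-squares objective over truncated Fourier boundary coefficients is used instead.

```python
# Sketch: differential evolution over truncated Fourier coefficients of a
# region boundary; a toy objective replaces the EIT forward model.
import numpy as np
from scipy.optimize import differential_evolution

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
true_coef = np.array([1.0, 0.2, 0.05, -0.1])   # r(theta) Fourier coefficients

def radius(coef, th):
    return (coef[0] + coef[1] * np.cos(th)
            + coef[2] * np.sin(th) + coef[3] * np.cos(2 * th))

measured = radius(true_coef, theta)             # stand-in for EIT voltages

def objective(coef):
    return np.sum((radius(coef, theta) - measured) ** 2)

bounds = [(0.5, 1.5), (-0.5, 0.5), (-0.5, 0.5), (-0.5, 0.5)]
result = differential_evolution(objective, bounds, seed=3)
print(result.x)   # recovered boundary coefficients
```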
Single-shot quantum state estimation via a continuous measurement in the strong backaction regime
NASA Astrophysics Data System (ADS)
Cook, Robert L.; Riofrío, Carlos A.; Deutsch, Ivan H.
2014-09-01
We study quantum tomography based on a stochastic continuous-time measurement record obtained from a probe field collectively interacting with an ensemble of identically prepared systems. In comparison to previous studies, we consider here the case in which the measurement-induced backaction has a non-negligible effect on the dynamical evolution of the ensemble. We formulate a maximum likelihood estimate for the initial quantum state given only a single instance of the continuous diffusive measurement record. We apply our estimator to the simplest problem: state tomography of a single pure qubit, which, during the course of the measurement, is also subjected to dynamical control. We identify a regime where the many-body system is well approximated at all times by a separable pure spin coherent state, whose Bloch vector undergoes a conditional stochastic evolution. We simulate the results of our estimator and show that we can achieve close to the upper bound of fidelity set by the optimal generalized measurement. This estimate is compared to, and significantly outperforms, an equivalent estimator that ignores measurement backaction.
A variational approach to parameter estimation in ordinary differential equations.
Kaschek, Daniel; Timmer, Jens
2012-08-14
Ordinary differential equations are widely used in the fields of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady-state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire time courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, neither of which is constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations that includes the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular, this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.
NASA Technical Reports Server (NTRS)
Hill, Jesse K.; Isensee, Joan E.; Cornett, Robert H.; Bohlin, Ralph C.; O'Connell, Robert W.; Roberts, Morton S.; Smith, Andrew M.; Stecher, Theodore P.
1994-01-01
UV stellar photometry is presented for 1563 stars within a 40 arcmin circular field in the Large Magellanic Cloud (LMC), excluding the 10 arcmin x 10 arcmin field centered on R136 investigated earlier by Hill et al. (1993). Magnitudes are computed from images obtained by the Ultraviolet Imaging Telescope (UIT) in bands centered at 1615 A and 2558 A. Stellar masses and extinctions are estimated for the stars in associations using the evolutionary models of Schaerer et al. (1993), assuming the age is 4 Myr and that the local LMC extinction follows the Fitzpatrick (1985) 30 Dor extinction curve. The estimated slope of the initial mass function (IMF) for massive stars (greater than 15 solar masses) within the Lucke and Hodge (LH) associations is Gamma = -1.08 +/- 0.2. Initial masses and extinctions for stars not within LH associations are estimated assuming that the stellar age is either 4 Myr or half the stellar lifetime, whichever is larger. The estimated slope of the IMF for massive stars not within LH associations is Gamma = -1.74 +/- 0.3 (assuming continuous star formation), compared with Gamma = -1.35 and Gamma = -1.7 +/- 0.5 obtained for the Galaxy by Salpeter (1955) and Scalo (1986), respectively, and Gamma = -1.6 obtained for massive stars in the Galaxy by Garmany, Conti, & Chiosi (1982). The shallower slope of the association IMF suggests that not only is the star formation rate higher in associations, but that the local conditions favor the formation of higher mass stars there. We make no corrections for binaries or incompleteness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prince, K.R.; Schneider, B.J.
This study obtained estimates of the hydraulic properties of the upper glacial and Magothy aquifers in the East Meadow area for use in analyzing the movement of reclaimed waste water through the aquifer system. This report presents drawdown and recovery data from the two aquifer tests of 1978 and 1985, describes the six methods of analysis used, and summarizes the results of the analyses in tables and graphs. The drawdown and recovery data were analyzed through three simple analytical equations, two curve-matching techniques, and a finite-element radial-flow model. The resulting estimates of hydraulic conductivity, anisotropy, and storage characteristics were used as initial input values to the finite-element radial-flow model (Reilly, 1984). The flow model was then used to refine the estimates of the aquifer properties by more accurately representing the aquifer geometry and field conditions of the pumping tests.
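One of the simple analytical equations typically used in such drawdown analyses is the Theis solution, sketched below with assumed test values; the report's specific methods and data are not reproduced.

```python
# Theis drawdown solution, a standard analytical equation for aquifer-test
# analysis (a sketch; input values are assumed, not the report's data).
from math import pi
from scipy.special import exp1

def theis_drawdown(Q, T, S, r, t):
    """Q: pumping rate (m3/d), T: transmissivity (m2/d),
    S: storage coefficient (-), r: distance from well (m), t: time (d)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * pi * T) * exp1(u)   # W(u) = exp1(u)

print(theis_drawdown(Q=1000.0, T=500.0, S=1e-3, r=30.0, t=0.5))
```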
Characterization of classical static noise via qubit as probe
NASA Astrophysics Data System (ADS)
Javed, Muhammad; Khan, Salman; Ullah, Sayed Arif
2018-03-01
The dynamics of the quantum Fisher information (QFI) of a single qubit coupled to classical static noise is investigated. The analytical relation for the QFI fixes the optimal initial state of the qubit that maximizes it. An approximate limit for the coupling time that leads to physically useful results is identified. Moreover, using the approach of quantum estimation theory and the analytical relation for the QFI, the qubit is used as a probe to precisely estimate the disorder parameter of the environment. A relation for the optimal interaction time with the environment is obtained, and a condition for the optimal measurement of the noise parameter of the environment is given. It is shown that all values of the noise parameter in the mentioned range are estimable with equal precision. A comparison of our results with previous studies in different classical environments is made.
Measuring and Specifying Combinatorial Coverage of Test Input Configurations
Kuhn, D. Richard; Kacker, Raghu N.; Lei, Yu
2015-01-01
A key issue in testing is how many tests are needed for a required level of coverage or fault detection. Estimates are often based on error rates in initial testing, or on code coverage. For example, tests may be run until a desired level of statement or branch coverage is achieved. Combinatorial methods present an opportunity for a different approach to estimating required test set size, using characteristics of the test set. This paper describes methods for estimating the coverage of, and ability to detect, t-way interaction faults of a test set based on a covering array. We also develop a connection between (static) combinatorial coverage and (dynamic) code coverage, such that if a specific condition is satisfied, 100% branch coverage is assured. Using these results, we propose practical recommendations for using combinatorial coverage in specifying test requirements.
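A toy version of the (static) combinatorial coverage measurement: the fraction of all parameter-value pairs covered by at least one test. The three-parameter binary test set below is illustrative.

```python
# Sketch: pairwise (2-way) combinatorial coverage of a test set, i.e. the
# fraction of all parameter-value pairs that appear in at least one test.
from itertools import combinations, product

tests = [                 # each row: one test; columns: parameter values
    (0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0),
]
domains = [(0, 1), (0, 1), (0, 1)]

covered, total = 0, 0
for (i, di), (j, dj) in combinations(enumerate(domains), 2):
    seen = {(t[i], t[j]) for t in tests}      # pairs this test set covers
    for pair in product(di, dj):
        total += 1
        covered += pair in seen

print(f"pairwise coverage: {covered}/{total} = {covered / total:.0%}")
```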
Determination of soil degradation from flooding for estimating ecosystem services in Slovakia
NASA Astrophysics Data System (ADS)
Hlavcova, Kamila; Szolgay, Jan; Karabova, Beata; Kohnova, Silvia
2015-04-01
Floods as natural hazards are related to soil health, land use and land management. They not only represent threats on their own, but can also be triggered, controlled and amplified by interactions with other soil threats and soil degradation processes. The direct impacts of flooding on soil health include soil erosion, mudflows and the deposition of sediment and debris, as well as changes in soil texture, structure and chemical properties, and deterioration of soil aggregation and water-holding capacity. Flooding is initiated by a combination of predisposing and triggering factors, and apart from climate drivers it is related to the physiographic conditions of the land, the state of the soil, land use and land management. Due to the diversity and complexity of their potential interactions, diverse methodologies and approaches are needed for describing a particular type of event in a specific environment, especially at ungauged sites. In engineering studies and also in many rainfall-runoff models, the SCS-CN method remains widely applied for soil- and land use-based estimation of direct runoff and flooding potential. The SCS-CN method is an empirical rainfall-runoff model developed by the USDA Natural Resources Conservation Service (formerly the Soil Conservation Service, SCS). The runoff curve number (CN) is based on the hydrological soil characteristics, land use, land management and antecedent saturation conditions of the soil. Since the method and curve numbers were derived from an empirical analysis of rainfall-runoff events in small catchments and hillslope plots monitored by the USDA, use of the method under the conditions of Slovakia raises uncertainty and can yield inaccurate estimates of direct runoff. The objective of the study presented (also within the framework of the EU-FP7 RECARE Project) was to develop the SCS-CN methodology for the flood conditions of Slovakia (and especially for the RECARE pilot site of Myjava), with an emphasis on the determination of soil degradation from flooding for estimating ecosystem services. The parameters of the SCS-CN methodology were regionalised empirically based on actual rainfall and discharge measurements. Since no appropriate methodology has been provided for the regionalisation of SCS-CN method parameters in Slovakia, such as runoff curve numbers and initial abstraction coefficients (λ), the work presented is important for the correct application of the SCS-CN method in our conditions.
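For reference, the runoff relation being regionalised can be written in a few lines; the storm depth, CN value, and λ below are hypothetical, not the Myjava calibration:

```python
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """Direct runoff depth (mm) from event rainfall via the SCS-CN relation.

    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0,
    with S = 25400/CN - 254 (mm) and Ia = lam * S.
    """
    s = 25400.0 / cn - 254.0      # maximum potential retention
    ia = lam * s                   # initial abstraction
    return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm - ia + s)

# 60 mm storm on a surface with CN = 75 and the conventional lambda = 0.2.
print(f"Q = {scs_cn_runoff(60.0, 75):.1f} mm")
```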
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2009-01-01
A method of recovering unknown aberrations in an optical system includes collecting intensity data produced by the optical system, generating an initial estimate of a phase of the optical system, iteratively performing a phase retrieval on the intensity data to generate a phase estimate using an initial diversity function corresponding to the intensity data, generating a phase map from the phase-retrieval phase estimate, decomposing the phase map to generate a decomposition vector, generating an updated diversity function by combining the initial diversity function with the decomposition vector, and generating an updated estimate of the phase of the optical system by removing the initial diversity function from the phase map. The method may further include repeating the process, beginning with iteratively performing a phase retrieval on the intensity data, using the updated estimate of the phase of the optical system in place of the initial estimate, and using the updated diversity function in place of the initial diversity function, until a predetermined convergence is achieved.
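A structural sketch of the claimed loop follows. The retrieval step is replaced by a trivial stand-in (the patent's iterative-transform retrieval is not reproduced), and the modal basis and update gain are assumptions:

```python
import numpy as np

def phase_retrieval(intensity, diversity):
    """Trivial stand-in for the patented iterative-transform retrieval step."""
    return diversity + 0.01 * (intensity - intensity.mean())

def decompose(phase_map, basis):
    """Least-squares decomposition vector of the phase map in a modal basis."""
    a = basis.reshape(basis.shape[0], -1).T        # (pixels, modes)
    coeffs, *_ = np.linalg.lstsq(a, phase_map.ravel(), rcond=None)
    return coeffs

n = 64
yy, xx = (np.mgrid[:n, :n] / n) - 0.5
basis = np.stack([xx, yy, xx * yy, xx**2 - yy**2, xx**2 + yy**2])  # toy modes
intensity = np.random.default_rng(1).random((n, n))   # collected intensity data
diversity = 0.5 * (xx**2 + yy**2)                     # initial (defocus-like) diversity

for _ in range(3):
    phase_map = phase_retrieval(intensity, diversity)  # retrieve with current diversity
    coeffs = decompose(phase_map, basis)               # decomposition vector
    diversity = diversity + 0.1 * np.tensordot(coeffs, basis, axes=1)  # updated diversity
    phase_est = phase_map - diversity                  # phase with diversity removed
```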
Crisis in Mexico: Assessing the Merida Initiative and its Impact on Us-Mexican Security
2009-04-01
s poor economic conditions to extort large amounts of cash from individuals wishing to cross the US-Mexican border in search of a job and a better...people crossed the US-Mexican border illegally in 2001. 31 The US State Department‟s Trafficking in Persons Report classifies Mexico as a Tier 2 in...southern borders. 33 Mexico estimates that approximately 400,000 illegal immigrants cross the Mexican-Guatemalan border each year. President
Kobau, Rosemarie; Cui, Wanjun; Zack, Matthew M
2017-07-01
Healthy People 2020, a national health promotion initiative, calls for increasing the proportion of U.S. adults who self-report good or better health. The Patient-Reported Outcomes Measurement Information System (PROMIS) Global Health Scale (GHS) was identified as a reliable and valid set of items of self-reported physical and mental health to monitor these two domains across the decade. The purpose of this study was to examine the percentage of adults with an epilepsy history who met the Healthy People 2020 target for self-reported good or better health and to compare these percentages to adults with a history of other common chronic conditions. Using the 2010 National Health Interview Survey, we estimated and compared the age-standardized prevalence of reporting good or better physical and mental health among adults with five selected chronic conditions: epilepsy, diabetes, heart disease, cancer, and hypertension. We examined response patterns on the physical and mental health scales among adults with these five conditions. The percentages of adults with epilepsy who reported good or better physical health (52%) or mental health (54%) were significantly below the Healthy People 2020 target estimate of 80% for both outcomes. Significantly smaller percentages of adults with an epilepsy history reported good or better physical health than adults with heart disease, cancer, or hypertension. Significantly smaller percentages of adults with an epilepsy history reported good or better mental health than adults with all four other conditions. Health and social service providers can implement and enhance existing evidence-based clinical interventions and public health programs and strategies shown to improve outcomes in epilepsy. These estimates can be used to assess improvements in the Healthy People 2020 Health-Related Quality of Life and Well-Being Objective throughout the decade.
NASA Astrophysics Data System (ADS)
Clark, Elizabeth; Wood, Andy; Nijssen, Bart; Mendoza, Pablo; Newman, Andy; Nowak, Kenneth; Arnold, Jeffrey
2017-04-01
In an automated forecast system, hydrologic data assimilation (DA) performs the valuable function of correcting raw simulated watershed model states to better represent external observations, including measurements of streamflow, snow, soil moisture, and the like. Yet the incorporation of automated DA into operational forecasting systems has been a long-standing challenge due to the complexities of the hydrologic system, which include numerous lags between state and output variations. To help demonstrate that such methods can succeed in operational automated implementations, we present results from the real-time application of an ensemble particle filter (PF) for short-range (7-day lead) ensemble flow forecasts in western US river basins. We use the System for Hydromet Applications, Research and Prediction (SHARP), developed by the National Center for Atmospheric Research (NCAR) in collaboration with the University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation. SHARP is a fully automated platform for short-term to seasonal hydrologic forecasting applications, incorporating uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions through ensemble methods. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 temperature and precipitation time series through conceptual and physically-oriented models. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. The PF selects and/or weights and resamples the IHCs that are most consistent with external streamflow observations, and uses the particles to initialize a streamflow forecast ensemble driven by ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS). We apply this method in real time for several basins in the western US that are important for water resources management, and perform a hindcast experiment to evaluate the utility of PF-based data assimilation on streamflow forecast skill. This presentation describes findings, including a comparison of sequential and non-sequential particle weighting methods.
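A minimal sketch of the PF weighting-and-resampling step described here, with a scalar state and a toy observation operator standing in for the watershed model (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
n_particles = 100

# Ensemble of initial hydrologic conditions (a single scalar storage state
# per particle stands in for the full model state).
particles = rng.normal(50.0, 10.0, n_particles)
sim_flow = 0.3 * particles                     # toy observation operator

obs_flow, obs_sigma = 16.0, 1.5                # external streamflow observation

# Weight each particle by the likelihood of the observation.
w = np.exp(-0.5 * ((sim_flow - obs_flow) / obs_sigma) ** 2)
w /= w.sum()

# Systematic resampling concentrates the ensemble on consistent IHCs.
positions = (rng.random() + np.arange(n_particles)) / n_particles
idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n_particles - 1)
resampled = particles[idx]
print(resampled.mean(), resampled.std())       # posterior IHC ensemble
```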
NASA Astrophysics Data System (ADS)
Clark, E.; Wood, A.; Nijssen, B.; Newman, A. J.; Mendoza, P. A.
2016-12-01
The System for Hydrometeorological Applications, Research and Prediction (SHARP), developed at the National Center for Atmospheric Research (NCAR), University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation, is a fully automated ensemble prediction system for short-term to seasonal applications. It incorporates uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 plausible temperature and precipitation time series through the Sacramento/Snow-17 model. The forcing ensemble explicitly accounts for measurement and interpolation uncertainties in the development of gridded meteorological forcing time series. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. To select the IHCs that are most consistent with the observations, we employ a particle filter (PF) that weights IHC ensemble members based on observations of streamflow and SWE. These particles are then used to initialize ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS), generating a streamflow forecast ensemble. We test this method in two basins in the Pacific Northwest that are important for water resources management: 1) the Green River upstream of Howard Hanson Dam, and 2) the South Fork Flathead River upstream of Hungry Horse Dam. The first of these is characterized by mixed snow and rain, while the second is snow-dominated. The PF-based forecasts are compared to forecasts based on 1) a single IHC (corresponding to median streamflow) paired with the full GEFS ensemble, and 2) the full IHC ensemble, without filtering, paired with the full GEFS ensemble. In addition to assessing improvements in the spread of IHCs, we perform a hindcast experiment to evaluate the utility of PF-based data assimilation on streamflow forecasts at 1- to 7-day lead times.
Monte Carlo simulation of the transmission of measles: Beyond the mass action principle
NASA Astrophysics Data System (ADS)
Zekri, Nouredine; Clerc, Jean Pierre
2002-04-01
We present a Monte Carlo simulation of the transmission of measles within a population sample during its growing and equilibrium states by introducing two different vaccination schedules of one and two doses. We study the effects of the contact rate per unit time ξ as well as the initial conditions on the persistence of the disease. We find a weak effect of the initial conditions, while the disease persists when ξ lies in the range 1/L-10/L (L being the latent period). Further comparison with existing data, prediction of future epidemics, and other estimations of the vaccination efficiency are provided. Finally, we compare our approach to models using the mass action principle in the growing and equilibrium epidemic regions, and find the incidence independent of the number of susceptibles after the epidemic peak, while it fluctuates strongly in the growing region. This method can be easily applied to other human, animal, and plant diseases and can include more complicated parameters.
Active influence in dynamical models of structural balance in social networks
NASA Astrophysics Data System (ADS)
Summers, Tyler H.; Shames, Iman
2013-07-01
We consider a nonlinear dynamical system on a signed graph, which can be interpreted as a mathematical model of social networks in which the links can have both positive and negative connotations. In accordance with a concept from social psychology called structural balance, the negative links play a key role in both the structure and dynamics of the network. Recent research has shown that in a nonlinear dynamical system modeling the time evolution of “friendliness levels” in the network, two opposing factions emerge from almost any initial condition. Here we study active external influence in this dynamical model and show that any agent in the network can achieve any desired structurally balanced state from any initial condition by perturbing its own local friendliness levels. Based on this result, we also introduce a new network centrality measure for signed networks. The results are illustrated in an international-relations network using United Nations voting record data from 1946 to 2008 to estimate friendliness levels amongst various countries.
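A well-studied model of this type evolves the symmetric matrix X of friendliness levels by dX/dt = X^2, which generically blows up in finite time with a structurally balanced sign pattern; the sketch below integrates that flow under an assumed size, step, and initial condition, and is not the paper's influence scheme:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
x = rng.normal(0.0, 1.0, (n, n))
x = (x + x.T) / 2                    # symmetric initial friendliness levels

dt = 1e-3
for _ in range(5000):
    x = x + dt * (x @ x)             # Euler step of dX/dt = X^2
    if np.abs(x).max() > 1e6:        # the flow blows up in finite time;
        break                        # the sign pattern there is balanced

print(np.sign(x))                    # +/- blocks reveal the two factions
```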
Simultaneous measurements of concentration and velocity in the Richtmyer-Meshkov instability
NASA Astrophysics Data System (ADS)
Reese, Dan; Ames, Alex; Noble, Chris; Oakley, Jason; Rothamer, David; Bonazza, Riccardo
2017-11-01
The Richtmyer-Meshkov instability (RMI) is studied experimentally in the Wisconsin Shock Tube Laboratory (WiSTL) using a broadband, shear-layer initial condition at the interface between a helium-acetone mixture and argon. This interface (Atwood number A=0.7) is accelerated by either a M=1.6 or M=2.2 planar shock wave, and the development of the RMI is investigated through simultaneous planar laser-induced fluorescence (PLIF) and particle image velocimetry (PIV) measurements at the initial condition and four post-shock times. Three Reynolds stresses, the planar turbulent kinetic energy, and the Taylor microscale are calculated from the concentration and velocity fields. The external Reynolds number is estimated from the Taylor scale and the velocity statistics. The results suggest that the flow transitions to fully developed turbulence by the third post-shock time for the high Mach number case, while it may not do so at the lower Mach number. The authors would like to acknowledge the support of the Department of Energy.
NASA Astrophysics Data System (ADS)
Akbarov, S. D.; Ipek, C.
This work studies the influence of the imperfectness of the interface conditions on the dispersion of axisymmetric longitudinal waves in a pre-strained bi-material hollow cylinder. The investigation is made within the 3D linearized theory of elastic waves in elastic bodies with initial stresses. It is assumed that the layers of the hollow cylinder are made of hyperelastic compressible materials whose elasticity relations are given through the harmonic potential. Shear-spring-type imperfectness of the interface conditions is considered, and the degree of this imperfectness is estimated by the shear-spring parameter. Numerical results on the influence of this parameter on the behavior of the dispersion curves are presented and discussed.
FIREX mission requirements document for renewable resources
NASA Technical Reports Server (NTRS)
Carsey, F.; Dixon, T.
1982-01-01
The initial experimental program and mission requirements for a satellite synthetic aperture radar (SAR) system FIREX (Free-Flying Imaging Radar Experiment) for renewable resources is described. The spacecraft SAR is a C-band and L-band VV polarized system operating at two angles of incidence which is designated as a research instrument for crop identification, crop canopy condition assessments, soil moisture condition estimation, forestry type and condition assessments, snow water equivalent and snow wetness assessments, wetland and coastal land type identification and mapping, flood extent mapping, and assessment of drainage characteristics of watersheds for water resources applications. Specific mission design issues such as the preferred incidence angles for vegetation canopy measurements and the utility of a dual frequency (L and C-band) or dual polarization system as compared to the baseline system are addressed.
Evaluating The Reliability of Point Estimates of Wetland Evaporation
NASA Astrophysics Data System (ADS)
Gavin, H.; Agnew, C. T.
The Penman-Monteith formulation of evaporation has been criticised for its reliance upon point estimates, raising concerns that areal estimates of wetland evaporation based upon single weather stations can be misleading. Typically wetlands are composed of a complex mosaic of land cover types, each of which can produce different evaporative rates. The need to account for wetland patches when monitoring hydrological fluxes has been noted, while Morton (1983) has long argued for a fundamentally different approach to the calculation of regional evaporation. This paper presents work carried out at a wet grassland in Southern England that was monitored with several automatic weather stations (AWS) and a Bowen ratio station to investigate microclimate variations. The significance of fetch was examined using the approach adopted by Gash (1986), based upon surface roughness, to estimate the fraction of evaporation sensed from a specific distance upwind of the monitoring station. This theoretical analysis reveals that the fraction of evaporation contributed by the surrounding area steadily increases to a value of 77% at a distance of 224 m and thereafter declines rapidly, under stable atmospheric conditions. Thus point climate observations may not reflect surface conditions at greater distances. This result was tested through the deployment of four AWS around the wetland. The data yielded a different response, suggesting that homogeneous conditions prevailed and that the central AWS did provide reliable areal estimates of evaporation. The apparent contradiction is a result of not accounting for the wind speeds found in wetlands, which lead to widespread atmospheric mixing. These findings are typical of moist conditions, whereas, for example, Guo and Schuepp (1994) found that a patchwork of dry fields and wet ditches, characteristic of the study site in summer, could produce differences of up to 50% in evaporation. The paper will also present the initial results of an investigation of the role of dry patches upon wetland evaporation estimates. Morton, F.I. 1983. Operational estimates of evapotranspiration and their significance to the science and practice of hydrology. Journal of Hydrology 66: 1-76. Gash, J.H.C. 1986. A note on estimating the effect of limited fetch on micrometeorological evaporation measurements. Boundary-Layer Meteorology 35: 409-413. Guo, Y., Schuepp, P.H. 1994. On surface energy balance over the northern wetlands 1. The effects of small-scale temperature and wetness heterogeneity. Journal of Geophysical Research 99(D1): 1601-1612.
Comparative assessment of techniques for initial pose estimation using monocular vision
NASA Astrophysics Data System (ADS)
Sharma, Sumant; D'Amico, Simone
2016-06-01
This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has compared different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrast. This paper focuses on the performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
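The minimum-feature initial pose problem is an instance of the Perspective-n-Point problem; a minimal sketch using OpenCV's general-purpose solver (not one of the three algorithms assessed; the correspondences and camera intrinsics below are hypothetical):

```python
import numpy as np
import cv2  # OpenCV

# 3D feature points on the target model (hypothetical coordinates, meters).
model_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                      [1, 1, 0], [1, 0, 1]], dtype=np.float64)
# Their detected 2D locations in the single image (hypothetical, pixels).
image_pts = np.array([[320, 240], [400, 238], [318, 170],
                      [330, 300], [398, 168], [408, 298]], dtype=np.float64)

fx = fy = 800.0                                   # assumed focal length (pixels)
K = np.array([[fx, 0, 320], [0, fy, 240], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None)
print(ok, rvec.ravel(), tvec.ravel())   # coarse attitude (Rodrigues) and position
```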
NASA Astrophysics Data System (ADS)
Yaparova, N.
2017-10-01
We consider the problem of heating a cylindrical body with an internal thermal source when the main characteristics of the material such as specific heat, thermal conductivity and material density depend on the temperature at each point of the body. We can control the surface temperature and the heat flow from the surface inside the cylinder, but it is impossible to measure the temperature on axis and the initial temperature in the entire body. This problem is associated with the temperature measurement challenge and appears in non-destructive testing, in thermal monitoring of heat treatment and technical diagnostics of operating equipment. The mathematical model of heating is represented as nonlinear parabolic PDE with the unknown initial condition. In this problem, both the Dirichlet and Neumann boundary conditions are given and it is required to calculate the temperature values at the internal points of the body. To solve this problem, we propose the numerical method based on using of finite-difference equations and a regularization technique. The computational scheme involves solving the problem at each spatial step. As a result, we obtain the temperature function at each internal point of the cylinder beginning from the surface down to the axis. The application of the regularization technique ensures the stability of the scheme and allows us to significantly simplify the computational procedure. We investigate the stability of the computational scheme and prove the dependence of the stability on the discretization steps and error level of the measurement results. To obtain the experimental temperature error estimates, computational experiments were carried out. The computational results are consistent with the theoretical error estimates and confirm the efficiency and reliability of the proposed computational scheme.
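The sketch below illustrates the general surface-to-axis space-marching idea for a linear, constant-property radial heat equation, with a crude moving-average smoothing standing in for the regularization technique; the grid, properties, and boundary data are all assumptions, not the paper's nonlinear scheme:

```python
import numpy as np

alpha = 1e-5                      # assumed thermal diffusivity, m^2/s
R, nr, nt, dt = 0.05, 50, 400, 0.5
dr = R / nr
t = np.arange(nt) * dt

T = 300 + 50 * np.sin(2 * np.pi * t / 100)   # measured surface temperature
g = np.zeros(nt)                             # measured surface gradient dT/dr

for i in range(nr, 1, -1):        # march from the surface toward the axis
    r = i * dr
    # Smooth the time derivative: a crude regularization; without it,
    # measurement noise is amplified exponentially while marching inward.
    dTdt = np.gradient(T, dt)
    dTdt = np.convolve(dTdt, np.ones(9) / 9, mode="same")
    g_in = g - dr * (dTdt / alpha - g / r)   # dg/dr = T_t/alpha - g/r
    T_in = T - dr * g                        # dT/dr = g
    T, g = T_in, g_in

print(T[:5])   # reconstructed temperature history near the axis
```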
Hiemstra, Marieke; Engels, Rutger C M E; van Schayck, Onno C P; Otten, Roy
2016-01-01
The home-based smoking prevention programme 'Smoke-free Kids' did not have an effect on the primary outcome, smoking initiation. A possible explanation may be that the programme has a delayed effect. The aim of this study was to evaluate its effects on the development of important precursors of smoking: smoking-related cognitions. We used a cluster randomised controlled trial in 9- to 11-year-old children and their mothers. The intervention condition received five activity modules, including a communication sheet for mothers, by mail at four-week intervals. The control condition received a fact-based programme. Secondary outcomes were attitudes, self-efficacy and social norms. Latent growth curve analyses were used to model the development of cognitions over time. Subsequently, path modelling was used to estimate the programme effects on the initial level and growth of each cognition. Analyses were performed on 1398 never-smoking children at baseline. Results showed that for children in the intervention condition, perceived maternal norms increased less strongly than in the control condition (β = -.10, p = .03). No effects were found for the other cognitions. Based on the limited effects, we do not assume that the programme will have a delayed effect on smoking behaviour later during adolescence.
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportional fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints as used in multiple-point statistics. The theoretical framework is developed and illustrated with estimation and simulation examples.
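The core of IPF is brief to state: alternately rescale the table so that each marginal matches its constraint. A minimal two-dimensional sketch with toy numbers (not the article's sparse multivariate implementation):

```python
import numpy as np

def ipf(p0, row_target, col_target, iters=100):
    """Iterative proportional fitting of a 2D probability table to marginals."""
    p = p0.copy()
    for _ in range(iters):
        p *= (row_target / p.sum(axis=1))[:, None]   # match row marginals
        p *= (col_target / p.sum(axis=0))[None, :]   # match column marginals
    return p

# Initial estimate of a bivariate facies probability table (hypothetical).
p0 = np.array([[0.25, 0.25], [0.25, 0.25]])
row = np.array([0.7, 0.3])       # marginal inferred from well profiles
col = np.array([0.6, 0.4])
p = ipf(p0, row, col)
print(p, p.sum(axis=1), p.sum(axis=0))   # fitted table and its marginals
```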
A practical guideline for intracranial volume estimation in patients with Alzheimer's disease
2015-01-01
Background Intracranial volume (ICV) is an important normalization measure used in morphometric analyses to correct for head size in studies of Alzheimer disease (AD). Inaccurate ICV estimation could introduce bias in the outcome. The current study provides a decision aid for defining protocols for ICV estimation in patients with Alzheimer disease, in terms of the sampling frequencies that can optimally be used on volumetric MRI data and the type of software most suitable for estimating the ICV measure. Methods Two groups of 22 subjects were considered: adult controls (AC) and patients with Alzheimer disease (AD). Reference measurements were calculated for each subject by manually tracing the intracranial cavity by means of visual inspection. The reliability of the reference measurements was assured through intra- and inter-variation analyses. Three publicly well-known software packages (Freesurfer, FSL, and SPM) were examined for their ability to automatically estimate ICV across the groups. Results Analysis of the results supported a significant effect of the estimation method, gender, and cognitive condition of the subject, and of the interaction between the method and cognitive condition factors, on the measured ICV. Sub-sampling results with 95% confidence showed that in order to keep the accuracy of the interleaved slice sampling protocol above 99%, the sampling period cannot exceed 20 millimeters for AC and 15 millimeters for AD. Freesurfer showed promising estimates for both adult groups. However, SPM showed more consistency in its ICV estimation over the different phases of the study. Conclusions This study emphasized the importance of selecting the appropriate protocol, the choice of the sampling period in the manual estimation of ICV, and the selection of suitable software for the automated estimation of ICV. The current study serves as an initial framework for establishing an appropriate protocol in both manual and automatic ICV estimations with different subject populations. PMID:25953026
NASA Astrophysics Data System (ADS)
Hanan, E. J.; Tague, C.; Choate, J.; Liu, M.; Adam, J. C.
2016-12-01
Disturbance is a major force regulating C dynamics in terrestrial ecosystems. Evaluating future C balance in disturbance-prone systems requires understanding the underlying mechanisms that drive ecosystem processes over multiple scales of space and time. Simulation modeling is a powerful tool for bridging these scales; however, model projections are limited by large uncertainties in the initial state of vegetation C and N stores. Watershed models typically use one of two methods to initialize these stores. Spin-up involves running a model until vegetation reaches a steady state determined by climate. This "potential" state, however, assumes that the vegetation across the entire watershed has reached maturity and has a homogeneous age distribution. Yet to reliably represent C and N dynamics in disturbance-prone systems, models should be initialized to reflect non-equilibrium conditions. Alternatively, remote sensing of a single vegetation parameter (typically leaf area index, LAI) can be combined with allometric relationships to allocate C and N to model stores, and this can reflect non-steady-state conditions. However, allometric relationships are species- and region-specific and do not account for environmental variation, resulting in C and N stores that may be unstable. To address this problem, we developed a new approach for initializing C and N pools using the watershed-scale ecohydrologic model RHESSys. The new approach merges the mechanistic stability of spin-up with the spatial fidelity of remote sensing. Unlike traditional spin-up, this approach supports non-homogeneous stand ages. We tested our approach in a pine-dominated watershed in central Idaho, which partially burned in July 2000. We used Landsat and MODIS data to calculate LAI across the watershed following the 2000 fire. We then ran three sets of simulations using spin-up, direct measurements, and the combined approach to initialize vegetation C and N stores, and compared our results to remotely sensed LAI following the simulation period. Model estimates of C, N, and water fluxes varied depending on which approach was used. The combined approach provided the best LAI estimates after 10 years of simulation. This method shows promise for improving projections of C, N, and water fluxes in disturbance-prone watersheds.
Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation
NASA Astrophysics Data System (ADS)
Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien
2018-04-01
We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.
Exemplar-based human action pose correction.
Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen
2014-07-01
The launch of the Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.
Esteban, Segundo; Girón-Sierra, Jose M.; Polo, Óscar R.; Angulo, Manuel
2016-01-01
Most satellites use an on-board attitude estimation system based on the available sensors. In the case of low-cost satellites, which are of increasing interest, it is usual to use magnetometers and Sun sensors. A Kalman filter is commonly recommended for the estimation, to simultaneously exploit the information from the sensors and from a mathematical model of the satellite motion. It would also be convenient to adhere to a quaternion representation. This article focuses on some problems linked to this context. The state of the system should be represented in observable form. Singularities due to alignment of the measured vectors cause estimation problems. Accommodation of the Kalman filter originates convergence difficulties. The article includes a new proposal that solves these problems without needing changes in the Kalman filter algorithm. In addition, the article includes an assessment of different errors and of initialization values for the Kalman filter, and considers the influence of the magnetic dipole moment perturbation, showing how to handle it as part of the Kalman filter framework. PMID:27809250
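For orientation, a single predict/update cycle of a generic linear Kalman filter is sketched below; the article's filter operates on a quaternion attitude state with magnetometer and Sun-sensor models, which this toy two-state example does not reproduce:

```python
import numpy as np

def kf_step(x, P, F, Q, H, R, z):
    """One predict/update cycle of a linear Kalman filter."""
    x = F @ x                                   # predict state with motion model
    P = F @ P @ F.T + Q                         # predict covariance
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ (z - H @ x)                     # correct with measurement
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy 2-state (angle, rate) system observed by a single angle sensor.
F = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 1e-4 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.05]])
x, P = np.zeros(2), np.eye(2)                   # initialization values (see text)
for z in [0.10, 0.22, 0.35, 0.41]:
    x, P = kf_step(x, P, F, Q, H, R, np.array([z]))
print(x)
```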
Stress estimation in reservoirs using an integrated inverse method
NASA Astrophysics Data System (ADS)
Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre
2018-05-01
Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate such an initial stress state. The 3D stress state is constructed with the displacement-based finite element method, assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. The discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm to fit wellbore stress data deduced from leak-off tests and breakouts. The disregard of the geological history and the simplified rheological assumptions mean that only a stress field that is statically admissible and matches the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimates for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
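A minimal sketch of driving a misfit function with CMA-ES, using the `cma` Python package and a linear-in-depth stand-in for the 3D finite element solve (all data and parameter names below are hypothetical):

```python
import numpy as np
import cma  # pip install cma

depths = np.array([1000.0, 1500.0, 2000.0])     # m
obs_shmin = np.array([17.0, 26.0, 35.0])        # MPa, e.g. from leak-off tests

def misfit(theta):
    """Squared mismatch between a linear-in-depth stress model and the data."""
    grad, offset = theta                        # MPa/km gradient and offset
    model = grad * depths / 1000.0 + offset
    return float(np.sum((model - obs_shmin) ** 2))

es = cma.CMAEvolutionStrategy([10.0, 0.0], 5.0)  # initial guess, step size
es.optimize(misfit)
print(es.result.xbest)                           # ~ [18, -1] for these data
```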
Walker, Rachel A; Andreansky, Christopher; Ray, Madelyn H; McDannald, Michael A
2018-06-01
Childhood adversity is associated with exaggerated threat processing and earlier alcohol use initiation. Conclusive links remain elusive, as childhood adversity typically co-occurs with detrimental socioeconomic factors, and its impact is likely moderated by biological sex. To unravel the complex relationships among childhood adversity, sex, threat estimation, and alcohol use initiation, we exposed female and male Long-Evans rats to early adolescent adversity (EAA). In adulthood, >50 days following the last adverse experience, threat estimation was assessed using a novel fear discrimination procedure in which cues predict a unique probability of footshock: danger (p = 1.00), uncertainty (p = .25), and safety (p = .00). Alcohol use initiation was assessed using voluntary access to 20% ethanol, >90 days following the last adverse experience. During development, EAA slowed body weight gain in both females and males. In adulthood, EAA selectively inflated female threat estimation, exaggerating fear to uncertainty and safety, but promoted alcohol use initiation across sexes. Meaningful relationships between threat estimation and alcohol use initiation were not observed, underscoring the independent effects of EAA. Results isolate the contribution of EAA to adult threat estimation and alcohol use initiation, and reveal moderation by biological sex.
In-flight alignment using H ∞ filter for strapdown INS on aircraft.
Pei, Fu-Jun; Liu, Xuan; Zhu, Li
2014-01-01
In-flight alignment is an effective way to improve the accuracy and speed of initial alignment for a strapdown inertial navigation system (INS). During aircraft flight, strapdown INS alignment is disturbed by linear and angular movements of the aircraft. To deal with these disturbances in dynamic initial alignment, a novel alignment method for SINS is investigated in this paper. In this method, an initial alignment error model of SINS in the inertial frame is established. The observability of the system is analyzed by piece-wise constant system (PWCS) theory, and the degree of observability is computed by singular value decomposition (SVD). It is demonstrated that the system is completely observable and that all the system state parameters can be estimated by an optimal filter. An H ∞ filter is then designed to handle the uncertainty of the measurement noise. The simulation results demonstrate that the proposed algorithm can reach better accuracy under dynamic disturbance conditions.
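The observability analysis can be illustrated by stacking the observability matrix of a linear system and inspecting its singular values; the toy three-state system below is an assumption, not the SINS error model:

```python
import numpy as np

# Discrete linear system x_{k+1} = F x_k, measurement z_k = H x_k.
F = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])

# Observability matrix [H; HF; HF^2]; small singular values flag weakly
# observable state directions (the "observable degree").
n = F.shape[0]
obs = np.vstack([H @ np.linalg.matrix_power(F, k) for k in range(n)])
sv = np.linalg.svd(obs, compute_uv=False)
print("rank =", np.linalg.matrix_rank(obs), "singular values =", sv)
```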
Chen, Rongjun; Slater, Nigel K H; Gatlin, Larry A; Kramer, Tony; Shalaev, Evgenyi Y
2008-01-01
Sublimation from lactose and sucrose solutions has been monitored by temperature measurement, visual observation, heat flux sensing and manometric measurements. Estimates of energy transfer rates to the subliming mass made from visual observations and heat flux measurements are in broad agreement, demonstrating for the first time that heat flux sensors can be used to monitor the progress of lyophilization in individual vials with low sample volumes. Furthermore, it is shown that under identical lyophilization conditions the initial rate of drying for lactose solutions is low with little water sublimation for up to 150 minutes, which contrasts markedly with the much faster initial rate of drying for sucrose solutions. Measurement of the initial heat flux between shelf and vial indicated a lower flux to a 10% lactose solution than to a 10% sucrose solution.
Wagner, Chad R.
2007-01-01
The use of one-dimensional hydraulic models currently is the standard method for estimating velocity fields through a bridge opening for scour computations and habitat assessment. Flood-flow contraction through bridge openings, however, is hydrodynamically two dimensional and often three dimensional. Although there is awareness of the utility of two-dimensional models to predict the complex hydraulic conditions at bridge structures, little guidance is available to indicate whether a one- or two-dimensional model will accurately estimate the hydraulic conditions at a bridge site. The U.S. Geological Survey, in cooperation with the North Carolina Department of Transportation, initiated a study in 2004 to compare one- and two-dimensional model results with field measurements at complex riverine and tidal bridges in North Carolina to evaluate the ability of each model to represent field conditions. The field data consisted of discharge and depth-averaged velocity profiles measured with an acoustic Doppler current profiler and surveyed water-surface profiles for two high-flow conditions. For the initial study site (U.S. Highway 13 over the Tar River at Greenville, North Carolina), the water-surface elevations and velocity distributions simulated by the one- and two-dimensional models showed appreciable disparity in the highly sinuous reach upstream from the U.S. Highway 13 bridge. Based on the available data from U.S. Geological Survey streamgaging stations and acoustic Doppler current profiler velocity data, the two-dimensional model more accurately simulated the water-surface elevations and the velocity distributions in the study reach, and contracted-flow magnitudes and direction through the bridge opening. To further compare the results of the one- and two-dimensional models, estimated hydraulic parameters (flow depths, velocities, attack angles, blocked flow width) for measured high-flow conditions were used to predict scour depths at the U.S. Highway 13 bridge by using established methods. Comparisons of pier-scour estimates from both models indicated that the scour estimates from the two-dimensional model were as much as twice the depth of the estimates from the one-dimensional model. These results can be attributed to higher approach velocities and the appreciable flow angles at the piers simulated by the two-dimensional model and verified in the field. Computed flood-frequency estimates of the 10-, 50-, 100-, and 500-year return-period floods on the Tar River at Greenville were also simulated with both the one- and two-dimensional models. The simulated water-surface profiles and velocity fields of the various return-period floods were used to compare the modeling approaches and provide information on what return-period discharges would result in road over-topping and(or) pressure flow. This information is essential in the design of new and replacement structures. The ability to accurately simulate water-surface elevations and velocity magnitudes and distributions at bridge crossings is essential in assuring that bridge plans balance public safety with the most cost-effective design. By compiling pertinent bridge-site characteristics and relating them to the results of several model-comparison studies, the framework for developing guidelines for selecting the most appropriate model for a given bridge site can be accomplished.
Temporal and spatial foliations of spacetimes.
NASA Astrophysics Data System (ADS)
Herold, H.
For the solution of initial-value problems in numerical relativity, the (3+1) splitting of Einstein's equations is usually employed. An important part of this splitting is the choice of the temporal gauge condition. In order to estimate the quality of time-evolution schemes, different time slicings of given well-known spherically symmetric spacetimes have been studied. Besides the maximal slicing condition, the harmonic slicing prescription has been used to calculate temporal foliations of the Schwarzschild and the Oppenheimer-Snyder spacetimes. Additionally, the author has studied a recently proposed, geometrically motivated spatial gauge condition, which is defined by considering the foliations of the three-dimensional space-like hypersurfaces by 2-surfaces of constant mean extrinsic curvature. Apart from the equations for the shift vector, which can be derived for this gauge condition, he has investigated such spatial foliations for well-known stationary axially symmetric spacetimes, namely for the Kerr metric and for numerically determined solutions for rapidly rotating neutron stars.
NASA Astrophysics Data System (ADS)
Cho, Hyunjung; Jin, Kyeong Sik; Lee, Jaegeun; Lee, Kun-Hong
2018-07-01
Small angle x-ray scattering (SAXS) was used to estimate the degree of polymerization of polymer-grafted carbon nanotubes (CNTs) synthesized using a 'grafting from' method. This analysis characterizes the grafted polymer chains without cleaving them from the CNTs, and provides reliable data that can complement conventional methods such as thermogravimetric analysis or transmission electron microscopy. Acrylonitrile was polymerized from the surface of the CNTs by using redox initiation to produce polyacrylonitrile-grafted CNTs (PAN-CNTs). The polymerization time and the initiation rate were varied to control the degree of polymerization. The radius of gyration (Rg) of the PAN-CNTs was determined using the Guinier plot obtained from SAXS solution analysis. The results showed consistent values according to the polymerization condition, up to a maximum Rg = 125.70 Å, whereas that of pristine CNTs was 99.23 Å. The dispersibility of PAN-CNTs in N,N-dimethylformamide was tested using ultraviolet-visible-near-infrared spectroscopy and was confirmed to increase as the degree of polymerization increased. This analysis will be helpful for estimating the degree of polymerization of any polymer-grafted CNTs synthesized using the 'grafting from' method and for fabricating polymer/CNT composite materials.
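The Guinier analysis behind the quoted Rg rests on ln I(q) ≈ ln I0 - (Rg²/3) q² at small q, so Rg follows from the slope of ln I versus q². A sketch on synthetic data (the true Rg below merely echoes the PAN-CNT value):

```python
import numpy as np

rg_true = 125.7                                  # Angstrom, cf. the PAN-CNT value
q = np.linspace(0.002, 0.01, 30)                 # 1/Angstrom, Guinier regime q*Rg ~ 1
noise = 1 + 0.01 * np.random.default_rng(0).normal(size=q.size)
I = 1e3 * np.exp(-(q * rg_true) ** 2 / 3) * noise

# Slope of ln I vs q^2 equals -Rg^2 / 3.
slope, _ = np.polyfit(q ** 2, np.log(I), 1)
rg_est = np.sqrt(-3 * slope)
print(f"Rg = {rg_est:.1f} A")
```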
NASA Astrophysics Data System (ADS)
Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue
2017-08-01
On-orbit Modulation Transfer Function (MTF) is an important indicator for evaluating the performance of the optical remote sensors on a satellite. There are many methods to estimate the MTF, such as the pinhole method, the slit method and so on. Among them, the knife-edge method is efficient, easy to use, and recommended in the ISO 12233 standard for acquiring the whole-frequency MTF curve. However, the accuracy of the algorithm is significantly affected by the accuracy of the Edge Spread Function (ESF) fit, which limits its range of application. In this paper, an optimized knife-edge method using the Powell algorithm is proposed to improve the ESF fitting precision. The Fermi function is the most popular ESF fitting model, yet it is vulnerable to the initial values of its parameters. Owing to its simplicity and fast convergence, the Powell algorithm is applied to fit the parameters adaptively, with insensitivity to the initial parameters. Numerical simulation results reveal the accuracy and robustness of the optimized algorithm under different SNR, edge direction, and tilt angle conditions. Experimental results using images from the camera on the ZY-3 satellite show that this method is more accurate than the standard ISO 12233 knife-edge method in MTF estimation.
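A hedged sketch of the knife-edge pipeline with a Powell-fitted Fermi ESF, run on a synthetic edge rather than ZY-3 imagery (the Fermi parameterization and rough starting point are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic noisy edge-spread function samples across the knife edge.
x = np.linspace(-10, 10, 201)
esf_meas = 1 / (1 + np.exp(-x / 1.2)) \
    + 0.01 * np.random.default_rng(1).normal(size=x.size)

def fermi(p, x):
    """Fermi-function ESF model: amplitude, center, width, offset."""
    a, b, c, d = p
    return a / (1 + np.exp(-(x - b) / c)) + d

# Powell's direction-set method: derivative-free, tolerant of rough x0.
res = minimize(lambda p: np.sum((fermi(p, x) - esf_meas) ** 2),
               x0=[1.0, 0.5, 2.0, 0.0], method="Powell")

lsf = np.gradient(fermi(res.x, x), x)            # line spread function
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                    # normalized whole-frequency MTF
```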
NASA Astrophysics Data System (ADS)
González-Carrasco, J. F.; Gonzalez, G.; Aránguiz, R.; Catalan, P. A.; Cienfuegos, R.; Urrutia, A.; Shrivastava, M. N.; Yagi, Y.; Moreno, M.
2015-12-01
Tsunami inundation maps are a powerful tool for designing evacuation plans for coastal communities, and can additionally be used as a guide for territorial planning and for the assessment of structural damage to port facilities and critical infrastructure (Borrero et al., 2003; Barberopoulou et al., 2011; Power et al., 2012; Mueller et al., 2015). The accuracy of inundation estimates is highly correlated with the tsunami initial conditions, e.g. seafloor vertical deformation, displaced water volume and potential energy (Bolshakova et al., 2011). Usually, the initial conditions are estimated using homogeneous rupture models based on the historical worst-case scenario. However, tsunamigenic events that occurred along the central Chilean continental margin showed a heterogeneous slip distribution of the source, with patches of high slip correlated with fully coupled interseismic zones (Moreno et al., 2012). The main objective of this work is to evaluate the predictive capacity of interseismic coupling models based on geodetic data, comparing them with a homogeneous fault slip model constructed using scaling laws (Blaser et al., 2010), to estimate inundation and runup in coastal areas. To test our hypothesis we select the seismic gap of Maule, where the last large tsunamigenic earthquake in the Chilean subduction zone occurred, using the interseismic coupling (ISC) models proposed by Moreno et al., 2011 and Métois et al., 2013. We generate a slip deficit distribution to build a tsunami source supported by geological information such as slab depth (Hayes et al., 2012), strike, rake and dip (Dziewonski et al., 1981; Ekström et al., 2012) to model tsunami generation, propagation and shoreline impact using Neowave 2D (Yamazaki et al., 2009). We compare the Mw 8.8 Maule tsunami scenario based on the coseismic slip distribution proposed by Moreno et al., 2012 with the homogeneous and heterogeneous models to assess the accuracy of our results against sea level time series and regional runup data (Figure 1). The estimation of the tsunami source using ISC models can be useful for improving the analysis of the tsunami threat, based on a more realistic slip distribution.
Rogue waves and large deviations in deep sea.
Dematteis, Giovanni; Grafke, Tobias; Vanden-Eijnden, Eric
2018-01-30
The appearance of rogue waves in deep sea is investigated by using the modified nonlinear Schrödinger (MNLS) equation in one spatial dimension with random initial conditions that are assumed to be normally distributed, with a spectrum approximating realistic conditions of a unidirectional sea state. It is shown that one can use the incomplete information contained in this spectrum as prior and supplement this information with the MNLS dynamics to reliably estimate the probability distribution of the sea surface elevation far in the tail at later times. Our results indicate that rogue waves occur when the system hits unlikely pockets of wave configurations that trigger large disturbances of the surface height. The rogue wave precursors in these pockets are wave patterns of regular height, but with a very specific shape that is identified explicitly, thereby allowing for early detection. The method proposed here combines Monte Carlo sampling with tools from large deviations theory that reduce the calculation of the most likely rogue wave precursors to an optimization problem that can be solved efficiently. This approach is transferable to other problems in which the system's governing equations contain random initial conditions and/or parameters.
Effects of sources on time-domain finite difference models.
Botts, Jonathan; Savioja, Lauri
2014-07-01
Recent work on excitation mechanisms in acoustic finite difference models focuses primarily on physical interpretations of observed phenomena. This paper offers an alternative view by examining the properties of models from the perspectives of linear algebra and signal processing. Interpretation of a simulation as matrix exponentiation clarifies the separate roles of sources as boundaries and signals. Boundary conditions modify the matrix and thus its modal structure, and initial conditions or source signals shape the solution, but not the modal structure. Low-frequency artifacts are shown to follow from eigenvalues and eigenvectors of the matrix, and previously reported artifacts are predicted from eigenvalue estimates. The role of source signals is also briefly discussed.
Aerodynamic parameter estimation via Fourier modulating function techniques
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1995-01-01
Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear-time-varying differential system models.
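The key identity is that, for a modulating function φ vanishing at both ends of the record, integration by parts removes the output derivative: ∫φẏ dt = -∫φ̇y dt, so unknown initial and boundary conditions never enter. A minimal sketch for a first-order model ẏ = -ay + bu (the model, noise level, and modulating functions are assumptions):

```python
import numpy as np

# Simulate noisy data from y' = -a y + b u with a simple Euler integration.
T, n = 10.0, 1001
t = np.linspace(0, T, n)
u = np.sin(0.7 * t)
a_true, b_true = 0.5, 2.0
y = np.zeros(n)
for i in range(n - 1):
    y[i + 1] = y[i] + (t[1] - t[0]) * (-a_true * y[i] + b_true * u[i])
y += 0.01 * np.random.default_rng(2).normal(size=n)

# Fourier-type modulating functions phi_k = sin(k pi t / T) vanish at 0 and T,
# so integral(phi * y') = -integral(phi' * y) by parts.
rows, rhs = [], []
for k in range(1, 6):
    phi = np.sin(k * np.pi * t / T)
    dphi = (k * np.pi / T) * np.cos(k * np.pi * t / T)
    rows.append([-np.trapz(phi * y, t), np.trapz(phi * u, t)])
    rhs.append(-np.trapz(dphi * y, t))

sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
a_est, b_est = sol
print(a_est, b_est)   # close to (0.5, 2.0); initial conditions never entered
```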
Lempel-Ziv complexity analysis of one dimensional cellular automata.
Estevez-Rams, E; Lora-Serrano, R; Nunes, C A J; Aragón-Fernández, B
2015-12-01
Lempel-Ziv complexity measure has been used to estimate the entropy density of a string. It is defined as the number of factors in a production factorization of a string. In this contribution, we show that its use can be extended, by using the normalized information distance, to study the spatiotemporal evolution of random initial configurations under cellular automata rules. In particular, the transfer information from time consecutive configurations is studied, as well as the sensitivity to perturbed initial conditions. The behavior of the cellular automata rules can be grouped in different classes, but no single grouping captures the whole nature of the involved rules. The analysis carried out is particularly appropriate for studying the computational processing capabilities of cellular automata rules.
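A compact implementation of the LZ76 factor count, together with the usual entropy-density estimate c(n) log2(n)/n, is sketched below (the sample string is illustrative):

```python
import math

def lz76_complexity(s):
    """Number of factors in the Lempel-Ziv (1976) production factorization."""
    i, c, n = 0, 0, len(s)
    while i < n:
        length = 1
        # Extend the current factor while it already occurs in the prefix.
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        c += 1
        i += length
    return c

s = "0110100110010110" * 64        # periodicized Thue-Morse-like sample
c, n = lz76_complexity(s), len(s)
print(c, c * math.log2(n) / n)     # low value -> highly ordered string
```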
Previsic, Mirko; Karthikeyan, Anantha; Lewis, Tony; McCarthy, John
2017-07-26
Capex numbers are in $/kW, Opex numbers in $/kW-yr. The cost estimates provided herein are based on concept design and basic engineering data and embed high levels of uncertainty. This reference economic scenario was done for a very large device version of the OE Buoy technology, which is not presently on Ocean Energy's technology development pathway but will be considered in future business plan development. The DOE reference site condition is considered a low power-density site compared with many of the planned initial deployment locations for the OE Buoy. Many of the sites considered for the initial commercial deployment of the OE Buoy feature much higher wave power densities and shorter period waves. Both of these characteristics will improve the OE Buoy's commercial viability.
NASA Technical Reports Server (NTRS)
Haering, E. A., Jr.; Burcham, F. W., Jr.
1984-01-01
A simulation study was conducted to optimize minimum-time and minimum-fuel paths for an F-15 airplane powered by two F100 Engine Model Derivative (EMD) engines. The benefits of using variable stall margin (uptrim) to increase performance were also determined. This study supports the NASA Highly Integrated Digital Electronic Control (HIDEC) program. The basis for this comparison was the minimum time and fuel used to reach Mach 2 at 13,716 m (45,000 ft) from the initial conditions of Mach 0.15 at 1524 m (5000 ft). Results were also compared to a pilot's estimated minimum-time and minimum-fuel trajectories determined from the F-15 flight manual and previous experience. The minimum-time trajectory took 15 percent less time than the pilot's estimate for the standard EMD engines, while the minimum-fuel trajectory used 1 percent less fuel than the pilot's estimate. The F-15 airplane with EMD engines and uptrim was 23 percent faster than the pilot's estimate, and the minimum fuel used was 5 percent less than the estimate.
Selecting remediation goals by assessing the natural attenuation capacity of groundwater systems
Chapelle, Francis H.; Bradley, Paul M.
1998-01-01
Remediation goals for the source areas of a chlorinated ethene-contaminated groundwater plume were identified by assessing the natural attenuation capacity of the aquifer system. The redox chemistry of the site indicates that sulfate-reducing conditions (H2 ~ 2 nanomoles per liter [nM]) near the contaminant source grade to Fe(III)-reducing conditions (H2 ~ 0.5 nM) downgradient of the source. Sulfate-reducing conditions facilitate the initial reduction of perchloroethene (PCE) to trichloroethene (TCE), cis-dichloroethene (cis-DCE), and vinyl chloride (VC). Subsequently, the Fe(III)-reducing conditions drive the oxidation of cis-DCE and VC to carbon dioxide and chloride. This sequence gives the aquifer a substantial capacity for biodegrading chlorinated ethenes. Natural attenuation capacity (the slope of the steady-state contaminant concentration profile along a groundwater flowpath) is a function of biodegradation rates, aquifer dispersive characteristics, and groundwater flow velocity. The natural attenuation capacity at the Kings Bay, Georgia site was assessed by estimating groundwater flow rates (~0.23 ± 0.12 m/d) and aquifer dispersivity (~1 m) from hydrologic and scale considerations. Apparent biodegradation rate constants (PCE and TCE ~ 0.01 d−1; cis-DCE and VC ~ 0.025 d−1) were estimated from observed contaminant concentration changes along aquifer flowpaths. A boundary-value problem approach was used to estimate the levels to which contaminant concentrations in the source areas must be lowered (by engineered removal), or groundwater flow velocities lowered (by pumping), for the natural attenuation capacity to achieve maximum concentration limits (MCLs) prior to reaching a predetermined regulatory point of compliance.
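The boundary-value reasoning can be sketched in a few lines. A hedged sketch assuming steady-state 1-D advection-dispersion with first-order decay; the rate, velocity, and dispersivity echo the abstract, while the compliance distance and MCL are illustrative:

```python
import numpy as np

# Steady-state 1-D advection-dispersion with first-order decay,
#   D C'' - v C' - k C = 0,  C(0) = C0,  C -> 0 downgradient,
# gives C(x) = C0 * exp(alpha x), alpha = (v - sqrt(v^2 + 4 k D)) / (2 D).
# alpha (1/m) is the natural attenuation capacity (slope of ln C vs x).
v = 0.23            # groundwater velocity, m/d (from the abstract)
disp = 1.0          # dispersivity, m -> D = disp * v
k = 0.01            # PCE/TCE biodegradation rate, 1/d (from the abstract)
D = disp * v

alpha = (v - np.sqrt(v**2 + 4.0 * k * D)) / (2.0 * D)   # negative slope

x_poc = 300.0       # distance to regulatory point of compliance, m (assumed)
mcl = 5.0           # example MCL, ug/L (illustrative)

# Maximum allowable steady source concentration so that C(x_poc) <= MCL:
c0_max = mcl * np.exp(-alpha * x_poc)
print(f"attenuation slope alpha = {alpha:.4f} 1/m")
print(f"max source concentration = {c0_max:.3g} ug/L")
```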
Triggering conditions and mobility of debris flows associated to complex earthflows
NASA Astrophysics Data System (ADS)
Malet, J.-P.; Laigle, D.; Remaître, A.; Maquaire, O.
2005-03-01
Landslides on black marl slopes of the French Alps are, in most cases, complex catastrophic failures in which the initial structural slides transform into slow-moving earthflows. Under specific hydrological conditions, these earthflows can transform into debris flows. Due to their sediment volume and their high mobility, debris flows induced by landslides are far more dangerous than those resulting from continuous erosive processes. A fundamental point in correctly delineating the area exposed to debris flows on the alluvial fans is therefore to understand why and how some earthflows transform into debris flows while most of them stabilize. In this paper, a case of transformation from earthflow to debris flow is presented and analysed. An approach combining geomorphology, hydrology, geotechnics and rheology is adopted to model the debris flow initiation (failure stage) and its runout (post-failure stage). Using the Super-Sauze earthflow (Alpes-de-Haute-Provence, France) as a case study, the objective is to characterize the hydrological and mechanical conditions leading to debris flow initiation in such cohesive material. Results show a very good agreement between the observed runout distances and those calculated using the debris flow modeling code Cemagref 1-D. The deposit thickness in the depositional area and the velocities of the debris flows are also well reproduced. Furthermore, a dynamic slope stability analysis shows that conditions in the debris source area under average pore water pressures and moisture contents are close to failure; a small excess of water can therefore initiate failure. Seepage analysis is used to estimate the volume of debris that can be released under several hydroclimatic conditions. The failed volumes are then introduced into the Cemagref 1-D runout code to propose debris flow hazard scenarios. Results show that a clayey earthflow can transform, under 5-year return period rainfall conditions, into a debris flow with a 1-km runout and a volume ranging from 2000 to 5000 m³. Slope failures induced by 25-year return period rainfall can trigger large debris flow events (30,000 to 50,000 m³) that can reach the alluvial fan and cause damage.
Numerical Simulation of Stress evolution and earthquake sequence of the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Dong, Peiyu; Hu, Caibo; Shi, Yaolin
2015-04-01
The India-Eurasia collision produces N-S compression and results in large thrust faults on the southern edge of the Tibetan Plateau. Differential eastward flow of the lower crust of the plateau leads to large strike-slip and normal faults within the plateau. From 1904 to 2014, more than 30 earthquakes of Mw > 6.5 occurred sequentially in this distinctive tectonic environment. How did the stresses evolve during the last 110 years, and how did the earthquakes interact with each other? Can this knowledge help us forecast future seismic hazards? In this study, we simulated the evolution of the stress field and the earthquake sequence in the Tibetan Plateau over the last 110 years with a 2-D finite element model. Given an initial state of stress, the boundary condition was constrained by present-day GPS observations, assumed to apply at a constant rate during the 110 years. We calculated the stress evolution year by year, with an earthquake occurring whenever the stress exceeded the crustal strength. The stress change due to each large earthquake in the sequence was calculated and contributed to the stress evolution. A key issue is the choice of the initial stress state of the modeling, which is actually unknown. Usually, in studies of earthquake triggering, the initial stress is assumed to be zero, and only the stress changes due to large earthquakes, the Coulomb failure stress changes (ΔCFS), are calculated. To some extent, this simplified method is a powerful tool because it can reveal which fault, or which part of a fault, becomes relatively more risky or safer. Nonetheless, it does not utilize all the information available to us. The earthquake sequence reveals, though far from completely, some information about the stress state in the region. If the entire region is close to a self-organized critical or subcritical state, the earthquake stress drop provides an estimate of the lower limit of the initial state. For locations where no earthquakes occurred during the period, the initial stress has to be lower than a certain value. For locations where large earthquakes occurred during the 110 years, the initial stresses can be inverted if the strength is estimated and the tectonic loading is assumed constant. Therefore, although the initial stress state is unknown, we can estimate a range for it. In this study, we estimated a reasonable range of initial stress and then used the Mohr-Coulomb criterion to regenerate the earthquake sequence, starting from the Daofu earthquake of 1904. We calculated the stress field evolution of the sequence, considering both the tectonic loading and the interaction between the earthquakes, and ultimately obtained a sketch of the present stress. Of course, a single model with a particular initial stress is just one possible model, so a potential seismic hazard distribution based on a single model is not convincing. We therefore tested hundreds of possible initial stress states, all of which reproduce the historical earthquake sequence, and summarized the resulting probabilities of future seismic activity. Although we cannot provide the exact future state, we can narrow the estimate of the regions with a high probability of risk. Our primary results indicate that the Xianshuihe fault and adjacent area is one such zone, with higher risk than other regions in the future. During 2014, 6 earthquakes (M > 5.0) occurred in this region, which corresponds with our result to some degree.
We emphasize the importance of the initial stress field for the earthquake sequence and provide a probabilistic assessment of future seismic hazards. This study may bring new insights to the estimation of the initial stress, earthquake triggering, and stress field evolution.
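For reference, the Coulomb failure stress change used in such triggering studies has a standard form; a minimal sketch (the effective friction value is illustrative):

```python
# Coulomb failure stress change on a receiver fault:
#   dCFS = d_tau + mu_eff * d_sigma_n   (unclamping positive).
# Positive dCFS moves the fault toward failure; in a sequence simulation
# like the one above, failure occurs when accumulated stress (initial +
# tectonic loading + coseismic changes) reaches the strength.
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """d_tau: shear stress change in the slip direction (MPa);
    d_sigma_n: normal stress change (MPa, unclamping positive);
    mu_eff: effective friction including pore-pressure effects."""
    return d_tau + mu_eff * d_sigma_n

# Example: 0.05 MPa shear increase with 0.02 MPa clamping:
print(coulomb_stress_change(0.05, -0.02))  # 0.042 MPa, toward failure
```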
NASA Astrophysics Data System (ADS)
Kuznetsov, A. V.; Kamantsev, I. S.; Zadvorkin, S. M.; Drukarenko, N. A.; Goruleva, L. S.; Veselova, V. E.
2017-12-01
An approach to estimating the residual durability of structural elements in view of their initial stress-strain state is proposed. The adequacy of the developed approach is confirmed by experiments on the cyclic loading of specimens without pronounced stress concentrators, simulating the work of real structural elements under conditions in which the total stresses exceed the level causing local plastic deformation of the material, with allowance for residual stresses.
Stephens, Melika H; Grey, Andrew; Fernandez, Justin; Kalluru, Ramanamma; Faasse, Kate; Horne, Anne; Petrie, Keith J
2016-01-01
To investigate the efficacy of 3-D printed bone models as a tool to facilitate initiation of bisphosphonate treatment among individuals newly diagnosed with osteoporosis, fifty-eight participants with estimated fracture risk above that at which guidelines recommend pharmacological intervention were randomised to receive either a standard physician interview or an interview augmented by the presentation of 3-D bone models. Outcomes were participants' beliefs about osteoporosis and bisphosphonate treatment, and initiation of bisphosphonate therapy assessed at two months using self-report and pharmacy dispensing data. Individuals in the 3-D bone model intervention condition were more emotionally affected by osteoporosis immediately after the interview (p = .04) and reported a greater understanding of osteoporosis at follow-up (p = .04) than the control group. While a greater proportion of the intervention group initiated an oral bisphosphonate regimen (alendronate) (52%) in comparison with the control group (21%), the overall initiation of medication for osteoporosis, including infusion (zoledronate), did not differ significantly (intervention group 62%, control group 45%, p = .19). The presentation of 3-D bone models during a medical consultation can modify cognitive and emotional representations relevant to treatment initiation among people with osteoporosis and might facilitate commencement of bisphosphonate treatment.
NASA Astrophysics Data System (ADS)
Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue
2018-06-01
Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
Yang, Jing; Ye, Shu-jun; Wu, Ji-chun
2011-05-01
This paper studied the influence of bioclogging on the permeability of saturated porous media. Laboratory hydraulic tests were conducted in a two-dimensional C190 sand-filled cell (55 cm wide x 45 cm high x 1.28 cm thick) to investigate the growth of mixed microorganisms (KB-1) and the influence of biofilm on the permeability of saturated porous media under nutrient-rich conditions. Biomass distributions in the water and on the sand in the cell were measured by protein analysis; the biofilm distribution on the sand was observed by confocal laser scanning microscopy; permeability was measured by hydraulic tests. The biomass levels measured in the water and on the sand increased with time and were highest at the bottom of the cell, where the biofilm on the sand was thicker. The results of the hydraulic tests demonstrated that the permeability after biofilm growth was on average 12% of the initial value. To investigate the spatial distribution of permeability in the two-dimensional cell, three models (Taylor, Seki, and Clement) were used to calculate the permeability of porous media with biofilm growth. Taylor's model showed reductions in permeability of 2-5 orders of magnitude; Clement's model predicted 3%-98% of the initial value; Seki's model could not be applied in this study. In conclusion, biofilm growth can clearly decrease the permeability of two-dimensional saturated porous media, but the reduction was much less than that estimated under one-dimensional conditions. Additionally, under two-dimensional saturated, nutrient-rich conditions, Seki's model could not be applied, Taylor's model predicted larger reductions, and the results of Clement's model were closest to those of the hydraulic tests.
A simple method for identifying parameter correlations in partially observed linear dynamic models.
Li, Pu; Vu, Quoc Dong
2015-12-14
Parameter estimation represents one of the most significant challenges in systems biology, because biological models commonly contain a large number of parameters among which there may be functional interrelationships, leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher-order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analyzing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be achieved. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experiment conditions (i.e., initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and achieving unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models, including an insulin receptor dynamics model, are used to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability where applicable. The derivation of the method is straightforward, and thus the algorithm can be easily implemented into a software package.
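The linear-algebra core of the method can be illustrated with a toy sensitivity matrix: null-space directions of the output sensitivity matrix flag parameter combinations that are only jointly identifiable. A sketch, not the authors' code:

```python
import numpy as np

# Columns of the output sensitivity matrix S (rows: time points x outputs,
# columns: parameters) that are linearly dependent signal non-identifiability.
# Null-space vectors of S give the coefficient patterns of the dependencies.
rng = np.random.default_rng(0)
n_t, n_p = 100, 4
base = rng.standard_normal((n_t, 3))
# Construct S so that column 3 = column 1 + column 2 (a hidden dependency).
S = np.column_stack([base[:, 0], base[:, 1], base[:, 2],
                     base[:, 1] + base[:, 2]])

U, sv, Vt = np.linalg.svd(S)
tol = sv.max() * max(S.shape) * np.finfo(float).eps
rank = int((sv > tol).sum())
print("rank:", rank, "of", n_p)            # 3 of 4 -> one dependency
null_vecs = Vt[rank:]                      # rows span the null space of S
print("null-space direction:", np.round(null_vecs[0] / null_vecs[0].max(), 3))
# Nonzero entries identify which parameters are only jointly identifiable.
```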
NASA Astrophysics Data System (ADS)
Tanioka, Yuichiro
2017-04-01
After the tsunami disaster due to the 2011 Tohoku-oki great earthquake, improvement of tsunami forecasting has been an urgent issue in Japan. The National Institute of Disaster Prevention is installing a cable network system for earthquake and tsunami observation (S-NET) on the ocean bottom along the Japan and Kurile trenches. This cable system includes 125 pressure sensors (tsunami meters) separated by 30 km. Along the Nankai trough, JAMSTEC has already installed and operates cable network systems of seismometers and pressure sensors (DONET and DONET2). Those systems are the densest observation networks on top of source areas of great underthrust earthquakes in the world. Real-time tsunami forecasting has depended on the estimation of earthquake parameters, such as the epicenter, depth, and magnitude of earthquakes. Recently, a tsunami forecast method has been developed using the estimation of the tsunami source from tsunami waveforms observed at ocean bottom pressure sensors. However, when we have many pressure sensors separated by 30 km on top of the source area, we do not need to estimate the tsunami source or earthquake source to compute the tsunami. Instead, we can initiate a tsunami simulation directly from those dense tsunami observations. Observed tsunami height differences over a time interval at ocean bottom pressure sensors separated by 30 km were used to estimate the tsunami height distribution at a particular time. In our new method, the tsunami numerical simulation was initiated from that estimated tsunami height distribution. In this paper, the above method is improved and applied to the tsunami generated by the 2011 Tohoku-oki great earthquake. The tsunami source model of the 2011 Tohoku-oki great earthquake estimated by Gusman et al. (2012), using observed tsunami waveforms and coseismic deformation observed by GPS and ocean bottom sensors, is used in this study. The ocean surface deformation is computed from the source model and used as the initial condition of a tsunami simulation. By assuming that this computed tsunami is a real tsunami observed at ocean bottom sensors, a new tsunami simulation is carried out using the above method. Stations in the assumed distribution (each separated by 15 min, about 30 km) 'observed' tsunami waveforms that were actually computed from the source model. Tsunami height distributions are estimated with the above method at 40, 80, and 120 seconds after the origin time of the earthquake. The near-field tsunami inundation forecast method (Gusman et al. 2014) was used to estimate the tsunami inundation along the Sanriku coast. The results show that the observed tsunami inundation is well explained by the estimated inundation, and that the tsunami inundation can be estimated within about 10 minutes of the origin time of the earthquake. The new method developed in this paper is very effective for real-time tsunami forecasting.
Walvoord, Michelle Ann; Stonestrom, David A.; Andraski, Brian J.; Striegl, Robert G.
2004-01-01
Natural flow regimes in deep unsaturated zones of arid interfluvial environments are rarely in hydraulic equilibrium with near-surface boundary conditions imposed by present-day plant–soil–atmosphere dynamics. Nevertheless, assessments of water resources and contaminant transport require realistic estimates of gas, water, and solute fluxes under past, present, and projected conditions. Multimillennial transients that are captured in current hydraulic, chemical, and isotopic profiles can be interpreted to constrain alternative scenarios of paleohydrologic evolution following climatic and vegetational shifts from pluvial to arid conditions. However, interpreting profile data with numerical models presents formidable challenges in that boundary conditions must be prescribed throughout the entire Holocene, when we have at most a few decades of actual records. Models of profile development at the Amargosa Desert Research Site include substantial uncertainties from imperfectly known initial and boundary conditions when simulating flow and solute transport over millennial timescales. We show how multiple types of profile data, including matric potentials, porewater Cl− concentrations, and stable-isotope compositions (δD, δ18O), can be used in multiphase heat, flow, and transport models to expose and reduce uncertainty in paleohydrologic reconstructions. Results indicate that a dramatic shift in the near-surface water balance occurred approximately 16,000 yr ago, but that transitions in precipitation, temperature, and vegetation were not necessarily synchronous. The timing of the hydraulic transition imparts the largest uncertainty to model-predicted contemporary fluxes. In contrast, the uncertainties associated with initial (late Pleistocene) conditions and boundary conditions during the Holocene impart only small uncertainties to model-predicted contemporary fluxes.
Luo, Rutao; Piovoso, Michael J.; Martinez-Picado, Javier; Zurakowski, Ryan
2012-01-01
Mathematical models based on ordinary differential equations (ODE) have had significant impact on understanding HIV disease dynamics and optimizing patient treatment. A model that characterizes the essential disease dynamics can be used for prediction only if the model parameters are identifiable from clinical data. Most previous parameter identification studies for HIV have used sparsely sampled data from the decay phase following the introduction of therapy. In this paper, model parameters are identified from frequently sampled viral-load data taken from ten patients enrolled in the previously published AutoVac HAART interruption study, providing between 69 and 114 viral load measurements from 3–5 phases of viral decay and rebound for each patient. This dataset is considerably larger than those used in previously published parameter estimation studies. Furthermore, the measurements come from two separate experimental conditions, which allows for the direct estimation of drug efficacy and reservoir contribution rates, two parameters that cannot be identified from decay-phase data alone. A Markov-Chain Monte-Carlo method is used to estimate the model parameter values, with initial estimates obtained using nonlinear least-squares methods. The posterior distributions of the parameter estimates are reported and compared for all patients. PMID:22815727
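A minimal sketch of the two-stage strategy described above, with a single-exponential viral-decay model standing in for the full HIV ODE system; all names and values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

# Stage 1: nonlinear least squares for a starting point.
# Stage 2: random-walk Metropolis sampling of the posterior.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 30.0, 40)                      # days
true = (5.0, 0.3)                                   # log10 V0, decay rate /d
y = true[0] - true[1] * t / np.log(10) + 0.1 * rng.standard_normal(t.size)

model = lambda t, logv0, delta: logv0 - delta * t / np.log(10)
p0, _ = curve_fit(model, t, y, p0=[4.0, 0.1])       # least-squares init

def log_post(p):                                    # flat priors, sigma = 0.1
    if p[1] <= 0.0:
        return -np.inf
    return -0.5 * np.sum((y - model(t, *p)) ** 2) / 0.1**2

chain, p, lp = [], p0.copy(), log_post(p0)
for _ in range(20000):
    q = p + rng.normal(scale=[0.02, 0.005])         # random-walk proposal
    lq = log_post(q)
    if np.log(rng.uniform()) < lq - lp:             # Metropolis accept step
        p, lp = q, lq
    chain.append(p.copy())
post = np.array(chain[5000:])                       # drop burn-in
print("posterior mean:", post.mean(axis=0), "sd:", post.std(axis=0))
```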
Hydrous komatiites from Commondale, South Africa: An experimental study
NASA Astrophysics Data System (ADS)
Barr, J. A.; Grove, T. L.; Wilson, A. H.
2009-06-01
This study examines the emplacement conditions of komatiites in the 3.33 Ga Commondale Ultramafic Suite in South Africa. The komatiites of Commondale are unlike any other komatiites in both their physical structure and chemical nature. Komatiite unit chill margins preserve original komatiite liquid compositions with an Mg# of 0.91, MgO = 31.9 wt.%, Al2O3/TiO2 = 80 (wt.% ratio), and an SiO2 content of 49.7 wt.%. A common feature throughout the komatiite sequence is the presence of orthopyroxene spinifex, where original orthopyroxene crystals are still preserved. The compositional information preserved in the most primitive of the natural pyroxenes present in these spinifex zones (Mg# = 0.92) provides insight into the original emplacement conditions of the komatiites. This study used anhydrous and hydrous equilibrium experiments, along with disequilibrium cooling-rate experiments, to quantify the crystallization conditions of the Commondale komatiites. The anhydrous 1-atm liquidus was found at 1550 °C, with Fo97 olivine being the initial crystallizing phase, followed by spinel and then by protoenstatite (Mg# 0.95) at 1335 °C. The phase relations were also examined at 200 MPa under H2O-saturated conditions. The addition of ~4 wt.% H2O lowers the appearance temperature of the initial pyroxene by 210 °C, thereby producing orthopyroxene with an Mg# closer to that of the most primitive preserved orthopyroxenes found in the komatiites. Additionally, dynamic cooling-rate experiments show that the natural pyroxenes preserve a chemical signature indicative of crystallization and cooling within an inflated flow complex. Estimates of the pre-eruptive H2O content of the Commondale komatiites are between ~2 and 4.3 wt.% H2O in the liquid. This range is similar to that estimated for the 3.5 Ga komatiites of the Barberton Mountainland and may indicate formation of both suites in similar tectonic environments.
De Nisco, Giuseppe; Zhang, Peng; Calò, Karol; Liu, Xiao; Ponzini, Raffaele; Bignardi, Cristina; Rizzo, Giovanna; Deng, Xiaoyan; Gallo, Diego; Morbiducci, Umberto
2018-02-08
Personalized computational hemodynamics (CH) is a promising tool to clarify/predict the link between low-density lipoprotein (LDL) transport in the aorta, disturbed shear and atherogenesis. However, CH uses simplifying assumptions that represent sources of uncertainty. In particular, modelling blood-side to wall LDL transfer is challenged by the cumbersome protocols needed to obtain reliable LDL concentration profile estimations. This paucity of data is limiting the establishment of rigorous CH protocols able to balance the trade-offs between the variety of in vivo data to be acquired and the accuracy required by biological/clinical applications. In this study, we analyze the impact of LDL concentration initialization (initial conditions, ICs) and inflow boundary conditions (BCs) on CH models of LDL blood-to-wall transfer in the aorta. Technically, in an image-based model of the human aorta, two different inflow BCs are generated by imposing subject-specific 3D PC-MRI-measured or idealized (flat) inflow velocity profiles. For each simulated BC, four different ICs for LDL concentration are applied, imposing as IC the LDL distribution resulting from steady-state simulations with average conditions, or constant LDL concentration values. Based on the CH results, we conclude that: (1) the imposition of realistic 3D velocity profiles as inflow BCs reduces the uncertainty affecting the representation of LDL transfer; (2) different LDL concentration ICs lead to markedly different patterns of LDL transfer. Given that it is not possible to verify in vivo the proper LDL concentration initialization to be applied, we suggest carefully setting and unambiguously declaring the imposed BCs and LDL concentration IC when modelling LDL transfer in the aorta, in order to obtain reproducible and ultimately comparable results among different laboratories. Copyright © 2017 Elsevier Ltd. All rights reserved.
Muram, David; Kaltenboeck, Anna; Boytsov, Natalie; Hayes-Larson, Eleanor; Ivanova, Jasmina; Birnbaum, Howard G; Swindle, Ralph
2015-11-01
Patterns of care following topical testosterone agent (TTA) initiation are poorly understood. This study aimed to characterize care following TTA initiation and compare results between patients with and without a serum testosterone (T) assay within 30 days before and including TTA initiation. Adult men (N=4,146) initiating TTAs from January 1, 2011, to March 31, 2012, were identified from a commercially insured database. Patients were included if they initiated at recommended starting dose (RSD) and had ≥12 and ≥6 months of continuous eligibility preinitiation (baseline) and postinitiation (study period), respectively. Patients were stratified by preinitiation T assay. Maintenance dose attainment month was determined using unadjusted generalized estimating equations regression to compare dose relative to RSD month by month. Outcomes included maintenance dose attainment month, time to stopping of index TTA refills or a claim for nonindex testosterone replacement therapy (TRT), and proportion of patients with study period T assay or diagnosis of hypogonadism (HG) or another low testosterone condition, and were compared using chi-square and Wilcoxon rank-sum tests for categorical and continuous variables, respectively. Maintenance dose was attained in Month 4 postinitiation, at 115.2% of RSD. Approximately 46% of patients had a preinitiation T assay; these men were more likely to receive a diagnosis of HG or another low testosterone condition, to have a follow-up T assay, to continue treatment by filling a nonindex TRT, and less likely to stop refilling treatment with their index TTA. Differences in care following TTA initiation suggest that preinitiation T assays (i.e., guideline-based care) may be helpful in ensuring treatment benefits. © The Author(s) 2014.
Assessment of watershed regionalization for the land use change parameterization
NASA Astrophysics Data System (ADS)
Randusová, Beata; Kohnová, Silvia; Studvová, Zuzana; Marková, Romana; Nosko, Radovan
2016-04-01
The estimation of design discharges and water levels of extreme floods is one of the most important parts of the design process for a large number of engineering projects and studies. Floods and other natural hazards initiated by climate, soil, and land use changes are highly important in the 21st century, and flood risk and design flood estimation are particularly challenging. Methods of design flood estimation can be applied either locally or regionally. To obtain design values in cases where no recorded data exist, many countries have adopted procedures that fit local conditions and requirements. One of these methods is the Soil Conservation Service Curve Number (SCS-CN) method, which is often used in design flood estimation for ungauged sites. The SCS-CN method is an empirical rainfall-runoff model developed by the USDA Natural Resources Conservation Service (formerly the Soil Conservation Service, or SCS). The runoff curve number (CN) is based on the hydrological soil characteristics, land use, land management, and antecedent saturation conditions of the soil (the basic runoff relation is sketched below). This study focuses on developing the SCS-CN methodology for the changing land use conditions in Slovak basins (with the Myjava catchment as the pilot site), regionalizing current land use data and actual rainfall and discharge measurements of the selected river basins. The state of water erosion and sediment transport was also analyzed, along with a subsequent proposal of erosion control measures. The regionalized SCS-CN method was subsequently used to assess the effectiveness of these control measures in reducing runoff from the selected basin. For the determination of sediment transport from the control measures to the Myjava basin, the SDR (Sediment Delivery Ratio) model was used.
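A sketch of the underlying SCS-CN runoff relation, with the usual initial abstraction Ia = 0.2S; the curve numbers and storm depth below are illustrative:

```python
# Standard SCS-CN runoff equation (SI units):
#   S = 25400/CN - 254 (mm),  Q = (P - Ia)^2 / (P - Ia + S) for P > Ia.
def scs_cn_runoff(p_mm, cn):
    """Direct runoff depth (mm) for storm rainfall p_mm and curve number cn."""
    s = 25400.0 / cn - 254.0      # potential maximum retention, mm
    ia = 0.2 * s                  # initial abstraction, mm
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: how a land-use-driven CN change alters runoff from a 60 mm storm.
for cn in (65, 75, 85):           # e.g., pasture -> cropland -> urbanized
    print(cn, round(scs_cn_runoff(60.0, cn), 1), "mm")
```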
A unifying view of synchronization for data assimilation in complex nonlinear networks
NASA Astrophysics Data System (ADS)
Abarbanel, Henry D. I.; Shirman, Sasha; Breen, Daniel; Kadakia, Nirag; Rey, Daniel; Armstrong, Eve; Margoliash, Daniel
2017-12-01
Networks of nonlinear systems contain unknown parameters and dynamical degrees of freedom that may not be observable with existing instruments. From observable state variables, we want to estimate the connectivity of a model of such a network and determine the full state of the model at the termination of a temporal observation window during which measurements transfer information to a model of the network. The model state at the termination of a measurement window acts as an initial condition for predicting the future behavior of the network. This allows the validation (or invalidation) of the model as a representation of the dynamical processes producing the observations. Once the model has been tested against new data, it may be utilized as a predictor of responses to innovative stimuli or forcing. We describe a general framework for the tasks involved in the "inverse" problem of determining properties of a model built to represent measured output from physical, biological, or other processes when the measurements are noisy, the model has errors, and the state of the model is unknown when measurements begin. This framework is called statistical data assimilation and is the best one can do in estimating model properties through the use of the conditional probability distributions of the model state variables, conditioned on observations. There is a very broad arena of applications of the methods described. These include numerical weather prediction, properties of nonlinear electrical circuitry, and determining the biophysical properties of functional networks of neurons. Illustrative examples will be given of (1) estimating the connectivity among neurons with known dynamics in a network of unknown connectivity, and (2) estimating the biophysical properties of individual neurons in vitro taken from a functional network underlying vocalization in songbirds.
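A minimal sketch of assimilation by synchronization, with Lorenz-63 standing in for the network: nudging the model with observations of x alone recovers the unobserved y and z. The coupling gain and all values are illustrative:

```python
import numpy as np

# Synchronization (nudging) data assimilation: couple a model to
# observations of one state variable and let the unobserved variables
# synchronize to the truth despite a wrong initial condition.
def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, n, k = 0.005, 20000, 20.0        # step, steps, coupling gain
truth = np.array([1.0, 1.0, 1.0])
model = np.array([-5.0, 7.0, 20.0])  # wrong initial condition

for _ in range(n):
    truth = truth + dt * lorenz_rhs(truth)                 # "data" generator
    drive = k * np.array([truth[0] - model[0], 0.0, 0.0])  # nudge x only
    model = model + dt * (lorenz_rhs(model) + drive)

print("truth:", np.round(truth, 3))
print("model:", np.round(model, 3))   # y, z recovered from x observations
```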
Reproducibility of Interferon Gamma (IFN-γ) Release Assays. A Systematic Review
Tagmouti, Saloua; Slater, Madeline; Benedetti, Andrea; Kik, Sandra V.; Banaei, Niaz; Cattamanchi, Adithya; Metcalfe, John; Dowdy, David; van Zyl Smit, Richard; Dendukuri, Nandini
2014-01-01
Rationale: Interferon gamma (IFN-γ) release assays for latent tuberculosis infection result in a larger-than-expected number of conversions and reversions in occupational screening programs, and reproducibility of test results is a concern. Objectives: Knowledge of the relative contribution and extent of the individual sources of variability (immunological, preanalytical, or analytical) could help optimize testing protocols. Methods: We performed a systematic review of studies published by October 2013 on all potential sources of variability of commercial IFN-γ release assays (QuantiFERON-TB Gold In-Tube and T-SPOT.TB). The included studies assessed test variability under identical conditions and under different conditions (the latter both overall and stratified by individual sources of variability). Linear mixed effects models were used to estimate within-subject SD. Measurements and Main Results: We identified a total of 26 articles, including 7 studies analyzing variability under the same conditions, 10 studies analyzing variability with repeat testing over time under different conditions, and 19 studies reporting individual sources of variability. Most data were on QuantiFERON (only three studies on T-SPOT.TB). A considerable number of conversions and reversions were seen around the manufacturer-recommended cut-point. The estimated range of variability of IFN-γ response in QuantiFERON under identical conditions was ±0.47 IU/ml (coefficient of variation, 13%) and ±0.26 IU/ml (30%) for individuals with an initial IFN-γ response in the borderline range (0.25–0.80 IU/ml). The estimated range of variability in noncontrolled settings was substantially larger (±1.4 IU/ml; 60%). Blood volume inoculated into QuantiFERON tubes and preanalytic delay were identified as key sources of variability. Conclusions: This systematic review shows substantial variability with repeat IFN-γ release assays testing even under identical conditions, suggesting that reversions and conversions around the existing cut-point should be interpreted with caution. PMID:25188809
Davis, Kevin C; Blitstein, Jonathan L; Evans, W Douglas; Kamyab, Kian
2010-07-21
Prior research supports the notion that parents have the ability to influence their children's decisions regarding sexual behavior. Yet parent-based approaches to curbing teen pregnancy and STDs have been relatively unexplored. The Parents Speak Up National Campaign (PSUNC) is a multimedia campaign that attempts to fill this void by targeting parents of teens to encourage parent-child communication about waiting to have sex. The campaign follows a theoretical framework that identifies cognitions that are targeted in campaign messages and theorized to influence parent-child communication. While a previous experimental study showed PSUNC messages to be effective in increasing parent-child communication, it did not address how these effects manifest through the PSUNC theoretical framework. The current study examines the PSUNC theoretical framework by 1) estimating the impact of PSUNC on specific cognitions identified in the theoretical framework and 2) examining whether those cognitions are indeed associated with parent-child communication. Our study consists of a randomized efficacy trial of PSUNC messages under controlled conditions. A sample of 1,969 parents was randomly assigned to treatment (PSUNC exposure) and control (no exposure) conditions. Parents were surveyed at baseline and at 4 weeks, 6 months, 12 months, and 18 months post-baseline. Linear regression procedures were used in our analyses. Outcome variables included self-efficacy to communicate with the child, long-term outcome expectations that communication would be successful, and norms on the appropriate age for sexual initiation. We first estimated multivariable models to test whether these cognitive variables predict parent-child communication longitudinally. Longitudinal change in each cognitive variable was then estimated as a function of treatment condition, controlling for baseline individual characteristics. Norms related to the appropriate age for sexual initiation and outcome expectations that communication would be successful were predictive of parent-child communication among both mothers and fathers. Treatment condition mothers exhibited larger changes than control mothers in both of these cognitive variables; fathers exhibited no exposure effects. Results suggest that within a controlled setting, the "wait until older" norm and long-term outcome expectations were appropriate cognitions to target, and the PSUNC media materials were successful in impacting them, particularly among mothers. This study highlights the importance of theoretical frameworks for parent-focused campaigns that identify appropriate behavioral precursors that are both predictive of a campaign's distal behavioral outcome and sensitive to campaign messages.
Neuert, Mark A C; Dunning, Cynthia E
2013-09-01
Strain energy-based adaptive material models are used to predict bone resorption resulting from stress shielding induced by prosthetic joint implants. Generally, such models are governed by two key parameters: a homeostatic strain-energy state (K) and a threshold deviation from this state required to initiate bone reformation (s). A refinement procedure has been performed to estimate these parameters in the femur and glenoid; this study investigates the specific influences of these parameters on resulting density distributions in the distal ulna. A finite element model of a human ulna was created using micro-computed tomography (µCT) data, initialized to a homogeneous density distribution, and subjected to approximate in vivo loading. Values for K and s were tested, and the resulting steady-state density distribution compared with values derived from µCT images. The sensitivity of these parameters to initial conditions was examined by altering the initial homogeneous density value. The refined model parameters selected were then applied to six additional human ulnae to determine their performance across individuals. Model accuracy using the refined parameters was found to be comparable with that found in previous studies of the glenoid and femur, and gross bone structures, such as the cortical shell and medullary canal, were reproduced. The model was found to be insensitive to initial conditions; however, a fair degree of variation was observed between the six specimens. This work represents an important contribution to the study of changes in load transfer in the distal ulna following the implementation of commercial orthopedic implants.
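A hedged sketch of a strain-energy adaptive rule of this family, with homeostatic stimulus K and lazy-zone half-width s; the constants, loads, and density bounds below are illustrative, not the study's calibrated values:

```python
import numpy as np

# Each element updates its density toward the homeostatic stimulus K, but
# only when the local stimulus U/rho leaves the "lazy zone" of width s*K.
def update_density(rho, U, K, s, B=1.0, dt=1.0,
                   rho_min=0.01, rho_max=1.73):
    stim = U / rho                      # strain energy per unit mass
    drho = np.zeros_like(rho)
    high = stim > (1.0 + s) * K         # overload -> bone apposition
    low = stim < (1.0 - s) * K          # disuse   -> resorption
    drho[high] = B * (stim[high] - (1.0 + s) * K)
    drho[low] = B * (stim[low] - (1.0 - s) * K)
    return np.clip(rho + dt * drho, rho_min, rho_max)

# Toy iteration from a homogeneous start (as in the study's initialization):
rho = np.full(5, 0.8)
U = np.array([0.002, 0.004, 0.008, 0.016, 0.032])   # element strain energies
for _ in range(2000):
    rho = update_density(rho, U, K=0.01, s=0.25)
print(np.round(rho, 3))   # densities diverge toward low/high steady values
```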
NASA Technical Reports Server (NTRS)
Achuthavarier, Deepthi; Koster, Randal; Marshak, Jelena; Schubert, Siegfried; Molod, Andrea
2018-01-01
In this study, we examine the prediction skill and predictability of the Madden-Julian Oscillation (MJO) in a recent version of the NASA GEOS-5 atmosphere-ocean coupled model run at 1/2-degree horizontal resolution. The results are based on a suite of hindcasts produced as part of the NOAA SubX project, consisting of seven ensemble members initialized every 5 days for the period 1999-2015. The atmospheric initial conditions were taken from the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), and the ocean and sea ice states were taken from a GMAO ocean analysis. The land states were initialized from the MERRA-2 land output, which is based on observation-corrected precipitation fields. We investigated the MJO prediction skill in terms of the bivariate correlation coefficient for the real-time multivariate MJO (RMM) indices. The correlation coefficient stays at or above 0.5 out to forecast lead times of 26-36 days, with a pronounced increase in skill for forecasts initialized from phase 3, when the MJO convective anomaly is located in the central tropical Indian Ocean. A corresponding estimate of the upper limit of predictability is calculated by considering a single ensemble member as the truth and verifying the ensemble mean of the remaining members against it. The predictability estimates fall between 35 and 37 days (taken as the forecast lead at which the correlation reaches 0.5) and are rather insensitive to the initial MJO phase. The model shows slightly higher skill when the initial conditions contain strong MJO events compared to weak events, although the difference in skill is evident only from leads 1 to 20. Similar to other models, the RMM-index-based skill arises mostly from the circulation components of the index; the skill of the convective component drops to 0.5 by day 20, as opposed to day 30 for the circulation fields. The propagation of the MJO anomalies over the Maritime Continent does not appear problematic in the GEOS-5 hindcasts, implying that the Maritime Continent predictability barrier may not be a major concern in this model. Finally, the MJO prediction skill in this version of GEOS-5 is superior to that of the current seasonal prediction system at the GMAO; this could be attributed partly to a slightly better representation of the MJO in the free-running version of this model and partly to the improved atmospheric initialization from MERRA-2.
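For reference, the bivariate RMM correlation used for this kind of skill assessment has a standard form; a sketch with synthetic phase angles (the 0.5 skill threshold in the text is applied to this quantity):

```python
import numpy as np

# Bivariate correlation for MJO skill: for forecast RMM components (a1, a2)
# and verifying analysis (b1, b2) at a fixed lead, computed over start dates:
#   COR = sum(a1 b1 + a2 b2) / sqrt(sum(a1^2+a2^2)) / sqrt(sum(b1^2+b2^2)).
def bivariate_correlation(a1, a2, b1, b2):
    num = np.sum(a1 * b1 + a2 * b2)
    return num / (np.sqrt(np.sum(a1**2 + a2**2)) *
                  np.sqrt(np.sum(b1**2 + b2**2)))

# Toy check: a forecast with a small phase-angle error retains high skill.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 500)      # observed MJO phase angles
b1, b2 = np.cos(theta), np.sin(theta)
err = 0.3                                       # ~17 degree phase error
a1, a2 = np.cos(theta + err), np.sin(theta + err)
print(bivariate_correlation(a1, a2, b1, b2))    # = cos(err), about 0.955
```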
Gong, Xue Wen; Liu, Hao; Sun, Jing Sheng; Ma, Xiao Jian; Wang, Wan Ning; Cui, Yong Sheng
2017-04-18
An experiment was conducted to investigate soil evaporation (E), crop transpiration (T), evapotranspiration (ET) and the ratio of evaporation to evapotranspiration (E/ET) of drip-irrigated tomato planted in a typical solar greenhouse in North China under different water conditions [the irrigation amount was determined from the accumulated evaporation (Ep) of a 20 cm evaporation pan, with two treatments: full irrigation (0.9Ep) and deficit irrigation (0.5Ep)] at different growth stages in 2015 and 2016 at the Xinxiang Comprehensive Experimental Station, Chinese Academy of Agricultural Sciences. The effects of deficit irrigation on the crop coefficient (Kc) and the variation of the water stress coefficient (Ks) throughout the growing season were also discussed. E, T and ET of tomato were calculated with a dual crop coefficient approach and compared with the measured data. Results indicated that E in the full irrigation treatment was 21.5% and 20.4% higher than in the deficit irrigation treatment in 2015 and 2016, respectively, accounting for 24.0% and 25.0% of ET over the whole growing season. The maximum E/ET occurred in the initial stage of tomato growth, and the minimum in the middle stage. Kc for the full irrigation treatment was 0.45, 0.89, 1.06 and 0.93 in the initial, development, middle and late stages, respectively, and 0.45, 0.89, 0.87 and 0.41 for the deficit irrigation treatment. Ks for the deficit irrigation treatment was 0.98, 0.93, 0.78 and 0.39 in the initial, development, middle and late stages, respectively. The dual crop coefficient method accurately estimated ET of greenhouse tomato under different water conditions in the 2015 and 2016 seasons, with a mean absolute error (MAE) of 0.36-0.48 mm·d−1 and a root mean square error (RMSE) of 0.44-0.65 mm·d−1. The method also estimated E and T accurately, with MAE of 0.15-0.19 and 0.26-0.56 mm·d−1, and RMSE of 0.20-0.24 and 0.33-0.72 mm·d−1, respectively.
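A sketch of the FAO-56 dual crop coefficient partitioning underlying these estimates; the coefficient and ET0 values below are illustrative, not the paper's:

```python
# FAO-56 dual crop coefficient partitioning:
#   ET = (Ks * Kcb + Ke) * ET0,  T = Ks * Kcb * ET0,  E = Ke * ET0,
# where Kcb is the basal (transpiration) coefficient, Ke the soil-evaporation
# coefficient, and Ks the water-stress reduction factor (Ks = 1: no stress).
def dual_kc_et(et0, kcb, ke, ks=1.0):
    t = ks * kcb * et0          # crop transpiration, mm/d
    e = ke * et0                # soil evaporation, mm/d
    return e, t, e + t

# Full irrigation at mid-season (Ks = 1) vs deficit irrigation (Ks = 0.78):
et0 = 4.0                        # reference ET, mm/d (assumed)
for ks in (1.0, 0.78):
    e, t, et = dual_kc_et(et0, kcb=0.95, ke=0.10, ks=ks)
    print(f"Ks={ks}: E={e:.2f}, T={t:.2f}, ET={et:.2f} mm/d, E/ET={e/et:.2f}")
```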
Auditing of suppliers as the requirement of quality management systems in construction
NASA Astrophysics Data System (ADS)
Harasymiuk, Jolanta; Barski, Janusz
2017-07-01
The choice of a supplier of construction materials can be an important factor in increasing or reducing the cost of building works. Construction materials represent from 40 to 70% of the investment cost, depending on the kind of works to be realized. Suppliers must be evaluated both from the point of view of the effectiveness of the construction undertaking and from the point of view of conformity with the quality management systems being implemented in the contractors' organizations. The evaluation of suppliers of construction materials and subcontractors of specialist works is a formal requirement in quality management systems that conform to the ISO 9001 standard. The aim of this paper is to show how an audit can be used to assess the credibility and reliability of a supplier of construction materials. The article describes the kinds of audits carried out in quality management systems, with particular attention to so-called second-party audits. It characterizes the criteria for evaluating a supplier's qualitative capability and the method of selecting a supplier of construction materials. The paper also proposes exemplary questions to be assessed in the audit process, describes the way this assessment is conducted, and discusses the conditions on which it depends.
Rajkumar, Prabu; Pattabi, Kamaraj; Vadivoo, Selvaraj; Bhome, Arvind; Brashier, Bill; Bhattacharya, Prashanta; Mehendale, Sanjay M
2017-05-29
Chronic obstructive pulmonary disease (COPD) is a common, preventable and treatable chronic respiratory disease, which affects 210 million people globally. Global and national guidelines exist for the management of COPD. Although evidence-based, they are inadequate to address the phenotypic and genotypic heterogeneity in India, and the co-existence of other chronic respiratory diseases can adversely influence the prognosis of COPD. India has a huge burden of COPD with various risk factors and comorbid conditions. However, valid prevalence estimates employing spirometry as the diagnostic tool, and data on important comorbid conditions, are not available. This study protocol is designed to address this knowledge gap and eventually to build a database for long-term cohort studies describing the phenotypic and genotypic heterogeneity among COPD patients in India. The primary objective is to estimate the prevalence of COPD among adults aged ≥25 years for each gender in India. The secondary objective is to identify the risk factors for COPD and important comorbid conditions such as asthma and post-tuberculosis sequelae. It is also proposed to validate the currently available definitions for COPD diagnosis in India. A cross-sectional study will be undertaken among the populations of sub-urban areas of Chennai and Shillong, which represent the Southern and Northeastern regions of India. We will collect data on sociodemographic variables, economic characteristics, risk factors of COPD and comorbidities. The Global Initiative for Chronic Obstructive Lung Disease (GOLD) and Global Initiative for Asthma (GINA) definitions will be used for the diagnosis of COPD and asthma. Data will be analysed to estimate the prevalence of COPD, asthma and associated factors. This study proposal was approved by the respective institutional ethics committees of the participating institutions. The results will be disseminated through publications in peer-reviewed journals, and a report will be submitted to the concerned public health authorities in India for developing appropriate research and management policies. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Jet and disc luminosities in tidal disruption events
NASA Astrophysics Data System (ADS)
Piran, Tsvi; Sądowski, Aleksander; Tchekhovskoy, Alexander
2015-10-01
Tidal disruption events (TDEs) explore the whole range of accretion rates and configurations. A challenging question is what the corresponding light curves of these events are. We explore numerically the disc luminosity and the conditions within the inner region of the disc using a fully general relativistic slim disc model. Those conditions determine the magnitude of the magnetic field that engulfs the black hole, and this, in turn, determines the Blandford-Znajek jet power. We estimate this power in two different ways and show that they are self-consistent. We find, as expected earlier from analytic arguments, that neither the disc luminosity nor the jet power follows the accretion rate throughout the disruption event. The disc luminosity varies only logarithmically with the accretion rate at super-Eddington luminosities. The jet power initially follows the accretion rate but remains constant after the transition from super- to sub-Eddington. At lower accretion rates, at the end of the magnetically arrested disc (MAD) phase, the disc becomes thin and the jet may stop altogether. These new estimates of the jet power and disc luminosity, which do not simply follow the mass fallback rate, should be taken into account when searching for TDEs and analysing light curves of TDE candidates. Identification of some of the above-mentioned transitions may enable us to better estimate TDE parameters.
Benoussaad, Mourad; Poignet, Philippe; Hayashibe, Mitsuhiro; Azevedo-Coste, Christine; Fattal, Charles; Guiraud, David
2013-06-01
We investigated the parameter identification of a multi-scale physiological model of skeletal muscle, based on Huxley's formulation. We focused particularly on the knee joint controlled by quadriceps muscles under electrical stimulation (ES) in subjects with a complete spinal cord injury. A noninvasive and in vivo identification protocol was thus applied through surface stimulation in nine subjects and through neural stimulation in one ES-implanted subject. The identification protocol included initial identification steps, which are adaptations of existing identification techniques to estimate most of the parameters of our model. Then we applied an original and safer identification protocol in dynamic conditions, which required resolution of a nonlinear programming (NLP) problem to identify the serial element stiffness of quadriceps. Each identification step and cross validation of the estimated model in dynamic condition were evaluated through a quadratic error criterion. The results highlighted good accuracy, the efficiency of the identification protocol and the ability of the estimated model to predict the subject-specific behavior of the musculoskeletal system. From the comparison of parameter values between subjects, we discussed and explored the inter-subject variability of parameters in order to select parameters that have to be identified in each patient.
Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani
2016-01-01
This paper presents a novel adaptive neural network (NN) control of single-input and single-output uncertain nonlinear discrete-time systems under event sampled NN inputs. In this control scheme, the feedback signals are transmitted, and the NN weights are tuned in an aperiodic manner at the event sampled instants. After reviewing the NN approximation property with event sampled inputs, an adaptive state estimator (SE), consisting of linearly parameterized NNs, is utilized to approximate the unknown system dynamics in an event sampled context. The SE is viewed as a model and its approximated dynamics and the state vector, during any two events, are utilized for the event-triggered controller design. An adaptive event-trigger condition is derived by using both the estimated NN weights and a dead-zone operator to determine the event sampling instants. This condition both facilitates the NN approximation and reduces the transmission of feedback signals. The ultimate boundedness of both the NN weight estimation error and the system state vector is demonstrated through the Lyapunov approach. As expected, during an initial online learning phase, events are observed more frequently. Over time with the convergence of the NN weights, the inter-event times increase, thereby lowering the number of triggered events. These claims are illustrated through the simulation results.
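A toy sketch of the event-sampling idea on a scalar plant: transmissions occur only when a state-dependent dead-zone trigger fires, so events thin out as the state converges. The plant, gains, and thresholds are invented for illustration and are not the paper's NN-based design:

```python
# Event-sampled feedback: the sensor transmits (and the controller updates)
# only when the gap between the current state and the last transmitted state
# exceeds a threshold proportional to the state norm plus a dead zone.
def simulate(n=300, dt=0.01, sigma=0.25, eps=1e-3):
    x, x_last, events = 1.0, 1.0, 0
    for _ in range(n):
        gap = abs(x - x_last)               # event-trigger gap
        if gap > sigma * abs(x) + eps:      # dead-zone trigger condition
            x_last = x                      # transmit; controller updates
            events += 1
        u = -2.0 * x_last                   # control uses last sampled state
        x = x + dt * (0.5 * x + u)          # unstable plant, stabilized
    return x, events

x_final, events = simulate()
print(f"final state {x_final:.4f} with {events} events in 300 steps")
```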
CH-47F Improved Cargo Helicopter (CH-47F)
2015-12-01
Confidence Level of cost estimate for current APB: 50%. The Confidence Level of the CH-47F APB cost estimate, which was approved on April ...
SAR Baseline to Current SAR Baseline (TY $M), PAUC: Initial Development Estimate; Changes (Econ, Qty, Sch, Eng, Est, Oth, Spt); Total; Production Estimate: 10.316, -0.491, 3.003, -0.164, 2.273, 7.378 ...
SAR Baseline to Current SAR Baseline (TY $M), APUC: Initial Development Estimate; Changes (Econ, Qty, Sch, Eng, Est, Oth, Spt); Total; Production Estimate
CHAP-2 heat-transfer analysis of the Fort St. Vrain reactor core
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotas, J.F.; Stroh, K.R.
1983-01-01
The Los Alamos National Laboratory is developing the Composite High-Temperature Gas-Cooled Reactor Analysis Program (CHAP) to provide advanced best-estimate predictions of postulated accidents in gas-cooled reactor plants. The CHAP-2 reactor-core model uses the finite-element method to initialize a two-dimensional temperature map of the Fort St. Vrain (FSV) core and its top and bottom reflectors. The code generates a finite-element mesh, initializes noding and boundary conditions, and solves the nonlinear Laplace heat equation using temperature-dependent thermal conductivities, variable coolant-channel-convection heat-transfer coefficients, and specified internal fuel and moderator heat-generation rates. This paper discusses this method and analyzes an FSV reactor-core accident that simulates a control-rod withdrawal at full power.
Double Scaling in the Relaxation Time in the β-Fermi-Pasta-Ulam-Tsingou Model
NASA Astrophysics Data System (ADS)
Lvov, Yuri V.; Onorato, Miguel
2018-04-01
We consider the original β-Fermi-Pasta-Ulam-Tsingou system; numerical simulations and theoretical arguments suggest that, for a finite number of masses, a statistical equilibrium state is reached independently of the initial energy of the system. Using ensemble averages over initial conditions characterized by different Fourier random phases, we numerically estimate the time scale of equipartition and we find that for very small nonlinearity it matches the prediction based on exact wave-wave resonant interaction theory. We derive a simple formula for the nonlinear frequency broadening and show that when the phenomenon of overlap of frequencies takes place, a different scaling for the thermalization time scale is observed. Our result supports the idea that the Chirikov overlap criterion identifies a transition region between two different relaxation time scalings.
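A minimal numerical sketch of a β-FPUT experiment of this kind (small chain, illustrative parameters, not the authors' ensemble setup): energy starts in Fourier mode 1 and its spread across modes can be monitored over time.

```python
import numpy as np

# beta-FPUT chain with fixed ends: spring force F(x) = x + beta*x^3.
# Integrate with leapfrog and track energies of the linear normal modes.
N, beta, dt, steps = 32, 0.7, 0.05, 100000
q = np.sin(np.pi * np.arange(1, N + 1) / (N + 1))   # mode-1 displacement
p = np.zeros(N)

def force(q):
    d = np.diff(np.concatenate(([0.0], q, [0.0])))  # spring elongations
    f = d + beta * d**3                             # nonlinear spring force
    return f[1:] - f[:-1]                           # net force on each mass

def mode_energies(q, p):
    k = np.arange(1, N + 1)
    w = 2.0 * np.sin(np.pi * k / (2.0 * (N + 1)))   # linear mode frequencies
    S = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * np.outer(k, k) / (N + 1))
    Q, P = S @ q, S @ p                             # normal coordinates
    return 0.5 * (P**2 + (w * Q)**2)

for _ in range(steps):                              # leapfrog integrator
    p += 0.5 * dt * force(q)
    q += dt * p
    p += 0.5 * dt * force(q)

E = mode_energies(q, p)
print("energy share of modes 1-4:", float((E[:4].sum() / E.sum()).round(3)))
```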
Glycine's radiolytic destruction in ices: first in situ laboratory measurements for Mars.
Gerakines, Perry A; Hudson, Reggie L
2013-07-01
We report new laboratory studies of the radiation-induced destruction of glycine-containing ices for a range of temperatures and compositions that allow extrapolation to martian conditions. In situ infrared spectroscopy was used to study glycine decay rates as a function of temperature (from 15 to 280 K) and initial glycine concentrations in six mixtures whose compositions ranged from dry glycine to H2O+glycine (300:1). Results are presented in several systems of units, with cautions concerning their use. The half-life of glycine under the surface of Mars is estimated as an extrapolation of this data set to martian conditions, and trends in decay rates are described as are applications to Mars' near-surface chemistry.
NASA Astrophysics Data System (ADS)
Trifonov, A. P.; Korchagin, Yu. E.; Korol'kov, S. V.
2018-05-01
We synthesize the quasi-likelihood, maximum-likelihood, and quasioptimal algorithms for estimating the arrival time and duration of a radio signal with unknown amplitude and initial phase. The discrepancies between the hardware and software realizations of the estimation algorithm are shown. The characteristics of the synthesized-algorithm operation efficiency are obtained. Asymptotic expressions for the biases, variances, and the correlation coefficient of the arrival-time and duration estimates, which hold true for large signal-to-noise ratios, are derived. The accuracy losses of the estimates of the radio-signal arrival time and duration because of the a priori ignorance of the amplitude and initial phase are determined.
Using VS30 to Estimate Station ML Adjustments (dML)
NASA Astrophysics Data System (ADS)
Yong, A.; Herrick, J.; Cochran, E. S.; Andrews, J. R.; Yu, E.
2017-12-01
Currently, new seismic stations added to a regional seismic network cannot be used to calculate local or Richter magnitude (ML) until a revised region-wide amplitude decay function is developed. The new station must record a minimum number of local and regional events that meet specific amplitude requirements prior to re-calibration of the amplitude decay function. Therefore, there can be a significant delay between when a new station starts contributing real-time waveform packets and when its data can be included in magnitude estimation. The station component adjustments (dML; Uhrhammer et al., 2011) are calculated after first inverting for a new regional amplitude decay function, constrained by the sum of dML for long-running stations. Here, we propose a method to calculate an initial dML using known or proxy values of seismic site conditions. For site conditions, we use the time-averaged shear-wave velocity (VS) of the upper 30 m (VS30). We solve for dML as described in Equation (1) of Uhrhammer et al. (2011): ML = log(A) - log A0(r) + dML, where A is the maximum Wood and Anderson (1925) trace amplitude (mm), r is the distance (km), and dML is the station adjustment. Measured VS30 and estimated dML data comprise records from 887 horizontal components (east-west and north-south orientations) from 93 seismic monitoring stations in the California Integrated Seismic Network. VS30 values range from 202 m/s to 1464 m/s and dML values range from -1.10 to 0.39. VS30 and dML exhibit a positive correlation coefficient (R = 0.72), indicating that as VS30 increases, dML increases. This implies that greater site amplification (i.e., lower VS30) results in smaller ML. When we restrict VS30 < 760 m/s to focus on dML at soft-soil to soft-rock sites, R increases to 0.80. In locations where measured VS30 data are unavailable, we evaluate the use of proxy-based VS30 estimates based on geology, topographic slope and terrain classification, as well as other hybridized methods. Measured VS30 data or proxy-based VS30 estimates can be used for initial dML estimates that allow new stations to contribute to regional network ML estimates immediately, without the need to wait until a minimum set of earthquake data has been recorded.
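A minimal sketch of how such a VS30-based initial dML might be derived (the station values below are hypothetical and the log-linear form is an assumption; the study fits measured VS30 against network-derived dML):

```python
import numpy as np

# Hypothetical station data: measured VS30 (m/s) and network-derived dML
vs30 = np.array([250., 340., 480., 620., 760., 900., 1100., 1400.])
dml = np.array([-0.85, -0.60, -0.35, -0.20, -0.05, 0.05, 0.15, 0.30])

# Fit dML = a * log10(VS30) + b, mirroring the positive VS30-dML trend
a, b = np.polyfit(np.log10(vs30), dml, 1)

def initial_dml(vs30_value):
    """Proxy station adjustment for a new station, from measured or
    proxy VS30, usable until the network-wide re-calibration is done."""
    return a * np.log10(vs30_value) + b

print(initial_dml(300.0))   # soft-soil site -> negative dML expected
```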
NASA Astrophysics Data System (ADS)
Wada, Y.; Flörke, M.; Hanasaki, N.; Eisner, S.; Fischer, G.; Tramberend, S.; Satoh, Y.; van Vliet, M. T. H.; Yillia, P.; Ringler, C.; Burek, P.; Wiberg, D.
2016-01-01
To sustain growing food demand and rising standards of living, global water use increased nearly sixfold during the last 100 years, and it continues to grow. As water demand approaches the limits of water availability in many regions, each drop of water becomes increasingly valuable, and water must be managed more efficiently and intensively. However, soaring water use worsens the water scarcity conditions already prevalent in semi-arid and arid regions, increasing uncertainty for sustainable food production and economic development. Planning for future development and investments requires that we prepare water projections for the future. However, estimation is complicated because the future of the world's waters will be influenced by a combination of environmental, social, economic, and political factors, and only limited knowledge and data are available about freshwater resources and how they are being used. The Water Futures and Solutions (WFaS) initiative coordinates its work with other ongoing scenario efforts in order to establish a consistent set of new global water scenarios based on the shared socio-economic pathways (SSPs) and the representative concentration pathways (RCPs). The WFaS "fast-track" assessment uses three global water models, namely H08, PCR-GLOBWB, and WaterGAP. This study assesses the state of the art for estimating and projecting water use regionally and globally in a consistent manner. It provides an overview of the different approaches, their uncertainties, strengths, and weaknesses, and the types of management and policy decisions for which current estimation methods are useful. We also discuss the additional information most needed to improve water use estimates and to assess a greater range of management options across the water-energy-climate nexus.
Korennoy, F I; Gulenkin, V M; Gogin, A E; Vergne, T; Karaulov, A K
2017-12-01
In 1977, Ukraine experienced a local epidemic of African swine fever (ASF) in the Odessa region. A total of 20 settlements were affected during the course of the epidemic, including both large farms and backyard households. Thanks to timely interventions, virus circulation was successfully eradicated within 6 months, with no additional outbreaks. A detailed report of the outbreak investigation has been publicly available since 2014. The report contains quantitative data that allow the ASF spread dynamics during the epidemic to be studied. In our study, we used this historical epidemic to estimate the basic reproductive number of the ASF virus both within and between farms. The basic reproductive number (R0) represents the average number of secondary infections caused by one infectious unit during its infectious period in a susceptible population. Calculations were made under the assumption of exponential initial growth by fitting an approximating curve to the initial segments of the epidemic curves. R0 within farms and between farms was estimated at 7.46 (95% confidence interval: 5.68-9.21) and 1.65 (1.42-1.88), respectively. The corresponding daily transmission rates were estimated at 1.07 (0.81-1.32) and 0.09 (0.07-0.10). These estimates based on historical data are consistent with those obtained from the recent epidemic currently affecting eastern Europe. Such results contribute to the published knowledge on ASF transmission dynamics under natural conditions and could be used to model and predict the spread of ASF in affected and non-affected regions and to evaluate the effectiveness of different control measures. © 2016 Blackwell Verlag GmbH.
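A sketch of the exponential-fit idea under stated assumptions (the case counts and infectious period below are hypothetical; the study's exact estimator is not reproduced here):

```python
import numpy as np

# Hypothetical initial segment of an epidemic curve: infected units per
# day before interventions take effect
days = np.arange(8)
cases = np.array([1, 2, 4, 6, 11, 18, 31, 52])

# Fit exponential growth I(t) = I0 * exp(r * t) on a log scale
r, log_i0 = np.polyfit(days, np.log(cases), 1)

# One common SIR-type mapping with mean infectious period T (days):
# R0 = 1 + r * T
T = 7.0
print(f"growth rate r = {r:.3f}, R0 ~ {1 + r * T:.2f}")
```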
An analysis of competitive bidding by providers for indigent medical care contracts.
Kirkman-Liff, B L; Christianson, J B; Hillman, D G
1985-01-01
This article develops a model of behavior in bidding for indigent medical care contracts in which bidders set bid prices to maximize their expected utility, conditional on estimates of variables which affect the payoff associated with winning or losing a contract. The hypotheses generated by this model are tested empirically using data from the first round of bidding in the Arizona indigent health care experiment. The behavior of bidding organizations in Arizona is found to be consistent in most respects with the predictions of the model. Bid prices appear to have been influenced by estimated costs and by expectations concerning the potential loss from not securing a contract, the initial wealth of the bidding organization, and the expected number of competitors in the bidding process. PMID:4086301
Modeling highway travel time distribution with conditional probability models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling
Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits data dumps to ATRI on a regular basis. Each data transmission contains a truck's location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions between successive segments can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information are added. This methodology can be useful for estimating performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
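A sketch of the convolution step, assuming hypothetical discretized link travel-time distributions; the study's link-to-link dependence enters through the conditional PMF, as in the second function:

```python
import numpy as np

# Hypothetical discretized travel-time PMFs (minute bins) for two
# successive Interstate links
p_link1 = np.array([0.00, 0.05, 0.20, 0.35, 0.25, 0.10, 0.05])
p_link2 = np.array([0.00, 0.10, 0.30, 0.30, 0.20, 0.10])

# Route PMF under independence: plain convolution of the link PMFs
p_route_indep = np.convolve(p_link1, p_link2)

def route_pmf_conditional(p1, p2_given_t1):
    """Route PMF with link-to-link dependence: p2_given_t1[i] is the
    PMF of link-2 time conditional on link-1 time falling in bin i."""
    m = p2_given_t1.shape[1]
    out = np.zeros(len(p1) + m - 1)
    for i, w in enumerate(p1):
        out[i:i + m] += w * p2_given_t1[i]   # shift by t1, weight by P(t1)
    return out
```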
NASA Astrophysics Data System (ADS)
Aamir, Muhammad; Liao, Qiang; Hong, Wang; Xun, Zhu; Song, Sihong; Sajid, Muhammad
2017-02-01
The high heat transfer performance of spray cooling on structured surfaces might serve as an additional safety measure for installations threatened by rapid temperature increases. The purpose of the present experimental study is to explore the heat transfer performance of a structured surface under different spray conditions and surface temperatures. Two cylindrical stainless steel samples were used, one with a pyramid-pin structured surface and the other with a smooth surface. Surface heat fluxes of 3.60, 3.46, 3.93 and 4.91 MW/m2 were estimated for sample initial average temperatures of 600, 700, 800 and 900 °C, respectively, at an inlet pressure of 1.0 MPa. A maximum cooling rate of 507 °C/s was estimated at an inlet pressure of 0.7 MPa and 900 °C for the structured surface, while for the smooth surface a maximum cooling rate of 356 °C/s was attained at 1.0 MPa and 700 °C. The structured surface exchanged heat more effectively during spray cooling at an initial sample temperature of 900 °C, with surface heat flux increased by factors of 1.9, 1.56, 1.66 and 1.74 relative to the smooth surface at inlet pressures of 0.4, 0.7, 1.0 and 1.3 MPa, respectively. For the smooth surface, a decreasing trend in estimated heat flux was observed when the initial sample temperature was increased from 600 to 900 °C. A temperature-based function specification method was utilized to estimate surface heat flux and surface temperature. Limited published work is available on the application of structured-surface spray cooling for the safety of stainless steel structures in very-high-temperature scenarios such as nuclear safety vessels and liquefied natural gas storage tanks.
Chapelle, Francis H.; Thomas, Lashun K.; Bradley, Paul M.; Rectanus, Heather V.; Widdowson, Mark A.
2012-01-01
Aquifer sediment and groundwater chemistry data from 15 Department of Defense facilities located throughout the United States were collected and analyzed with the goal of estimating the amount of natural organic carbon needed to initiate reductive dechlorination in groundwater systems. Aquifer sediments were analyzed for hydroxylamine- and NaOH-extractable organic carbon, yielding a probable underestimate of potentially bioavailable organic carbon (PBOC). Aquifer sediments were also analyzed for total organic carbon (TOC) using an elemental combustion analyzer, yielding a probable overestimate of bioavailable carbon. Concentrations of PBOC correlated linearly with TOC with a slope near one. However, concentrations of PBOC were consistently five to ten times lower than TOC. When mean concentrations of dissolved oxygen observed at each site were plotted versus PBOC, the plot showed that anoxic conditions were initiated at approximately 200 mg/kg of PBOC. Similarly, the accumulation of reductive dechlorination daughter products relative to parent compounds increased at a PBOC concentration of approximately 200 mg/kg. Concentrations of total hydrolysable amino acids (THAA) in sediments also increased at approximately 200 mg/kg, and bioassays showed that sediment CO2 production correlated positively with THAA. The results of this study provide an estimate of the threshold amounts of bioavailable carbon in aquifer sediments (approximately 200 mg/kg of PBOC; approximately 1,000 to 2,000 mg/kg of TOC) needed to support reductive dechlorination in groundwater systems.
Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol
2017-12-01
Exploratory preclinical and clinical trials may involve a small number of patients, making it difficult to calculate and analyze pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of the classical first-order conditional estimation with interaction (FOCE-I) method and expectation-maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distribution from data sets having a low number of subjects. One hundred data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω2), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with the clinical theophylline data available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter estimates (fixed effect and random effect) showed that all four methods performed equally well at the lower IIV levels, while the FOCE-I method performed better than the EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
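The error metrics below follow the usual SSE conventions (the paper's exact formulas are not reproduced); a minimal sketch:

```python
import numpy as np

def ree(estimates, true_value):
    """Relative estimation error (%) of each replicate's estimate."""
    return 100.0 * (np.asarray(estimates) - true_value) / true_value

def rrmse(estimates, true_value):
    """Relative root mean squared error (%) across SSE replicates."""
    return np.sqrt(np.mean(ree(estimates, true_value) ** 2))

# e.g. clearance estimates from 100 simulated data sets, true CL = 2.8 L/h
cl_hat = np.random.default_rng(1).normal(2.9, 0.4, 100)
print(rrmse(cl_hat, 2.8))
```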
Factors affecting UV/H2O2 inactivation of Bacillus atrophaeus spores in drinking water.
Zhang, Yongji; Zhang, Yiqing; Zhou, Lingling; Tan, Chaoqun
2014-05-05
This study estimates the performance of UV treatment with added H2O2 for inactivating Bacillus atrophaeus spores. The effects of factors affecting the inactivation were investigated, including initial H2O2 dose, UV irradiance, initial cell density, initial solution pH, and various inorganic anions. Under the experimental conditions, B. atrophaeus spore inactivation followed both the modified Hom model and Chick's model. The results revealed that H2O2 played dual roles in the reactions, with the optimum reduction of 5.88 lg achieved at 0.5 mM H2O2 for 10 min. The inactivation was affected by UV irradiance, with better inactivation achieved at higher irradiance. An increase in the initial cell density slowed down the inactivation process. A slightly acid condition at pH 5 was found to be optimal; the inactivation effect within 10 min followed the order pH 5 > pH 7 > pH 9 > pH 3 > pH 11. The effects of three added inorganic anions, sulfate (SO4^2-), nitrate (NO3^-), and carbonate (CO3^2-), were investigated and compared. The inactivation effect within 10 min followed the order control group > SO4^2- > NO3^- > CO3^2-. Copyright © 2014 Elsevier B.V. All rights reserved.
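A minimal sketch of fitting Chick's model to hypothetical survival data (the modified Hom model adds shape parameters and is not shown):

```python
import numpy as np

# Hypothetical survival data: contact time (min) vs surviving spore count
t = np.array([0., 2., 4., 6., 8., 10.])
n = np.array([1.0e7, 2.1e6, 4.3e5, 9.5e4, 2.0e4, 4.5e3])

# Chick's model: ln(N/N0) = -k t, fitted through the origin
y = np.log(n / n[0])
k = -np.sum(t * y) / np.sum(t * t)        # least-squares slope
lg_reduction_10min = k * 10.0 / np.log(10.0)
print(f"k = {k:.3f} 1/min, predicted {lg_reduction_10min:.2f} lg in 10 min")
```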
Riley, Gerald F; Rupp, Kalman
2015-01-01
Objective: To estimate cumulative DI, SSI, Medicare, and Medicaid expenditures from initial disability benefit award to death or age 65. Data Sources: Administrative records for a cohort of new CY2000 DI and SSI awardees aged 18-64. Study Design: Actual expenditures were obtained for 2000-2006/7. Subsequent expenditures were simulated using a regression-adjusted Markov process to assign individuals to annual disability benefit coverage states. Program expenditures were simulated conditional on assigned benefit coverage status. Estimates reflect the present value of expenditures at initial award in 2000 and are expressed in constant 2012 dollars. Expenditure estimates were also updated to reflect benefit levels and characteristics of new awardees in 2012. Data Collection: We matched records for a 10 percent nationally representative sample. Principal Findings: Overall average cumulative expenditures are $292,401 through death or age 65, with 51.4 percent for cash benefits and 48.6 percent for health care. Expenditures are about twice the average for individuals first awarded benefits at ages 18-30. Overall average expenditures increased by 10 percent when updated for a simulated 2012 cohort. Conclusions: Data on cumulative expenditures, especially combined across programs, are useful for evaluating the long-term payoff of investments designed to modify entry to and exit from the disability rolls. PMID:25109322
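A heavily simplified sketch of the simulation idea, with hypothetical states, transition matrix, costs, and discount rate (the study's regression-adjusted transitions are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual coverage states: 0 = DI cash only, 1 = DI + Medicare,
# 2 = SSI + Medicaid, 3 = exited the rolls/deceased (absorbing)
P = np.array([[0.70, 0.20, 0.05, 0.05],
              [0.05, 0.80, 0.05, 0.10],
              [0.05, 0.05, 0.80, 0.10],
              [0.00, 0.00, 0.00, 1.00]])
cost = np.array([12_000., 22_000., 18_000., 0.])   # annual $ per state

def simulate_pv(start_state, age_at_award, discount=0.03):
    """Present value of simulated expenditures from award to age 65."""
    state, pv = start_state, 0.0
    for year in range(65 - age_at_award):
        pv += cost[state] / (1.0 + discount) ** year
        state = rng.choice(4, p=P[state])
    return pv

print(np.mean([simulate_pv(0, 45) for _ in range(10_000)]))
```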
Dawes, Richard; Passalacqua, Alessio; Wagner, Albert F; Sewell, Thomas D; Minkoff, Michael; Thompson, Donald L
2009-04-14
We develop two approaches for growing a fitted potential energy surface (PES) by the interpolating moving least-squares (IMLS) technique using classical trajectories. We illustrate both approaches by calculating nitrous acid (HONO) cis → trans isomerization trajectories under the control of ab initio forces from low-level HF/cc-pVDZ electronic structure calculations. In this illustrative example, as few as 300 ab initio energy/gradient calculations are required to converge the isomerization rate constant at a fixed energy to approximately 10%. Neither approach requires any preliminary electronic structure calculations or initial approximate representation of the PES (beyond information required for trajectory initial conditions). Hessians are not required. Both approaches rely on the fitting error estimation properties of IMLS fits. The first approach, called IMLS-accelerated direct dynamics, propagates individual trajectories directly with no preliminary exploratory trajectories. The PES is grown "on the fly" with the computation of new ab initio data only when a fitting error estimate exceeds a prescribed tight tolerance. The second approach, called dynamics-driven IMLS fitting, uses relatively inexpensive exploratory trajectories to both determine and fit the dynamically accessible configuration space. Once exploratory trajectories no longer find configurations with fitting error estimates higher than the designated accuracy, the IMLS fit is considered to be complete and usable in classical trajectory calculations or other applications.
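A one-dimensional sketch of the IMLS idea under stated assumptions (real PESs are multidimensional and use richer bases and weight functions); the degree-difference error estimate is what drives the decision to add new ab initio points:

```python
import numpy as np

def imls_fit(x_query, x_data, y_data, degree, eps=1e-3):
    """Moving least-squares polynomial value at x_query (1D sketch).
    Weights grow without bound as x_query approaches a data point,
    so the fit effectively interpolates the ab initio energies."""
    w = 1.0 / ((x_query - x_data) ** 2 + eps ** 2)
    V = np.vander(x_data, degree + 1)          # polynomial basis
    W = np.diag(w)
    coef = np.linalg.solve(V.T @ W @ V, V.T @ W @ y_data)
    return np.polyval(coef, x_query)

def fit_error_estimate(x_query, x_data, y_data):
    """Degree-difference error estimate: where two fit degrees disagree,
    a new ab initio calculation would be requested."""
    return abs(imls_fit(x_query, x_data, y_data, 2)
               - imls_fit(x_query, x_data, y_data, 3))

x = np.linspace(0.0, 3.0, 12)
y = np.cos(x)                                   # stand-in for energies
print(fit_error_estimate(1.234, x, y))
```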
NASA Astrophysics Data System (ADS)
Shriwastaw, R. S.; Sawarn, Tapan K.; Banerjee, Suparna; Rath, B. N.; Dubey, J. S.; Kumar, Sunil; Singh, J. L.; Bhasin, Vivek
2017-09-01
The present study involves the estimation of ring tensile properties of Indian Pressurised Heavy Water Reactor (IPHWR) fuel cladding made of Zircaloy-4, subjected to experiments under simulated loss-of-coolant-accident (LOCA) conditions. Isothermal steam oxidation experiments were conducted on clad tube specimens at temperatures ranging from 900 to 1200 °C at intervals of 50 °C for different soaking periods, with subsequent quenching in water at ambient temperature. The specimens that survived quenching were then subjected to ambient-temperature ring tension tests (RTT). The microstructure was correlated with the mechanical properties. The yield strength (YS) and ultimate tensile strength (UTS) initially increased with oxidation temperature and duration but then decreased with further oxidation. Ductility is adversely affected by rising oxidation temperature and longer holding times. A higher fraction of the load-bearing phase, with a lower oxygen content within it, ensures higher residual ductility. The cladding shows almost zero ductility in RTT when the load-bearing phase fraction is less than 0.72 and its average oxygen concentration is greater than 0.58 wt%.
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering in networked systems and sensor networks, where inter-sensor communication and observations occur at the same time scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor, depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1) the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2) the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying the associated switched (random) Riccati equation as a random dynamical system, the switching being dictated by a non-stationary Markov chain on the network graph.
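A heavily simplified scalar caricature of the gossip idea (the network, gains, and parameters below are assumptions, not the paper's setup): only one sensor observes, yet random swaps of filtering states keep every sensor's error bounded despite unstable dynamics:

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar unstable dynamics, N sensors on a line graph; only sensor 0
# takes measurements (a caricature of weak distributed detectability)
a, q, r, N = 1.02, 0.1, 0.5, 6
h = np.array([1.0] + [0.0] * (N - 1))     # local observation gains
x, est, cov = 0.0, np.zeros(N), np.ones(N)

for t in range(2000):
    x = a * x + np.sqrt(q) * rng.normal()            # true state
    i = rng.integers(N - 1)                          # random active link (i, i+1)
    est[[i, i + 1]] = est[[i + 1, i]]                # gossip: swap filtering states
    cov[[i, i + 1]] = cov[[i + 1, i]]
    y = h * x + np.sqrt(r) * rng.normal(size=N)      # local observations
    pred, pcov = a * est, a * a * cov + q            # time update
    k = pcov * h / (h * h * pcov + r)                # local Kalman gain (0 if h=0)
    est = pred + k * (y - h * pred)                  # measurement update
    cov = (1.0 - k * h) * pcov
```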
Nutrition advocacy and national development: the PROFILES programme and its application.
Burkhalter, B R; Abel, E; Aguayo, V; Diene, S M; Parlato, M B; Ross, J S
1999-01-01
Investment in nutritional programmes can contribute to economic growth and is cost-effective in improving child survival and development. In order to communicate this to decision-makers, the PROFILES nutrition advocacy and policy development programme was applied in certain developing countries. Effective advocacy is necessary to generate financial and political support for scaling up from small pilot projects and maintaining successful national programmes. The programme uses scientific knowledge to estimate development indicators such as mortality, morbidity, fertility, school performance and labour productivity from the size and nutritional condition of populations. Changes in nutritional condition are estimated from the costs, coverage and effectiveness of proposed programmes. In Bangladesh this approach helped to gain approval and funding for a major nutrition programme. PROFILES helped to promote the nutrition component of an early childhood development programme in the Philippines, and to make nutrition a top priority in Ghana's new national child survival strategy. The application of PROFILES in these and other countries has been supported by the United States Agency for International Development, the United Nations Children's Fund, the World Bank, the Asian Development Bank, the Micronutrient Initiative and other bodies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorrentino, Luigi; Masiani, Renato; Benedetti, Stefano
2008-07-08
This paper presents an ongoing experimental program on unreinforced masonry walls undergoing free rocking. The aim of the laboratory campaign is the estimation of the kinetic energy damping exhibited by walls released with non-zero initial conditions of motion. Such energy damping is necessary for dynamic modelling of unreinforced masonry local mechanisms. After a brief review of the literature on this topic, the main features of the laboratory tests are presented. The program involves the experimental investigation of several parameters: 1) unit material (brick or tuff), 2) wall aspect ratio (ranging between 14.5 and 7.1), 3) restraint condition (two-sided or one-sided rocking), and 4) depth of the contact surface between facade and transverse walls (one-sided rocking only). All walls are single wythe and the mortar is pozzuolanic. The campaign is still in progress. However, it is possible to present results on most of the mechanical properties of the mortar and bricks. Moreover, a few time histories are reported, already indicating the need to correct some of the assumptions frequent in the literature.
Hydrologic analysis of the Rio Grande Basin north of Embudo, New Mexico; Colorado and New Mexico
Hearne, G.A.; Dewey, J.D.
1988-01-01
Water yield was estimated for each of the five regions that represent contrasting hydrologic regimes in the 10,400 square miles of the Rio Grande basin above Embudo, New Mexico. Water yield was estimated as 2,800 cubic feet per second for the San Juan Mountains and 28 cubic feet per second for the Taos Plateau. Evapotranspiration exceeded precipitation by 150 cubic feet per second on the Costilla Plains and 2,400 cubic feet per second in the Alamosa Basin. A three-dimensional model was constructed to represent the aquifer system in the Alamosa Basin. A preliminary analysis concluded that: (1) a seven-layer model representing 3,200 feet of saturated thickness could accurately simulate the behavior of the flow equation; and (2) the 1950 condition was approximately stable and would be a satisfactory initial condition. With reasonable modifications to groundwater withdrawals, the model simulated 1950-79 water-level declines close to measured values. Sensitivity tests indicated that evapotranspiration salvage was the major source, 69 to 82 percent, of groundwater withdrawals. Evapotranspiration salvage was projected to be the source of most withdrawals. (USGS)
Probabilistic clustering of rainfall condition for landslide triggering
NASA Astrophysics Data System (ADS)
Rossi, Mauro; Luciani, Silvia; Cesare Mondini, Alessandro; Kirschbaum, Dalia; Valigi, Daniela; Guzzetti, Fausto
2013-04-01
Landslides are widespread natural and man-made phenomena. They are triggered by earthquakes, rapid snow melt, and human activities, but mostly by typhoons and intense or prolonged rainfall. In Italy they are mostly triggered by intense precipitation. The prediction of rainfall-triggered landslides over large areas is commonly based on empirical models. Empirical rainfall thresholds are used to identify rainfall conditions under which landslides may initiate. It is common practice to define such thresholds by assuming a power-law lower boundary in the rainfall intensity-duration or cumulative rainfall-duration space, above which landslides can occur. The boundary is defined from rainfall conditions associated with landslide phenomena using heuristic approaches, and does not consider rainfall events that did not cause landslides. Here we present a new, fully automatic method to estimate the probability of landslide occurrence associated with rainfall conditions characterized by measures of intensity or cumulative rainfall and rainfall duration. The method splits past rainfall events into two groups, events causing landslides and its complement, and estimates their probabilistic distributions. The probabilistic membership of a new event in one of the two clusters is then estimated. The method does not assume any a priori threshold model, but simply exploits the empirical distribution of rainfall events. The approach was applied in the Umbria region, Central Italy, where a catalogue of landslide timings was obtained by searching chronicles, blogs, and other sources of information for the period 2002-2012. The approach was tested using rain gauge measurements and satellite rainfall estimates (NASA TRMM-v6), in both cases allowing the identification of rainfall conditions triggering landslides in the region. Compared with existing threshold definition methods, the proposed one (i) largely reduces the subjectivity in the choice of the threshold model and in how it is calculated, and (ii) can be set up more easily in other study areas. The proposed approach can be conveniently integrated into existing early-warning systems to improve the accuracy of the estimated landslide occurrence probability associated with rainfall events and its uncertainty.
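A sketch of the two-cluster probabilistic membership, assuming Gaussian densities in log space purely for illustration (the study estimates empirical distributions from the catalogue; the event values below are hypothetical):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical catalogue in (log10 duration [h], log10 cumulative rain [mm])
trig = np.array([[1.2, 1.9], [1.5, 2.1], [0.9, 1.7], [1.8, 2.4]])
no_trig = np.array([[1.0, 1.1], [1.6, 1.5], [0.8, 0.9], [2.0, 1.8]])

def fit(events):
    return multivariate_normal(events.mean(axis=0), np.cov(events.T))

f_trig, f_no = fit(trig), fit(no_trig)
prior = len(trig) / (len(trig) + len(no_trig))

def p_landslide(duration_h, rain_mm):
    """Probabilistic membership of a new rainfall event in the
    landslide-triggering cluster (Bayes rule on the two densities)."""
    z = np.log10([duration_h, rain_mm])
    num = prior * f_trig.pdf(z)
    return num / (num + (1.0 - prior) * f_no.pdf(z))

print(p_landslide(24.0, 120.0))
```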
NASA Technical Reports Server (NTRS)
Nearing, Grey S.; Crow, Wade T.; Thorp, Kelly R.; Moran, Mary S.; Reichle, Rolf H.; Gupta, Hoshin V.
2012-01-01
Observing system simulation experiments were used to investigate ensemble Bayesian state updating data assimilation of observations of leaf area index (LAI) and soil moisture (theta) for the purpose of improving single-season wheat yield estimates with the Decision Support System for Agrotechnology Transfer (DSSAT) CropSim-Ceres model. Assimilation was conducted in an energy-limited environment and a water-limited environment. Modeling uncertainty was prescribed to weather inputs, soil parameters and initial conditions, and cultivar parameters and through perturbations to model state transition equations. The ensemble Kalman filter and the sequential importance resampling filter were tested for the ability to attenuate effects of these types of uncertainty on yield estimates. LAI and theta observations were synthesized according to characteristics of existing remote sensing data, and effects of observation error were tested. Results indicate that the potential for assimilation to improve end-of-season yield estimates is low. Limitations are due to a lack of root zone soil moisture information, error in LAI observations, and a lack of correlation between leaf and grain growth.
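A generic stochastic ensemble Kalman filter analysis step of the kind used in such assimilation experiments (a sketch, not the study's exact configuration):

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic ensemble Kalman filter analysis step.

    X : (n_state, n_ens) forecast ensemble (e.g. crop model states)
    y : (n_obs,) observations (e.g. remotely sensed LAI, theta)
    H : (n_obs, n_state) linearized observation operator
    R : (n_obs, n_obs) observation-error covariance
    """
    n_obs, n_ens = len(y), X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    P = Xp @ Xp.T / (n_ens - 1)                       # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(n_obs), R, n_ens).T                  # perturbed observations
    return X + K @ (Y - H @ X)                        # analysis ensemble
```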
NASA Astrophysics Data System (ADS)
Kramarenko, V. V.; Nikitenkov, A. N.; Molokov, V. Y.; Matveenko, I. A.; Shramok, A. V.
2015-11-01
The article deals with a characteristic of the initial condition of fine-grained soils: their structural strength, pstr. Estimation and measurement of this parameter during soil testing are of primary importance for defining physical and mechanical properties, as well as for the subsequent calculation of foundation settlements, a topic insufficiently covered in codes of practice and national standards and inefficiently applied in engineering-geological investigations. The article reveals the relationship between soil physical properties and occurrence depth, which makes it possible to forecast pstr over a given territory.
NASA Astrophysics Data System (ADS)
Kammerdiner, Alla; Xanthopoulos, Petros; Pardalos, Panos M.
2007-11-01
In this chapter a potential problem with applying Granger causality based on simple vector autoregressive (VAR) modeling to EEG data is investigated. Although some initial studies tested whether the data support the stationarity assumption of the VAR model, the stability of the estimated model has rarely (if ever) been verified. In fact, when the stability condition is violated, the process may exhibit random-walk-like behavior or even be explosive. The problem is illustrated by an example.
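The stability check the chapter calls for is standard: form the companion matrix of the estimated VAR(p) and verify that every eigenvalue lies strictly inside the unit circle. A minimal sketch:

```python
import numpy as np

def var_is_stable(coef_matrices):
    """Stability check for an estimated VAR(p) model
    x_t = A1 x_{t-1} + ... + Ap x_{t-p} + e_t:
    stable iff all eigenvalues of the companion matrix lie
    strictly inside the unit circle."""
    p = len(coef_matrices)
    k = coef_matrices[0].shape[0]
    companion = np.zeros((k * p, k * p))
    companion[:k, :] = np.hstack(coef_matrices)
    companion[k:, :-k] = np.eye(k * (p - 1))
    return bool(np.all(np.abs(np.linalg.eigvals(companion)) < 1.0))

# Example: a bivariate VAR(1) whose largest eigenvalue exceeds 1
A1 = np.array([[1.01, 0.00],
               [0.30, 0.50]])
print(var_is_stable([A1]))   # False -> explosive / random-walk-like
```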
NASA Astrophysics Data System (ADS)
Wendt, Anke S.; D'Arco, Philippe; Goffé, Bruno; Oberhänsli, Roland
1993-02-01
Radiating tensional cracks around α-quartz inclusions in almandine have been observed in metapelite samples from the southeastern Saih Hatat tectonic window, northeastern Oman Mountains. These almandines show an inclusion-rich (glaucophane + epidote) and strongly deformed core with inclusions of different mineral phases. The rim of the same almandines is inclusion-poor and shows only quartz, apatite, zircon, rutile and Ba–Al phosphates as inclusions. Quartz and apatite inclusions in the rim are single crystals often surrounded by radial cracks. These radial cracks developed during uplift by the dilation of α-quartz (4-5 vol%) without a phase transformation. Subsequently, these cracks were filled with kaolinite, phengite (Si content 3.4 per formula unit, p.f.u.), chlorite and Fe oxides. We calculated the appearance of radial cracks without phase transformation using the mathematical procedure of Van der Molen and Van Roermund [1]. This calculation involves terms for thermal expansion, isothermal compressibility and shear modulus for the example of α-quartz and almandine over the same P and T interval during a retrograde path. Published geothermobarometric estimates give pressures of between 1.0 and 2.0 GPa and temperatures of between 450 and 600°C for the peak conditions for these rocks of the Saih Hatat tectonic window. On the basis of these P-T data we calculated different retrograde P-T paths in the α-quartz domain. Initiation of garnet fracturing is dependent on the P-T starting conditions and the component of isothermal compression of the retrograde path. The calculations yield a set of smooth monotonic curves whose exact position on the P-T plane between 0.1 and 0.6 GPa and 40 and 500°C depends on the initial P-T conditions and the component of isothermal compressibility of the retrograde P-T paths. This model can be used in general terms to estimate pressure and temperature for the following cases: (1) If independent evidence (such as petrological data) allows the determination of the final pressure at which radial cracks appeared, the initial inclusion pressure can be recalculated. (2) If the initial inclusion pressure is known (e.g. from petrological data), the conditions of radial cracking can be calculated, and the pair initial pressure-final pressure leads to an estimate of the shape of the retrograde P-T path as a function of its component of isothermal decompression. In the example from the northeastern Saih Hatat tectonic window the late syntectonic growth of albite + phengite + kaolinite suggests that the final pressure for fracturing ranged between 0.4 GPa and 0.5 GPa at temperatures of 300°C. These values correspond to high initial pressures of at least 2.0 GPa at a temperature of 550°C. The following geodynamic model is suggested: A regionally extended metamorphism led to the growth of inclusion-rich garnets in the rocks from the northeastern Saih Hatat tectonic window at depths of about 30 km ( < 0.1 GPa, about 450°C). Continuing prograde metamorphism at a depth of more than 60 km with P < 2.0 GPa and T ≈ 550°C affected a metapelite unit that is only exposed immediately south of As Sifah village. In this area, clear rims of almandine grew around the older garnets and entrapped mainly quartz and apatite. During uplift along a retrograde P-T path with a large component of isothermal decompression, radial cracks around α-quartz inclusions developed in the rims of almandines at a depth of about 12 km (0.4-0.5 GPa, ≥ 300°C).
Level 1 Tornado PRA for the High Flux Beam Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bozoki, G.E.; Conrad, C.S.
This report describes a risk analysis primarily directed at providing an estimate of the frequency of tornado-induced damage to the core of the High Flux Beam Reactor (HFBR), and thus it constitutes a Level 1 Probabilistic Risk Assessment (PRA) covering tornado-induced accident sequences. The basic methodology of the risk analysis was to develop a "tornado-specific" plant logic model that integrates internal random hardware failures with failures caused externally by the tornado strike and includes operator errors worsened by the tornado-modified environment. The tornado hazard frequency, as well as earlier prepared structural and equipment fragility data, were used as input data to the model. To keep modeling and calculational complexity as simple as reasonable, a "bounding"-type, slightly conservative approach was applied. Through a thorough screening process, a single dominant initiating event was selected as a representative initiator, defined as "Tornado-Induced Loss of Offsite Power." The frequency of this initiator was determined to be 6.37E-5/year. The safety response of the HFBR facility resulted in a total Conditional Core Damage Probability of 0.621. Thus, the point estimate of the HFBR's Tornado-Induced Core Damage Frequency (CDF) was found to be CDF(Tornado) = 3.96E-5/year. This value represents only 7.8% of the internal CDF and is thus considered a small contribution to the overall facility risk expressed in terms of total Core Damage Frequency. In addition to providing the estimate of CDF(Tornado), the report documents the relative importance of the various tornado-induced system, component, and operator failures that contribute most to CDF(Tornado).
Increased suicide risk and clinical correlates of suicide among patients with Parkinson's disease.
Lee, Taeyeop; Lee, Hochang Benjamin; Ahn, Myung Hee; Kim, Juyeon; Kim, Mi Sun; Chung, Sun Ju; Hong, Jin Pyo
2016-11-01
Parkinson's disease (PD) is a debilitating, neurodegenerative condition frequently complicated by psychiatric symptoms. Patients with PD may be at higher risk for suicide than the general population, but previous estimates are limited and conflicting. The aim of this study is to estimate the suicide rate based on a clinical case registry and to identify risk factors for suicide among patients diagnosed with PD. The target sample consisted of 4362 patients diagnosed with PD who were evaluated at a general hospital in Seoul, South Korea, from 1996 to 2012. The standardized mortality ratio (SMR) for suicide among PD patients was estimated. In order to identify the clinical correlates of suicide, a case-control study was conducted based on retrospective chart review. The 29 suicide cases (age: 62.3 ± 13.7 years; females: 34.5%) were matched with 116 non-suicide controls (age: 63.5 ± 9.2 years; females: 56.9%) by the year of initial PD evaluation. The SMR for suicide in PD patients was 1.99 (95% CI 1.33-2.85). The mean duration from initial diagnosis to suicide among cases was 6.1 ± 3.5 years. Case-control analysis revealed that male sex, initial extremity onset of motor symptoms, history of depressive disorder, delusion, any psychiatric disorder, and higher L-dopa dosage were significantly associated with suicide among PD patients. Other PD-related variables, such as the UPDRS motor score, were not significantly associated with death by suicide. Suicide risk in PD patients is approximately 2 times higher than that in the general population. Psychiatric disorders and L-dopa medication need further attention with respect to suicide. Copyright © 2016 Elsevier Ltd. All rights reserved.
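A sketch of the SMR calculation with an exact Poisson confidence interval (the expected count below is back-calculated from the reported SMR for illustration, not taken from the study's person-year tables):

```python
from scipy.stats import chi2

def smr_with_ci(observed, expected, alpha=0.05):
    """Standardized mortality ratio with an exact Poisson (Garwood) CI.
    'expected' comes from applying general-population suicide rates to
    the cohort's person-years (not reproduced from the study)."""
    smr = observed / expected
    lo = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected)
    hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return smr, lo, hi

# Back-calculated illustration: 29 observed suicides, SMR of about 1.99
print(smr_with_ci(29, 29 / 1.99))
```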
Cobb, G; Bland, R M
2013-01-01
To explore the financial implications of applying the WHO guidelines for the nutritional management of HIV-infected children in a rural South African HIV programme. The WHO guidelines describe Nutritional Care Plans (NCPs) for three categories of HIV-infected children: NCP-A: growing adequately; NCP-B: weight-for-age z-score (WAZ) ≤-2 but no evidence of severe acute malnutrition (SAM), confirmed weight loss/growth curve flattening, or a condition with increased nutritional needs (e.g. tuberculosis); NCP-C: SAM. In resource-constrained settings, children requiring NCP-B or NCP-C usually need supplementation to achieve the additional energy recommendation. We estimated the proportion of children initiating antiretroviral treatment (ART) in the Hlabisa HIV Programme who would have been eligible for supplementation in 2010. The cost of supplying 26 weeks of supplementation as a proportion of the cost of supplying ART to the same group was calculated. A total of 251 children aged 6 months to 14 years initiated ART. Eighty-eight required 6-month NCP-B, including 41 with a WAZ ≤-2 (no evidence of SAM) and 47 with a WAZ >-2 with co-existing morbidities including tuberculosis. Additionally, 25 children had SAM and required 10 weeks of NCP-C followed by 16 weeks of NCP-B. Thus, 113 of 251 (45%) children were eligible for nutritional supplementation, at an estimated overall cost of $11,136 using 2010 exchange rates. These costs are an estimated additional 11.6% of the cost of supplying 26 weeks of ART to the 251 children initiated. It is essential to address the nutritional needs of HIV-infected children to optimise their health outcomes. Nutritional supplementation should be integral to, and budgeted for, in HIV programmes. © 2012 Blackwell Publishing Ltd.
Komatsu, Takanori; Ohishi, Risa; Shino, Amiu; Akashi, Kinya; Kikuchi, Jun
2014-01-01
In the present study, we applied nuclear magnetic resonance (NMR) and near-infrared (NIR) spectroscopy to Jatropha curcas with two objectives: (1) to qualitatively examine seeds stored under different conditions, and (2) to monitor the metabolism of J. curcas during its initial growth stage under stable-isotope-labeling conditions (until 15 days after seeding). NIR spectra could non-invasively distinguish differences in storage conditions. NMR metabolic analysis of water-soluble metabolites identified sucrose and raffinose family oligosaccharides as positive markers, and gluconic acid as a negative marker, of seed germination. The isotopic labeling pattern of metabolites in germinated seedlings cultured on agar plates containing 13C-glucose and 15N-nitrate was analyzed by zero-quantum-filtered total correlation spectroscopy (ZQF-TOCSY) and 13C-detected 1H-13C heteronuclear correlation spectroscopy (HETCOR). 13C-detected HETCOR with a 13C-optimized cryogenic probe provided high-resolution 13C-NMR spectra of each metabolite in the molecular crowd. The 13C-13C/12C bondmer estimated from 1H-13C HETCOR spectra indicated that glutamine and arginine were the major organic compounds for nitrogen and carbon transfer from roots to leaves. PMID:25401292
Statistical Bayesian method for reliability evaluation based on ADT data
NASA Astrophysics Data System (ADS)
Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong
2018-05-01
Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data, and the latter is the more popular. However, limitations remain, such as an imprecise solution process and imprecise estimation of the degradation ratio, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the usual solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen. Second, the initial values of the degradation model and the parameters of the prior and posterior distributions at each stress level are calculated, with updating and iteration of the estimated values. Third, the lifetime and reliability values are estimated on the basis of the estimated parameters. Finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
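A minimal single-stress-level sketch, assuming a simple Wiener degradation process with hypothetical parameters (the paper additionally links the drift across stress levels through an acceleration model and performs Bayesian updating, not shown):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated degradation path at one stress level:
# X(t) = mu * t + sigma * B(t), with hypothetical parameters
mu_true, sigma_true, dt = 0.05, 0.10, 1.0
x = np.cumsum(mu_true * dt + sigma_true * np.sqrt(dt) * rng.normal(size=200))

# Maximum-likelihood estimates of drift and diffusion from increments
d = np.diff(np.concatenate([[0.0], x]))
mu_hat = d.mean() / dt
sigma2_hat = ((d - mu_hat * dt) ** 2).mean() / dt

# First passage of a failure threshold D is inverse-Gaussian distributed;
# its mean D / mu gives a simple lifetime point estimate
D = 10.0
print(f"mu = {mu_hat:.4f}, sigma^2 = {sigma2_hat:.4f}, "
      f"mean lifetime ~ {D / mu_hat:.1f}")
```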
Caro-Vega, Yanink; del Rio, Carlos; Lima, Viviane Dias; Lopez-Cervantes, Malaquias; Crabtree-Ramirez, Brenda; Bautista-Arredondo, Sergio; Colchero, M Arantxa; Sierra-Madero, Juan
2015-01-01
To estimate the impact of late ART initiation on HIV transmission among men who have sex with men (MSM) in Mexico. An HIV transmission model was built to estimate the number of infections transmitted by HIV-infected MSM (MSM-HIV+) in the short and long term. Sexual risk behavior data were estimated from a nationwide study of MSM. CD4+ counts at ART initiation from a representative national cohort were used to estimate time since infection. The number of MSM-HIV+ on treatment and virally suppressed was estimated from surveillance and government reports. A status quo scenario (SQ) and scenarios of early ART initiation and increased HIV testing were modeled. We estimated 14,239 new HIV infections per year from MSM-HIV+ in Mexico. In SQ, MSM take an average of 7.4 years since infection to initiate treatment, with a median CD4+ count of 148 cells/mm3 (25th-75th percentiles 52-266). In SQ, 68% of MSM-HIV+ are not aware of their HIV status and transmit 78% of new infections. Increasing the CD4+ count at ART initiation to 350 cells/mm3 shortened the time since infection to 2.8 years. Increasing HIV testing to cover 80% of undiagnosed MSM resulted in a reduction of 70% in new infections over 20 years. With ART initiated at 500 cells/mm3 and increased HIV testing, the reduction would be 75% over 20 years. A substantial number of new HIV infections in Mexico are transmitted by undiagnosed and untreated MSM-HIV+. An aggressive increase in HIV testing coverage, together with initiating ART at a CD4+ count of 500 cells/mm3 in this population, would significantly benefit individuals and decrease the number of new HIV infections in Mexico.
Integration and Assessment of Component Health Prognostics in Supervisory Control Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramuhalli, Pradeep; Bonebrake, Christopher A.; Dib, Gerges
Enhanced risk monitors (ERMs) for active components in advanced reactor concepts use predictive estimates of component failure to update, in real time, predictive safety and economic risk metrics. These metrics have been shown to be capable of use in optimizing maintenance scheduling and managing plant maintenance costs. Integrating this information with plant supervisory control systems increases the potential for making control decisions that utilize real-time information on component conditions. Such decision making would limit the possibility of plant operations that increase the likelihood of degrading the functionality of one or more components while maintaining the overall functionality of the plant. ERM uses sensor data to provide real-time information about equipment condition for deriving risk monitors. This information is used to estimate the remaining useful life and probability of failure of these components. By combining this information with plant probabilistic risk assessment models, predictive estimates of the risk posed by continued plant operation in the presence of detected degradation may be obtained. In this paper, we describe this methodology in greater detail and discuss its integration with a prototypic software-based plant supervisory control platform. In order to integrate these two technologies and evaluate the integrated system, software to simulate the sensor data was developed, prognostic models for feedwater valves were developed, and several use cases were defined. The full paper describes these use cases and the results of the initial evaluation.
2017-01-01
Emissions from traditional cooking practices in low- and middle-income countries have detrimental health and climate effects; cleaner-burning cookstoves may provide "co-benefits". Here we assess this potential via in-home measurements of fuel use and emissions and real-time optical properties of pollutants from traditional and alternative cookstoves in rural Malawi. Alternative cookstove models were distributed by existing initiatives and include a low-cost ceramic model, two forced-draft cookstoves (FDCS; Philips HD4012LS and ACE-1), and three institutional cookstoves. Among household cookstoves, emission factors (EF; g (kg wood)^-1) were lowest for the Philips, with statistically significant reductions relative to baseline of 45% and 47% for fine particulate matter (PM2.5) and carbon monoxide (CO), respectively. The Philips was the only cookstove tested that showed significant reductions in the elemental carbon (EC) emission rate. Estimated health and climate co-benefits of alternative cookstoves were smaller than predicted from laboratory tests due to the effects of real-world conditions, including fuel variability and non-ideal operation. For example, estimated daily PM intake and field-measurement-based global warming commitment (GWC) for the Philips FDCS were factors of 8.6 and 2.8 higher, respectively, than those based on lab measurements. In-field measurements provide an assessment of alternative cookstoves under real-world conditions and as such likely provide more realistic estimates of their potential health and climate benefits than laboratory tests. PMID:28060518
Genetic and non-genetic factors affecting morphometry of Sirohi goats
Dudhe, S. D.; Yadav, S. B. S.; Nagda, R. K.; Pannu, Urmila; Gahlot, G. C.
2015-01-01
Aim: The aim was to estimate genetic and non-genetic factors affecting morphometric traits of Sirohi goats under field conditions. Materials and Methods: Detailed information on body measurements at birth and at 3, 6, 9, and 12 months of age was collected for all animals born during 2007-2013 in farmers' flocks under field conditions, to analyze the effects of genetic and non-genetic factors. The least squares maximum likelihood program was used to estimate the genetic and non-genetic parameters affecting morphometric traits. Results and Discussion: The effects of sire, cluster, year of birth, and sex were found to be highly significant (p<0.01) for all three morphometric traits; parity was highly significant (p<0.01) for body height (BH) and body girth (BG) at birth. The h² estimates for morphometric traits ranged from 0.528±0.163 to 0.709±0.144 for BH, 0.408±0.159 to 0.605±0.192 for body length (BL), and 0.503±0.197 to 0.695±0.161 for BG. Conclusion: The effect of sire was highly significant (p<0.01) and the h² estimates of all morphometric traits were medium to high; therefore, it can be concluded from the present findings that animals with larger body measurements in the initial phases of growth will also perform better with respect to body weight traits at later stages of growth. PMID:27047043
Sun, Zhichao; Mukherjee, Bhramar; Estes, Jason P; Vokonas, Pantel S; Park, Sung Kyun
2017-08-15
Joint effects of genetic and environmental factors have been increasingly recognized in the development of many complex human diseases. Despite the popularity of case-control and case-only designs, longitudinal cohort studies that can capture time-varying outcome and exposure information have long been recommended for gene-environment (G × E) interactions. To date, the literature on sampling designs for longitudinal studies of G × E interaction is quite limited. We therefore consider designs that can prioritize a subsample of the existing cohort for retrospective genotyping on the basis of currently available outcome, exposure, and covariate data. In this work, we propose stratified sampling based on summaries of individual exposures and outcome trajectories, and develop a full conditional likelihood approach for estimation that adjusts for the biased sample. We compare the performance of our proposed design and analysis with combinations of different sampling designs and estimation approaches via simulation. We observe that the full conditional likelihood provides improved estimates for the G × E interaction and joint exposure effects over uncorrected complete-case analysis, and that the exposure-enriched, outcome-trajectory-dependent design outperforms other designs in terms of estimation efficiency and power for detecting the G × E interaction. We also illustrate our design and analysis using data from the Normative Aging Study, an ongoing longitudinal cohort study initiated by the Veterans Administration in 1963. Copyright © 2017 John Wiley & Sons, Ltd.
Critical Parameters of the Initiation Zone for Spontaneous Dynamic Rupture Propagation
NASA Astrophysics Data System (ADS)
Galis, M.; Pelties, C.; Kristek, J.; Moczo, P.; Ampuero, J. P.; Mai, P. M.
2014-12-01
Numerical simulations of rupture propagation are used to study both earthquake source physics and earthquake ground motion. Under linear slip-weakening friction, artificial procedures are needed to initiate a self-sustained rupture. The concept of an overstressed asperity is often applied, in which the asperity is characterized by its size, shape and overstress. The physical properties of the initiation zone may have significant impact on the resulting dynamic rupture propagation. A trial-and-error approach is often necessary for successful initiation because 2D and 3D theoretical criteria for estimating the critical size of the initiation zone do not provide general rules for designing 3D numerical simulations. Therefore, it is desirable to define guidelines for efficient initiation with minimal artificial effects on rupture propagation. We perform an extensive parameter study using numerical simulations of 3D dynamic rupture propagation assuming a planar fault to examine the critical size of square, circular and elliptical initiation zones as a function of asperity overstress and background stress. For a fixed overstress, we discover that the area of the initiation zone is more important for the nucleation process than its shape. Comparing our numerical results with published theoretical estimates, we find that the estimates by Uenishi & Rice (2004) are applicable to configurations with low background stress and small overstress. None of the published estimates are consistent with numerical results for configurations with high background stress. We therefore derive new equations to estimate the initiation zone size in environments with high background stress. Our results provide guidelines for defining the size of the initiation zone and overstress with minimal effects on the subsequent spontaneous rupture propagation.
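For reference, the 2-D theoretical estimate by Uenishi & Rice that the abstract compares against can be evaluated directly. A sketch with illustrative material values (assumed, not those of the parameter study):

```python
import numpy as np

def uenishi_rice_halflength(mu, nu, tau_s, tau_d, d_c, mode="II"):
    """2-D critical nucleation half-length a_c = 1.158 * mu_star / W,
    where W = (tau_s - tau_d) / d_c is the linear slip-weakening rate
    (the Uenishi & Rice estimate discussed in the abstract)."""
    mu_star = mu / (1.0 - nu) if mode == "II" else mu
    W = (tau_s - tau_d) / d_c
    return 1.158 * mu_star / W

# Illustrative values: mu = 30 GPa, nu = 0.25, 10 MPa strength drop
# over d_c = 0.4 m of slip.
a_c = uenishi_rice_halflength(30e9, 0.25, 10e6, 0.0, 0.4)
print(f"critical half-length ~ {a_c:.0f} m")
# A square or circular initiation zone would need a comparable linear
# dimension; per the abstract, this estimate holds only for low background
# stress and small overstress.
```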
A state space based approach to localizing single molecules from multi-emitter images.
Vahid, Milad R; Chao, Jerry; Ward, E Sally; Ober, Raimund J
2017-01-28
Single molecule super-resolution microscopy is a powerful tool that enables imaging at sub-diffraction-limit resolution. In this technique, subsets of stochastically photoactivated fluorophores are imaged over a sequence of frames and accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Available localization methods typically first determine the regions of the image that contain emitting fluorophores through a process referred to as detection. Then, the locations of the fluorophores are estimated accurately in an estimation step. We propose a novel localization method which combines the detection and estimation steps. The method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix, and determines the locations of intensity peaks in the image as the pole locations of the resulting system. The locations of the most significant peaks correspond to the locations of single molecules in the original image. Although the accuracy of the location estimates is reasonably good, we demonstrate that, by using the estimates as the initial conditions for a maximum likelihood estimator, refined estimates can be obtained that have a standard deviation close to the Cramér-Rao lower bound-based limit of accuracy. We validate our method using both simulated and experimental multi-emitter images.
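The state-space idea can be illustrated in one dimension with Kung's balanced-realization method: build a Hankel matrix from the signal, truncate its SVD, and read peak locations off the estimated pole phases. This is a generic 1-D sketch of the technique, not the authors' 2-D multi-emitter pipeline:

```python
import numpy as np
from scipy.linalg import hankel

# 1-D stand-in: a signal whose pole phases encode peak locations.
n = 64
k = np.arange(n)
poles_true = [0.97 * np.exp(0.3j), 0.95 * np.exp(1.1j)]  # phases 0.3, 1.1 rad
y = sum((p ** k) for p in poles_true).real

# Hankel matrix of the sequence and its SVD (balanced realization, Kung).
H = hankel(y[: n // 2], y[n // 2 - 1 :])
U, s, Vt = np.linalg.svd(H, full_matrices=False)

r = 4  # model order: two conjugate pole pairs
O = U[:, :r] * np.sqrt(s[:r])          # observability-like factor
A = np.linalg.pinv(O[:-1]) @ O[1:]     # shift invariance yields the A matrix
est = np.linalg.eigvals(A)
print(np.sort(np.angle(est[np.angle(est) > 0])))  # recovered pole phases
```

The recovered phases would then serve as initial conditions for a maximum likelihood refinement, as the abstract describes.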
A hydro-mechanical framework for early warning of rainfall-induced landslides (Invited)
NASA Astrophysics Data System (ADS)
Godt, J.; Lu, N.; Baum, R. L.
2013-12-01
Landslide early warning requires an estimate of the location, timing, and magnitude of initial movement, and the change in volume and momentum of material as it travels down a slope or channel. In many locations advance assessment of landslide location, volume, and momentum is possible, but prediction of landslide timing entails understanding the evolution of rainfall and soil-water conditions, and consequent effects on slope stability, in real time. Existing schemes for landslide prediction generally rely on empirical relations between landslide occurrence and rainfall amount and duration; however, these relations account neither for temporally variable rainfall nor for the variably saturated processes that control the hydro-mechanical response of hillside materials to rainfall. Although limited by the resolution and accuracy of rainfall forecasts and now-casts in complex terrain and by the inherent difficulty in adequately characterizing subsurface materials, physics-based models provide a general means to quantitatively link rainfall and landslide occurrence. Obtaining quantitative estimates of landslide potential from physics-based models using observed or forecasted rainfall requires explicit consideration of the changes in effective stress that result from changes in soil moisture and pore-water pressures. The physics that control soil-water conditions are transient, nonlinear, hysteretic, and dependent on material composition and history. In order to examine the physical processes that control infiltration and effective stress in variably saturated materials, we present field and laboratory results describing intrinsic relations among soil water and mechanical properties of hillside materials. At the REV (representative elementary volume) scale, the interaction between pore fluids and solid grains can be effectively described by the relation between soil suction, soil water content, hydraulic conductivity, and suction stress. We show that these relations can be obtained independently from outflow, shear strength, and deformation tests for a wide range of earth materials. We then compare laboratory results with measurements of pore pressure and moisture content from landslide-prone settings and demonstrate that laboratory results obtained for hillside materials are representative of field conditions. These fundamental relations provide a basis to combine observed or forecasted rainfall with in-situ measurements of soil water conditions using hydro-mechanical models that simulate transient variably saturated flow and slope stability. We conclude that early warning using an approach in which in-situ observations are used to establish initial conditions for hydro-mechanical models is feasible in areas of high landslide risk where laboratory characterization of materials is practical and accurate rainfall information can be obtained. Analogous to weather and climate forecasting, such models could then be applied in an ensemble fashion to obtain quantitative estimates of landslide probability and error. Application to broader regions likely awaits breakthroughs in the development of remotely sensed proxies of soil properties and subsurface moisture conditions.
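How pore-water pressure enters a slope-stability calculation can be made concrete with the classic infinite-slope model, a simpler relative of the hydro-mechanical framework described here (all slope values below are assumed, not the authors' data):

```python
import numpy as np

def factor_of_safety(c_eff, phi_deg, gamma, z, beta_deg, u):
    """Infinite-slope factor of safety for a planar failure surface at depth z:
    FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')] /
         [gamma*z*sin(beta)*cos(beta)]
    c' in Pa, gamma in N/m^3, u = pore-water pressure (Pa; negative = suction).
    """
    beta = np.radians(beta_deg)
    phi = np.radians(phi_deg)
    resisting = c_eff + (gamma * z * np.cos(beta) ** 2 - u) * np.tan(phi)
    driving = gamma * z * np.sin(beta) * np.cos(beta)
    return resisting / driving

# Illustrative hillslope: 2 m of colluvium on a 35 degree slope.
for u in [-5e3, 0.0, 5e3]:  # suction, zero pressure, positive pressure
    print(u, round(factor_of_safety(3e3, 33, 18e3, 2.0, 35, u), 2))
```

The sign of u captures the essential coupling: suction (negative u) adds to effective stress and stability, while positive pore pressure during rainfall drives the factor of safety below unity.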
Nazir, Yusuf; Shuib, Shuwahida; Kalil, Mohd Sahaid; Song, Yuanda; Hamid, Aidil Abdul
2018-06-11
In this study, optimization of growth, lipid and DHA production of Aurantiochytrium SW1 was carried out using response surface methodology (RSM), with initial fructose concentration, agitation speed and monosodium glutamate (MSG) concentration as the factors. A central composite design was applied as the experimental design and analysis of variance (ANOVA) was used to analyze the data. ANOVA revealed that the process, which was adequately represented by a quadratic model, was significant (p < 0.0001) for all responses. All three factors significantly (p < 0.005) influenced biomass and lipid, while only two factors (agitation speed and MSG) had a significant effect on DHA production (p < 0.005). The estimated optimal conditions for enhanced growth, lipid and DHA production were 70 g/L fructose, 250 rpm agitation speed and 10 g/L MSG. The quadratic model was validated by applying the estimated optimum conditions, which confirmed its validity: 19.0 g/L biomass, 9.13 g/L lipid and 4.75 g/L DHA were produced. Growth, lipid and DHA production were 28, 36 and 35% higher, respectively, than in the original medium prior to optimization.
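A sketch of the core RSM step: fit a full quadratic model to central-composite-design runs by least squares. The coded design and response values below are synthetic stand-ins, not the experimental data:

```python
import numpy as np
from itertools import combinations

def quad_design(X):
    """Columns: 1, x_i, x_i^2, x_i*x_j -- the full quadratic RSM model."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] ** 2 for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

# Hypothetical coded CCD runs for (fructose, agitation, MSG) and a response
# (e.g. DHA titre, g/L); real values would come from the experiment.
X = np.array([[-1,-1,-1],[1,-1,-1],[-1,1,-1],[1,1,-1],[-1,-1,1],[1,-1,1],
              [-1,1,1],[1,1,1],[0,0,0],[0,0,0],[1.68,0,0],[-1.68,0,0],
              [0,1.68,0],[0,-1.68,0],[0,0,1.68],[0,0,-1.68]], float)
y = (4.0 - 0.5*X[:,0]**2 - 0.8*X[:,1]**2 - 0.6*X[:,2]**2
     + 0.3*X[:,1] + 0.2*X[:,2]
     + np.random.default_rng(1).normal(0, 0.05, len(X)))

beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)
print(beta.round(2))  # intercept, linear, quadratic, interaction terms
```

The stationary point of the fitted surface gives the estimated optimum, which is then validated with confirmation runs as done in the study.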
Why Did People Move During the Great Recession?: The Role of Economics in Migration Decisions
Levy, Brian L.; Mouw, Ted; Daniel Perez, Anthony
2017-01-01
Labor migration offers an important mechanism to reallocate workers when there are regional differences in employment conditions. Whereas conventional wisdom suggests migration rates should increase during recessions as workers move out of areas that are hit hardest, initial evidence suggested that overall migration rates declined during the Great Recession, despite large regional differences in unemployment and growth rates. In this paper, we use data from the American Community Survey to analyze internal migration trends before and during the economic downturn. First, we find only a modest decline in the odds of adults leaving distressed labor market areas during the recession, which may result in part from challenges related to the housing price crash. Second, we estimate conditional logit models of destination choice for individuals who migrate across labor market areas and find a substantial effect of economic factors such as labor demand, unemployment, and housing values. We also estimate latent class conditional logit models that test whether there is heterogeneity in preferences for destination characteristics among migrants. Overall, the latent class models suggest that roughly equal percentages of migrants were motivated by economic factors before and during the recession. We conclude that fears of dramatic declines in labor migration seem to be unsubstantiated. PMID:28547003
Reef fish communities are spooked by scuba surveys and may take hours to recover
Cheal, Alistair J.; Miller, Ian R.
2018-01-01
Ecological monitoring programs typically aim to detect changes in the abundance of species of conservation concern or which reflect system status. Coral reef fish assemblages are functionally important for reef health, and these are most commonly monitored using underwater visual surveys (UVS) by divers. In addition to estimating numbers, most programs also collect estimates of fish lengths to allow calculation of biomass, an important determinant of a fish’s functional impact. However, diver surveys may be biased because fishes may either avoid or be attracted to divers, and the process of estimating fish length could result in fish counts that differ from those made without length estimations. Here we investigated whether (1) general diver disturbance and (2) the additional task of estimating fish lengths affected estimates of reef fish abundance and species richness during UVS, and for how long. Initial estimates of abundance and species richness were significantly higher than those made on the same section of reef after diver disturbance. However, there was no evidence that estimating fish lengths at the same time as abundance resulted in counts different from those made when estimating abundance alone. Similarly, there was little consistent bias among observers. Estimates of the time for fish taxa that avoided divers after initial contact to return to initial levels of abundance varied from 3 to 17 h, with one group of exploited fishes showing initial attraction to divers that declined over the study period. Our finding that many reef fishes may disperse for such long periods after initial contact with divers suggests that monitoring programs should take great care to minimise diver disturbance prior to surveys. PMID:29844998
In-Flight Alignment Using H∞ Filter for Strapdown INS on Aircraft
Pei, Fu-Jun; Liu, Xuan; Zhu, Li
2014-01-01
In-flight alignment is an effective way to improve the accuracy and speed of initial alignment for a strapdown inertial navigation system (INS). During aircraft flight, strapdown INS alignment is disturbed by linear and angular movements of the aircraft. To deal with the disturbances in dynamic initial alignment, a novel alignment method for SINS is investigated in this paper. In this method, an initial alignment error model of SINS in the inertial frame is established. The observability of the system is discussed using piece-wise constant system (PWCS) theory, and the observable degree is computed using singular value decomposition (SVD). It is demonstrated that the system is completely observable and that all the system state parameters can be estimated by an optimal filter. An H∞ filter was then designed to handle the uncertainty of the measurement noise. The simulation results demonstrate that the proposed algorithm achieves better accuracy under dynamic disturbance conditions. PMID:24511300
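The PWCS observability test amounts to stacking the observability matrices of each piecewise-constant segment and inspecting the rank and singular values of the result. A generic sketch on a toy system (not the SINS error model itself):

```python
import numpy as np

def pwcs_observability(segments):
    """Stack the observability matrices of each constant segment (F_j, H_j)
    and return the rank and singular values of the total matrix; small
    singular values indicate weakly observable state directions (the
    "observable degree")."""
    blocks = []
    for F, H in segments:
        n = F.shape[0]
        rows = [H]
        for _ in range(n - 1):
            rows.append(rows[-1] @ F)
        blocks.append(np.vstack(rows))
    Q = np.vstack(blocks)
    return np.linalg.matrix_rank(Q), np.linalg.svd(Q, compute_uv=False)

# Toy 3-state system observed through two different measurement geometries.
F = np.array([[1.0, 0.1, 0.0], [0.0, 1.0, 0.1], [0.0, 0.0, 1.0]])
H1 = np.array([[1.0, 0.0, 0.0]])
H2 = np.array([[0.0, 1.0, 0.0]])
rank, sv = pwcs_observability([(F, H1), (F, H2)])
print(rank, sv.round(3))  # full rank => completely observable
```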
Correlates of perceived effectiveness of the Safe Schools/Healthy Students Initiative.
Ellis, Bruce; Alford, Aaron; Yu, Ping; Xiong, Sharon; Hill, Gary; Puckett, Marissa; Mannix, Danyelle; Wells, Michael E
2012-05-01
A three-level growth-curve model was applied to estimate perceived impact growth trajectories, using multi-year data from project and school surveys on outcomes and program implementation collected from 59 sites and approximately 1,165 participating schools in the Safe Schools/Healthy Students Initiative. The primary interest is to determine whether and how project-level and school-level correlates affect schools' perceptions of the Initiative's effectiveness over time when the effects of pre-grant environmental conditions, grant operations, and near-term outcomes are considered. Coordination and service integration, comprehensive programs and activities for early childhood development, and change in school involvement were found to be significant predictors of school-perceived overall impact when the effect of poverty was considered. Partnership functioning, perceived importance of school resources, and school involvement were found to be significant predictors of school-perceived impact on substance use prevention when the effect of poverty was considered. Copyright © 2011 Elsevier Ltd. All rights reserved.
Method to monitor HC-SCR catalyst NOx reduction performance for lean exhaust applications
Viola, Michael B [Macomb Township, MI; Schmieg, Steven J [Troy, MI; Sloane, Thompson M [Oxford, MI; Hilden, David L [Shelby Township, MI; Mulawa, Patricia A [Clinton Township, MI; Lee, Jong H [Rochester Hills, MI; Cheng, Shi-Wai S [Troy, MI
2012-05-29
A method for initiating a regeneration mode in selective catalytic reduction device utilizing hydrocarbons as a reductant includes monitoring a temperature within the aftertreatment system, monitoring a fuel dosing rate to the selective catalytic reduction device, monitoring an initial conversion efficiency, selecting a determined equation to estimate changes in a conversion efficiency of the selective catalytic reduction device based upon the monitored temperature and the monitored fuel dosing rate, estimating changes in the conversion efficiency based upon the determined equation and the initial conversion efficiency, and initiating a regeneration mode for the selective catalytic reduction device based upon the estimated changes in conversion efficiency.
Angst, Ueli M.; Boschmann, Carolina; Wagner, Matthias; Elsener, Bernhard
2017-01-01
The aging of reinforced concrete infrastructure in developed countries imposes an urgent need for methods to reliably assess the condition of these structures. Corrosion of the embedded reinforcing steel is the most frequent cause for degradation. While it is well known that the ability of a structure to withstand corrosion depends strongly on factors such as the materials used or the age, it is common practice to rely on threshold values stipulated in standards or textbooks. These threshold values for corrosion initiation (Ccrit) are independent of the actual properties of a certain structure, which clearly limits the accuracy of condition assessments and service life predictions. The practice of using tabulated values can be traced to the lack of reliable methods to determine Ccrit on-site and in the laboratory. Here, an experimental protocol to determine Ccrit for individual engineering structures or structural members is presented. A number of reinforced concrete samples are taken from structures and laboratory corrosion testing is performed. The main advantage of this method is that it ensures real conditions concerning parameters that are well known to greatly influence Ccrit, such as the steel-concrete interface, which cannot be representatively mimicked in laboratory-produced samples. At the same time, the accelerated corrosion test in the laboratory permits the reliable determination of Ccrit prior to corrosion initiation on the tested structure; this is a major advantage over all common condition assessment methods that only permit estimating the conditions for corrosion after initiation, i.e., when the structure is already damaged. The protocol yields the statistical distribution of Ccrit for the tested structure. This serves as a basis for probabilistic prediction models for the remaining time to corrosion, which is needed for maintenance planning. This method can potentially be used in material testing of civil infrastructures, similar to established methods used for mechanical testing. PMID:28892023
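The statistical distribution of Ccrit feeds directly into a probabilistic initiation model. A minimal Monte Carlo sketch, assuming a hypothetical lognormal Ccrit fit and a simple Fickian chloride-ingress profile (all parameters are illustrative, not measured values):

```python
import numpy as np
from math import erf

rng = np.random.default_rng(2)

# Hypothetical lognormal Ccrit distribution fitted to the tested samples
# (% chloride by cement weight); parameters are assumed, not measured.
ccrit = rng.lognormal(mean=np.log(0.6), sigma=0.35, size=10_000)

def chloride(depth_m, t_years, c_s=1.2, D=1e-12):
    """Fick's 2nd-law surface-ingress profile: C = C_s * erfc(x / (2*sqrt(D*t)))."""
    t = t_years * 365.25 * 24 * 3600
    return c_s * (1 - erf(depth_m / (2 * np.sqrt(D * t))))

# Probability that chloride at the rebar (40 mm cover) exceeds Ccrit vs time.
for t in [10, 25, 50, 75]:
    p = np.mean(chloride(0.04, t) > ccrit)
    print(f"t = {t:3d} yr: P(initiation) ~ {p:.2f}")
```

This is the kind of remaining-time-to-corrosion estimate the protocol is intended to support; in practice the Ccrit samples and transport model would both come from the tested structure.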
Wireless Concrete Strength Monitoring of Wind Turbine Foundations.
Perry, Marcus; Fusiek, Grzegorz; Niewczas, Pawel; Rubert, Tim; McAlorum, Jack
2017-12-16
Wind turbine foundations are typically cast in place, leaving the concrete to mature under environmental conditions that vary in time and space. As a result, there is uncertainty around the concrete's initial performance, and this can encourage both costly over-design and inaccurate prognoses of structural health. Here, we demonstrate the field application of a dense, wireless thermocouple network to monitor the strength development of an onshore, reinforced-concrete wind turbine foundation. Up-to-date methods in fly ash concrete strength and maturity modelling are used to estimate the distribution and evolution of foundation strength over 29 days of curing. Strength estimates are verified by core samples, extracted from the foundation base. In addition, an artificial neural network, trained using temperature data, is exploited to demonstrate that distributed concrete strengths can be estimated for foundations using only sparse thermocouple data. Our techniques provide a practical alternative to computational models, and could assist site operators in making more informed decisions about foundation design, construction, operation and maintenance.
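The maturity-method chain the abstract describes (temperature history to equivalent age to strength) can be sketched with the Arrhenius equivalent-age formulation. The activation energy and strength-maturity coefficients below are assumed and would in practice be calibrated for the fly ash mix against cube or core tests:

```python
import numpy as np

def equivalent_age(temps_c, dt_hours, ea=40e3, t_ref_c=20.0):
    """Arrhenius equivalent age t_e = sum exp(-Ea/R * (1/T - 1/T_ref)) * dt,
    with T in kelvin; Ea ~ 40 kJ/mol is a typical assumed activation energy."""
    R = 8.314
    T = np.asarray(temps_c) + 273.15
    T_ref = t_ref_c + 273.15
    return np.sum(np.exp(-ea / R * (1.0 / T - 1.0 / T_ref)) * dt_hours)

def strength(te_hours, s_u=40.0, k=0.25, t0=10.0):
    """Illustrative hyperbolic strength-maturity relation (MPa); the
    coefficients are placeholders for a calibrated relation."""
    te = np.maximum(te_hours - t0, 0.0)
    return s_u * k * te / (1.0 + k * te)

# Hourly thermocouple record (assumed): warm early hydration, then cooling,
# over the 29-day curing window mentioned in the abstract.
temps = np.concatenate([np.linspace(15, 45, 48), np.linspace(45, 12, 648)])
te = equivalent_age(temps, 1.0)
print(f"equivalent age {te/24:.1f} d -> estimated strength {strength(te):.1f} MPa")
```

Running this per thermocouple is what turns a dense temperature network into a distributed strength map.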
Initial guidelines and estimates for a power system with inertial (flywheel) energy storage
NASA Technical Reports Server (NTRS)
Slifer, L. W., Jr.
1980-01-01
This report presents the starting point for the assessment of a spacecraft power system utilizing inertial (flywheel) energy storage. Both general and specific guidelines are defined for the assessment of a modular flywheel system, operationally similar to but with significantly greater capability than the multimission modular spacecraft (MMS) power system. Goals for the flywheel system are defined in terms of efficiency estimates and mass estimates for the system components. The inertial storage power system uses a 5 kW-hr flywheel storage component at 50 percent depth of discharge (DOD). It is capable of supporting an average load of 3 kW, including a peak load of 7.5 kW for 10 percent of the duty cycle, in low Earth orbit operation. The specific power goal for the system is 10 W/kg, consisting of a 56 W/kg (end of life) solar array, a 21.7 W-hr/kg (at 50 percent DOD) flywheel, and 43 W/kg power processing (conditioning, control and distribution).
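A rough mass budget can be assembled from the quoted component goals. Under the crude assumptions flagged in the comments (array sized at twice the average load for eclipse recharge, the specific energy applied to the full storage capacity), the system figure lands somewhat below the 10 W/kg goal, underscoring how sensitive it is to sizing assumptions:

```python
# Back-of-envelope component mass budget from the stated goals. Assumptions:
# the array is sized at twice the 3 kW average load to recharge the wheel
# during eclipse, and 21.7 W-hr/kg applies to the full 5 kW-hr of storage.
P_avg = 3_000.0            # W, average load
E_store = 5_000.0          # W-hr flywheel capacity (50% DOD)

m_array = 2 * P_avg / 56.0     # 56 W/kg end-of-life solar array
m_wheel = E_store / 21.7       # 21.7 W-hr/kg at 50% DOD
m_power = P_avg / 43.0         # 43 W/kg conditioning/control/distribution

m_total = m_array + m_wheel + m_power
print(f"array {m_array:.0f} kg, wheel {m_wheel:.0f} kg, PMAD {m_power:.0f} kg")
print(f"system specific power ~ {P_avg / m_total:.1f} W/kg (goal: 10 W/kg)")
```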
Simulations in site error estimation for direction finders
NASA Astrophysics Data System (ADS)
López, Raúl E.; Passi, Ranjit M.
1991-08-01
The performance of an algorithm for the recovery of site-specific errors of direction finder (DF) networks is tested under controlled simulated conditions. The simulations show that the algorithm has some inherent shortcomings for the recovery of site errors from the measured azimuth data. These limitations are fundamental to the problem of site error estimation using azimuth information. Several ways of resolving or ameliorating these basic complications are tested by means of simulations. From these it appears that for the effective implementation of the site error determination algorithm, one should design the networks with at least four DFs, improve the alignment of the antennas, and increase the gain of the DFs as much as is compatible with other operational requirements. The use of a nonzero initial estimate of the site errors when working with data from networks of four or more DFs also improves the accuracy of the site error recovery. Even for networks of three DFs, reasonable site error corrections could be obtained if the antennas could be well aligned.
Costanza-Robinson, Molly S; Zheng, Zheng; Henry, Eric J; Estabrook, Benjamin D; Littlefield, Malcolm H
2012-10-16
Surfactant miscible-displacement experiments represent a conventional means of estimating air-water interfacial area (A(I)) in unsaturated porous media. However, changes in surface tension during the experiment can potentially induce unsaturated flow, thereby altering interfacial areas and violating several fundamental method assumptions, including that of steady-state flow. In this work, the magnitude of surfactant-induced flow was quantified by monitoring moisture content and perturbations to effluent flow rate during miscible-displacement experiments conducted using a range of surfactant concentrations. For systems initially at 83% moisture saturation (S(W)), decreases of 18-43% S(W) occurred following surfactant introduction, with the magnitude and rate of drainage inversely related to the surface tension of the surfactant solution. Drainage induced by 0.1 mM sodium dodecyl benzene sulfonate, commonly used for A(I) estimation, resulted in effluent flow rate increases of up to 27% above steady-state conditions and is estimated to more than double the interfacial area over the course of the experiment. Depending on the surfactant concentration and the moisture content used to describe the system, A(I) estimates varied more than 3-fold. The magnitude of surfactant-induced flow is considerably larger than previously recognized and casts doubt on the reliability of A(I) estimation by surfactant miscible-displacement.
NASA Astrophysics Data System (ADS)
Tagade, Piyush; Hariharan, Krishnan S.; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin
2017-03-01
A novel approach for integrating a pseudo-two dimensional electrochemical thermal (P2D-ECT) model and a data assimilation algorithm is presented for lithium-ion cell state estimation. This approach refrains from making any simplifications in the P2D-ECT model while making it amenable to online state estimation. Though the model is deterministic, uncertainty in the initial states induces stochasticity in the P2D-ECT model. This stochasticity is resolved by spectrally projecting the stochastic P2D-ECT model on a set of orthogonal multivariate Hermite polynomials. Volume averaging in the stochastic dimensions is proposed for efficient numerical solution of the resultant model. A state estimation framework is developed using a transformation of the orthogonal basis to assimilate the measurables with this system of equations. The effectiveness of the proposed method is first demonstrated by assimilating the cell voltage and temperature data generated using a synthetic test bed. The validated method is then used with experimentally observed cell voltage and temperature data for state estimation at different operating conditions and drive cycle protocols. The results show increased prediction accuracy when the data are assimilated every 30 s. The high accuracy of the estimated states is exploited to infer temperature-dependent behavior of the lithium-ion cell.
Montgomery, D.R.; Schmidt, K.M.; Dietrich, W.E.; McKean, J.
2009-01-01
The middle of a hillslope hollow in the Oregon Coast Range failed and mobilized as a debris flow during heavy rainfall in November 1996. Automated pressure transducers recorded high spatial variability of pore water pressure within the area that mobilized as a debris flow, which initiated where local upward flow from bedrock developed into overlying colluvium. Postfailure observations of the bedrock surface exposed in the debris flow scar reveal a strong spatial correspondence between elevated piezometric response and water discharging from bedrock fractures. Measurements of apparent root cohesion on the basal (Cb) and lateral (Cl) scarp demonstrate substantial local variability, with areally weighted values of Cb = 0.1 and Cl = 4.6 kPa. Using measured soil properties and basal root strength, the widely used infinite slope model, employed assuming slope-parallel groundwater flow, provides a poor prediction of hydrologic conditions at failure. In contrast, a model including lateral root strength (but neglecting lateral frictional strength) gave a predicted critical value of relative soil saturation that fell within the range defined by the arithmetic and geometric mean values at the time of failure. The 3-D slope stability model CLARA-W, used with locally observed pore water pressure, predicted small areas with lower factors of safety within the overall slide mass at sites consistent with field observations of where the failure initiated. This highly variable and localized nature of small areas of high pore pressure that can trigger slope failure means, however, that substantial uncertainty appears inevitable for estimating hydrologic conditions within incipient debris flows under natural conditions. Copyright 2009 by the American Geophysical Union.
DeLorenzo, Christine; Papademetris, Xenophon; Staib, Lawrence H.; Vives, Kenneth P.; Spencer, Dennis D.; Duncan, James S.
2010-01-01
During neurosurgery, nonrigid brain deformation prevents preoperatively-acquired images from accurately depicting the intraoperative brain. Stereo vision systems can be used to track intraoperative cortical surface deformation and update preoperative brain images in conjunction with a biomechanical model. However, these stereo systems are often plagued with calibration error, which can corrupt the deformation estimation. In order to decouple the effects of camera calibration from the surface deformation estimation, a framework that can solve for disparate and often competing variables is needed. Game theory, which was developed to handle decision making in this type of competitive environment, has been applied to various fields from economics to biology. In this paper, game theory is applied to cortical surface tracking during neocortical epilepsy surgery and used to infer information about the physical processes of brain surface deformation and image acquisition. The method is successfully applied to eight in vivo cases, resulting in an 81% decrease in mean surface displacement error. This includes a case in which some of the initial camera calibration parameters had errors of 70%. Additionally, the advantages of using a game theoretic approach in neocortical epilepsy surgery are clearly demonstrated in its robustness to initial conditions. PMID:20129844
Physics-based coastal current tomographic tracking using a Kalman filter.
Wang, Tongchen; Zhang, Ying; Yang, T C; Chen, Huifang; Xu, Wen
2018-05-01
Ocean acoustic tomography can be used, based on measurements of two-way travel-time differences between the nodes deployed on the perimeter of the surveying area, to invert/map the ocean current inside the area. Data at different times can be related using a Kalman filter, and given an ocean circulation model, one can in principle nowcast and even forecast the current distribution given an initial distribution and/or the travel-time difference data on the boundary. However, an ocean circulation model requires many inputs (many of them often not available) and is impractical for estimation of the current field. A simplified form of the discretized Navier-Stokes equation is used to show that the future velocity state is just a weighted spatial average of the current state. These weights could be obtained from an ocean circulation model, but here, in a data-driven approach, auto-regressive methods are used to obtain the time- and space-dependent weights from the data. It is shown, based on simulated data, that the current field tracked using a Kalman filter (with an arbitrary initial condition) is more accurate than that estimated by the standard methods where data at different times are treated independently. Real data are also examined.
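A minimal sketch of the idea, assuming a synthetic current field, an AR transition matrix fitted by least squares, and stand-in path-integral observations (none of this reproduces the paper's tomography geometry):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "current field" on m grid cells evolving as a spatial AR process.
m, T = 6, 400
W_true = 0.9 * np.eye(m) + 0.05 * np.roll(np.eye(m), 1, axis=1)
x = np.zeros((T, m))
for t in range(1, T):
    x[t] = W_true @ x[t - 1] + rng.normal(0, 0.02, m)

# Data-driven transition: least-squares AR(1) weights from a training window.
A = np.linalg.lstsq(x[:199], x[1:200], rcond=None)[0].T

# Linear observations: a few travel-time-difference-like path integrals.
H = rng.random((3, m))
Q, R = 0.02**2 * np.eye(m), 0.05**2 * np.eye(3)

xk, P = np.zeros(m), np.eye(m)          # arbitrary initial condition
for t in range(200, T):
    xk, P = A @ xk, A @ P @ A.T + Q     # predict with the fitted weights
    y = H @ x[t] + rng.normal(0, 0.05, 3)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.solve(S, np.eye(3))
    xk = xk + K @ (y - H @ xk)
    P = (np.eye(m) - K @ H) @ P
print("final tracking RMSE:", np.sqrt(np.mean((xk - x[-1])**2)).round(3))
```

The key design choice mirrors the paper's: the transition matrix comes from the data rather than from a full circulation model.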
Spatial resolution in visual memory.
Ben-Shalom, Asaf; Ganel, Tzvi
2015-04-01
Representations in visual short-term memory are considered to contain relatively elaborated information on object structure. Conversely, representations in earlier stages of the visual hierarchy are thought to be dominated by a sensory-based, feed-forward buildup of information. In four experiments, we compared the spatial resolution of different object properties between two points in time along the processing hierarchy in visual short-term memory. Subjects were asked either to estimate the distance between objects or to estimate the size of one of the objects' features under two experimental conditions, of either a short or a long delay period between the presentation of the target stimulus and the probe. When different objects were referred to, similar spatial resolution was found for the two delay periods, suggesting that initial processing stages are sensitive to object-based properties. Conversely, superior resolution was found for the short, as compared with the long, delay when features were referred to. These findings suggest that initial representations in visual memory are hybrid in that they allow fine-grained resolution for object features alongside normal visual sensitivity to the segregation between objects. The findings are also discussed in reference to the distinction made in earlier studies between visual short-term memory and iconic memory.
Kuklinski, Margaret R; Fagan, Abigail A; Hawkins, J David; Briney, John S; Catalano, Richard F
2015-06-01
To determine whether the Communities That Care (CTC) prevention system is a cost-beneficial intervention. Data were from a longitudinal panel of 4,407 youth participating in a randomized controlled trial including 24 towns in 7 states, matched in pairs within state and randomly assigned to condition. Significant differences favoring intervention youth in sustained abstinence from delinquency, alcohol use, and tobacco use through Grade 12 were monetized and compared to economic investment in CTC. CTC was estimated to produce $4,477 in benefits per youth (discounted 2011 dollars). It cost $556 per youth to implement CTC for 5 years. The net present benefit was $3,920. The benefit-cost ratio was $8.22 per dollar invested. The internal rate of return was 21%. Risk that investment would exceed benefits was minimal. Investment was expected to be recouped within 9 years. Sensitivity analyses in which effects were halved yielded positive cost-beneficial results. CTC is a cost-beneficial, community-based approach to preventing initiation of delinquency, alcohol use, and tobacco use. CTC is estimated to generate economic benefits that exceed implementation costs when disseminated with fidelity in communities.
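The headline economics follow from simple arithmetic on the per-youth figures. The small difference between the naive ratio computed below and the reported $8.22 presumably reflects discounting and timing details not reproduced here:

```python
# Back-of-envelope check of the headline figures (discounted 2011 dollars).
benefits_per_youth = 4477.0
cost_per_youth = 556.0

net_present_benefit = benefits_per_youth - cost_per_youth
naive_ratio = benefits_per_youth / cost_per_youth

print(f"net present benefit per youth: ${net_present_benefit:,.0f}")  # ~ $3,920
print(f"naive benefit-cost ratio: {naive_ratio:.2f}")  # paper reports 8.22
# Sensitivity analysis in the paper halves the benefits; even then the
# ratio stays well above 1 dollar returned per dollar invested.
print(f"halved-benefit ratio: {benefits_per_youth / 2 / cost_per_youth:.2f}")
```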
Aggregate and individual replication probability within an explicit model of the research process.
Miller, Jeff; Schwarz, Wolf
2011-09-01
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by obtaining either a statistically significant result in the same direction or any effect in that direction. We analyze both the probability of successfully replicating a particular experimental effect (i.e., the individual replication probability) and the average probability of successful replication across different studies within some research context (i.e., the aggregate replication probability), and we identify the conditions under which the latter can be approximated using the formulas of Killeen (2005a, 2007). We show how both of these probabilities depend on parameters of the research context that would rarely be known in practice. In addition, we show that the statistical uncertainty associated with the size of an initial observed effect would often prevent accurate estimation of the desired individual replication probability even if these research context parameters were known exactly. We conclude that accurate estimates of replication probability are generally unattainable.
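The model lends itself to a direct Monte Carlo check: draw true effects, replication jitter, and measurement errors from normal distributions and count successful replications. A sketch with illustrative parameter values (not the paper's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_sim, n_per_group = 200_000, 20

# Model components (all normal, per the abstract); values are illustrative.
delta = rng.normal(0.4, 0.2, n_sim)          # true effect sizes in the field
jitter = rng.normal(0.0, 0.1, n_sim)         # replication jitter (procedure)
se = np.sqrt(2.0 / n_per_group)              # SE of a two-group effect size
d1 = delta + rng.normal(0, se, n_sim)        # initial observed effect
d2 = delta + jitter + rng.normal(0, se, n_sim)   # replication observed effect

crit = stats.norm.ppf(0.975) * se
same_dir = np.sign(d2) == np.sign(d1)
sig_same = same_dir & (np.abs(d2) > crit)

# Aggregate replication probabilities across the research context:
print("P(same direction)       :", same_dir.mean().round(3))
print("P(significant, same dir):", sig_same.mean().round(3))
# Conditioning on the initial result shows how the probability varies with d1,
# which is why a single aggregate figure can mislead for individual studies:
hi = d1 > 0.6
print("given a large initial effect:", sig_same[hi].mean().round(3))
```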
A Gaussian beam method for ultrasonic non-destructive evaluation modeling
NASA Astrophysics Data System (ADS)
Jacquet, O.; Leymarie, N.; Cassereau, D.
2018-05-01
The propagation of high-frequency ultrasonic body waves can be efficiently estimated with a semi-analytic Dynamic Ray Tracing approach using the paraxial approximation. Although this asymptotic field estimation avoids the computational cost of numerical methods, it may encounter several limitations in reproducing identified highly interferential features. Nevertheless, some can be managed by allowing paraxial quantities to be complex-valued. This gives rise to localized solutions, known as paraxial Gaussian beams. Whereas their propagation and transmission/reflection laws are well-defined, the fact remains that the adopted complexification introduces additional initial conditions. While their choice is usually performed according to strategies specifically tailored to limited applications, a Gabor frame method has been implemented to indiscriminately initialize a reasonable number of paraxial Gaussian beams. Since this method can be applied for a usefully wide range of ultrasonic transducers, the typical case of the time-harmonic piston radiator is investigated. Compared to the commonly used Multi-Gaussian Beam model [1], a better agreement is obtained throughout the radiated field between the results of numerical integration (or the analytical on-axis solution) and the resulting Gaussian beam superposition. Sparsity of the proposed solution is also discussed.
Mali, Ivana; Duarte, Adam; Forstner, Michael R J
2018-01-01
Abundance estimates play an important part in the regulatory and conservation decision-making process. It is important to correct monitoring data for imperfect detection when using these data to track spatial and temporal variation in abundance, especially in the case of rare and elusive species. This paper presents the first attempt to estimate abundance of the Rio Grande cooter (Pseudemys gorzugi) while explicitly considering the detection process. Specifically, in 2016 we monitored this rare species at two sites along the Black River, New Mexico via traditional baited hoop-net traps and less invasive visual surveys to evaluate the efficacy of these two sampling designs. We fitted the Huggins closed-capture estimator to estimate capture probabilities using the trap data and distance sampling models to estimate detection probabilities using the visual survey data. We found that only the visual survey with the highest number of observed turtles resulted in similar abundance estimates to those estimated using the trap data. However, the estimates of abundance from the remaining visual survey data were highly variable and often underestimated abundance relative to the estimates from the trap data. We suspect this pattern is related to changes in the basking behavior of the species and, thus, the availability of turtles to be detected even though all visual surveys were conducted when environmental conditions were similar. Regardless, we found that riverine habitat conditions limited our ability to properly conduct visual surveys at one site. Collectively, this suggests visual surveys may not be an effective sample design for this species in this river system. When analyzing the trap data, we found capture probabilities to be highly variable across sites and between age classes and that recapture probabilities were much lower than initial capture probabilities, highlighting the importance of accounting for detectability when monitoring this species. Although baited hoop-net traps seem to be an effective sampling design, it is important to note that this method required a relatively high trap effort to reliably estimate abundance. This information will be useful when developing a larger-scale, long-term monitoring program for this species of concern.
Simulations of small solid accretion on to planetesimals in the presence of gas
NASA Astrophysics Data System (ADS)
Hughes, A. G.; Boley, A. C.
2017-12-01
The growth and migration of planetesimals in a young protoplanetary disc are fundamental to planet formation. In all models of early growth, there are several processes that can inhibit grains from reaching larger sizes. Nevertheless, observations suggest that growth of planetesimals must be rapid. If a small number of 100-km-sized planetesimals do manage to form in the disc, then gas drag effects could enable them to efficiently accrete small solids from beyond their gravitationally focused cross-section. This gas-drag-enhanced accretion can allow planetesimals to grow at rapid rates, in principle. We present self-consistent hydrodynamics simulations with direct particle integration and gas-drag coupling to estimate the rate of planetesimal growth due to pebble accretion. Wind tunnel simulations are used to explore a range of particle sizes and disc conditions. We also explore analytic estimates of planetesimal growth and numerically integrate planetesimal drift due to the accretion of small solids. Our results show that, for almost every case that we consider, there is a clearly preferred particle size for accretion that depends on the properties of the accreting planetesimal and the local disc conditions. For solids much smaller than the preferred particle size, accretion rates are significantly reduced as the particles are entrained in the gas and flow around the planetesimal. Solids much larger than the preferred size accrete at rates consistent with gravitational focusing. Our analytic estimates for pebble accretion highlight the time-scales that are needed for the growth of large objects under different disc conditions and initial planetesimal sizes.
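The large-particle limit the abstract mentions, accretion at the gravitationally focused rate, is easy to write down. A sketch with assumed disc and planetesimal values (the gas-drag enhancement for smaller particles is not modelled here):

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def focused_accretion_rate(R, M, v_rel, rho_solids):
    """Mass accretion rate with gravitational focusing:
    dM/dt = rho * pi * R^2 * (1 + v_esc^2 / v_rel^2) * v_rel.
    Valid for solids too large to be carried around the body by the gas."""
    v_esc2 = 2 * G * M / R
    return rho_solids * np.pi * R**2 * (1 + v_esc2 / v_rel**2) * v_rel

# Illustrative 100-km planetesimal (bulk density 2000 kg/m^3) in a pebble
# field with an assumed solid density of 1e-9 kg/m^3.
R = 100e3
M = 2000 * 4 / 3 * np.pi * R**3
for v in [10.0, 50.0, 200.0]:   # m/s relative velocities (assumed)
    rate = focused_accretion_rate(R, M, v, rho_solids=1e-9)
    print(f"v_rel = {v:5.0f} m/s -> dM/dt = {rate:.2e} kg/s")
```

For this body the escape speed is roughly 100 m/s, so focusing boosts the rate substantially only when relative velocities are below that value.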
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Ghaffari, Farhad
2011-01-01
Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on the unstructured grid, Reynolds-averaged Navier-Stokes flow solver USM3D, with an assumption that the flow is fully turbulent over the entire vehicle. This effort was designed to complement the prior computational activities conducted over the past five years in support of the Ares I Project with the emphasis on the vehicle's last design cycle designated as the A106 configuration. Due to a lack of flight data for this particular design's outer mold line, the initial vehicle's aerodynamic predictions and the associated error estimates were first assessed and validated against the available experimental data at representative wind tunnel flow conditions pertinent to the ascent phase of the trajectory without including any propulsion effects. Subsequently, the established procedures were then applied to obtain the longitudinal aerodynamic predictions at the selected flight flow conditions. Sample computed results and the correlations with the experimental measurements are presented. In addition, the present analysis includes the relevant data to highlight the balance between the prediction accuracy against the grid size and, thus, the corresponding computer resource requirements for the computations at both wind tunnel and flight flow conditions. NOTE: Some details have been removed from selected plots and figures in compliance with the sensitive but unclassified (SBU) restrictions. However, the content still conveys the merits of the technical approach and the relevant results.
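The abstract does not spell out the error-estimation procedure; a standard way to derive an error estimate from iterative grid refinement is Richardson extrapolation with a grid convergence index (GCI), sketched here on hypothetical force-coefficient values (not Ares I data):

```python
import numpy as np

def grid_convergence(f_coarse, f_medium, f_fine, r=2.0, fs=1.25):
    """Observed order p, Richardson-extrapolated value, and fine-grid GCI
    for solutions on three systematically refined grids (refinement ratio r,
    safety factor fs)."""
    p = np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    gci = fs * abs((f_fine - f_medium) / f_fine) / (r**p - 1.0)
    return p, f_exact, gci

# Hypothetical normal-force coefficients from three grids.
p, f_ex, gci = grid_convergence(1.085, 1.102, 1.108)
print(f"observed order ~ {p:.2f}; extrapolated C_N ~ {f_ex:.4f}; "
      f"GCI ~ {100 * gci:.2f}%")
```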
NASA Astrophysics Data System (ADS)
Goelzer, Heiko; Nowicki, Sophie; Edwards, Tamsin; Beckley, Matthew; Abe-Ouchi, Ayako; Aschwanden, Andy; Calov, Reinhard; Gagliardini, Olivier; Gillet-Chaulet, Fabien; Golledge, Nicholas R.; Gregory, Jonathan; Greve, Ralf; Humbert, Angelika; Huybrechts, Philippe; Kennedy, Joseph H.; Larour, Eric; Lipscomb, William H.; Le clec'h, Sébastien; Lee, Victoria; Morlighem, Mathieu; Pattyn, Frank; Payne, Antony J.; Rodehacke, Christian; Rückamp, Martin; Saito, Fuyuki; Schlegel, Nicole; Seroussi, Helene; Shepherd, Andrew; Sun, Sainan; van de Wal, Roderik; Ziemen, Florian A.
2018-04-01
Earlier large-scale Greenland ice sheet sea-level projections (e.g. those run during the ice2sea and SeaRISE initiatives) have shown that ice sheet initial conditions have a large effect on the projections and give rise to important uncertainties. The goal of this initMIP-Greenland intercomparison exercise is to compare, evaluate, and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties in modelled mass changes. initMIP-Greenland is the first in a series of ice sheet model intercomparison activities within ISMIP6 (the Ice Sheet Model Intercomparison Project for CMIP6), which is the primary activity within the Coupled Model Intercomparison Project Phase 6 (CMIP6) focusing on the ice sheets. Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of (1) the initial present-day state of the ice sheet and (2) the response in two idealised forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without additional forcing) and in response to a large perturbation (prescribed surface mass balance anomaly); they should not be interpreted as sea-level projections. We present and discuss results that highlight the diversity of data sets, boundary conditions, and initialisation techniques used in the community to generate initial states of the Greenland ice sheet. We find good agreement across the ensemble for the dynamic response to surface mass balance changes in areas where the simulated ice sheets overlap, but differences arise from the initial size of the ice sheet. The model drift in the control experiment is reduced for models that participated in earlier intercomparison exercises.
Nelson, Richard E; Stevens, Vanessa W; Khader, Karim; Jones, Makoto; Samore, Matthew H; Evans, Martin E; Douglas Scott, R; Slayton, Rachel B; Schweizer, Marin L; Perencevich, Eli L; Rubin, Michael A
2016-05-01
In an effort to reduce methicillin-resistant Staphylococcus aureus (MRSA) transmission through universal screening and isolation, the Department of Veterans Affairs (VA) launched the National MRSA Prevention Initiative in October 2007. The objective of this analysis was to quantify the budget impact and cost effectiveness of this initiative. An economic model was developed using published data on MRSA hospital-acquired infection (HAI) rates in the VA from October 2007 to September 2010; estimates of the costs of MRSA HAIs in the VA; and estimates of the intervention costs, including salaries of staff members hired to support the initiative at each VA facility. To estimate the rate of MRSA HAIs that would have occurred if the initiative had not been implemented, two different assumptions were made: no change and a downward temporal trend. Effectiveness was measured in life-years gained. The initiative resulted in an estimated 1,466-2,176 fewer MRSA HAIs. The initiative itself was estimated to cost $207 million during this 3-year period, while the cost savings from prevented MRSA HAIs ranged from $27 million to $75 million. The incremental cost-effectiveness ratios ranged from $28,048 to $56,944 per life-year gained. The overall impact on the VA's budget was $131-$179 million. Wide-scale implementation of a national MRSA surveillance and prevention strategy in VA inpatient settings may have prevented a substantial number of MRSA HAIs. Although the savings associated with prevented infections offset some but not all of the cost of the initiative, this model indicated that the initiative would be considered cost effective. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, H; Chen, Z; Nath, R
Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point's corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For the conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the tumor is within the margin or initialize motion compensation if it is out of the margin.
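Of the three estimators, the logistic-regression variant is the simplest to sketch: classify whether the next tracking error will exceed the 2.5 mm threshold from the features named above. The data and coefficients below are synthetic stand-ins for illustration only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 5000

# Features named in the abstract: previous tracking error, prediction
# quality, cosine of the trajectory/beam angle (synthetic stand-ins here).
prev_err = rng.gamma(2.0, 0.5, n)          # mm
pred_quality = rng.uniform(0, 1, n)
cos_angle = rng.uniform(-1, 1, n)
X = np.column_stack([prev_err, pred_quality, cos_angle])

# Synthetic ground truth: 3D error exceeding the 2.5 mm action threshold,
# generated from an invented logistic relation.
logit = -4.0 + 1.8 * prev_err - 2.0 * pred_quality + 1.0 * np.abs(cos_angle)
exceeds = rng.random(n) < 1 / (1 + np.exp(-logit))

clf = LogisticRegression().fit(X, exceeds)
p_exceed = clf.predict_proba(X)[:, 1]
# Trigger an additional kV image whenever the estimated exceedance
# probability is high.
print("sensitivity at p>0.5:",
      ((p_exceed > 0.5) & exceeds).sum() / exceeds.sum())
```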
NASA Astrophysics Data System (ADS)
Scarino, B. R.; Smith, W. L., Jr.; Minnis, P.; Bedka, K. M.
2017-12-01
Atmospheric models rely on high-accuracy, high-resolution initial radiometric and surface conditions for better short-term meteorological forecasts, as well as improved evaluation of global climate models. Continuous remote sensing of the Earth's energy budget, as conducted by the Clouds and Earth's Radiant Energy System (CERES) project, allows for near-real-time evaluation of cloud and surface radiation properties. Bias between atmospheric/surface radiation models and Earth observations is unfortunately common. For example, satellite-observed surface skin temperature (Ts), an important parameter for characterizing the energy exchange at the ground/water-atmosphere interface, can be biased due to atmospheric adjustment assumptions and anisotropy effects. Similarly, models are potentially biased by errors in initial conditions and regional forcing assumptions, which can be mitigated through assimilation with true measurements. As such, when frequent, broad-coverage, and accurate retrievals of satellite Ts are available, important insights into model estimates of Ts can be gained. The Satellite ClOud and Radiation Property retrieval System (SatCORPS) employs a single-channel thermal-infrared method to produce anisotropy-corrected Ts over clear-sky land and ocean surfaces from data taken by geostationary Earth orbit (GEO) satellite imagers. Regional and diurnal changes in model land surface temperature (LST) performance can be assessed owing to the somewhat continuous measurements of the LST offered by GEO satellites, measurements which are accurate to within 0.2 K. A seasonal, hourly comparison of satellite-observed LST with the NASA Goddard Earth Observing System Version 5 (GEOS-5) and the Modern-Era Retrospective Analysis for Research and Applications (MERRA) LST estimates is conducted to reveal regional and diurnal biases. This assessment is an important first step for evaluating the effectiveness of Ts assimilation, as well as for determining the impact anisotropy correction has on observation-model bias, and is of critical importance for CERES.
A case study of multi-seam coal mine entry stability analysis with strength reduction method.
Tulu, Ihsan Berk; Esterhuizen, Gabriel S; Klemetti, Ted; Murphy, Michael M; Sumner, James; Sloan, Michael
2016-03-01
In this paper, the advantage of using numerical models with the strength reduction method (SRM) to evaluate entry stability in complex multiple-seam conditions is demonstrated. A coal mine under variable topography from the Central Appalachian region is used as a case study. At this mine, unexpected roof conditions were encountered during development below previously mined panels. Stress mapping and observation of ground conditions were used to quantify the success of entry support systems in three room-and-pillar panels. Numerical model analyses were initially conducted to estimate the stresses induced by the multiple-seam mining at the locations of the affected entries. The SRM was used to quantify the stability factor of the supported roof of the entries at selected locations. The SRM-calculated stability factors were compared with observations made during the site visits, and the results demonstrate that the SRM adequately identifies the unexpected roof conditions in this complex case. It is concluded that the SRM can be used to effectively evaluate the likely success of roof supports and the stability condition of entries in coal mines.
Impact of TRMM and SSM/I Rainfall Assimilation on Global Analysis and QPF
NASA Technical Reports Server (NTRS)
Hou, Arthur; Zhang, Sara; Reale, Oreste
2002-01-01
Evaluation of QPF skills requires quantitatively accurate precipitation analyses. We show that assimilation of surface rain rates derived from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager and Special Sensor Microwave/Imager (SSM/I) improves quantitative precipitation estimates (QPE) and many aspects of global analyses. Short-range forecasts initialized with analyses that include satellite rainfall data generally yield significantly higher QPF threat scores and better storm track predictions. These results were obtained using a variational procedure that minimizes the difference between the observed and model rain rates by correcting the moist physics tendency of the forecast model over a 6-h assimilation window. In two case studies of Hurricanes Bonnie and Floyd, synoptic analysis shows that this procedure produces initial conditions with better-defined tropical storm features and stronger precipitation intensity associated with the storm.
Barua, Merry; Kaushik, Jaya Shankar; Gulati, Sheffali
2017-01-01
India is estimated to have over 10 million persons with autism. Rising awareness of autism in India over the last decade, together with ready access to information, has led to an increase in reported prevalence and earlier diagnosis, the creation of services, and some policy initiatives. However, there remains a gaping chasm between policy and implementation. The reach and quality of services remain sketchy and uneven, especially in the area of education. The present review discusses existing legal provisions for children and adults with autism in India. It also discusses Governmental efforts and lacunae in existing health care facilities and education services in India. While there are examples of good practice and stories of hope, strong policy initiatives have to support grassroots action to improve the condition of persons with autism in India.
NASA Astrophysics Data System (ADS)
Sinha, T.; Arumugam, S.
2012-12-01
Seasonal streamflow forecasts contingent on climate forecasts can be effectively utilized in updating water management plans and optimizing hydroelectric power generation. Streamflow in rainfall-runoff dominated basins depends critically on forecasted precipitation, in contrast to snow dominated basins, where initial hydrological conditions (IHCs) are more important. Since precipitation forecasts from Atmosphere-Ocean General Circulation Models are available at coarse scale (~2.8° by 2.8°), spatial and temporal downscaling of such forecasts is required to implement land surface models, which typically run on finer spatial and temporal scales. Consequently, multiple sources of error are introduced at various stages in predicting seasonal streamflow. Therefore, in this study, we address the following science questions: 1) How do we attribute the errors in monthly streamflow forecasts to various sources - (i) model errors, (ii) spatio-temporal downscaling, (iii) imprecise initial conditions, (iv) no forecasts, and (v) imprecise forecasts? and 2) How do monthly streamflow forecast errors propagate with lead time over various seasons? In this study, the Variable Infiltration Capacity (VIC) model is calibrated over the Apalachicola River at Chattahoochee, FL in the southeastern US and implemented with observed 1/8° daily forcings to estimate reference streamflow during 1981 to 2010. The VIC model is then forced with different schemes under updated IHCs prior to the forecasting period to estimate relative mean square errors due to: a) temporal disaggregation, b) spatial downscaling, c) reverse Ensemble Streamflow Prediction (imprecise IHCs), d) ESP (no forecasts), and e) ECHAM4.5 precipitation forecasts. Finally, error propagation under the different schemes is analyzed across lead times and seasons.
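A minimal sketch of the error-attribution bookkeeping, assuming each scheme is scored by its mean-square departure from the reference VIC streamflow normalized by the reference variance; the scheme list mirrors the abstract, while the series and error magnitudes are synthetic placeholders, not the study's results.

```python
import numpy as np

def relative_mse(sim, ref):
    """Relative mean-square error of a forecast scheme against the
    reference simulation (hypothetical scoring helper)."""
    sim, ref = np.asarray(sim), np.asarray(ref)
    return np.mean((sim - ref) ** 2) / np.var(ref)

rng = np.random.default_rng(0)
ref = np.sin(np.linspace(0, 12 * np.pi, 360)) + 1.5   # toy monthly streamflow
schemes = {                                           # schemes a)-e) above
    "temporal_disaggregation":   ref + rng.normal(0, 0.10, 360),
    "spatial_downscaling":       ref + rng.normal(0, 0.15, 360),
    "reverse_ESP_imprecise_IHC": ref + rng.normal(0, 0.25, 360),
    "ESP_no_forecast":           ref + rng.normal(0, 0.30, 360),
    "ECHAM4.5_forecast":         ref + rng.normal(0, 0.20, 360),
}
for name, sim in schemes.items():
    print(f"{name}: relative MSE = {relative_mse(sim, ref):.3f}")
```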
Direct Imaging of a Cold Jovian Exoplanet in Orbit around the Sun-Like Star GJ 504
NASA Technical Reports Server (NTRS)
Kuzuhara, M.; Tamura, M.; Kudo, T.; Janson, M; Kandori, R.; Brandt, T. D.; Thalmann, C.; Spiegel, D.; Biller, B.; Carson, J.;
2013-01-01
Several exoplanets have recently been imaged at wide separations of >10 AU from their parent stars. These span a limited range of ages (<50 Myr) and atmospheric properties, with temperatures of 800-1800 K and very red colors (J - H > 0.5 mag), implying thick cloud covers. Furthermore, substantial model uncertainties exist at these young ages due to the unknown initial conditions at formation, which can lead to an order of magnitude of uncertainty in the modeled planet mass. Here, we report the direct imaging discovery of a Jovian exoplanet around the Sun-like star GJ 504, detected as part of the SEEDS survey. The system is older than all other known directly-imaged planets; as a result, its estimated mass remains in the planetary regime independent of uncertainties related to choices of initial conditions in the exoplanet modeling. Using the most common exoplanet cooling model, and given the system age of 160(+350/-60) Myr, GJ 504 b has an estimated mass of 4(+4.5/-1.0) Jupiter masses, among the lowest of directly imaged planets. Its projected separation of 43.5 AU exceeds the typical outer boundary of approximately 30 AU predicted for the core accretion mechanism. GJ 504 b is also significantly cooler (510(+30/-20) K) and has a bluer color (J - H = -0.23 mag) than previously imaged exoplanets, suggesting a largely cloud-free atmosphere accessible to spectroscopic characterization. Thus, it has the potential of providing novel insights into the origins of giant planets, as well as their atmospheric properties.
Mission analysis report for single-shell tank leakage mitigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cruse, J.M.
1994-09-01
This document provides an analysis of the leakage mitigation mission applicable to past and potential future leakage from the Hanford Site's 149 single-shell high-level waste tanks. This mission is part of the overall mission of the Westinghouse Hanford Company Tank Waste Remediation System division to remediate the tank waste in a safe and acceptable manner. Systems engineering principles are being applied to this effort. Mission analysis supports early decision making by clearly defining program objectives. This document identifies the initial conditions and acceptable final conditions, defines the programmatic and physical interfaces and constraints, estimates the resources to carry out the mission, and establishes measures of success. The results of the mission analysis provide a consistent basis for subsequent systems engineering work.
Sensing power transfer between the human body and the environment.
Veltink, Peter H; Kortier, Henk; Schepers, H Martin
2009-06-01
The power transferred between the human body and the environment at any time, and the work performed, are important quantities to estimate when evaluating and optimizing the physical interaction between the human body and the environment in sports, physical labor, and rehabilitation. The objective of the current paper is to present a concept for estimating power transfer between the human body and the environment during free motion using sensors at the interface, without requiring measurement systems in the environment, and to demonstrate this principle experimentally. Mass and spring loads were moved by hand over a fixed height difference via varying free movement trajectories. Kinematic and kinetic quantities were measured in the handle between the hand and the load: 3-D force and moments were measured using a 6-DOF force/moment sensor module, and 3-D movement was measured using 3-D accelerometers and angular velocity sensors. Orientation was estimated from the angular velocity, using the initial orientation as the initial condition. The accelerometer signals were expressed in global coordinates using this orientation information. Velocity was estimated by integrating acceleration in global coordinates, obtained by adding gravitational acceleration to the accelerometer signals, with zero start and end velocities imposed as initial and final conditions. Power was calculated as the sum of the inner products of velocity and force and of angular velocity and moment, and work was estimated by integrating power over time. The estimated work was compared to the potential energy difference corresponding to the change in height of the loads and was accurate to within 4% for varying movements with net displacements and varying loads (mass and spring). The principle of power transfer estimation demonstrated in this paper can be used in future interfaces between the human body and the environment instrumented with body-mounted miniature 3-D force and acceleration sensors.
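The estimation pipeline above reduces to a few numerical steps. A minimal sketch under simplifying assumptions (gravity already added into the global-frame acceleration, drift removed linearly to enforce the zero end-velocity condition); all function and variable names are illustrative:

```python
import numpy as np

def estimate_power_and_work(t, force, moment, accel_global, omega):
    """Sketch: integrate global-frame acceleration to velocity, enforce the
    zero start/end velocity conditions by removing a linear drift, then
    P = F.v + M.omega and W = integral of P dt. Inputs are (n, 3) arrays."""
    dt = np.gradient(t)
    v = np.cumsum(accel_global * dt[:, None], axis=0)
    v -= np.outer((t - t[0]) / (t[-1] - t[0]), v[-1])   # zero end velocity
    power = np.einsum("ij,ij->i", force, v) + np.einsum("ij,ij->i", moment, omega)
    return power, np.sum(power * dt)

t = np.linspace(0.0, 2.0, 200)
a = np.zeros((200, 3)); a[:, 2] = 0.8 * np.sin(np.pi * t)   # toy lift motion
F = np.tile([0.0, 0.0, 30.0], (200, 1))                     # handle force, N
M = np.zeros((200, 3)); w = np.zeros((200, 3))
power, work = estimate_power_and_work(t, F, M, a, w)
print("estimated work:", round(work, 2), "J")
```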
Suehs, Brandon T; Davis, Cralen; Ng, Daniel B; Gooch, Katherine
2017-07-01
Research has demonstrated that the use of potentially inappropriate medication (PIM) is highly prevalent among older individuals and may lead to increased healthcare costs, adverse drug reactions, hospitalizations, and mortality. The purpose of this study was to examine the impact of the 2015 updates to the Beers Criteria on estimates of prevalence and cost associated with potentially inappropriate use of antimuscarinic medications indicated for treatment of overactive bladder (OAB). A retrospective database analysis was conducted using a historical cohort design and including data collected between 2007 and 2013. Claims data were used to identify Medicare Advantage patients aged ≥65 years newly initiated on antimuscarinic OAB treatment. Patients were classified with potentially inappropriate use of antimuscarinic OAB drugs based on either the 2012 Beers Criteria or the 2015 Beers Criteria. Prevalence of PIM at the time of antimuscarinic initiation was determined. Bivariate comparisons of healthcare costs and medical condition burden were conducted to compare the marginal groups of patients (who qualified based on the 2012 Beers Criteria only or the 2015 Beers Criteria only). Differences in healthcare costs for patients with and without potentially inappropriate use of urinary antimuscarinics based on the 2012 and 2015 Beers Criteria were also examined. Of 66,275 patients, overall prevalence of potentially inappropriate use of OAB antimuscarinics was higher using the 2015 Beers Criteria than when using the 2012 Beers Criteria (25.0% vs. 20.6%). Dementia was the most common PIM-qualifying condition under both versions. The 2015 Beers Criteria identified more females, more White people, and a younger population with PIM. Comorbid medical condition burden was lower using the 2015 Beers Criteria. The 2015 Beers Criteria only group had lower median unadjusted healthcare costs ($7104 vs. $8301; p < 0.001). The incremental net cost associated with potentially inappropriate use of antimuscarinic medication was higher under the 2012 Beers Criteria than under the 2015 Beers Criteria. In this cohort of patients newly initiated on antimuscarinic OAB treatment, substantial overlap of patients identified with PIM based on the 2015 Beers Criteria compared with the 2012 Beers Criteria was observed. In addition, the findings suggest that, when applied to antimuscarinic initiators, the 2015 Beers Criteria result in a greater prevalence of PIM and the identification of patients with less overall medical morbidity than the 2012 Beers Criteria.
Mathematical modelling of bone adaptation of the metacarpal subchondral bone in racehorses.
Hitchens, Peta L; Pivonka, Peter; Malekipour, Fatemeh; Whitton, R Chris
2018-06-01
In Thoroughbred racehorses, fractures of the distal limb are commonly catastrophic. Most of these fractures occur due to the accumulation of fatigue damage from repetitive loading, as evidenced by microdamage at the predilection sites for fracture. Adaptation of the bone in response to training loads is important for fatigue resistance. In order to better understand the mechanism of subchondral bone adaptation to its loading environment, we utilised a square root function defining the relationship between bone volume fraction (BV/TV) and specific surface (BS/BV) of the subchondral bone of the lateral condyles of the third metacarpal bone (MCIII) of the racehorse, and using this equation, developed a mathematical model of subchondral bone that adapts to loading conditions observed in vivo. The model is expressed as an ordinary differential equation incorporating a formation rate that is dependent on strain energy density. The loading conditions applied to a selected subchondral region, i.e. volume of interest, were estimated based on joint contact forces sustained by racehorses in training. For each of the initial conditions of BV/TV we found no difference between the subsequent homoeostatic BV/TV at any given loading condition, but the time to reach equilibrium differed by initial BV/TV and loading condition. We found that the observed values for BV/TV from the mathematical model output were a good approximation to the existing data for racehorses in training or at rest. This model provides the basis for understanding the effect of changes to training strategies that may reduce the risk of racehorse injury.
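A minimal sketch of a remodeling ODE of this general type, assuming an illustrative square-root surface function and a strain-energy-density stimulus with a set point; BV/TV here stands in for the paper's bone volume fraction symbol, and all constants and functional forms are guesses for illustration, not the paper's fitted model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def specific_surface(f):
    # assumed square-root-type relation between BV/TV and specific surface
    return np.sqrt(np.clip(f * (1.0 - f), 0.0, None))

def dfdt(t, f, sed, sed_setpoint=1.0, rate=0.05):
    # formation rate driven by the strain-energy-density stimulus,
    # acting on the available bone surface
    return [rate * specific_surface(f[0]) * (sed - sed_setpoint)]

for f0 in (0.3, 0.5, 0.7):                       # different initial BV/TV
    sol = solve_ivp(dfdt, (0.0, 400.0), [f0], args=(1.4,), max_step=1.0)
    print(f"initial BV/TV {f0}: final BV/TV {sol.y[0, -1]:.3f}")
```

In this toy version all trajectories approach the same equilibrium for a given load, with only the approach time depending on the starting value, mirroring the qualitative behaviour reported above.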
Assessment of Low Cycle Fatigue Behavior of Powder Metallurgy Alloy U720
NASA Technical Reports Server (NTRS)
Gabb, Tomothy P.; Bonacuse, Peter J.; Ghosn, Louis J.; Sweeney, Joseph W.; Chatterjee, Amit; Green, Kenneth A.
2000-01-01
The fatigue lives of modern powder metallurgy disk alloys are influenced by variabilities in alloy microstructure and mechanical properties. These properties can vary as functions of variables in the different steps of materials/component processing: powder atomization, consolidation, extrusion, forging, heat treating, and machining. It is important to understand the relationship between the statistical variations in life and these variables, as well as the change in life distribution due to changes in fatigue loading conditions. The objective of this study was to investigate these relationships in a nickel-base disk superalloy, U720, produced using powder metallurgy processing. Multiple strain-controlled fatigue tests were performed at 538 C (1000 F) at limited sets of test conditions. Analyses were performed to: (1) assess variations of microstructure, mechanical properties, and LCF failure initiation sites as functions of disk processing and loading conditions; and (2) compare mean and minimum fatigue life predictions using different approaches for modeling the data from assorted test conditions. Significant variations in life were observed as functions of the disk processing variables evaluated. However, the lives of all specimens could still be combined and modeled together. The failure initiation sites for tests performed at a strain ratio Rε = εmin/εmax of 0 were different from those in tests at a strain ratio of -1. An approach could still be applied to account for the differences in mean and maximum stresses and strains. This allowed the data in tests of various conditions to be combined for more robust statistical estimates of mean and minimum lives.
Virtual parameter-estimation experiments in Bioprocess-Engineering education.
Sessink, Olivier D T; Beeftink, Hendrik H; Hartog, Rob J M; Tramper, Johannes
2006-05-01
Cell growth kinetics and reactor concepts constitute essential knowledge for Bioprocess-Engineering students. Traditional learning of these concepts is supported by lectures, tutorials, and practicals: ICT offers opportunities for improvement. A virtual-experiment environment was developed that supports both model-related and experimenting-related learning objectives. Students have to design experiments to estimate model parameters: they choose initial conditions and 'measure' output variables. The results contain experimental error, which is an important constraint for experimental design. Students learn from these results and use the new knowledge to re-design their experiment. Within a couple of hours, students design and run many experiments that would take weeks in reality. Usage was evaluated in two courses with questionnaires and in the final exam. The faculty members involved in the two courses are convinced that the experiment environment supports essential learning objectives well.
Nonlinear Thermal Instability in Compressible Viscous Flows Without Heat Conductivity
NASA Astrophysics Data System (ADS)
Jiang, Fei
2018-04-01
We investigate the thermal instability of a smooth equilibrium state, in which the density function satisfies Schwarzschild's (instability) condition, in a compressible viscous flow without heat conductivity in the presence of a uniform gravitational field in a three-dimensional bounded domain. We show that the equilibrium state is linearly unstable by a modified variational method. Then, based on the constructed linearly unstable solutions and a local well-posedness result for classical solutions to the original nonlinear problem, we take the initial data of the linearly unstable solutions as initial data for the original nonlinear problem and establish an appropriate energy estimate of Gronwall type. With the help of this energy estimate, we finally show that the equilibrium state is nonlinearly unstable in the sense of Hadamard by a careful bootstrap instability argument.
Object tracking algorithm based on the color histogram probability distribution
NASA Astrophysics Data System (ADS)
Li, Ning; Lu, Tongwei; Zhang, Yanduo
2018-04-01
This paper addresses tracking failures caused by target occlusion, interference from background objects similar to the target, and changes in light intensity. The proposed method uses the HSV and YCbCr color channels to correct the update center of the target and continuously adapts the image threshold to improve detection. Clustering the initial obstacles gives a rough range that narrows the threshold interval, maximizing the chance of detecting the target. To improve the accuracy of the detector, a Kalman filter is added to estimate the target's state region, and a direction predictor based on a Markov model is introduced to realize target state estimation under background color interference and to strengthen the detector's ability to distinguish similar objects. The experimental results show that the improved algorithm is more accurate and processes frames faster.
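The Kalman-filter stage can be illustrated with a standard constant-velocity model for the target's image position; the matrices below are the textbook choices and the noise levels are assumptions, since the paper's exact state model is not given here.

```python
import numpy as np

# Constant-velocity Kalman filter over the state [px, py, vx, vy]
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)        # process noise (assumed)
R = 4.0 * np.eye(2)         # measurement noise of the color-based detector (assumed)

x = np.array([0.0, 0.0, 1.0, 0.5])
P = np.eye(4)
for z in ([1.1, 0.4], [2.3, 1.1], [2.9, 1.6], [4.2, 2.0]):   # toy detections
    x, P = F @ x, F @ P @ F.T + Q                # predict the search region
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (np.asarray(z) - H @ x)          # update with the detection
    P = (np.eye(4) - K @ H) @ P
    print("estimated position:", x[:2].round(2))
```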
Time-of-flight PET time calibration using data consistency
NASA Astrophysics Data System (ADS)
Defrise, Michel; Rezaei, Ahmadreza; Nuyts, Johan
2018-05-01
This paper presents new data driven methods for the time of flight (TOF) calibration of positron emission tomography (PET) scanners. These methods are derived from the consistency condition for TOF PET; they can be applied to data measured with an arbitrary tracer distribution and are numerically efficient because they do not require a preliminary image reconstruction from the non-TOF data. Two-dimensional simulations are presented for one of the methods, which only involves the first two moments of the data with respect to the TOF variable. The numerical results show that this method estimates the detector timing offsets with errors that are larger than those obtained via an initial non-TOF reconstruction, but remain smaller than the TOF resolution and thereby have a limited impact on the quantitative accuracy of the activity image estimated with standard maximum likelihood reconstruction algorithms.
Blanton, H; Gerrard, M
1997-07-01
Recent research has incorporated situational factors into assessment of risk. Working from a rational appraisal framework, however, these studies have not emphasized contextual features that might introduce motivated risk assessment. In the current study, participants (N = 40 male undergraduates) lowered their risk perceptions for STDs following the induction of a sexual motivation. In an initial baseline condition, participants estimated the risk of contracting STDs from partners with relatively high- or low-risk sexual histories. In a subsequent trial, participants repeated the imagery task while viewing photographs that were high or low in sex appeal. As predicted, participants reduced their risk perceptions when they viewed photographs high in sex appeal. The only necessary precondition was the presence of nondiagnostic information from which they could construct biased risk estimates.
Estimation of covariate-specific time-dependent ROC curves in the presence of missing biomarkers.
Li, Shanshan; Ning, Yang
2015-09-01
Covariate-specific time-dependent ROC curves are often used to evaluate the diagnostic accuracy of a biomarker with time-to-event outcomes, when certain covariates have an impact on the test accuracy. In many medical studies, measurements of biomarkers are subject to missingness due to high cost or limitation of technology. This article considers estimation of covariate-specific time-dependent ROC curves in the presence of missing biomarkers. To incorporate the covariate effect, we assume a proportional hazards model for the failure time given the biomarker and the covariates, and a semiparametric location model for the biomarker given the covariates. In the presence of missing biomarkers, we propose a simple weighted estimator for the ROC curves where the weights are inversely proportional to the selection probability. We also propose an augmented weighted estimator which utilizes information from the subjects with missing biomarkers. The augmented weighted estimator enjoys the double-robustness property in the sense that the estimator remains consistent if either the missing data process or the conditional distribution of the missing data given the observed data is correctly specified. We derive the large sample properties of the proposed estimators and evaluate their finite sample performance using numerical studies. The proposed approaches are illustrated using the US Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset.
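A stripped-down sketch of the inverse-probability-weighting idea behind the simple weighted estimator, assuming known selection probabilities and ignoring the covariate and time-to-event structure of the actual estimator; all names and the synthetic data are illustrative.

```python
import numpy as np

def ipw_tpr_fpr(marker, disease, observed, sel_prob, c):
    """Weight subjects with observed biomarkers by 1 / P(observed) when
    estimating TPR/FPR at threshold c (hypothetical simplified helper)."""
    w = observed / sel_prob
    pos = marker > c
    tpr = np.sum(w * pos * disease) / np.sum(w * disease)
    fpr = np.sum(w * pos * (1 - disease)) / np.sum(w * (1 - disease))
    return tpr, fpr

rng = np.random.default_rng(1)
n = 2000
disease = rng.binomial(1, 0.3, n)
marker = rng.normal(disease * 1.0, 1.0)             # biomarker, higher if diseased
sel_prob = np.clip(0.4 + 0.3 * disease, 0.05, 1.0)  # missingness depends on status
observed = rng.binomial(1, sel_prob)                # 1 if biomarker measured
print(ipw_tpr_fpr(marker, disease, observed, sel_prob, c=0.5))
```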
NASA Astrophysics Data System (ADS)
Zhu, Qimeng; Chen, Jia; Gou, Guoqing; Chen, Hui; Li, Peng; Gao, W.
2016-10-01
Residual stress measurement and control are highly important for the safety of high-speed train structures and are critical for structure design. The longitudinal critically refracted (LCR) wave technique is the most widely used ultrasonic method for measuring residual stress, but its accuracy is strongly related to the test parameters, namely the flight time at the stress-free condition (t0), the stress coefficient (K), and the initial stress (σ0) of the measured materials. Differences in microstructure among the weld zone, heat-affected zone, and base metal (BM) cause these experimental parameters to diverge. However, the majority of researchers use the BM parameters to determine the residual stress in other zones and ignore the initial stress (σ0) in calibration samples. Therefore, the measured residual stress in different zones often carries large errors, which may compromise the safe design of important structures. Reliable ultrasonic estimation of residual stress requires separating microstructure effects from acoustoelastic effects. In this paper, the effects of initial stress and microstructure on the stress coefficient K and the flight time t0 at the stress-free condition have been studied, and the residual stress with and without the different corrections was investigated. The results indicate that residual stresses obtained with correction are more accurate for structure design.
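For concreteness, a sketch of the usual LCR acoustoelastic relation with zone-specific calibration, assuming the common linear form sigma = sigma0 + (t - t0)/(K*t0); the paper's exact formulation and all numerical values below are assumptions for illustration.

```python
def residual_stress(t_flight, t0, K, sigma0=0.0):
    """Common LCR form: the relative change in flight time is proportional
    to stress, sigma = sigma0 + (t_flight - t0) / (K * t0). Zone-specific
    t0, K, and sigma0 should be used for weld, HAZ, and base metal."""
    return sigma0 + (t_flight - t0) / (K * t0)

# Illustrative only: evaluating the same flight time with base-metal
# calibration vs. assumed weld-zone calibration gives different stresses.
print(residual_stress(10.0042e-6, t0=10.0e-6, K=1.2e-11))                  # BM params
print(residual_stress(10.0042e-6, t0=10.0015e-6, K=1.4e-11, sigma0=5e6))  # weld params
```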
NASA Technical Reports Server (NTRS)
Wilson, Jack; Paxson, Daniel E.
2002-01-01
In one-dimensional calculations of pulsed detonation engine (PDE) performance, the exit boundary condition is frequently taken to be a constant static pressure. In reality, for an isolated detonation tube, after the detonation wave arrives at the exit plane, there will be a region of high pressure, which will gradually return to ambient pressure as an almost spherical shock wave expands away from the exit, and weakens. Initially, the flow is supersonic, unaffected by external pressure, but later becomes subsonic. Previous authors have accounted for this situation either by assuming the subsonic pressure decay to be a relaxation phenomenon, or by running a two-dimensional calculation first, including a domain external to the detonation tube, and using the resulting exit pressure temporal distribution as the boundary condition for one-dimensional calculations. These calculations show that the increased pressure does affect the PDE performance. In the present work, a simple model of the exit process is used to estimate the pressure decay time. The planar shock wave emerging from the tube is assumed to transform into a spherical shock wave. The initial strength of the spherical shock wave is determined from comparison with experimental results. Its subsequent propagation, and resulting pressure at the tube exit, is given by a numerical blast wave calculation. The model agrees reasonably well with other, limited, results. Finally, the model was used as the exit boundary condition for a one-dimensional calculation of PDE performance to obtain the thrust wall pressure for a hydrogen-air detonation in tubes of length-to-diameter ratio (L/D) 4 and 10, as well as for the original, constant pressure boundary condition. The modified boundary condition had no performance impact for values of L/D > 10, and moderate impact for L/D = 4.
Windrum, Paul; García-Goñi, Manuel; Coad, Holly
2016-06-01
Education leads to better health-related decisions and protective behaviors and is especially important for patients with chronic conditions. Self-management education programs have been shown to be beneficial for patients with different chronic conditions and to have a higher impact on health outcomes than didactic education. To investigate improvements in glycemic control (measured by glycated hemoglobin A1c) in patients with type 2 diabetes mellitus, our comparative trial involved one group of patients receiving patient-centered education and another receiving didactic education. We dealt with selection bias issues, estimated the differential impact of both programs, and validated our analysis using quantile regression techniques. We found evidence of better mean glycemic control in patients receiving the patient-centered program, which engaged patients better. Nevertheless, the differential impact is nonmonotonic. Patients who started within the healthy range maintained their condition better under the patient-centered program. Patients close to, but not within, the healthy range benefited equally from attending either program. Patients with very high glycemic levels benefited significantly more from attending the patient-centered program. Finally, patients with the worst initial glycemic control (far from the healthy range) improved their diabetic condition equally, regardless of which program they attended. Different patients are sensitive to different categories of education programs. The optimal, cost-effective design of preventative programs for patients with chronic conditions needs to account for the different impact in different "patient categories." This implies stratifying patients and providing the appropriate preventative education program, or looking for alternative policy implementations for unresponsive patients who have the most severe condition and are the most costly.
Holtschlag, David J.
2009-01-01
Two-dimensional hydrodynamic and transport models were applied to a 34-mile reach of the Ohio River from Cincinnati, Ohio, upstream to Meldahl Dam near Neville, Ohio. The hydrodynamic model was based on the generalized finite-element hydrodynamic code RMA2 to simulate depth-averaged velocities and flow depths. The generalized water-quality transport code RMA4 was applied to simulate the transport of vertically mixed, water-soluble constituents that have a density similar to that of water. Boundary conditions for hydrodynamic simulations included water levels at the U.S. Geological Survey water-level gaging station near Cincinnati, Ohio, and flow estimates based on a gate rating at Meldahl Dam. Flows estimated on the basis of the gate rating were adjusted with limited flow-measurement data to more nearly reflect current conditions. An initial calibration of the hydrodynamic model was based on data from acoustic Doppler current profiler surveys and water-level information. These data provided flows, horizontal water velocities, water levels, and flow depths needed to estimate hydrodynamic parameters related to channel resistance to flow and eddy viscosity. Similarly, dye concentration measurements from two dye-injection sites on each side of the river were used to develop initial estimates of transport parameters describing mixing and dye-decay characteristics needed for the transport model. A nonlinear regression-based approach was used to estimate parameters in the hydrodynamic and transport models. Parameters describing channel resistance to flow (Manning’s “n”) were estimated in areas of deep and shallow flows as 0.0234 and 0.0275, respectively. The estimated RMA2 Peclet number, which is used to dynamically compute eddy-viscosity coefficients, was 38.3, which is in the range of 15 to 40 that is typically considered appropriate. Resulting hydrodynamic simulations explained 98.8 percent of the variability in depth-averaged flows, 90.0 percent of the variability in water levels, 93.5 percent of the variability in flow depths, and 92.5 percent of the variability in velocities. Estimates of the water-quality-transport-model parameters describing turbulent mixing characteristics converged to different values for the two dye-injection reaches. For the Big Indian Creek dye-injection study, an RMA4 Peclet number of 37.2 was estimated, which was within the recommended range of 15 to 40, and similar to the RMA2 Peclet number. The estimated dye-decay coefficient was 0.323. Simulated dye concentrations explained 90.2 percent of the variations in measured dye concentrations for the Big Indian Creek injection study. For the dye-injection reach starting downstream from Twelvemile Creek, however, an RMA4 Peclet number of 173 was estimated, which is far outside the recommended range. Simulated dye concentrations were similar to measured concentration distributions at the first four transects downstream from the dye-injection site that were considered vertically mixed. Farther downstream, however, simulated concentrations did not match the attenuation of maximum concentrations or cross-channel transport of dye that were measured. The difficulty of determining a consistent RMA4 Peclet number was related to the two-dimensional model assumption that velocity distributions are closely approximated by their depth-averaged values. Analysis of velocity data showed significant variations in velocity direction with depth in channel reaches with curvature.
Channel irregularities (including curvatures, depth irregularities, and shoreline variations) apparently produce transverse currents that affect the distribution of constituents, but are not fully accounted for in a two-dimensional model. The two-dimensional flow model, using channel resistance to flow parameters of 0.0234 and 0.0275 for deep and shallow areas, respectively, and an RMA2 Peclet number of 38.3, and the RMA4 transport model with a Peclet number of 37.2, may have utility for emergency-planning purposes. Emergency-response efforts would be enhanced by continuous streamgaging records downstream from Meldahl Dam, real-time water-quality monitoring, and three-dimensional modeling. Decay coefficients are constituent specific.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the initial mixing distance, is estimated by: Cp = 25(Wi)/(T^0.7 Q), where Cp is the peak concentration... equation: Tp = 9.25×10^6 Wi/(Q Cp), where Tp is the time estimate, in hours, and Wi, Cp, and Q are defined above... downstream location, past the initial mixing distance, is estimated by: Cp = C(q)/(Q+ where Cp and Q are...
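Read literally, the two complete formulas in the excerpt can be evaluated directly; a sketch follows, with variable units as defined in the full regulation text (the excerpt is truncated, so the third expression is omitted and the inputs below are illustrative placeholders).

```python
def peak_concentration(Wi, T, Q):
    # Cp = 25*Wi / (T^0.7 * Q), as given in the excerpt; units per the
    # full regulation text, which is truncated above
    return 25.0 * Wi / (T ** 0.7 * Q)

def time_to_peak(Wi, Q, Cp):
    # Tp = 9.25e6 * Wi / (Q * Cp), with Tp in hours per the excerpt
    return 9.25e6 * Wi / (Q * Cp)

Cp = peak_concentration(Wi=100.0, T=24.0, Q=5000.0)   # illustrative inputs
print(Cp, time_to_peak(100.0, 5000.0, Cp))
```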
ERIC Educational Resources Information Center
Korendijk, Elly J. H.; Moerbeek, Mirjam; Maas, Cora J. M.
2010-01-01
In the case of trials with nested data, the optimal allocation of units depends on the budget, the costs, and the intracluster correlation coefficient. In general, the intracluster correlation coefficient is unknown in advance and an initial guess has to be made based on published values or subject matter knowledge. This initial estimate is likely…
Reducing orbital eccentricity of precessing black-hole binaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buonanno, Alessandra; Taracchini, Andrea; Kidder, Lawrence E.
2011-05-15
Building initial conditions for generic binary black-hole evolutions which are not affected by initial spurious eccentricity remains a challenge for numerical-relativity simulations. This problem can be overcome by applying an eccentricity-removal procedure which consists of evolving the binary black hole for a couple of orbits, estimating the resulting eccentricity, and then restarting the simulation with corrected initial conditions. The presence of spins can complicate this procedure. As predicted by post-Newtonian theory, spin-spin interactions and precession prevent the binary from moving along an adiabatic sequence of spherical orbits, inducing oscillations in the radial separation and in the orbital frequency. For single-spin binary black holes these oscillations are a direct consequence of monopole-quadrupole interactions. However, spin-induced oscillations occur at approximately twice the orbital frequency, and therefore can be distinguished and disentangled from the initial spurious eccentricity which occurs at approximately the orbital frequency. Taking this into account, we develop a new eccentricity-removal procedure based on the derivative of the orbital frequency and find that it is rather successful in reducing the eccentricity measured in the orbital frequency to values less than 10^-4 when moderate spins are present. We test this new procedure using numerical-relativity simulations of binary black holes with mass ratios 1.5 and 3, spin magnitude 0.5, and various spin orientations. The numerical simulations exhibit spin-induced oscillations in the dynamics at approximately twice the orbital frequency. Oscillations of similar frequency are also visible in the gravitational-wave phase and frequency of the dominant l=2, m=2 mode.
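A toy version of the frequency-derivative diagnostic: fit the orbital-frequency derivative to a smooth inspiral trend plus a sinusoid near the orbital frequency and convert the oscillation amplitude into an eccentricity estimate. The estimator e ≈ B/(2*omega^2) is one common choice in the eccentricity-removal literature, the data are synthetic, and the paper's exact fitting function may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

omega_mean = 0.02                                   # mean orbital frequency (geometric units)
t = np.linspace(0, 600, 2000)
# Synthetic d(omega)/dt: secular inspiral trend + eccentricity oscillation
domega_dt = 1e-7 + 5e-11 * t + 8e-9 * np.sin(omega_mean * t + 0.3)

def model(t, a0, a1, B, phi):
    # polynomial trend plus sinusoid at the orbital frequency
    return a0 + a1 * t + B * np.sin(omega_mean * t + phi)

(a0, a1, B, phi), _ = curve_fit(model, t, domega_dt, p0=[1e-7, 1e-11, 1e-9, 0.0])
print("estimated eccentricity ~", abs(B) / (2 * omega_mean**2))
```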
Integrated Impacts of environmental factors on the degradation of fumigants
NASA Astrophysics Data System (ADS)
Lee, J.; Yates, S. R.
2007-12-01
Volatilization of fumigants is a recognized source of air pollution. Fumigants are applied before planting to control nematodes and soil-borne pathogens and thereby increase the production of high-cash crops. One technology for reducing fumigant volatilization to the atmosphere is to enhance fumigant degradation in soil. Fumigant degradation is affected by environmental factors such as moisture content, temperature, initial concentration of injected fumigants, and soil properties. However, the effect of each factor on degradation has been characterized only to a limited extent, and the integrated impact of these factors has not yet been described. Degradation of 1,3-dichloropropene (1,3-D) was investigated under various temperatures (20-60 °C), moisture contents (0-30%), and initial concentrations (0.6-60 mg/kg) in Arlington sandy loam soil. Abiotic and biotic degradation processes were distinguished using two sterilization methods (HgCl2 and autoclaving), and the impacts of environmental factors were assessed separately for abiotic and biotic degradation. Degradation rates (k) of the cis and trans isomers of 1,3-D were first estimated by first-order kinetics and then modified according to the impacts of the environmental factors. The Arrhenius equation and Walker's equation, conventionally used to describe temperature and moisture effects on degradation, were assessed for integrated impacts, and a logarithmic correlation was observed between the initial concentration of applied fumigant and the degradation rate. Understanding the integrated impacts of environmental factors on degradation will help in designing more effective emission reduction schemes under various conditions and provide more practical parameters for modeling simulations.
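A sketch of how the pieces combine, assuming first-order decay whose rate carries an Arrhenius temperature term, a Walker-type power-law moisture term, and a logarithmic initial-concentration term as the abstract suggests; every parameter value below is an illustrative guess, not a fit to the study's data.

```python
import numpy as np

def degradation_rate(T_kelvin, moisture, C0, A=1e8, Ea=5.0e4, b=0.7, c=-0.1):
    """Composite rate (1/day, by assumption): Arrhenius in temperature,
    Walker-type power law in moisture, logarithmic in the initial
    concentration C0 (mg/kg). All constants are illustrative."""
    R = 8.314
    k = A * np.exp(-Ea / (R * T_kelvin))     # Arrhenius term
    k *= moisture ** b                       # Walker-type moisture term
    k *= 1.0 + c * np.log10(C0)              # initial-concentration term
    return k

def concentration(t_days, C0, k):
    return C0 * np.exp(-k * t_days)          # first-order kinetics

k = degradation_rate(T_kelvin=303.15, moisture=0.15, C0=6.0)
print(f"k = {k:.3f} 1/day, C(7 d) = {concentration(7, 6.0, k):.2f} mg/kg")
```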
Interaction of lithotripter shockwaves with single inertial cavitation bubbles.
Klaseboer, Evert; Fong, Siew Wan; Turangan, Cary K; Khoo, Boo Cheong; Szeri, Andrew J; Calvisi, Michael L; Sankin, Georgy N; Zhong, Pei
2007-01-01
The dynamic interaction of a shockwave (modelled as a pressure pulse) with an initially spherically oscillating bubble is investigated. Upon the shockwave impact, the bubble deforms non-spherically and the flow field surrounding the bubble is determined with potential flow theory using the boundary-element method (BEM). The primary advantage of this method is its computational efficiency. The simulation process is repeated until the two opposite sides of the bubble surface collide with each other (i.e. the formation of a jet along the shockwave propagation direction). The collapse time of the bubble, its shape and the velocity of the jet are calculated. Moreover, the impact pressure is estimated based on water-hammer pressure theory. The Kelvin impulse, kinetic energy and bubble displacement (all at the moment of jet impact) are also determined. Overall, the simulated results compare favourably with experimental observations of lithotripter shockwave interaction with single bubbles (using laser-induced bubbles at various oscillation stages). The simulations confirm the experimental observation that the most intense collapse, with the highest jet velocity and impact pressure, occurs for bubbles with intermediate size during the contraction phase when the collapse time of the bubble is approximately equal to the compressive pulse duration of the shock wave. Under this condition, the maximum amount of energy of the incident shockwave is transferred to the collapsing bubble. Further, the effect of the bubble contents (ideal gas with different initial pressures) and the initial conditions of the bubble (initially oscillating vs. non-oscillating) on the dynamics of the shockwave-bubble interaction are discussed.
Lee, Jung Keun; Park, Edward J.; Robinovitch, Stephen N.
2012-01-01
This paper proposes a Kalman filter-based attitude (i.e., roll and pitch) estimation algorithm using an inertial sensor composed of a triaxial accelerometer and a triaxial gyroscope. In particular, the proposed algorithm has been developed for accurate attitude estimation during dynamic conditions, in which external acceleration is present. Although external acceleration is the main source of the attitude estimation error and despite the need for its accurate estimation in many applications, this problem that can be critical for the attitude estimation has not been addressed explicitly in the literature. Accordingly, this paper addresses the combined estimation problem of the attitude and external acceleration. Experimental tests were conducted to verify the performance of the proposed algorithm in various dynamic condition settings and to provide further insight into the variations in the estimation accuracy. Furthermore, two different approaches for dealing with the estimation problem during dynamic conditions were compared, i.e., threshold-based switching approach versus acceleration model-based approach. Based on an external acceleration model, the proposed algorithm was capable of estimating accurate attitudes and external accelerations for short accelerated periods, showing its high effectiveness during short-term fast dynamic conditions. Contrariwise, when the testing condition involved prolonged high external accelerations, the proposed algorithm exhibited gradually increasing errors. However, as soon as the condition returned to static or quasi-static conditions, the algorithm was able to stabilize the estimation error, regaining its high estimation accuracy.
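The threshold-based switching strategy compared above can be sketched with a simple roll/pitch filter: gyro propagation plus an accelerometer tilt correction that is gated by a norm test. The full Kalman filter with an explicit external-acceleration state is omitted here, and all gains and thresholds are assumptions.

```python
import numpy as np

def attitude_step(roll, pitch, gyro, accel, dt, g=9.81, alpha=0.02, thresh=0.5):
    """One update of a roll/pitch estimator (rad). Gyro rates propagate the
    angles; the accelerometer tilt correction is applied only when the
    measured specific force is close to gravity, i.e. when external
    acceleration appears small (threshold-based switching)."""
    roll += gyro[0] * dt                          # gyro propagation
    pitch += gyro[1] * dt
    roll_acc = np.arctan2(accel[1], accel[2])     # accelerometer-implied tilt
    pitch_acc = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))
    if abs(np.linalg.norm(accel) - g) < thresh:   # gate out external acceleration
        roll = (1 - alpha) * roll + alpha * roll_acc
        pitch = (1 - alpha) * pitch + alpha * pitch_acc
    return roll, pitch

roll = pitch = 0.0
for _ in range(100):   # static case: estimates converge to the accelerometer tilt
    roll, pitch = attitude_step(roll, pitch, [0.0, 0.0], [0.3, 0.2, 9.77], 0.01)
print(np.degrees([roll, pitch]).round(2))
```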
Observed secondary organic aerosol (SOA) and organic nitrate yields from NO3 oxidation of isoprene
NASA Astrophysics Data System (ADS)
Rollins, A. W.; Fry, J. L.; Kiendler-Scharr, A.; Wooldridge, P. J.; Brown, S. S.; Fuchs, H.; Dube, W.; Mensah, A.; Tillmann, R.; Dorn, H.; Brauers, T.; Cohen, R. C.
2008-12-01
Formation of organic nitrates and secondary organic aerosol (SOA) from the NO3 oxidation of isoprene has been studied at atmospheric concentrations of VOC (10 ppb) and oxidant (<100 ppt NO3) in the presence of ammonium sulfate seed aerosol in the atmosphere simulation chamber SAPHIR at Forschungszentrum Jülich. Cavity Ringdown (CaRDS) and thermal dissociation-CaRDS measurements of NO3 and N2O5, Thermal Dissociation - Laser Induced Fluorescence (TD-LIF) detection of alkyl nitrates (RONO2), and Aerodyne Aerosol Mass Spectrometer (AMS) measurements of aerosol composition were all used in comparison to a Master Chemical Mechanism (MCM) based chemical kinetics box model to quantify the product yields from two stages in isoprene oxidation. We find significant yields of organic nitrate formation both from the initial isoprene + NO3 reaction (71%) and from the reaction of NO3 with the initial oxidation products (30-60%). Under these low concentration conditions (~1 μg/m3), measured SOA production was greater than instrument noise only for the second oxidation step. Based on the modeled chemistry, we estimate an SOA mass yield of 10% (relative to isoprene mass reacted) for the reaction of the initial oxidation products with NO3. This yield is found to be consistent with the estimated saturation concentration (C*) of the presumed gas products of the doubly oxidized isoprene, where both oxidations lead to the addition of nitrate, carbonyl, and hydroxyl groups.
NASA Astrophysics Data System (ADS)
Buyuk, Ersin; Karaman, Abdullah
2017-04-01
We estimated transmissivity and storage coefficient values from single-well water-level measurements positioned ahead of the mining face by using the particle swarm optimization (PSO) technique. The water-level response to the advancing mining face is described by a semi-analytical function that is not suitable for conventional inversion schemes because its partial derivatives are difficult to calculate. Moreover, the logarithmic behaviour of the model makes it difficult to obtain an initial model that leads to stable convergence. PSO appears to provide a reliable solution that produces a reasonable fit between the water-level data and the model function response. Optimization methods are used to find optimum conditions, consisting of either the minimum or the maximum of a given objective function with regard to some criteria. Unlike PSO, traditional non-linear optimization methods have been used for many hydrogeologic and geophysical engineering problems; these methods suffer from difficulties such as dependence on the initial model, the evaluation of partial derivatives required when linearizing the model, and trapping in local optima. Particle swarm optimization, a modern global optimization method inspired by the social behaviour of bird swarms, does not depend on an initial model; as a non-derivative stochastic process, it is capable of searching all possible solutions in the model space around both local and global optimum points, and it appears to be a reliable and powerful algorithm for complex engineering applications.
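A minimal, self-contained PSO of the textbook form described, applied to a toy two-parameter misfit standing in for the water-level fit; the update constants and the stand-in objective are illustrative assumptions.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Textbook particle swarm optimizer: inertia, cognitive, and social
    terms; needs no initial model and no derivatives of the objective."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Toy misfit standing in for the water-level drawdown fit: recover
# log10(transmissivity) and log10(storage coefficient) near (-3, -4).
misfit = lambda p: (p[0] + 3.0) ** 2 + (p[1] + 4.0) ** 2
print(pso(misfit, bounds=[(-6.0, 0.0), (-8.0, -1.0)]))
```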
NASA Astrophysics Data System (ADS)
Yip, Shui Cheung
We study the longitudinal motion of a nonlinearly viscoelastic bar with one end fixed and the other end attached to a heavy tip mass. This problem is a precise continuum mechanical analog of the basic discrete mechanical problem of the motion of a mass point on a (massless) spring. This motion is governed by an initial-boundary-value problem for a class of third-order quasilinear parabolic-hyperbolic partial differential equations subject to a nonstandard boundary condition, which is the equation of motion of the tip mass. The ratio of the mass of the bar to that of the tip mass is taken to be a small parameter ε. We prove that this problem has a unique regular solution that admits a valid asymptotic expansion, including an initial-layer expansion, in powers of ε for ε near 0. The fundamental constitutive hypothesis that the tension be a uniformly monotone function of the strain rate plays a critical role in a delicate proof that each term of the initial layer expansion decays exponentially in time. These results depend on new decay estimates for the solution of quasilinear parabolic equations. The constitutive hypothesis that the viscosity become large where the bar nears total compression leads to important uniform bounds for the strain and the strain rate. Higher-order energy estimates support the proof by the Schauder Fixed-Point Theorem of the existence of solutions having a level of regularity appropriate for the asymptotics.
A latent transition model of the effects of a teen dating violence prevention initiative.
Williams, Jason; Miller, Shari; Cutbush, Stacey; Gibbs, Deborah; Clinton-Sherrod, Monique; Jones, Sarah
2015-02-01
Patterns of physical and psychological teen dating violence (TDV) perpetration, victimization, and related behaviors were examined with data from the evaluation of the Start Strong: Building Healthy Teen Relationships initiative, a dating violence primary prevention program targeting middle school students. Latent class and latent transition models were used to estimate distinct patterns of TDV and related behaviors of bullying and sexual harassment in seventh grade students at baseline and to estimate transition probabilities from one pattern of behavior to another at the 1-year follow-up. Intervention effects were estimated by conditioning transitions on exposure to Start Strong. Latent class analyses suggested four classes best captured patterns of these interrelated behaviors. Classes were characterized by elevated perpetration and victimization on most behaviors (the multiproblem class), bullying perpetration/victimization and sexual harassment victimization (the bully-harassment victimization class), bullying perpetration/victimization and psychological TDV victimization (bully-psychological victimization), and experience of bully victimization (bully victimization). Latent transition models indicated greater stability of class membership in the comparison group. Intervention students were less likely to transition to the most problematic pattern and more likely to transition to the least problem class. Although Start Strong has not been found to significantly change TDV, alternative evaluation models may find important differences. Latent transition analysis models suggest positive intervention impact, especially for the transitions at the most and the least positive end of the spectrum.
Size and performance of anoxic limestone drains to neutralize acidic mine drainage
Cravotta, C.A.
2003-01-01
Acidic mine drainage (AMD) can be neutralized effectively in underground, anoxic limestone drains (ALDs). Owing to reaction between the AMD and limestone (CaCO3), the pH and concentrations of alkalinity and calcium increase asymptotically with detention time in the ALD, while concentrations of sulfate, ferrous iron, and manganese typically are unaffected. This paper introduces a method to predict the alkalinity produced within an ALD and to estimate the mass of limestone required for its construction on the basis of data from short-term, closed-container (cubitainer) tests. The cubitainer tests, which used an initial mass of 4 kg crushed limestone completely inundated with 2.8 L AMD, were conducted for 11 to 16 d and provided estimates for the initial and maximum alkalinities and corresponding rates of alkalinity production and limestone dissolution. Long-term (5-11 yr) data for alkalinity and CaCO3 flux at the Howe Bridge, Morrison, and Buck Mountain ALDs in Pennsylvania, USA, indicate that rates of alkalinity production and limestone dissolution under field conditions were comparable with those in cubitainers filled with limestone and AMD from each site. The alkalinity of effluent and intermediate samples along the flow path through the ALDs and long-term trends in the residual mass of limestone and the effluent alkalinity were estimated as a function of the computed detention time within the ALD and second-order dissolution rate models for cubitainer tests. Thus, cubitainer tests can be a useful tool for designing ALDs and predicting their performance.
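One way to see how cubitainer-derived quantities feed a design calculation: assuming a second-order rate model of the general type the paper uses, dC/dt = k*(Cmax - C)^2, with C0 and Cmax taken from the test, the alkalinity at a given detention time has the closed form below. The rate constant and the numbers are illustrative, not the paper's fitted values.

```python
def alkalinity(t, C0, Cmax, k):
    """Closed-form solution of dC/dt = k*(Cmax - C)**2 with C(0) = C0:
    C(t) = Cmax - (Cmax - C0) / (1 + k*t*(Cmax - C0)). C0 and Cmax would
    come from a cubitainer test; k here is an assumed value."""
    return Cmax - (Cmax - C0) / (1.0 + k * t * (Cmax - C0))

# Alkalinity (mg/L as CaCO3) vs. detention time (hours), illustrative only
for t in (0, 5, 10, 20, 40):
    print(t, "h:", round(alkalinity(t, C0=40.0, Cmax=300.0, k=0.002), 1))
```

As expected from the abstract, the computed alkalinity rises asymptotically toward the maximum value as detention time increases.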
Offshore wind farm layout optimization
NASA Astrophysics Data System (ADS)
Elkinton, Christopher Neil
Offshore wind energy technology is maturing in Europe and is poised to make a significant contribution to the U.S. energy production portfolio. Building on the knowledge the wind industry has gained to date, this dissertation investigates the influences of different site conditions on offshore wind farm micrositing---the layout of individual turbines within the boundaries of a wind farm. For offshore wind farms, these conditions include, among others, the wind and wave climates, water depths, and soil conditions at the site. An analysis tool has been developed that is capable of estimating the cost of energy (COE) from offshore wind farms. For this analysis, the COE has been divided into several modeled components: major costs (e.g. turbines, electrical interconnection, maintenance, etc.), energy production, and energy losses. By treating these component models as functions of site-dependent parameters, the analysis tool can investigate the influence of these parameters on the COE. Some parameters result in simultaneous increases of both energy and cost. In these cases, the analysis tool was used to determine the value of the parameter that yielded the lowest COE and, thus, the best balance of cost and energy. The models have been validated and generally compare favorably with existing offshore wind farm data. The analysis technique was then paired with optimization algorithms to form a tool with which to design offshore wind farm layouts for which the COE was minimized. Greedy heuristic and genetic optimization algorithms have been tuned and implemented. The use of these two algorithms in series has been shown to produce the best, most consistent solutions. The influences of site conditions on the COE have been studied further by applying the analysis and optimization tools to the initial design of a small offshore wind farm near the town of Hull, Massachusetts. The results of an initial full-site analysis and optimization were used to constrain the boundaries of the farm. A more thorough optimization highlighted the features of the area that would result in a minimized COE. The results showed reasonable layout designs and COE estimates that are consistent with existing offshore wind farms.
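The greedy stage of the layout optimization can be sketched as below; the cost and energy terms are toy surrogates for the dissertation's COE component models (turbine cost, wake losses, etc.), so the functional forms and numbers are purely illustrative.

```python
import math

def coe(layout):
    """Toy cost-of-energy proxy: a fixed cost per turbine divided by energy,
    with energy penalized when turbines sit close together (a crude
    stand-in for wake losses). Not the dissertation's actual models."""
    cost = 2e6 * len(layout)
    energy = 0.0
    for i, (x1, y1) in enumerate(layout):
        loss = sum(math.exp(-math.hypot(x1 - x2, y1 - y2) / 500.0)
                   for j, (x2, y2) in enumerate(layout) if i != j)
        energy += 8e6 * max(0.1, 1.0 - loss)
    return cost / energy

# Greedy heuristic: repeatedly add the candidate site that yields the
# lowest COE for the layout built so far.
candidates = [(x, y) for x in range(0, 3000, 250) for y in range(0, 3000, 250)]
layout = []
for _ in range(10):
    best = min(candidates, key=lambda c: coe(layout + [c]))
    layout.append(best)
    candidates.remove(best)
print(f"greedy layout COE: {coe(layout):.4f}")
```

In the dissertation's scheme a genetic algorithm would then refine this greedy layout; running the two in series is what was found to give the most consistent solutions.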
NASA Astrophysics Data System (ADS)
Schull, M. A.; Anderson, M. C.; Kustas, W.; Cammalleri, C.; Houborg, R.
2012-12-01
A light-use-efficiency (LUE) based model of canopy resistance has been embedded into a thermal-based Two-Source Energy Balance (TSEB) model to facilitate coupled simulations of transpiration and carbon assimilation. The model assumes that deviations of the observed canopy LUE from a nominal stand-level value (LUEn, typically indexed by vegetation class) are due to varying conditions of light, humidity, CO2 concentration, and leaf temperature. The deviations are accommodated by adjusting an effective LUE that responds to the varying conditions. The challenge in monitoring fluxes at larger scales is to capture the physiological responses to changing conditions. This challenge can be met using remotely sensed leaf chlorophyll (Cab). Since Cab is a vital pigment for absorbing light for use in photosynthesis, it has been recognized as a key parameter for quantifying photosynthetic functioning that is sensitive to these conditions. Recent studies have shown that it is sensitive to changes in LUE, which defines how efficiently a plant can assimilate carbon dioxide (CO2) given the absorbed photosynthetically active radiation (PAR), and it is therefore useful for monitoring carbon fluxes. We investigate the feasibility of using leaf chlorophyll to capture these variations in LUEn from remotely sensed data. To retrieve Cab from remotely sensed data we use REGFLEC, a physically based tool that translates at-sensor radiances in the green, red, and NIR spectral regions from multiple satellite sensors into realistic maps of LAI and Cab. Initial results show that Cab is exponentially correlated with light use efficiency. Incorporating nominal light use efficiency estimated from Cab is shown to improve fluxes of carbon, water, and energy, most notably in times of vegetation stress. The result illustrates that Cab is sensitive to changes in plant physiology and can capture the plant stress needed for improved estimation of fluxes. The observed relationship and initial results demonstrate the need for integrating remotely sensed Cab to facilitate improved mapping of coupled carbon, water, and energy fluxes across vegetated landscapes.
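As a rough illustration of how a chlorophyll-indexed nominal LUE could enter such a scheme, the sketch below uses a saturating-exponential form; the functional form and coefficients are assumptions for illustration only, not the relationship fitted in the study.

```python
import numpy as np

def lue_nominal(cab, a=0.02, b=0.05):
    """Hypothetical saturating-exponential mapping from leaf chlorophyll
    Cab (ug/cm^2) to nominal light-use efficiency LUEn (mol C per mol
    photons). Coefficients a and b are placeholders, not fitted values."""
    return a * (1.0 - np.exp(-b * cab))

# Higher chlorophyll content implies a higher nominal LUE, leveling off
# as Cab increases.
for cab in (10, 30, 60):
    print(cab, round(float(lue_nominal(cab)), 4))
```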
NASA Astrophysics Data System (ADS)
Kucera, P. A.; Steinson, M.
2016-12-01
Accurate and reliable real-time monitoring and dissemination of observations of precipitation and surface weather conditions in general is critical for a variety of research studies and applications. Surface precipitation observations provide important reference information for evaluating satellite (e.g., GPM) precipitation estimates. High quality surface observations of precipitation, temperature, moisture, and winds are important for applications such as agriculture, water resource monitoring, health, and hazardous weather early warning systems. In many regions of the world, surface weather station and precipitation gauge networks are sparsely located and/or of poor quality. Existing stations have often been sited incorrectly, are not well maintained, and have limited communications at the site for real-time monitoring. The University Corporation for Atmospheric Research (UCAR)/National Center for Atmospheric Research (NCAR), with support from USAID, has started an initiative to develop and deploy low-cost weather instrumentation, including tipping bucket and weighing-type precipitation gauges, in sparsely observed regions of the world. The goal is to improve the number of observations (temporally and spatially) for the evaluation of satellite precipitation estimates in data-sparse regions and to improve the quality of applications for environmental monitoring and early warning alert systems on a regional to global scale. One important aspect of this initiative is to make the data open to the community. The weather station instrumentation has been developed using innovative new technologies such as 3D printers, Raspberry Pi computing systems, and wireless communications. An initial pilot project has been implemented in Zambia. This effort could be expanded to other data-sparse regions around the globe. The presentation will provide an overview and demonstration of 3D-printed weather station development and an initial evaluation of observed precipitation datasets.
Grulke, Norbert; Bailer, Harald
2010-10-01
To evaluate the correlation and concordance between patients' and physicians' estimations of prognoses before initiation of the conditioning regimen for allogeneic haematopoietic stem-cell transplantation. A total of 123 patients and their attending physicians were asked to estimate a prognosis on a six-point scale. The patients were also asked to fill out questionnaires addressing their psychological state and coping. The mean prognostic estimations differed by 1.17 points (p<0.001), with the patients being more optimistic than the physicians. With respect to concordance: Pearson correlation r=0.024 (ns); unweighted kappa and kappa with linear weighting are 0.115 and 0.068, respectively. The prognostic estimates of the patients correlated with their psychological state, but not with the objective disease- or treatment-related variables, whereas the physicians' estimates were partially based on such objective factors. A clear significant association between actual survival and the physicians' estimates, but not the patients' estimates, was observed. If agreement regarding the prognosis exists, the relationship between physicians' and patients' estimates is probably non-linear. Assessing one's chances of being cured is a highly emotional task, and psychological processes such as denial or repression most likely play a decisive role. Moreover, collusion between the patient and physician may be inevitable in this situation. Whether it is desirable to gain concordance and who will benefit from such efforts must be discussed and empirically studied. Copyright © 2009 John Wiley & Sons, Ltd.
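The concordance statistics quoted above (unweighted kappa and kappa with linear weighting) can be computed directly from the 6x6 patient-by-physician agreement table; below is a minimal sketch with an invented table, since the study's raw counts are not reproduced here.

```python
import numpy as np

def kappa(conf, weighted=False):
    """Cohen's kappa from a square agreement (confusion) matrix.
    weighted=True applies linear disagreement weights |i-j|/(k-1)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    k = conf.shape[0]
    p_obs = conf / n
    p_exp = np.outer(conf.sum(axis=1), conf.sum(axis=0)) / n**2
    i, j = np.indices((k, k))
    w = np.abs(i - j) / (k - 1) if weighted else (i != j).astype(float)
    # kappa = 1 - (weighted observed disagreement) / (weighted expected)
    return 1.0 - (w * p_obs).sum() / (w * p_exp).sum()

# Toy 6x6 table of patient vs. physician prognosis ratings; illustrative only.
rng = np.random.default_rng(0)
table = rng.integers(0, 10, size=(6, 6))
print(round(kappa(table), 3), round(kappa(table, weighted=True), 3))
```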
Estimated prevalence of hearing loss and provision of hearing services in Pacific Island nations.
Sanders, Michael; Houghton, Natasha; Dewes, Ofa; McCool, Judith; Thorne, Peter R
2015-03-01
Hearing impairment (HI) affects an estimated 538 million people worldwide, with 80% of these living in developing countries. Untreated HI in childhood may lead to developmental delay, and in adults it results in social isolation, inability to find or maintain employment, and dependency. Early intervention and support programmes can significantly reduce the negative effects of HI. To estimate HI prevalence and identify available hearing services in some Pacific countries - Cook Islands, Fiji, Niue, Samoa, Tokelau, Tonga. Data were collected through literature review and correspondence with service providers. Prevalence estimates were based on census data and previously published regional estimates. Estimates indicate 20-23% of the population may have at least a mild HI, with up to 11% having a moderate impairment or worse. The estimated incidence of chronic otitis media among children under 10 years old in Pacific Island nations is 3-5 times greater than in other Australasian countries. Permanent HI from otitis media is substantially more likely in children and adults in Pacific Island nations. Several organisations and individuals provide some limited hearing services in a few Pacific Island nations, but the majority of people with HI are largely underserved. Although accurate information on HI prevalence is lacking, prevalence estimates of HI and ear disease suggest they are significant health conditions in Pacific Island nations. There is relatively little support for people with HI or ear disease in the Pacific region. An investment in initiatives to both identify and support people with hearing loss in the Pacific is necessary.
Study of the ablative effects on tektites [wake shielding during atmospheric entry]
NASA Technical Reports Server (NTRS)
Sepri, P.; Chen, K. K.
1976-01-01
Equations are presented which provide approximate parameters describing surface heating and tektite deceleration during atmosphere passage. Numerical estimates of these parameters using typical initial and ambient conditions support the conclusion that the commonly assumed trajectories would not have produced some of the observed surface markings. It is suggested that tektites did not enter the atmosphere singly but rather in a swarm dense enough to afford wake shielding according to a shock envelope model which is proposed. A further aerodynamic mechanism is described which is compatible with hemispherical pits occurring on tektite surfaces.
A practical method to assess model sensitivity and parameter uncertainty in C cycle models
NASA Astrophysics Data System (ADS)
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2015-04-01
The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique, and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. The inverse problem is often ill-posed; a regularization method is then required to replace the original problem with a well-posed one, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed that parameters and initial stocks directly related to fast processes were best estimated, with narrow confidence intervals, whereas those related to slow processes were poorly estimated, with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint-based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices we study the nature of the inverse problems (solution existence, uniqueness, and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and rapidity of this approach allow us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.
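A minimal sketch of the resolution-matrix diagnostic for a linearized, Tikhonov-regularized inverse problem; the Jacobian here is random rather than DALEC's, with one column scaled down to mimic a slow-process parameter that the data barely constrain.

```python
import numpy as np

# Linearized inverse problem h(x) ~ J x = y; J is a hypothetical Jacobian.
rng = np.random.default_rng(1)
J = rng.normal(size=(50, 10))
J[:, -1] *= 1e-3           # a "slow" parameter weakly seen by the data
lam = 0.1                  # Tikhonov regularization parameter

# Model resolution matrix R = (J^T J + lam^2 I)^{-1} J^T J.
# Diagonal entries near 1 indicate well-resolved parameters; entries near 0
# indicate parameters dominated by the regularization (prior), which is the
# pattern reported for slow-process parameters and stocks.
JtJ = J.T @ J
R = np.linalg.solve(JtJ + lam**2 * np.eye(JtJ.shape[0]), JtJ)
print(np.round(np.diag(R), 3))
```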
NASA Astrophysics Data System (ADS)
Sewell, Everest; Ferguson, Kevin; Jacobs, Jeffrey; Greenough, Jeff; Krivets, Vitaliy
2016-11-01
We describe experiments of single-shock Richtmyer-Meshkov Instability (RMI) performed on the shock tube apparatus at the University of Arizona in which the initial conditions are volumetrically imaged prior to shock wave arrival. Initial perturbations play a major role in the evolution of RMI, and previous experimental efforts captured only a single plane of the initial condition. The method presented uses a rastered laser sheet to capture additional images throughout the depth of the initial condition immediately before the shock arrival time. These images are then used to reconstruct a volumetric approximation of the experimental perturbation. Analysis of the initial perturbations is performed, and the results are then used as initial conditions in simulations with the hydrodynamics code ARES, developed at Lawrence Livermore National Laboratory (LLNL). Experiments are presented and comparisons are made with simulation results.
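In outline, the volumetric reconstruction amounts to stacking the rastered sheet images and interpolating between planes; the sketch below uses synthetic images and assumes a uniform sheet spacing, which may differ from the actual raster geometry.

```python
import numpy as np

# Stand-in for the rastered laser-sheet images (depth planes of the
# perturbation field); in practice these would be loaded from the cameras.
sheets = [np.random.rand(128, 128) for _ in range(16)]
volume = np.stack(sheets, axis=0)        # (z, y, x) coarse volume

# Linearly interpolate onto a finer z grid to approximate the 3D field
# handed to the ARES simulations as an initial condition.
z_old = np.linspace(0.0, 1.0, volume.shape[0])
z_new = np.linspace(0.0, 1.0, 64)
fine = np.empty((64,) + volume.shape[1:])
for iy in range(volume.shape[1]):
    for ix in range(volume.shape[2]):
        fine[:, iy, ix] = np.interp(z_new, z_old, volume[:, iy, ix])
print(fine.shape)
```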
NASA Astrophysics Data System (ADS)
Campos Braga, Ramon; Rosenfeld, Daniel; Weigel, Ralf; Jurkat, Tina; Andreae, Meinrat O.; Wendisch, Manfred; Pöschl, Ulrich; Voigt, Christiane; Mahnke, Christoph; Borrmann, Stephan; Albrecht, Rachel I.; Molleker, Sergej; Vila, Daniel A.; Machado, Luiz A. T.; Grulich, Lucas
2017-12-01
We have investigated how aerosols affect the height above cloud base of rain and ice hydrometeor initiation and the subsequent vertical evolution of cloud droplet size and number concentrations in growing convective cumulus. For this purpose we used in situ data of hydrometeor size distributions measured with instruments mounted on the HALO aircraft during the ACRIDICON-CHUVA campaign over the Amazon during September 2014. The results show that the height of rain initiation by collision and coalescence processes (Dr, in units of meters above cloud base) is linearly correlated with the number concentration of droplets (Nd in cm-3) nucleated at cloud base (Dr ≈ 5 · Nd). Additional cloud processes associated with Dr, such as GCCN and the entrainment and mixing of ambient air, produce deviations of ~21% in the linear relationship, but this does not mask the clear relationship between Dr and Nd, which was also found in different regions around the globe (e.g., Israel and India). When Nd exceeded values of about 1000 cm-3, Dr became greater than 5000 m, and the first observed precipitation particles were ice hydrometeors. Therefore, no liquid water raindrops were observed within growing convective cumulus during polluted conditions. Furthermore, the formation of ice particles also took place at higher altitudes in the clouds in polluted conditions because the resulting smaller cloud droplets froze at colder temperatures compared to the larger drops in the unpolluted cases. The measured vertical profiles of droplet effective radius (re) were close to those estimated by assuming adiabatic conditions (rea), supporting the hypothesis that the entrainment and mixing of air into convective clouds is nearly inhomogeneous. Additional CCN activation on aerosol particles from biomass burning and air pollution reduced re below rea, which further inhibited the formation of raindrops and ice particles and resulted in even higher altitudes for rain and ice initiation.
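The linear relation reported above translates directly into a rule-of-thumb estimate of the warm-rain initiation height; a trivial sketch (slope in meters per cm^-3, as given in the abstract):

```python
def rain_initiation_height(nd_cm3, slope=5.0):
    """Height above cloud base (m) for warm-rain initiation, using the
    linear relation Dr ~ 5 * Nd reported above."""
    return slope * nd_cm3

# At Nd ~ 1000 cm^-3 the relation gives Dr ~ 5000 m, above which the first
# observed precipitation particles were ice rather than liquid raindrops.
print(rain_initiation_height(300), rain_initiation_height(1000))
```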
Estimating the costs of human space exploration
NASA Technical Reports Server (NTRS)
Mandell, Humboldt C., Jr.
1994-01-01
The plan for NASA's new exploration initiative has the following strategic themes: (1) incremental, logical evolutionary development; (2) economic viability; and (3) excellence in management. The cost estimation process is involved with all of these themes and they are completely dependent upon the engineering cost estimator for success. The purpose is to articulate the issues associated with beginning this major new government initiative, to show how NASA intends to resolve them, and finally to demonstrate the vital importance of a leadership role by the cost estimation community.
NASA Astrophysics Data System (ADS)
Fan, Jishan; Li, Fucai; Nakamura, Gen
2018-06-01
In this paper we continue our study on the establishment of uniform estimates of strong solutions, with respect to the Mach number and the dielectric constant, for the full compressible Navier-Stokes-Maxwell system in a bounded domain Ω ⊂ R^3. In Fan et al. (Kinet Relat Models 9:443-453, 2016), the uniform estimates were obtained for large initial data in a short time interval. Here we show that the uniform estimates hold globally if the initial data are small. Based on these uniform estimates, we obtain the convergence of the full compressible Navier-Stokes-Maxwell system to the incompressible magnetohydrodynamic equations for well-prepared initial data.
Effective force control by muscle synergies.
Berger, Denise J; d'Avella, Andrea
2014-01-01
Muscle synergies have been proposed as a way for the central nervous system (CNS) to simplify the generation of motor commands, and they have been shown to explain a large fraction of the variation in muscle patterns across a variety of conditions. However, whether human subjects are able to control forces and movements effectively with a small set of synergies has not been tested directly. Here we show that muscle synergies can be used to generate target forces in multiple directions with the same accuracy achieved using individual muscles. We recorded electromyographic (EMG) activity from 13 arm muscles and isometric hand forces during a force reaching task in a virtual environment. From these data we estimated the force associated with each muscle by linear regression, and we identified muscle synergies by non-negative matrix factorization. We compared trajectories of a virtual mass displaced by the force estimated using the entire set of recorded EMGs to trajectories obtained using 4-5 muscle synergies. Trajectories were similar, although when feedback was provided according to the force estimated from recorded EMGs (EMG-control), trajectories generated with the synergies were on average less accurate. However, when feedback was provided according to recorded force (force-control), we did not find significant differences in initial angle error and endpoint error. We then tested whether synergies could be used as effectively as individual muscles to control cursor movement in the force reaching task by providing feedback according to the force estimated from the projection of the recorded EMGs into synergy space (synergy-control). Human subjects were able to perform the task immediately after switching from force-control to EMG-control and synergy-control, and we found no differences between initial movement direction errors and endpoint errors in all control modes. These results indicate that muscle synergies provide an effective strategy for motor coordination.
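Synergy extraction by non-negative matrix factorization can be sketched as follows; the EMG matrix here is random stand-in data, and the choice of five components mirrors the 4-5 synergies mentioned above.

```python
import numpy as np
from sklearn.decomposition import NMF

# Stand-in for rectified/smoothed EMG envelopes: samples x 13 muscles.
rng = np.random.default_rng(0)
emg = np.abs(rng.normal(size=(2000, 13)))

# Factorize EMG ~ activations @ synergies, with non-negativity on both
# factors: activations are synergy recruitment over time, and each row of
# `synergies` is a fixed weighting across the 13 muscles.
model = NMF(n_components=5, init="nndsvda", max_iter=500)
activations = model.fit_transform(emg)   # (samples x 5 synergies)
synergies = model.components_            # (5 synergies x 13 muscles)
print(activations.shape, synergies.shape)
```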
Analysing Twitter and web queries for flu trend prediction.
Santos, José Carlos; Matos, Sérgio
2014-05-07
Social media platforms encourage people to share diverse aspects of their daily life. Among these, shared health-related information might be used to infer health status and incidence rates for specific conditions or symptoms. In this work, we present an infodemiology study that evaluates the use of Twitter messages and search engine query logs to estimate and predict the incidence rate of influenza-like illness in Portugal. Based on a manually classified dataset of 2704 tweets from Portugal, we selected a set of 650 textual features to train a Naïve Bayes classifier to identify tweets mentioning flu or flu-like illness or symptoms. We obtained a precision of 0.78 and an F-measure of 0.83, based on cross validation over the complete annotated set. Furthermore, we trained a multiple linear regression model to estimate the health-monitoring data from the Influenzanet project, using as predictors the relative frequencies obtained from the tweet classification results and from query logs, and achieved a correlation ratio of 0.89 (p<0.001). These classification and regression models were also applied to estimate the flu incidence in the following flu season, achieving a correlation of 0.72. Previous studies addressing the estimation of disease incidence based on user-generated content have mostly focused on the English language. Our results further validate those studies and show that, by changing the initial steps of data preprocessing and feature extraction and selection, the proposed approaches can be adapted to other languages. Additionally, we investigated whether the predictive model created can be applied to data from the subsequent flu season. In this case, although the prediction result was good, an initial phase to adapt the regression model could be necessary to achieve more robust results.
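A compact sketch of the two-stage pipeline (tweet classification, then regression of incidence on classification and query-log frequencies); the tweets, rates, and incidence values below are invented, and the real study used 650 selected features rather than a raw bag of words.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LinearRegression

# Stage 1: classify tweets as flu-related or not (toy training data).
tweets = ["estou com gripe e febre", "bom dia Lisboa",
          "gripe outra vez esta semana", "vou ao cinema hoje"]
labels = [1, 0, 1, 0]
vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(tweets), labels)

# Stage 2: regress weekly ILI incidence on the weekly fraction of tweets
# classified as flu-related plus a query-log frequency (numbers invented).
flu_tweet_rate = np.array([0.01, 0.03, 0.08, 0.12])
query_rate = np.array([0.02, 0.05, 0.09, 0.15])
predictors = np.column_stack([flu_tweet_rate, query_rate])
ili = np.array([5.0, 14.0, 38.0, 60.0])   # cases per 100k, invented
reg = LinearRegression().fit(predictors, ili)
print(reg.predict([[0.05, 0.07]]))         # estimate for a new week
```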
Initial condition effect on pressure waves in an axisymmetric jet
NASA Technical Reports Server (NTRS)
Miles, Jeffrey H.; Raman, Ganesh
1988-01-01
A pair of microphones (separated axially by 5.08 cm and laterally by 1.3 cm) are placed on either side of the jet centerline to investigate coherent pressure fluctuations in an axisymmetric jet at Strouhal numbers less than unity. Auto-spectra, transfer-function, and coherence measurements are made for a tripped and untripped boundary layer initial condition. It was found that coherent acoustic pressure waves originating in the upstream plenum chamber propagate a greater distance downstream for the tripped initial condition than for the untripped initial condition. In addition, for the untripped initial condition the development of the coherent hydrodynamic pressure waves shifts downstream.
Kronholm, Scott C.; Capel, Paul D.; Terziotti, Silvia
2016-01-01
Accurate estimation of total nitrogen loads is essential for evaluating conditions in the aquatic environment. Extrapolation of estimates beyond measured streams will greatly expand our understanding of total nitrogen loading to streams. Recursive partitioning and random forest regression were used to assess 85 geospatial, environmental, and watershed variables across 636 small (<585 km²) watersheds to determine which variables are fundamentally important to the estimation of annual loads of total nitrogen. Initial analysis led to the splitting of watersheds into three groups based on predominant land use (agricultural, developed, and undeveloped). Nitrogen application, agricultural and developed land area, and impervious or developed land in the 100-m stream buffer were commonly extracted variables by both recursive partitioning and random forest regression. A series of multiple linear regression equations utilizing the extracted variables were created and applied to the watersheds. As few as three variables explained as much as 76% of the variability in total nitrogen loads for watersheds with predominantly agricultural land use. Catchment-scale national maps were generated to visualize the total nitrogen loads and yields across the USA. The estimates provided by these models can inform water managers and help identify areas where more in-depth monitoring may be beneficial.
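The variable-screening step can be sketched with a random-forest importance ranking; the variable names echo those extracted above, but the data here are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the 636-watershed dataset: a handful of the
# candidate predictors and a nitrogen load driven mostly by the first two.
rng = np.random.default_rng(42)
names = ["n_application", "ag_land_pct", "dev_land_pct",
         "imperv_buffer_pct", "slope"]
X = rng.random((636, len(names)))
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 3] + rng.normal(0, 0.1, 636)

# Fit the forest and rank variables by importance, mimicking the screening
# that preceded the multiple linear regression equations.
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
for name, imp in sorted(zip(names, rf.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:<18}{imp:.3f}")
```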
Parent-Child Communication and Marijuana Initiation: Evidence Using Discrete-Time Survival Analysis
Nonnemaker, James M.; Silber-Ashley, Olivia; Farrelly, Matthew C.; Dench, Daniel
2012-01-01
This study supplements the existing literature on the relationship between parent-child communication and adolescent drug use by exploring whether parental and/or adolescent recall of specific drug-related conversations differentially impacts youths' likelihood of initiating marijuana use. Using discrete-time survival analysis, we estimated the hazard of marijuana initiation using a logit model to obtain an estimate of the relative risk of initiation. Our results suggest that parent-child communication about drug use is either not protective (no effect) or, in the case of youth reports of communication, potentially harmful (leading to an increased likelihood of marijuana initiation). PMID:22958867
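Discrete-time survival analysis reduces to logistic regression on a person-period dataset; the sketch below builds such a dataset with an invented communication covariate and toy hazards, then reads the exponentiated coefficient as an approximate relative risk of initiation.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Build a person-period dataset: one row per youth-year, with event = 1
# in the year marijuana use starts (and no rows afterward).
rng = np.random.default_rng(3)
rows = []
for pid in range(500):
    talked = int(rng.integers(0, 2))   # parent-child drug communication
    for age in range(12, 18):
        hazard = 0.05 + 0.01 * (age - 12) + 0.02 * talked  # toy hazard
        event = rng.random() < hazard
        rows.append((pid, age, talked, int(event)))
        if event:
            break
pp = pd.DataFrame(rows, columns=["pid", "age", "talked", "event"])

# Logit hazard model; exp(coef) approximates the relative risk of
# initiation per unit change in each covariate.
m = LogisticRegression().fit(pp[["age", "talked"]], pp["event"])
print(np.exp(m.coef_))
```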
The role of ensemble post-processing for modeling the ensemble tail
NASA Astrophysics Data System (ADS)
Van De Vyver, Hans; Van Schaeybroeck, Bert; Vannitsem, Stéphane
2016-04-01
Over the past decades, the numerical weather prediction community has witnessed a paradigm shift from deterministic to probabilistic forecasting and state estimation (Buizza and Leutbecher, 2015; Buizza et al., 2008), in an attempt to quantify the uncertainties associated with initial-condition and model errors. An important benefit of a probabilistic framework is the improved prediction of extreme events. However, one may ask to what extent such model estimates contain information on the occurrence probability of extreme events and how this information can be optimally extracted. Different approaches have been proposed and applied to real-world systems which, based on extreme value theory, allow the estimation of extreme-event probabilities conditional on forecasts and state estimates (Ferro, 2007; Friederichs, 2010). Using ensemble predictions generated with a model of low dimensionality, a thorough investigation is presented quantifying the change in predictability of extreme events associated with ensemble post-processing and other influencing factors, including the finite ensemble size, lead time, model assumptions, and the use of different covariates (ensemble mean, maximum, spread, ...) for modeling the tail distribution. Tail modeling is performed by deriving extreme-quantile estimates using a peak-over-threshold representation (generalized Pareto distribution) or quantile regression. Common ensemble post-processing methods aim to improve mostly the ensemble mean and spread of a raw forecast (Van Schaeybroeck and Vannitsem, 2015). Conditional tail modeling, on the other hand, is a post-processing in itself, focusing on the tails only. Therefore, it is unclear how applying ensemble post-processing prior to conditional tail modeling impacts the skill of extreme-event predictions. This work investigates this question in detail. Buizza, Leutbecher, and Isaksen, 2008: Potential use of an ensemble of analyses in the ECMWF Ensemble Prediction System, Q. J. R. Meteorol. Soc. 134: 2051-2066. Buizza and Leutbecher, 2015: The forecast skill horizon, Q. J. R. Meteorol. Soc. 141: 3366-3382. Ferro, 2007: A probability model for verifying deterministic forecasts of extreme events. Weather and Forecasting 22 (5), 1089-1100. Friederichs, 2010: Statistical downscaling of extreme precipitation events using extreme value theory. Extremes 13, 109-132. Van Schaeybroeck and Vannitsem, 2015: Ensemble post-processing using member-by-member approaches: theoretical aspects. Q. J. R. Meteorol. Soc., 141: 807-818.
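The peak-over-threshold step can be sketched as below: fit a generalized Pareto distribution to exceedances of a chosen threshold and invert the conditional tail for an extreme quantile. The data here are synthetic; in the study, the covariates would come from the (post-processed) ensemble.

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic "observations" standing in for the forecast variable.
rng = np.random.default_rng(7)
sample = rng.gumbel(loc=10.0, scale=2.0, size=5000)
u = np.quantile(sample, 0.95)            # threshold
exceed = sample[sample > u] - u

# Fit the GPD to the exceedances (location fixed at 0 by construction).
shape, loc, scale = genpareto.fit(exceed, floc=0.0)

# Extreme quantile via the conditional tail: P(X > x) = zeta_u * (1 - F(x-u)),
# where zeta_u is the exceedance rate; here the 99.9th percentile.
zeta_u = exceed.size / sample.size
q = u + genpareto.ppf(1 - 0.001 / zeta_u, shape, loc=0.0, scale=scale)
print(round(u, 2), round(q, 2))
```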
SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating
Lee, Young-Joo; Cho, Soojin
2016-01-01
Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed. PMID:26950125
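Step (2) of the procedure can be illustrated on a toy two-degree-of-freedom system: adjust stiffness parameters until the model's natural frequencies match the SHM-identified ones. The matrices and target frequencies below are invented, not from the paper's bridge model.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

M = np.diag([1.0, 1.0])                 # toy mass matrix
f_measured = np.array([1.1, 2.9])       # Hz, hypothetical identified values

def frequencies(theta):
    """Natural frequencies (Hz) of a 2-DOF spring-mass chain with
    stiffnesses k1, k2, from the generalized eigenproblem K v = w^2 M v."""
    k1, k2 = theta
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    w2 = eigh(K, M, eigvals_only=True)
    return np.sqrt(np.abs(w2)) / (2.0 * np.pi)

def mismatch(theta):
    return np.sum((frequencies(theta) - f_measured) ** 2)

# Update the stiffness parameters so model frequencies match measurements.
res = minimize(mismatch, x0=[100.0, 100.0], method="Nelder-Mead")
print(res.x, frequencies(res.x))
```

The updated stiffnesses would then feed the fatigue-life step (3); a full implementation would also match mode shapes (e.g., via MAC values), not frequencies alone.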
NASA Astrophysics Data System (ADS)
Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.
2015-12-01
Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a 'first guess' source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the estimated source location by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it also has the ability to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations and to adjust the wind to provide a better match between the hazard prediction and the observations.
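The forward Gaussian puff component can be sketched as below for an instantaneous point release; the linear sigma growth and all parameter values are simplifying assumptions for illustration, not the operational puff model used in VIRSA.

```python
import numpy as np

def gaussian_puff(x, y, z, t, q=1.0, u=3.0, k=0.08):
    """Concentration from an instantaneous point release of mass q at the
    origin, advected along x by wind speed u. Dispersion sigmas grow
    linearly with travel distance (coefficient k); all values are
    placeholders, not calibrated parameters."""
    s = max(u * t, 1e-6)                 # travel distance of the puff center
    sx = sy = sz = k * s
    norm = q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)
    return norm * np.exp(-((x - u * t) ** 2) / (2 * sx**2)
                         - (y ** 2) / (2 * sy**2)
                         - (z ** 2) / (2 * sz**2))

# Concentration at a sampler 600 m downwind, 1.5 m high, 200 s after a
# release into a 3 m/s wind.
print(gaussian_puff(600.0, 0.0, 1.5, 200.0))
```

In the STE setting, a model like this (or its cheaper surrogate) is run inside the variational loop, with the adjoint supplying gradients of the observation misfit with respect to the source and wind parameters.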
Curvature estimation for multilayer hinged structures with initial strains
NASA Astrophysics Data System (ADS)
Nikishkov, G. P.
2003-10-01
A closed-form estimate of the curvature of hinged multilayer structures with initial strains is developed. The finite element method is used for modeling self-positioning microstructures. The geometrically nonlinear problem with large rotations and large displacements is solved using a step procedure with node coordinate updates. Finite element results for the curvature of a hinged micromirror with variable width are compared to the closed-form estimates.
Kosaka, Ryo; Fukuda, Kyohei; Nishida, Masahiro; Maruyama, Osamu; Yamane, Takashi
2013-01-01
In order to monitor the condition of a patient using a left ventricular assist system (LVAS), blood flow should be measured. However, a reliable method for determining the blood-flow rate has not been established. The purpose of the present study is to develop a noninvasive blood-flow meter using a curved cannula with zero compensation for an axial flow blood pump. The flow meter uses the centrifugal force generated by the flow in the curved cannula. Two sets of strain gauges served as sensors: the first gauges were attached to the curved area to measure static pressure and centrifugal force, and the second gauges were attached to the straight area to measure static pressure. The flow rate was determined from the difference in output between the two sets of gauges. The zero compensation was constructed based on the consideration that the flow rate can be estimated during the initial driving condition and the ventricular suction condition without using the flow meter. A mock circulation loop was constructed in order to evaluate the measurement performance of the developed flow meter with zero compensation. As a result, the zero compensation worked effectively for the initial calibration and against the zero-drift of the measured flow rate. We confirmed that the developed flow meter using a curved cannula with zero compensation was able to measure the flow rate accurately, continuously, and noninvasively.
Operating a sustainable disease management program for chronic obstructive pulmonary disease.
Endicott, Linda; Corsello, Phillip; Prinzi, Michele; Tinkelman, David G; Schwartz, Abby
2003-01-01
Chronic obstructive pulmonary disease (COPD) is one of our nation's most rapidly growing chronic health conditions. It is estimated that over 16 million individuals are diagnosed with COPD (Friedman & Hilleman, 2001). In addition, another 16 million are misdiagnosed as having asthma or are not diagnosed at all. COPD is a condition that affects working-age adults as well as the elderly. Despite the high mortality rate, COPD is a treatable and modifiable condition. Disease management programs (DMPs) for asthma are a common initiative within many health insurance plans and integrated delivery networks. Similar initiatives are not as common for COPD. This article highlights the National Jewish Medical and Research Center's COPD DMP interventions and outcomes, and outlines the interventions and operational strategies critical to developing and operating a sustainable and effective disease management program for COPD. Disease management is an effective model for managing individuals with COPD. Applying a case management model that includes (1) risk identification and stratification; (2) education and empowerment regarding self-monitoring and management; (3) lifestyle modification; (4) communication and collaboration among patients, healthcare providers, and case managers to enhance the treatment plan; (5) providing after-hours support; and (6) monitoring care outcomes is crucial. Applying these interventions in a credible manner will improve the quality of life and quality of care delivered to individuals with mild, moderate, severe, and very severe COPD. Additionally, these interventions can significantly reduce utilization events.
Wang, Lutao; Xiao, Jun; Chai, Hua
2015-08-01
The successful suppression of clutter arising from stationary or slowly moving tissue is one of the key issues in medical ultrasound color blood-flow imaging. Remaining clutter may bias the mean blood frequency estimate and result in a potentially misleading description of blood flow. In this paper, based on the principle of the general wall filter, the design processes of three classes of filters - infinite impulse response with projection initialization (Prj-IIR), polynomial regression (Pol-Reg), and eigen-based filters - are reviewed and analyzed. The performance of the filters was assessed by calculating the bias and variance of the mean blood velocity using a standard autocorrelation estimator. Simulation results show that the performance of the Pol-Reg filter is similar to that of Prj-IIR filters. Both can offer accurate estimation of the mean blood-flow speed under steady clutter conditions, and their clutter rejection ability can be enhanced by increasing the ensemble size of the Doppler vector. Eigen-based filters can effectively remove the non-stationary clutter component and further improve the estimation accuracy for low-speed blood-flow signals. There is also no significant increase in computational complexity for eigen-based filters when the ensemble size is less than 10.
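The polynomial-regression class of wall filters projects each slow-time ensemble onto a low-order polynomial (clutter) subspace and keeps the residual; a minimal sketch with a Legendre basis and an invented ensemble follows.

```python
import numpy as np

def polyreg_filter(ensemble, order=2):
    """Polynomial-regression wall filter: project the slow-time Doppler
    ensemble onto low-order Legendre polynomials (the clutter subspace)
    and subtract, keeping the blood-flow component."""
    n = ensemble.shape[-1]
    t = np.linspace(-1.0, 1.0, n)
    basis = np.polynomial.legendre.legvander(t, order)   # (n, order+1)
    q, _ = np.linalg.qr(basis)                           # orthonormal clutter basis
    proj = q @ q.conj().T                                # projector onto clutter
    return ensemble @ (np.eye(n) - proj)                 # clutter removed

# Toy slow-time signal: strong near-DC clutter plus a weak flow tone at
# 0.3 cycles/sample, for an ensemble of 10 pulses.
n = 10
t = np.arange(n)
sig = 100.0 + 0.5 * np.exp(1j * 2 * np.pi * 0.3 * t)
print(np.round(np.abs(polyreg_filter(sig[None, :], order=1))[0], 3))
```

A first-order fit removes the constant and linear (tissue) components while largely preserving the higher-frequency flow tone, which is why increasing the ensemble size improves clutter rejection for a given polynomial order.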
Brouwer, Anne-Marie; López-Moliner, Joan; Brenner, Eli; Smeets, Jeroen B J
2006-02-01
We propose and evaluate a source of information that ball catchers may use to determine whether a ball will land behind or in front of them. It combines estimates for the ball's horizontal and vertical speed. These estimates are based, respectively, on the rate of angular expansion and vertical velocity. Our variable could account for ball catchers' data of Oudejans et al. [The effects of baseball experience on movement initiation in catching fly balls. Journal of Sports Sciences, 15, 587-595], but those data could also be explained by the use of angular expansion alone. We therefore conducted additional experiments in which we asked subjects where simulated balls would land under conditions in which both angular expansion and vertical velocity must be combined for obtaining a correct response. Subjects made systematic errors. We found evidence for the use of angular velocity but hardly any indication for the use of angular expansion. Thus, if catchers use a strategy that involves combining vertical and horizontal estimates of the ball's speed, they do not obtain their estimates of the horizontal component from the rate of expansion alone.
Leighton, David A.; Phillips, Steven P.
2003-01-01
Antelope Valley, California, is a topographically closed basin in the western part of the Mojave Desert, about 50 miles northeast of Los Angeles. The Antelope Valley ground-water basin is about 940 square miles and is separated from the northern part of Antelope Valley by faults and low-lying hills. Prior to 1972, ground water provided more than 90 percent of the total water supply in the valley; since 1972, it has provided between 50 and 90 percent. Most ground-water pumping in the valley occurs in the Antelope Valley ground-water basin, which includes the rapidly growing cities of Lancaster and Palmdale. Ground-water-level declines of more than 200 feet in some parts of the ground-water basin have resulted in an increase in pumping lifts, reduced well efficiency, and land subsidence of more than 6 feet in some areas. Future urban growth and limits on the supply of imported water may continue to increase reliance on ground water. To better understand the ground-water flow system and to develop a tool to aid in effectively managing the water resources, a numerical model of ground-water flow and land subsidence in the Antelope Valley ground-water basin was developed using old and new geohydrologic information. The ground-water flow system consists of three aquifers: the upper, middle, and lower aquifers. The aquifers, which were identified on the basis of the hydrologic properties, age, and depth of the unconsolidated deposits, consist of gravel, sand, silt, and clay alluvial deposits and clay and silty clay lacustrine deposits. Prior to ground-water development in the valley, recharge was primarily the infiltration of runoff from the surrounding mountains. Ground water flowed from the recharge areas to discharge areas around the playas, where it discharged either from the aquifer system as evapotranspiration or from springs. Partial barriers to horizontal ground-water flow, such as faults, have been identified in the ground-water basin. Water-level declines owing to ground-water development have eliminated the natural sources of discharge, and pumping for agricultural and urban uses has become the primary source of discharge from the ground-water system. Infiltration of return flows from agricultural irrigation has become an important source of recharge to the aquifer system. The ground-water flow model of the basin was discretized horizontally into a grid of 43 rows and 60 columns of square cells 1 mile on a side, and vertically into three layers representing the upper, middle, and lower aquifers. Faults that were thought to act as horizontal-flow barriers were simulated in the model. The model was calibrated to simulate steady-state conditions, represented by 1915 water levels, and transient-state conditions during 1915-95 using water-level and subsidence data. Initial estimates of the aquifer-system properties and stresses were obtained from a previously published numerical model of the Antelope Valley ground-water basin; estimates also were obtained from recently collected hydrologic data and from results of simulations of ground-water flow and land subsidence models of the Edwards Air Force Base area. Some of these initial estimates were modified during model calibration. Ground-water pumpage for agriculture was estimated on the basis of irrigated crop acreage and crop consumptive-use data. Pumpage for public supply, which is metered, was compiled and entered into a database used for this study.
Estimated annual pumpage peaked at 395,000 acre-feet (acre-ft) in 1952 and then declined because of declining agricultural production. Recharge from irrigation-return flows was estimated to be 30 percent of agricultural pumpage; the irrigation-return flows were simulated as recharge to the regional water table 10 years following application at land surface. The annual quantity of natural recharge initially was based on estimates from previous studies. During model calibration, natural recharge was reduced from the initial
Huang, Jidong; Zheng, Rong; Chaloupka, Frank J.; Fong, Geoffrey T.; Jiang, Yuan
2015-01-01
Background There are few studies that examine the impact of tobacco tax and price policies in China. In addition, very little is known about the differential responses to tax and price increases based on socioeconomic status in China. Objective The goal of this study is to estimate the conditional cigarette consumption price elasticity among adult urban smokers in China using individual-level longitudinal survey data. We also examine the differential responses to cigarette price increases among groups with different income and/or educational levels. Methods Multivariate analyses using the generalized estimating equations (GEE) method were conducted to estimate the conditional cigarette demand price elasticity using data from the International Tobacco Control (ITC) China Survey, a longitudinal survey of adult smokers in seven cities in China. The first three waves of the ITC China Survey data were used in this analysis. Analyses based on subsamples by education and income were conducted. Findings Our results show that the overall conditional cigarette demand price elasticity ranges from −0.12 to −0.14, implying a 10% increase in cigarette price would result in a reduction in cigarette consumption among adult urban Chinese smokers of 1.2% to 1.4%. No differential responses to cigarette price increases were found across education levels. The price elasticity estimates do not differ between high-income smokers and medium-income smokers. However, cigarette consumption among low-income smokers did not seem to decrease after a price increase, at least among those who continued to smoke. Conclusion Relative to many other low- and middle-income countries, cigarette consumption among Chinese adult smokers is not very sensitive to changes in cigarette prices. The total impact of a cigarette price increase would be larger if its impact on smoking initiation and cessation, as well as price-reducing behaviors such as brand switching and trading down, were taken into account. PMID:25855640
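A minimal sketch of a GEE-based conditional demand model in the spirit described above, using statsmodels with an exchangeable within-smoker working correlation; the panel data are simulated with a built-in elasticity near the reported range.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated three-wave panel: log consumption on log price, with repeated
# observations per smoker (pid). True elasticity set to -0.13.
rng = np.random.default_rng(0)
n, waves = 300, 3
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), waves),
    "wave": np.tile(np.arange(waves), n),
})
df["log_price"] = np.log(5.0 + df["wave"]) + rng.normal(0, 0.2, len(df))
df["log_cigs"] = 3.0 - 0.13 * df["log_price"] + rng.normal(0, 0.3, len(df))

# GEE with exchangeable correlation across waves within each smoker; the
# log-log coefficient is read directly as the price elasticity.
model = smf.gee("log_cigs ~ log_price", groups="pid", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
print(model.fit().params["log_price"])
```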
NASA Technical Reports Server (NTRS)
Armstrong, G. P.; Carlier, S. G.; Fukamachi, K.; Thomas, J. D.; Marwick, T. H.
1999-01-01
OBJECTIVES: To validate a simplified estimate of peak power (SPP) against true (invasively measured) peak instantaneous power (TPP), to assess the feasibility of measuring SPP during exercise and to correlate this with functional capacity. DESIGN: Development of a simplified method of measurement and observational study. SETTING: Tertiary referral centre for cardiothoracic disease. SUBJECTS: For validation of SPP with TPP, seven normal dogs and four dogs with dilated cardiomyopathy were studied. To assess feasibility and clinical significance in humans, 40 subjects were studied (26 patients; 14 normal controls). METHODS: In the animal validation study, TPP was derived from ascending aortic pressure and flow probe, and from Doppler measurements of flow. SPP, calculated using the different flow measures, was compared with peak instantaneous power under different loading conditions. For the assessment in humans, SPP was measured at rest and during maximum exercise. Peak aortic flow was measured with transthoracic continuous wave Doppler, and systolic and diastolic blood pressures were derived from brachial sphygmomanometry. The difference between exercise and rest simplified peak power (Delta SPP) was compared with maximum oxygen uptake (VO(2)max), measured from expired gas analysis. RESULTS: SPP estimates using peak flow measures correlated well with true peak instantaneous power (r = 0.89 to 0.97), despite marked changes in systemic pressure and flow induced by manipulation of loading conditions. In the human study, VO(2)max correlated with Delta SPP (r = 0.78) better than Delta ejection fraction (r = 0.18) and Delta rate-pressure product (r = 0.59). CONCLUSIONS: The simple product of mean arterial pressure and peak aortic flow (simplified peak power, SPP) correlates with peak instantaneous power over a range of loading conditions in dogs. In humans, it can be estimated during exercise echocardiography, and correlates with maximum oxygen uptake better than ejection fraction or rate-pressure product.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, J. J.; Negri, M. C.; Hinchman, R. R.
2001-03-01
Estimating the effect of phreatophytes on the groundwater flow field is critical in the design or evaluation of a phytoremediation system. Complex hydrogeological conditions and the transient water use rates of trees require the application of numerical modeling to address such issues as hydraulic containment, seasonality, and system design. In 1999, 809 hybrid poplars and willows were planted to phytoremediate the 317 and 319 Areas of Argonne National Laboratory near Chicago, Illinois. Contaminants of concern are volatile organic compounds and tritium. The site hydrogeology is a complex framework of glacial tills interlaced with sands, gravels, and silts of varying character, thickness, and lateral extent. A total of 420 poplars were installed using a technology to direct the roots through a 25-ft (8-m)-thick till to a contaminated aquifer. Numerical modeling was used to simulate the effect of the deep-rooted poplars on this aquifer of concern. Initially, the best estimates of input parameters and boundary conditions were determined to provide a suitable match to historical transient ground-water flow conditions. The model was applied to calculate the future effect of the developing deep-rooted poplars over a 6-year period. The first 3 years represent the development period of the trees. In the fourth year, canopy closure is expected to occur; modeling continues through the first 3 years of the mature plantation. Monthly estimates of water use by the trees are incorporated. The modeling suggested that the mature trees in the plantation design will provide a large degree of containment of groundwater from the upgradient source areas, despite the seasonal nature of the trees' water consumption. The results indicate the likely areas where seasonal dewatering of the aquifer may limit the availability of water for the trees. The modeling also provided estimates of the residence time of groundwater in the geochemically altered rhizosphere of the plantation.
Predicting Ice Sheet and Climate Evolution at Extreme Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heimbach, Patrick
2016-02-06
A main research objective of PISCEES is the development of formal methods for quantifying uncertainties in ice sheet modeling. Uncertainties in simulating and projecting mass loss from the polar ice sheets arise primarily from initial conditions, surface and basal boundary conditions, and model parameters. In general terms, two main chains of uncertainty propagation may be identified: 1. inverse propagation of observation and/or prior uncertainties onto posterior control variable uncertainties; 2. forward propagation of prior or posterior control variable uncertainties onto those of target output quantities of interest (e.g., climate indices or ice sheet mass loss). A related goal is the development of computationally efficient methods for producing initial conditions for an ice sheet that are close to available present-day observations and essentially free of artificial model drift, which is required in order to be useful for model projections (the "initialization problem"). To be of maximum value, such optimal initial states should be accompanied by "useful" uncertainty estimates that account for the different sources of uncertainties, as well as the degree to which the optimum state is constrained by available observations. The PISCEES proposal outlined two approaches for quantifying uncertainties. The first targets the full exploration of the uncertainty in model projections with sampling-based methods and a workflow managed by DAKOTA (the main delivery vehicle for software developed under QUEST). This is feasible for low-dimensional problems, e.g., those with a handful of global parameters to be inferred. This approach can benefit from derivative/adjoint information, but such information is not necessary, which is why the approach is often referred to as "non-intrusive". The second approach makes heavy use of derivative information from model adjoints to address quantifying uncertainty in high dimensions (e.g., basal boundary conditions in ice sheet models). The use of local gradient or Hessian information (i.e., second derivatives of the cost function) requires additional code development and implementation, and is thus often referred to as an "intrusive" approach. Within PISCEES, MIT has been tasked to develop methods for derivative-based UQ, the "intrusive" approach discussed above. These methods rely on the availability of first-derivative (adjoint) and second-derivative (Hessian) code, developed through intrusive methods such as algorithmic differentiation (AD). While representing a significant burden in terms of code development, derivative-based UQ is able to cope with very high-dimensional uncertainty spaces. That is, unlike sampling methods (all variations of Monte Carlo), the computational burden is independent of the dimension of the uncertainty space. This is a significant advantage for spatially distributed uncertainty fields, such as three-dimensional initial conditions, three-dimensional parameter fields, or two-dimensional surface and basal boundary conditions. Importantly, uncertainty fields for ice sheet models generally fall into this category.
Bayesian Inference of High-Dimensional Dynamical Ocean Models
NASA Astrophysics Data System (ADS)
Lin, J.; Lermusiaux, P. F. J.; Lolla, S. V. T.; Gupta, A.; Haley, P. J., Jr.
2015-12-01
This presentation addresses a holistic set of challenges in high-dimensional ocean Bayesian nonlinear estimation: i) predict the probability distribution functions (pdfs) of large nonlinear dynamical systems using stochastic partial differential equations (PDEs); ii) assimilate data using Bayes' law with these pdfs; iii) predict the future data that optimally reduce uncertainties; and iv) rank the known and learn the new model formulations themselves. Overall, we allow the joint inference of the state, equations, geometry, boundary conditions, and initial conditions of dynamical models. Examples are provided for time-dependent fluid and ocean flows, including cavity, double-gyre, and Strait flows with jets and eddies. The Bayesian model inference, based on limited observations, is illustrated first by the estimation of obstacle shapes and positions in fluid flows. Next, the Bayesian inference of biogeochemical reaction equations and of their states and parameters is presented, illustrating how PDE-based machine learning can rigorously guide the selection and discovery of complex ecosystem models. Finally, the inference of multiscale bottom gravity current dynamics is illustrated, motivated in part by classic overflows and dense water formation sites and their relevance to climate monitoring and dynamics. This is joint work with our MSEAS group at MIT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paone, Jeffrey R; Bolme, David S; Ferrell, Regina Kay
Keeping a driver focused on the road is one of the most critical steps in ensuring the safe operation of a vehicle. The Strategic Highway Research Program 2 (SHRP2) has over 3,100 recorded videos of volunteer drivers during a period of 2 years. This extensive naturalistic driving study (NDS) contains over one million hours of video and associated data that could aid safety researchers in understanding where the driver's attention is focused. Manual analysis of this data is infeasible; therefore, efforts are underway to develop automated feature extraction algorithms to process and characterize the data. The real-world nature, volume, and acquisition conditions are unmatched in the transportation community, but there are also challenges because the data has relatively low resolution, high compression rates, and differing illumination conditions. A smaller dataset, the head pose validation study, is available, which used the same recording equipment as SHRP2 but is more easily accessible with fewer privacy constraints. In this work we report initial head pose accuracy using commercial and open source face pose estimation algorithms on the head pose validation data set.
Nutrition advocacy and national development: the PROFILES programme and its application.
Burkhalter, B. R.; Abel, E.; Aguayo, V.; Diene, S. M.; Parlato, M. B.; Ross, J. S.
1999-01-01
Investment in nutritional programmes can contribute to economic growth and is cost-effective in improving child survival and development. In order to communicate this to decision-makers, the PROFILES nutrition advocacy and policy development programme was applied in certain developing countries. Effective advocacy is necessary to generate financial and political support for scaling up from small pilot projects and maintaining successful national programmes. The programme uses scientific knowledge to estimate development indicators such as mortality, morbidity, fertility, school performance and labour productivity from the size and nutritional condition of populations. Changes in nutritional condition are estimated from the costs, coverage and effectiveness of proposed programmes. In Bangladesh this approach helped to gain approval and funding for a major nutrition programme. PROFILES helped to promote the nutrition component of an early childhood development programme in the Philippines, and to make nutrition a top priority in Ghana's new national child survival strategy. The application of PROFILES in these and other countries has been supported by the United States Agency for International Development, the United Nations Children's Fund, the World Bank, the Asian Development Bank, the Micronutrient Initiative and other bodies. PMID:10361758
Atmospheric Spray Freeze-Drying: Numerical Modeling and Comparison With Experimental Measurements.
Borges Sebastião, Israel; Robinson, Thomas D; Alexeenko, Alina
2017-01-01
Atmospheric spray freeze-drying (ASFD) represents a novel approach to drying thermosensitive solutions via sublimation. Tests conducted with second-generation ASFD equipment, developed for pharmaceutical applications, have focused initially on producing a light, fine, high-grade powder consistently and reliably. To better understand the heat and mass transfer physics and drying dynamics taking place within the ASFD chamber, three analytical models describing the key processes are developed and validated. First, by coupling the dynamics and heat transfer of single droplets sprayed into the chamber, the velocity, temperature, and phase-change evolutions of these droplets are estimated for actual operational conditions. This model reveals that, under typical operational conditions, the sprayed droplets require less than 100 ms to freeze. Second, because understanding the heat transfer throughout the entire freeze-drying process is so important, a theoretical model is proposed to predict the time evolution of the chamber gas temperature. Finally, a drying model, calibrated with hygrometer measurements, is used to estimate the total time required to achieve a predefined final moisture content. Results from these models are compared with experimental data. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
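A back-of-the-envelope check of the sub-100-ms freezing result can be made with a lumped-capacitance droplet cooling model (a simplification that neglects latent heat, droplet motion and evaporation; all property values below are assumptions, not values from the study):

```python
import numpy as np

# Lumped-capacitance cooling of one sprayed droplet (latent heat, droplet
# motion and evaporation neglected): m*c*dT/dt = -h*A*(T - T_gas), Nu = 2.
d = 20e-6                 # droplet diameter, m (assumed)
rho, c = 1000.0, 4186.0   # water density (kg/m^3) and heat capacity (J/kg K)
k_gas = 0.024             # gas thermal conductivity, W/(m K) (assumed)
T_gas, T0 = -60.0, 20.0   # gas and initial droplet temperatures, deg C

h = 2.0 * k_gas / d                  # heat transfer coefficient, W/(m^2 K)
tau = rho * c * d / (6.0 * h)        # exponential cooling time constant, s

t = np.linspace(0.0, 0.1, 1000)      # 100 ms
T = T_gas + (T0 - T_gas) * np.exp(-t / tau)
t_cool = t[np.argmax(T <= 0.0)]
print(f"tau = {tau*1e3:.1f} ms; droplet reaches 0 C after ~{t_cool*1e3:.1f} ms")
```

Even with latent heat included, time constants of a few milliseconds for 20-micron droplets are consistent with freezing well inside 100 ms.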
Diagnostics of Cold-Sprayed Particle Velocities Approaching Critical Deposition Conditions
NASA Astrophysics Data System (ADS)
Mauer, G.; Singh, R.; Rauwald, K.-H.; Schrüfer, S.; Wilson, S.; Vaßen, R.
2017-10-01
In cold spraying, the particle impact velocity plays a key role in successful deposition. It is well known that only those particles whose impact velocity exceeds a particular threshold can achieve successful bonding. This critical velocity depends on the thermomechanical properties of the impacting particles at the impact temperature. The latter depends on the gas temperature in the torch, but also on the stand-off distance and gas pressure. In the past, some semiempirical approaches have been proposed to estimate particle impact and critical velocities. Besides that, there are a limited number of available studies on particle velocity measurements in cold spraying. In the present work, particle velocity measurements were performed using a cold spray meter, in which a laser beam is used to illuminate the particles, ensuring sufficiently detectable radiant signal intensities. Measurements were carried out for INCONEL® alloy 718-type powders with different particle sizes. These experimental investigations mainly comprised subcritical spray parameters for this material, in order to take a closer look at the conditions of initial deposition. The critical velocities were identified by evaluating the deposition efficiencies and correlating them with the measured particle velocity distributions. In addition, the experimental results were compared with values estimated by model calculations.
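The correlation step admits a simple reading: if only particles faster than the critical velocity bond, the deposition efficiency equals the fraction of the measured velocity distribution above that threshold, so a critical-velocity estimate falls out as a quantile. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical particle velocities (m/s) measured by a cold spray meter
velocities = np.random.default_rng(0).normal(loc=620.0, scale=40.0, size=5000)

# Hypothetical measured deposition efficiency (deposited / impacting mass)
deposition_efficiency = 0.35

# If only particles faster than v_crit bond, DE equals the fraction of the
# velocity distribution above v_crit, i.e. v_crit is the (1 - DE) quantile.
v_crit = np.quantile(velocities, 1.0 - deposition_efficiency)
print(f"estimated critical velocity: {v_crit:.0f} m/s")
```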
NASA Astrophysics Data System (ADS)
Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi
2010-01-01
The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. The torque control parameters (KP: proportional gain, KD: derivative gain) and the pole placements of the postural control system are estimated over time from the inclination angle variation using the fixed trace method, a recursive least squares technique. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, separated by 10 s stationary intervals, with their neck, hip and knee joints fixed, and then return to the initial upright posture. The inclination angle is measured by an optical motion capture system. Three conditions are introduced to simulate unstable standing posture: 1) eyes-open posture for the healthy condition, 2) eyes-closed posture for visual impairment and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP, KD and the pole placements are subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD and the real pole reflect the effect of lower-extremity muscle weakness, and that KD also represents the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.
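For readers unfamiliar with the estimation step, a fixed-trace (constant-trace) recursive least squares scheme can track time-varying gains from sampled angle and torque signals. A minimal sketch, assuming the torque model tau = KP*theta + KD*theta_dot and synthetic signals (not the study's data):

```python
import numpy as np

def rls_fixed_trace(theta, theta_dot, tau, trace_target=1e3):
    """Recursive least squares with a constant-trace covariance rescaling,
    estimating [KP, KD] in the model tau = KP*theta + KD*theta_dot."""
    w = np.zeros(2)                        # parameter estimates [KP, KD]
    P = np.eye(2) * trace_target / 2.0     # covariance, trace held constant
    history = []
    for th, thd, y in zip(theta, theta_dot, tau):
        phi = np.array([th, thd])                 # regressor
        k = P @ phi / (1.0 + phi @ P @ phi)       # gain vector
        w = w + k * (y - phi @ w)                 # parameter update
        P = P - np.outer(k, phi @ P)              # covariance update
        P *= trace_target / np.trace(P)           # fixed-trace rescaling
        history.append(w.copy())
    return np.array(history)

# Synthetic signals: true KP = 800, KD = 200 (hypothetical values)
t = np.linspace(0.0, 10.0, 1000)
theta, theta_dot = 0.1 * np.sin(t), 0.1 * np.cos(t)
tau = 800.0 * theta + 200.0 * theta_dot
print(rls_fixed_trace(theta, theta_dot, tau)[-1])  # approaches [800, 200]
```

The constant-trace rescaling keeps the covariance from collapsing, which is what lets the estimator follow gains that drift over the trial.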
NASA Astrophysics Data System (ADS)
Sévellec, Florian; Dijkstra, Henk A.; Drijfhout, Sybren S.; Germe, Agathe
2017-11-01
In this study, the relation between two approaches to assess ocean predictability on interannual to decadal time scales is investigated. The first, pragmatic approach consists of sampling the initial condition uncertainty and assessing the predictability through the divergence of this ensemble in time. The second approach is provided by a theoretical framework to determine error growth by estimating optimal linear growing modes. In this paper, it is shown that under the assumption of linearized dynamics and normal distributions of the uncertainty, the exact quantitative spread of the ensemble can be determined from the theoretical framework. This spread is at least an order of magnitude less expensive to compute than the approximate solution given by the pragmatic approach. This result is applied to a state-of-the-art Ocean General Circulation Model to assess the predictability in the North Atlantic of four typical oceanic metrics: the strength of the Atlantic Meridional Overturning Circulation (AMOC), the intensity of its heat transport, the two-dimensional spatially-averaged Sea Surface Temperature (SST) over the North Atlantic, and the three-dimensional spatially-averaged temperature in the North Atlantic. For all tested metrics except SST, ˜75% of the total uncertainty on interannual time scales can be attributed to oceanic initial condition uncertainty rather than atmospheric stochastic forcing. The theoretical method also provides the sensitivity pattern to the initial condition uncertainty, allowing for targeted measurements to improve the skill of the prediction. It is suggested that a relatively small fleet of autonomous underwater vehicles can reduce the uncertainty in AMOC strength prediction by 70% for 1-5 year lead times.
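The equivalence exploited here can be stated compactly: for linear dynamics x(t) = M x(0) and Gaussian initial uncertainty with covariance P0, the forecast covariance is P(t) = M P0 Mᵀ, so no ensemble is needed. A toy comparison of the two approaches (the 2x2 propagator is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
M = np.array([[0.9, 0.2], [0.0, 0.8]])   # hypothetical linear propagator
P0 = np.diag([1.0, 0.5])                 # initial-condition covariance

# Theoretical forecast covariance, no ensemble required
P_theory = M @ P0 @ M.T

# Pragmatic approach: propagate a large ensemble and measure its spread
ens = rng.multivariate_normal(np.zeros(2), P0, size=100_000) @ M.T
P_ensemble = np.cov(ens.T)

print(np.round(P_theory, 3))
print(np.round(P_ensemble, 3))  # converges to P_theory as the ensemble grows
```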
Initial report of the osteogenesis imperfecta adult natural history initiative.
Tosi, Laura L; Oetgen, Matthew E; Floor, Marianne K; Huber, Mary Beth; Kennelly, Ann M; McCarter, Robert J; Rak, Melanie F; Simmonds, Barbara J; Simpson, Melissa D; Tucker, Carole A; McKiernan, Fergus E
2015-11-14
A better understanding of the natural history of osteogenesis imperfecta (OI) in adulthood should improve health care for patients with this rare condition. The Osteogenesis Imperfecta Foundation established the Adult Natural History Initiative (ANHI) in 2010 to give voice to the health concerns of the adult OI community and to begin to address existing knowledge gaps for this condition. Using a web-based platform, 959 adults with self-reported OI, representing a wide range of self-reported disease severity, reported symptoms and health conditions, estimated the impact of these concerns on present and future health-related quality of life (QoL) and completed a Patient-Reported Outcomes Measurement Information System (PROMIS®) survey of health issues. Adults with OI report lower general physical health status (p < .0001) and exhibit a higher prevalence of auditory (58% of sample versus 2-16% of normalized population) and musculoskeletal (64% of sample versus 1-3% of normalized population) concerns than the general population, but report generally similar mental health status. Musculoskeletal, auditory, pulmonary, endocrine, and gastrointestinal issues are particular future health-related QoL concerns for these adults. Numerous other statistically significant differences exist among adults with OI as well as between adults with OI and the referent PROMIS® population, but the clinical significance of these differences is uncertain. Adults with OI report lower general health status but are otherwise more similar to the general population than might have been expected. While this is reassuring, further analysis of the extensive OI-ANHI databank should help identify areas of unique clinical concern and targets for future research. The OI-ANHI survey experience supports an internet-based strategy for successful patient-centered outcomes research in rare disease populations.
NASA Astrophysics Data System (ADS)
Olinde, L.; Johnson, J. P.
2013-12-01
By monitoring the transport timing and distances of tracer grains in a steep mountain stream, we collected data that can constrain numerical bedload transport models considered for these systems. We captured bedload activity during a weeks-long snowmelt period in Reynolds Creek, Idaho, by deploying Radio Frequency Identification (RFID) and accelerometer-embedded tracers with in-stream stationary RFID antennas. During transport events, RFID dataloggers recorded the times when tracers passed over stationary antennas. The accelerometer tracers also logged x-, y-, and z-axis accelerations every 10 minutes to identify times of motion and rest. After snowmelt flows receded, we found tracers with mobile antennas and surveyed their positions. We therefore know the timing and tracer locations when accelerometer tracers were initially entrained, passed stationary antennas, and were finally deposited at the surveyed locations. The fraction of moving accelerometers over time correlates well with discharge. Comparisons of the transported tracer fraction between rising and falling limbs over multiple flood peaks suggest that some degree of clockwise hysteresis persisted during the snowmelt period. Additionally, we apply accelerometer transport durations and displacement distances to calculate virtual velocities over full tracer path lengths, from initial locations to stationary antennas, and from stationary antennas to final positions. The accelerometer-based virtual velocities are significantly faster than those estimated from traditional tracer methods, which estimate bedload transport durations by assuming threshold flow conditions. We also subsample the motion data to calculate how virtual velocities change over the measurement intervals. Regressions of these relations are in turn used to extrapolate virtual velocities at smaller sampling timescales. Minimum hop lengths are also evaluated for each accelerometer tracer. Finally, flow conditions during the snowmelt hydrograph are modeled over the 11 kilometers of surveyed stream by utilizing 1 m airborne LiDAR and HEC-GeoRAS. Cross-sectional HEC-RAS results are used to estimate the spatial distribution of longitudinal shear velocities over the observed discharges. At final accelerometer tracer positions, we analyze the HEC-RAS generated flow conditions for each disentrainment discharge magnitude. The techniques developed here have the potential to link individual grain characteristics during floods to a range of time and length scales.
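To make the velocity definitions concrete, the sketch below contrasts an accelerometer-based virtual velocity (displacement divided by time actually in motion) with a threshold-based one that assumes transport during the whole interval; all numbers are hypothetical:

```python
import numpy as np

# Hypothetical accelerometer log: 10-min epochs flagged moving/resting,
# plus the surveyed displacement between start and end positions.
epoch_s = 600
moving = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 1], dtype=bool)
displacement_m = 42.0

time_in_motion = moving.sum() * epoch_s                # seconds in motion
v_accel = displacement_m / time_in_motion              # accelerometer-based
v_thresh = displacement_m / (len(moving) * epoch_s)    # assumes motion all along
print(f"{v_accel:.4f} m/s (accelerometer) vs {v_thresh:.4f} m/s (threshold)")
```

Because resting time is excluded from the denominator, the accelerometer-based velocity is necessarily the faster of the two, which is the pattern the study reports.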
Magma ocean formation due to giant impacts
NASA Technical Reports Server (NTRS)
Tonks, W. B.; Melosh, H. J.
1993-01-01
The thermal effects of giant impacts are studied by estimating the melt volume generated by the initial shock wave and corresponding magma ocean depths. Additionally, the effects of the planet's initial temperature on the generated melt volume are examined. The shock pressure required to completely melt the material is determined using the Hugoniot curve plotted in pressure-entropy space. Once the melting pressure is known, an impact melting model is used to estimate the radial distance melting occurred from the impact site. The melt region's geometry then determines the associated melt volume. The model is also used to estimate the partial melt volume. Magma ocean depths resulting from both excavated and retained melt are calculated, and the melt fraction not excavated during the formation of the crater is estimated. The fraction of a planet melted by the initial shock wave is also estimated using the model.
Connell, J.F.; Bailey, Z.C.
1989-01-01
A total of 338 single-well aquifer tests from Bear Creek and Melton Valley, Tennessee, were statistically grouped to estimate hydraulic conductivities for the geologic formations in the valleys. A cross-sectional simulation model linked to a regression model was used to further refine the statistical estimates for each of the formations and to improve understanding of groundwater flow in Bear Creek Valley. Median hydraulic-conductivity values were used as initial values in the model. Model-calculated estimates of hydraulic conductivity were generally lower than the statistical estimates. Simulations indicate that (1) the Pumpkin Valley Shale controls groundwater flow between Pine Ridge and Bear Creek; (2) all the recharge on Chestnut Ridge discharges to the Maynardville Limestone; (3) the formations having smaller hydraulic gradients may have a greater tendency for flow along strike; (4) local hydraulic conditions in the Maynardville Limestone cause inaccurate model-calculated estimates of hydraulic conductivity; and (5) the conductivity of deep bedrock neither affects the results of the model nor adds information on the flow system. Improved model performance would require: (1) more water level data for the Copper Ridge Dolomite; (2) improved estimates of hydraulic conductivity in the Copper Ridge Dolomite and Maynardville Limestone; and (3) more water level data and aquifer tests in deep bedrock. (USGS)
Peatross, J; Johansen, J
2014-01-13
Strong-field laser-atom interactions provide extreme conditions that may be useful for investigating the de Broglie-Bohm quantum interpretation. Bohmian trajectories representing bound electrons in individual atoms exhibit both even and odd harmonic motion when subjected to a strong external laser field. The phases of the even harmonics depend on the random initial positions of the trajectories within the wave function, making the even harmonics incoherent. In contrast, the phases of odd harmonics remain for the most part coherent regardless of initial position. Under the conjecture that a Bohmian point particle plays the role of emitter, this suggests an experiment to determine whether both even and odd harmonics are produced at the atomic level. Estimates suggest that incoherent emission of even harmonics may be detectable out the side of an intense laser focus interacting with a large number of atoms.
Atmospheric Fragmentation of the Canyon Diablo Meteoroid
NASA Technical Reports Server (NTRS)
Pierazzo, E.; Artemieva, N. A.
2005-01-01
About 50 kyr ago the impact of an iron meteoroid excavated Meteor Crater, Arizona, the first terrestrial structure widely recognized as a meteorite impact crater. Recent studies of ballistically dispersed impact melts from Meteor Crater indicate a compositionally unusually heterogeneous impact melt with high SiO2 and exceptionally high (10 to 25% on average) levels of projectile contamination. These are observations that must be explained by any theoretical modeling of the impact event. Simple atmospheric entry models for an iron meteorite similar to Canyon Diablo indicate that the surface impact speed should have been around 12 km/s [Melosh, personal comm.], not the 15-20 km/s generally assumed in previous impact models. This may help explain the unusual characteristics of the impact melt at Meteor Crater. We present alternative initial estimates of the motion in the atmosphere of an iron projectile similar to Canyon Diablo, to constrain the initial conditions of the impact event that generated Meteor Crater.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salasovich, J.; Geiger, J.; Healey, V.
The U.S. Environmental Protection Agency (EPA), in accordance with the RE-Powering America's Land initiative, selected the Former Chicago, Milwaukee & St. Paul Rail Yard Company site in Perry, Iowa, for a feasibility study of renewable energy production. The National Renewable Energy Laboratory (NREL) provided technical assistance for this project. The purpose of this report is to assess the site for a photovoltaic (PV) system installation and estimate the cost, performance, and site impacts of different PV options. In addition, the report recommends financing options that could assist in the implementation of a PV system at the site. This study did not assess environmental conditions at the site.
A compressed sensing based approach on Discrete Algebraic Reconstruction Technique.
Demircan-Tureyen, Ezgi; Kamasak, Mustafa E
2015-01-01
Discrete tomography (DT) techniques are capable of computing better results than continuous tomography techniques, even when using fewer projections. The Discrete Algebraic Reconstruction Technique (DART) is an iterative reconstruction method proposed to achieve this goal by exploiting prior knowledge of the gray levels and assuming that the scanned object is composed of a few different densities. In this paper, the DART method is combined with an initial total variation minimization (TvMin) phase to ensure a better initial guess, and extended with a segmentation procedure in which the threshold values are estimated from a finite set of candidates so as to minimize both the projection error and the total variation (TV) simultaneously. The accuracy and robustness of the algorithm are compared with the original DART in simulation experiments performed under (1) a limited number of projections, (2) limited-view and (3) noisy-projection conditions.
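The extended segmentation step lends itself to a compact sketch: try each candidate threshold, segment the current reconstruction to the known gray levels, and keep the candidate minimizing projection error plus weighted total variation. A sketch under assumed shapes for the system matrix and projections (not the authors' code):

```python
import numpy as np

def total_variation(img):
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def pick_threshold(recon, A, p, candidates, grays=(0.0, 1.0), lam=0.1):
    """Choose, from a finite candidate set, the threshold whose two-level
    segmentation minimizes projection error plus weighted total variation."""
    best, best_cost = None, np.inf
    for tau in candidates:
        seg = np.where(recon < tau, grays[0], grays[1])
        cost = np.linalg.norm(A @ seg.ravel() - p) + lam * total_variation(seg)
        if cost < best_cost:
            best, best_cost = tau, cost
    return best

# Tiny demo: 2x2 "image", identity system matrix, noise-free projections
recon = np.array([[0.2, 0.8], [0.4, 0.9]])   # continuous reconstruction
A = np.eye(4)                                # hypothetical system matrix
p = np.array([0.0, 1.0, 0.0, 1.0])           # measured projections
print(pick_threshold(recon, A, p, candidates=np.linspace(0.1, 0.9, 9)))  # 0.5
```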
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Geet, O.; Mosey, G.
2013-03-01
The U.S. Environmental Protection Agency (EPA), in accordance with the RE-Powering America's Land initiative, selected the Tower Road site in Aurora, Colorado, for a feasibility study of renewable energy production. The National Renewable Energy Laboratory (NREL) provided technical assistance for this project. The purpose of this report is to assess the site for a possible photovoltaic (PV) system installation and estimate the cost, performance, and site impacts of different PV options. In addition, the report recommends financing options that could assist in the implementation of a PV system at the site. This study did not assess environmental conditions at the site.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salasovich, J.; Geiger, J.; Mosey, G.
2013-05-01
The U.S. Environmental Protection Agency (EPA), in accordance with the RE-Powering America's Land initiative, selected the Price Landfill site in Pleasantville, New Jersey, for a feasibility study of renewable energy production. The National Renewable Energy Laboratory (NREL) provided technical assistance for this project. The purpose of this report is to assess the site for a possible photovoltaic (PV) system installation and estimate the cost, performance, and site impacts of different PV options. In addition, the report recommends financing options that could assist in the implementation of a PV system at the site. This study did not assess environmental conditions at the site.
Bozkoyunlu, Gaye; Takaç, Serpil
2014-01-01
Olive mill wastewater (OMW) with a total phenol (TP) concentration in the range of 300-1200 mg/L was treated with alginate-immobilized Rhodotorula glutinis cells in a batch system. The effects of pellet properties (diameter, alginate concentration and cell loading (CL)) and operational parameters (initial TP concentration, agitation rate and reusability of pellets) on the dephenolization of OMW were studied. Up to 87% dephenolization was obtained after 120 h of biodegradation. The number of times the pellets could be reused increased with the addition of calcium ions to the biodegradation medium. The overall effectiveness factors calculated for different conditions showed that diffusional limitations arising from pellet size and pellet composition could be neglected. Mass transfer limitations appeared to be more pronounced at high substrate concentrations and low agitation rates. The parameters of the logistic model for the growth kinetics of R. glutinis in OMW were estimated at different initial phenol concentrations by curve-fitting the experimental data with the model.
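The final curve-fitting step is a standard nonlinear least squares problem. A minimal sketch, assuming the common logistic form X(t) = Xmax / (1 + ((Xmax - X0)/X0) exp(-mu t)) and hypothetical biomass data (not values from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, x0, xmax, mu):
    """Logistic growth: initial biomass x0, capacity xmax, specific rate mu."""
    return xmax / (1.0 + ((xmax - x0) / x0) * np.exp(-mu * t))

# Hypothetical biomass measurements (g/L) over 120 h of biodegradation
t_data = np.array([0.0, 24.0, 48.0, 72.0, 96.0, 120.0])
x_data = np.array([0.5, 1.1, 2.3, 3.6, 4.2, 4.4])

popt, _ = curve_fit(logistic, t_data, x_data, p0=[0.5, 4.5, 0.05])
print(f"x0 = {popt[0]:.2f} g/L, xmax = {popt[1]:.2f} g/L, mu = {popt[2]:.3f} 1/h")
```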
Pharmacist-patient communication about medication regimen adjustment during Ramadan.
Amin, Mohamed E K; Chewning, Betty
2016-12-01
During Ramadan, Muslims fast from dawn to sunset, abstaining from food and drink. Although Muslim patients may be aware of their religious exemption from fasting, many still choose not to take that exemption and fast. This study examines pharmacists' initiation and timing of communication with patients about medication regimen adjustment (MRA) related to Ramadan. Predictors of initiating this communication with patients were also explored. A probability sample of community pharmacists in Alexandria, Egypt was surveyed. The self-administered instrument covered the timing and likelihood of initiating discussion about MRA. Using ordered logistic regression, a model was estimated to predict pharmacists' initiation of the conversation on MRA during Ramadan. Ninety-three percent of the 298 approached pharmacists completed surveys. Only 16% of the pharmacists reported that they themselves usually initiated the conversation on MRA. Pharmacists' initiation of these conversations was associated with their perceived importance of MRA for pharmacy revenue (odds ratio (OR) = 1.24, CI = 1.03-1.48). Eighty percent of the responding pharmacists reported that the MRA conversation for chronic conditions started either 1-3 days before, or during the first week of, Ramadan. These results suggest considerable pharmacist-patient communication gaps regarding medication use during Ramadan. It is especially important for pharmacists and other health professionals to initiate communication with Muslim patients early enough to identify how best to help patients transition safely into and out of Ramadan as they fast. © 2016 Royal Pharmaceutical Society.
Mori, J.; Abercrombie, R.E.
1997-01-01
Statistics of earthquakes in California show linear frequency-magnitude relationships in the range of M2.0 to M5.5 for various data sets. Assuming Gutenberg-Richter distributions, there is a systematic decrease in b value with increasing depth of earthquakes. We find consistent results for various data sets from northern and southern California that both include and exclude the larger aftershock sequences. We suggest that at shallow depth (~0 to 6 km) conditions with more heterogeneous material properties and lower lithospheric stress prevail. Rupture initiations are more likely to stop before growing into large earthquakes, producing relatively more small earthquakes and consequently higher b values. These ideas help to explain the depth-dependent observations of foreshocks in the western United States. The higher occurrence rate of foreshocks preceding shallow earthquakes can be interpreted in terms of rupture initiations that are stopped before growing into the mainshock. At greater depth (9-15 km), any rupture initiation is more likely to continue growing into a larger event, so there are fewer foreshocks. If one assumes that frequency-magnitude statistics can be used to estimate probabilities of a small rupture initiation growing into a larger earthquake, then a small (M2) rupture initiation at 9 to 12 km depth is 18 times more likely to grow into a M5.5 or larger event, compared to the same small rupture initiation at 0 to 3 km. Copyright 1997 by the American Geophysical Union.
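The quoted factor of 18 follows directly from the Gutenberg-Richter relation log10 N(>=M) = a - bM: the chance that an initiation that reached M2 grows to M5.5 or larger scales as 10^(-b(5.5-2)), so the deep-to-shallow ratio is 10^(3.5(b_shallow - b_deep)). A minimal sketch with illustrative b values chosen to reproduce the factor (not the paper's estimates):

```python
# Gutenberg-Richter: N(>=M) proportional to 10**(-b*M)
b_shallow = 1.05      # illustrative b value for 0-3 km depth
b_deep = 0.69         # illustrative b value for 9-12 km depth
dM = 5.5 - 2.0        # growth from a small (M2) initiation to M5.5

ratio = 10.0 ** (dM * (b_shallow - b_deep))
print(f"deep initiation is ~{ratio:.0f}x more likely to reach M5.5")  # ~18
```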
Le, Thao N; Stockdale, Gary
2011-10-01
The purpose of this study was to examine the effects of school demographic factors and youth's perception of discrimination on delinquency in adolescence and into young adulthood for African American, Asian, Hispanic, and white racial/ethnic groups. Using data from the National Longitudinal Study of Adolescent Health (Add Health), models testing the effect of school-related variables on delinquency trajectories were evaluated for the four racial/ethnic groups using Mplus 5.21 statistical software. Results revealed that greater student ethnic diversity and perceived discrimination, but not teacher ethnic diversity, resulted in higher initial delinquency estimates at 13 years of age for all groups. However, except for African Americans, having a greater proportion of female teachers in the school decreased initial delinquency estimates. For African Americans and whites, a larger school size also increased the initial estimates. Additionally, lower socioeconomic status increased the initial estimates for whites, and being born in the United States increased the initial estimates for Asians and Hispanics. Finally, regardless of the initial delinquency estimate at age 13 and the effect of the school variables, all groups eventually converged to extremely low delinquency in young adulthood, at the age of 21 years. Educators and public policy makers seeking to prevent and reduce delinquency can modify individual risks by modifying characteristics of the school environment. Policies that promote respect for diversity and intolerance toward discrimination, as well as training to help teachers recognize the precursors and signs of aggression and/or violence, may also facilitate a positive school environment, resulting in lower delinquency. Copyright © 2011 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior
NASA Technical Reports Server (NTRS)
Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.
2017-01-01
A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
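The dual-filter structure can be seen in a toy scalar problem: the parameter filter treats the unknown coefficient as a random walk and the state filter uses the freshly updated parameter each step. A sketch of the general technique on a hypothetical first-order system, far simpler than the manual-control model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy system: x[k] = a*x[k-1] + u[k-1] + process noise, z[k] = x[k] + noise.
# The unknown coefficient `a` is modeled as a random walk (dual estimation).
a_true, n = 0.95, 500
u = np.sin(0.05 * np.arange(n))                  # known excitation input
x_true = np.zeros(n)
for k in range(1, n):
    x_true[k] = a_true * x_true[k-1] + u[k-1] + 0.01 * rng.standard_normal()
z = x_true + 0.1 * rng.standard_normal(n)

x, Px, Qx, R = 0.0, 1.0, 1e-4, 0.01              # state filter
a, Pa, Qa = 0.5, 1.0, 1e-6                       # parameter filter
for k in range(1, n):
    # parameter filter: random-walk prediction, update via z[k] ~ a*x + u
    Pa += Qa
    Ha = x                                       # Jacobian d(z_pred)/da
    Ka = Pa * Ha / (Ha * Pa * Ha + R)
    a += Ka * (z[k] - (a * x + u[k-1]))
    Pa *= 1.0 - Ka * Ha
    # state filter: predict with the freshly updated parameter, then update
    x_pred = a * x + u[k-1]
    Px = a * Px * a + Qx
    Kx = Px / (Px + R)
    x = x_pred + Kx * (z[k] - x_pred)
    Px *= 1.0 - Kx

print(f"estimated a = {a:.3f} (true value {a_true})")  # should be close
```

The random-walk process noise Qa plays the same role as in the paper: larger values let the parameter estimates track faster variations at the cost of noisier estimates.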
The relationship between offspring size and fitness: integrating theory and empiricism.
Rollinson, Njal; Hutchings, Jeffrey A
2013-02-01
How parents divide the energy available for reproduction between size and number of offspring has a profound effect on parental reproductive success. Theory indicates that the relationship between offspring size and offspring fitness is of fundamental importance to the evolution of parental reproductive strategies: this relationship predicts the optimal division of resources between size and number of offspring, it describes the fitness consequences for parents that deviate from optimality, and its shape can predict the most viable type of investment strategy in a given environment (e.g., conservative vs. diversified bet-hedging). Many previous attempts to estimate this relationship and the corresponding value of optimal offspring size have been frustrated by a lack of integration between theory and empiricism. In the present study, we draw from C. Smith and S. Fretwell's classic model to explain how a sound estimate of the offspring size-fitness relationship can be derived with empirical data. We evaluate what measures of fitness can be used to model the offspring size-fitness curve and optimal size, as well as which statistical models should and should not be used to estimate offspring size-fitness relationships. To construct the fitness curve, we recommend that offspring fitness be measured as survival up to the age at which the instantaneous rate of offspring mortality becomes random with respect to initial investment. Parental fitness is then expressed in ecologically meaningful, theoretically defensible, and broadly comparable units: the number of offspring surviving to independence. Although logistic and asymptotic regression have been widely used to estimate offspring size-fitness relationships, the former provides relatively unreliable estimates of optimal size when offspring survival and sample sizes are low, and the latter is unreliable under all conditions. We recommend that the Weibull-1 model be used to estimate this curve because it provides modest improvements in prediction accuracy under experimentally relevant conditions.
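As an illustration of the recommended fitting step, here is a sketch using one common Weibull-type-1 parameterization, f(x) = d exp(-exp(b(log x - log e))) with the lower asymptote fixed at zero (the parameterization, similar to that in R's drc package, and the data are assumptions, not the authors'):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull1(x, b, d, e):
    """Weibull-type-1 curve with the lower asymptote fixed at zero."""
    return d * np.exp(-np.exp(b * (np.log(x) - np.log(e))))

# Hypothetical data: offspring size (mg) and survival to independence (0-1)
size = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])
surv = np.array([0.02, 0.10, 0.30, 0.55, 0.70, 0.80, 0.88, 0.90])

popt, _ = curve_fit(weibull1, size, surv, p0=[-2.0, 0.9, 1.8])
print(dict(zip(["b", "d", "e"], np.round(popt, 2))))
```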
Nearshore Bathymetric Change Resolved by Depth Inversions, Sonic Altimeters, and In-Situ Surveys
NASA Astrophysics Data System (ADS)
Brodie, K. L.; Palmsten, M. L.; Hesser, T.; Dickhudt, P.; Ladner, H.; Elgar, S.; Raubenheimer, B.; Penko, A.
2016-12-01
Video-based remote sensing of shoaling and breaking surface gravity waves combined with a depth-inversion algorithm, cBathy, may be able to provide bathymetry information with high spatial and temporal resolution in the nearshore (Holman et al., 2013, JGR, Vol 118). Although the accuracy of cBathy has been assessed in low-wave conditions when coincident in-situ surveys are available, it has not been tested for many conditions with significant wave height > 1.5 m. During high wave conditions, the use of linear wave theory in the depth-inversion algorithm may result in estimates of water depth that are too deep. Here, measurements from an in-situ array of sonic altimeters and from frequent watercraft surveys are used to assess the ability of cBathy to estimate the spatio-temporal evolution of the seafloor during a range of wave conditions at a micro-tidal sandy beach in Duck, NC. Observations were collected continuously from 14 October to 01 November 2015 with 8 altimeters in 1.5 to 4 m water depth on 2 cross-shore transects separated by 75 m in the alongshore during waves that ranged from 0.5 to 1.0 m. Nearshore bathymetry was alongshore variable, with a crescentic bar that attached to the shoreline along one transect and was 150 m offshore along the other transect. Sand levels changed by as much as 1 m in some locations. Additional measurements were collected with 3 altimeters on a single cross-shore transect for 6 months, with wave heights from 0.3 to 5.0 m and sand level fluctuations of up to 1 m in a single day. Initial comparisons with surveys show cBathy RMSE and bias are of similar magnitude to prior studies. Although cBathy resolves the large-scale spatial morphology of the sandbar, when Hs > 1.3 m cBathy estimates of the sandbar location are 10 to 50 m onshore of the surveyed location. cBathy uncertainty estimates were a poor representation of actual errors when compared with the surveys. Six-month-long time series of altimeter data will be used to assess cBathy's performance during large wave conditions, and altimeter and survey data will be used to assess the spatial and temporal scales of change that can be resolved with cBathy. Funded by USACE, ASAALT, NRL, and ASD(R&E).
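For context, the depth-inversion step rests on the linear dispersion relation omega^2 = g k tanh(k h); with the wave frequency and wavenumber estimated from video, depth follows by inversion, and amplitude effects in breaking waves help explain the bias toward biased depth estimates in high-wave conditions. A minimal sketch with illustrative values:

```python
import numpy as np

g = 9.81  # m/s^2

def depth_from_dispersion(omega, k):
    """Invert the linear dispersion relation omega^2 = g*k*tanh(k*h) for h."""
    arg = omega**2 / (g * k)
    if arg >= 1.0:        # deep-water limit: tanh saturates, depth unresolved
        return np.inf
    return np.arctanh(arg) / k

# Illustrative values: an 8 s period wave with a 60 m observed wavelength
omega = 2.0 * np.pi / 8.0
k = 2.0 * np.pi / 60.0
print(f"inverted depth h = {depth_from_dispersion(omega, k):.1f} m")
```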
Kir'ianova, V V; Baburin, I N; Goncharova, V G; Veselovskiĭ, A B
2012-01-01
The objective of the present clinical and psychopathological study was to estimate the influence of high-intensity white and yellow phototherapy on the clinical condition of 41 and 18 patients, respectively, presenting with astheno-depressive syndrome. The control group comprised 42 patients who were treated by standard pharmacotherapy. Clinical observations of the patients were supplemented by the evaluation of their conditions and characteristics using the Symptom Checklist-90 questionnaire, the Bekhterev Depression Inventory, and the Beck Depression Inventory. The patients of the three groups were surveyed before and within 20 days after the initiation of the treatment. It was shown that white light phototherapy considerably reduced the severity of asthenia and depression. Yellow light phototherapy proved more efficacious in the patients with asthenia and somatovegetative dysfunctions.
Doubrawa, P.; Barthelmie, R. J.; Wang, H.; ...
2016-10-03
The contribution of wake meandering and shape asymmetry to load and power estimates is quantified by comparing aeroelastic simulations initialized with different inflow conditions: an axisymmetric base wake, an unsteady stochastic shape wake, and a large-eddy simulation with rotating actuator-line turbine representation. Time series of blade-root and tower base bending moments are analyzed. We find that meandering has a large contribution to the fluctuation of the loads. Moreover, considering the wake edge intermittence via the stochastic shape model improves the simulation of load and power fluctuations and of the fatigue damage equivalent loads. Furthermore, these results indicate that the stochastic shape wake simulator is a valuable addition to simplified wake models when seeking to obtain higher-fidelity computationally inexpensive predictions of loads and power.
Examining intention in simulated actions: are children and young adults different?
Gabbard, Carl; Caçola, Priscila
2014-10-01
Previous work with adults provides evidence that 'intention' used in processing simulated actions is similar to that used in planning and processing overt movements. The present study compared young adults and children on their ability to estimate distance reachability using a NOGO/GO paradigm in conditions of imagery only (IO) and imagery with actual execution (IE). Our initial thoughts were that whereas intention is associated with motivation and commitment to act, age-related differences could impact planning. Results indicated no difference in overall accuracy by condition within groups, and as expected adults were more accurate. These findings support an increasing body of evidence suggesting that the neurocognitive processes (in this case, intention) driving motor imagery and overt actions are similar, and as evidenced here, functioning by age 7. Copyright © 2014 Elsevier Inc. All rights reserved.
Ocean Data Assimilation in Support of Climate Applications: Status and Perspectives.
Stammer, D; Balmaseda, M; Heimbach, P; Köhl, A; Weaver, A
2016-01-01
Ocean data assimilation brings together observations with known dynamics encapsulated in a circulation model to describe the time-varying ocean circulation. Its applications are manifold, ranging from marine and ecosystem forecasting to climate prediction and studies of the carbon cycle. Here, we address only climate applications, which range from improving our understanding of ocean circulation to estimating initial or boundary conditions and model parameters for ocean and climate forecasts. Because of differences in underlying methodologies, data assimilation products must be used judiciously and selected according to the specific purpose, as not all related inferences would be equally reliable. Further advances are expected from improved models and methods for estimating and representing error information in data assimilation systems. Ultimately, data assimilation into coupled climate system components is needed to support ocean and climate services. However, maintaining the infrastructure and expertise for sustained data assimilation remains challenging.
Real-time caries diagnostics by optical PNC method
NASA Astrophysics Data System (ADS)
Masychev, Victor I.; Alexandrov, Michail T.
2000-11-01
The results of research on hard dental tissues by the optical PNC method under experimental and clinical conditions are presented. In the experiment, 90 test samples of tooth slices about 1 mm thick (enamel, dentine and cement) were examined. The experimental results were processed by correlation analysis. Clinical studies were carried out on the teeth of 210 patients. Regions of tooth tissue disease with initial, moderate and deep caries were investigated. Spectral characteristics of intact and pathologically changed tooth tissues are presented and their characteristic features are discussed. The results of applying the optical PNC method during the treatment of carious tooth cavities are presented, in order to estimate the efficiency of the mechanical and antiseptic processing of the teeth. It is revealed that the PNC method can be used both for differential diagnostics of the stage of dental caries and for assessing the thoroughness of tooth cavity processing before filling.
Express diagnostics of intact and pathological dental hard tissues by optical PNC method
NASA Astrophysics Data System (ADS)
Masychev, Victor I.; Alexandrov, Michail T.
2000-03-01
The results of research on hard dental tissues by the optical PNC method under experimental and clinical conditions are presented. In the experiment, 90 test samples of tooth slices about 1 mm thick (enamel, dentine and cement) were examined. The experimental results were processed by correlation analysis. Clinical studies were carried out on the teeth of 210 patients. Regions of tooth tissue disease with initial, moderate and deep caries were investigated. Spectral characteristics of intact and pathologically changed tooth tissues are presented and their characteristic features are discussed. The results of applying the optical PNC method during the treatment of carious tooth cavities are presented, in order to estimate the efficiency of the mechanical and antiseptic processing of the teeth. It is revealed that the PNC method can be used both for differential diagnostics of the stage of dental caries and for assessing the thoroughness of tooth cavity processing before filling.
Large-scale structure non-Gaussianities with modal methods
NASA Astrophysics Data System (ADS)
Schmittfull, Marcel
2016-10-01
Relying on a separable modal expansion of the bispectrum, the implementation of a fast estimator for the full bispectrum of a 3d particle distribution is presented. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3d structure formation. Our measured bispectra are determined by ~ 50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).
Predicting Operator Execution Times Using CogTool
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Latorella, Kara A.
2013-01-01
Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes CogTool models for a two-pilot crew executing two different types of datalink clearance acceptance tasks on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in video tapes and registered in simulation files. Results indicate no statistically significant difference between the empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.
Monitoring and seasonal forecasting of meteorological droughts
NASA Astrophysics Data System (ADS)
Dutra, Emanuel; Pozzi, Will; Wetterhall, Fredrik; Di Giuseppe, Francesca; Magnusson, Linus; Naumann, Gustavo; Barbosa, Paulo; Vogt, Jurgen; Pappenberger, Florian
2015-04-01
Near-real-time drought monitoring can provide decision makers with valuable information for use in several areas, such as water resources management or international aid. Unfortunately, a major constraint in current drought outlooks is the lack of reliable monitoring capability for observed precipitation globally in near-real time. Furthermore, drought monitoring systems require a long record of past observations to provide mean climatological conditions. We address these constraints by developing a novel drought monitoring approach in which monthly mean precipitation is derived from short-range ECMWF probabilistic forecasts and then merged with the long-term precipitation climatology of the Global Precipitation Climatology Centre (GPCC) dataset. Merging the two makes available a real-time global precipitation product from which the Standardized Precipitation Index (SPI) can be estimated and used for global or regional drought monitoring work. This approach provides stability in that it bypasses problems of latency (lags) in having local rain-gauge measurements available in real time, or lags in satellite precipitation products. Seasonal drought forecasts can also be prepared using the common methodology, based upon two data sources used to provide initial conditions (GPCC and the ECMWF ERA-Interim reanalysis (ERAI)) combined with either the current ECMWF seasonal forecast or a climatology based upon ensemble forecasts. Verification of the forecasts as a function of lead time revealed a reduced impact on skill for: (i) long lead times using different initial conditions, and (ii) short lead times using different precipitation forecasts. The memory effect of initial conditions was found to be 1 month lead time for the SPI-3, 3 to 4 months for the SPI-6 and 5 months for the SPI-12. Results show that dynamical forecasts of precipitation provide added value, with skill similar to or better than climatological forecasts. In some cases, particularly for long SPI time scales, it is very difficult to improve on the use of climatological forecasts. However, results presented regionally and globally pinpoint several regions in the world where drought onset forecasting is feasible and skilful.
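For reference, the SPI computation reduces to a simple recipe: accumulate precipitation over the chosen time scale, fit a gamma distribution to the climatology, and map cumulative probabilities to standard normal deviates. A sketch on synthetic data, ignoring the per-calendar-month fitting and zero-precipitation correction used in practice:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
monthly_precip = rng.gamma(shape=2.0, scale=30.0, size=480)  # 40 synthetic years

def spi(precip, scale=3):
    """Standardized Precipitation Index on a `scale`-month accumulation."""
    acc = np.convolve(precip, np.ones(scale), mode="valid")  # rolling sums
    a, loc, b = stats.gamma.fit(acc, floc=0)    # gamma fit to the climatology
    cdf = stats.gamma.cdf(acc, a, loc=loc, scale=b)
    return stats.norm.ppf(cdf)                  # probability -> normal deviate

print(np.round(spi(monthly_precip)[:6], 2))
```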
Almonroeder, Thomas G; Benson, Lauren C; O'Connor, Kristian M
2015-12-01
Foot orthotics are commonly utilized in the treatment of patellofemoral pain (PFP) and have shown clinical benefit; however, their mechanism of action remains unclear. Patellofemoral joint stress (PFJS) is thought to be one of the main etiological factors associated with PFP. The primary purpose of this study was to investigate the effects of a prefabricated foot orthotic with 5° of medial rearfoot wedging on the magnitude and timing of the peak PFJS in a group of healthy female recreational athletes. The hypothesis was that there would be a significant reduction in the peak patellofemoral joint stress and a delay in the timing of this peak in the orthotic condition. Cross-sectional. Kinematic and kinetic data were collected during running trials in a group of healthy, female recreational athletes. The knee angle and moment data in the sagittal plane were incorporated into a previously developed model to estimate patellofemoral joint stress. The dependent variables of interest were the peak patellofemoral joint stress as well as the percentage of stance at which this peak occurred, as both the magnitude and the timing of joint loading are thought to be important in overuse running injuries. The peak patellofemoral joint stress significantly increased in the orthotic condition by 5.8% (p=.02, ES=0.24), which does not support the initial hypothesis. However, the orthotic did significantly delay the timing of the peak during the stance phase by 3.8% (p=.002, ES=0.47). The finding that the peak patellofemoral joint stress increased in the orthotic condition did not support the initial hypothesis. However, the finding that the timing of this peak was delayed to later in the stance phase in the orthotic condition did support the initial hypothesis and may be related to the clinical improvements previously reported in subjects with PFP. Level 4.
A 3-D wellbore simulator (WELLTHER-SIM) to determine the thermal diffusivity of rock-formations
NASA Astrophysics Data System (ADS)
Wong-Loya, J. A.; Santoyo, E.; Andaverde, J.
2017-06-01
Acquiring the thermophysical properties of rock formations in geothermal systems is an essential task for well drilling and completion. Wellbore thermal simulators require such properties for predicting the thermal behavior of a wellbore and the formation under drilling and shut-in conditions. The estimation of static formation temperatures also requires these properties for the wellbore and formation materials (drilling fluids and pipes, cements, casings, and rocks). A numerical simulator (WELLTHER-SIM) has been developed for modeling the drilling fluid circulation and shut-in processes of geothermal wellbores, and for the in-situ determination of the thermal diffusivities of rocks. Bottomhole temperatures logged under shut-in conditions (BHTm), and the thermophysical and transport properties of drilling fluids, were used as the main input data. To model the thermal disturbance and recovery processes in the wellbore and rock formation, initial drilling fluid and static formation temperatures were used as initial and boundary conditions. WELLTHER-SIM uses these temperatures together with an initial thermal diffusivity for the rock formation to solve the governing equations of the heat transfer model. WELLTHER-SIM was programmed using the finite volume technique to solve the heat conduction equations under 3-D and transient conditions. Thermal diffusivities of rock formations were inversely computed by an iterative and efficient numerical simulation, in which simulated thermal recovery data sets (BHTs) were statistically compared with the temperature measurements (BHTm) logged in geothermal wellbores. The simulator was validated using a well-documented case reported in the literature, where the thermophysical properties of the rock formation are known with accuracy. The new numerical simulator has been successfully applied to two wellbores drilled in geothermal fields of Japan and Mexico. Details of the physical conceptual model, the numerical algorithm, and the validation and application results are outlined in this work.
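The inverse computation can be pictured as a one-dimensional search wrapped around the forward model: run the simulation with a trial diffusivity and minimize the misfit against the measured shut-in temperatures. The sketch below substitutes a crude 1-D radial conduction stand-in for the 3-D finite-volume simulator and fits synthetic data, so all numbers are assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def forward_model(alpha, t_obs_hours, r_well=0.1, r_cool=0.5, r_max=5.0,
                  n=100, T_mud=60.0, T_form=150.0):
    """Crude 1-D radial conduction stand-in for the 3-D finite-volume
    simulator: returns wellbore-wall temperatures at the shut-in times."""
    r = np.linspace(r_well, r_max, n)
    dr = r[1] - r[0]
    T = np.where(r < r_cool, T_mud, T_form)   # zone cooled by mud circulation
    dt = 0.2 * dr**2 / alpha                  # explicit stability limit
    times, out, t, i = np.asarray(t_obs_hours) * 3600.0, [], 0.0, 0
    while i < len(times):
        lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dr**2 \
              + (T[2:] - T[:-2]) / (2.0 * dr * r[1:-1])
        T[1:-1] += alpha * dt * lap
        T[0] = T[1]                           # no-flux condition at the wall
        T[-1] = T_form                        # undisturbed far field
        t += dt
        while i < len(times) and t >= times[i]:
            out.append(T[0])
            i += 1
    return np.array(out)

def misfit(alpha, t_obs, bht_measured):
    return np.sqrt(np.mean((forward_model(alpha, t_obs) - bht_measured) ** 2))

# Synthetic "measured" BHTs generated with alpha = 1.2e-6 m^2/s, then
# recovered by a bounded one-dimensional search over trial diffusivities.
t_obs = [6, 12, 18, 24, 36, 48]               # hours after shut-in
bht_measured = forward_model(1.2e-6, t_obs)
res = minimize_scalar(misfit, bounds=(1e-7, 1e-5), method="bounded",
                      args=(t_obs, bht_measured))
print(f"recovered thermal diffusivity: {res.x:.2e} m^2/s")  # ~1.2e-6
```

Because faster recovery corresponds monotonically to larger diffusivity, the misfit is close to unimodal and a bounded scalar search suffices.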
NASA Astrophysics Data System (ADS)
Okamoto, Kyosuke; Tsuno, Seiji
2015-10-01
In earthquake early warning (EEW) systems, the epicenter location and magnitude of earthquakes are estimated using the amplitude growth rate of the initial P-waves. It has been empirically observed that the growth rate becomes smaller as the epicentral distance increases, regardless of the magnitude of the earthquake, so the epicentral distance can be estimated from the growth rate using this empirical relationship. However, the growth rates calculated from different earthquakes at the same epicentral distance can take considerably different values. Sometimes the growth rates of earthquakes having the same epicentral distance vary by a factor of 10^4. Qualitatively, this gap in the growth rates has been attributed to differences in the local heterogeneities that the P-waves propagate through. In this study, we demonstrate theoretically how local heterogeneities in the subsurface disturb the relationship between the growth rate and the epicentral distance. First, we calculate seismic scattered waves in a heterogeneous medium. First-order PP, PS, SP, and SS scatterings are considered. The correlation distance of the heterogeneities and the fractional fluctuation of the elastic parameters control the heterogeneous conditions for the calculation. From the synthesized waves, the growth rate of the initial P-wave is obtained. As a result, we find that a parameter controlling the heterogeneities (in this study, the correlation distance) plays a key role in the magnitude of the fluctuation of the growth rate. We then calculate the regional correlation distances in Japan that can account for the fluctuation of the growth rates of real earthquakes from 1997 to 2011 observed by K-NET and KiK-net. The resulting spatial distribution of the correlation distance shows locality, revealing that the growth rates fluctuate accordingly. When this local fluctuation is taken into account, the accuracy of the estimation of epicentral distances from initial P-waves can improve, which will in turn improve the accuracy of the EEW system.
An Integrated Approach to Indoor and Outdoor Localization
2017-04-17
A two-step process is proposed that performs an initial localization estimate, followed by particle-filter-based tracking. Initial localization is performed using WiFi and image observations. For tracking we... ...mapped, it is possible to use them for localization [20, 21, 22]. Haverinen et al. show that these fields could be used with a particle filter to...
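A toy version of the tracking stage, for orientation: particles carry position hypotheses, are propagated with motion noise, weighted by an observation model, and resampled. The sketch assumes a 1-D corridor and a synthetic scalar observable (e.g. a magnetic field magnitude map); it is a generic particle filter, not the paper's system:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 1-D "map": expected scalar observation (e.g. magnetic field
# magnitude) at each position along a 50 m corridor.
corridor = np.linspace(0.0, 50.0, 501)
field_map = 45.0 + 5.0 * np.sin(0.4 * corridor) + 2.0 * np.sin(1.7 * corridor)

def observe(pos):
    return np.interp(pos, corridor, field_map)

n_p = 2000
particles = rng.uniform(0.0, 50.0, n_p)     # initial estimate: fully uncertain
true_pos = 10.0
for step in range(60):
    true_pos += 0.5                          # the target walks 0.5 m per step
    z = observe(true_pos) + rng.normal(0.0, 0.3)
    particles += 0.5 + rng.normal(0.0, 0.2, n_p)   # motion model + noise
    w = np.exp(-0.5 * ((observe(particles) - z) / 0.3) ** 2)
    particles = particles[rng.choice(n_p, n_p, p=w / w.sum())]  # resample

# The mean is a crude point estimate; ambiguous map sections can leave
# several surviving modes early in the run.
print(f"true {true_pos:.1f} m, estimate {particles.mean():.1f} m")
```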
Radi, Marjan; Dezfouli, Behnam; Abu Bakar, Kamalrulnizam; Abd Razak, Shukor
2014-01-01
Network connectivity and link quality information are fundamental requirements for wireless sensor network protocols to perform their desired functionality. Most existing discovery protocols have focused only on the neighbor discovery problem, while only a few provide integrated neighbor search and link estimation. As these protocols require careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not been fully evaluated. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols, compared to the existing nonadaptive protocols, for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate the initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications. PMID:24678277
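Integrated discovery and link estimation often boils down to folding each discovery window's packet-reception ratio into a smoothed per-neighbor estimate. A generic EWMA sketch (not the proposed protocol; names and the smoothing constant are assumptions):

```python
class LinkEstimator:
    """EWMA link-quality estimator driven by beacon reception during
    neighbor discovery (a generic sketch, not the paper's protocol)."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.prr = {}                 # neighbor id -> smoothed PRR

    def update(self, neighbor, received, expected):
        """Fold one discovery window's packet-reception ratio into the
        running estimate for this neighbor."""
        window_prr = received / expected
        old = self.prr.get(neighbor, window_prr)
        self.prr[neighbor] = (1 - self.alpha) * old + self.alpha * window_prr

est = LinkEstimator()
est.update("node42", received=8, expected=10)
est.update("node42", received=6, expected=10)
print(est.prr["node42"])   # 0.8 -> 0.76 after the second window
```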
Temperature-based death time estimation with only partially known environmental conditions.
Mall, Gita; Eckl, Mona; Sinicina, Inga; Peschel, Oliver; Hubig, Michael
2005-07-01
Temperature-based death time determination relies on mathematical model curves of postmortem rectal cooling. All mathematical models require knowledge of the environmental conditions. In medico-legal practice, homicide is sometimes not immediately suspected at the death scene but only afterwards, during external examination of the body. The environmental temperature at the death scene then remains unknown or can only be roughly reconstructed. In such cases the question arises whether it is possible to estimate the time since death from rectal temperature data alone, recorded over a longer time span. The present study theoretically deduces formulae which are independent of the initial and environmental temperatures, and thus proves that the information needed for death time estimation is contained in the rectal temperature data. Since the environmental temperature at the death scene may differ from that during the temperature recording, an additional assumption has to be used: the body core is thermally well insulated from the environment, so the rectal temperature decrease after a sudden change of environmental temperature will continue for some time at a rate similar to that before the change. The present study further provides a curve-fitting procedure for such scenarios. The procedure was tested on rectal cooling data from 35 corpses using the most commonly applied model, that of Henssge. In all cases the time of death was exactly known. After admission to the medico-legal institute the bodies were kept at a constant environmental temperature for 12-36 h and the rectal temperatures were recorded continuously. The curve-fitting procedure led to valid estimates of the time since death in all experiments despite the unknown environmental conditions before admission to the institute. The estimation bias was investigated statistically. The 95% confidence intervals amounted to +/-4 h, which seems reasonable compared to the 95% confidence intervals of the Henssge model with known environmental temperature. The presented method may be of use for determining the time since death even in cases in which the environmental temperature and rectal temperature at the death scene have unintentionally not been recorded.
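For context, the classic known-environment computation inverts the Henssge double-exponential cooling model for the time since death; the constants below are as commonly quoted for ambient temperatures up to about 23 °C and should be read as assumptions, and this is not the paper's environment-independent fitting procedure:

```python
import numpy as np
from scipy.optimize import brentq

def henssge_Q(t, m):
    """Normalized rectal cooling Q(t) in the Henssge model for body mass m
    (kg); B as commonly quoted for ambient temperatures up to ~23 C."""
    B = -1.2815 * m ** -0.625 + 0.0284          # 1/h
    return 1.25 * np.exp(B * t) - 0.25 * np.exp(5.0 * B * t)

def time_since_death(T_rect, T_amb, m, T0=37.2):
    """Solve Q(t) = (T_rect - T_amb) / (T0 - T_amb) for t in hours."""
    Q = (T_rect - T_amb) / (T0 - T_amb)
    return brentq(lambda t: henssge_Q(t, m) - Q, 0.01, 120.0)

# Example: 75 kg body, rectal temperature 30.5 C in an 18 C room
print(f"time since death ~ {time_since_death(30.5, 18.0, 75.0):.1f} h")
```

The paper's contribution is precisely to eliminate the dependence on T_amb and T0 that this classic calculation requires.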
Transfer of aged Pu to cattle grazing on a contaminated environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, R.O.; Engel, D.W.; Smith, D.D.
1988-03-01
Estimates are obtained of the fraction of ingested or inhaled 239+240Pu transferred to blood and tissues of a reproducing herd of beef cattle, individuals of which grazed within fenced enclosures for up to 1064 d under natural conditions with no supplemental feeding at an arid site contaminated 16 y previously with Pu oxide. The estimated (geometric mean (GM)) fraction of Pu transferred from the gastrointestinal tract to blood serum was about 5 × 10^-6 (geometric standard error (GSE) = 1.4) with an approximate upper bound of about 2 × 10^-5. These results are in reasonable agreement with the value of 1 × 10^-5 recommended for human radiation protection purposes by the International Commission on Radiological Protection (ICRP) for insoluble Pu oxides that are free of very small particles. Also, results from a laboratory study by Stanley (St75), in which large doses of 238Pu were orally administered daily to dairy cattle for 19 consecutive days, suggest that aged 239+240Pu at this arid grazing site may not be more biologically available to blood serum than fresh 239+240Pu oxide. The estimated fractions of 239+240Pu transferred from blood serum to tissues of adult grazing cattle were: femur (3.2 × 10^-2, 1.8; GM, GSE), vertebra (1.4 × 10^-1, 1.6), liver (2.3 × 10^-1, 2.0), muscle (1.3 × 10^-1, 1.9), female gonads (7.9 × 10^-5, 1.5), and kidney (1.4 × 10^-3, 1.7). The blood-to-tissue fractional transfers for cattle initially exposed in utero were greater than those exposed only as adults by a factor of about 4 for femur (statistically significant) and of about 2 for other tissues (not significant). The estimated (GM) fraction of inhaled Pu initially deposited in the pulmonary lung was 0.34 (GSE = 1.3) for adults and 0.15 (GSE = 1.3) for cattle initially exposed in utero (a statistically significant difference).
Geomagnetic storm under laboratory conditions: randomized experiment
NASA Astrophysics Data System (ADS)
Gurfinkel, Yu I.; Vasin, A. L.; Pishchalnikov, R. Yu; Sarimov, R. M.; Sasonko, M. L.; Matveeva, T. A.
2017-10-01
The influence of a previously recorded geomagnetic storm (GS) on the human cardiovascular system and microcirculation has been studied under laboratory conditions. Healthy volunteers in the lying position were exposed under two artificially created conditions: quiet (Q) and storm (S). The Q regime plays back a noise-free magnetic field (MF) close to the natural geomagnetic conditions at Moscow's latitude. The S regime plays back the initially recorded 6-h geomagnetic storm, repeated four times sequentially. The cardiovascular response to the GS impact was assessed by measuring capillary blood velocity (CBV) and blood pressure (BP) and by analysis of 24-h ECG recordings. A storm-to-quiet ratio for the cardio intervals (CI) and the heart rate variability (HRV) was introduced in order to reveal significant group-average differences in HRV. Individual sensitivity to the GS was estimated using autocorrelation function analysis of the high-frequency (HF) part of the CI spectrum. The autocorrelation analysis allowed detection of a group of subjects whose autocorrelation functions (ACF) react differently in the Q and S regimes of exposure.
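The HF-band autocorrelation analysis can be sketched as follows. This is an illustrative reconstruction, not the authors' exact pipeline: cardio intervals are resampled to an even grid, band-passed to the conventional HF band (0.15-0.40 Hz), and the normalized autocorrelation function is computed. The resampling rate and all demo values are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def hf_autocorrelation(rr_ms, fs=4.0, band=(0.15, 0.40), max_lag_s=30.0):
    """Autocorrelation of the HF component of a cardio-interval series.

    rr_ms : RR intervals in milliseconds (hypothetical input)
    fs    : resampling frequency in Hz for the evenly spaced series
    """
    # beat times from cumulative RR intervals, then resample evenly
    t_beats = np.cumsum(rr_ms) / 1000.0
    t_grid = np.arange(t_beats[0], t_beats[-1], 1.0 / fs)
    rr_even = np.interp(t_grid, t_beats, rr_ms)

    # band-pass to the HF band (0.15-0.40 Hz) of the spectrum
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    hf = filtfilt(b, a, rr_even - rr_even.mean())

    # normalized autocorrelation up to max_lag_s
    n_lags = int(max_lag_s * fs)
    acf = np.correlate(hf, hf, mode="full")[len(hf) - 1:len(hf) - 1 + n_lags]
    return acf / acf[0]

# synthetic demo: ~10 min of beats at ~80 bpm with a ~0.3 Hz HF modulation
beats = np.arange(750)
rr_demo = 800 + 30 * np.sin(2 * np.pi * 0.24 * beats)
print(hf_autocorrelation(rr_demo)[:5])
```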
Andreu, Irene; Natividad, Eva
2013-12-01
In magnetic hyperthermia, characterising the specific functionality of magnetic nanoparticle arrangements is essential for planning therapies by simulating maximum achievable temperatures. This functionality, i.e. the heat power released upon application of an alternating magnetic field, is quantified by means of the specific absorption rate (SAR), also referred to as specific loss power (SLP). Many research groups are currently involved in the SAR/SLP determination of newly synthesised materials by several methods, either magnetic or calorimetric, some of which are affected by important and unquantifiable uncertainties that may turn measurements into rough estimates. This paper reviews all these methods, discussing in particular the sources of uncertainty and their possible minimisation. In general, magnetic methods, although accurate, do not operate in the conditions of magnetic hyperthermia. Calorimetric methods do, but the easiest to implement, the initial-slope method in isoperibol conditions, suffers from inaccuracies arising from the mismatch between thermal models, experimental set-ups and measuring conditions, while the most accurate, the pulse-heating method in adiabatic conditions, requires more complex set-ups.
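For orientation, the initial-slope method mentioned above reduces to multiplying the initial heating rate by the total heat capacity and normalizing by nanoparticle mass. A minimal sketch, assuming the total heat capacity of sample plus holder is known and that the first seconds of heating are loss-free (precisely the assumption whose failure the review discusses); all demo numbers are illustrative:

```python
import numpy as np

def sar_initial_slope(t_s, T_C, m_np_kg, c_heat_J_per_K, fit_window_s=20.0):
    """Initial-slope estimate of SAR under isoperibol conditions.

    t_s, T_C       : time (s) and sample temperature (deg C) after field-on
    m_np_kg        : mass of magnetic nanoparticles in the sample
    c_heat_J_per_K : total heat capacity of sample + holder (J/K), assumed known
    """
    mask = t_s <= fit_window_s                        # quasi-linear start only
    slope = np.polyfit(t_s[mask], T_C[mask], 1)[0]    # dT/dt in K/s
    return c_heat_J_per_K * slope / m_np_kg           # W per kg of nanoparticles

t = np.linspace(0.0, 60.0, 61)
T = 25.0 + 0.05 * t - 2e-4 * t**2   # assumed record: slope decays as losses grow
print(f"SAR = {sar_initial_slope(t, T, m_np_kg=1e-4, c_heat_J_per_K=4.2):.0f} W/kg")
```

Because heat losses bias the slope low as soon as the sample warms, the choice of fit window and the underlying thermal model are the main sources of the uncertainty discussed above.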
Estimation of evaporation from equilibrium diurnal boundary layer humidity
NASA Astrophysics Data System (ADS)
Salvucci, G.; Rigden, A. J.; Li, D.; Gentine, P.
2017-12-01
Simplified conceptual models of the convective boundary layer as a well-mixed profile of potential temperature (theta) and specific humidity (q) impinging on an initially stably stratified, linear potential temperature profile have a long history in the atmospheric sciences. These one-dimensional representations of complex mixing are useful for gaining insight into land-atmosphere interactions and for prediction when state-of-the-art LES approaches are infeasible. As previously shown (e.g., by Betts), if one neglects the role of q in buoyancy, the framework yields a unique relation between mixed-layer theta, mixed-layer height (h), and cumulative sensible heat flux (SH) throughout the day. Similarly, assuming an initially linear q profile yields a simple relation between q, h, and cumulative latent heat flux (LH). The diurnal dynamics of theta and q are strongly dependent on SH and the initial lapse rates of theta (gamma_theta) and q (gamma_q). In the estimation method proposed here, we further constrain these relations with two more assumptions: 1) the specific humidity is the same at the start of the period of boundary layer growth and at its collapse; and 2) once the mixed layer reaches the LCL, further drying occurs proportionally to the Deardorff convective velocity scale (omega) multiplied by q. Assumption (1) is based on the idea that below the cloud layer there are no sinks of moisture within the mixed layer (neglecting lateral humidity divergence), so the net mixing of dry air aloft must balance evaporation from the surface. Including this simple model of moisture loss above the LCL in the bulk-CBL model allows definition of an equilibrium humidity condition at which the diurnal cycle of q repeats (i.e., additions of q from the surface balance entrainment of dry air from above). Surprisingly, this framework allows estimation of LH from q, theta, and estimated net radiation by solving for the value of the evaporative fraction (EF) for which the diurnal cycle of q repeats. Three parameters need specification: cloud area fraction, entrainment factor, and morning lapse rate. A single set of values for these parameters is adequate to estimate EF at over 70 tested Ameriflux sites to within about 20%, though improvements are gained using a single regression model for gamma_theta fitted to radiosonde data.
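The equilibrium-EF idea can be illustrated with a toy slab model. The sketch below is not the authors' model: it uses simple encroachment growth, a fixed available energy, and illustrative lapse rates, and solves by root finding for the evaporative fraction at which the mixed-layer humidity returns to its morning value.

```python
import numpy as np
from scipy.optimize import brentq

rho, cp, Lv = 1.2, 1004.0, 2.5e6   # air density, heat capacity, latent heat
gamma_theta = 5e-3                 # K/m, morning theta lapse rate (assumed)
gamma_q = -2e-6                    # (kg/kg)/m, q lapse rate aloft (assumed)
q0, h0 = 8e-3, 100.0               # morning mixed-layer q and depth
Rn = 400.0                         # W/m2, daytime-mean available energy
dt, hours = 60.0, 10.0             # time step (s) and daylight length

def q_end_of_day(EF):
    """Mixed-layer q at day's end for a given evaporative fraction."""
    h, q = h0, q0
    for _ in range(int(hours * 3600 / dt)):
        SH, LH = (1.0 - EF) * Rn, EF * Rn
        w_e = SH / (rho * cp * gamma_theta * h)   # encroachment growth rate
        q_above = q0 + gamma_q * h                # drier air entrained from aloft
        # surface moistening vs. entrainment drying of the slab
        q += (LH / (rho * Lv) - (q - q_above) * w_e) / h * dt
        h += w_e * dt
    return q

# equilibrium EF: the value for which the diurnal q cycle repeats
EF_eq = brentq(lambda EF: q_end_of_day(EF) - q0, 0.05, 0.95)
print(f"equilibrium evaporative fraction: {EF_eq:.2f}")
```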
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steen, M.; Lisell, L.; Mosey, G.
2013-01-01
The U.S. Environmental Protection Agency (EPA), in accordance with the RE-Powering America's Land initiative, selected the Vincent Mullins Landfill in Tucson, Arizona, for a feasibility study of renewable energy production. Under the RE-Powering America's Land initiative, the EPA provided funding to the National Renewable Energy Laboratory (NREL) to support the study. NREL provided technical assistance for this project but did not assess environmental conditions at the site beyond those related to the performance of a photovoltaic (PV) system. The purpose of this report is to assess the site for a possible PV installation and estimate the cost and performance of different PV configurations, as well as to recommend financing options that could assist in the implementation of a PV system. In addition to the Vincent Mullins site, four similar landfills in Tucson are included as part of this study.
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Walker, Gregory K.; Mahanama, Sarith P.; Reichle, Rolf H.
2013-01-01
Offline simulations over the conterminous United States (CONUS) with a land surface model are used to address two issues relevant to the forecasting of large-scale seasonal streamflow: (i) the extent to which errors in soil moisture initialization degrade streamflow forecasts, and (ii) the extent to which a realistic increase in the spatial resolution of forecasted precipitation would improve streamflow forecasts. The addition of error to a soil moisture initialization field is found to lead to a nearly proportional reduction in streamflow forecast skill. The linearity of the response allows the determination of a lower bound for the increase in streamflow forecast skill achievable through improved soil moisture estimation, e.g., through satellite-based soil moisture measurements. An increase in the resolution of precipitation is found to have an impact on large-scale streamflow forecasts only when evaporation variance is significant relative to the precipitation variance. This condition is met only in the western half of the CONUS domain. Taken together, the two studies demonstrate the utility of a continental-scale land surface modeling system as a tool for addressing the science of hydrological prediction.
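The stated near-proportionality suggests a simple way to bound achievable skill gains. A hedged sketch with hypothetical skill numbers (the paper's actual values are not reproduced):

```python
import numpy as np

# hypothetical experiment: streamflow forecast skill measured after adding
# increasing amounts of noise to the soil moisture initialization field
init_error_std = np.array([0.00, 0.02, 0.04, 0.06, 0.08])  # m3/m3, assumed
forecast_skill = np.array([0.62, 0.57, 0.51, 0.46, 0.40])  # assumed values

# near-proportional response: fit a line and read off the skill recovered by
# reducing a present-day initialization error (say 0.05 m3/m3) toward zero
slope, intercept = np.polyfit(init_error_std, forecast_skill, 1)
current_error = 0.05
lower_bound_gain = -slope * current_error
print(f"skill loss per unit error: {-slope:.1f}; "
      f"lower-bound gain for removing {current_error} m3/m3: {lower_bound_gain:.2f}")
```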
NASA Astrophysics Data System (ADS)
Wu, Zan; Wadekar, Vishwas; Wang, Chenglong; Sunden, Bengt
2018-01-01
This study aims to reveal the effects of liquid entrainment, initial entrained fraction and tube diameter on liquid film dryout in vertical upward annular flow during flow boiling. Entrainment and deposition rates of droplets were included in the mass conservation equations to estimate the local liquid film mass flux in annular flow and the critical vapor quality at dryout conditions. Different entrainment rate correlations were evaluated using flow boiling data for water and organic liquids including n-pentane, iso-octane and R134a. The effect of the initial entrained fraction (IEF) at the churn-to-annular flow transition was also investigated. A transitional Boiling number was proposed to separate the IEF-sensitive region at high Boiling numbers from the IEF-insensitive region at low Boiling numbers. In addition, the effect of tube diameter on dryout vapor quality was studied: the dryout vapor quality increases with decreasing tube diameter. It should be noted that the dryout characteristics of submillimeter channels might differ because of different dryout mechanisms, i.e., drying of the liquid film underneath long vapor slugs and flow boiling instabilities.
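A schematic version of the film mass balance is shown below. It is a minimal sketch, not the authors' correlation set: deposition and entrainment rates are taken as constants, whereas the evaluated correlations make them functions of local film and core conditions; all numbers are illustrative.

```python
def dryout_quality(G, d, q_wall, h_fg, x_onset, e_onset, dep, ent, dz=1e-3):
    """March the liquid-film mass balance up the tube until film dryout.

    G        : total mass flux (kg/m2 s);  d : tube diameter (m)
    q_wall   : wall heat flux (W/m2);      h_fg : latent heat (J/kg)
    x_onset  : vapor quality at the churn-to-annular transition
    e_onset  : initial entrained fraction (IEF) at that transition
    dep, ent : droplet deposition / entrainment rates (kg/m2 s), constants here
    """
    x = x_onset
    G_film = G * (1.0 - x_onset) * (1.0 - e_onset)  # liquid-film mass flux
    z = 0.0
    while G_film > 0.0 and x < 1.0 and z < 10.0:    # march upward, 10 m cap
        # film gains deposited droplets, loses entrained droplets and evaporation
        G_film += (dep - ent - q_wall / h_fg) * (4.0 / d) * dz
        x += 4.0 * q_wall / (G * d * h_fg) * dz     # heat balance on the flow
        z += dz
    return x  # critical (dryout) vapor quality

# illustrative numbers only, loosely in the range of R134a experiments
print(dryout_quality(G=300.0, d=10e-3, q_wall=50e3, h_fg=180e3,
                     x_onset=0.3, e_onset=0.4, dep=0.01, ent=0.02))
```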
Oliviero, T; Verkerk, R; Van Boekel, M A J S; Dekker, M
2014-11-15
Broccoli belongs to the Brassicaceae, a plant family of widely eaten vegetables containing high concentrations of glucosinolates. Enzymatic hydrolysis of glucosinolates by endogenous myrosinase (MYR) can form isothiocyanates with health-promoting activities. The effect of water content (WC) and temperature on MYR inactivation in broccoli was investigated. Broccoli was freeze dried, yielding batches with WC between 10% and 90% (aw from 0.10 to 0.96). These samples were incubated for various times at different temperatures (40-70°C) and MYR activity was measured. The initial MYR inactivation rates were estimated with a first-order reaction kinetic model. MYR inactivation rate constants were lowest in the driest samples (10% WC) at all studied temperatures. Samples with 67% and 90% WC showed initial inactivation rate constants of the same order of magnitude, while samples with 31% WC showed intermediate values. These results are useful for optimising drying processes to produce dried broccoli with optimal MYR retention for human health.
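Estimating a first-order inactivation rate constant from an incubation series amounts to a linear fit of log activity against time. A minimal sketch with assumed activity values (not the study's data):

```python
import numpy as np

def inactivation_rate(t_min, activity, a0=None):
    """First-order MYR inactivation: A(t) = A0 * exp(-k t); returns k (1/min).

    t_min, activity : incubation times and residual myrosinase activities
                      (hypothetical measurements at one temperature / WC)
    """
    a0 = activity[0] if a0 is None else a0
    # slope of ln(A/A0) versus t gives -k for a first-order model
    k = -np.polyfit(t_min, np.log(activity / a0), 1)[0]
    return k

t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
A = np.array([1.00, 0.78, 0.61, 0.37, 0.14])   # assumed relative activities
print(f"k = {inactivation_rate(t, A):.3f} 1/min")
```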
NASA Astrophysics Data System (ADS)
Fitton, N.; Datta, A.; Hastings, A.; Kuhnert, M.; Topp, C. F. E.; Cloy, J. M.; Rees, R. M.; Cardenas, L. M.; Williams, J. R.; Smith, K.; Chadwick, D.; Smith, P.
2014-09-01
The United Kingdom currently reports nitrous oxide emissions from agriculture using the IPCC default Tier 1 methodology. However, Tier 1 estimates carry a large degree of uncertainty because they do not account for spatial variations in emissions. Biogeochemical models such as DailyDayCent (DDC) are therefore increasingly being used to provide spatially disaggregated assessments of annual emissions. Prior to use, the ability of the model to predict annual emissions should be assessed, coupled with an analysis of how model inputs influence model outputs and of whether the modelled estimates are more robust than those derived from the Tier 1 methodology. The aims of the study were (a) to evaluate whether the DailyDayCent model can accurately estimate annual N2O emissions across nine different experimental sites, (b) to examine its sensitivity to different soil and climate inputs across a number of experimental sites and (c) to examine the influence of uncertainty in the measured inputs on modelled N2O emissions. DailyDayCent performed well across the range of cropland and grassland sites, particularly for fertilized fields, indicating that it is robust for UK conditions. The sensitivity of the model varied across the sites and also between fertilizer/manure treatments. Overall, modelled N2O emissions were more sensitive to changes in soil pH and clay content than to the remaining input parameters used in this study; the lower the initial site values for soil pH and clay content, the more sensitive DDC was to changes from the initial value. When we compared modelled estimates with Tier 1 estimates for each site, we found that DailyDayCent provided a more accurate representation of the rate of annual emissions.
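A sensitivity screening of this kind can be sketched generically. The toy response function below merely stands in for a DailyDayCent run; the normalized one-at-a-time index is a common choice, not necessarily the one used in the study.

```python
import numpy as np

def oat_sensitivity(model, base_inputs, rel_step=0.1):
    """One-at-a-time sensitivity of an annual emission output to each input.

    model       : callable mapping a dict of inputs to an annual emission
    base_inputs : dict of site values (e.g. soil pH, clay content, ...)
    Returns normalized sensitivity indices (dY/Y) / (dX/X).
    """
    y0 = model(base_inputs)
    indices = {}
    for name, x0 in base_inputs.items():
        perturbed = dict(base_inputs, **{name: x0 * (1 + rel_step)})
        indices[name] = ((model(perturbed) - y0) / y0) / rel_step
    return indices

# toy emission response standing in for a DailyDayCent run (illustrative only)
toy = lambda p: 2.0 * p["clay"] ** 0.8 / p["pH"]
print(oat_sensitivity(toy, {"pH": 6.5, "clay": 0.20}))
```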
NASA Astrophysics Data System (ADS)
Morrissey, M. M.; Savage, W. Z.; Wieczorek, G. F.
1999-10-01
The July 10, 1996, Happy Isles rockfall in Yosemite National Park, California, released 23,000 to 38,000 m3 of granite in four separate events. The impacts of the first two events, which involved a 550-m free fall, generated seismic waves and atmospheric pressure waves (air blasts). We focus on the dynamic behavior of the second air blast that downed over 1000 trees, destroyed a bridge, demolished a snack bar, and caused one fatality and several injuries. Calculated velocities for the air blast from a two-phase, finite difference model are compared to velocities estimated from tree damage. From tornadic studies of tree damage, the air blast is estimated to have traveled <108-120 m/s within 50 m from the impact and decreased to <10-20 m/s within 500 m from the impact. The numerical model simulates the two-dimensional propagation of an air blast through a dusty atmosphere with initial conditions defined by the impact velocity and pressure. The impact velocity (105-107 m/s) is estimated from the Colorado Rockfall Simulation Program that simulates rockfall trajectories. The impact pressure (0.5 MPa) is constrained by the kinetic energy of the impact (1010-1012 J) estimated from the seismic energy generated by the impact. Results from the air blast simulations indicate that the second Happy Isles air blast (weak shock wave) traveled with an initial velocity above the local sound speed. The size and location of the first impact are thought to have injected <50 wt% dust into the atmosphere. This amount of dust lowered the local atmospheric sound speed to ~220 m/s. The discrepancy between calculated velocity data and field estimated velocity data (~220 m/s versus ~110 m/s) is attributed to energy dissipated by the downing of trees and additional entrainment of debris into the atmosphere not included in the calculations.
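The quoted reduction of the local sound speed by dust loading is consistent with the standard equilibrium dusty-gas approximation, in which small particles are assumed to stay in velocity and temperature equilibrium with the gas. A sketch (the particle specific heat for granite is an assumed value):

```python
import numpy as np

def dusty_sound_speed(eta, T=293.0, c_m=800.0):
    """Equilibrium sound speed of a dusty gas (velocity/thermal equilibrium).

    eta : dust mass loading (kg dust per kg air); 50 wt% dust -> eta = 1
    c_m : specific heat of the particle material (J/kg K), ~800 for granite
    """
    R, cp, cv = 287.0, 1005.0, 718.0        # dry-air gas constant, heat capacities
    gamma_e = (cp + eta * c_m) / (cv + eta * c_m)   # effective heat-capacity ratio
    R_e = R / (1.0 + eta)                            # mixture gas constant
    return np.sqrt(gamma_e * R_e * T)

print(f"{dusty_sound_speed(0.0):.0f} m/s in clean air, "
      f"{dusty_sound_speed(1.0):.0f} m/s at 50 wt% dust")
```

With a mass loading of 1 (i.e. 50 wt% dust) this gives roughly 220 m/s, matching the lowered sound speed used in the simulations.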
Climate change impact on North Sea wave conditions: a consistent analysis of ten projections
NASA Astrophysics Data System (ADS)
Grabemann, Iris; Groll, Nikolaus; Möller, Jens; Weisse, Ralf
2015-02-01
Long-term changes in mean and extreme wind-wave conditions, as they may occur in the course of anthropogenic climate change, can influence and endanger human coastal and offshore activities. A set of ten wave climate projections derived from time-slice and transient simulations of future conditions is analyzed to estimate the possible impact of anthropogenic climate change on mean and extreme wave conditions in the North Sea. This set includes different combinations of IPCC SRES emission scenarios (A2, B2, A1B, and B1), global and regional models, and initial states. A consistent analysis approach is used to provide a more robust assessment of expected changes and uncertainties. While the spatial patterns and the magnitude of the climate change signals vary, some robust features emerge among the ten projections: mean and severe wave heights tend to increase in the eastern parts of the North Sea towards the end of the twenty-first century in nine to ten projections, although the magnitude of the increase in extreme waves varies on the order of decimeters between projections. For the western parts of the North Sea, more than half of the projections suggest a decrease in mean and extreme wave heights. Comparing the different sources of uncertainty due to models, scenarios, and initial conditions, the choice of emission scenario appears to be the least important for the climate change signal. Furthermore, the transient projections show strong multi-decadal fluctuations, and changes towards the end of the twenty-first century might partly be associated with internal variability rather than with systematic change.
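Robustness statements of this kind are typically quantified by sign agreement across the ensemble. A minimal sketch with hypothetical change signals for the ten projections at a single grid point (the study's actual values are not reproduced):

```python
import numpy as np

# hypothetical end-of-century change signals (m) in extreme wave height for
# ten projections at one eastern North Sea grid point
signals = np.array([0.25, 0.10, 0.31, 0.18, -0.05, 0.22, 0.40, 0.15, 0.08, 0.27])

mean_change = signals.mean()
agree = np.sum(np.sign(signals) == np.sign(mean_change))
spread = signals.max() - signals.min()     # order of decimeters, as in the text
print(f"mean change {mean_change:+.2f} m, {agree}/10 projections agree in sign, "
      f"ensemble spread {spread:.2f} m")
```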